Updates from Peruzzi

Blog posts

AI psychosis – the dark side of AI
AI 4 min read

AI psychosis, or ChatGPT psychosis, is an emerging risk that psychologists and psychiatrists have been discussing recently. AI chatbots have become an essential part of our daily lives: replacing Google search, giving restaurant recommendations or checking the grammar in our business emails. Their biggest advantages are 24/7 availability and, if you subscribe, unlimited chat.

Although it doesn’t seem to be replacing programmers or graphic designers, as we discussed in our previous article, it is replacing one profession: therapists. Good therapists are hard to find; waiting lists are sometimes months long, and their fees are simply unaffordable for many people. So, with demand for the profession enormous, many of us have turned to the only source available 24/7: AI, most often ChatGPT.

However, this availability has its dangers: the world was shaken when a 16-year-old boy took his own life after ChatGPT encouraged his plans. The dark side of AI is becoming more noticeable as time goes by. But how does someone fall this deep into ChatGPT, and what can we do to prevent such tragedies?

How does ChatGPT work? 

ChatGPT and other chatbots are built on large language models (LLMs): deep learning models that recognise, summarise, translate, predict and generate text, designed to follow human speech patterns. Like any deep learning model, they are trained on massive datasets (training data) and work as statistical prediction machines, trying to predict the next word in a sequence. Their fluency, which closely mimics how humans talk, is the result of decades of research in natural language processing (NLP) and machine learning (ML).

While search engines such as Google use algorithms to match keywords, LLMs capture deeper context and can adapt to interpret text: summarising a PDF, debugging code or drafting a financial forecast.
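To make the “statistical prediction machine” idea tangible, here is a toy sketch in Python. It is not how ChatGPT is actually built – real LLMs use neural networks with billions of parameters – but it shows the same principle of predicting the most likely next word from training data.

```python
# Toy next-word predictor: count which word tends to follow which in the training text.
from collections import Counter, defaultdict

training_text = "the cat sat on the mat . the dog sat on the rug ."
words = training_text.split()

follow_counts = defaultdict(Counter)
for current_word, next_word in zip(words, words[1:]):
    follow_counts[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the statistically most likely next word seen in training."""
    candidates = follow_counts.get(word)
    return candidates.most_common(1)[0][0] if candidates else "<unknown>"

print(predict_next("sat"))  # 'on' – the continuation seen most often in training
```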

These capabilities prompted predictions that workers would be easily replaced. That has not happened with most professions – except for one: mental health professionals.

AI psychosis, the new danger – the dark side of AI

AI chatbots are not only accessible at any hour; they can also remember anything we have shared, referencing previous conversations and topics. This imitates human interaction so well that people have started to use chatbots as their therapists. Nonetheless, there is one huge problem with AI: its agreeableness.

We have all experienced how supportive and agreeable it is: even when we share our dumbest ideas, it replies “this is a fantastic idea” and goes on to “reason” why it would be. But if we then say it is not a good idea, it shifts to explaining why it is not. This support seems very genuine at first, and it always gives us strong positive feedback – even when the idea itself is as dark as harming ourselves or others.

AI positive feedback loops

Most people would react very differently to a self-harm thought than an AI chatbot does: worry, shock and an intense “don’t do it” would follow a conversation like this. Not so with an AI chatbot, which may validate and support these ideas and even come up with effective ways to accomplish and act on these urges.

Not only that, but AI may also validate our psychosis: if we describe intrusive or paranoid thoughts, it responds with positive things such as “you are very observant” and “this is a valid concern”. As it cannot test reality, it completely trusts our delusions. AI models have amplified, validated, or even co-created psychotic symptoms with individuals.

AI psychosis patterns

According to concerns recently raised in Psychology Today, “AI psychosis illustrates a pattern of individuals who become fixated on AI systems, attributing sentience, divine knowledge, romantic feelings, or surveillance capabilities to AI”.

Researchers reference three emerging themes of AI psychosis, not yet clinical diagnoses:

1.     “Messianic missions”: People believe they have uncovered the truth about the world (grandiose delusions).

2.     “God-like AI”: People believe their AI chatbot is a sentient deity (religious or spiritual delusions), thinking that AI chatbots are the voice of God.

3.     “Romantic” or “attachment-based delusions”: People believe the chatbot’s ability to mimic conversation is genuine love (erotomanic delusions).

Source: Psychology Today

These all weaken our real human interactions and relationships while strengthening our reliance on AI chatbots – this is the dark side of AI. Countless hours spent “talking to ChatGPT” also worsen insomnia and other sleep problems, and we may become even more detached from reality: an endless loop that fuels our mania, paranoia or hallucinations.

How to protect people against AI psychosis?

As this trend is very recent, it is a challenge to protect ourselves and others against AI psychosis. OpenAI announced the GPT-5 model, which was supposed to be less sycophantic, adopting a more formal tone instead of a friendly and warm one. However, users reported that the model no longer felt friendly enough and was therefore less useful to them. We humans are looking for real connection, and if a chatbot is not friendly, warm or kind enough, we tend to label it as annoying. There is a fine line between warm and friendly and overly sycophantic, and that golden path has not been found yet.

We need to educate people and raise awareness of the potential risks and harms, as we do with the extensive use of social media. Both social media and chatbots can lead to loneliness, isolation and withdrawal from human relationships – and these amplify AI psychosis.

Sources: https://www.psychologytoday.com/us/blog/urban-survival/202507/the-emerging-problem-of-ai-psychosis

https://www.psychologytoday.com/us/blog/the-digital-self/202601/when-thinking-becomes-weightless

https://mental.jmir.org/2025/1/e85799

https://psychiatryonline.org/doi/10.1176/appi.pn.2025.10.10.5

https://www.theguardian.com/commentisfree/2025/oct/28/ai-psychosis-chatgpt-openai-sam-altman

https://theconversation.com/ai-induced-psychosis-the-danger-of-humans-and-machines-hallucinating-together-269850

09/02/2026
The great AI disappointment
AI 5 min read

written by Erik Bayer, CEO of Peruzzi Solutions

In December 2022, a new era began. OpenAI launched ChatGPT, with the mission of creating “highly autonomous systems that outperform humans.” It received widespread media coverage, everyone started chatting with the chatbot, and old movies such as The Terminator began to feel more like a realistic dystopia than science fiction. Microsoft invested in OpenAI as early as two months after the launch of ChatGPT, and by 2025 they owned a 27% stake in OpenAI Group PBC, whose chatbot had over 800 million weekly active users – emphasizing how big a success they had hoped AI would become.

Many tech billionaires started envisioning the future of AI and how it would be the biggest innovation since the industrial revolution. However, the world also started to speculate about how AI would replace workers and entire professions. Graphic designers by Midjourney; writers, content writers, and journalists by ChatGPT; academics by DeepSeek; social media and marketing managers by Meta AI; or programmers and software engineers by Claude. But this did not come true. While coding and content creation look very different today, AI is not ready to replace human workers.

AI – doomsday and world domination

We envisioned doomsday, when AI and robots would take over and join forces against the human race, dominating our world. Despite these dystopian visions, it looks like we have reached the limits of AI. In the end, AI chatbots are chatbots: you write prompts, the chatbot breaks them into small pieces called tokens, predicts which token should come next, and then returns the result to you. If you’d like to learn more about how AI actually works, read our previous article: What is AI and how does it really work? A practical guide.
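If you want to see tokenisation for yourself, the short sketch below uses OpenAI’s open-source tiktoken package (an assumption: you have it installed via pip install tiktoken); the exact token boundaries and IDs depend on the encoding you choose.

```python
# Splitting a prompt into tokens with tiktoken; token IDs vary by encoding.
import tiktoken

encoding = tiktoken.get_encoding("cl100k_base")

prompt = "Summarise this PDF for me, please."
token_ids = encoding.encode(prompt)

print(token_ids)                                   # one integer per token
print([encoding.decode([t]) for t in token_ids])   # the text fragment behind each token
```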

What were the hopes of AI?

AI was supposed to change the world by being very cost-effective through replacing the human workforce, especially programmers, whose salaries had been rising continuously for the past ten years. Imagine: instead of hiring multiple software developers at an average annual salary of £80,000–£120,000, you could replace them with Claude or ChatGPT for £18 per month. Perhaps you could hire a student part-time as a prompt agent for £25 per hour.

Once you replaced all the software developers, you could move on to making graphic designers redundant, keeping only one junior or student and buying a Midjourney subscription. Then marketers and sales could be replaced by ChatGPT as well, and it would be best to send away customer service representatives too and integrate a chatbot into the webpage. And you’re done! Fast, cost-effective, and you can take out a huge bonus for yourself at the end of the day.

This is exactly what we saw: massive layoffs in IT departments at big tech companies like Zoom or Google, with tens of thousands of software developers of all seniority losing their jobs. These layoffs were to fulfil the expectations of replacing programmers, not because chatbots were actually this effective. In fact, something started to shift. Even though more and more chatbots emerged in recent years, they were not “smarter” or more intelligent in any way than the first ChatGPT. Naturally, they are faster, more accurate, and more precise, but slogans such as “ChatGPT is the third-best coder in the world” were exaggerated - very much so.

The reality and disappointment of AI - Why artificial intelligence won’t replace humans

What AI is really useful for is debugging code, checking wording and grammar, and summarising lengthy documents: and it might replace junior or student programmers, but not seniors. It did, however, keep one promise: to make tech billionaires like Elon Musk, Sam Altman, or Jeff Bezos even richer. Nonetheless, they need to hear the truth: artificial intelligence is overrated, and we have reached its limits. AI will not overtake the world. ChatGPT is perfect for giving you advice on puppy training or recommending attractions while travelling, and Claude can really help you debug your code, but it will not replace professionals.

What’s the truth? According to The Guardian, newspaper readers largely reject AI-generated writing, helping to preserve journalists’ roles, and chatbots often cite fictitious cases. Legal, taxation, and financial advice generated by tools like ChatGPT is frequently inaccurate, meaning jobs in these fields remain safe. Data privacy issues, response quality inconsistencies, and made-up, unreliable citations and sources have also started to surface.

The AI psychosis and data concerns

Beyond this, the most problematic issue has started to emerge: the so-called AI psychosis. More and more people are turning to AI chatbots for emotional support and even developing a form of therapeutic relationship with ChatGPT.

Combined with the aforementioned data privacy issues, this is an even bigger problem than it seems. Not only do these interactions increase loneliness and the lack of personal relationships, but these tech corporations may know even more about our mental health than they ever should. All of this information can be sold to Meta, after which all kinds of advertisements can be targeted at these vulnerable people – starting from pseudoscience and extending to potentially harmful content.

The real future of AI

So, is it still worth funding new AI implementations in your business? Absolutely yes – but we have to be very specific and careful. If you would like to replace your employees with AI, that is still far beyond AI’s reach. However, if you’d like to save time by automating file organisation, summarising lengthy documents, automating workflows, or even detecting fraud or defects, AI can be a great help.

Why artificial intelligence won’t replace humans?

Overall, if you have tasks that are repetitive and require speed and accuracy but not creativity, different AI solutions can be effective. Before you start implementing and pouring thousands of pounds or euros into AI projects, read our previous article about the 3 questions we always ask before starting an AI PoC.

Key takeaways

  • AI has significantly changed how we work, but it has not replaced human professionals in complex, creative, or high-responsibility roles.
  • Most AI tools have plateaued in intelligence, improving mainly in speed and polish rather than true reasoning or understanding.
  • AI is best suited for supportive and repetitive tasks, such as summarisation, code review, workflow automation, and basic analysis.
  • Overreliance on AI introduces serious risks around data privacy, misinformation, and mental health, especially when used for emotional support or advice.
  • Businesses should adopt AI strategically and cautiously, focusing on efficiency gains rather than workforce replacement.
  • The future of AI lies in augmentation, not domination – helping humans work better, not eliminating them.

Sources:

https://zapier.com/blog/best-ai-chatbot/

https://www.businessinsider.com/fei-fei-li-disappointed-by-extreme-ai-messaging-doomsday-utopia-2025-12

https://techcrunch.com/2026/01/02/in-2026-ai-will-move-from-hype-to-pragmatism/

https://www.pwc.com/us/en/tech-effect/ai-analytics/ai-predictions.html

https://www.ibm.com/think/news/ai-tech-trends-predictions-2026

06/01/2026
What is AI and how does it really work? A practical guide
AI 4 min read

AI has completely shifted the way we think, work, or even talk about our personal issues. From personalised recommendations to automated workflows and tools like ChatGPT, AI has become part of everyday life for millions of people. But here’s the interesting thing: even though everyone uses AI, very few can clearly explain what AI actually is.

Is it a robot? Is it ChatGPT?

But the truth is far simpler and far more exciting.

Artificial Intelligence isn’t just one tool or one system. It’s a continuum of technologies and capabilities that build on top of each other, each offering a new level of intelligence and automation. Whether you’re a business leader, a developer, or simply AI-curious, understanding these layers helps you make better decisions about what your team really needs. In this practical guide, we break down the four stages of AI, when you should use them, and, most importantly: what is AI and how does it really work?

1. Machine Learning: The foundation of modern AI

Machine Learning (ML) is the foundation of modern AI. It’s not magic, it’s math. ML systems learn patterns from data and use those patterns to make predictions or classifications. For ML, you need training data, which you “feed into” the ML model so that it can learn patterns and then use them to make predictions on new (test) data.

At its core, ML works like this:

  1. You feed the system historical data (examples or training data).
  2. The model identifies patterns in that data.
  3. It uses these patterns to predict future outcomes.

Machine learning doesn’t think, it calculates. Its power comes from identifying trends humans might miss across millions of data points.
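A minimal sketch of this three-step loop, using the widely used scikit-learn library; the numbers and the equipment-failure scenario are made up purely for illustration.

```python
# Minimal ML example with scikit-learn: learn from historical data, predict new cases.
from sklearn.linear_model import LogisticRegression

# Step 1 – historical data: [running_hours, temperature_C] per machine,
# labelled 1 if it failed shortly afterwards, 0 if it kept running (made-up numbers).
X_train = [[100, 40], [1200, 85], [300, 45], [1500, 90], [200, 38], [1100, 80]]
y_train = [0, 1, 0, 1, 0, 1]

# Step 2 – the model identifies the pattern in that data.
model = LogisticRegression()
model.fit(X_train, y_train)

# Step 3 – use the pattern to predict future outcomes.
print(model.predict([[1300, 88]]))  # likely [1]: high failure risk
print(model.predict([[150, 41]]))   # likely [0]: low failure risk
```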

When to use ML?

A team might need ML if they want to:

  • Predict equipment failures before they happen
  • Classify large volumes of data, such as customer reviews, as positive or negative
  • Forecast sales for the next quarter
  • Improve accuracy over time

Often, when businesses say “We need AI,” what they really need is a simple but powerful ML model.

2. AI services: Task-specific intelligence built on ML

As ML matured, companies began packaging these models into easy-to-use AI services. These are specialised tools designed to perform one task exceptionally well. These services don’t create new content or act autonomously. They simply execute a task better, faster, and more consistently than humans.

These are still AI, but not the “chat with a robot” type. They’re purpose-built, efficient, and incredibly reliable.

When to use AI services?

You need AI services when:

  • The task is repetitive
  • You need speed at scale
  • You want accuracy, not creativity
  • You’re dealing with structured data or clear rules

Think of them as AI “modules” that plug into workflows to automate one specific thing.
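As a rough illustration, the sketch below builds such a “module” with Hugging Face’s transformers library (an assumption: transformers plus a backend such as PyTorch are installed); cloud providers expose the same kind of single-task capability through managed APIs.

```python
# A task-specific AI module: a pre-trained sentiment classifier that does one job well.
from transformers import pipeline

sentiment = pipeline("sentiment-analysis")  # downloads a pre-trained model on first use

reviews = [
    "The delivery was fast and the support team was brilliant.",
    "The product broke after two days and nobody answered my emails.",
]
for review in reviews:
    print(sentiment(review))  # e.g. [{'label': 'POSITIVE', 'score': 0.99}]
```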

AI innovation pathway: from machine learning to autonomous and agentic AI

3. Generative AI: The creative leap forward

If Machine Learning is analytical and AI services are functional, then Generative AI (GenAI) is creative. This is the category that brought AI into the mainstream – it is what we think of when we talk about AI.

Generative AI can produce new content — text, images, videos, audio — based on patterns learned from massive datasets.

How does GenAI work?

GenAI models (like ChatGPT, Claude, or Midjourney) are trained on enormous amounts of data. They learn:

  • How language works
  • How images are structured
  • How styles differ across formats
  • How humans communicate

Once trained, they can generate original content from a simple prompt.
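Here is what “generate from a simple prompt” looks like in code, using OpenAI’s Python SDK as one example (assumptions: the openai package is installed, an OPENAI_API_KEY is set, and the model name is whichever one your account offers).

```python
# Generating new text from a prompt with the OpenAI Python SDK.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[{"role": "user",
               "content": "Draft a two-sentence release note for our new reporting dashboard."}],
)
print(response.choices[0].message.content)  # newly generated text, not a lookup
```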

When to use Generative AI?

Teams benefit from GenAI when they want to:

  • Speed up creative work
  • Produce text or visuals at scale
  • Draft documentation or code
  • Brainstorm ideas
  • Automate communication

But here’s the key insight: Not every AI problem needs Generative AI. In many cases, simple ML or task-specific AI services are more efficient, cheaper, and easier to integrate.

4. Autonomous & agentic AI: The future of intelligent systems

The next stage in AI evolution is autonomous AI: systems that don’t just follow commands but act independently to achieve goals. This is where AI agents come in.

Agents are AI systems that can:

  • Plan
  • Make decisions
  • Coordinate actions
  • Work with other agents
  • Trigger workflows automatically

Think of them as digital team members that can manage an entire process end-to-end.
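Conceptually, an agent is a loop that plans, picks a tool for each step, and triggers it without waiting for a human. The sketch below hard-codes the plan and the tools to keep it readable; a real agent framework would let an LLM do the planning and call genuine integrations.

```python
# Conceptual agent loop: plan -> choose tool -> act, with each step feeding the next.

def fetch_report(topic: str) -> str:
    return f"raw data about {topic}"

def summarise(text: str) -> str:
    return f"summary of: {text}"

def send_email(body: str) -> str:
    return f"email sent with body: {body}"

TOOLS = {"fetch": fetch_report, "summarise": summarise, "email": send_email}

def run_agent(goal: str) -> None:
    plan = ["fetch", "summarise", "email"]  # a real agent would derive this from the goal
    result = goal
    for step in plan:
        result = TOOLS[step](result)  # no human in the loop between steps
        print(f"{step}: {result}")

run_agent("Q3 sales figures")
```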

When to use agentic AI?

These systems make sense when you need:

  • Full workflow automation
  • Multi-step processes handled without supervision
  • AI that can collaborate across tools and data sources

It’s the most advanced form of AI, but it’s not always necessary. Most companies will reach this stage gradually, after building strong ML + GenAI foundations.

What does your team actually need?

This is the most important question, and the one most businesses answer incorrectly. Companies often say: “We need AI.” But “AI” is broad. The real question is: Which level of AI solves your problem? Maybe you need a simple Machine Learning model. Maybe you need a single AI service. Maybe you need Generative AI for creative tasks. Or maybe you’re ready for full autonomous agents.

Understanding the difference helps you to spend smarter, implement faster, set realistic expectations, choose the right tool, and avoid over-engineering.

Most importantly, it ensures AI genuinely creates value instead of becoming yet another shiny experiment. In our previous article, we talked about which 3 questions to ask yourself before implementing AI in your business.

Key takeaways

Artificial Intelligence is not one thing. It’s an evolving landscape with multiple layers, from basic prediction systems to creative tools to autonomous agents. Each layer has its role, its strengths, and its ideal use cases. Whether you’re modernising internal processes, automating search and qualification tasks, or exploring new digital capabilities, clarity is your biggest advantage.

04/12/2025
What’s really blocking your AI adoption? The cost & ROI question
AI 2 min read

A few weeks ago we asked the question on our LinkedIn page: What’s blocking your AI adoption? 

Amongst the poll results, one stood out the most, so let’s talk about the elephant in the room: Cost and ROI uncertainty.

For many teams, it’s not that they don’t believe in AI. They’ve seen the demos, tried ChatGPT or even use it daily, or maybe piloted a tool or two. Each industry has its own AI-powered tools for tasks such as writing text, reviewing code or doing complicated calculations. The real blocker is this nagging question:

“If we invest in AI… will it actually pay off?”

And that’s a completely valid concern.

Destroying the illusion that AI is expensive

Most AI conversations start with big promises:

  • “Automate your workflows!”
  • “10x your productivity!”
  • “Unlock new insights!”

All great in theory. But when it comes to budget discussions, leadership wants numbers, not slogans:

  • How much will it cost per month?
  • How many hours will it actually save?
  • When will we see a return?

Because many AI projects start as “experiments,” they’re not framed with clear success metrics. That makes AI feel like a nice-to-have innovation project, not a strategic investment.

Hidden costs = hesitation

Teams aren’t just afraid of license fees. They worry about the hidden costs:

  • Time needed to implement and integrate tools.
  • Training employees to use them effectively.
  • The risk of choosing the wrong solution and starting over in 6 months.

Without clarity, AI becomes a perceived cost center instead of a value generator.

The ROI problem: you can’t improve what you don’t measure

A lot of AI usage today is ad hoc: someone in marketing uses ChatGPT, someone in sales drafts emails with AI, someone in operations experiments with automation.

Useful? Yes. Measurable? Rarely.

To reduce ROI uncertainty, teams need to move from random acts of AI to intentional AI use cases:

  • “We want to reduce time spent on X by 30%.”
  • “We want to cut manual reporting effort from 10 hours/week to 3 hours/week.”
  • “We want to increase tender/lead qualification accuracy by Y%.”

Once you define that, suddenly AI ROI is no longer abstract. You can compare before vs. after.

How to unblock yourself

If cost and ROI uncertainty are holding you back, try this approach:

1.     Start small and specific. Pick one painful, repetitive process (e.g. tender discovery, reporting, email drafting). Don’t “do AI everywhere”, do it somewhere meaningful.

2.     Put numbers on the pain. How many hours per week are spent on this task? What’s the approximate hourly cost? That’s your baseline.

3.     Run a time-boxed pilot. Test an AI tool for 4–8 weeks with a small group. Measure time saved, quality improvements, or error reduction.

4.     Translate results into money (see the sketch below this list). If your team saves 10 hours/week, what does that equal in salary cost or freed-up capacity? That’s your business case.

5.     Decide with data, not fear. At that point, AI investment is no longer a leap of faith – it’s a decision backed by numbers.
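To make step 4 concrete, here is a back-of-the-envelope calculation; every number is a placeholder you should replace with your own baseline and pilot measurements.

```python
# Rough AI ROI estimate – swap in your own measured numbers.
hours_saved_per_week = 10      # measured during the pilot
hourly_cost = 45               # approximate fully loaded cost in GBP
tool_cost_per_month = 400      # licences, integration and maintenance estimate

monthly_saving = hours_saved_per_week * 4.33 * hourly_cost  # ~4.33 weeks per month
net_monthly_value = monthly_saving - tool_cost_per_month

print(f"Gross saving per month: £{monthly_saving:,.0f}")
print(f"Net value per month:    £{net_monthly_value:,.0f}")
```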

Key takeaways

The biggest blocker to AI adoption often isn’t the technology – it’s uncertainty. Once you define a clear use case, measure the impact, and translate it into ROI, the conversation shifts from:

“Can we afford AI?” to “Can we afford not to use it?”

10/11/2025
The 3 Questions We Always Ask Before Starting an AI PoC
AI 2 min read

Developing an AI PoC can be costly and can take more than 6 months. We at Peruzzi Solutions believe that it doesn’t have to be this way. To develop a PoC on a budget and on time, you need to ask yourself the following questions:

1.     What single process or task do we want to improve, and how will we measure success?

Many PoCs fail because they try to prove too much. We always narrow it to one clear, measurable goal.

For example: “Reduce contract review time by 50%” or “Auto-classify 80% of documents correctly.”

Don’t forget: If you can’t measure it, you can’t prove it.

2.    Do we have access to the right data (and permissions)?

Great ideas fall apart without usable, compliant data. We check early that your data is:

  • Accessible (in the right format)
  • Clean enough for AI to learn from
  • Cleared for experimentation (especially in legal/finance contexts)

Don’t forget: Data quality determines PoC success.

3.     What happens after success?

We define the “so what?” upfront. A PoC is only valuable if there’s a clear path to adoption, e.g., integration into an internal tool, scaling to production, or supporting a funding decision. This keeps projects outcome-driven, not just experimental.

Bonus tip:

We’ve found that a PoC with one sharp question, good data, and a clear next step succeeds 10× more often than projects that start broad.

10/10/2025
Case Study: Building an AI system to qualify EU tenders
AI 4 min read

In today’s competitive procurement landscape, identifying the right opportunities fast can make or break a business’s growth strategy. For small to mid-sized companies across Europe, monitoring hundreds of procurement portals manually is both time-consuming and inefficient.

Bidbot, a startup focused on EU tender qualification, approached Peruzzi Solutions with a clear mission: to create a scalable, AI-powered platform that automatically discovers and ranks public tenders based on their relevance to each user’s business profile. This is the AI development case study of that project.

Our team delivered a fully functional Proof of Concept (PoC) in just one month, laying the foundations for an intelligent, self-improving system that’s helping companies save time and win more public contracts.

Problem

Public tenders in the EU are published daily on multiple platforms, including the official Tenders Electronic Daily (TED). Each listing includes long technical descriptions, inconsistent formats, and complex metadata.

For Bidbot’s target users - busy business development teams - the process of manually sorting through these tenders was overwhelming.

The key challenges were to:

  • Automate tender discovery and filtering, so users receive only the most relevant opportunities.
  • Reduce noise, ensuring that results align with company size, sector, and region.
  • Deliver insights fast, through a modern, user-friendly interface.

Solution

At Peruzzi Solutions, we always start with a focused Proof of Concept to provide a quick solution. For Bidbot, we followed a structured approach:

1. Data Collection & Structuring

We built a robust data pipeline to scrape, clean, and structure data from TED (Tenders Electronic Daily). This included automated normalization of tender metadata and multilingual processing across EU member states.

2. AI-Powered Relevance Ranking

Using LangChain and FastAPI, we developed a modular relevance ranking engine. The model combines AI prompts and heuristics to evaluate tenders based on criteria such as sector, region, and buyer profile.

This ensured that each user receives a personalized feed of high-relevance tenders instead of a generic list.
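To give a flavour of the approach (this is a simplified sketch, not Bidbot’s production code, and the field names and weights are illustrative), a relevance-scoring endpoint in FastAPI might look like this:

```python
# Simplified relevance-scoring endpoint: FastAPI + a toy heuristic.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class Tender(BaseModel):
    title: str
    sector: str
    country: str

class Profile(BaseModel):
    sectors: list[str]
    countries: list[str]
    keywords: list[str]

@app.post("/score")
def score_tender(tender: Tender, profile: Profile) -> dict:
    score = 0.0
    score += 0.4 if tender.sector in profile.sectors else 0.0      # sector match
    score += 0.3 if tender.country in profile.countries else 0.0   # region match
    hits = sum(k.lower() in tender.title.lower() for k in profile.keywords)
    score += min(0.3, 0.1 * hits)                                  # keyword hits, capped
    return {"relevance": round(score, 2)}
```

In the production engine, this kind of heuristic layer is combined with AI prompts, as described above.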

3. End-to-End Platform Build

We delivered a complete web-based platform, built with Vue.js, ASP.NET, and Microsoft SQL Server, featuring:

  • Secure user onboarding and authentication
  • Subscription and notification flows
  • Custom dashboards and daily digests
  • Feedback collection for continuous AI refinement

Within four weeks, Bidbot had an operational product ready for pilot testing, not just a prototype.

The Solution in Action

The system’s architecture allows new tenders to be automatically processed, scored, and ranked in real time. Users receive curated tender suggestions daily, while the AI model continues to learn from user feedback to refine its matching accuracy.

This dynamic feedback loop means the platform gets smarter over time — a core design principle in all Peruzzi AI builds.

Impact

Efficiency: The impact was noticeable: Bidbot’s users now spend 70% less time filtering through irrelevant tenders, focusing instead on opportunities that truly matter to them.

Scalability: The system was designed with scalability in mind. Its modular architecture allows for future integration of:

  • Sector-specific AI models
  • Advanced analytics and reporting tools
  • Custom recommendation engines for private sector tenders

User Experience: The interface is built for non-technical users — fast, clean, and intuitive — enabling legal, procurement, and sales teams to benefit without additional training.

Looking Ahead

The success of the initial Proof of Concept set the foundation for Bidbot’s next development phase, including:

  • Expansion beyond EU tenders into national procurement databases
  • Integration of custom analytics dashboards
  • AI model fine-tuning for sector-specific tender recommendations

For Peruzzi, this project exemplifies how AI-driven automation can solve complex, real-world challenges in public procurement — quickly, affordably, and with tangible impact.

Key takeaways

Within just one month, Peruzzi Solutions delivered a fully operational AI platform that automates EU tender qualification - transforming how businesses discover opportunities and compete for public contracts.

06/10/2025
Case study: A data engineering pricing solution
Data Engineering 4 min read

Overview

A leading insurance company faced challenges with a legacy pricing and risk analysis system that was rigid, error-prone, and slow to adapt. Most pricing logic was embedded in the database layer, making updates cumbersome and hindering timely business decisions.

Peruzzi Solutions implemented a data engineering pricing solution, enabling the company to dynamically configure pricing rules, streamline workflows, and accelerate time-to-market for new contracts.

Client background

The client is a major insurance provider with complex pricing and contract modeling needs. Their legacy system required manual database code changes for any updates, creating operational bottlenecks and slowing decision-making.

Problem

  • Inflexible legacy system: Hard-coded pricing rules made changes slow and risky.
  • Operational inefficiencies: Manual updates led to errors and delays.
  • Limited agility: Slow reaction to market changes hindered strategic decisions.

The company needed a flexible, scalable, and automated solution to modernise pricing workflows and support faster data-driven decision-making.

Solution

To modernise the client’s pricing and risk analysis system, we implemented a data engineering-driven solution that combined flexibility, automation, and scalability:

  • Dynamic rules engine: Developed a centralised library that enables actuaries and risk modellers to create, modify, and manage pricing structures dynamically, completely eliminating the need for manual code changes.
  • Data integration & transformation: Ingested both structured and unstructured data into Azure SQL Database and Azure Blob Storage, with automated ETL pipelines ensuring that pricing models always work with clean, up-to-date, and consistent data.
  • Scalable & automated architecture: Deployed containerised workloads on Azure Kubernetes Services for elastic scaling, and used Azure Functions and Logic Apps to automate workflows, orchestrate data processes, and schedule recurring tasks efficiently.
  • Future-ready AI integration: Incorporated Azure Cognitive Services to support potential AI-driven enhancements, enabling advanced analytics and intelligent pricing insights as the system evolves.
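To illustrate the idea behind the dynamic rules engine described in the first bullet above (a deliberately simplified sketch, not the client’s actual implementation), pricing logic can be kept as data that actuaries edit, rather than code that has to be redeployed:

```python
# Pricing rules as data: each rule has a condition and an adjustment.
pricing_rules = [
    {"name": "base_rate",       "applies_if": lambda c: True,                       "adjust": lambda p, c: p + 500},
    {"name": "young_driver",    "applies_if": lambda c: c["age"] < 25,              "adjust": lambda p, c: p * 1.30},
    {"name": "no_claims_bonus", "applies_if": lambda c: c["claim_free_years"] >= 5, "adjust": lambda p, c: p * 0.85},
]

def price_contract(contract: dict) -> float:
    premium = 0.0
    for rule in pricing_rules:              # rules are evaluated in order
        if rule["applies_if"](contract):
            premium = rule["adjust"](premium, contract)
    return round(premium, 2)

print(price_contract({"age": 22, "claim_free_years": 0}))  # 650.0
print(price_contract({"age": 40, "claim_free_years": 6}))  # 425.0
```

In the real system, the rules live in a centralised library backed by the database, so a new pricing rule is a configuration change rather than a code release.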

Impact

Operational efficiency: The biggest impact was that we streamlined contract modelling and pricing workflows, eliminating manual bottlenecks.

Faster time-to-market: Dynamic rule adjustments allow launch of new contracts without code changes.

Empowered teams: Actuaries and risk modellers can quickly test and iterate pricing models.

Scalable & maintainable: Modern architecture supports growing data volumes and business needs.

Conclusion

This project demonstrates how a data engineering pricing solution can transform legacy financial systems into flexible, scalable, and efficient platforms. By modernising their pricing and risk analysis workflow, the client gained agility, reduced errors, and accelerated business decisions, proving that data engineering is a direct driver of business value.

21/08/2025
Case Study: An AI-powered legal assistant PoC in 10 days
AI 3 min read

Overview

We’ve partnered with AILA, an AI-powered legal admin startup, to develop an intelligent assistant PoC capable of processing legal inboxes, drafting responses using legal documents, and raising Jira issues to streamline workflow. The solution needed to integrate seamlessly with existing email and internal tools, without disrupting established legal workflows.

This case study is about a Proof of Concept (PoC) aimed at quickly validating the product-market fit for the client’s AI-powered solution, while ensuring the system was lean, secure, and capable of supporting rapid iterations based on user feedback.

Problem

The client, an AI-powered legal admin startup, faced a significant problem within its legal teams. Their experts were overwhelmed by routine tasks such as inbox management, follow-ups, and issue tracking, with no central agent to automate or streamline these processes. Specifically:

  • Legal inbox overload: Legal teams spent substantial time managing and responding to routine emails, often losing focus on high-priority legal work.
  • Disconnected systems: There was no central solution to connect emails, legal documents, and Jira-based work tracking, making it difficult to manage the volume of tasks efficiently.
  • Need for a fast MVP: The client wanted to ship a working MVP in under two weeks to quickly test product-market fit and gain valuable feedback from users.
  • Integration challenges: The solution needed seamless integration with existing systems (email, Jira, document management) to avoid disrupting established workflows and ensure that legal professionals could continue to use the tools they were familiar with.

The project needed to address these pain points with a simple yet scalable AI solution that could be deployed quickly while being secure and extensible.

Solution

To meet the client’s needs, the solution was to design a modular architecture that utilized cutting-edge AI tools and cloud services to automate routine tasks and integrate smoothly with the client’s existing systems.

Implementation Process:

  1. Integration with Google Inbox: We connected the system to the client’s Google inbox using the Gmail API, enabling the AI assistant to automatically fetch and process incoming emails.
  2. Jira REST API: Integrated with Jira to allow the assistant to automatically create, update, and track issues from legal emails and responses.
  3. Document Retrieval: Azure AI Search was used to retrieve relevant legal documents and templates, which were then used by the AI assistant to draft responses.
  4. Serverless Architecture: To minimize overhead, the entire solution was deployed using serverless Azure resources. This allowed the system to scale rapidly as usage increased, without the need for complex infrastructure management.

The modular design of the system allowed for rapid iteration, with the flexibility to adjust and add new features based on feedback from legal professionals.
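As a rough sketch of the first two integrations (assuming Google API credentials and a Jira API token already exist; the project key "LEGAL" is a placeholder), the assistant’s plumbing boils down to calls like these:

```python
# Fetch unread emails via the Gmail API and raise a Jira issue via the REST API.
import requests
from googleapiclient.discovery import build

def fetch_unread_messages(creds, max_results: int = 5):
    gmail = build("gmail", "v1", credentials=creds)
    resp = gmail.users().messages().list(userId="me", q="is:unread",
                                         maxResults=max_results).execute()
    return resp.get("messages", [])

def create_jira_issue(base_url: str, email: str, api_token: str, summary: str) -> str:
    payload = {"fields": {"project": {"key": "LEGAL"},   # placeholder project key
                          "summary": summary,
                          "issuetype": {"name": "Task"}}}
    r = requests.post(f"{base_url}/rest/api/2/issue",
                      json=payload, auth=(email, api_token))
    r.raise_for_status()
    return r.json()["key"]
```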

Impact

Delivery: The impact was remarkable. The PoC was delivered in just 10 days - a rapid turnaround that exceeded expectations. The solution was fully integrated with email, Jira, and document management tools, and was tested thoroughly to ensure that it met the client’s needs.

Efficiency: The AI-powered assistant dramatically improved the efficiency of the client’s legal teams by automating routine tasks, reducing the time spent on inbox management and follow-ups. With a human-in-the-loop supervision model, legal professionals could quickly review and approve draft responses, enabling them to focus on more critical work.

User feedback and iteration: The client was able to test the MVP with internal users immediately, gaining valuable insights that would shape the future iterations of the AI assistant. The assistant’s ability to learn from early feedback ensured a rapid cycle of refinement and improvement, positioning the solution for future scaling.

Scalability: The architecture was designed with scalability in mind, using serverless Azure resources and a modular framework that allowed for easy addition of new features as needed. The PoC set a solid foundation for scaling the AI assistant to handle more complex legal workflows in the future.

Key takeaways

This AI-powered legal assistant PoC demonstrated the power of leveraging modern AI frameworks, cloud-based architecture, and seamless integration to solve a real-world problem in the legal field. By freeing up legal experts from non-substantive tasks, the solution significantly improved internal efficiency while supporting human oversight.

In just two weeks, we delivered a lean, effective, and secure solution that solved the client’s problem: a rapid MVP deployment and seamless integration on a scalable architecture. The project set the stage for future iterations based on user feedback and positioned the client for successful product-market fit in the competitive legal tech industry.

08/08/2025
Low-code and no-code data engineering solutions – What does the future hold for them?
Data Engineering 4 min read

The low-code/no-code trend has been on the rise for the past few years, with the promise of building applications, websites or software with minimal or no coding experience. Although this allows faster development at a lower price, the question remains: what does the future hold for these tools?

These low-code methods can be found in data engineering solutions too. ETL (extract, transform, load) tools aim to simplify data management and make it possible to build data pipelines without extensive coding experience. This is where data engineering solutions come into play, with the help of low-code/no-code ETLs. In this comprehensive guide, we’ll explore:

  • What data engineering solutions are
  • Why they’re critical for businesses
  • The rise of low-code/no-code platforms
  • Real-world use cases
  • Key benefits
  • Choosing the right solution for your needs

What are data engineering solutions?

Let’s dive deep, but start from the beginning. From driving decision-making to enabling predictive analytics, modern businesses rely heavily on data to maintain a competitive edge. But raw data alone isn’t enough - it needs to be organised, accessible, and actionable.

Data engineering refers to the process of designing, building, and maintaining systems that collect, store, and analyze large volumes of data. The goal is to create robust data pipelines and architectures that deliver clean, reliable, and timely data to users and applications.

Data engineering solutions are the tools (such as the ETL), platforms, and services that help businesses build these pipelines efficiently. These can range from open-source frameworks like Apache Spark and Kafka to fully managed cloud services such as Microsoft Azure Data Factory, AWS Glue, and Google Cloud Dataflow.

In recent years, low-code and no-code data engineering solutions have emerged, enabling non-technical users to participate in building and managing data pipelines without needing advanced coding skills. This means that, instead of writing code, these ETL tools let you use a visual, user-friendly interface to design, build or manage data pipelines. Drag-and-drop functionality (just like moving folders or uploading photos), reusable templates (as in WordPress) and pre-built connectors help you work quickly and efficiently.

Types of data engineering solutions: traditional, cloud-native and low-code

Before we dive deep into the low-code/no-code data engineering solutions and their benefits, we need to mention the other solutions.

1. Traditional code-first solutions

This is the “ancient” data engineering solution, which we all think about when it comes to data engineering. For organisations with in-house engineering expertise, traditional solutions like:

  • Apache Airflow (workflow orchestration)
  • Apache Spark (large-scale data processing)
  • SQL scripts & stored procedures are still widely used.

These solutions provide maximum flexibility and customisation, but they require technical skills to maintain, along with expert software engineers and data engineers.
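For a sense of what “code-first” means in practice, here is a minimal Apache Airflow DAG sketch (the pipeline name and tasks are illustrative, and the scheduling argument may differ slightly between Airflow versions):

```python
# A daily extract -> transform -> load pipeline defined entirely in code.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull raw data from source systems")

def transform():
    print("clean and reshape the data")

def load():
    print("write the result into the warehouse")

with DAG(dag_id="daily_sales_pipeline",
         start_date=datetime(2025, 1, 1),
         schedule="@daily",
         catchup=False) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)
    t_extract >> t_transform >> t_load
```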

2. Cloud-Native Managed Services

Cloud providers have developed fully managed data engineering platforms, including:

  • Azure Data Factory (ideal for Azure cloud ecosystems)
  • AWS Glue (serverless ETL for AWS)
  • Google Cloud Dataflow (for real-time and batch data processing)

These reduce infrastructure overhead and provide scalability on demand. If you want to know more about how cloud native applications work, we recently wrote a guide about cloud native applications.

3. Low-code/no-code platforms

For businesses wanting to accelerate development with fewer technical barriers, low-code/no-code platforms include:

  • Alteryx
  • Knime
  • Microsoft Power BI Dataflows
  • Google Cloud DataPrep
  • Apache NiFi

These platforms offer drag-and-drop interfaces, pre-built connectors, and templates, allowing data analysts and business users to build pipelines without coding.

The rise of low-code/no-code in data engineering

Low-code/no-code data engineering solutions democratise access to data workflows. With a visual interface and minimal code, even teams without dedicated data engineers can:

  • Extract data from multiple sources
  • Clean and transform datasets
  • Feed data into BI tools or ML models

Example:

A marketing team could use a low-code tool to automatically pull data from Google Analytics, clean it, and push it into a dashboard without writing any code. While low-code tools accelerate delivery and reduce IT bottlenecks, they’re typically best suited for small to medium-sized projects or for prototyping before full-scale engineering implementation.
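For comparison, here is roughly what that marketing pipeline looks like when hand-coded with pandas – the part a low-code tool hides behind drag-and-drop (the file and column names are made up for the example):

```python
# Extract -> transform -> load, hand-coded with pandas.
import pandas as pd

# Extract: a CSV export of web analytics data.
df = pd.read_csv("analytics_export.csv")

# Transform: tidy column names, drop incomplete rows, aggregate sessions per channel.
df.columns = [c.strip().lower() for c in df.columns]
df = df.dropna(subset=["sessions", "channel"])
summary = df.groupby("channel", as_index=False)["sessions"].sum()

# Load: write the cleaned table to wherever the dashboard reads from.
summary.to_csv("dashboard_feed.csv", index=False)
```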

Pros and cons of low-code and no-code in data engineering

By removing the need to write code by hand, these low-code/no-code methods increase productivity in data engineering. Their automated features also reduce human mistakes and errors, which improves data quality, and they let data engineers focus on higher-level problem solving – instead of spending their time coding, they can spend it on planning and strategic initiatives. They are also cost-effective for small and mid-sized businesses, as they reduce dependence on specialised developers.

However, one of the biggest downsides of these ETL tools is that they are less flexible when it comes to complex or large-scale systems. As with any automation behind a visually appealing user interface, you cannot do everything you might want in these tools. They cannot be fully customised for specific use cases, so they might not provide exactly what your organisation needs to thrive.

So what does the future hold for them? The future of data engineering solutions

As businesses continue their journey toward digital transformation, the role of data engineering will only grow. The trend is clear:

  • More automation
  • Greater accessibility through low-code/no-code
  • Deeper integration with AI and ML systems

Companies that invest in robust data engineering solutions today will set themselves up for competitive advantages tomorrow.

Key takeaways

Data engineering solutions are no longer optional - they are a business necessity. Whether you’re a small startup or a global enterprise, choosing the right mix of traditional, cloud-native, and low-code/no-code tools can accelerate your data-driven success.

16/06/2025