The Hidden Cost of iPaaS and IFTTT

Self-service automation and AI promise immense productivity gains, yet without high-quality data delivery and governance that potential is wasted. Too many organisations chase quick wins and pay for them with long-term setbacks. Solving our data problems is the key to unlocking AI productivity.

Introduction

In today's rapidly evolving business landscape, self-service business automation tools (Integration Platform as a Service (iPaaS), If This Then That (IFTTT), and the like) have garnered a lot of attention – and for good reason. The promise of empowering business users to independently access data and automate workflows, free from reliance on Information Technology (IT) teams, is an appealing vision that many organisations strive to achieve. The Artificial Intelligence (AI) revolution promises to unlock a whole new level of productivity.

Unfortunately, things are never as straightforward as most vendors would have you believe. Citizen automation and AI offer huge potential, but they require easy and responsible access to robust, unambiguous data and information flows. It's disheartening that organisations are still struggling to establish the underlying data architectures needed for responsible innovation at the speed and scale that emerging technologies demand. It's understandable, then, that business users sidestep data governance and go directly to the source for their data. But this approach disregards many of the well-earned lessons that systems and software engineering have taught us over the years, resulting in a risky and costly illusion of progress.

Going nowhere... in style

Let's start with an analogy. Self-service motorised transport has been around since the late 1800s; back then, it was a luxury for wealthy individuals. In the 1920s, cars became more affordable through the invention of mass production techniques, and by the 1960s, car ownership was widespread among the middle class. Come the 1990s, and there was a surge in the number of two-car households. Fast forward to today, and our personalised motor transport is powered by AI-assisted control systems, automated driving aids, and self-driving technology.

What's missing from this picture is all the innovation in the supporting infrastructure and systems that got us here: the roads, the fuel and charging networks, the satellites that power our navigation systems, the rules, laws, and legislation that keep us safe, the manufacturing and distribution processes, and the standardisation and componentisation that let us rapidly build better cars without constantly reinventing the wheel.

Imagine for a moment that none of this happened, but by a stroke of luck, a passing Elon from a parallel dimension happened to drop off a shiny Tesla Model S. At first we'd be awestruck, but our delight would quickly fade as we found ourselves stuck in the mud, spinning our wheels, going nowhere... and then the battery would go flat.

Just as the intelligent personal vehicles of today need roads, power, and regulation to get anywhere, so the modern self-service, AI-powered business requires infrastructure in the form of high-quality data delivery networks, governance, and standards.

Enough of the analogy, let's get to specifics…

Data (and context) is everything

I remember it well: back in the early 2010s, Microsoft Excel revolutionised self-service analytics by introducing a new set of features that allowed you to effortlessly import and transform data from a range of sources, including flat files and SQL databases. This would later become Power Query, one of the core technologies that underpins Microsoft Power BI today. It was soon possible to connect to an array of third-party products and services via out-of-the-box connectors. The freedom to grab the data you needed was liberating. No longer did we need to wait months (or longer) for centralised IT to get the data we needed. We had access to the data and insights when we needed them. That felt like, and was, a huge boost for productivity.
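
For anyone who hasn't lived it, here is a minimal sketch of that direct-to-source pattern, written in Python rather than Power Query for brevity. The connection string, table, and column names are all invented, but the shape is familiar: connect straight to an operational system, reshape, report.

```python
# A minimal sketch of the direct-to-source pattern.
# The connection string, table, and column names are all invented.
import pandas as pd
from sqlalchemy import create_engine

# Connect straight to the operational database: convenient, but it couples
# this report to that system's schema, semantics, and availability.
engine = create_engine("postgresql://analyst@crm-db/production")

orders = pd.read_sql(
    "SELECT cust_id, ord_dt, amt FROM tbl_ord_hdr",  # names we sort of understand
    engine,
)

# A light reshape, then straight into a report.
monthly_revenue = (
    orders.assign(month=orders["ord_dt"].dt.to_period("M"))
          .groupby("month")["amt"]
          .sum()
)
```

A few lines and you have numbers in front of you. Nothing in those lines tells you whether amt includes tax, or what happens when the CRM team renames the column.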

But there are always trade-offs...

The databases and APIs we connected to had entity names that we sort of understood and fields that contained information that kind of looked right. We'd traded correctness, relevance, and loose coupling for access to data.

Vendors and excited business users gave business leaders compelling demos of how quickly they could automate away inefficiencies. In turn, business leaders made the problem of pesky data governance go away. We ended up trading security, utility, maintainability, flexibility, and scale for speed of output.

We granted access to information and systems on the chance that it might be useful one day, and in the process, traded data privacy for convenience.

And in the age of self-service AI, we are feeding models with information on faith that the required knowledge can be inferred. In doing so, we risk trading productivity for a (very realistic) illusion of productivity.

1. Correctness, relevance, and coupling

We have learned over the years the importance of developing software solutions with a clear goal, within defined constraints and risk tolerances. We discovered the power of componentisation and reuse, understood the importance of decoupling, and created automated guardrails to ensure we built the right thing. In the pursuit of self-service automation, these concerns somehow feel less important.

Correctness means that the solution is fit for purpose, that relevant edge cases are considered, and that the system continues to work as intended over whatever timeframe you need it to. This requires deep domain and technical knowledge of the upstream processes you depend on. Much of this knowledge can't be inferred; it requires a broader understanding of process, strategy, and business constraints.

That context governs how your source systems are used, how they have been configured, and what each data point means for your specific business. Chances are that knowledge lives in the heads of subject matter experts, who will either have to write it down or quickly become a bottleneck. If that sounds like too much friction, you can always make assumptions and hope you get it right, but be ready to wear the cost when you don't.

And then we've got technical concerns. How do you connect to the source systems? Are you really going to use personal credentials that will stop working when your refresh token expires, or when you leave the company? Can you rely on the source system to always notify you when something changes? What about temporal data and understanding change over time? If your customer data is spread across multiple systems, how will you bring it together? Who is going to troubleshoot the problem when your automation stops working? And how are you going to catch up when the upstream system has been offline for half a day?
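
To make two of those concerns concrete, here is a hedged sketch of an incremental sync that uses a service identity (a client-credentials flow) instead of personal credentials, and persists a watermark so the job can catch up after upstream downtime. Every endpoint, scope, and field name is a hypothetical stand-in.

```python
# Sketch: incremental sync with a persisted watermark, so the job can
# catch up after downtime instead of silently missing records.
# All endpoints, scopes, and field names are hypothetical.
import json
import pathlib

import requests

CHECKPOINT = pathlib.Path("orders.watermark.json")

def get_service_token() -> str:
    """Client-credentials flow: a service identity, not a personal login,
    so the sync doesn't break when an employee's refresh token expires."""
    resp = requests.post(
        "https://login.example.com/oauth2/token",
        data={
            "grant_type": "client_credentials",
            "client_id": "automation-svc",
            "client_secret": "<from-a-secret-store>",  # never hard-coded in real life
            "scope": "orders.read",
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["access_token"]

def sync_orders() -> None:
    # Resume from the last high-water mark, not from "changes since now".
    since = "1970-01-01T00:00:00Z"
    if CHECKPOINT.exists():
        since = json.loads(CHECKPOINT.read_text())["modified_since"]

    resp = requests.get(
        "https://api.example.com/v1/orders",
        params={"modified_since": since},
        headers={"Authorization": f"Bearer {get_service_token()}"},
        timeout=30,
    )
    resp.raise_for_status()
    rows = resp.json()["items"]

    # ... load `rows` into your store, keyed so that reruns are idempotent ...

    if rows:
        # ISO-8601 timestamps in the same format compare correctly as strings.
        latest = max(r["modified_at"] for r in rows)
        CHECKPOINT.write_text(json.dumps({"modified_since": latest}))
```

Even this is only a start: it still assumes the API exposes reliable modified-since semantics, and someone still has to own the job when it breaks at 3 a.m.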

Finally, the elephant in the room: we all know that tightly coupling software systems makes change hard. Now that every business department is connected directly to the core systems of the organisation (after all, that's where the interesting data is!), how do you identify, make, and coordinate change across the business, or, worse yet, decommission and replace those core systems when their data model is spread across the organisation?
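
One well-worn mitigation is to publish a small, versioned contract and confine knowledge of the source schema to a single adapter. A sketch, reusing the invented CRM column names from earlier and a deliberately simplified order shape:

```python
# Sketch: a thin, versioned contract between a core system and its consumers,
# so the core schema can change without breaking every department.
# Field names and the source mapping are illustrative assumptions.
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class CustomerOrder:
    """v1 of the published shape. Consumers depend on this,
    never on the CRM's internal tables."""
    customer_id: str
    order_date: date
    amount: float
    currency: str

def from_crm_row(row: dict) -> CustomerOrder:
    """The one place that knows the CRM's cryptic column names.
    When the CRM is replaced, only this adapter changes."""
    return CustomerOrder(
        customer_id=str(row["cust_id"]),
        order_date=row["ord_dt"],
        amount=float(row["amt"]),
        currency=row.get("curr_cd", "GBP"),
    )
```

Consumers that depend on CustomerOrder don't care what the CRM calls its columns, which is precisely what makes the CRM replaceable.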

2. Data onboarding and governance

In the race to automate, time is of the essence, and the allure of speed is hard to resist, especially when automation promises a productivity boost. Vendors are quick to show how fast their tools can automate tasks, impressing business leaders with flashy demos and instant results. Once someone of sufficient influence has been sold the 100x business productivity story, data governance suddenly becomes less of a concern.

Understandably, many of us see data governance as a roadblock to innovation, mostly based on poor past experiences of process-heavy enterprise initiatives. Our natural instincts are to work around the problem at every chance rather than face it and solve the underlying issue.

Data onboarding should take hours, not months. Governance should be seamless, on by default, working away in the background, protecting us from ourselves, and enabling efficient and effective business automation that scales. Organisations that sidestep data governance may see short-term productivity benefits but will feel the pain in the long run.
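
What "on by default" might look like in code, as a toy sketch: registering a dataset runs the guardrails unconditionally, and the safest classification is the default. The specific checks are stand-ins for whatever your organisation actually enforces.

```python
# Sketch: governance as the default path, not an opt-in step.
# The specific rules below are illustrative placeholders.
ALLOWED_CLASSIFICATIONS = {"public", "internal", "confidential", "restricted"}
CATALOG: dict[str, dict] = {}

def onboard_dataset(name: str, schema: dict[str, str], owner: str,
                    classification: str = "restricted") -> None:
    """Registration runs the guardrails automatically; there is no
    faster code path that skips them."""
    if not owner:
        raise ValueError(f"{name}: every dataset needs an accountable owner")
    if classification not in ALLOWED_CLASSIFICATIONS:
        raise ValueError(f"{name}: unknown classification {classification!r}")
    if name in CATALOG and CATALOG[name]["schema"] != schema:
        raise ValueError(f"{name}: schema change needs a new version, not an overwrite")
    CATALOG[name] = {"schema": schema, "owner": owner,
                     "classification": classification}

# Onboarding is then one call, and the safest setting is the default:
onboard_dataset("crm.orders", {"cust_id": "string", "amt": "decimal"},
                owner="sales-ops@example.com")
```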

3. Data privacy and exposure

As data privacy and data breach regulations become more stringent, the risks and costs of exposing data escalate. What starts as a well-intentioned effort to empower users can quickly spiral into a privacy and security nightmare.

Organisations need to strike a delicate balance between enabling user convenience and safeguarding sensitive information. Business processes that go directly to the source have little choice but to access information on the vendor's terms. And if all data is treated equally, then all of it must be treated at the highest level of sensitivity, or you put yourself and the business at risk. The greater the sensitivity of the data, the greater the governance burden, and the greater the temptation to sidestep it.
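
That defensive default is easy to express in code: if a field carries no explicit classification, assume the worst and mask it. A toy sketch, with invented field names and labels:

```python
# Sketch: unclassified fields default to the most sensitive treatment,
# rather than the most convenient one. Names and labels are invented.
SENSITIVITY = {"order_total": "internal", "email": "pii"}  # known labels

def mask_record(record: dict) -> dict:
    masked = {}
    for field, value in record.items():
        label = SENSITIVITY.get(field, "pii")  # unknown? assume the worst
        masked[field] = value if label == "internal" else "***"
    return masked

print(mask_record({"order_total": 42.0, "email": "a@b.com", "notes": "call back"}))
# {'order_total': 42.0, 'email': '***', 'notes': '***'}
```

The point is the direction of the default: fields earn their way down to lower sensitivity through classification, rather than drifting there through neglect.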

4. The Illusion of AI Productivity

The potential of AI is immense, and the pace of change is relentless. This puts further pressure on organisations already struggling to manage their data. But there is a bigger issue at play. AI requires relevant data and business context. The more context you feed an AI, the more meaningful work it can do for you. How much is enough? At what point does the output from an AI become meaningless? With AI output being so believable, can you spot that moment?

The problem is that, historically, we've not been great at capturing business context. Business concepts, ideas, and strategies are formed through human-to-human conversations and interactions, and the output of those decisions is then encoded into the source code and configuration of our business systems. The implied context is lost in the process and is impossible to reverse engineer. For simple problems, it may be enough to plug an AI into some line-of-business system, but without the information that led us there, you'll miss out on the real opportunities.

In the race to incorporate AI everywhere, the biggest risk is that people are once again willing to sidestep the difficult problems and favour superficial outcomes rather than revolutionary ones. Let's make it simple: business context is just data, and we are once again back to having a data problem. Maybe it's about time we fixed that?
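
And if business context is just data, it can be captured like data. A sketch of what a governed, machine-readable business definition might look like; the content is entirely invented:

```python
# Sketch: business context captured as data. A record like this can be
# versioned, governed, and handed to an AI alongside the numbers it explains.
active_customer = {
    "term": "active customer",
    "definition": "A customer with at least one paid order in the trailing "
                  "90 days, excluding internal test accounts.",
    "owner": "revenue-ops@example.com",
    "source_fields": ["crm.orders.cust_id", "crm.orders.ord_dt"],
    "decided_in": "2023-04 pricing review",
    "supersedes": "trailing 12 months (pre-2023 definition)",
}
```

A definition like this is exactly the context an AI cannot infer from a source system's tables, and exactly the kind of record most organisations never write down.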

Conclusion

The path to genuine business efficiency requires a smarter approach to capturing, assembling and processing information in and around the business. Those who recognise this challenge are still struggling with people, process, and a lack of technology that truly commoditises business information and delivers compounding benefits. The current hype wave appears to be incentivising vendors to build solutions that superficially empower and amaze while deliberately papering over the cracks.

We need a fresh approach to solving the messy process of onboarding fragmented data, establishing rich and meaningful semantics, and leveraging optimised data storage and processing, so that we can finally reach the level of AI-powered, self-service business productivity that we've been promised.