AI Amplifies What’s In Your Data—Including the Gaps

As I've explored AI and worked with leaders implementing it in their organizations, I've had a nagging worry: AI provides an immense increase in analytical capability, but the quality of that analysis might be limited—unknowingly so—by missing data.

Here are some examples.

1. Missing data because it was never captured

Sometimes, the most important information you need to power analysis may not be accessible to your AI tools at all. For example, front-line staff often have rich in-person conversations with clients about how they’re doing and how they’re using the product, which would be relevant to understanding how to better meet their needs. Unfortunately, that information might not be captured in electronic form because no one is recording casual interactions. 

Similarly, the feedback data an organization collects is often biased toward very negative and very positive perspectives because the people in the middle don’t feel strongly enough to share their thoughts. They’re missing from the data. 

In cases like those, even the most sophisticated analytical tools might tell us the wrong story. The problem, however, is that many of those tools will also make the analysis look polished and analytically reasonable, tricking us into thinking we’re seeing the right picture. 

2. Missing data due to human errors

The finance lead of a nonprofit recently told me that the hardest part of analyzing how much the organization was spending against a philanthropic grant was getting employees to complete their timesheets. Once the data is in the system, it’s easy to analyze. However, the reasons people don’t complete their timesheets have nothing to do with technology. It happens for all the reasons you’d expect—people are busy, timesheets are relatively low priority, and it’s an annoying task.

A different set of nonprofit leaders told me about a similar challenge with reconciling data across different systems. In their case, some of the issues stemmed from errors introduced by manual data entry, but the deeper issue was important information people don’t think to include when entering data. For example, when a person uses their corporate card to make a donation, it might be recorded as an individual donation when the donor is actually a company. In this case, the data isn’t “wrong” so much as it’s not “right enough” for the downstream users to understand it without additional context.

In both cases, the quality of the data available to advanced tools is limited by errors and complications in human processes. 

3. Missing data because it’s not easily captured

Even before generative AI, there were many tools capable of creating dashboards that perfectly track and display data—or at least the data we know how to capture. But those tools may be less effective for showing all aspects of success. 

Sure, customers like the service, but do they love it?

Sure, the operations team closed all service requests within agreed-upon standards, but do the people requesting service believe the operations team works in true partnership?

Sure, the front desk area at the hotel is clean and orderly, but are the vibes right?

In each situation, the former aspect of performance is easy to capture in dashboards, but understanding the latter is harder to measure and often just as important for success. It’s often missing in the data, even though it might be obvious to everyone. 

AI creates an additional problem because the instinct to automate can remove the manual steps that might capture the missing data. If I were filling out a dashboard in PowerPoint, it would be easy to add a qualitative note that captures whether the vibes are right: “The metrics look good, but I’m worried.” Or, “The metrics are all red, but we’re making steady-enough progress to feel good about what’s around the corner.” But relying solely on technology to create the dashboard might mean this critical step gets skipped.


In all three situations, “Let’s use AI for this” would be the wrong instinct. The result would be no better—and potentially worse—than the results we’d get from the tools we’ve used to date. A more effective approach is to think about these processes holistically, recognizing that to get the full benefit from AI, we might have to start by addressing the problems that occur before and after the parts of the process that use AI.
