Our industry has a dirty little secret. Come closer, I’ll whisper it to you.

(Much of the data held in organizational databases, warehouses, lakes and stores is not very good.)

There, I’ve said it. Data quality remains a persistent problem for enterprises, and there are many reasons why. Fields may have been filled out incorrectly, naming inconsistencies may run throughout the data, or calculations that were done and stored may have grown out of date or been incorrect to begin with.

Do you live on Main St. or Main Street? Is your job software engineer, or just developer, or, as has been seen, code ninja? How do you know if they’re the same thing or not? Or, is 555-111-6666 an actual phone number? It looks like one. It has 10 digits. But is it valid? Is mickemouse@noaddress.org an actual email address? It looks like one. But will anything mailed to this address go through or get bounced?
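
The distinction matters because a format check only tells you a value looks right, not that it is right. A minimal Python sketch (the patterns below are simplified illustrations, not production validators) shows both of those examples sailing through:

```python
import re

# Purely syntactic checks, assuming North American phone formats and a
# deliberately simplified email pattern.
PHONE_RE = re.compile(r"^\d{3}-\d{3}-\d{4}$")
EMAIL_RE = re.compile(r"^[\w.+-]+@[\w-]+\.[\w.-]+$")

def looks_like_phone(s: str) -> bool:
    return bool(PHONE_RE.match(s))

def looks_like_email(s: str) -> bool:
    return bool(EMAIL_RE.match(s))

# Both pass the format test, yet neither is necessarily reachable.
print(looks_like_phone("555-111-6666"))              # True
print(looks_like_email("mickemouse@noaddress.org"))  # True
```

Verifying that the number actually rings or the mailbox actually accepts delivery requires checking against outside reference data, which is exactly the gap data quality vendors fill.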

What if your company relies on data for annual financial predictions, but the underlying data is fraught with errors? Or what if your company is in trucking, and can’t maximize the amount of goods a truck can hold because the data regarding box sizes is incorrect?

Data quality is “something everyone struggles with, just keeping it clean at point of entry and enriched and useful for business purposes,” Greg Brown of data quality company Melissa told me at the company’s offices in Rancho Santa Margarita (más tequila, por favor!), California. And, because few companies want to talk about the issue, Brown said, “That prevents other people from really knowing not only how pervasive the problem is, but that there are lots of solutions out there.”

“A position I was in at another company before this, we’d just get return mail all the time and it was just the cost of doing business,” he continued. “We were marketers but we weren’t really direct mailers. We had enough knowledge to get something out, but we didn’t really know all of the parameters on how to process it to prevent undeliverable mail, and it was just really the cost of doing business. We’d get these big cartons of mail back, and we’d just dump them, and nobody would say anything.”

Do people not talk about it because they don’t want their customers to know just how bad their data problem is? “Oh yeah, we have that all the time,” Brown said. “It’s almost impossible for us to get case studies. The first thing they do with us is slap on an NDA, before we even get to look under the kimono, as Hunter Biden would say. Before we see their dirty laundry, they definitely want the NDA in place. We’ll help them out, and they’ll never, ever admit to how bad it was.”

In some organizations, one team is in charge of making the data available to developers and different departments, while another team makes sure it’s replicated across multiple servers. But who’s really in charge of making sure it’s accurate? “That’s where we struggle with talking about the stewards,” Brown said. “Do they really care how accurate it is if they’re not the end consumer of the data?”

Throwing another wrench into all of this are the data matching rules outlined in the European Union’s General Data Protection Regulation, which Brown said seem to contradict well-understood data management practices. “One of the things those guys are saying is typical MDM logic is 180 degrees from what GDPR is recommending,” Brown said. “Traditionally, MDM, if it has a doubt about whether or not two records are identical or duplicates, it’s going to err on the side of, they’re not. The lost opportunity cost associated with merging them was greater than sending a couple of duplicate catalogs before you could ascertain that. GDPR says, if there’s almost any doubt that Julia Verella and Julio Verella are actually the same person, you’ve got to err on the side that they are the same person. So the logic behind the matching algorithms is completely different.”
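
To see how the same similarity score can drive opposite merge decisions under the two postures Brown describes, here is a toy Python sketch; the scoring function and both thresholds are invented for illustration, and real matching engines weigh many more signals:

```python
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Crude name similarity on a 0-to-1 scale."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

score = similarity("Julia Verella", "Julio Verella")  # about 0.92

# Traditional MDM posture: merge only when nearly certain, because a bad
# merge costs more than mailing a couple of duplicate catalogs.
mdm_says_merge = score > 0.95    # False

# GDPR-leaning posture, per Brown: if there is real doubt, err on the
# side that the records are the same person.
gdpr_says_merge = score > 0.80   # True

print(round(score, 2), mdm_says_merge, gdpr_says_merge)
```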

In software development, we hear all the time that organizations should “shift left” practices such as testing and security. So why don’t we as an industry shift data quality left? I get that this is a massive undertaking in and of itself, and organizations are making progress in this regard with anomaly detection and more machine learning. But if, as everyone says, data is the lifeblood of business going forward, then data scientists must be open to talking about solutions. Because if your system breaks when data you’re pulling in from an outside source is incorrect or invalid, you, your partners and your customers all suffer.
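
What shifting data quality left might look like in practice, sketched minimally in Python; the record shape and rules here are hypothetical, standing in for whatever checks your domain demands:

```python
from dataclasses import dataclass

@dataclass
class Contact:
    name: str
    email: str
    phone: str

def validate_at_entry(record: Contact) -> list[str]:
    """Return a list of problems; an empty list means the record may be stored."""
    problems = []
    if not record.name.strip():
        problems.append("name is empty")
    if "@" not in record.email:
        problems.append("email is not syntactically plausible")
    if sum(ch.isdigit() for ch in record.phone) != 10:
        problems.append("phone does not contain 10 digits")
    return problems

# Reject bad records, or route them to a data steward, at write time,
# instead of letting them rot in the warehouse.
issues = validate_at_entry(Contact(name="", email="no-at-sign", phone="1234"))
print(issues)  # ['name is empty', 'email is not syntactically plausible', ...]
```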