In the fast-paced world of machine learning, innovation depends on data. However, the reality for many companies is that the data-access and environmental controls that are vital to security can also add inefficiencies to the model development and testing life cycle.

To overcome this challenge — and help others with it as well — Capital One is open-sourcing a new project called Synthetic Data. “With this tool, data sharing can be done safely and quickly, allowing for faster hypothesis testing and iteration of ideas,” said Taylor Turner, lead machine learning engineer and co-developer of Synthetic Data.

Synthetic Data generates artificial data that can be used in place of “real” data. It often shares the same schema and statistical properties as the original data, but doesn’t include personally identifiable information. It’s most useful in situations where complex, nonlinear datasets are needed, which is often the case in deep learning models.


To use Synthetic Data, the model builder provides the statistical properties of the dataset required for the experiment: for example, the marginal distribution of each input, the correlations between inputs, and an analytical expression that maps inputs to outputs.
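To make that workflow concrete, here is a minimal NumPy sketch of the same idea — specifying marginals, a correlation structure, and an analytical input-to-output map, then sampling from them. The specific distributions, the correlation value, and the output expression are illustrative assumptions, not the Synthetic Data project’s actual API.

```python
import numpy as np

rng = np.random.default_rng(42)

# Assumed experiment spec: correlation of 0.6 between the two inputs.
corr = np.array([[1.0, 0.6],
                 [0.6, 1.0]])

# Draw correlated standard-normal variates via a Cholesky factor of the
# correlation matrix.
L = np.linalg.cholesky(corr)
z = rng.standard_normal((10_000, 2)) @ L.T

# Apply assumed marginals: x1 is lognormal, x2 keeps a standard-normal
# marginal. (Transforming a marginal shifts the correlation in x-space
# slightly; the dependence structure is set in z-space.)
x1 = np.exp(z[:, 0])
x2 = z[:, 1]

# An assumed analytical expression mapping inputs to outputs; nonlinear
# on purpose, since that is the regime the project targets.
y = np.sin(x2) + 0.1 * x1
```

With the inputs, outputs, and the exact generating expression all known, a model trained on `(x1, x2) -> y` can be evaluated against ground truth rather than against another model’s guesses.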

“And then you can experiment to your heart’s content,” said Brian Barr, senior machine learning engineer and researcher at Capital One. “It’s as simple as possible, yet as artistically flexible as needed to do this type of machine learning.”

According to Barr, there were some early efforts in the 1980s around synthetic data that led to capabilities in the popular Python machine learning library scikit-learn. However, as machine learning has evolved, those capabilities are “not as flexible and complete for deep learning where there’s nonlinear relationships between inputs and outputs,” said Barr.
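The scikit-learn capabilities Barr refers to include generators such as `make_regression` and `make_friedman1`. The short sketch below shows both; it illustrates the limitation he describes — the functional forms are fixed by the library rather than specified by the experimenter:

```python
from sklearn.datasets import make_friedman1, make_regression

# Classic scikit-learn generator: a random *linear* regression problem.
X_lin, y_lin = make_regression(n_samples=500, n_features=5,
                               noise=0.1, random_state=0)

# Friedman #1: a fixed nonlinear benchmark. Closer to deep-learning use
# cases, but the input-output relationship cannot be customized.
X_nl, y_nl = make_friedman1(n_samples=500, n_features=5, random_state=0)
```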

The Synthetic Data project was born in Capital One’s machine learning research program, which focuses on exploring and elevating forward-leaning methods, applications and techniques for machine learning to make banking simpler and safer. Synthetic Data was created based on the Capital One research paper, “Towards Ground Truth Explainability on Tabular Data,” co-written by Barr.

The project also works well with Data Profiler, Capital One’s open-source machine learning library for monitoring big data and detecting sensitive information that needs proper protection. Data Profiler can assemble the statistics that represent a dataset, and synthetic data can then be generated from those empirical statistics.
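The profile-then-synthesize loop can be sketched in a few lines of NumPy. The “profile” step below stands in for Data Profiler’s much richer report (this is not its actual API), and the dataset, its mean, and its covariance are all assumed for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "real" dataset: 2,000 records with two correlated columns.
real = rng.multivariate_normal(mean=[10.0, 50.0],
                               cov=[[4.0, 3.0], [3.0, 9.0]],
                               size=2_000)

# Profile step: collect empirical statistics from the real data.
# (Data Profiler would capture schema, types, and many more statistics.)
emp_mean = real.mean(axis=0)
emp_cov = np.cov(real, rowvar=False)

# Synthesis step: draw new records from the profiled statistics — the
# same statistical shape, but none of the original rows.
synthetic = rng.multivariate_normal(emp_mean, emp_cov, size=2_000)
```

The synthetic table can then be shared or used for experimentation without exposing any record from the source data.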

“Sharing our research and creating tools for the open source community are important parts of our mission at Capital One,” said Turner. “We look forward to continuing to explore the synergies between data profiling and synthetic data and sharing those learnings.”

Visit the Data Profiler and Synthetic Data repositories on GitHub.