Eric Madariaga, chief marketing officer at CData Software
“Ultimately, we solve data connectivity challenges through standardization — providing a common layer for accessing different data and systems. We then combine that with tooling to solve specific problems within messaging, data pipeline connectivity, and data management.

While APIs have become the standard for integration, different applications and processes all approach them in different ways. There are different protocols, different ways to authenticate and secure APIs, and different data structures and payloads. You can submit an invoice to a partner in a B2B transaction, and the EDI and protocol request will look very different from the API integration you would make to your ERP or CRM system.
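As a rough sketch of how different those two requests can look, here is the same invoice rendered first as a simplified EDI X12-style fragment and then as a JSON payload for a hypothetical REST endpoint (the segments, field names, and endpoint conventions are illustrative only, not CData's actual wire formats):

```java
public class InvoiceFormats {
    public static void main(String[] args) {
        // Simplified EDI X12 810 (invoice) fragment, the kind of payload
        // exchanged with a B2B partner over a protocol such as AS2.
        // Segments and values are illustrative, not a complete document.
        String edi = "ST*810*0001~BIG*20240101*INV-1001~TDS*15000~SE*4*0001~";

        // The same business fact as JSON, the kind of payload a REST call
        // to an ERP or CRM system would carry (fields are hypothetical).
        String json = "{\"invoiceNumber\":\"INV-1001\",\"date\":\"2024-01-01\",\"total\":150.00}";

        System.out.println(edi);
        System.out.println(json);
    }
}
```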

We focus on standardizing connectivity between these systems and provide a comprehensive suite of tools to support integration. For example, CData Sync is a data pipeline tool focused on bulk data movement for data warehousing initiatives. Instead of providing real-time connectivity to data the way our data virtualization components do, Sync replicates data from hundreds of different systems into a common database or data warehouse like Snowflake, BigQuery, or any standard RDBMS, where you can do analytics or other processing.
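Under the hood, bulk replication of this kind boils down to reading from a source and bulk-writing to a warehouse. A minimal JDBC sketch of the idea (generic JDBC rather than CData Sync's actual implementation; the connection URLs, credentials, and table names are hypothetical, and a real tool also handles incremental state and schema drift):

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.Statement;

public class BulkReplicationSketch {
    public static void main(String[] args) throws Exception {
        try (Connection src = DriverManager.getConnection(
                 "jdbc:postgresql://source-host/crm", "user", "pass");
             Connection dwh = DriverManager.getConnection(
                 "jdbc:snowflake://account.snowflakecomputing.com/", "user", "pass");
             Statement read = src.createStatement();
             ResultSet rows = read.executeQuery("SELECT id, name, amount FROM invoices");
             PreparedStatement write = dwh.prepareStatement(
                 "INSERT INTO invoices (id, name, amount) VALUES (?, ?, ?)")) {
            while (rows.next()) {
                write.setLong(1, rows.getLong("id"));
                write.setString(2, rows.getString("name"));
                write.setBigDecimal(3, rows.getBigDecimal("amount"));
                write.addBatch();       // buffer rows for a bulk write
            }
            write.executeBatch();       // apply the batch to the warehouse
        }
    }
}
```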

Our ArcESB product is another tool that is more message-driven. It's really good at moving data from one application to another and handling the business logic that often differs drastically between two companies.

All our solutions leverage established standards to simplify integration between applications and data. Everything we do centers around the context of making it as easy as possible for organizations to leverage their data assets and get the most value from their data, no matter which systems and tools they may be using.”

Arawan Gajajiva, principal solution architect at Matillion
“One of the key differentiators with Matillion versus other tools is that we subscribe to what's called an ELT architecture. Where that's different is that we push all of the data transformations down to your cloud data warehouse. This is different from many other tools out there that use ETL, where the data transformation is done in the data integration layer.

The benefit of Matillion’s ELT architecture is that as your data volumes grow, your cloud data warehouse is well-suited to those large workloads. What Matillion offers is maximizing your investment in your cloud data warehouse technology, so that as your data volumes grow, your Matillion footprint doesn’t need to grow. We also offer an easy UI to make it all intuitive at the end of the day.
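A minimal sketch of that ELT pattern, assuming a Snowflake warehouse reachable over JDBC (the stage, table names, and credentials are hypothetical, and COPY INTO is Snowflake-specific syntax): the data is loaded raw, and the transformation then runs as SQL inside the warehouse rather than in the integration layer.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class EltPushdownSketch {
    public static void main(String[] args) throws Exception {
        try (Connection dwh = DriverManager.getConnection(
                 "jdbc:snowflake://account.snowflakecomputing.com/", "user", "pass");
             Statement stmt = dwh.createStatement()) {
            // Extract + Load: bulk-load raw rows into a staging table.
            stmt.execute("COPY INTO raw_orders FROM @orders_stage");

            // Transform: pushed down to the warehouse as SQL, so it scales
            // with the warehouse's compute, not the integration layer's.
            stmt.execute("CREATE OR REPLACE TABLE daily_revenue AS "
                + "SELECT order_date, SUM(amount) AS revenue "
                + "FROM raw_orders GROUP BY order_date");
        }
    }
}
```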

Matillion ETL is our flagship product and is deployed into customers’ cloud environments. There are separate versions of Matillion ETL based on which cloud you’re using, whether it’s AWS, Azure, or GCP, and also separate versions depending on which cloud data warehouse you’re using. That means different products for Amazon Redshift or Snowflake, for example.

We also offer Matillion Data Loader, a free SaaS product for loading data into your cloud data warehouse, with native integrations to many popular on-premises and cloud data sources, including Salesforce, Excel, and Google Analytics.

Matillion ETL comes with an extensive list of pre-built data source connectors for on-premises and cloud databases, SaaS applications, documents, NoSQL sources, and more, so you can quickly load data into your cloud data environment. It also includes Create Your Own Connector, which lets you easily build custom connectors to any REST API source system.”
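At its simplest, a custom REST connector fetches pages from an API and stages the rows for loading. A minimal sketch using Java's built-in HTTP client (the endpoint, auth scheme, and pagination style are hypothetical; Matillion's Create Your Own Connector captures these details declaratively rather than in code):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class RestSourceSketch {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        // Walk a paginated endpoint; a real connector would read the auth
        // token, page size, and field mappings from configuration.
        for (int page = 1; page <= 3; page++) {
            HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://api.example.com/v1/invoices?page=" + page))
                .header("Authorization", "Bearer <token>")
                .build();
            HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
            // Each JSON page would normally be flattened into rows and
            // staged for loading; here we just print the raw body.
            System.out.println(response.body());
        }
    }
}
```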

Sameer Parulkar, product marketing director for Red Hat Integration
“With the Red Hat Integration portfolio, we have a set of cloud-native capabilities that enable customers to connect their data, whether through aggregating that data or sharing it in real time.

Red Hat Integration includes our messaging capability based on Apache ActiveMQ, which is used to share data in real time. We also offer data streaming capabilities based on Apache Kafka, which is increasingly being adopted to stream and share data.
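At the application level, real-time sharing over Kafka looks like producers publishing events to topics that any number of consumers can read. A minimal producer sketch (the broker address, topic name, and payload are hypothetical):

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class OrderEventProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer",
            "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer",
            "org.apache.kafka.common.serialization.StringSerializer");
        // Publish one event to the "orders" topic; downstream consumers
        // receive it in real time, independently of one another.
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("orders", "order-123",
                "{\"status\":\"shipped\"}"));
        }
    }
}
```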

The portfolio also contains an Apache Camel-based integration framework for data connectivity, and API management capabilities to share, secure, control, analyze, and monetize APIs.
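Camel expresses an integration as a route from one endpoint to another. A minimal sketch using camel-main (it assumes the camel-file and camel-kafka components are on the classpath; the directory, topic, and broker address are hypothetical):

```java
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.main.Main;

public class InvoiceRelay extends RouteBuilder {
    @Override
    public void configure() {
        // Poll files dropped into an inbox directory and relay each one
        // to a Kafka topic; Camel handles connectivity on both ends.
        from("file:inbox")
            .log("Relaying ${file:name}")
            .to("kafka:invoices?brokers=localhost:9092");
    }

    public static void main(String[] args) throws Exception {
        Main main = new Main();
        main.configure().addRoutesBuilder(new InvoiceRelay());
        main.run(args);
    }
}
```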

We also offer Red Hat Runtimes, a set of products, tools, and components for developing and maintaining cloud-native applications.

All of that is supported with Red Hat Integration.

The key aspect is that all these capabilities are container-native or cloud-native. Red Hat OpenShift provides a good foundation to host and scale data workloads and AI and machine learning workloads, and to deploy them across the hybrid cloud, whether that’s on-premises, a private cloud, or a public cloud.

Our overall focus is to support event-driven architecture and initiatives.

Moving forward, we’re working to expand our traditional messaging and data streaming capabilities with a focus on Kafka in the Kubernetes environment, and to expand our API capabilities to support event-driven use cases. We want to take our change data capture and serverless data sharing capabilities to the next level: we want to make it much easier for customers to adopt these technologies.”