Organizations are looking for ways to gain an advantage with data, now more than ever. One strategy pertains to application architecture, especially in relation to the cloud. Historically, IT professionals have treated the cloud as a fundamentally different discipline from data center deployments, but from an application development and deployment standpoint, the distinction is less clear. Organizations now view the cloud as one vehicle in an overall strategy for delivering globally distributed data and applications. For that reason, organizations should take advantage of multiple cloud vendors and multiple on-premises data centers, all working together. This multi-vendor approach might conflict with the ideal of a single corporate standard, but with the growing opportunities of Big Data, the advantage of global reach outweighs the benefits of a single deployment model.
The key objectives of using the cloud have changed over the years. IT decision-makers once considered lower total cost of ownership a primary benefit, though today they bemoan rapidly rising cloud costs. Likewise, many organizations once questioned the security of private data at a widely accessible third-party site, but that concern has eased. The immediacy of the cloud, namely easy provisioning and elasticity, remains an advantage over deploying physical hardware. However, on-premises deployments continue, due to advantages such as greater control over data location, flexibility for performance tuning, and avoidance of cloud lock-in. Cloud versus on-premises should be a tactical, location-based decision about where data is managed, and it ultimately should not deeply impact application design.
Architectural considerations for applications today should address the expectation of having data distributed across many distinct locations. One objective is to ensure the same application code can run anywhere. Application developers should not focus on differences in hardware characteristics and costs, but rather on organization-wide, data-driven goals. For example, even if a location holds only non-sensitive data today, security requirements will inevitably arrive, so they should be incorporated now, just as they are for locations that already hold sensitive data. Uptime should also be considered, even if service-level agreements are not yet established: capabilities such as failover and immediate recovery need to be built into the application development strategy, as sketched below.
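One minimal way to keep application code identical across sites is to push every location-specific detail into deploy-time configuration and leave security on by default. The sketch below assumes hypothetical environment variables (SITE_NAME, DATA_PATH, REQUIRE_AUTH) rather than any particular platform's conventions:

```python
import os
from dataclasses import dataclass

@dataclass
class SiteConfig:
    """Per-site settings injected at deploy time; the code itself never changes."""
    site_name: str
    data_path: str
    require_auth: bool  # security stays on everywhere, even at non-sensitive sites

def load_config() -> SiteConfig:
    # The same image can run in any cloud or data center; only the environment
    # differs. Auth defaults to enabled so a site holding only non-sensitive
    # data today is still ready for sensitive data tomorrow.
    return SiteConfig(
        site_name=os.environ.get("SITE_NAME", "local"),
        data_path=os.environ.get("DATA_PATH", "/data"),
        require_auth=os.environ.get("REQUIRE_AUTH", "true").lower() != "false",
    )

if __name__ == "__main__":
    cfg = load_config()
    print(f"starting at {cfg.site_name}, auth={'on' if cfg.require_auth else 'off'}")
```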
Microservices and containers help ensure the same application code runs everywhere. A microservices architecture consists of small, task-specific applications that are relatively easy to build and maintain. The limited scope of microservices makes it easier to avoid the dependencies that make applications harder to deploy in new locations. Microservices for tasks such as data ingestion, cleansing, and enrichment are straightforward to implement once work is broken down into small components. The simplicity of microservices makes them easier to deploy across global environments. At the same time, containers improve portability: by packaging microservices into containers, DevOps teams can quickly deploy applications into production with fewer problems caused by environmental differences. For today's globally distributed applications, the combination of microservices and containers not only addresses portability, but also promotes agility, fault tolerance, scale, and speed.
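To make that limited scope concrete, here is a minimal sketch of a cleansing microservice built with only the Python standard library; the cleansing rules are placeholders for whatever the real task requires. Because the service does one job and carries no external dependencies, packaging it in a container and redeploying it to a new location is straightforward.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def cleanse(record: dict) -> dict:
    """One small, well-scoped task: trim whitespace and drop empty fields."""
    return {k: v.strip() if isinstance(v, str) else v
            for k, v in record.items()
            if v not in (None, "")}

class CleanseHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        record = json.loads(self.rfile.read(length))
        body = json.dumps(cleanse(record)).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # The whole service is a single endpoint, so it can be containerized
    # and dropped into any environment without per-site changes.
    HTTPServer(("", 8080), CleanseHandler).serve_forever()
```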
It is easy to become complacent about security and reliability, but we cannot afford to be. These topics tend to become secondary concerns to other objectives like time-to-value and agility. And to make matters worse, many believe that security and reliability can be added later. This approach is asking for trouble. We periodically hear about high-profile breaches and outages, and oftentimes those incidents could have been mitigated with diligence on the application architecture. Security and reliability should always be primary concerns, so if you have not incorporated a strategy for these, you should start.
We can think about application architecture strategies in the context of today's key global processing requirements. One of the top requirements is enabling information sharing. Often this is implemented as a single global namespace, in which data stored on any local infrastructure appears as part of one logical, global view. For example, data such as sensor readings and sales performance figures offer important insights to the entire organization, not only the local site.
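From application code, a global namespace can be as simple as addressing data by a logical path and letting the platform resolve it to wherever the data is mounted locally. The sketch below uses a hypothetical mount table and path scheme; a real global namespace would be provided by the data platform, not by application code.

```python
from pathlib import PurePosixPath

# Hypothetical mapping from the global namespace to per-site mount points.
SITE_MOUNTS = {
    "us-east": "/mnt/us-east",
    "eu-west": "/mnt/eu-west",
}

def resolve(global_path: str, local_site: str) -> str:
    """Translate a logical path like /global/sensors/line1 to wherever that
    data is mounted at the current site. Applications code against the
    global path and never hard-code a location."""
    relative = PurePosixPath(global_path).relative_to("/global")
    return str(PurePosixPath(SITE_MOUNTS[local_site]) / relative)

print(resolve("/global/sensors/line1", "eu-west"))  # /mnt/eu-west/sensors/line1
```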
Another requirement is the use of continuous, coordinated data flows. This entails a streaming architecture in which data is treated as an ongoing flow rather than a point-in-time state. For environments where data is continually changing and growing, a streaming architecture is a strong foundation for sharing geo-distributed data, and omni-directional replication built into the architecture helps share that data in a fast, scalable, and reliable way.
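As a minimal sketch of treating data as an ongoing flow, the producer below publishes sensor readings to a stream as they occur. It assumes a Kafka-compatible broker and the kafka-python client; the broker address and topic name are placeholders, and the cross-site replication itself would be configured in the streaming platform rather than in application code.

```python
import json
import random
import time

from kafka import KafkaProducer  # assumes the kafka-python package and a reachable broker

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",   # placeholder broker address
    value_serializer=lambda v: json.dumps(v).encode(),
)

# Data as a flow, not a state: each reading is published the moment it
# occurs, and topic replication propagates it to other locations.
while True:
    reading = {"sensor": "line1", "value": random.gauss(20.0, 0.5), "ts": time.time()}
    producer.send("sensor-readings", reading)
    time.sleep(1.0)
```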
Location awareness is important for meeting regulatory requirements on data residency, most of which relate to personally identifiable information (PII) and, more broadly, data privacy. Like other security concerns, privacy risks being relegated to secondary status. With the cost of privacy gaps rising, safeguarding private data with a well-defined access policy and strong security tools helps reduce that risk.
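Location awareness can be enforced in application code as well as in policy documents. The sketch below routes records containing PII to a residency-compliant site; the field names, regions, and policy table are hypothetical stand-ins for a real access policy.

```python
PII_FIELDS = {"name", "email", "ssn"}

# Hypothetical residency policy: PII from a region must be stored in
# a site within that region.
RESIDENCY = {"eu": "eu-west", "us": "us-east"}

def storage_site(record: dict) -> str:
    """Pick a storage location that satisfies the residency policy.
    Non-PII records can go to the nearest site; PII must stay home."""
    has_pii = bool(PII_FIELDS & record.keys())
    if has_pii:
        return RESIDENCY[record["region"]]
    return "nearest"

print(storage_site({"region": "eu", "email": "a@example.com"}))  # eu-west
print(storage_site({"region": "eu", "count": 3}))                # nearest
```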
Finally, data consistency is an important characteristic for ensuring accuracy. Consistency is sometimes traded off for data availability. This concession is unfortunate, because it suggests that ongoing access to stale data is better than a temporary outage of accurate data. The topic of consistency versus availability is a complex one and cannot be done justice in this article, but the recommendation is to deliver strong data consistency at the local level to ensure accurate data, and then minimize the window of inaccessibility with failover strategies.
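The sketch below illustrates that recommendation: each site serves strongly consistent reads, and on an outage the client fails over to the next site rather than serving stale local data. The site list and the simulated store are hypothetical.

```python
import random

class SiteUnavailable(Exception):
    pass

def read_from(site: str, key: str) -> str:
    # Stand-in for a strongly consistent read against one site's store;
    # the random failure simulates an outage.
    if random.random() < 0.3:
        raise SiteUnavailable(site)
    return f"{key}@{site}"

def read_with_failover(key: str, sites: list) -> str:
    """Prefer the local site, which serves strongly consistent data; on an
    outage, fail over to the next site instead of serving stale data,
    keeping the window of inaccessibility short."""
    for site in sites:
        try:
            return read_from(site, key)
        except SiteUnavailable:
            continue
    raise SiteUnavailable("all sites")

print(read_with_failover("order-42", ["local", "us-east", "eu-west"]))
```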
Including the cloud as part of your overall data architecture should not be an added burden. The cloud should be a natural complement to your on-premises data centers, enabling global access to your data. Ultimately, we should treat all deployment locations the same so that application development strategies do not grow more complex. A microservices architecture combined with containers is one approach for creating more agile global deployments.