As both public and private clouds evolve, there is a surge of interest in evaluating, designing and building applications on hybrid cloud architectures. These cloud native applications follow the trend of scalable, cost-efficient development and deployment, mostly achieved by leveraging cloud services for run-time platform capabilities such as performance, scalability and security out of the box. For example, a cloud native “Hello World” application might offer a web UI for selecting display properties, with the result displayed on any virtual machine or container hosted on any public or private cloud. The application would be auto-deployed, and the result would be monitored in the cloud for scalability and performance.
- CI/CD and DevOps: Requires collaboration between software developers and IT operations to continuously deliver high-quality software, releasing rapid changes without disrupting existing deployments.
- Microservices and Containers: Microservices architecture aims at developing small services that are mutually independent and stateless, so they can be independently developed, deployed and upgraded. The low overhead of creating and destroying containers, high packing density and effective management have made containers popular for microservices-based designs. Containerized applications are well suited to cloud native development because they are predictable, provide OS abstraction, and can be provisioned at right-sized capacity to scale up or down as needed. They also foster collaboration between development and operations teams, enabling continuous and rapid delivery, and they provide avenues for disaster recovery, an important requirement for cloud native applications.
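To illustrate the stateless property described above, here is a minimal sketch of a containerizable microservice using only Python's standard library; the endpoint and message are illustrative assumptions, not part of any deployment described in this article:

```python
# Minimal sketch of a stateless microservice (illustrative only).
# Because every response is derived solely from the request, any
# container replica can serve any request -- the property that lets
# containers be created and destroyed freely behind a load balancer.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class GreetingHandler(BaseHTTPRequestHandler):
    """Stateless handler: no session state is kept between requests."""

    def do_GET(self):
        body = json.dumps({"message": "Hello, cloud native world",
                           "path": self.path}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # suppress per-request console logging

def run(port=8080):
    # Bind to all interfaces so the service is reachable from
    # outside its container.
    HTTPServer(("0.0.0.0", port), GreetingHandler).serve_forever()

if __name__ == "__main__":
    run()
```

A real service would add health-check and metrics endpoints, but even this sketch can be packaged into a container image and scaled horizontally because no replica holds state.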
Cloud native applications need to follow sound design methods and principles. According to the cloud standards council, the following is a set of principles for cloud native applications:
- Collaborate between development, IT Ops and business teams
- Determine the right application and data model:
- Document the cloud resources to be deployed
- Create diagram describing, what applications, services and data should be deployed in hosts/containers inside the cloud
- Use “steady speed” vs. “fast speed” for delivering services to the cloud
- Linking on-premises capabilities to public cloud services:
- Integrating processes, where one application invokes another
- Integrating data, where applications may share common data
- Integrating presentation, where applications share their results with users via a user interface
- Meeting connectivity requirements in terms of service level agreements (SLAs), cloud security policies and IT management strategies
- Checking functioning of:
- Network links
- Virtualized network connectivity
- Floating IP address plan
- Cloud security
- Availability of DDoS protection services and service continuity
- Cloud management and monitoring
- Outline how operations will take shape when deployed in the cloud: this should cover internal operations, service deployment, circuit breakers for fault handling, risk handling, and compliance with legal and contractual obligations
- Change Management: One of the important aspects of cloud governance is managing changes related to automation, self-service, backup and disaster recovery. This is mainly handled by maintaining service catalogs, access controls and data storage
- Define clear governance for the cloud by:
- Cloud monitoring
- Log monitoring
- Resolving security challenges
- Disaster recovery
- Identifying gaps in measurement and fixing them
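One of the fault-handling mechanisms named in the principles above is the circuit breaker. As a minimal sketch, assuming illustrative thresholds and a generic callable interface rather than any specific library, it could look like this:

```python
# Circuit-breaker sketch (illustrative assumptions: thresholds,
# naming, and the plain-callable interface are not from any
# particular library).
import time

class CircuitBreaker:
    def __init__(self, failure_threshold=3, reset_timeout=30.0):
        self.failure_threshold = failure_threshold
        self.reset_timeout = reset_timeout
        self.failures = 0
        self.opened_at = None  # None means the circuit is closed

    def call(self, func, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_timeout:
                # Fail fast instead of hammering an unhealthy service.
                raise RuntimeError("circuit open: failing fast")
            # Timeout elapsed: half-open, allow one trial call through.
            self.opened_at = None
        try:
            result = func(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.failure_threshold:
                self.opened_at = time.monotonic()
            raise
        self.failures = 0  # success resets the failure count
        return result
```

The design choice here is that repeated failures trip the breaker, and callers then get an immediate error rather than a slow timeout, which prevents one failing microservice from cascading into its dependents.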
The picture below shows an approach to the deployment of a cloud native application, followed for one of Calsoft’s customers. The developed application is initially tested on an on-premises setup and then auto-deployed to a cloud setup.
The workflow depicts the following sequence of operations and principles followed for the development, deployment and monitoring of cloud native applications:
- Developers design services with data access graphs, security and cloud deployment in mind
- Develop deployment scripts with the cloud in mind, so that minor tweaks are enough to adhere to cloud templates and deploy the microservices on the cloud
- Execute every microservice on a separate port so that port conflicts and inter-microservice communication issues are avoided
- Independent stateless microservices are developed and made ready for production
- Production-ready containerized microservices are deployed on an on-premises setup using build integration tools.
- Once a service is verified for production, it is auto-deployed to the production cloud.
- Multiple production-ready containerized services keep getting deployed on the cloud platform.
- A separate container runs the performance and stress tests of the microservices already deployed in the cloud.
- The deployment process is fully automated using CI/CD and DevOps tools.
- To upgrade a particular microservice in the cloud, bundle the upgraded microservice into separate containers and auto-launch them in the cloud, then bring down the existing microservice containers.
- The cloud is pre-configured for multi-tenancy, monitoring and security aspects such as identity management, per-tenant data segregation, data encryption and network path security.
- The cloud is orchestrated using tools like Kubernetes (k8s).
- Cloud monitoring is performed by Prometheus (or cAdvisor), while Grafana provides a single-pane view of the complete cloud deployment along with resource consumption and constraints.
- Critical events are sent to the administrator via notification tools.
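The upgrade step in the workflow above (launch the new containers first, then retire the old ones) can be sketched as follows; the container names and the start/stop/health-check callables are hypothetical stand-ins for a real container runtime or orchestrator API:

```python
# Sketch of the launch-new-then-retire-old upgrade order described
# in the workflow. The start/stop/healthy callables are hypothetical
# stand-ins for a container runtime API; names are illustrative.

def upgrade_service(old_containers, new_containers, start, stop, healthy):
    """Launch the new containers, confirm they are healthy, and only
    then bring the old ones down, so the service never goes dark.
    Returns the list of containers left serving traffic."""
    for c in new_containers:
        start(c)
    if not all(healthy(c) for c in new_containers):
        # Roll back: stop the new containers and keep the old
        # version serving traffic.
        for c in new_containers:
            stop(c)
        return old_containers
    for c in old_containers:
        stop(c)
    return new_containers
```

In a Kubernetes deployment this ordering is what a rolling update performs automatically; the sketch simply makes the sequencing explicit: at no point are both versions down, and a failed health check leaves the old version untouched.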
Public clouds like AWS provide most of these features built in; tools like AWS GovCloud, Gemalto and Vormetric, along with built-in backup and disaster recovery tooling, are self-sufficient for AWS. But customers don’t want to be tied to a single cloud deployment; they look for multi-cloud deployment options with one-time development effort. That’s where the hybrid cloud deployment approach fits in, by simplifying cloud migrations.
By Kiran Divekar. Published October 31, 2017.