Have you ever wondered if it is possible to migrate virtual machines to containers with the click of a button? Pretty cool, right? This article will introduce you to Anthos, a cloud service offering from Google Cloud Platform.
Topics to be covered
- Cloud Computing
- Virtual Machines
- Workload orchestration
Cloud computing has attracted enterprise organisations looking to upgrade their Information Technology (IT) systems as they fast-track digital transformation to drive business value in a modern way. That journey has not been a smooth one, as questions are raised over ease of operations, the skills required, cost and fear of vendor lock-in. Kubernetes (k8s) has managed to answer some of those questions, but it is a complex platform which needs the right skill sets to be used successfully.
Enterprise operations consist of legacy systems which are delicate and need to be handled with care. As these mostly do not follow the Twelve-Factor App requirements for cloud-native applications, a tool that easily migrates them to be k8s-ready is imperative. Before we dig deep into Anthos, let me first explain the fundamental technologies which pioneered such ideas.
Virtual Machines vs Containers
Below we see the evolution of application deployment over the years: traditional vs virtualised vs container deployment.
- Virtual machines (VMs) are an abstraction of physical hardware, turning one server into many. In the data centre, a hypervisor enables multiple VMs to run, each with a full copy of an operating system.
- Containers are an abstraction at the application layer that packages code and dependencies together. They are becoming the new, easy way to package software into standardised units for development, shipment and deployment. Containers and VMs offer similar resource isolation and allocation benefits but function differently. The next challenge the industry had to tackle was whether containers alone are enough to run workloads at scale, once you factor in the following:
- Scaling up and down
- Configurations and secrets management
- Service discovery
- Redundancy for high availability
- Rolling out and back
- Health checks.
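To make these challenges concrete, here is a minimal sketch of a Kubernetes Deployment manifest (the application name, image and Secret name are all hypothetical) showing how several of them — replica scaling, rolling updates, health checks and secrets management — are handled declaratively:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app            # hypothetical application name
spec:
  replicas: 3               # scaling up/down: change this number (or let an autoscaler do it)
  strategy:
    type: RollingUpdate     # roll out new versions gradually, and roll back if needed
    rollingUpdate:
      maxUnavailable: 1
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      containers:
        - name: demo-app
          image: example.com/demo-app:1.0   # hypothetical container image
          livenessProbe:                    # health check: restart the container if this fails
            httpGet:
              path: /healthz
              port: 8080
          envFrom:
            - secretRef:
                name: demo-app-secrets      # secrets management via a Secret object
```

Everything here is declared as desired state; the cluster's control loops continuously work to make reality match it.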
The response to this was container orchestration; some examples are Docker Swarm, Apache Mesos, OpenShift and k8s. The current winner has been k8s, as highlighted in this article here.
Kubernetes is an open-source container orchestration engine for automating the deployment, scaling and management of containerised applications. Kubernetes is Greek for helmsman or pilot, and it was inspired by Google’s internal Borg system, with which Google claims to launch billions of containers a week. Google then decided to donate it as an open-source project to be hosted by the Cloud Native Computing Foundation (CNCF), which seeks to build a sustainable ecosystem for cloud-native software. Because it is 100% open source, it supports multiple clouds and bare-metal environments. Why k8s, you may ask?
- No vendor lock-in: k8s works the same in GCP, AWS, Azure and any other cloud service provider, even on Raspberry Pis
- Autoscaling is applied efficiently as needed, be it vertical or horizontal
- High availability and failover with high fault tolerance
- Rolling upgrades are easy
- Maximum utilisation of resources is a reality.
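As a sketch of the autoscaling point above, horizontal scaling can itself be declared as a HorizontalPodAutoscaler resource (the name and target Deployment here are hypothetical):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: demo-app            # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: demo-app          # hypothetical Deployment to scale
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add pods when average CPU crosses 70%
```

The cluster then adds or removes pods automatically as load changes, which is also how maximum utilisation of resources becomes a reality.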
Now that we have a good understanding of the core concepts behind containers, cloud computing and orchestration, we can discuss the main reason for this blog series.
What is Anthos?
It is a Google Cloud Platform (GCP) hybrid and multi-cloud application solution that seeks to help organisations, such as enterprises, migrate their virtual machine-based legacy applications into container-based workloads with minimal effort. Consistency across all deployed environments is one of its objectives: the same policies declaratively defined in GCP are enforced on-prem and in other clouds like Amazon Web Services (AWS) and Microsoft Azure. This is an aggressive move by GCP to tap into the enterprise cloud market by offering a simple, convenient way to move critical and delicate on-prem workloads into the cloud.
Enterprise IT requirements are usually complex and require special approaches to modernise software. GCP offers Anthos as a managed service, handling all the underlying integration of the open-source tooling behind the technology, such as k8s, Knative, Istio and Tekton.
Right now you are probably asking yourself: why the name Anthos? From my research, anthos is Greek for flower, which grows in the ground but needs rain from the clouds to flourish. Interesting, right? That means workloads on-prem will require some help from the cloud, using Anthos, to be able to modernise and flourish.
The primary computing environment for Anthos is Google Kubernetes Engine (GKE), a managed Kubernetes service that easily handles workload deployment and orchestration at scale. Its technology stack is as follows, from top to bottom:
- Application development - Cloud Code
- Application deployment - CI/CD tools like Cloud Build, Gitlab
- Policy enforcement - Anthos Config Management, Anthos Enterprise Data Protection, Policy Controller
- Service management - Anthos Service Mesh
- Cluster management - GKE, Ingress for Anthos
- Infrastructure management - GKE (see the Anthos technical overview).
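As an illustration of the policy-enforcement layer above, Anthos Config Management syncs declarative configuration from a Git repository to every registered cluster, so the same policies apply everywhere. A sketch of its ConfigManagement resource, assuming a hypothetical repository URL and directory layout:

```yaml
apiVersion: configmanagement.gke.io/v1
kind: ConfigManagement
metadata:
  name: config-management
spec:
  git:
    syncRepo: https://github.com/example-org/anthos-config   # hypothetical config repo
    syncBranch: main
    policyDir: config       # directory in the repo holding the cluster configs
    secretType: none
  policyController:
    enabled: true           # enable the Gatekeeper-based Policy Controller
```

With this in place, Git becomes the single source of truth: changes merged to the repo are rolled out to clusters in GCP, on-prem and in other clouds alike.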
The benefits offered, if implemented as described, are as follows:
- Manage applications anywhere, whether within GCP, on-prem or in other clouds like AWS and Azure
- Deliver software faster by adopting cloud-native CI/CD approaches
- Protect applications and the software supply chain by leveraging a programmatic, outcome-focused approach to managing policies for apps across environments
- Gain greater awareness and control with a unified view of your services’ health and performance
- Monitor, troubleshoot and improve application performance using GCP operations or open-source integrations of your choice, like Prometheus and Grafana.
This was the first of more articles to come in this series on Anthos; today we focused on the introduction, the benefits and a technical overview of the stack used. In the next article, we will delve deeper into infrastructure and cluster management, to explain the individual components that make this service offering compelling for the enterprise IT modernisation efforts of organisations looking to adopt cloud-native technologies.
That's all folks! I hope this was helpful. If so, like and share this article. Lastly, let us connect on Twitter.