Service Management
2024-12-05T15:28:12+00:00
Knowledge Base Article

Serverless computing is a cloud computing execution model in which the cloud provider runs the server and dynamically manages the allocation of machine resources. Serverless computing has grown in popularity because it shifts provisioning, scaling, and maintenance to the provider: the start and stop of the underlying infrastructure are handled apart from the program, so teams can focus on application code rather than on the machines beneath it.

Knative provides several open-source tools that integrate natively with Kubernetes and automate much of the work that was previously performed manually to build containers or to deploy containerized code into a serverless environment. Knative eliminates the tasks of provisioning and managing servers, letting developers focus on their code without having to worry about setting up complex infrastructure. This benefit is extended even further if entire application components are incorporated from a third party through Backend-as-a-Service (BaaS) rather than being written in-house.
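As a minimal sketch (the service name and container image below are placeholders), a Knative Service is declared with a short manifest; Knative then creates the underlying routes and revisions and scales the container automatically, including to zero when idle:

```yaml
# Hypothetical Knative Service manifest.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello
spec:
  template:
    spec:
      containers:
        - image: gcr.io/example-project/hello:latest  # placeholder image
          env:
            - name: TARGET
              value: "World"
```

Applying this with kubectl is all that is needed; there are no Deployments, Services, or autoscalers to manage by hand.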

AWS Fargate is a serverless compute engine for containers that works with both Amazon Elastic Container Service (ECS) and Amazon Elastic Kubernetes Service (EKS). Fargate allocates the right amount of compute, eliminating the need to choose instances and scale cluster capacity. You pay only for the resources required to run your containers, so there is no over-provisioning or paying for additional servers. Fargate runs each task or pod in its own kernel, giving each task or pod its own isolated compute environment. This gives your application workload isolation and improved security by design.
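For illustration (the family name, image, and sizes are placeholders), a Fargate task is described by an ECS task definition that fixes its CPU and memory up front, so there is no instance type to pick:

```json
{
  "family": "web-app",
  "requiresCompatibilities": ["FARGATE"],
  "networkMode": "awsvpc",
  "cpu": "256",
  "memory": "512",
  "containerDefinitions": [
    {
      "name": "web",
      "image": "public.ecr.aws/docker/library/nginx:latest",
      "portMappings": [{ "containerPort": 80 }]
    }
  ]
}
```

Fargate accepts only certain CPU/memory combinations (256 CPU units with 512 MB is one of them), and billing follows the sizes declared here.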

Cloud Run is a fully managed compute platform for deploying and scaling containerized applications. Cloud Run accepts any container image and pairs well with the rest of the container ecosystem. It enables you to run stateless containers that are invocable via web requests or Pub/Sub events. Cloud Run is serverless: it abstracts away all infrastructure management, so you can focus on what matters most, building great applications.
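As a sketch (the service name, image path, and region are placeholders), deploying a container image to Cloud Run is a single command; the platform provisions, scales, and load-balances instances automatically:

```shell
gcloud run deploy my-service \
  --image gcr.io/my-project/my-app:latest \
  --region us-central1 \
  --allow-unauthenticated
```

The command returns a public HTTPS URL for the service; no cluster, VM, or load balancer has to be managed directly.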

Azure Functions is a serverless compute service that enables users to run event-triggered code without having to provision or manage infrastructure. Azure Functions lets developers act on events by connecting to data sources or messaging solutions, making it easy to process and react to them. Developers can use Azure Functions to build HTTP-based API endpoints accessible by a wide range of applications, mobile clients, and IoT devices. Azure Functions scales on demand, so you pay only for the resources you consume.
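In the classic programming model, for example, the event trigger for a function is declared in a function.json binding file; the sketch below (binding names are illustrative) wires an HTTP trigger to a function and returns its result as the HTTP response:

```json
{
  "bindings": [
    {
      "type": "httpTrigger",
      "direction": "in",
      "name": "req",
      "authLevel": "function",
      "methods": ["get", "post"]
    },
    {
      "type": "http",
      "direction": "out",
      "name": "$return"
    }
  ]
}
```

Swapping the trigger binding (for example to a queue or timer trigger) changes what event invokes the code without touching the function body.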

In software architecture, a service mesh is a dedicated infrastructure layer for facilitating service-to-service communications between microservices, often using a sidecar proxy. A service mesh, like the open-source project Istio, is a way to control how different parts of an application share data with one another. Unlike other systems for managing this communication, a service mesh is a dedicated infrastructure layer built right into an app. This visible infrastructure layer can document how well (or not) different parts of an app interact, so it becomes easier to optimize communication and avoid downtime as an app grows.

Istio is an open-source service mesh platform that provides a uniform way to integrate microservices, manage traffic flow across them, enforce policies, and aggregate telemetry data; in other words, it controls how microservices share data with one another. Google, IBM, and Lyft launched Istio in May 2017 to address the compliance and security challenges that arise when integrating application microservices in distributed systems. The initial release was designed to be used in a Kubernetes cluster environment; Istio has since added support for Nomad and Consul clusters.
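As an illustration of Istio's traffic management (the host and subset names are hypothetical), a VirtualService can split traffic between two versions of a service, which is how canary rollouts are commonly expressed:

```yaml
# Hypothetical weighted routing: 90% of requests to v1, 10% to v2.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
    - reviews
  http:
    - route:
        - destination:
            host: reviews
            subset: v1
          weight: 90
        - destination:
            host: reviews
            subset: v2
          weight: 10
```

The v1 and v2 subsets would be defined in a companion DestinationRule; the sidecar proxies enforce the split without any change to application code.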

AWS App Mesh can run with AWS Fargate, Amazon EC2, Amazon ECS, Amazon EKS, and Kubernetes running in an AWS environment. Amazon Web Services (AWS) delivers reliable, scalable, and cost-effective computing resources on which to host your applications.

Anthos is a platform that allows users to run applications not just in Google Cloud but also on-premises and with other providers such as Amazon Web Services (AWS) and Microsoft Azure. Anthos is essentially a hybrid-cloud and workload-management service that runs on Google Kubernetes Engine (GKE); apart from Google Cloud Platform (GCP), you can manage workloads running on third-party clouds such as AWS and Azure.

Azure Service Fabric is Microsoft’s Platform-as-a-Service (PaaS) and is used to build and deploy microservices-based cloud applications. Azure Service Fabric is a distributed systems platform that makes it easy to package, deploy, and manage scalable and reliable microservices and containers. Service Fabric also addresses the significant challenges in developing and managing cloud-native applications.

A container is a standard unit of software that packages up code and all its dependencies so the application runs quickly and reliably from one computing environment to another. Container images become containers at runtime and in the case of Docker containers – images become containers when they run on Docker Engine. Available for both Linux and Windows-based applications, containerized software will always run the same, regardless of the infrastructure.
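A container image is typically described by a Dockerfile; the sketch below (the file names are placeholders) packages a small Python application together with its dependencies so it runs identically on any host with a container runtime:

```dockerfile
# Build a self-contained image: base layer, dependencies, then application code.
FROM python:3.12-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
CMD ["python", "app.py"]
```

Building with `docker build -t my-app .` and running with `docker run my-app` produces the same behavior on a laptop, a CI runner, or a production server.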

Kubernetes (K8s) is an open-source platform for automating deployment, scaling, and management of containerized applications. It groups containers that make up an application into logical units for easy management and discovery. Kubernetes builds upon 15 years of experience of running production workloads at Google, combined with best-of-breed ideas and practices from the community.
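The logical units Kubernetes manages are declared in manifests. A minimal Deployment (names and image are placeholders) asks for three replicas of a container, and Kubernetes keeps that many running, replacing any that fail:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.27   # placeholder image
          ports:
            - containerPort: 80
```

`kubectl apply -f deployment.yaml` submits the desired state; scaling is then a matter of editing `replicas` or running `kubectl scale`.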

Amazon EKS is a managed version of the Kubernetes platform. Amazon also offers another platform, Elastic Container Service (ECS), which is based on the Docker platform; however, the industry is moving toward Kubernetes as the de facto standard for container orchestration, and Amazon embraced it with EKS.

Google Kubernetes Engine (GKE) is the managed version of the Kubernetes platform: Google created Kubernetes, later released it as open source, and now offers it as the managed service known as GKE. GKE clusters are powered by the Kubernetes open-source cluster management system. Kubernetes provides the mechanisms through which you interact with your cluster: you use Kubernetes commands and resources to deploy and manage your applications, perform administration tasks, set policies, and monitor the health of your deployed workloads.
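As a sketch (cluster name and zone are placeholders), a GKE cluster is created with gcloud and then managed with ordinary kubectl commands, exactly as any other Kubernetes cluster would be:

```shell
# Create a cluster, fetch credentials, then use standard Kubernetes tooling.
gcloud container clusters create my-cluster --zone us-central1-a
gcloud container clusters get-credentials my-cluster --zone us-central1-a
kubectl get nodes
kubectl apply -f deployment.yaml   # deploy workloads with ordinary manifests
```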

Azure Kubernetes Service (AKS) is a managed Kubernetes offering that further simplifies container-based application deployment and management. AKS is a robust and cost-effective container orchestration service that helps you deploy and manage containerized applications in seconds, with additional resources assigned automatically and without the headache of managing extra servers.
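For illustration (resource group and cluster names are placeholders), an AKS cluster can be created and connected to with the Azure CLI, after which the standard Kubernetes tooling applies:

```shell
az group create --name myResourceGroup --location eastus
az aks create --resource-group myResourceGroup --name myAKSCluster \
  --node-count 2 --generate-ssh-keys
az aks get-credentials --resource-group myResourceGroup --name myAKSCluster
kubectl get nodes
```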

In computing, a virtual machine is an emulation of a computer system. Virtual machines are based on computer architectures and provide the functionality of a physical computer. A virtual machine (VM) is a software-based computer that exists within another computer’s operating system, often used for the purposes of testing, backing up data, or running SaaS applications. To fully grasp how VMs work, it’s important to first understand how computer software and hardware are typically integrated by an operating system.

A virtual machine is a virtual computer system environment created on a physical hardware system (located off- or on-premises) with its own allocated CPU, memory, network interface, and storage resources. It is a guest system created within a computing environment, called the host system, by a hypervisor.

An Amazon EC2 instance is a virtual server in Amazon’s Elastic Compute Cloud (EC2) for running applications on the Amazon Web Services (AWS) infrastructure. Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides secure, resizable compute capacity in the cloud. It is designed to make web-scale cloud computing easier for developers. Amazon EC2’s simple web service interface allows you to obtain and configure capacity with minimal friction. It provides you with complete control of your computing resources and lets you run on Amazon’s proven computing environment.
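The same web service interface is exposed through the AWS CLI; as a hedged sketch (the AMI ID and key pair name are placeholders), launching an instance looks like:

```shell
aws ec2 run-instances \
  --image-id ami-0123456789abcdef0 \
  --instance-type t3.micro \
  --key-name my-key-pair \
  --count 1
aws ec2 describe-instances --filters "Name=instance-state-name,Values=running"
```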

Google Compute Engine (GCE) is the Infrastructure-as-a-Service (IaaS) component of Google Cloud Platform, where you can build high-performance, fault-tolerant, massively scalable compute nodes to handle your application’s needs. It is built on the global infrastructure that runs Google’s search engine, Gmail, YouTube, and other services, and it enables users to launch virtual machines on demand.
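Launching a VM on demand is a single command; the instance name, zone, and machine type below are placeholders:

```shell
gcloud compute instances create my-vm \
  --zone us-central1-a \
  --machine-type e2-medium \
  --image-family debian-12 \
  --image-project debian-cloud
```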

Azure Virtual Machines provide on-demand computing resources that fall under the Infrastructure-as-a-Service (IaaS) category in Azure. You can create a virtual machine from an image provided by Azure or a partner, or from your own image. Azure Virtual Machines (VMs) are one of several types of on-demand, scalable computing resources that Azure offers. Typically, you choose a VM when you need more control over the computing environment than the other choices offer.
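As a sketch using the Azure CLI (resource group, VM, and user names are placeholders; Ubuntu2204 is one of the built-in image aliases):

```shell
az vm create \
  --resource-group myResourceGroup \
  --name myVM \
  --image Ubuntu2204 \
  --admin-username azureuser \
  --generate-ssh-keys
```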

A hypervisor (or virtual machine monitor, VMM, virtualizer) is computer software, firmware, or hardware that creates and runs virtual machines. A computer on which a hypervisor runs one or more virtual machines is called a host machine, and each virtual machine is called a guest machine. The hypervisor presents the guest operating systems with a virtual operating platform and manages the execution of the guest operating systems. Multiple instances of a variety of operating systems may share the virtualized hardware resources: for example, Linux, Windows, and macOS instances can all run on a single physical x86 machine. This contrasts with operating-system-level virtualization, where all instances (usually called containers) must share a single kernel, though the guest operating systems can differ in userspace, such as different Linux distributions with the same kernel.

VMware uses the VMware ESXi hypervisor to virtualize bare-metal hardware resources, effectively partitioning hardware to consolidate applications and cut costs. VMware software provides a completely virtualized set of hardware to the guest operating system: it virtualizes the hardware for a video adapter, a network adapter, and hard disk adapters, while the host provides pass-through drivers for guest USB, serial, and parallel devices. In this way, VMware virtual machines become highly portable between computers, because every host looks nearly identical to the guest. In practice, a system administrator can pause operations on a virtual machine guest, move or copy that guest to another physical computer, and there resume execution exactly at the point of suspension.

OpenStack is an open-source platform that supports several open-source hypervisors, the most common of which is KVM. The software platform consists of interrelated components that control diverse, multi-vendor hardware pools of processing, storage, and networking resources throughout a data center. Users manage it through a web-based dashboard, through command-line tools, or through RESTful web services.
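With the command-line tools, for example, booting a server is one call; the flavor, image, and network names below are deployment-specific placeholders:

```shell
openstack server create \
  --flavor m1.small \
  --image ubuntu-22.04 \
  --network private \
  my-server
openstack server list
```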

Hyper-V is a Windows virtualization platform, or hypervisor, with specific processor and feature requirements for installation on a bare-metal server. Hyper-V implements isolation of virtual machines in terms of partitions: a partition is a logical unit of isolation, supported by the hypervisor, in which each guest operating system executes.
