Before we dive into the Helm package manager, I'm going to explain some key concepts to deploying any application anywhere. I'll also give you a brief introduction to Kubernetes terminology.

What is Kubernetes?

Kubernetes (K8s) is an open-source system for automating deployment, scaling, and management of containerized applications.

Now you might be asking yourself, "Well, what does that mean?" Kubernetes is essentially a very nice set of APIs for deploying, managing, and scaling applications. The applications are packaged with Docker, and then the logic surrounding the deployment of the application is expressed using Helm templates. The templates themselves are instructions that are then run using the Kubernetes API. There are a ton of Helm packages already created to take care of your application deployment needs! I like to think of Kubernetes + Helm as a one-stop shop for my application DevOps needs.

The entire container ecosystem, including Docker, has a very fun nautical theme. Docker has whales, Kubernetes has pods (of whales) and a logo that looks like the steering wheel of a ship, and Helm is the helm of a ship. Aren't they cute?

Deploying an Application on Kubernetes

First of all, no matter where you deploy an application, there are going to be some things that remain the same anywhere, and I do mean anywhere! Whether you are deploying to your laptop, a remote server, an AWS EC2 instance, a High Performance Computing system, or Kubernetes, the underlying concepts do not change. I think of pretty much everything, tech concepts in particular, as a series of layers. Once you understand what those fundamental layers are, you can get cooking.

Application Layers

Data Layer / Persistent Volume Claims (PVCs)

When you need to persist data, you persist it to a filesystem. This can be local storage or some sort of networked file system (NFS). If you are using a database, the database also eventually persists to a filesystem.

Application Layer

The application layer is what we typically think of in a deployment. It's the part we apt-get install, npm run, or docker run. An application could be an NGINX web server, a Python or Node.js app, or a Spark application, to name a few. Applications are either Kubernetes Deployments or StatefulSets, depending on whether or not they persist data (or have a state). A MySQL database would be an example of a stateful application: it needs to keep track of information about itself. An NGINX server would be a Kubernetes Deployment, because it does not need to keep track of any information about itself - it is stateless.

Services Layer

The services layer is where we expose our application to the outside world. This is generally accomplished by saying "Hey, I have an app running on this port." You might have run these directly, or done something like a proxy pass in NGINX or Apache.

Monitoring Layer / Site Reliability

Site reliability is our ability to confidently say our application is up, running, and will probably stay up and running! The monitoring layer answers the question "How is our app doing?" Ideally it would answer questions like "How much CPU is left on that machine?" and "Are we out of memory yet?" Really, we want an API to essentially do this for us.

Have you ever had an application that worked great until too many people started using it at once? You take care of this by scaling the instances of your application up or down. With web applications, you'll often see the term load balancing, too. This functionality is built into many process managers and HTTP servers, such as PM2 and Gunicorn.
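To make the data layer concrete, here is a minimal sketch of a PersistentVolumeClaim. The name and size are placeholders I chose for illustration, not values from the original post; the `storageClassName` you would use depends entirely on what your cluster offers (local disk, NFS, cloud volumes, and so on).

```yaml
# Hypothetical PVC requesting 1Gi of storage for an application.
# "my-app-data" is a placeholder name.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-app-data
spec:
  accessModes:
    - ReadWriteOnce      # mounted read-write by a single node at a time
  resources:
    requests:
      storage: 1Gi       # how much disk we are asking the cluster for
```

A pod then references this claim in its `volumes` section, and Kubernetes takes care of binding it to actual storage behind the scenes.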
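The stateless NGINX case can be sketched as a Deployment. This is a minimal, hypothetical manifest (names and the image tag are my placeholders); a stateful app like MySQL would instead use `kind: StatefulSet` together with `volumeClaimTemplates` so each replica keeps its own data.

```yaml
# Hypothetical minimal Deployment for a stateless NGINX server.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-web          # placeholder name
spec:
  replicas: 2              # two identical, interchangeable copies
  selector:
    matchLabels:
      app: nginx-web
  template:
    metadata:
      labels:
        app: nginx-web
    spec:
      containers:
        - name: nginx
          image: nginx:1.25   # any recent NGINX image works here
          ports:
            - containerPort: 80
```

Because the pods hold no state, Kubernetes is free to kill, restart, or reschedule any of them without losing anything.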
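The services layer - "I have an app running on this port" - maps to a Kubernetes Service object. A rough sketch, assuming pods labeled `app: my-app` (a placeholder label of my own):

```yaml
# Hypothetical Service exposing pods labeled "app: my-app" on port 80.
apiVersion: v1
kind: Service
metadata:
  name: my-app-svc         # placeholder name
spec:
  selector:
    app: my-app            # routes traffic to pods carrying this label
  ports:
    - port: 80             # port the Service listens on
      targetPort: 80       # port the container actually listens on
  type: ClusterIP          # internal only; LoadBalancer or an Ingress
                           # would expose it outside the cluster
```

This plays the same role as the NGINX or Apache proxy pass mentioned above: a stable address in front of whatever is actually running.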
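On Kubernetes, the "scale instances up or down" idea from the last paragraph can even be automated with a HorizontalPodAutoscaler. This is a hypothetical sketch (the Deployment name and thresholds are mine) using the `autoscaling/v2` API:

```yaml
# Hypothetical autoscaler: scales a Deployment named "my-app"
# between 2 and 10 replicas based on average CPU utilization.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app-hpa         # placeholder name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app           # placeholder; the workload being scaled
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 80   # add pods once CPU passes ~80%
```

The Service in front of the pods then load-balances across however many replicas currently exist, much like PM2 or Gunicorn balance across worker processes.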