Restarting a container in such a state can help to make the application more available despite bugs: while the Pod is running, the kubelet can restart each container to handle certain errors. Pods created by a Deployment are named [DEPLOYMENT-NAME]-[HASH]. You may need to restart a Pod for several reasons. Docker containers can be restarted with a single docker restart command, but there is no equivalent command to restart Pods in Kubernetes, especially if there is no designated YAML file to reapply. So how do you avoid an outage and downtime? Foremost in your mind should be these two questions: do you want all the Pods in your Deployment or ReplicaSet to be replaced, and is any downtime acceptable? A rollout restart kills one Pod at a time, and new Pods are then scaled up to replace them; you will notice below that each Pod runs and is back in business after restarting. If one of your containers experiences an issue, aim to replace it rather than restart it in place. The .spec.strategy field specifies the strategy used to replace old Pods with new ones, and the Pod template needs appropriate labels and a restart policy. Kubernetes marks a Deployment as complete when its rollout has finished and all replicas are updated and available; when the rollout completes, the Deployment controller sets a matching condition on the Deployment. You can also pause a Deployment and later resume the rollout, observing a new ReplicaSet coming up with all the new updates, and watch the status of the rollout until it's done.
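Since Kubernetes 1.15, the closest thing to a docker restart for a Pod is a rollout restart of its controller. A minimal sketch, assuming a Deployment named my-deployment (a hypothetical name) in the current namespace:

```shell
# Trigger a rolling restart: Pods are replaced one at a time,
# so the service stays available throughout.
kubectl rollout restart deployment/my-deployment

# Block until every Pod has been replaced.
kubectl rollout status deployment/my-deployment
```

Both commands require access to a running cluster; substitute your own Deployment name.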
(nginx-deployment-1564180365) and scaled it up to 1 and waited for it to come up. The .spec.strategy.type field can be "Recreate" or "RollingUpdate", and the rollout is governed by the parameters specified in the deployment strategy. If you want to roll out releases to a subset of users or servers using the Deployment, you can run a separate Deployment for each release. With the environment-variable approach, the Pods automatically restart once the change goes through. .spec.progressDeadlineSeconds defaults to 600. For the rolling-update parameters, an absolute number is calculated from the percentage by rounding (up for maxSurge, down for maxUnavailable). By default, a rolling update ensures that at most 125% of the desired number of Pods are up (25% max surge), provided you satisfy your resource quota. This is part of a series of articles about Kubernetes troubleshooting.
down further, followed by scaling up the new ReplicaSet, ensuring that the total number of available Pods stays within bounds at all times. If you have multiple controllers with overlapping selectors, the controllers will fight with each other and won't behave correctly. Kubernetes is an open-source system built for orchestrating, scaling, and deploying containerized apps. A condition of type: Progressing with status: "True" means that your Deployment is making progress. The rollout restart command performs a step-by-step shutdown and restarts each container in your Deployment. If specified, .spec.progressDeadlineSeconds needs to be greater than .spec.minReadySeconds. If Kubernetes isn't able to fix the issue on its own, and you can't find the source of the error, restarting the Pod is the fastest way to get your app working again. When the control plane creates new Pods for a Deployment, their names are derived from the .metadata.name of the Deployment. Kubernetes marks a Deployment as progressing when one of several tasks is performed, such as creating new Pods from .spec.template when the number of Pods is less than the desired number; the Deployment controller then adds a Progressing condition. For example, your Pod may be stuck in an error state. In the future, once automatic rollback is implemented, the Deployment controller will be able to roll back a failed rollout on its own. .spec.strategy.rollingUpdate.maxUnavailable is an optional field that specifies the maximum number of Pods that can be unavailable during the update. On success, the Deployment's status is updated with a successful condition (status: "True" and reason: NewReplicaSetAvailable). Next, open your favorite code editor, and copy/paste the configuration below. During a rollover, the Deployment starts scaling up the updated ReplicaSet and rolls over the ReplicaSet it was scaling up previously. In both approaches, you explicitly restarted the Pods. Kubernetes Pods should usually run until they're replaced by a new deployment. A Deployment enters various states during its lifecycle.
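The strategy fields discussed above can be sketched in a Deployment manifest. This is an illustrative fragment, not a complete application spec; the name nginx-deployment matches the examples used elsewhere in this article:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  strategy:
    type: RollingUpdate         # or "Recreate"
    rollingUpdate:
      maxSurge: 25%             # extra Pods allowed above the desired count
      maxUnavailable: 25%       # Pods that may be down during the update
  minReadySeconds: 5            # a new Pod must stay ready this long to count
  progressDeadlineSeconds: 600  # must be greater than minReadySeconds
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.14.2
```

Apply it with kubectl apply -f to create or update the Deployment.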
There are many ways to restart Pods in Kubernetes with kubectl commands, but for a start, restart Pods by changing the number of replicas in the Deployment. A rollout can stall due to several factors; one way you can detect this condition is to specify a deadline parameter in your Deployment spec. The autoscaler can also increment the Deployment's replica count automatically. Method 1: Rolling Restart. As of update 1.15, Kubernetes lets you do a rolling restart of your Deployment. The controller does not kill old Pods until a sufficient number of new Pods are available. The .spec.selector field defines how the created ReplicaSet finds which Pods to manage. To see the ReplicaSet (rs) created by the Deployment, run kubectl get rs. If the deadline passes without progress, the Deployment reports that its progress has stalled. If you update a Deployment while an existing rollout is in progress, the Deployment creates a new ReplicaSet and begins rolling it out. Run the kubectl scale command below to terminate all the Pods one by one, as you defined 0 replicas (--replicas=0). Once you set a number higher than zero, Kubernetes creates new replicas. The configuration of each Deployment revision is stored in its ReplicaSets; therefore, once an old ReplicaSet is deleted, you lose the ability to roll back to that revision of the Deployment. Sometimes you might get into a situation where you need to restart your Pod, usually when you release a new version of your container image. The Deployment controller will keep reconciling the actual state toward the desired state.
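The replica-count method described above can be sketched like this, again assuming a hypothetical Deployment named my-deployment:

```shell
# Scale the Deployment down to zero, terminating every Pod.
# Note this causes downtime until the Pods are recreated.
kubectl scale deployment/my-deployment --replicas=0

# Scale back up; Kubernetes creates fresh replicas.
kubectl scale deployment/my-deployment --replicas=3

# Confirm the new Pods are running.
kubectl get pods
```

Unlike a rollout restart, this approach takes the application fully offline between the two scale commands.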
A Deployment provides declarative updates for Pods and ReplicaSets. .spec.minReadySeconds defaults to 0 (the Pod is considered available as soon as it is ready). A rollout replaces all the managed Pods, not just the one presenting a fault, so keep that in mind when choosing a method. You trigger one by running the rollout restart command; a Deployment is not paused by default, so the rolling update proceeds immediately. Run kubectl rollout restart deployment httpd-deployment, then kubectl get pods to view the Pods restarting: Kubernetes creates a new Pod before terminating each of the previous ones, as soon as the new Pod reaches Running status. In my opinion, this is the best way to restart your Pods, as your application will not go down. Alternatively, when your Pod is part of a ReplicaSet or Deployment, you can initiate a replacement by simply deleting it: the ReplicaSet will notice the Pod has vanished, as the number of container instances drops below the target replica count, and will create a replacement. Restarting the Pod can help restore operations to normal. .spec.progressDeadlineSeconds is the number of seconds the Deployment controller waits before indicating (in the Deployment status) that the Deployment has stalled. You may experience transient errors with your Deployments, either due to a low timeout that you have set or to passing infrastructure problems. Since the Kubernetes API is declarative, deleting the Pod object contradicts the desired state, so the controller immediately reconciles by creating a new Pod. If you want to restart your Pods without running your CI pipeline or creating a new image, there are several ways to achieve this. You have successfully restarted Kubernetes Pods.
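The single-Pod deletion approach described above relies on the ReplicaSet to reconcile. A sketch; the label selector and Pod name suffix here are hypothetical, so substitute the values shown by your own cluster:

```shell
# List the Pods managed by the Deployment to find the faulty one.
kubectl get pods -l app=nginx

# Delete only that Pod; the ReplicaSet notices the count dropped
# below the target and immediately creates a replacement.
kubectl delete pod nginx-deployment-1564180365-abcde
```

With more than one replica, this restarts a single Pod without taking the service down.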
you're ready to apply those changes, you resume rollouts, and the controller continues the update for the Pods targeted by this Deployment. When issues do occur, you can use the three methods listed above to quickly and safely get your app working without shutting down the service for your customers. To see the labels automatically generated for each Pod, run kubectl get pods --show-labels. When you scale a Deployment mid-rollout, replicas are distributed across the active ReplicaSets in proportion to their size; this is called proportional scaling. (You can change how many old ReplicaSets are retained by modifying the revision history limit.) Notice below that the DATE variable is empty (null). Here are a couple of ways you can restart your Pods: starting from Kubernetes version 1.15, you can perform a rolling restart of your deployments. However, that doesn't always fix the problem. The controller then deletes an old Pod and creates another new one, and it does not create new Pods until a sufficient number of old Pods have been killed. If a rollout fails, the exit status from kubectl rollout is 1 (indicating an error); all actions that apply to a complete Deployment also apply to a failed Deployment. Running get pods should now show only the new Pods; next time you want to update these Pods, you only need to update the Deployment's Pod template again. Suppose you have a deployment named my-dep which consists of two pods (as replicas is set to two); run kubectl get pods to see them. Looking at the Pods created, you may see that a Pod created by the new ReplicaSet is stuck in an image pull loop. During proportional scaling, the larger share of new replicas goes to the ReplicaSet with the most replicas. Once new Pods are ready, the old ReplicaSet can be scaled down. In this tutorial, you will learn multiple ways of rebooting Pods in the Kubernetes cluster step by step.
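Updating the Deployment's Pod template is what triggers a rollout in the first place. As a sketch, bumping the container image (the image tag here is illustrative):

```shell
# Change the container image in the Deployment's Pod template;
# this starts a rolling update to the new version.
kubectl set image deployment/nginx-deployment nginx=nginx:1.16.1

# Inspect the ReplicaSets: a new one scales up while the old scales down.
kubectl get rs
```

Any change to the Pod template, not just the image, has the same effect.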
It is generated by hashing the PodTemplate of the ReplicaSet and using the resulting hash as the label value, which is added to the ReplicaSet selector, the Pod template labels, and the Pods themselves. In this example, the Deployment scaled the new ReplicaSet up to 3 replicas and scaled the old ReplicaSet down to 0 replicas. Finally, run the command below to verify the number of Pods running. So sit back, enjoy, and learn how to keep your Pods running. If you edit the Deployment manifest in a vi/vim-style editor, enter i for insert mode, make your changes, then press ESC and type :wq to save. As a result, there is no direct way to restart a single Pod. When you update a Deployment, or plan to, you can pause rollouts first. Use the following commands: set the number of the Pod's replicas to 0 to stop them; set the number of replicas to a number greater than zero to turn them back on; check the status and new names of the replicas; set an environment variable to trigger a restart; retrieve information about the Pods and ensure they are running. In API version apps/v1, .spec.selector and .metadata.labels do not default to .spec.template.metadata.labels if not set. You previously configured the number of replicas to zero to restart Pods, but doing so causes an outage and downtime in the application. For example, when maxSurge is set to 30%, the new ReplicaSet can be scaled up immediately, provided the total number of old and new Pods does not exceed 130% of the desired Pods.
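The environment-variable method listed above works because changing the Pod template forces a rollout. A sketch using a throwaway DATE variable (the variable name follows the article's example):

```shell
# Setting or updating an environment variable changes the Pod
# template, so Kubernetes performs a rolling replacement.
kubectl set env deployment/nginx-deployment DATE="$(date +%s)"

# The restarted Pods appear with new names and fresh start times.
kubectl get pods
```

The variable itself is inert; it exists only to make the template differ from the previous revision.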
It is generally discouraged to make label selector updates, and it is suggested to plan your selectors up front. You can scale a Deployment up or down and roll it back. The HASH string in a Pod's name is the same as the pod-template-hash label on the ReplicaSet. You can define Deployments to create new ReplicaSets, or to remove existing Deployments and adopt all their resources with new Deployments. The output is similar to this: notice that the Deployment has created all three replicas, and all replicas are up-to-date (they contain the latest Pod template) and available. This is the reason why we created Komodor, a tool that helps dev and ops teams stop wasting their precious time looking for needles in (hay)stacks every time things go wrong. Apply your changes (for example, by running kubectl apply -f deployment.yaml). Changes to the existing value in a selector key result in the same behavior as additions. You can create multiple Deployments, one for each release, following the canary pattern. Restarting Pods when a ConfigMap's values change requires (1) a component to detect the change and (2) a mechanism to restart the Pod. Let's say one of the Pods in your container is reporting an error. When a rollout completes, no old replicas for the Deployment are running. During proportional scaling, any leftover replicas are added to the ReplicaSet with the most replicas. This detail highlights an important point about ReplicaSets: Kubernetes only guarantees that the number of running Pods will eventually match the desired count.
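A common mechanism for forcing such a restart, and effectively what kubectl rollout restart does under the hood, is patching a Pod-template annotation so the template changes without touching the image. A hedged sketch; the annotation key mirrors the one kubectl itself writes:

```shell
# Patching an annotation on the Pod template makes the template
# "new", so the Deployment performs a rolling replacement.
kubectl patch deployment nginx-deployment -p \
  "{\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"kubectl.kubernetes.io/restartedAt\":\"$(date -u +%Y-%m-%dT%H:%M:%SZ)\"}}}}}"
```

A ConfigMap-watcher component could run the same patch whenever it detects a change, covering both halves of the requirement above.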
The above command deletes the entire ReplicaSet of Pods and recreates them, effectively restarting each one. He is the founder of Heron Web, a UK-based digital agency providing bespoke software development services to SMEs. Note that some workloads have no Deployment at all: an Elasticsearch Pod managed by a StatefulSet, for example, cannot be restarted with kubectl scale deployment --replicas=0; you have to scale its own controller instead. After restarting the Pods, you will have time to find and fix the true cause of the problem. Let me explain through an example: ReplicaSets have a replicas field that defines the number of Pods to run. If a container continues to fail, the kubelet will delay the restarts with exponential backoff: a delay of 10 seconds, then 20 seconds, then 40 seconds, and so on, up to 5 minutes. Now, execute the kubectl get command below to verify the Pods running in the cluster; the -o wide flag provides a detailed view of all the Pods. .spec.progressDeadlineSeconds controls how long to wait for your Deployment to progress before the system reports that the Deployment has stalled. In this tutorial, the folder is called ~/nginx-deploy, but you can name it differently as you prefer. Restart Pods by running the appropriate kubectl commands, shown in Table 1. .spec.replicas is an optional field that specifies the number of desired Pods. See the documents on configuring containers and using kubectl to manage resources for background. Below, you'll notice that the old Pods show Terminating status, while the new Pods show Running status after updating the Deployment. Manual Pod deletions can be ideal if you want to restart an individual Pod without downtime, provided you're running more than one replica, whereas scale is an option when the rollout command can't be used and you're not concerned about a brief period of unavailability.
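The kubelet restart behavior above is governed by the Pod's restartPolicy. A minimal illustrative Pod spec, not taken from the article, that exercises the backoff:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: crash-example         # hypothetical name
spec:
  restartPolicy: Always       # Always | OnFailure | Never
  containers:
    - name: app
      image: busybox:1.36
      # Exits immediately, so the kubelet restarts it with
      # exponential backoff: 10s, 20s, 40s, ... capped at 5 minutes.
      command: ["sh", "-c", "exit 1"]
```

Pods in a Deployment always use restartPolicy: Always; the other values are only valid for bare Pods and Jobs.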
However, the following workaround methods can save you time, especially if your app is running and you don't want to shut the service down. You can also configure liveness, readiness, and startup probes for containers so the kubelet handles failures for you. Log in to the primary node and run these commands there. This tutorial houses step-by-step demonstrations. Change the replicas value and apply the updated manifest to your cluster to have Kubernetes reschedule your Pods to match the new replica count. The controller starts killing the 3 nginx:1.14.2 Pods that it had created, and starts creating new ones. A different approach to restarting Kubernetes Pods is to update their environment variables. You must specify an appropriate selector and Pod template labels in a Deployment. Note: modern DevOps teams will have a shortcut to redeploy the Pods as part of their CI/CD pipeline. Kubernetes Pods should operate without intervention, but sometimes you might hit a problem where a container is not working the way it should. Consider setting a readinessProbe to check whether configs are loaded before a Pod is considered ready. There is no such command as kubectl restart pod, but there are a few ways to achieve the same effect using other kubectl commands. You can leave the image name set to the default. When a rollout finishes, the new ReplicaSet is scaled to .spec.replicas and all old ReplicaSets are scaled to 0. A common question is whether there is a way to do a rolling "restart", preferably without changing the deployment YAML; the rollout restart command answers exactly that.
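Where possible, let the kubelet restart unhealthy containers automatically instead of restarting by hand. A hedged sketch of liveness and readiness probes on the article's nginx container (the probe paths and timings are illustrative):

```yaml
containers:
  - name: nginx
    image: nginx:1.14.2
    ports:
      - containerPort: 80
    livenessProbe:            # kubelet restarts the container if this fails
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 10
    readinessProbe:           # Pod only receives traffic once this passes
      httpGet:
        path: /
        port: 80
      periodSeconds: 5
```

A readinessProbe that checks whether configs are loaded, as suggested above, would point at an application-specific health endpoint instead of /.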
Performing a "rollout restart" on an existing Deployment creates new containers for every Pod, which you can then inspect afresh. In this tutorial, you learned different ways of restarting the Kubernetes Pods in the Kubernetes cluster, which can help quickly solve most of your Pod-related issues. A condition with reason: NewReplicaSetAvailable means that the Deployment is complete. In other words, this is a rolling update of a deployment without changing tags. You can specify the CHANGE-CAUSE message for a revision, and to see the details of each revision, run kubectl rollout history. Follow the steps given below to roll back the Deployment from the current version to the previous version, which is version 2. You can also update mid-rollout: for example, update the Deployment to create 5 replicas of nginx:1.16.1 when only 3 replicas of nginx:1.14.2 have been created. Remember to keep your Kubernetes cluster up-to-date. Great! As with all other Kubernetes configs, a Deployment needs .apiVersion, .kind, and .metadata fields. .spec.progressDeadlineSeconds is an optional field that specifies the number of seconds you want to allow for rolling out a new ReplicaSet; the rollout can complete, or it can fail to progress. The controller kills one Pod at a time and relies on the ReplicaSet to scale up new Pods until all the Pods are newer than the restarted time. You just have to replace deployment_name with yours. Open your terminal and run the commands below to create a folder in your home directory, and change the working directory to that folder.
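The rollback steps described above can be sketched with rollout history and undo (the revision numbers depend on your cluster's own history):

```shell
# List recorded revisions of the Deployment.
kubectl rollout history deployment/nginx-deployment

# Inspect a specific revision's Pod template.
kubectl rollout history deployment/nginx-deployment --revision=2

# Roll back to the previous revision, or to an explicit one.
kubectl rollout undo deployment/nginx-deployment
kubectl rollout undo deployment/nginx-deployment --to-revision=2
```

Remember that revisions only survive as long as their ReplicaSets are retained by the revision history limit.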
James Walker is a contributor to How-To Geek DevOps. Instead of deleting Pods by hand, allow the Kubernetes controllers to replace them for you. There's also kubectl rollout status deployment/my-deployment, which shows the current progress. The Deployment's name will become the basis for the ReplicaSets and Pods it creates. Method 1: kubectl rollout restart. The rolling-update parameters bound how many Pods above or below the desired count are allowed; the percentages mentioned earlier are the defaults if not specified. To learn when a Pod is considered ready, see Container Probes. A rollout can also stall because of insufficient quota. The controller then continued scaling the new and the old ReplicaSet up and down, with the same rolling update strategy. If a Deployment is paused (or the autoscaler adjusts it mid-rollout), the Deployment controller balances the additional replicas across the existing active ReplicaSets. Run kubectl rollout restart deployment <deployment_name> -n <namespace>. Run the kubectl apply command below to pick the nginx.yaml file and create the deployment. In the scale-to-zero strategy, you scale the number of Deployment replicas to zero, which stops all the Pods and then terminates them. Depending on the restart policy, Kubernetes itself tries to restart and fix the container. In the final approach, once you update the Pod's environment variable, the Pods automatically restart by themselves. During a rollover, the controller does not wait for the 5 replicas of nginx:1.14.2 to be created before starting the new ReplicaSet. The scale command instructs the controller to kill the Pods one by one. The Deployment name also feeds into Pod hostnames, so an invalid value can produce unexpected results for the Pod hostnames. Here you see that when you first created the Deployment, it created a ReplicaSet (nginx-deployment-2035384211).
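Whichever restart method you choose, verify it the same way: watch the rollout and the Pods as they cycle (the name and namespace below are placeholders):

```shell
# Block until the rollout finishes; the exit status is 0 on success
# and 1 if the rollout failed or stalled.
kubectl rollout status deployment/<deployment_name> -n <namespace>

# Watch Pods transition from Terminating to Running in real time.
kubectl get pods -n <namespace> --watch
```

The status command is also a convenient gate in CI/CD pipelines, since its exit code reflects the rollout outcome.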
The above command can restart a single Pod at a time. Another method is manually editing the manifest of the resource. The quickest way to get the Pods running again is simply to restart them. The Deployment name must be a valid DNS label. Once the new replicas become healthy, the old ones are removed. Do not overlap labels or selectors with other controllers (including other Deployments and StatefulSets). On success, the exit status from kubectl rollout is 0. Your Deployment may get stuck trying to deploy its newest ReplicaSet without ever completing; in that case, the Deployment controller adds a condition with reason: ProgressDeadlineExceeded to the status of the resource. In such cases, you need to explicitly restart the Kubernetes Pods. The Progressing condition holds even when the availability of replicas changes (which instead affects the Available condition). Depending on the restart policy, Kubernetes might try to automatically restart the Pod to get it working again.
Last modified February 18, 2023 at 7:06 PM PST.
A quick recap of the kubectl commands used throughout this guide:

kubectl apply -f https://k8s.io/examples/controllers/nginx-deployment.yaml
kubectl rollout status deployment/nginx-deployment
    (once complete: NAME READY UP-TO-DATE AVAILABLE AGE / nginx-deployment 3/3 3 3 36s)
kubectl rollout undo deployment/nginx-deployment [--to-revision=N]
kubectl describe deployment nginx-deployment
kubectl scale deployment/nginx-deployment --replicas=N
kubectl autoscale deployment/nginx-deployment --min=N
kubectl rollout pause deployment/nginx-deployment
kubectl rollout resume deployment/nginx-deployment
kubectl patch deployment/nginx-deployment -p '{"spec":{"progressDeadlineSeconds":600}}'

These commands cover the topics discussed above: creating a Deployment to roll out a ReplicaSet, rolling back to an earlier Deployment revision, scaling up the Deployment to facilitate more load, rollover (multiple updates in-flight), and pausing and resuming a rollout of a Deployment.