TIL: force-update k8s deployments

pew committed Jan 2, 2024 · 1 parent 759a7e4 · commit 8a0d2f9

Showing 1 changed file with 50 additions and 34 deletions: man/kubernetes.md

---
date created: Wednesday, January 23rd 2019, 7:59:04 pm
date modified: Tuesday, January 2nd 2024, 9:53:05 am
tags:
  - k8s
  - kubernetes
title: kubernetes / k8s
---

# kubernetes / k8s

this is going to be a big one. see also [kubectl](/man/kubectl/)

## default hostname

this is for my future self, if you want to access services between deployments/pods:

```shell
<service-name>.<namespace>.svc.cluster.local
```
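
for example, from inside any pod (service name, namespace and port here are made-up placeholders):

```shell
# resolves through cluster DNS; from within the same namespace, plain <service-name> works too
curl http://my-service.my-namespace.svc.cluster.local:8080/
```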

## get logs from all pods part of deployment

…use labels:

```shell
kubectl logs -f -l app=name
```
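
a couple of flags that combine well with the label selector, assuming a reasonably recent kubectl (`--all-containers` and `--prefix` are assumptions about your version):

```shell
# stream logs from all containers of every matching pod, prefixed with the pod name
kubectl logs -f -l app=name --all-containers=true --prefix=true
```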

## rancher, canal, flannel listen on network interface

I use wireguard to tunnel traffic between kubernetes nodes. When setting up a new *rancher* cluster, you can put in all your information in a GUI and set canal as the default network driver. afterwards, **edit the YAML** part and look for `network`. in there you can change the network interface for the driver:
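
a minimal sketch of what that `network` block can look like, assuming an RKE-style cluster YAML and a wireguard interface named `wg0` (both are assumptions on my part):

```yaml
network:
  plugin: canal
  options:
    # assumption: wg0 is the wireguard interface the nodes talk over
    canal_iface: wg0
```
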
the `iface` part is important

## re-deploy kubernetes pods / deployment

say you have a deployment running and set it to *Always* pull an image, but to trigger this you need to redeploy it. let's also say you're lazy and don't have a fully featured ci/cd pipeline for your raspberry pi kubernetes cluster at home. so just do this to re-deploy your stuff and force a re-download of the image:

```shell
kubectl rollout restart deploy <deployment-name>
```
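
to watch the restart finish, the standard rollout status command works:

```shell
kubectl rollout status deploy <deployment-name>
```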

## resize aws-ebs

* [some sources, most of it was mine](https://kubernetes.io/blog/2018/07/12/resizing-persistent-volumes-using-kubernetes/) | [even more here](https://akomljen.com/easy-way-to-resize-kubernetes-persistent-volumes/)
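
the detailed steps are in the links above; a minimal sketch of the core commands, assuming a StorageClass named `gp2` and a PVC named `gitlab-redis` (names and size are placeholders):

```shell
# the StorageClass has to allow expansion first
kubectl patch storageclass gp2 -p '{"allowVolumeExpansion": true}'

# then bump the requested size on the claim
kubectl patch pvc gitlab-redis -p '{"spec": {"resources": {"requests": {"storage": "20Gi"}}}}'
```
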
afterwards you can delete the pod to re-create it (if you've got a replicaset holding it):

```
kubectl delete pod gitlab-redis-7b9d4587f8-v8jgw
```

## run one-off cronjob immediately

```
kubectl create job --from=cronjob/your-configured-cron your-cron-manual
```
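
to check on the manual job afterwards (job name taken from the command above):

```shell
# follow the one-off job's logs
kubectl logs -f job/your-cron-manual

# clean it up when you're done
kubectl delete job your-cron-manual
```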

## stop kubernetes deployment

set the replicas to 0, it'll delete the pods and the deployment will stay intact.

```
kubectl scale --replicas=0 deployment <deployment-name>
```
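
bring it back later by scaling up again (the replica count is whatever you ran before):

```shell
kubectl scale --replicas=1 deployment <deployment-name>
```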

## suspend all cronjobs

to disable / suspend all cronjobs, do this:

```
kubectl get cronjobs | grep False | cut -d' ' -f 1 | xargs kubectl patch cronjobs -p '{"spec" : {"suspend" : true }}'
```

re-enable like this:

```
kubectl get cronjobs | grep True | cut -d' ' -f 1 | xargs kubectl patch cronjobs -p '{"spec" : {"suspend" : false }}'
```

[source](https://stackoverflow.com/a/55090194/10272994)
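
the same patch works for a single cronjob if you don't want to hit all of them at once:

```shell
kubectl patch cronjobs <cronjob-name> -p '{"spec" : {"suspend" : true }}'
```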

## force-update images

if you're using images with the `:latest` tag or another tag which changes remotely but not by tag name/version, you can configure the *imagePullPolicy* to always pull the image when the pod is being deleted, or the [deployment restarted](#re-deploy%20kubernetes%20pods%20/%20deployment).

Here's an example deployment:

```yaml
kind: Deployment
spec:
  template:
    spec:
      containers:
        - image: example:latest
          imagePullPolicy: Always
```
