Kubectl
Best alias ever
alias k='kubectl'
List everything in a particular namespace
kubectl get all -n $NAMESPACE_NAME
Force delete a namespace
If a namespace is stuck in the Terminating state, run the first loop below to clear its spec.finalizers; if it still hangs, run the second loop, which clears metadata.finalizers instead:
for ns in $(kubectl get ns --field-selector status.phase=Terminating -o jsonpath='{.items[*].metadata.name}')
do
kubectl get ns $ns -ojson | jq '.spec.finalizers = []' | kubectl replace --raw "/api/v1/namespaces/$ns/finalize" -f -
done
for ns in $(kubectl get ns --field-selector status.phase=Terminating -o jsonpath='{.items[*].metadata.name}')
do
kubectl get ns $ns -ojson | jq '.metadata.finalizers = []' | kubectl replace --raw "/api/v1/namespaces/$ns/finalize" -f -
done
Resources:
- https://stackoverflow.com/questions/62240272/deleting-namespace-was-stuck-at-terminating-state
- https://stackoverflow.com/questions/65667846/namespace-stuck-as-terminating
List all pods
kubectl get pods
List all pods sorted by node name
kubectl get pods -o wide --sort-by="{.spec.nodeName}"
List all containers in all pods
kubectl get pods -o='custom-columns=NAMESPACE:.metadata.namespace,NAME:.metadata.name,CONTAINERS:.spec.containers[*].name'
List all containers in a pod
kubectl get pods $POD_NAME -o='custom-columns=NAMESPACE:.metadata.namespace,NAME:.metadata.name,CONTAINERS:.spec.containers[*].name'
Resource: https://serverfault.com/questions/873490/how-to-list-all-containers-in-kubernetes
List pods in a namespace
kubectl get pods -n <namespace>
Get all pods running in all namespaces
kubectl get pods --all-namespaces
Get all container images
kubectl get pods --all-namespaces -o=jsonpath="{..image}"
Get all container images filtered by pod label
kubectl get pods --all-namespaces -o=jsonpath="{..image}" -l app=<name>
For example, for pods with the label app=nginx:
kubectl get pods --all-namespaces -o=jsonpath="{..image}" -l app=nginx
Get Pod IP Address
kubectl get pods -l app=<app name> -o yaml | grep podIP
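If you'd rather not grep through YAML, a jsonpath equivalent (same app label assumed) is:
kubectl get pods -l app=<app name> -o jsonpath='{.items[*].status.podIP}'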
Delete a pod
kubectl delete pod $POD_NAME
Resource: https://www.fairwinds.com/blog/how-to-create-view-and-destroy-a-pod-in-kubernetes
Force delete a pod
kubectl delete pod $POD_NAME --grace-period=0 \
--force --namespace cattle-system
Resource: https://stackoverflow.com/questions/62240272/deleting-namespace-was-stuck-at-terminating-state
Delete all pods with an error state
kubectl get pods | grep Error | awk '{print $1}' | xargs kubectl delete pod
Resource: https://gist.github.com/zparnold/0e72d7d3563da2704b900e3b953a8229
List all nodes
kubectl get nodes
Get more information about a node
This includes details such as the pods running on the node.
kubectl describe nodes $NODE_NAME
Get all services in every namespace
k get svc --all-namespaces -o wide
Get load balancers and associated IPs
kubectl get svc --all-namespaces -o jsonpath='{range .items[?(@.spec.type=="LoadBalancer")]}{.metadata.name}:{.status.loadBalancer.ingress[0].ip}{"\n"}{end}'
Resource: https://dev.to/peterj/expose-a-kubernetes-service-on-your-own-custom-domain-52dd
Edit existing service
SVC=rancher
NS=cattle-system
kubectl edit svc $SVC -n $NS
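If all you want to change is the service type, a patch works too; this sketch assumes you're switching it to LoadBalancer:
kubectl patch svc $SVC -n $NS -p '{"spec":{"type":"LoadBalancer"}}'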
Resource: https://www.ibm.com/docs/en/netcoolomnibus/8?topic=private-changing-service-type-during-helm-upgrade
Get information about all deployments
kubectl describe deployments -A
Get information about a deployment
kubectl describe deployment nginx-deployment
Get shell to pod in deployment
DEP_NAME=postgres
kubectl exec -it "deployment/${DEP_NAME}" -- bash
Get shell to first container in a pod
kubectl exec -it $pod_name -- bash
Get shell to specific container in a pod
kubectl exec -it $pod_name --container $container_name -- sh
Secrets
Get a list of secrets
kubectl get secrets
Describe a secret
kubectl describe secret $SECRET_NAME
View a secret
kubectl get secret $SECRET_NAME -o json | jq .
You can also output it as YAML:
kubectl get secret $SECRET_NAME -o yaml
If a secret is a JSON blob with multiple key/value pairs (like a kubernetes.io/service-account-token type, for example), you can get the value associated with one of the keys like so:
kubectl get secret $SECRET_NAME -o jsonpath='{.data.keyname}' | base64 -d
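For example, for a kubernetes.io/service-account-token secret, the data keys include token:
kubectl get secret $SECRET_NAME -o jsonpath='{.data.token}' | base64 -d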
Resource: https://kubernetes.io/docs/concepts/configuration/secret/
List all services
kubectl get services
Get names of services
kubectl get services --sort-by=.metadata.name
Resource: https://kubernetes.io/docs/reference/kubectl/cheatsheet/
Get more information about a pod
kubectl describe pod $POD_NAME
Alternatively, you can use:
kubectl describe pods/$POD_NAME
Resources:
- https://stackoverflow.com/questions/34848422/how-to-debug-imagepullbackoff
- https://github.com/cloudnativelabs/kube-router/issues/711
Delete an application
Find the deployment and service beforehand:
kubectl delete deployment.apps/<name> service/<name>
For example, for a service and deployment called 'app':
kubectl delete deployment.apps/app service/app
Alternatively, you could just run this in the directory with all of the files:
kubectl delete -k ./
Resource: https://kubernetes.io/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/
Get external IP addresses of all nodes
kubectl get nodes -o jsonpath='{.items[*].status.addresses[?(@.type=="ExternalIP")].address}'
Resource: https://kubernetes.io/docs/reference/kubectl/cheatsheet/
List all namespaces
kubectl get namespace
Create new namespace
NAMESPACE_NAME=blablabla
kubectl create namespace $NAMESPACE_NAME
Resource: https://jhooq.com/helm-chart-wordpress-installation/
Delete namespace
NAMESPACE_NAME=blablabla
kubectl delete namespace $NAMESPACE_NAME
Show Persistent Volume Claims
kubectl get pvc
Delete Persistent Volume Claim
PVC_NAME=somepvc
kubectl delete pvc $PVC_NAME
Troubleshooting
Use this to get root access to a k8s node by running a privileged pod that has the necessary tools and the ability to interact with the host system directly. This essentially acts as a debug container.
Here’s an example of a debug pod:
---
apiVersion: v1
kind: Pod
metadata:
  name: debug-pod
spec:
  containers:
  - name: debug-container
    image: busybox
    command: ["/bin/sh", "-c", "--"]
    args: ["while true; do sleep 30; done;"]
    securityContext:
      privileged: true
  hostNetwork: true
  hostPID: true
In this pod, you have root access. This is a very powerful but potentially dangerous tool. It should only be used for debugging and should be deleted as soon as you are done with your investigation.
Once the pod is running, you can exec into it with:
kubectl exec -it debug-pod -- /bin/sh
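When your investigation is finished, clean up the pod:
kubectl delete pod debug-pod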
Get all of the most recent events in all namespaces:
kubectl get events --all-namespaces --sort-by='.metadata.creationTimestamp'
Log all of the events in all namespaces to a log file:
kubectl get events --all-namespaces -w > kevents.logs
Resources:
- https://serverfault.com/questions/728727/kubernetes-stuck-on-containercreating
- https://stackoverflow.com/questions/36377784/pod-in-kubernetes-always-in-pending-state
Get detailed information dump of overall cluster health:
kubectl cluster-info dump
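The dump is huge; if you'd rather write it to files instead of stdout, you can point it at a directory (the path here is just an example):
kubectl cluster-info dump --output-directory=/tmp/cluster-state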
Resources: https://kubernetes.io/docs/tasks/debug/debug-cluster/_print/
Copy file from pod to system
kubectl cp $POD_NAME:/run/secrets/kubernetes.io/serviceaccount .
Copy file from system to pod
kubectl cp file $POD_NAME:/tmp
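Both directions also accept a namespace and a specific container if the pod has more than one; $NS, $CONTAINER_NAME, and the file path here are placeholders:
kubectl cp -n $NS -c $CONTAINER_NAME $POD_NAME:/tmp/file ./file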
Get all clusters
kubectl config view -o jsonpath='{"Cluster name\tServer\n"}{range .clusters[*]}{.name}{"\t"}{.cluster.server}{"\n"}{end}'
Resource: https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-api/
List all service accounts
kubectl get serviceaccounts
List all clusterroles
kubectl get clusterrole
Get yaml file from running pod
kubectl get po $POD_NAME -o yaml | tee file.yaml
Resource: https://stackoverflow.com/questions/43941772/get-yaml-for-deployed-kubernetes-services
Set namespace
This example sets the namespace to kube-system:
kubectl config set-context --current --namespace=kube-system
Resource: https://stackoverflow.com/questions/55373686/how-to-switch-namespace-in-kubernetes
Get Kubernetes Master
kubectl cluster-info
Get pod logs
POD_NAME=yourpod
kubectl logs $POD_NAME
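If the pod has multiple containers, or it crashed and restarted, these variants help ($CONTAINER_NAME is a placeholder):
kubectl logs $POD_NAME -c $CONTAINER_NAME
kubectl logs $POD_NAME --previous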
Tail pod logs
k logs -f $POD_NAME
Resource: https://www.dnsstuff.com/how-to-tail-kubernetes-and-kubectl-logs
Check for insecure kubelet API access
From a pod
curl -k https://localhost:10250/pods
Remotely
curl -k https://<target system running the kubelet>:10250/pods
Resource: https://sysdig.com/blog/kubernetes-security-kubelet-etcd/
Kubernetes config file location
env | grep KUBECONFIG
View config
kubectl config view
Use config file
# run before setting the env var
kubectl config view
export KUBECONFIG=/path/to/config/file
# run view config again to see the changes
kubectl config view
You can also run it like this if you don’t want to export the environment variable for whatever weird reason you have:
KUBECONFIG=/path/to/config/file kubectl config view
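You can also point KUBECONFIG at several files at once and flatten them into a single merged config; the paths below are just examples:
KUBECONFIG=~/.kube/config:/path/to/other/config kubectl config view --flatten > /tmp/merged-config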
Resource: https://ahmet.im/blog/mastering-kubeconfig/
Access K8s API from inside a Pod
Setup
Set these variables to start:
K8S=https://$KUBERNETES_SERVICE_HOST:$KUBERNETES_SERVICE_PORT
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
CACERT=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt
NAMESPACE=$(cat /var/run/secrets/kubernetes.io/serviceaccount/namespace)
Health Check
curl -s -H "Authorization: Bearer $TOKEN" --cacert $CACERT $K8S/healthz
Show pods
curl -s -H "Authorization: Bearer $TOKEN" \
--cacert $CACERT $K8S/api/v1/namespaces/$NAMESPACE/pods/
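The same pattern works for other API paths the service account is allowed to read, for example namespaces:
curl -s -H "Authorization: Bearer $TOKEN" \
--cacert $CACERT $K8S/api/v1/namespaces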
Get network policies
kubectl get networkpolicy
Get information about a particular policy
kubectl describe networkpolicy $NETWORK_POLICY_NAME
Resource: https://www.stackrox.com/post/2020/02/azure-kubernetes-aks-security-best-practices-part-2-of-4/
Default network access policies
If no network policies are defined, Kubernetes defaults to allowing all traffic, so you can talk to networked assets from within a container. To test this, exec into a pod:
kubectl exec -it <pod name> -- sh
Once inside, you can try things like querying the AWS metadata service (or another networked resource):
wget -O - -q http://169.254.169.254/latest/meta-data/identity-credentials/ec2/security-credentials/ec2-instance/
If this works and you’re doing an incident response, it’s time to check your k8s service logs and network logs.
Resolution
Specify network access configurations that minimize ingress and egress access for each pod.
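For example, a minimal default-deny policy for a namespace looks roughly like this; you would then layer allow rules on top for the traffic each pod actually needs:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress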
Port forwarding
Service to localhost
This will forward port 8443 on the service to localhost:1234:
kubectl port-forward service/<service name> 1234:8443 -n <namespace>
Pod to localhost
This will forward $pod-port on the pod to the same port on localhost:
kubectl port-forward $POD_NAME $pod-port
Pod to the network
This will expose the service running on $pod-port in $POD_NAME on $localhost-port of the system running the kubectl command, making it reachable by other systems on the network:
kubectl port-forward --address <hostname or IP of system> $POD_NAME $localhost-port:$pod-port
Resource: https://stackoverflow.com/questions/51468491/how-kubectl-port-forward-works
Set cluster
kubectl config use-context <cluster>
Get list of everything a service account can do
kubectl auth can-i --list # -n namespace
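You can also ask about a single verb and resource, optionally impersonating a service account (the names below are placeholders):
kubectl auth can-i create pods --as=system:serviceaccount:$NS:$SA_NAME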
Resource: https://lobuhisec.medium.com/kubernetes-pentest-recon-checklist-tools-and-resources-30d8e4b69463
Use local image
Add this line to your pod yaml file:
imagePullPolicy: Never
Resource: https://stackoverflow.com/questions/55392014/kubectl-get-pods-shows-errimagepull
Run docker in docker
This is a very bad thing to do from a security standpoint, but when you need it, this is how you do it:
apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  containers:
  - image: dev
    imagePullPolicy: Never
    name: dev
    volumeMounts:
    - name: docker-sock-volume
      mountPath: "/var/run/docker.sock"
  volumes:
  - name: docker-sock-volume
    hostPath:
      # location on host
      path: /var/run/docker.sock
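Assuming the dev image has the docker CLI in it, you can then talk to the host's Docker daemon from inside the pod:
kubectl exec -it test-pd -- docker ps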
Resources:
- https://stackoverflow.com/questions/56462126/how-to-add-v-var-run-docker-sock-var-run-docker-sock-when-running-container
- https://devops.stackexchange.com/questions/2506/docker-in-kubernetes-deployment
Security Tools and Techniques
Gain access to node’s root filesystem
Create evil.yaml:
apiVersion: v1
kind: Pod
metadata:
  name: evil
  labels:
    app: ubuntu
spec:
  containers:
  - name: evil
    image: ubuntu:latest
    command: ["/bin/sleep", "3650d"]
    imagePullPolicy: IfNotPresent
    volumeMounts:
    - name: root-fs
      mountPath: /mnt
  restartPolicy: Always
  volumes:
  - name: root-fs
    hostPath:
      path: /
      type: Directory
Create a pod and access it:
k create -f evil.yaml
k exec --stdin --tty evil -- /bin/bash
Get into the host filesystem:
chroot /mnt
bash
Run Kubeletmein
From a pod on the target, run this command to get the binary:
wget -q -O kubeletmein https://github.com/4ARMED/kubeletmein/releases/download/v1.0.2/kubeletmein_1.0.2_linux_amd64 && chmod +x kubeletmein
Generate a kube config:
./kubeletmein generate
Make a nice alias to save on typing:
echo "alias k='kubectl --kubeconfig kubeconfig.yaml'" | tee -a ~/.bashrc
source ~/.bashrc
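Sanity check that the generated kubeconfig actually works (what you can see depends on the node's credentials):
k get pods --all-namespaces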
Run Kubehunter
Grab kube-hunter job yaml:
wget https://raw.githubusercontent.com/aquasecurity/kube-hunter/main/job.yaml
Start the job and grab the pod name:
k apply -f job.yaml
KHPOD=$(k describe job kube-hunter |grep 'Created pod:' | awk -F ' ' '{print $7}')
Monitor the findings - these will be passive:
k logs -f $KHPOD
Delete the job when you’re done:
k delete jobs kube-hunter
Run active scan
Modify job.yaml:
---
apiVersion: batch/v1
kind: Job
metadata:
  name: kube-hunter
spec:
  template:
    spec:
      containers:
      - name: kube-hunter
        image: aquasec/kube-hunter
        command: ["kube-hunter"]
        args: ["--pod", "--active"]
      restartPolicy: Never
  backoffLimit: 4
Start the job and grab the pod name:
k apply -f job.yaml
KHPOD=$(k describe job kube-hunter |grep 'Created pod:' | awk -F ' ' '{print $7}')
Monitor the findings:
k logs -f $KHPOD
Delete the job when you’re done:
k delete jobs kube-hunter
Run Kube-Bench
From a pod on the target, run this command to get the job.yaml file from the repo:
wget https://raw.githubusercontent.com/aquasecurity/kube-bench/main/job.yaml
Start the job and grab the pod name:
k apply -f job.yaml
KBPOD=$(k describe job kube-bench |grep 'Created pod:' | awk -F ' ' '{print $7}')
Monitor the findings - these will be passive:
k logs -f $KBPOD
Run against an EKS deployment
Create an Amazon Elastic Container Registry (ECR) repository to host the kube-bench container image:
aws ecr create-repository --repository-name k8s/kube-bench --image-tag-mutability MUTABLE
Set ${AWS_REGION} if it's not already set. Then download, build, and push the kube-bench container image to your ECR repo:
git clone https://github.com/aquasecurity/kube-bench.git
cd kube-bench
AWS_ACCT_ID=12312312312
aws ecr get-login-password --region ${AWS_REGION} | docker login --username AWS --password-stdin ${AWS_ACCT_ID}.dkr.ecr.${AWS_REGION}.amazonaws.com
docker build -t k8s/kube-bench .
docker tag k8s/kube-bench:latest ${AWS_ACCT_ID}.dkr.ecr.${AWS_REGION}.amazonaws.com/k8s/kube-bench:latest
docker push ${AWS_ACCT_ID}.dkr.ecr.${AWS_REGION}.amazonaws.com/k8s/kube-bench:latest
Copy the URI of your pushed image; the URI format looks like this: ${AWS_ACCT_ID}.dkr.ecr.${AWS_REGION}.amazonaws.com/k8s/kube-bench:latest
Replace the image value in job-eks.yaml with the URI you just copied.
Run the job on a pod in your cluster and monitor the findings:
kubectl apply -f job-eks.yaml
KBPOD=$(k describe job kube-bench |grep 'Created pod:' | awk -F ' ' '{print $7}')
k logs -f $KBPOD
Run MKIT
You will need to export your AWS credentials as local environment variables:
export AWS_PROFILE=asdfasdf
export AWS_ACCESS_KEY_ID=asdfasdf
export AWS_SECRET_ACCESS_KEY=asdfdsf
Next clone the repo:
git clone git@github.com:darkbitio/mkit.git
cd mkit
Get cluster information:
aws eks list-clusters --region us-west-2 | jq
Run mkit for each cluster:
make run-eks awsregion=us-west-2 clustername=myawesomecluster
Navigate to http://localhost:8000/ to see results for each.
Resource: https://github.com/darkbitio/mkit#example-run-against-a-standalone-cluster
Helm
Debug template
helm template <chart name> -f ./values.yaml --debug
For example, if we deployed the bitnami/ghost chart, we could use this command to debug it:
helm template bitnami/ghost -f ./values.yaml --debug
Resource: https://www.reddit.com/r/kubernetes/comments/j3j3ox/ghost_helm_no_pod_showing_up/
Search for repository by name
This example will search the Artifact Hub for ghost charts:
helm search hub ghost --max-col-width=0
Open the desired link to get the helm repo add command to use.
Show local repo list
helm repo list
Remove repo from local repo list
This example will remove the bitnami repo:
helm repo rm bitnami
Add repo to local repo list
This particular example will add the bitnami repo:
helm repo add bitnami https://charts.bitnami.com/bitnami
Search for versions of a chart in local repo
This will return a list of ghost charts:
helm search repo ghost --versions
Get latest version of a chart in local repo
helm search repo ghost --versions | sed -n 2p | awk '{print $2}'
List Releases in all namespaces
helm ls --all-namespaces
# Shorthand:
helm ls -A
Resources:
- https://github.com/helm/helm/issues/7527
- Tutorial with some great helm examples
- Helm Docs
- Add repo found with search command
Install Plugin
This example will install the helm diff plugin:
helm plugin install https://github.com/databus23/helm-diff
Resources: https://jhooq.com/helm-chart-plugin/ - tutorial https://github.com/helm/helm/issues/3156 - initial suggestion
Uninstall Plugin
This example will uninstall the helm diff plugin:
helm plugin uninstall diff
Show helm values
Show the default values for a chart from one of your local repos:
helm show values "${OWNER}/${REPO}"
# ex.
helm show values k8s-at-home/home-assistant
Create values file
OWNER='traefik'
REPO='traefik'
helm show values "${OWNER}/${REPO}" > /tmp/traefik-chart.yaml
Resource: https://www.virtualizationhowto.com/2022/06/traefik-helm-install-and-configuration/
Download helm chart
helm fetch k8s-at-home/home-assistant
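Add --untar if you want the chart unpacked into a directory instead of a .tgz:
helm fetch k8s-at-home/home-assistant --untar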
EKS
Get all eks clusters
aws eks list-clusters --region us-west-2 | jq
Get all node groups in a cluster
Pick one of the clusters from the previous command and plug it in for cluster_name.
aws eks list-nodegroups --region us-west-2 --cluster-name $cluster_name | jq
List all nodes in all clusters
eks_get_all.sh:
REGION="us-west-2"
CLUSTERS=($(aws eks list-clusters --region ${REGION} | jq -c '.[][]' | tr -d '"'))
for cluster in ${CLUSTERS[@]}; do
echo "Nodes in ${cluster}:"
aws eks list-nodegroups --region ${REGION} --cluster-name ${cluster} | jq -c '.[][]' | tr -d '"'
done
Auto configure kubeconfig
aws eks --region us-west-2 update-kubeconfig --name ${cluster}
Resource: https://www.bluematador.com/blog/my-first-kubernetes-cluster-a-review-of-amazon-eks
Kompose
Kompose is a tool to convert a docker-compose file to Kubernetes manifests.
Create chart
kompose convert -c
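To convert a specific compose file into plain manifests in a chosen output directory (the file and directory names here are examples):
kompose convert -f docker-compose.yml -o k8s-manifests/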
Resource: https://kompose.io/
Kind
Kind is a tool for running local Kubernetes clusters using Docker container “nodes”.
Installation
brew install kind
Create cluster
kind create cluster --name testing-kind
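To use a locally built image with this cluster (pairs with imagePullPolicy: Never above), load it into kind first; the image name is a placeholder:
kind load docker-image myimage:latest --name testing-kind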
Destroy cluster
kind delete cluster --name testing-kind
Resource: https://kind.sigs.k8s.io/
Deploy Kind + Rancher
gh repo clone ozbillwang/rancher-in-kind
cd rancher-in-kind
# Create deployment with 1 worker and rancher
bash rkind.sh create
# Destroy deployment
bash rkind.sh delete
# Set admin pw
docker exec -it rancher-for-kind reset-password
Resource: https://github.com/ozbillwang/rancher-in-kind
Change rancher admin pw
kubectl -n cattle-system exec $(kubectl -n cattle-system get pods -l app=rancher | grep '1/1' | head -1 | awk '{ print $1 }') -- reset-password
Dump config
kubectl config view --raw >~/.kube/config
Resource: https://github.com/k3s-io/k3s/issues/1126
Get pod CIDR
kubectl get nodes -o jsonpath='{.items[*].spec.podCIDR}'
Resource: https://devops.stackexchange.com/questions/5898/how-to-get-kubernetes-pod-network-cidr
Delete everything
Sometimes you just want to watch the world burn…
kubectl delete all --all
k delete ns --all
List all container images in namespace
NS=kube-system
kubectl get pods --namespace $NS -o jsonpath="{.items[*].spec.containers[*].image}"
Delete pods stuck terminating
kubectl get pods --all-namespaces | grep Terminating | while read line; do
  pod_name=$(echo $line | awk '{print $2}')
  name_space=$(echo $line | awk '{print $1}')
  kubectl delete pods $pod_name -n $name_space --grace-period=0 --force
done
Resource: https://stackoverflow.com/questions/35453792/pods-stuck-in-terminating-status
Fix stuck CRD
CRD=clusterissuers.cert-manager.io
kubectl get \
--raw "/apis/apiextensions.k8s.io/v1/customresourcedefinitions/${CRD}" |
jq 'del(.metadata.finalizers)' |
kubectl replace \
--raw "/apis/apiextensions.k8s.io/v1/customresourcedefinitions/${CRD}" -f -
Flux CD
Bootstrap github deployment with Flux CD
Create a Personal Access Token with full repo scoped permissions and assign it to $GITHUB_TOKEN.
Make sure your k8s cluster will work with flux:
flux check --pre
Fill in the rest of the environment variables below and run this command:
# Example values:
export GITHUB_TOKEN=$FLUX_PAT_GOES_HERE
export PATH_TO_FLUX_DEPLOYMENT=./kubernetes/flux-system/config
export REPO_OWNER=CowDogMoo
export REPO_NAME=walls-of-excellence
flux bootstrap github \
--owner=$REPO_OWNER \
--repository=$REPO_NAME \
--path=$PATH_TO_FLUX_DEPLOYMENT \
--personal \
--token-auth
The path parameter's value is set with the assumption that you're working out of the root of the repo that flux will be using.
Resources:
- https://www.youtube.com/watch?v=PFLimPh5-wo
- https://github.com/onedr0p/flux-cluster-template/blob/main/.taskfiles/ClusterTasks.yml
Install helm chart with Flux CD
This particular example will deploy the k8s-at-home home-assistant helm chart in the ha namespace, using the helm values at helm/home-assistant-chart-values.yaml:
NAME=home-assistant
CHART_URL=https://k8s-at-home.com/charts
NS=ha
flux create source helm $NAME \
--url $CHART_URL \
--namespace $NS \
--export > ./$NAME-source.yaml
flux create helmrelease $NAME \
--source HelmRepository/$NAME \
--chart $NAME \
--namespace $NS \
--values helm/$NAME-chart-values.yaml \
--export > ./$NAME-helmrelease.yaml
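Commit the two exported manifests to the repository Flux reconciles, or apply them directly:
kubectl apply -f ./$NAME-source.yaml -f ./$NAME-helmrelease.yaml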
List all flux resources
flux get all
List HelmReleases
flux get hr --all-namespaces
List all failing resources in all ns
flux get all -A --status-selector ready=false
Delete HelmRelease
flux delete helmrelease -n $NS $HR_NAME
Manual Reconcile
flux reconcile kustomization flux-system
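To force a fresh pull from git first, reconcile the source as well:
flux reconcile source git flux-system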
Resource: https://github.com/obrodinho/flux-intro
Exclude sources
Create a .sourceignore file. For example:
# .sourceignore
# Flux ignore
# https://fluxcd.io/docs/components/source/gitrepositories/#excluding-files
# Exclude all
/*
# Include manifest directories
!/resources/
!/clusters/
!/infrastructure/
Resource: https://github.com/fluxcd/flux2/discussions/2539
Get flux error logs
flux logs --all-namespaces --level=error
Resource: https://fluxcd.io/flux/cheatsheets/troubleshooting/
Stream flux logs
flux logs -f
Update flux
flux install
Resource: https://fluxcd.io/flux/installation/#in-cluster-upgrade
Delete flux
flux uninstall
Manually reconcile flux resources
#!/bin/bash
set -ex
for dir in $(find . -type d); do
pushd "$dir" >/dev/null
# 1. Run kubectl apply -k .
if [[ -f "kustomization.yaml" ]]; then
kubectl apply -k .
fi
popd >/dev/null
done
for ks_file in $(find . -name "ks.yaml"); do
# 2. Run kubectl apply -f ks.yaml (in the same directory as kustomization.yaml)
kubectl apply -f "$ks_file"
done
for app_dir in $(find . -type d -name "app"); do
pushd "$app_dir" >/dev/null
# 3. Run kubectl apply -k .
if [[ -f "kustomization.yaml" ]]; then
kubectl apply -k .
fi
popd >/dev/null
done
Backup/Restore of cert-manager resource configurations
Backup:
kubectl get --all-namespaces -o yaml issuer,clusterissuer,cert > backup.yaml
# The tls key
kubectl get secrets -n cert-manager letsencrypt-production -o yaml > backup-tls-sensitive.yaml
Restore:
kubectl apply -f backup-tls-sensitive.yaml
# Make sure the secret is created so that you don't accidentally
# burn a bunch of calls to letsencrypt's prod endpoint.
kubectl apply -f <(awk '!/^ *(resourceVersion|uid): [^ ]+$/' backup.yaml)
Resource: https://cert-manager.io/docs/tutorials/backup/
Start cronjob manually
NS=yournamespace
CRON_JOB=delete-pods-cronjob
JOB_NAME=some-job
kubectl create job -n $NS --from=cronjob/$CRON_JOB $JOB_NAME
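To follow the job you just kicked off (same variables assumed):
kubectl logs -n $NS -f job/$JOB_NAME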
Get logs for all containers with a specific label
kubectl logs -l name=myLabel
Resource: https://sematext.com/blog/tail-kubernetes-logs/
View PVC contents
Create the inspector pod:
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: pvc-inspector
spec:
  containers:
  - image: busybox
    name: pvc-inspector
    command: ["tail"]
    args: ["-f", "/dev/null"]
    volumeMounts:
    - mountPath: /pvc
      name: pvc-mount
  volumes:
  - name: pvc-mount
    persistentVolumeClaim:
      claimName: YOUR_CLAIM_NAME_HERE
EOF
Inspect the contents:
kubectl exec -it pvc-inspector -- sh
$ ls /pvc
Clean Up:
kubectl delete pod pvc-inspector
Apply kustomize manifests
kubectl apply -k .
Get all pulled container images from nodes
kubectl get node -o json | jq -r '.items[].status.images[].names'
Resource: https://stackoverflow.com/questions/57460704/list-all-pulled-docker-images-from-k8s-worker-node