📝 context = group of access parameters (cluster, user, namespace…)
kubectl config get-contexts
kubectl config current-context
kubectl config use-context my-cluster-name
kubectl config rename-context old-name new-name
kubectl get ns
kubectl create namespace my-namespace (📝 namespace names must be DNS-1123 compliant: no underscores, use dashes)
kubectl config set-context --current --namespace my-namespace
kubectl get events --namespace my-namespace
kubectl get services
kubectl get pods --all-namespaces
kubectl get pods -o wide
kubectl describe deployment
kubectl get deployment my-deployment
kubectl get pods
kubectl get pod my-pod -o yaml
kubectl describe nodes my-node
kubectl describe pods my-pod
kubectl exec --stdin --tty pod_name -- /bin/bash
kubectl logs pod_name
kubectl logs pod_name > app.log
📝 Open the saved logs and hide lines containing "abc": grep -v "abc" app.log
📝 Open the saved logs and show only lines containing "abc": grep "abc" app.log
📝 Make the match case-insensitive with the parameter -i: grep -i "abc" app.log
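A quick way to try the two filters end to end; the sample file below stands in for the output of `kubectl logs pod_name > app.log`:

```shell
# Sample file standing in for `kubectl logs pod_name > app.log`
printf 'INFO started\nERROR abc failed\nWARN ABC retry\n' > app.log

# Only lines containing "abc"; -i makes the match case-insensitive
grep -i "abc" app.log

# All lines NOT containing "abc" (-v inverts the match, case-sensitive here)
grep -v "abc" app.log
```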
helm list
helm install release_name ./chart_path --dry-run --debug
K8s Cluster (= Kubernetes environment)
    K8s namespace (= logical container for resources)
        Helm release (= instance of a Helm chart running in a K8s namespace)
            K8s deployment (= set of identical pods)
                K8s pod (= smallest deployable unit in Kubernetes)
                    Docker container
K8s node (= server that runs the Kubernetes runtime and hosts the pods)
CI/CD pipeline | Helm | Kubernetes | Container (Linux env) | Application (source code)
---|---|---|---|---
variable | values.yaml * | ConfigMap/Secret/config file (reference configurations and provide them to the container) | environment variable (or a volume for files) | app properties
FOO_BAR = 1 | foo: bar: $FOO_BAR | data: FOO_BAR = (name used in the values.yaml file to do the mapping, could be whatever name) | FOO_BAR = (value received from Kubernetes) | foo.bar = $ (syntax depends on project language)
(*) Don't use dashes (-) in values.yaml keys! Helm templates read values with dot notation ({{ .Values.foo.bar }}), which breaks on keys containing a dash.
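A minimal sketch of the Helm side of that flow, using an illustrative key foo.bar and ConfigMap name my-config:

```yaml
# values.yaml (illustrative key names; no dashes)
foo:
  bar: "1"
```

```yaml
# templates/configmap.yaml -- maps the value onto the env var name FOO_BAR
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-config              # illustrative name
data:
  FOO_BAR: {{ .Values.foo.bar | quote }}
```

The container then receives FOO_BAR through env/envFrom in the pod spec.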
ReplicaSet: replicas: 2 => 2 pods; the scheduler decides placement, distribution across nodes is not guaranteed.
DaemonSet => 1 pod per node; the pod count follows the number of (matching) nodes, no replicas field.
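As a sketch, the two spec shapes differ mainly in kind and the replicas field (all names and images below are illustrative):

```yaml
# Deployment (manages a ReplicaSet): a fixed number of pods, placed by the scheduler
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: my-image:1.0
```

```yaml
# DaemonSet: no replicas field; one pod per (matching) node
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: my-agent
spec:
  selector:
    matchLabels:
      app: my-agent
  template:
    metadata:
      labels:
        app: my-agent
    spec:
      containers:
      - name: my-agent
        image: my-agent-image:1.0
```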
Abstraction that groups pods under a common access policy. A service gets a virtual IP; clients within the cluster connect to that IP and the traffic is proxied to the pods behind the service.
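A minimal Service manifest along those lines (name, label, and ports are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service      # illustrative
spec:
  selector:
    app: my-app         # targets pods carrying this label
  ports:
  - port: 80            # virtual-IP port clients use inside the cluster
    targetPort: 8080    # container port the traffic is proxied to
```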
Collection of containers that share the same resources.
All containers in a pod have the same host.
Inside the pod, each container has a localhost address (127.0.0.1) and a static port (user defined in the pod definition) => easy to move from a local/uncontainerized environment to a containerized environment. These ports are not visible outside the pod.
The pod has its own IP address, fixed for the pod's lifetime and unique in the cluster (a recreated pod gets a new IP).
A pod is usually created through the resource type (kind) "Deployment". The kind "Pod" is hardly ever used directly: it creates a standalone pod that is not managed by any controller.
Kubernetes components are stateless. They read and update their state through the Kubernetes API, which persists it in a key-value store named "etcd". When a component restarts, it retrieves its state from etcd.
kind: Service
spec:
  selector:
    app: APPNAME

kind: Deployment
spec:
  template:
    metadata:
      labels:
        app: APPNAME

Any pod with label app: APPNAME will be targeted by the service with that selector. For a deployment, the labels that matter are those of the pod template (spec.template.metadata.labels), since those are the labels the created pods carry.
Create a container with an environment variable with a specific key of a configMap
spec:
  containers:
  - name: <name>
    env:
    - name: <env_var_name>
      valueFrom:
        configMapKeyRef:
          name: <configmap_name>
          key: <key>
Create a container with environment variables with all keys of a configMap
spec:
  containers:
  - name: <name>
    envFrom:
    - configMapRef:
        name: <configmap_name>
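Both snippets reference a ConfigMap such as this one (name and key are illustrative):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-config   # the <configmap_name> referenced by the pod spec
data:
  FOO_BAR: "1"      # a <key>; with envFrom, every data key becomes an env var
```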