Certified Kubernetes Administrator (CKA) mock exam questions

    Pass Percentage - 74%

    Q. 2

Question

    List the InternalIP of all nodes of the cluster. Save the result to a file /root/CKA/node_ips.

    Answer should be in the format: InternalIP of controlplane<space>InternalIP of node01 (in a single line)

Solution

Use a JSONPath query to extract the InternalIP address of each node:
    kubectl get nodes -o jsonpath='{.items[*].status.addresses[?(@.type=="InternalIP")].address}' > /root/CKA/node_ips
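To double-check the result, you can print the saved file (a quick sanity check, not part of the required answer):

cat /root/CKA/node_ips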

    Q. 3

Question

    Create a pod called multi-pod with two containers. 
    Container 1, name: alpha, image: nginx
    Container 2: name: beta, image: busybox, command: sleep 4800 

    Environment Variables:
    container 1:
    name: alpha

    Container 2:
    name: beta

Solution

    Solution manifest file to create a multi-container pod multi-pod as follows:

    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: multi-pod
    spec:
      containers:
      - image: nginx
        name: alpha
        env:
        - name: name
          value: alpha
      - image: busybox
        name: beta
        command: ["sleep", "4800"]
        env:
        - name: name
          value: beta
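A minimal way to apply and inspect the pod, assuming the manifest above is saved as multi-pod.yaml:

kubectl apply -f multi-pod.yaml
kubectl get pod multi-pod
kubectl get pod multi-pod -o jsonpath='{.spec.containers[*].name}'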

    Q. 4

Question

Create a Pod called non-root-pod, image: redis:alpine

    runAsUser: 1000

    fsGroup: 2000

Solution

    Solution manifest file to create a pod called non-root-pod as follows:

    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: non-root-pod
    spec:
      securityContext:
        runAsUser: 1000
        fsGroup: 2000
      containers:
      - name: non-root-pod
        image: redis:alpine

Verify the user and group IDs using the command below:

    kubectl exec -it non-root-pod -- id

    Q. 5

Question

We have deployed a new pod called np-test-1 and a service called np-test-service. Incoming connections to this service are not working. Troubleshoot and fix it.
Create a NetworkPolicy named ingress-to-nptest that allows incoming connections to the service over port 80.

    Important: Don't delete any current objects deployed.

Solution

    Solution manifest file to create a network policy ingress-to-nptest as follows:

    ---
    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: ingress-to-nptest
      namespace: default
    spec:
      podSelector:
        matchLabels:
          run: np-test-1
      policyTypes:
      - Ingress
      ingress:
      - ports:
        - protocol: TCP
          port: 80
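To sanity-check the policy, you can try reaching the service from a temporary pod. This is only an illustrative test; the image can be swapped for any that ships nc:

kubectl run test-netpol --image=busybox:1.28 --rm -it --restart=Never -- nc -z -w 2 np-test-service 80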

    Q. 6

Question

Taint the worker node node01 to be Unschedulable. Once done, create a pod called dev-redis, image redis:alpine, to verify that workloads are not scheduled to this worker node. Finally, create a new pod called prod-redis, image redis:alpine, with a toleration so it can be scheduled on node01.

    key: env_type, value: production, operator: Equal and effect: NoSchedule

Solution

To add the taint on the node01 worker node:

    kubectl taint node node01 env_type=production:NoSchedule

Now deploy the dev-redis pod to verify that workloads are not scheduled to the node01 worker node.

    kubectl run dev-redis --image=redis:alpine

To view the node name of the recently deployed pod:

    kubectl get pods -o wide

Solution manifest file to deploy a new pod called prod-redis with a toleration so it can be scheduled on the node01 worker node:

    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: prod-redis
    spec:
      containers:
      - name: prod-redis
        image: redis:alpine
      tolerations:
      - effect: NoSchedule
        key: env_type
        operator: Equal
        value: production     

To view only the prod-redis pod:

    kubectl get pods -o wide | grep prod-redis

    Q. 7

Question

Create a pod called hr-pod in the hr namespace, belonging to the production environment and frontend tier.
    image: redis:alpine

Use appropriate labels, and create any required objects if they do not already exist in the system.

Solution

    Create a namespace if it doesn't exist:

    kubectl create namespace hr

and then create the hr-pod with the given details:

    kubectl run hr-pod --image=redis:alpine --namespace=hr --labels=environment=production,tier=frontend
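To confirm the pod and its labels (an optional check):

kubectl get pod hr-pod -n hr --show-labels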

    Q. 8

Question

    A kubeconfig file called super.kubeconfig has been created under /root/CKA. There is something wrong with the configuration. Troubleshoot and fix it.

Solution

Verify that the host and port for the kube-apiserver are correct.

Open super.kubeconfig in a text editor (e.g. vi).

Change the port from 9999 to 6443 and run the command below to verify:
     

    kubectl cluster-info --kubeconfig=/root/CKA/super.kubeconfig
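If you prefer to find and fix the wrong port from the command line, a quick sketch (the file path comes from the question):

grep -n "server:" /root/CKA/super.kubeconfig
sed -i 's/9999/6443/' /root/CKA/super.kubeconfig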

    Q. 9

Question

We have created a new deployment called nginx-deploy. Scale the deployment to 3 replicas. Did the replica count increase? Troubleshoot the issue and fix it.

Solution

    Use the command kubectl scale to increase the replica count to 3. 
     

    kubectl scale deploy nginx-deploy --replicas=3

    The controller-manager is responsible for scaling up pods of a replicaset. If you inspect the control plane components in the kube-system namespace, you will see that the controller-manager is not running. 
     

    kubectl get pods -n kube-system

The command configured for the controller-manager pod is incorrect (the manifest misspells the binary as kube-contro1ler-manager, with a digit 1).
Fix the values in the manifest file and wait for the controller-manager pod to restart.

Alternatively, you can run a sed command to fix all occurrences at once:

    sed -i 's/kube-contro1ler-manager/kube-controller-manager/g' /etc/kubernetes/manifests/kube-controller-manager.yaml

This fixes the typo in the kube-controller-manager manifest file.

Finally, inspect the deployment with the command below:

    kubectl get deploy

        kubernetes nodelocaldns crash - loop detected for zone

I have had this issue. Here is a possible solution:

I solved the problem by deleting the 'loop' plugin from the coredns and nodelocaldns ConfigMaps. I could not resolve DNS from inside my pods; once nodelocaldns started running, the DNS problem was solved. I applied the following workaround, although I don't know whether it could cause other problems. I have not faced any issue so far. Solution:

1. kubectl edit cm coredns -n kube-system

2. Delete 'loop', save and exit

3. Restart the nodelocaldns pods
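One way to restart them is to delete the nodelocaldns pods and let the DaemonSet recreate them. The label below is what kubespray typically uses; check the exact label in your cluster first with kubectl get ds -n kube-system --show-labels:

kubectl -n kube-system delete pods -l k8s-app=nodelocaldns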


How to run a GitLab runner on your host machine

We are going to discuss how to run a GitLab runner on an Ubuntu server machine.

             

            Download Package

            curl -LJO "https://gitlab-runner-downloads.s3.amazonaws.com/latest/deb/gitlab-runner_amd64.deb"
            
            dpkg -i gitlab-runner_amd64.deb

This will install the GitLab runner on your machine.

             

            Register a runner

We can register a runner for an individual repository or as a group runner. To create a runner, go to

Settings > CI/CD > Runners

where you will find the registration URL and registration token. Now let's create a runner that uses the Docker executor:

            sudo -E gitlab-runner register
                 Enter the GitLab instance URL (for example, https://gitlab.com/): https://gitlab.com/
                 Enter the registration token: yourtoken
                 Enter a description for the runner: My runner
                 Enter tags for the runner (comma-separated): docker, node [your job tag]
                 Enter an executor: docker-ssh, parallels, ssh, virtualbox, docker, shell, docker+machine, docker-ssh+machine, kubernetes, custom: docker
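Once registered, a pipeline selects this runner by tag. A minimal .gitlab-ci.yml sketch (the job name and script are placeholders):

test-job:
  tags:
    - docker
  script:
    - echo "running on the docker runner"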

             

If you need further configuration, edit the config file:

            # nano  /etc/gitlab-runner/config.toml
            concurrent = 10   #number of job at a time
            check_interval = 0
            
            [session_server]
              session_timeout = 1800
            # first runner
            [[runners]]
              name = "runner root"
              url = "https://gitlab.com/"
              token = "yourtoken"
              executor = "docker"
              [runners.custom_build_dir]
              [runners.cache]
                [runners.cache.s3]
                [runners.cache.gcs]
                [runners.cache.azure]
              [runners.docker]
                tls_verify = false
                image = "ruby:2.7"
                privileged = true
                disable_entrypoint_overwrite = false
                oom_kill_disable = false
                disable_cache = false
                volumes = ["/cache"]
                shm_size = 0
            
            # second runner
            [[runners]]
              name = "Nise T4 Task Runner"
              url = "https://gitlab.com/"
              token = "yourtoken"
              executor = "shell"
              [runners.custom_build_dir]
              [runners.cache]
                [runners.cache.s3]
                [runners.cache.gcs]
                [runners.cache.azure]
              [runners.docker]
                tls_verify = false
                image = "ruby:2.7"
                privileged = true
                disable_entrypoint_overwrite = false
                oom_kill_disable = false
                disable_cache = false
                volumes = ["/cache"]
                shm_size = 0

             

Example of a full Docker runner config:

            [runners.docker]
              host = ""
              hostname = ""
              tls_cert_path = "/Users/ayufan/.boot2docker/certs"
              image = "ruby:2.7"
              memory = "128m"
              memory_swap = "256m"
              memory_reservation = "64m"
              oom_kill_disable = false
              cpuset_cpus = "0,1"
              cpus = "2"
              dns = ["8.8.8.8"]
              dns_search = [""]
              privileged = false
              userns_mode = "host"
              cap_add = ["NET_ADMIN"]
              cap_drop = ["DAC_OVERRIDE"]
              devices = ["/dev/net/tun"]
              disable_cache = false
              wait_for_services_timeout = 30
              cache_dir = ""
              volumes = ["/data", "/home/project/cache"]
              extra_hosts = ["other-host:127.0.0.1"]
              shm_size = 300000
              volumes_from = ["storage_container:ro"]
              links = ["mysql_container:mysql"]
              allowed_images = ["ruby:*", "python:*", "php:*"]
              allowed_services = ["postgres:9", "redis:*", "mysql:*"]
              [[runners.docker.services]]
                name = "registry.example.com/svc1"
                alias = "svc1"
                entrypoint = ["entrypoint.sh"]
                command = ["executable","param1","param2"]
              [[runners.docker.services]]
                name = "redis:2.8"
                alias = "cache"
              [[runners.docker.services]]
                name = "postgres:9"
                alias = "postgres-db"
              [runners.docker.sysctls]
                "net.ipv4.ip_forward" = "1"

             

If you want to use the shell executor, add the gitlab-runner user to the docker group:

            usermod -aG docker gitlab-runner
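You can verify the group membership afterwards:

id gitlab-runner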

             

            Finally restart your gitlab-runner. 

            sudo gitlab-runner restart

             

That completes the configuration. You should now see the runner listed among your GitLab runners.

             


              kubernetes cert-manager - how to use wildcard ssl as a certificate issuer

I have a wildcard certificate bought from Namecheap, and I want to use it for all of my sub-domains. For this I need a cluster certificate issuer, and we are going to use cert-manager. So let's start.

Step 1: Create a secret in the cert-manager namespace; save the manifest as ca-secrets.yaml:

apiVersion: v1
kind: Secret
metadata:
  name: ca-key-pair
  namespace: cert-manager
data:
  tls.crt: <base64 of fullchain-ca.bundle>   # needs the full chain, e.g. cat nise_gov_bd.ca-bundle nise_gov_bd.crt > chain.pem
  tls.key: <base64 of cert.key>

If you hit an "unknown authority" problem, concatenate the files: cat server.crt server.ca-bundle server.key >> ssl-bundle.crt
Here server.crt is the certificate file only, server.ca-bundle is the bundle file, and server.key is the certificate key.

Some more information:
fullchain.pem = cert.pem + chain.pem

Typically use chain.pem (or the first certificate in it) when you're asked for a CA bundle or CA certificate. For example, for Let's Encrypt we use chain.pem as the CA certificate.
Then build the bundle like this: cat chain.pem cert.pem > fullchain-ca.bundle

You can generate the tls.crt and tls.key values with the following commands:

               cat fullchain-ca.bundle | base64 -w0
               cat cert.key | base64 -w0

Now apply the secret with the following command:

              kubectl apply -f ca-secrets.yaml
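As an alternative to base64-encoding the files by hand, kubectl can build an equivalent TLS secret directly (a sketch, reusing the file names from above):

kubectl -n cert-manager create secret tls ca-key-pair --cert=fullchain-ca.bundle --key=cert.key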

               

Step 2: Now create a certificate issuer, saved as ca-issuer.yaml:

apiVersion: cert-manager.io/v1alpha2
kind: ClusterIssuer
metadata:
  name: k-issuer
  namespace: cert-manager
spec:
  ca:
    secretName: ca-key-pair

Here secretName refers to the ca-key-pair secret we created in Step 1.

               

Step 3: Now create a Certificate, saved as cert.yaml, to test the issuer:

              apiVersion: cert-manager.io/v1alpha2
              kind: Certificate
              metadata:
                name: test-cert-by-kalyan
              spec:
                secretName: k-key-pair
                dnsNames:
                - "*.default.svc.cluster.local"
                - "core2.default.com"
                isCA: true
                issuerRef:
                  name: k-issuer
                  kind: ClusterIssuer

Here the issuerRef.name and issuerRef.kind fields are important.
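After applying cert.yaml, you can check that the certificate was issued and became Ready:

kubectl apply -f cert.yaml
kubectl get certificate test-cert-by-kalyan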

               

If you want to use this with your Ingress, just add this annotation:

              cert-manager.io/cluster-issuer: k-issuer
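For illustration, a minimal Ingress using that annotation might look like this (the host, service name, and TLS secret name are placeholders):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    cert-manager.io/cluster-issuer: k-issuer
spec:
  tls:
  - hosts:
    - app.example.com
    secretName: app-example-tls
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: app-service
            port:
              number: 80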

That's all we need to do. For more information, follow the cert-manager CA issuer documentation. If you have any questions or problems, please comment and I'll reply. Thank you.


How to Add a New Node to a Kubespray Managed Production Ready Kubernetes Cluster

I have a production-ready Kubernetes cluster managed by Kubespray. Now I need to add an additional node to the cluster. Here I am showing you how you can do that. Here is my existing cluster:

                  ╰─ kubectl get nodes                                                                                                                                        
                  NAME    STATUS   ROLES                  AGE    VERSION
                  node1   Ready    control-plane,master   28d    v1.21.4
                  node2   Ready    control-plane,master   28d    v1.21.4
                  node3   Ready    <none>                 28d    v1.21.4

                   

Now I want to add a new node4 to the cluster. First of all, I need to edit the existing Ansible hosts file, which is inventory/mycluster/hosts.yaml

                  # file path - inventory/mycluster/hosts.yaml
                  all:
                    hosts:
                      node1:
                        ansible_host: 10.180.63.193
                        ip: 10.180.63.193
                        access_ip: 10.180.63.193
                      node2:
                        ansible_host: 10.180.63.151
                        ip: 10.180.63.151
                        access_ip: 10.180.63.151
                      node3:
                        ansible_host: 10.180.63.30
                        ip: 10.180.63.30
                        access_ip: 10.180.63.30
                      node4:
                        ansible_host: 10.180.63.160
                        ip: 10.180.63.160
                        access_ip: 10.180.63.160
                    children:
                      kube_control_plane:
                        hosts:
                          node1:
                          node2:
                      kube_node:
                        hosts:
                          node1:
                          node2:
                          node3:
                          node4:
                      etcd:
                        hosts:
                          node1:
                          node2:
                          node3:
                      k8s_cluster:
                        children:
                          kube_control_plane:
                          kube_node:
                      calico_rr:
                        hosts: {}

                   

I added the node4 information to the hosts.yaml file above. Here is my node4 section:

                  #my node4 information 
                  
                      node4:
                        ansible_host: 10.180.63.160
                        ip: 10.180.63.160
                        access_ip: 10.180.63.160

                   

                  Now run the cluster.yml file to add the new node to the cluster.

                  ansible-playbook -i inventory/mycluster/hosts.yaml cluster.yml -u root -b -l node4

                   

Where,

• -i : inventory file to be used
• cluster.yml : playbook to deploy the cluster
• -u root : the user account created on all nodes for password-less SSH access
• -b : enable become; sudo access is needed for installing packages, starting services, creating SSL certificates, etc.
• -l node4 : limit the run to the new node so existing nodes are untouched

                   

Wait for the process to finish.

                  All done! Now verify the newly added node4 to the cluster.

                  ╰─ kubectl get nodes                                                                                                                                        
                  NAME    STATUS   ROLES                  AGE    VERSION
                  node1   Ready    control-plane,master   28d    v1.21.4
                  node2   Ready    control-plane,master   28d    v1.21.4
                  node3   Ready    <none>                 28d    v1.21.4
                  node4   Ready    <none>                 102s   v1.21.4

                  Now node4 is a part of your cluster.


                      How to generate kubernetes dashboard access token


                      1. Create the dashboard service account

                      Run the following command to create a service account

                      kubectl create serviceaccount kubernetes-dashboard-admin-sa -n kube-system

This command creates a service account in the kube-system namespace. Substitute your own namespace in place of kube-system if needed.

                       

                      2. Bind the service account to the cluster-admin role

                      kubectl create clusterrolebinding kubernetes-dashboard-admin-sa --clusterrole=cluster-admin --serviceaccount=kube-system:kubernetes-dashboard-admin-sa
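You can confirm the binding exists (optional check):

kubectl get clusterrolebinding kubernetes-dashboard-admin-sa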

                       

3. List secrets

                      kubectl get secrets -n kube-system

                       

If you are using Kubernetes 1.24 or above, where token secrets are no longer auto-created for service accounts, use the following command to get a token:

                      kubectl -n kube-system create token kubernetes-dashboard-admin-sa

Then you don't have to follow step 4.

                       

4. Get the token from the secret

                      kubectl describe secret kubernetes-dashboard-admin-sa-token-lj8cc -n kube-system

                       

Here your secret name may be different. Now copy the token and use it to log in to the Kubernetes dashboard.

                       
