
    How to add a new disk to an LVM group

    Use the lsblk command to view your available disk devices and their mount points. The output of lsblk drops the /dev/ prefix from full device paths. In the example below, xvda is the root device and xvda1 is its partition; the MOUNTPOINT column shows that it is mounted at /. The xvdf device, on the other hand, has no partition and no mount point.

     

    [ec2-user ~]$ lsblk
    NAME    MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
    xvda    202:0    0    8G  0 disk
    -xvda1  202:1    0    8G  0 part /
    xvdf    202:80   0   10G  0 disk

     

    Let's say we want to add the disk xvdf. First we need to determine whether it already has a file system. New volumes are raw block devices, and a file system must be created on them before they can be used directly. We can check for an existing file system with the following command:

    [ec2-user ~]$ sudo file -s /dev/xvdf
    /dev/xvdf: data

    If the output shows simply data, there is no file system on the device:

    [ec2-user ~]$ sudo file -s /dev/xvda1
    /dev/xvda1: SGI XFS filesystem data (blksz 4096, inosz 512, v2 dirs)

    This output shows a root device with the XFS file system.

    We can also use lsblk -f to get file system information about all of the devices attached to the instance:

    [ec2-user ~]$ sudo lsblk -f

     

    Display all block devices that are available to LVM:

    sudo lvmdiskscan
    Output
      /dev/sda   [     200.00 GiB] 
      /dev/sdb   [     100.00 GiB] 
      2 disks
      2 partitions
      0 LVM physical volume whole disks
      0 LVM physical volumes

    This returns information about every device. If you only want to see the disks that LVM is already using, add the -l flag:

    lvmdiskscan -l
    Output
      WARNING: only considering LVM devices
      /dev/vda3                 [     <99.00 GiB] LVM physical volume
      0 LVM physical volume whole disks
      1 LVM physical volume

    The pvscan command scans all available devices for LVM physical volumes:

    sudo pvscan
    Output
      PV /dev/sda   VG LVMVolGroup     lvm2 [200.00 GiB / 0    free]
      PV /dev/sdb   VG LVMVolGroup     lvm2 [100.00 GiB / 10.00 GiB free]
      Total: 2 [299.99 GiB] / in use: 2 [299.99 GiB] / in no VG: 0 [0   ]

    pvs and pvdisplay can show additional information.

    If we also want to see the logical extents that have been mapped to each physical volume, we can pass the -m option to pvdisplay:

    sudo pvdisplay -m

    To discover the available volume groups we can use:

    vgscan
    Output
      Reading all physical volumes.  This may take a while...
      Found volume group "LVMVolGroup" using metadata type lvm2

    Here LVMVolGroup is the volume group to which we can add more space and in which we manage logical volumes.
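
    With the volume group identified, the new disk can actually be added to it. The commands below are a minimal sketch, assuming the new disk is /dev/xvdf and the volume group is LVMVolGroup (names taken from the examples above); my_lv is a placeholder logical volume name, and the final grow step depends on which file system the volume uses.

    # initialize the new disk as an LVM physical volume
    sudo pvcreate /dev/xvdf
    # add the new physical volume to the existing volume group
    sudo vgextend LVMVolGroup /dev/xvdf
    # grow a logical volume (my_lv is a placeholder) into the new free space
    sudo lvextend -l +100%FREE /dev/LVMVolGroup/my_lv
    # grow the file system: resize2fs for ext4, xfs_growfs (on the mount point) for XFS
    sudo resize2fs /dev/LVMVolGroup/my_lv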

     

     


      How to block bad bots on a website served by nginx

      Some bad bots and web crawlers constantly send a lot of requests to our site and slow it down. To block them, add the following to the nginx server block:

       

      if ($http_user_agent ~* (Amazonbot|MJ12bot|AhrefsBot|DotBot|SemrushBot|petalbot)) {
          return 403;
      }

       

      After saving the site config, test it and reload nginx.
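
      A quick sketch, assuming nginx is managed by systemd (adjust for other init systems):

      # check the configuration for syntax errors, then reload without downtime
      sudo nginx -t && sudo systemctl reload nginx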


        ssh and curl unable to connect to target host - SSH2_MSG_KEX_ECDH_REPLY hang and timeout

        I had two machines that could not connect via ssh or curl: the connection would hang or close before authentication. It turned out both symptoms had the same underlying cause. Here is how I solved it.

        For the ssh issue, I found that explicitly specifying the key exchange algorithm made the problem go away, so I ended up with the following workaround:

        I created a config file in the ~/.ssh directory with the following lines:

        # ~/.ssh/config
        Host *
        KexAlgorithms ecdh-sha2-nistp521
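
        The same algorithm can also be forced for a single connection without touching the config file (user@host is a placeholder for the target machine):

        ssh -o KexAlgorithms=ecdh-sha2-nistp521 user@host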

         

        But curl was still not working, which was the second problem. I finally found that lowering the MTU of my network interface fixed both issues, so the ssh config workaround above is not even necessary.

        You can change the MTU with the following command. My MTU was 1500 and I lowered it to 1200:

        sudo ip li set mtu 1200 dev wlp3s0

        Or

        sudo ifconfig wlp3s0 mtu 1200
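
        To confirm the new value took effect (wlp3s0 is the interface name from the example above; note that a change made this way does not persist across reboots):

        ip link show wlp3s0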

        There are other known bugs in the openssh package as well, so whenever you hit a strange problem it is worth checking the bug list.


          Forgotten or non-working argocd default password? How to reset the argocd admin password

               1. Patch the argocd secret to clear the stored admin password:

          kubectl -n argocd patch secret argocd-secret  -p '{"data": {"admin.password": null, "admin.passwordMtime": null}}'

               2. Restart the argocd-server pod, either by deleting it directly or by scaling the deployment to zero replicas and back to one:

          kubectl -n argocd scale deployment argocd-server --replicas=0
          # once scaled down, scale back up and wait a few minutes before fetching the new password
          kubectl -n argocd scale deployment argocd-server --replicas=1

           

          Now get the regenerated password with the following command:

          kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d

           

          Now you can log in as admin with that password and change it from the argocd dashboard.
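
          Alternatively, if the argocd CLI is installed, the admin password can also be changed from the command line. A sketch, assuming the API server is reached through a local port-forward on port 8080:

          # in a separate terminal, forward the argocd API server port
          kubectl -n argocd port-forward svc/argocd-server 8080:443
          # log in with the initial password retrieved above, then set a new one
          argocd login localhost:8080 --username admin --insecure
          argocd account update-password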

           


            elasticsearch - index lifecycle management

            First we need to create an index lifecycle policy, which lets us define when our indices should be deleted and keeps things simple and clean. If we assign this policy to an index manually, the assignment disappears when the index is deleted, and when the index is created again it does not pick up the policy automatically.

            To solve this we also need an index template, so that the necessary settings and the policy are applied every time a matching index is created. We'll define the rollover_alias and an alias in the template, so a newly created index gets all the necessary settings.

            For example, let's say we created a policy named test-log-policy.
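
            For reference, a minimal sketch of such a policy created through the ILM API; the rollover and retention values (7 days, 50 GB, 30 days) are placeholders to adjust to your needs:

            PUT _ilm/policy/test-log-policy
            {
              "policy": {
                "phases": {
                  "hot": {
                    "actions": {
                      "rollover": { "max_age": "7d", "max_size": "50gb" }
                    }
                  },
                  "delete": {
                    "min_age": "30d",
                    "actions": { "delete": {} }
                  }
                }
              }
            }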

            Now create an index template named test_log_template:

            PUT _template/test_log_template
            {
             "index_patterns": ["*-_file_logs"],
             "settings": {
               "number_of_shards": 1,
               "number_of_replicas": 1,
               "index.lifecycle.name": "test-log-policy",  
               "index.lifecycle.rollover_alias": "test-log-policy"
             },
             "aliases" : {
               "test-log-policy" : {} 
             }
            }

             

            Now, when an index is created with a name matching the pattern *-_file_logs, the template above will be applied and the index will automatically be attached to the policy test-log-policy.

            Check the template details:

            GET /_template/test_log_template

            You can also match several templates by using wildcards like:

            GET /_template/test*
            GET /_template/test_log_template,template_2

            To get a list of all index templates you can run:

            GET /_template
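
            To verify that a newly created index actually picked up the policy, the ILM explain API can be queried (the index pattern is the one from the template above):

            GET *-_file_logs/_ilm/explain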

             


              How to change permissions of a file or folder from a container on an NFS file server

              Sometimes a container startup script needs to change the permissions of a folder or file. For example, the startup script of docker.io/postgres:9.6.5 running in Kubernetes changes file permissions, and when the volume is backed by an NFS file server the operation may fail. The reason is that allowing the container's root user to act as root on the exported NFS directory (no_root_squash) is disabled by default on the NFS server. If you really need this feature, you can enable it with the following steps:

              Set the NFS export directory options (in /etc/exports):
                      "nfs/main *(rw,sync,no_subtree_check,no_root_squash)"
              Then reload the NFS server:
                     sudo /etc/init.d/nfs-kernel-server restart
                     or
                     sudo exportfs -arv
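
              To double-check that the export is now active with the new options:

              sudo exportfs -v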


                Certified Kubernetes Administrator mock exam questions

                Pass Percentage - 74%

                Q. 2

                Question

                List the InternalIP of all nodes of the cluster. Save the result to a file /root/CKA/node_ips.

                Answer should be in the format: InternalIP of controlplane<space>InternalIP of node01 (in a single line)

                Solution

                Explore the jsonpath loop. 
                kubectl get nodes -o jsonpath='{.items[*].status.addresses[?(@.type=="InternalIP")].address}' > /root/CKA/node_ips

                Q. 3

                Question

                Create a pod called multi-pod with two containers. 
                Container 1, name: alpha, image: nginx
                Container 2: name: beta, image: busybox, command: sleep 4800 

                Environment Variables:
                container 1:
                name: alpha

                Container 2:
                name: beta

                Solution

                Solution manifest file to create a multi-container pod multi-pod as follows:

                ---
                apiVersion: v1
                kind: Pod
                metadata:
                  name: multi-pod
                spec:
                  containers:
                  - image: nginx
                    name: alpha
                    env:
                    - name: name
                      value: alpha
                  - image: busybox
                    name: beta
                    command: ["sleep", "4800"]
                    env:
                    - name: name
                      value: beta

                Q. 4

                Question

                Create a Pod called non-root-pod , image: redis:alpine

                runAsUser: 1000

                fsGroup: 2000

                Solution

                Solution manifest file to create a pod called non-root-pod as follows:

                ---
                apiVersion: v1
                kind: Pod
                metadata:
                  name: non-root-pod
                spec:
                  securityContext:
                    runAsUser: 1000
                    fsGroup: 2000
                  containers:
                  - name: non-root-pod
                    image: redis:alpine

                Verify the user and group IDs by using below command:

                kubectl exec -it non-root-pod -- id

                Q. 5

                Question

                We have deployed a new pod called np-test-1 and a service called np-test-service. Incoming connections to this service are not working. Troubleshoot and fix it.
                Create NetworkPolicy, by the name ingress-to-nptest that allows incoming connections to the service over port 80.

                Important: Don't delete any current objects deployed.

                Solution

                Solution manifest file to create a network policy ingress-to-nptest as follows:

                ---
                apiVersion: networking.k8s.io/v1
                kind: NetworkPolicy
                metadata:
                  name: ingress-to-nptest
                  namespace: default
                spec:
                  podSelector:
                    matchLabels:
                      run: np-test-1
                  policyTypes:
                  - Ingress
                  ingress:
                  - ports:
                    - protocol: TCP
                      port: 80
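
                One way to verify the policy (a sketch; it assumes np-test-service exposes port 80 in the default namespace and uses busybox's built-in nc):

                kubectl run np-test-client --rm -it --image=busybox:1.28 -- nc -z -w 2 np-test-service 80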

                Q. 6

                Question

                Taint the worker node node01 to be Unschedulable. Once done, create a pod called dev-redis, image redis:alpine, to ensure workloads are not scheduled to this worker node. Finally, create a new pod called prod-redis and image: redis:alpine with toleration to be scheduled on node01.

                key: env_type, value: production, operator: Equal and effect: NoSchedule

                Solution

                To add taints on the node01 worker node:

                kubectl taint node node01 env_type=production:NoSchedule

                Now, deploy the dev-redis pod to confirm that workloads without a toleration are not scheduled on the node01 worker node:

                kubectl run dev-redis --image=redis:alpine

                To view the node name of recently deployed pod:

                kubectl get pods -o wide

                Solution manifest file to deploy a new pod called prod-redis with a toleration so it can be scheduled on the node01 worker node:

                ---
                apiVersion: v1
                kind: Pod
                metadata:
                  name: prod-redis
                spec:
                  containers:
                  - name: prod-redis
                    image: redis:alpine
                  tolerations:
                  - effect: NoSchedule
                    key: env_type
                    operator: Equal
                    value: production     

                To view only prod-redis pod with less details:

                kubectl get pods -o wide | grep prod-redis

                Q. 7

                Question

                Create a pod called hr-pod in the hr namespace, belonging to the production environment and the frontend tier.
                image: redis:alpine

                Use appropriate labels and create any required objects that do not already exist in the system.

                Solution

                Create a namespace if it doesn't exist:

                kubectl create namespace hr

                and then create a hr-pod with given details:

                kubectl run hr-pod --image=redis:alpine --namespace=hr --labels=environment=production,tier=frontend

                Q. 8

                Question

                A kubeconfig file called super.kubeconfig has been created under /root/CKA. There is something wrong with the configuration. Troubleshoot and fix it.

                Solution

                Verify that the host and port for the kube-apiserver are correct.

                Open super.kubeconfig in the vi editor.

                Change the port 9999 to 6443 and run the command below to verify:
                 

                kubectl cluster-info --kubeconfig=/root/CKA/super.kubeconfig

                Q. 9

                Question

                We have created a new deployment called nginx-deploy. Scale the deployment to 3 replicas. Has the replica count increased? Troubleshoot the issue and fix it.

                Solution

                Use the command kubectl scale to increase the replica count to 3. 
                 

                kubectl scale deploy nginx-deploy --replicas=3

                The controller-manager is responsible for scaling up pods of a replicaset. If you inspect the control plane components in the kube-system namespace, you will see that the controller-manager is not running. 
                 

                kubectl get pods -n kube-system

                The command configured for the controller-manager pod is incorrect: the binary name is misspelled in its static pod manifest. Fix the values in the file and wait for the controller-manager pod to restart.

                Alternatively, you can run a sed command to fix all occurrences at once:

                sed -i 's/kube-contro1ler-manager/kube-controller-manager/g' /etc/kubernetes/manifests/kube-controller-manager.yaml

                This fixes the issue in the controller-manager manifest file.

                Finally, inspect the deployment with the command below:

                kubectl get deploy

                    kubernetes nodelocaldns crash - loop detected for zone

                    I have hit this issue. Here is a possible solution.

                    I solved the problem by deleting the 'loop' plugin from the ConfigMaps of coredns and nodelocaldns. Before that I could not resolve DNS from inside my pods; once nodelocaldns started running again, the DNS problem was gone. I applied the following workaround, though I don't know whether it could cause other problems; I have not seen any so far. Solution:

                    1. kubectl edit cm coredns -n kube-system

                    2. Delete 'loop', save and exit

                    3. Restart the nodelocaldns pods
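
                    For step 3, a minimal sketch assuming nodelocaldns is deployed as a DaemonSet named nodelocaldns in the kube-system namespace (the exact name and labels depend on how it was installed):

                    kubectl -n kube-system rollout restart daemonset nodelocaldns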
