

      kubernetes nodelocaldns crash - loop detected for zone

      I had the same issue. Here is a possible solution.

      I could not resolve DNS from inside my pods. I solved the problem by deleting the 'loop' plugin from the ConfigMaps of coredns and nodelocaldns; once nodelocaldns was running again, DNS resolution worked. I don't know whether this fix could cause other problems, but I have not faced any so far. Solution:

      1. kubectl edit cm coredns -n kube-system

      2. Delete the 'loop' line, save, and exit

      3. Restart the nodelocaldns pods
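For orientation, the 'loop' line sits inside the Corefile key of the coredns ConfigMap; a trimmed sketch of a typical block (the exact plugin list varies by cluster):

```
.:53 {
    errors
    health
    kubernetes cluster.local in-addr.arpa ip6.arpa {
        pods insecure
        fallthrough in-addr.arpa ip6.arpa
    }
    loop                       # <- the plugin line this fix deletes
    forward . /etc/resolv.conf
    cache 30
    reload
}
```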


          How to run a GitLab runner on your host machine

          We are going to discuss how to run a GitLab runner on an Ubuntu server machine.


          Download Package

          curl -LJO ""
          dpkg -i gitlab-runner_amd64.deb

          This will install the GitLab runner on your machine.


          Register a runner

          We can register a runner for an individual repository or as a group runner. To create a runner, go to

          Settings > CI/CD > Runners

          There you will find the URL and registration token to register the runner with. Now let's create a runner that executes jobs in Docker:

          sudo -E gitlab-runner register
               Enter the GitLab instance URL (for example,
               Enter the registration token: yourtoken
               Enter a description for the runner: My runner
               Enter tags for the runner (comma-separated): docker, node [your job tag]
               Enter an executor: docker-ssh, parallels, ssh, virtualbox, docker, shell, docker+machine, docker-ssh+machine, kubernetes, custom: docker


          If you need further configuration, edit the config file:

          # nano /etc/gitlab-runner/config.toml
          concurrent = 10   # number of jobs at a time
          check_interval = 0

          [session_server]
            session_timeout = 1800

          # first runner
          [[runners]]
            name = "runner root"
            url = ""
            token = "yourtoken"
            executor = "docker"
            [runners.docker]
              tls_verify = false
              image = "ruby:2.7"
              privileged = true
              disable_entrypoint_overwrite = false
              oom_kill_disable = false
              disable_cache = false
              volumes = ["/cache"]
              shm_size = 0

          # second runner
          [[runners]]
            name = "Nise T4 Task Runner"
            url = ""
            token = "yourtoken"
            executor = "shell"
            [runners.docker]
              tls_verify = false
              image = "ruby:2.7"
              privileged = true
              disable_entrypoint_overwrite = false
              oom_kill_disable = false
              disable_cache = false
              volumes = ["/cache"]
              shm_size = 0


          Example of a full Docker runner configuration:

            [runners.docker]
              host = ""
              hostname = ""
              tls_cert_path = "/Users/ayufan/.boot2docker/certs"
              image = "ruby:2.7"
              memory = "128m"
              memory_swap = "256m"
              memory_reservation = "64m"
              oom_kill_disable = false
              cpuset_cpus = "0,1"
              cpus = "2"
              dns = [""]
              dns_search = [""]
              privileged = false
              userns_mode = "host"
              cap_add = ["NET_ADMIN"]
              cap_drop = ["DAC_OVERRIDE"]
              devices = ["/dev/net/tun"]
              disable_cache = false
              wait_for_services_timeout = 30
              cache_dir = ""
              volumes = ["/data", "/home/project/cache"]
              extra_hosts = ["other-host:"]
              shm_size = 300000
              volumes_from = ["storage_container:ro"]
              links = ["mysql_container:mysql"]
              allowed_images = ["ruby:*", "python:*", "php:*"]
              allowed_services = ["postgres:9", "redis:*", "mysql:*"]
              [[runners.docker.services]]
                name = ""
                alias = "svc1"
                entrypoint = [""]
                command = ["executable","param1","param2"]
              [[runners.docker.services]]
                name = "redis:2.8"
                alias = "cache"
              [[runners.docker.services]]
                name = "postgres:9"
                alias = "postgres-db"
              [runners.docker.sysctls]
                "net.ipv4.ip_forward" = "1"


          If you want to run jobs with the shell executor and use Docker, add the gitlab-runner user to the docker group:

          usermod -aG docker gitlab-runner


          Finally restart your gitlab-runner. 

          sudo gitlab-runner restart


          That completes the configuration. You should now see the runner listed on your GitLab runners page.
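Once the runner shows up, a pipeline can target it through the tags given at registration; a minimal hypothetical .gitlab-ci.yml might look like this (job name and image are illustrative):

```yaml
# hypothetical job targeting the Docker runner registered above
build-job:
  image: node:16          # any image the runner is allowed to pull
  tags:
    - docker              # must match a tag entered during registration
  script:
    - node --version
```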



            kubernetes cert-manager - how to use wildcard ssl as a certificate issuer

            I have a wildcard certificate bought from Namecheap, and I want to use it for all of my subdomains. For this I need a cluster-wide certificate issuer; we are going to use cert-manager. So let's start.

            Step 1: Create a secret in the cert-manager namespace, in a file named ca-secrets.yaml:

            apiVersion: v1
            kind: Secret
            metadata:
              name: ca-key-pair
              namespace: cert-manager
            data:
              tls.crt: <base64 of fullchain-ca.bundle>  # you need the full chain; this may help: cat nise_gov_bd.crt > chain.pem
              tls.key: <base64 of cert.key>

            Unknown-authority problem solution: cat server.crt server.key >> ssl-bundle.crt
            Here server.crt is the crt file only, ssl-bundle.crt is the bundle file, and server.key is the certificate key.

            Some more information:
            fullchain.pem = cert.pem + chain.pem

            Typically you use chain.pem (or the first certificate in it) when you are asked for a CA bundle or CA certificate. For example, for Let's Encrypt we need to use chain.pem as the CA certificate.
            Then the full-chain file is built like this: cat chain.pem cert.pem > fullchain-ca.bundle

            You can generate the values for tls.crt and tls.key with the following commands:

             cat fullchain-ca.bundle | base64 -w0
             cat cert.key | base64 -w0
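As a quick sanity check of the encoding step on a machine without the real certificate files, you can run the same base64 invocation on dummy data (file names here are illustrative):

```shell
# encode a dummy stand-in for fullchain-ca.bundle; -w0 keeps the output on one line
printf 'CERTDATA' > /tmp/dummy-ca.bundle
base64 -w0 < /tmp/dummy-ca.bundle
# the printed string is exactly what goes into the secret's tls.crt field
```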

            Now apply the secret with the following command:

            kubectl apply -f ca-secrets.yaml


            Step 2: Now create a certificate issuer in a file named ca-issuer.yaml:

            apiVersion: cert-manager.io/v1
            kind: ClusterIssuer
            metadata:
              name: k-issuer
              namespace: cert-manager
            spec:
              ca:
                secretName: ca-key-pair

            Here secretName refers to the secret ca-key-pair that we created in step 1.


            Step 3: Now create a certificate in a file named cert.yaml to test the issuer:

            apiVersion: cert-manager.io/v1
            kind: Certificate
            metadata:
              name: test-cert-by-kalyan
            spec:
              secretName: k-key-pair
              dnsNames:
              - "*.default.svc.cluster.local"
              - ""
              isCA: true
              issuerRef:
                name: k-issuer
                kind: ClusterIssuer

            Here the issuerRef.name and issuerRef.kind fields are important.


            If you want to use the issuer with your ingress, just add the cluster-issuer annotation to the ingress metadata.
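The annotation itself did not survive in this post; assuming cert-manager's standard ingress-shim annotation, an ingress using the issuer above would look roughly like this (ingress name and host are hypothetical):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app                                   # hypothetical name
  annotations:
    cert-manager.io/cluster-issuer: k-issuer     # the ClusterIssuer from step 2
spec:
  tls:
  - hosts:
    - app.example.com                            # hypothetical subdomain
    secretName: my-app-tls                       # cert-manager stores the issued cert here
  rules:
  - host: app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-app
            port:
              number: 80
```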


            That's all we need to do. For more information, follow the CA Issuer documentation on the Cert-Manager site. If you have any questions or problems, please comment and I'll reply. Thank you.


                How to Add a New Node to a Kubespray-Managed Production-Ready Kubernetes Cluster

                I have a production-ready Kubernetes cluster managed by Kubespray, and I need to add an additional node to it. Here I am showing you how you can add a node to your cluster. Here is my existing cluster:

                ╰─ kubectl get nodes                                                                                                                                        
                NAME    STATUS   ROLES                  AGE    VERSION
                node1   Ready    control-plane,master   28d    v1.21.4
                node2   Ready    control-plane,master   28d    v1.21.4
                node3   Ready    <none>                 28d    v1.21.4


                Now I want to add a new node4 to the cluster. First of all, I need to edit the existing Ansible hosts file, inventory/mycluster/hosts.yaml:

                # file path - inventory/mycluster/hosts.yaml
                      hosts: {}


                I added the node4 information to the hosts.yaml file above. Here is my node4 information:

                #my node4 information 
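The node entry itself was not preserved here; a hypothetical Kubespray inventory entry for node4 (IP addresses illustrative) would look like this:

```yaml
all:
  hosts:
    node4:
      ansible_host: 192.168.0.14   # hypothetical IP
      ip: 192.168.0.14
      access_ip: 192.168.0.14
  children:
    kube_node:
      hosts:
        node4:
```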


                Now run the cluster.yml file to add the new node to the cluster.

                ansible-playbook -i inventory/mycluster/hosts.yaml cluster.yml -u root -b -l node4



                • -i : inventory file to be used
                • cluster.yml : playbook to deploy a cluster
                • -u root : the user account which we have created on all nodes for password-less ssh access.
                • -b : enable become – sudo access is needed for installing packages, starting services, creating SSL certificates etc.


                Wait for the process to finish.

                All done! Now verify that node4 has been added to the cluster:

                ╰─ kubectl get nodes                                                                                                                                        
                NAME    STATUS   ROLES                  AGE    VERSION
                node1   Ready    control-plane,master   28d    v1.21.4
                node2   Ready    control-plane,master   28d    v1.21.4
                node3   Ready    <none>                 28d    v1.21.4
                node4   Ready    <none>                 102s   v1.21.4

                Now node4 is a part of your cluster.


                    How to generate kubernetes dashboard access token



                    1. Create the dashboard service account

                    Run the following command to create a service account

                    kubectl create serviceaccount kubernetes-dashboard-admin-sa -n kube-system

                    The command creates a service account in the kube-system namespace. Replace kube-system with your own namespace if needed.


                    2. Bind the service account to the cluster-admin role

                    kubectl create clusterrolebinding kubernetes-dashboard-admin-sa --clusterrole=cluster-admin --serviceaccount=kube-system:kubernetes-dashboard-admin-sa
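The two commands above can also be expressed declaratively; a sketch using the same names:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kubernetes-dashboard-admin-sa
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard-admin-sa
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard-admin-sa
  namespace: kube-system
```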


                    3. List secrets

                    kubectl get secrets -n kube-system


                    If you are using Kubernetes 1.23 or above, use the following command to get a token:

                    kubectl -n kube-system create token kubernetes-dashboard-admin-sa

                    Then you don't have to follow step 4.


                    4. Get the token from the secret

                    kubectl describe secret kubernetes-dashboard-admin-sa-token-lj8cc -n kube-system


                    Your secret name may be different. Now copy the token and use it to log in to the Kubernetes dashboard.



                      Deploy a Production Ready Kubernetes Cluster With lxc Container and Kubespray


                      I am going to show a workaround for how you can use LXC containers to create a production-grade cluster. Though it is hard to create a Kubernetes cluster with LXC containers, it is possible. So let's see how we can solve all of those challenges step by step.

                      Step 1: Prepare host machine

                      a) Edit the following file and enable IPv4 packet forwarding:

                      nano /etc/sysctl.conf
                      # Uncomment the next line to enable packet forwarding for IPv4
                      net.ipv4.ip_forward=1


                      b) disable firewall 

                      ufw disable


                      c) disable swap

                      swapoff -a; sed -i '/swap/d' /etc/fstab
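The sed expression above simply deletes every fstab line that mentions swap; you can see its effect safely on a scratch copy (file contents illustrative):

```shell
# run the same sed expression against a throwaway copy instead of the real /etc/fstab
printf '/dev/sda1 / ext4 defaults 0 1\n/swapfile none swap sw 0 0\n' > /tmp/fstab.demo
sed -i '/swap/d' /tmp/fstab.demo
cat /tmp/fstab.demo
# only the non-swap line remains
```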


                      d) update sysctl settings for kubernetes networking

                      cat >>/etc/sysctl.d/kubernetes.conf<<EOF
                      net.bridge.bridge-nf-call-ip6tables = 1
                      net.bridge.bridge-nf-call-iptables = 1
                      EOF
                      sysctl --system


                      Step 2: Create lxc profile

                      config:
                        boot.autostart: "true"
                        linux.kernel_modules: ip_vs,ip_vs_rr,ip_vs_wrr,ip_vs_sh,ip_tables,ip6_tables,netlink_diag,nf_nat,overlay,br_netfilter,nf_conntrack,xt_conntrack
                        raw.lxc: |
                          lxc.mount.auto=proc:rw sys:rw cgroup:rw
                        security.nesting: "true"
                        security.privileged: "true"
                      description: Default LXD profile
                      devices:
                        eth0:
                          name: eth0
                          network: lxdbr0
                          type: nic
                        root:
                          path: /
                          pool: default
                          type: disk
                      name: microk8s
                      used_by:
                      - /1.0/instances/node1
                      - /1.0/instances/node2
                      - /1.0/instances/node3


                      Step 3: Create a Linux container

                      lxc launch -p default -p microk8s ubuntu:21.04 node1


                      Step 4: Inside the container, do the following

                      a) The following commands should return output:

                      conntrack -L
                      modinfo overlay


                      b) If the commands above return errors, there is likely a kernel-related problem. Install packages to fix the kernel issue:

                      sudo apt install linux-generic
                      sudo apt install --reinstall linux-image-$(uname -r);
                      sudo apt install --reinstall linux-modules-$(uname -r);
                      sudo apt install --reinstall linux-modules-extra-$(uname -r);

                      This should fix the kernel-related issues.


                      c) Recent Kubernetes versions want to read from /dev/kmsg, which is not present in the container. You need to instruct systemd to always create a symlink (here to /dev/null) instead:

                      echo 'L /dev/kmsg - - - - /dev/null' > /etc/tmpfiles.d/kmsg.conf

                      If that does not work, run the following:

                      echo 'L /dev/kmsg - - - - /dev/console' > /etc/tmpfiles.d/kmsg.conf


                      If it still does not work, do the following:

                      # Hack required to provision K8s v1.15+ in LXC containers
                      mknod /dev/kmsg c 1 11
                      echo 'mknod /dev/kmsg c 1 11' >> /etc/rc.local
                      chmod +x /etc/rc.local


                      d) If you need to load any kernel module, you can run the following command:

                      # usage: modprobe <module name>
                      modprobe br_netfilter


                      That's all. Now follow the official Kubespray documentation.

                      To access the k8s cluster without exec-ing into the master node

                      Download the kubectl command onto your local machine.

                      which kubectl
                      # output: /usr/bin/kubectl

                      Create the .kube directory:

                      mkdir ~/.kube

                      Copy the config from kmaster into the .kube directory:

                      lxc file pull kmaster/etc/kubernetes/admin.conf ~/.kube/config
                      #check cluster
                      kubectl get nodes


