      How to Run a GitLab Runner on Your Host Machine

      We are going to discuss how to run a GitLab runner on an Ubuntu server machine.

       

      Download Package

      curl -LJO "https://gitlab-runner-downloads.s3.amazonaws.com/latest/deb/gitlab-runner_amd64.deb"
      
      sudo dpkg -i gitlab-runner_amd64.deb

      This will install the GitLab runner on your machine.
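
      You can quickly confirm the package installed correctly; this version check is a small sanity test I am adding, not part of the original steps:

      gitlab-runner --version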

       

      Register a runner

      We can register a runner for an individual repository or as a group runner. To create one, go to

      Settings > CI/CD > Runners

      There you will find the URL and registration token needed to register the runner. Now let's create a runner whose jobs will execute in Docker:

      sudo -E gitlab-runner register
           Enter the GitLab instance URL (for example, https://gitlab.com/): https://gitlab.com/
           Enter the registration token: yourtoken
           Enter a description for the runner: My runner
           Enter tags for the runner (comma-separated): docker, node [your job tag]
           Enter an executor: docker-ssh, parallels, ssh, virtualbox, docker, shell, docker+machine, docker-ssh+machine, kubernetes, custom: docker
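
      Once registration completes, you can confirm it worked. Both subcommands below are standard gitlab-runner commands; this check is an addition to the original write-up:

      # list the runners registered in /etc/gitlab-runner/config.toml
      sudo gitlab-runner list
      # contact GitLab and verify each registered runner's token is still valid
      sudo gitlab-runner verify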

       

      If you need further configuration, edit the config file:

      # nano /etc/gitlab-runner/config.toml
      concurrent = 10   # number of jobs that can run at a time
      check_interval = 0
      
      [session_server]
        session_timeout = 1800
      # first runner
      [[runners]]
        name = "runner root"
        url = "https://gitlab.com/"
        token = "yourtoken"
        executor = "docker"
        [runners.custom_build_dir]
        [runners.cache]
          [runners.cache.s3]
          [runners.cache.gcs]
          [runners.cache.azure]
        [runners.docker]
          tls_verify = false
          image = "ruby:2.7"
          privileged = true
          disable_entrypoint_overwrite = false
          oom_kill_disable = false
          disable_cache = false
          volumes = ["/cache"]
          shm_size = 0
      
      # second runner
      [[runners]]
        name = "Nise T4 Task Runner"
        url = "https://gitlab.com/"
        token = "yourtoken"
        executor = "shell"
        [runners.custom_build_dir]
        [runners.cache]
          [runners.cache.s3]
          [runners.cache.gcs]
          [runners.cache.azure]
        [runners.docker]
          tls_verify = false
          image = "ruby:2.7"
          privileged = true
          disable_entrypoint_overwrite = false
          oom_kill_disable = false
          disable_cache = false
          volumes = ["/cache"]
          shm_size = 0

       

      Example of a full Docker runner config:

      [runners.docker]
        host = ""
        hostname = ""
        tls_cert_path = "/Users/ayufan/.boot2docker/certs"
        image = "ruby:2.7"
        memory = "128m"
        memory_swap = "256m"
        memory_reservation = "64m"
        oom_kill_disable = false
        cpuset_cpus = "0,1"
        cpus = "2"
        dns = ["8.8.8.8"]
        dns_search = [""]
        privileged = false
        userns_mode = "host"
        cap_add = ["NET_ADMIN"]
        cap_drop = ["DAC_OVERRIDE"]
        devices = ["/dev/net/tun"]
        disable_cache = false
        wait_for_services_timeout = 30
        cache_dir = ""
        volumes = ["/data", "/home/project/cache"]
        extra_hosts = ["other-host:127.0.0.1"]
        shm_size = 300000
        volumes_from = ["storage_container:ro"]
        links = ["mysql_container:mysql"]
        allowed_images = ["ruby:*", "python:*", "php:*"]
        allowed_services = ["postgres:9", "redis:*", "mysql:*"]
        [[runners.docker.services]]
          name = "registry.example.com/svc1"
          alias = "svc1"
          entrypoint = ["entrypoint.sh"]
          command = ["executable","param1","param2"]
        [[runners.docker.services]]
          name = "redis:2.8"
          alias = "cache"
        [[runners.docker.services]]
          name = "postgres:9"
          alias = "postgres-db"
        [runners.docker.sysctls]
          "net.ipv4.ip_forward" = "1"

       

      If you want to run jobs with the shell executor, add the gitlab-runner user to the docker group:

      sudo usermod -aG docker gitlab-runner
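
      You can then confirm that the gitlab-runner user can actually reach the Docker daemon; this verification step is my addition:

      # runs docker info as the gitlab-runner user; it should print daemon details, not a permission error
      sudo -u gitlab-runner docker info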

       

      Finally, restart gitlab-runner:

      sudo gitlab-runner restart

       

      That finishes the configuration. You will now see the runner listed under your project's runners in GitLab.

       


        Kubernetes cert-manager: How to Use a Wildcard SSL Certificate as a Certificate Issuer

        I have a wildcard certificate bought from Namecheap, and now I am going to use it for all of my sub-domains. For this I need a cluster-wide certificate issuer, and we are going to use cert-manager. So let's start.

        Step 1: Create a secret in the cert-manager namespace, in a file named ca-secrets.yaml:

        apiVersion: v1
        kind: Secret
        metadata:
          name: ca-key-pair
          namespace: cert-manager
        data:
          tls.crt: <base64 of the full certificate chain>
          tls.key: <base64 of the certificate key>

        Note that tls.crt needs the full certificate chain. You can build the chain by concatenating the CA bundle and the certificate, for example: cat nise_gov_bd.ca-bundle nise_gov_bd.crt > chain.pem

        If you hit an "unknown authority" problem, concatenate everything into one bundle: cat server.crt server.ca-bundle server.key >> ssl-bundle.crt. Here server.crt is the certificate file only, server.ca-bundle is the bundle file, and server.key is the certificate key.

         

        You can generate the tls.crt and tls.key values with the following commands:

         cat fullchain-ca.bundle | base64 -w0
         cat cert.key | base64 -w0

        Now apply the secret with the following command:

        kubectl apply -f ca-secrets.yaml

         

        Step 2: Now create a certificate issuer in a file named ca-issuer.yaml:

        apiVersion: cert-manager.io/v1alpha2
        kind: ClusterIssuer
        metadata:
          name: k-issuer
          namespace: cert-manager
        spec:
          ca:
            secretName: ca-key-pair

        Here secretName refers to the ca-key-pair secret we created in step 1.
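
        Apply the issuer and check that it reports Ready; the status check is an extra verification step I am suggesting:

        kubectl apply -f ca-issuer.yaml
        kubectl get clusterissuer k-issuer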

         

        Step 3: Now create a certificate in a file named cert.yaml to test the issuer:

        apiVersion: cert-manager.io/v1alpha2
        kind: Certificate
        metadata:
          name: test-cert-by-kalyan
        spec:
          secretName: k-key-pair
          dnsNames:
          - "*.default.svc.cluster.local"
          - "core2.default.com"
          isCA: true
          issuerRef:
            name: k-issuer
            kind: ClusterIssuer

        Here issuerRef.name and issuerRef.kind are the important parts.
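
        Apply it and wait for the certificate to become Ready; cert-manager will then store the signed key pair in the k-key-pair secret. These verification commands are an added suggestion:

        kubectl apply -f cert.yaml
        kubectl get certificate test-cert-by-kalyan
        kubectl describe certificate test-cert-by-kalyan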

         

        If you want to use it with your Ingress, just add this annotation:

        cert-manager.io/cluster-issuer: k-issuer
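
        For reference, here is a minimal sketch of an Ingress that uses the issuer; the host name and backend service are hypothetical placeholders:

        kubectl apply -f - <<'EOF'
        apiVersion: networking.k8s.io/v1
        kind: Ingress
        metadata:
          name: demo-ingress
          annotations:
            cert-manager.io/cluster-issuer: k-issuer
        spec:
          tls:
          - hosts:
            - demo.example.com
            secretName: demo-tls   # cert-manager will create and fill this secret
          rules:
          - host: demo.example.com
            http:
              paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: demo-service
                    port:
                      number: 80
        EOF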

        That's all we need to do. For more information, follow the cert-manager CA issuer documentation. If you have any questions or problems, please comment and I'll reply. Thank you.


            How to Add a New Node to a Kubespray-Managed Production-Ready Kubernetes Cluster

            I have a production-ready Kubernetes cluster managed by Kubespray, and now I need to add an additional node to it. Here I am showing how you can do that. This is my existing cluster:

            ╰─ kubectl get nodes
            NAME    STATUS   ROLES                  AGE    VERSION
            node1   Ready    control-plane,master   28d    v1.21.4
            node2   Ready    control-plane,master   28d    v1.21.4
            node3   Ready    <none>                 28d    v1.21.4

             

            Now I want to add a new node, node4, to the cluster. First of all I need to edit the existing Ansible hosts file, inventory/mycluster/hosts.yaml:

            # file path - inventory/mycluster/hosts.yaml
            all:
              hosts:
                node1:
                  ansible_host: 10.180.63.193
                  ip: 10.180.63.193
                  access_ip: 10.180.63.193
                node2:
                  ansible_host: 10.180.63.151
                  ip: 10.180.63.151
                  access_ip: 10.180.63.151
                node3:
                  ansible_host: 10.180.63.30
                  ip: 10.180.63.30
                  access_ip: 10.180.63.30
                node4:
                  ansible_host: 10.180.63.160
                  ip: 10.180.63.160
                  access_ip: 10.180.63.160
              children:
                kube_control_plane:
                  hosts:
                    node1:
                    node2:
                kube_node:
                  hosts:
                    node1:
                    node2:
                    node3:
                    node4:
                etcd:
                  hosts:
                    node1:
                    node2:
                    node3:
                k8s_cluster:
                  children:
                    kube_control_plane:
                    kube_node:
                calico_rr:
                  hosts: {}

             

            I added the node4 entry to the hosts.yaml file above. Here is my node4 information:

            # my node4 information
            
                node4:
                  ansible_host: 10.180.63.160
                  ip: 10.180.63.160
                  access_ip: 10.180.63.160

             

            Now run the cluster.yml playbook to add the new node to the cluster:

            ansible-playbook -i inventory/mycluster/hosts.yaml cluster.yml -u root -b -l node4

             

            Where,

            • -i : the inventory file to use
            • cluster.yml : the playbook that deploys the cluster
            • -u root : the user account we created on all nodes for password-less SSH access
            • -b : enable become; sudo access is needed for installing packages, starting services, creating SSL certificates, etc.
            • -l node4 : limit the run to the new node so the rest of the cluster is untouched
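
            Kubespray also ships a scale.yml playbook intended specifically for adding worker nodes; as a sketch of that alternative, with the same inventory and flags assumed:

            ansible-playbook -i inventory/mycluster/hosts.yaml scale.yml -u root -b -l node4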

             

            Wait for the process to finish.

            All done! Now verify that node4 has joined the cluster:

            ╰─ kubectl get nodes
            NAME    STATUS   ROLES                  AGE    VERSION
            node1   Ready    control-plane,master   28d    v1.21.4
            node2   Ready    control-plane,master   28d    v1.21.4
            node3   Ready    <none>                 28d    v1.21.4
            node4   Ready    <none>                 102s   v1.21.4

            Now node4 is part of your cluster.
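
            Optionally, you can label node4 so it shows a role instead of <none>; this cosmetic step is my addition, not part of the Kubespray run:

            kubectl label node node4 node-role.kubernetes.io/worker=worker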


                How to Generate a Kubernetes Dashboard Access Token

                 

                1. Create the dashboard service account

                Run the following command to create a service account:

                kubectl create serviceaccount kubernetes-dashboard-admin-sa -n kube-system

                The command creates a service account in the kube-system namespace. Replace kube-system with your own namespace if needed.

                 

                2. Bind the service account to the cluster-admin role

                kubectl create clusterrolebinding kubernetes-dashboard-admin-sa --clusterrole=cluster-admin --serviceaccount=kube-system:kubernetes-dashboard-admin-sa

                 

                3. List the secrets

                kubectl get secrets -n kube-system

                 

                4. Get the token from the secret

                kubectl describe secret kubernetes-dashboard-admin-sa-token-lj8cc -n kube-system

                 

                Your secret name will be different. Now copy the token and use it to log in to the Kubernetes dashboard.
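
                If you prefer not to look up the secret name manually, here is a one-liner sketch; it assumes the token secret was auto-created for the service account, as in the steps above:

                kubectl -n kube-system get secret \
                  $(kubectl -n kube-system get sa kubernetes-dashboard-admin-sa -o jsonpath='{.secrets[0].name}') \
                  -o jsonpath='{.data.token}' | base64 -d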

                 


                  Deploy a Production-Ready Kubernetes Cluster with LXC Containers and Kubespray

                  I am going to show the workarounds you need to use LXC containers for a production-grade cluster. It is hard to create a Kubernetes cluster with LXC containers, but it is possible. Let's see how to solve the challenges step by step.

                  Step 1: Prepare the host machine

                  a) Edit the following file:

                  nano /etc/sysctl.conf
                  # Uncomment the next line to enable packet forwarding for IPv4
                  net.ipv4.ip_forward=1

                   

                  b) Disable the firewall:

                  ufw disable

                   

                  c) Disable swap:

                  swapoff -a; sed -i '/swap/d' /etc/fstab

                   

                  d) Update sysctl settings for Kubernetes networking:

                  cat >>/etc/sysctl.d/kubernetes.conf<<EOF
                  net.bridge.bridge-nf-call-ip6tables = 1
                  net.bridge.bridge-nf-call-iptables = 1
                  EOF
                  sysctl --system

                   

                  Step 2: Create an LXC profile with the following config:

                  config:
                    boot.autostart: "true"
                    linux.kernel_modules: ip_vs,ip_vs_rr,ip_vs_wrr,ip_vs_sh,ip_tables,ip6_tables,netlink_diag,nf_nat,overlay,br_netfilter,nf_conntrack,xt_conntrack
                    raw.lxc: |
                      lxc.apparmor.profile=unconfined
                      lxc.mount.auto=proc:rw sys:rw cgroup:rw
                      lxc.cgroup.devices.allow=a
                      lxc.cap.drop=
                    security.nesting: "true"
                    security.privileged: "true"
                  description: Default LXD profile
                  devices:
                    eth0:
                      name: eth0
                      network: lxdbr0
                      type: nic
                    root:
                      path: /
                      pool: default
                      type: disk
                  name: microk8s
                  used_by:
                  - /1.0/instances/node1
                  - /1.0/instances/node2
                  - /1.0/instances/node3
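
                  One way to apply this profile, assuming you saved the config above as microk8s-profile.yaml (a placeholder file name):

                  lxc profile create microk8s
                  lxc profile edit microk8s < microk8s-profile.yaml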

                   

                  Step 3: Create a Linux container:

                  lxc launch -p default -p microk8s ubuntu:21.04 node1

                   

                  Step 4: Inside the container, do the following

                  a) The following commands should return output:

                  conntrack -L
                  modinfo overlay

                   

                  b) If the commands above return errors, there is likely a kernel-related problem. Install and reinstall the kernel packages to fix it:

                  sudo apt install linux-generic
                  sudo apt install --reinstall linux-image-$(uname -r);
                  sudo apt install --reinstall linux-modules-$(uname -r);
                  sudo apt install --reinstall linux-modules-extra-$(uname -r);

                  This should fix the kernel-related issue.

                   

                  c) Recent Kubernetes versions want to read from /dev/kmsg, which is not present in the container. You need to instruct systemd to create it as a symlink instead, first pointing at /dev/null:

                  echo 'L /dev/kmsg - - - - /dev/null' > /etc/tmpfiles.d/kmsg.conf

                  If that does not work, point the symlink at /dev/console instead:

                  echo 'L /dev/kmsg - - - - /dev/console' > /etc/tmpfiles.d/kmsg.conf
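
                  To apply a tmpfiles rule immediately without rebooting, you can ask systemd to process it; this extra command is my suggestion:

                  systemd-tmpfiles --create /etc/tmpfiles.d/kmsg.conf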

                   

                  If it still does not work, do the following:

                  # Hack required to provision K8s v1.15+ in LXC containers
                  mknod /dev/kmsg c 1 11
                  echo 'mknod /dev/kmsg c 1 11' >> /etc/rc.local
                  chmod +x /etc/rc.local

                   

                  d) If you need to load any module, you can run the following command:

                  # modprobe <module name>
                  modprobe br_netfilter
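
                  To make the module load persist across reboots, you can also drop it into modules-load.d; this persistence step is an addition on my part:

                  echo 'br_netfilter' > /etc/modules-load.d/k8s.conf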

                   

                  That's all. Now follow the official Kubespray documentation.

                  To access the k8s cluster without exec-ing into the master node

                  Install kubectl on your local machine and confirm it is on your PATH:

                  which kubectl
                  # output: /usr/bin/kubectl

                  Create the .kube directory:

                  mkdir ~/.kube

                  Copy the config from kmaster into the .kube directory:

                  lxc file pull kmaster/etc/kubernetes/admin.conf ~/.kube/config
                  
                  #check cluster
                  kubectl get nodes

                   

                   


                    How to Permanently Solve the "Temporary failure in name resolution" Issue

                    Sometimes we face an issue like this:

                    kalyan@ubuntu:~$ ping jadukori.com
                    ping: jadukori.com: Temporary failure in name resolution

                     

                    We can solve the issue temporarily by editing /etc/resolv.conf.

                    Just add this line to the file:

                    nameserver 8.8.8.8

                     

                    But this fix does not survive a system reboot.

                    So let's solve the problem permanently.

                    Step 1: Install the following package:

                    sudo apt install resolvconf

                    Step 2: Edit the following file:

                    nano /etc/resolvconf/resolv.conf.d/head

                    Add the following records to the file:

                    nameserver 8.8.8.8
                    nameserver 8.8.4.4
                    nameserver 1.1.1.1
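
                    If you would rather not reboot, you can regenerate /etc/resolv.conf immediately; these commands are a suggested shortcut, assuming the resolvconf package installed above provides its systemd service:

                    sudo systemctl enable --now resolvconf.service
                    sudo resolvconf -u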

                     

                    Now save the file and reboot the system (or apply the change immediately as shown above).

                    Congratulations! We have solved the problem permanently.

                     


                      Prime Minister’s Education Trust Job Circular
                      Public job
                      Job Type: Full-time
                      Salary: 35,500 - 67,010 Tk. per month
                      Application Deadline: 2021-01-13

                      Applications are invited from genuine Bangladeshi nationals for direct appointment to the following post in the Prime Minister's Education Assistance Trust. Job summary at a glance:

                       

                      Name of Organisation: Prime Minister’s Education Assistance Trust
                      Circular Publish Date: 21st December 2020
                      Source: Official Website
                      Job Type: Government Job
                      Post Name: Programmer
                      Number of Posts: 1
                      Nature of Job: Full time
                      Age: 35 years
                      Educational Qualification: As per circular
                      Salary: 35,500 - 67,010
                      Job Location: Dhaka
                      Official Website to Apply: http://www.pmeat.gov.bd/ or http://pmeat.teletalk.com.bd
                      Deadline: 13th Jan 2021

                       

                       

                       
