INDEX
########################################################### 2024-01-03 13:30 ###########################################################
Devops Journey... ArgoCD Starter Guide
https://www.youtube.com/watch?v=JLrR9RV9AFA
Declarative GitOps tool for CD (continuous deployment)
Compares actual state to desired state (in git repo)
Have a separate repo for app (code to test and build) and config (manifests, charts etc.)
[app changes -> test -> build -> push to container repo -> update config repo -> argoCD]
Set up argocd:
kubectl create namespace argocd
kubectl apply -n argocd -f ARGO_URL    # Downloads argoCD install manifest and runs it in kube
kubectl get all -n argocd
kubectl port-forward service/argocd-server -n argocd 8080:443    # Runs temporary port forwarding
# Need to extract the password from secrets and base64 decode it - use this on the website
kubectl -n argocd get secret argocd-initial-admin-secret \
    -o jsonpath="{.data.password}" | base64 -d
Now go on the website and do "New App". "Sync policy" defines when changes will be applied
Usually people go manual with prod and automatic with dev/test.
Source is the git repo to track, destination is the choice of kubernetes cluster
"Out of Sync" (in manual mode) means you need to press "sync" - builds app
If you lower the replicas value it gracefully shuts down the excess pods
To rollback, go to "History and Rollback", find the revision you want, press the 3 dots and rollback
When Kustomize changes it makes a new config map (so changes are detectable) - Helm does not
Can also control argocd from the command line
argocd login 127.0.0.1:8080
argocd app list    # Lists applications being run
# Can create a new app (APP_NAME=argocd/webapp-kustom-prod) to track
argocd app create webapp-kustom-prod --repo URL --path kustom-webapp/overlays/prod \
    --dest-server https://kubernetes.default.svc --dest-namespace prod
kubectl create namespace prod    # Need to make sure the namespace exists
argocd app sync APP_NAME
argocd app diff APP_NAME                  # See what the difference is before syncing
argocd app history APP_NAME               # Get change history of an app
argocd app rollback APP_NAME CHANGE_ID    # Get id from the history
argocd app get APP_NAME                   # Get info about an app
argocd app delete APP_NAME                # Deletes app and clears everything
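The same webapp-kustom-prod app can also be declared as an Application manifest instead of using "argocd app create" - a minimal sketch (repo URL and path are the placeholders from the CLI example above, not taken from the video):
# webapp-kustom-prod.yaml (apply with: kubectl apply -n argocd -f webapp-kustom-prod.yaml)
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: webapp-kustom-prod
  namespace: argocd
spec:
  project: default
  source:
    repoURL: URL                         # Same git repo as the --repo flag
    path: kustom-webapp/overlays/prod    # Same as --path
    targetRevision: HEAD
  destination:
    server: https://kubernetes.default.svc    # Same as --dest-server
    namespace: prod                           # Same as --dest-namespace
  # Leave syncPolicy out for manual sync (press "sync" in the UI or run "argocd app sync")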
Devops Toolkit... Argo CD - Applying GitOps Principles
https://www.youtube.com/watch?v=vpWQeoaiRM4
This assumes kubernetes and argoCD are already installed - and you have some app repositories
Define the kubernetes config for tracking 2 application repos
kubectl create namespace production

# ./helm/Chart.yaml
apiVersion: v1
description: Production env
name: production
version: "1.0.0"

# ./helm/templates/devops-toolkit.yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: devops-toolkit
  namespace: argocd
  finalizers:
    - resources-finalizer.argocd.argoproj.io
spec:
  project: default
  source:    # This defines the code repo to read from
    path: helm
    repoURL: https://github.com/vfarcic/devops-toolkit.git
    targetRevision: HEAD
    helm:
      values: |
        image:
          tag: latest
        ingress:
          host: devops-toolkit.IP_ADDRESS.xip.io    # Replace this with an actual IP
      version: v3
  destination:
    namespace: production
    server: https://kubernetes.default.svc    # Run in default cluster
  syncPolicy:
    automated:          # Automatically sync kubernetes with changes
      selfHeal: true    # Undo changes that are manually done to kubernetes
      prune: true       # Autodelete objects that are removed from git

cp ./helm/templates/devops-toolkit.yaml ./helm/templates/devops-paradox.yaml
# ./helm/templates/devops-paradox.yaml
# SAME as devops-toolkit.yaml but change name, repoURL and ingress host

Now create a separate kubernetes config repo and config file
So essentially whenever the kubernetes config is updated, rebuild everything
cp ./helm/templates/devops-toolkit.yaml ./apps.yaml
# ./apps.yaml - Define a main repo for managing everything
# SAME as devops-toolkit.yaml but delete all helm parts
git init
gh repo create --public    # argocd-15-min, None, Yes
git add .
git commit -m "Initial Commit"
git push --set-upstream origin master    # Push to newly made repo
kubectl apply --filename apps.yaml       # Create a single environment

# Change a config file, commit and check kubernetes to see what version is running
git add .; git commit -m "New release"; git push
kubectl --namespace production get deployment devops-toolkit-devops-toolkit \
    --output jsonpath="{.spec.template.spec.containers[0].image}"
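A rough sketch of what apps.yaml probably ends up looking like after "delete all helm parts" - assuming the config repo was named argocd-15-min as above (owner and repo name are guesses, not from the video):
# ./apps.yaml (app-of-apps: one Application that points at the chart containing the two Application templates)
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: production
  namespace: argocd
  finalizers:
    - resources-finalizer.argocd.argoproj.io
spec:
  project: default
  source:
    path: helm    # The chart with devops-toolkit.yaml and devops-paradox.yaml inside
    repoURL: https://github.com/GITHUB_OWNER/argocd-15-min.git
    targetRevision: HEAD
  destination:
    namespace: production
    server: https://kubernetes.default.svc
  syncPolicy:
    automated:
      selfHeal: true
      prune: true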
########################################################### 2024-01-04 17:30 ###########################################################
Devops Toolkit... Argo Events: Event-Based Dependency Manager for Kubernetes
https://www.youtube.com/watch?v=sUPkGChvD54
Events are async - publishing and consuming should be decoupled - this is what kubernetes is
kubectl create namespace argo-events
kubectl apply --filename ...                            # Download and apply the argo-events manifest
kubectl apply --namespace argo-events --filename ...    # Download also "eventbus"
git clone https://github.com/vfarcic/argo-events-demo.git; cd argo-events-demo    # Example repo
Now create an event source (webhook which creates http events)

# event-source.yaml
apiVersion: argoproj.io/v1alpha1
kind: EventSource
metadata:
  name: webhook
spec:
  service:
    ports:
      - port: 12000
        targetPort: 12000
  webhook:                        # This is a type of event
    devops-toolkit:               # Name of the event
      port: "12000"
      endpoint: /devops-toolkit   # URL endpoint
      method: POST

# Run the webhook and expose the port for it
kubectl --namespace argo-events apply --filename event-source.yaml
kubectl --namespace argo-events get pods    # Get the name of the eventsource pod
export EVENTPOD=...                         # Name of the pod
kubectl --namespace argo-events port-forward $EVENTPOD 12000:12000 &    # Open port in bg
# Actually query the endpoint
curl -X POST -H "Content-Type: application/json" \
    -d '{"message": "My first webhook"}' http://localhost:12000/devops-toolkit
# Response will be "success" - received an event but no info about whether it uses it

Now create a sensor (something that responds to events)
# sensor.yaml
apiVersion: argoproj.io/v1alpha1
kind: Sensor
metadata:
  name: webhook
spec:
  template:
    serviceAccountName: argo-events-sa
  dependencies:
    - name: payload                # This is the name of the data object
      eventSourceName: webhook
      eventName: devops-toolkit    # Links to the webhook itself
  triggers:                        # This is what is triggered when the webhook has an event
    - template:
        name: payload
        k8s:                       # This is just a type of trigger - do something with kubernetes
          group: ""
          version: v1
          resource: pods
          operation: create        # Create a pod
          source:                  # Type of pod to create
            resource:
              apiVersion: v1
              kind: Pod
              metadata:
                generateName: payload-    # Prefix the new pod name
                labels:
                  app: payload
              spec:
                containers:        # Make a basic alpine container that just echoes
                  - name: hello
                    image: alpine
                    command: ["echo"]
                    args: ["This is the message you sent me:\n", ""]    # 2nd (empty) arg = payload data
                restartPolicy: Never
          parameters:
            - src:
                dependencyName: payload
                dataKey: body.message
              dest: spec.containers.0.args.1    # Pass to 1st container's 2nd arg

kubectl --namespace argo-events apply --filename sensor.yaml
# Now rerun the curl command and it will build a pod - so check the list
kubectl --namespace argo-events get pods
kubectl --namespace argo-events logs --selector app=payload    # Gets i/o data from trigger
Other triggers could be things like argoworkflow (feed back into itself), slack or logs
The event bus is what communicates these events, independent of what they actually are
Could get github to respond to push webhooks and trigger workflows - event based CICD
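A quick way to sanity-check the whole webhook -> sensor -> pod chain once both manifests are applied (just re-using the commands above in one go; pod names will differ per run):
curl -X POST -H "Content-Type: application/json" \
    -d '{"message": "event number two"}' http://localhost:12000/devops-toolkit
sleep 5    # Give the sensor a moment to create the pod
kubectl --namespace argo-events get pods --selector app=payload
kubectl --namespace argo-events logs --selector app=payload --tail=5
# Expect "This is the message you sent me: event number two" in the logs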
########################################################### 2024-01-09 23:15 ###########################################################
That Devops Guy... Intro to Flux CD
https://www.youtube.com/watch?v=X5W_706-jSY
Gitops is a system where git is the single source of truth for an infrastructure
Instead of pushing changes to the cluster, changes are pulled from git and applied
# Install a light kubernetes cluster inside a docker container, for testing
kind create cluster --name fluxcd --image kindest/node:v1.26.3
# Run a small alpine container
docker run -it --rm -v ${HOME}:/root/ -v ${PWD}:/work -w /work --net host alpine sh
# Install kubectl with curl
kubectl get nodes
"Bootstrapping" is getting flux to manage itself through git - look at the github docs
Can bootstrap whenever we need to store changes into git
# Download flux cli from github (curl it, untar, mv to /usr/local/bin and chmod)
flux check --pre           # Checks cluster is compatible with flux
export GITHUB_TOKEN=...    # Get access token from github
flux bootstrap github --token-auth --owner=GITHUB_OWNER --repository=GITHUB_REPO \
    --path=PATH_IN_REPO --personal --branch BRANCH_NAME
Usually standard to have multiple repos for different environments
Look in flux docs "Ways of structuring your repos" - monorepo is all in a single repo
flux check                                  # Checks cluster is okay
kubectl -n flux-system get GitRepository    # Object pointing to github repo
kubectl -n flux-system get Kustomization    # Where to find yaml in the git repo
Flux only does CD (hosting the code), not CI (building the code)
# Manually do the CI steps: building and pushing images to k8
docker build . -t example-app-1:0.0.1
kind load docker-image example-app-1:0.0.1 --name fluxcd
Tell flux what git repo to point to in a manifest file

# gitrepository.yaml
apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: example-app-1
  namespace: default
spec:
  interval: 1m0s
  ref:
    branch: fluxcd-2022
  url: GITHUB_URL    # Full http url

# kustomization.yaml
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: example-app-1
  namespace: default
spec:
  interval: 15m
  path: "./kubernetes/fluxcd/repositorie/example-app-1/deploy"    # Where flux looks in the repo
  prune: true
  sourceRef:
    kind: GitRepository
    name: example-app-1

kubectl -n default apply -f repository/infra-repo/apps/example-app-1/gitrepository.yaml
kubectl -n default apply -f repository/infra-repo/apps/example-app-1/kustomization.yaml
kubectl get all    # Check the resources being deployed
Only thing the developer should have to do is change the code and push it
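Some flux CLI commands that are handy for watching the above reconcile (assuming the GitRepository/Kustomization names used above; changes are not instant because of the intervals):
flux get sources git       # Shows GitRepository objects and the last fetched revision
flux get kustomizations    # Shows whether each Kustomization applied cleanly
flux reconcile kustomization example-app-1 --with-source    # Force a sync instead of waiting for the interval
flux logs --level=error    # Check controller errors if nothing gets deployed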
########################################################### 2024-01-11 ###########################################################
Cloud Champ... Learn Complete GitLab CI/CD
https://www.youtube.com/watch?v=JWXVijJfnHc
Create account and validate payment for pipelines (this is using the public version)
Create a project - with no deployment - make a file for gitlab ci

# .gitlab-ci.yml (look at gitlab docs)
build_job:
  script:
    - echo "Hello from gitlab"

# Commit this and it will tick saying "CI config is valid" - Build/Pipelines
# Look at pipelines -> jobs -> see logs for build
CI is run with ruby on gitlab "runners"
Test this through building a node js program

# .gitlab-ci.yml
build:
  script:
    - apt update -y
    - apt install npm -y
    - npm install
deploy:
  script:
    - apt update -y
    - apt install nodejs -y
    - node app.js

# Make project on gitlab, copy https link of repo
git remote set-url origin URL
git remote -v
git branch feature; git checkout feature    # Change to "feature" branch
git add .; git commit -m "app"; git push -u origin feature    # Push code changes
Now go on pipelines on gitlab and there are "build" and "deploy" jobs
The "build" pipeline should be run first, so make stages

# (extending) .gitlab-ci.yml
stages:
  - build_stage
  - deploy_stage
build:
  stage: build_stage
  script: ...
deploy:
  stage: deploy_stage
  script: ...

# Commit this and the pipeline now has order - but the jobs do not share data between them
Use artifacts to share data between jobs

# (extending) .gitlab-ci.yml
...
build:
  ...
  artifacts:
    paths:
      - node_modules
      - package-lock.json
...

# Commit and rerun - in the pipeline you should see artifacts (can browse them)
Simplify the gitlab process by running directly in node

# .gitlab-ci.yml
stages:
  - build_stage
  - deploy_stage
build:
  stage: build_stage
  image: node    # Run using node docker image
  script:
    - npm install
  artifacts:
    paths:
      - node_modules
      - package-lock.json
deploy:
  stage: deploy_stage
  image: node
  script:
    - node app.js > /dev/null 2>&1 &    # Run application in background

# Commit and this should actually deploy the site
In repository, merge "feature" to "main" - approve and delete old branch - reruns pipeline
Gitlab uses shared runners but you can use your own - these execute jobs in CICD
Settings - CI/CD - Runners - "New project runner" - (pick OS)
Can run your own runner by installing "Gitlab Runner" - can make one in AWS
Open EC2 instance and copy steps in "New project runner" for Linux
dpkg --print-architecture    # Get cpu architecture
# Follow steps given to download and install - then register
# Set tags so in your pipelines you can decide what runs where
# Executor is anything from kubernetes to shell
sudo gitlab-runner status    # Check if service is running - linked to gitlab
vim /home/gitlab-runner/.bash_logout    # Comment out these items (breaks pipelines)
# Now on gitlab it shows an "assigned project runner"
Try making a CI pipeline for a flask app - commit code to repo in "app.py"

# Dockerfile
FROM python:3.8.0-slim
WORKDIR /app
ADD . /app
RUN pip install --trusted-host pypi.python.org Flask
ENV NAME Mark
CMD ["python", "app.py"]

# .gitlab-ci.yml
stages:
  - build_stage
  - deploy_stage
build:
  stage: build_stage
  script:
    - docker --version    # Display version in logs
    - docker build -t pyapp .
  tags:    # Use tags assigned to the runner
    - ec2
    - server
deploy:
  stage: deploy_stage
  script:
    - docker run -d --name pythoncontainer -p 80:8080 pyapp
  tags:
    - ec2
    - server

# Make sure docker is installed on the runner and commit this change to trigger the pipeline
There are predefined variables you can use in gitlab (e.g. CI_COMMIT_MESSAGE)

# .gitlab-ci.yml
demo:
  script:
    - echo $CI_JOB_NAME
    - echo $GITLAB_USER_ID

# Build this and it just pastes these values to the console
Can define your own variables in a "variables" section

# .gitlab-ci.yml
variables:
  Name: "VariableName"
  Message: "this is the variable content"
demo:
  script:
    - echo "This is $Name and $Message"    # This is VariableName and this is the variable content

Can also set global variables in settings - for protected/secret keys
# Say you put in the variables $DOCKER_USERNAME and $DOCKER_TOKEN for Dockerhub
docker stop pythoncontainer && docker rm pythoncontainer
docker run -d --name pythoncontainer -p 80:8080 pyapp
docker login -u $DOCKER_USERNAME -p $DOCKER_TOKEN
docker tag pyapp $DOCKER_USERNAME/pyapp
docker push $DOCKER_USERNAME/pyapp
Look at docs for other keywords like "before_script" and "after_script"
Can set timeouts and schedules for pipelines to run
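A small sketch of those extra keywords (before_script, after_script, a job timeout and a schedule-only rule) - untested, just illustrating the shape from the gitlab docs:
# .gitlab-ci.yml
nightly_job:
  timeout: 10m    # Fail the job if it runs longer than this
  rules:
    - if: $CI_PIPELINE_SOURCE == "schedule"    # Only run from a pipeline schedule
  before_script:
    - echo "runs before the main script"
  script:
    - echo "main work happens here"
  after_script:
    - echo "runs even if the script above failed"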
That Devops Guy... Intro to Drone CI
https://www.youtube.com/watch?v=myCcJJ_Fk10
DroneCI is an automated pipeline controller - can run on any platform for any code
Isolates build steps in docker containers and can run in kubernetes
First thing you need is a drone server with a public ip (ingress service)
Then on github register an OAuth app which allows webhooks to the drone server (through ingress)
Pipelines run commands on runners - the k8 runner creates jobs using pods by linking to the k8 api
DroneCI needs a database in the backend (recommends postgres)

# postgres.yaml (define the database needed for DroneCI)
apiVersion: v1
kind: ConfigMap    # Define the credentials for postgres
metadata:
  name: postgres-config
  labels:
    app: postgres
data:
  POSTGRES_DB: postgresdb
  POSTGRES_USER: postgresadmin
  POSTGRES_PASSWORD: admin123    # probably better to use a secret
---
apiVersion: apps/v1
kind: StatefulSet    # Run stateful set of postgres pods
metadata:
  name: postgres
spec:
  serviceName: "postgres"
  selector:
    matchLabels:
      app: postgres
  replicas: 1
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:10.4
          imagePullPolicy: "IfNotPresent"
          ports:
            - containerPort: 5432
          envFrom:
            - configMapRef:
                name: postgres-config
---
apiVersion: v1
kind: Service    # Expose postgres to other services
metadata:
  name: postgres
  labels:
    app: postgres
spec:
  selector:
    app: postgres
  ports:
    - protocol: TCP
      name: http
      port: 5432
      targetPort: 5432

kubectl create ns drone
kubectl apply -f drone-ci/postgres/postgres.yaml    # Apply postgres
Now need to set up the network routing for droneCI

# droneserver-service.yaml
apiVersion: v1
kind: Service
metadata:
  name: droneserver
  labels:
    app: drone-server
spec:
  type: ClusterIP    # As it is going to be exposed with ingress - alt set to "LoadBalancer"
  selector:
    app: drone-server
  ports:
    - protocol: TCP
      name: http
      port: 80
      targetPort: 80
    - protocol: TCP
      name: https
      port: 443
      targetPort: 443

# droneserver-ingress.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: drone-server
  annotations:
    kubernetes.io/ingress.class: "traefik"
    traefik.ingress.kubernetes.io/frontend-entry-points: http,https
    traefik.ingress.kubernetes.io/redirect-entry-point: https
    traefik.ingress.kubernetes.io/redirect-permanent: "true"    # Force https
spec:
  rules:
    - host: drone.marceldempers.dev    # Points to custom domain (publicly exposed)
      http:
        paths:
          - backend:
              serviceName: droneserver
              servicePort: 80
            path: /

kubectl apply -f drone-ci/server/droneserver-service.yaml    # Expose pod to k8
kubectl apply -f drone-ci/server/droneserver-ingress.yaml    # Expose k8 to public
Now link OAuth with github - go in "developer settings" and register a new OAuth app
Use as homepage whatever the "host" is in the ingress - host or IP
Callback is "https://..../login" so users can login through github on /login
Gives you a client ID and a client secret - need to make a shared secret
docker run -it --rm debian:buster bash
apt update && apt install -y openssl    # Install openssl in docker
openssl rand -hex 16 | base64 -w 0      # Generate shared secret (ignore any ==)

# droneserver-secrets.yaml
apiVersion: v1
kind: Secret
metadata:
  name: drone-server-secret
type: Opaque
data:    # All of this should be pre-encoded to base64
  DRONE_GITHUB_CLIENT_ID: ...        # From Github OAuth
  DRONE_GITHUB_CLIENT_SECRET: ...    # From Github OAuth
  DRONE_RPC_SECRET: ...              # From that "openssl" command
  DRONE_DATABASE_DATASOURCE: ...     # Do "postgres://USERNAME:PASS@postgres:5432/DB"
  DRONE_USER_CREATE: ...             # Do "username:GITHUB_USERNAME,admin:true"
  DRONE_SERVER_HOST: drone.marceldempers.dev

kubectl -n drone apply -f drone-ci/server/droneserver-secrets.yaml
The droneserver itself is a basic deployment running "drone/drone:1.6.5" on 80/443
Also remember to link the environment variables to the secrets file
kubectl -n drone apply -f drone-ci/server/droneserver-deployment.yaml
kubectl -n drone get pods
kubectl -n drone logs DRONEPOD    # Check if the pod is starting the scheduler properly
The server just waits for jobs to run so it needs a runner
Define dronerunner rbac (role permissions for managing pods) and dronerunner deployment
Dronerunner is just "drone/drone-runner-kube:latest" on 3000 - needs specific ENV variables
Now open drone on the public website - authenticate and it scans all your repos
Press "activate" and it creates a webhook on github for you - here you can schedule pipelines

# drone.yaml
kind: pipeline
type: kubernetes
name: default
steps:
  - name: build-push
    image: docker:dind
    volumes:
      - name: dockersock
        path: /var/run
    environment:
      DOCKER_USER:
        from_secret: DOCKER_USER
      DOCKER_PASSWORD:
        from_secret: DOCKER_PASSWORD
    commands:
      - sleep 5    # Wait for Docker to start
      - docker login -u $DOCKER_USER -p $DOCKER_PASSWORD    # Inject secrets to pipeline
      - docker build ./golang -t aimvector/golang:1.0.0
      - docker push aimvector/golang:1.0.0

# Add secrets into the dronerunner website and set "configuration" to be "drone.yaml"
# Commit to github and it should run fairly quickly
That Devops Guy... Intro to Argo CD
https://www.youtube.com/watch?v=2WSJF7d8dUg
Argo has a good "getting started" doc
kubectl create namespace argocd
kubectl apply -n argocd -f URL    # OR better: download the code and apply locally
This applies a few CRDs: you tell argo about your app and it defines the deployment

# app.yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: example-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: GITHUB_URL
    targetRevision: HEAD
    path: argo/example-app    # Tells argo the application to look at
    directory:
      recurse: true           # Look at all subfolders inside - e.g. configmaps, secrets
  destination:                # Can deploy to a different cluster/namespace
    server: https://kubernetes.default.svc
    namespace: example-app
  syncPolicy:
    automated:                # Can automatically sync the build when files change
      prune: false
      selfHeal: false

kubectl -n argocd apply -f ./argo/argo-cd/install.yaml    # Install argocd (alt to URL)
You want to keep your CI/CD as secure as possible so just do port forwarding temporarily
kubectl port-forward svc/argocd-server -n argocd 8080:443
# Go to localhost:8080 and you get access to the dashboard while the command is running
# Can add clusters, certs etc. - though better to manage this in gitops
kubectl apply -n argocd -f ./argo/argo-cd/app.yaml    # Argo starts syncing to the git repo
# Gives a clear overview of the kubernetes cluster - good for debugging too
Argo doesn't actually see changes in the app code - the CI step changes what it points to
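Once app.yaml is applied, the sync state can also be checked from the terminal without the dashboard (plain kubectl against the Application CRD):
kubectl -n argocd get applications                    # Shows sync + health status for example-app
kubectl -n argocd describe application example-app    # Events explain why a sync failed
kubectl -n example-app get all                        # See what argo actually deployed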
That Devops Guy... Intro to GitHub Actions
https://www.youtube.com/watch?v=2WSJF7d8dUg
Many people store code on github but use other systems to do actions - but can do it all there
Every action is like its own repo - github.com/actions/checkout@v2 - can make your own
# Make a .github folder - easier to use the interface which does this for you

# .github/workflows/docker.yml (Set up your own workflow)
name: Docker Series Builds
on: [push]    # Can do on scheduling or pulls etc.
jobs:
  build:
    runs-on: ubuntu-latest    # This has docker pre-installed
    steps:
      - uses: actions/checkout@v2    # Checks out the repo so it can be built
      - name: docker build csharp
        run: |
          docker build ./c# -t aimvector/csharp:1.0.0
      - name: docker build nodejs
        run: |
          docker build ./nodejs -t aimvector/nodejs:1.0.0
      - name: docker build python
        run: |
          docker build ./python -t aimvector/python:1.0.0
      - name: docker build golang
        run: |
          docker build ./golang -t aimvector/golang:1.0.0

# Commit this to the master branch and it builds all the images
Now you need to push them by adding secrets - add in settings/secrets with key-value pair
Then refer to them as environment variables in the pipeline

# (extending) .github/workflows/docker.yml
      - name: docker login    # Put just after checkout
        env:
          DOCKER_USER: ${{ secrets.DOCKER_USER }}
          DOCKER_PASSWORD: ${{ secrets.DOCKER_PASSWORD }}
        run: |
          docker login -u $DOCKER_USER -p $DOCKER_PASSWORD
      ...    # to the end
      - name: docker push
        run: |
          docker push aimvector/csharp:1.0.0    # And repeat for each build item
      ...

# Commit this and it should build the images before pushing to dockerhub using the credentials
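The four near-identical build steps could also be collapsed with a job matrix - a sketch using the same folder and image names as above (not how the video does it):
# .github/workflows/docker.yml (matrix variant)
name: Docker Series Builds
on: [push]
jobs:
  build:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        include:    # Folder and image name differ for c#
          - { folder: "c#", image: csharp }
          - { folder: nodejs, image: nodejs }
          - { folder: python, image: python }
          - { folder: golang, image: golang }
    steps:
      - uses: actions/checkout@v2
      - name: docker login
        env:
          DOCKER_USER: ${{ secrets.DOCKER_USER }}
          DOCKER_PASSWORD: ${{ secrets.DOCKER_PASSWORD }}
        run: docker login -u $DOCKER_USER -p $DOCKER_PASSWORD
      - name: docker build and push
        run: |
          docker build ./${{ matrix.folder }} -t aimvector/${{ matrix.image }}:1.0.0
          docker push aimvector/${{ matrix.image }}:1.0.0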
That Devops Guy... Deploy Jenkins on Kubernetes
https://www.youtube.com/watch?v=eRWIJGF3Y2g
Essentially deploy jenkins in kubernetes with a deployment and a service

# jenkins.deployment.yaml
spec:
  serviceAccountName: jenkins
  containers:
    - env:
        - name: JAVA_OPTS
          value: ...    # Some node margin values
      image: jenkins/jenkins    #:lts-alpine
      imagePullPolicy: IfNotPresent
      name: jenkins
      ports: ...    # 8080, 50000
      volumeMounts:
        - mountPath: /var/jenkins_home
          name: jenkins

# jenkins.service.yaml
spec:
  type: ClusterIP
  ports:
    - name: ui
      port: 8080
      targetPort: 8080
      protocol: TCP
    - name: slave    # So slaves can connect to the agent port
      port: 50000
      targetPort: 50000
      protocol: TCP
    - name: http
      port: 80
      targetPort: 8080

Jenkins doesn't need very fast storage - can just use network drive storage

# jenkins.pv.yaml (PersistentVolume)
spec:
  storageClassName: manual
  capacity:
    storage: 4Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"

# jenkins.pvc.yaml (PersistentVolumeClaim)
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 4Gi    # Claim all the storage

# In the deployment config add the volume
Now apply everything and set up Jenkins
kubectl create ns jenkins
kubectl -n jenkins apply -f ./jenkins/
kubectl -n jenkins logs JENKINS_POD            # Get the password to unlock the website
kubectl -n jenkins port-forward JENKINS_POD 8080    # Open up the website
# Open localhost:8080, paste in the unlock code, install suggested plugins and wait
# Create admin user. Manage jenkins - manage plugins - "kubernetes" - install and restart
# Login - manage - manage nodes and clouds - configure clouds - add new - kubernetes
# Need to fill out lots of info to get this to work
Now you can start making pipelines - "new job" - "Definition: from SCM - git"
Put in a github URL with a Jenkinsfile inside and it should run the full pipeline
It also now waits for a jenkins slave to provision in order to run the CICD processes

# Jenkinsfile
node('jenkins-slave') {
  stage('test pipeline') {
    sh(script: """
      echo "hello"
      git clone REPO_URL
      cd REPO_FOLDER
      docker build . -t test
    """)
  }
}
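The deployment above references serviceAccountName: jenkins but the notes never create it - a minimal way to do that imperatively (cluster-admin is overly broad; the video's rbac manifest is presumably more restrictive):
kubectl -n jenkins create serviceaccount jenkins
kubectl create clusterrolebinding jenkins-admin \
    --clusterrole=cluster-admin --serviceaccount=jenkins:jenkins
kubectl -n jenkins get serviceaccount jenkins    # Confirm it exists before applying the deployment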
That Devops Guy... Jenkins on EKS
https://www.youtube.com/watch?v=eqOCdNO2Nmk
Get an Amazon EKS cluster
curl ...; mv /tmp/eksctl /usr/local/bin    # Download eksctl for current OS and install it
yum install openssh; mkdir -p ~/.ssh       # Install ssh on system
# Generate a public and private keypair with a password
PASSPHRASE="..."; ssh-keygen -t rsa -b 4096 -N "${PASSPHRASE}" -C "EMAIL" -q -f ~/.ssh/id_rsa
chmod 400 ~/.ssh/id_rsa    # Make it readable only to the owner
# Create cluster with these credentials
eksctl create cluster --name getting-started-eks --region ap-southeast-2 --version 1.16 \
    --managed --node-type t2.small --nodes 1 --ssh-access --ssh-public-key=~/.ssh/id_rsa.pub \
    --node-volume-size 200
EFS storage is a good share for mounting on multiple nodes - need the EFS CSI driver for k8
kubectl apply -k ...    # Download and install kubernetes-sigs/aws-efs-csi-driver
aws eks describe-cluster --name getting-started-eks \
    --query "cluster.resourcesVpcConfig.vpcId" --output text    # Get storage VPC ID
aws ec2 describe-vpcs --vpc-ids VPC_ID \
    --query "Vpcs[].CidrBlock" --output text    # Get cidr range (subnet of storage)
# Create security group and authorise traffic
aws ec2 create-security-group --description efs-test-sg \
    --group-name efs-sg --vpc-id VPC_ID    # Outputs a security group ID
aws ec2 authorize-security-group-ingress --group-id SECURITY_ID \
    --protocol tcp --port 2049 --cidr CIDR_RANGE    # Now nodes can talk to the fileshare
# Create the storage file system
# Go on the UI and get the subnet ID of the server running EKS - get SUBNET_ID
aws efs create-file-system --creation-token eks-efs    # Get filesystem ID (FS_ID)
aws efs create-mount-target --file-system-id FS_ID \
    --subnet-id SUBNET_ID --security-groups SECURITY_ID
# Get the filesystem handler
aws efs describe-file-systems --query "FileSystems[*].FileSystemId" --output text
Now storage is sorted, configure jenkins

# jenkins.pv.yaml (PersistentVolume)
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: efs-sc
  csi:
    driver: efs.csi.aws.com
    volumeHandle: FS_HANDLER    # From previous step

# jenkins.pvc.yaml (PersistentVolumeClaim)
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: efs-sc
  resources:
    requests:
      storage: 5Gi

kubectl create ns jenkins
kubectl apply -f jenkins.pv.yaml; kubectl apply -f jenkins.pvc.yaml
# Now create an rbac file to allow Jenkins to deploy/manage pods/secrets
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["create","delete","get","list","patch","update","watch"]
  ...
kubectl apply -f jenkins.rbac.yaml
# Deployment is a simple 1-replica "jenkins/jenkins:2.235.1-lts-alpine" on 8080/50000
# Also create a volume linked to that volume claim
kubectl apply -f jenkins.deployment.yaml
# Service is just a ClusterIP service as in any other jenkins guide
kubectl apply -f jenkins.service.yaml
Now the steps are the same as running locally (don't run jenkins through a load balancer - insecure)
# ssh to the EC2 instance to get info about the docker user and group
eval $(ssh-agent); ssh-add ~/.ssh/id_rsa
ssh -i ~/.ssh/id_rsa ec2-user@EC2_SERVER    # Using public domain of EC2 from aws UI
id -u docker; grep -i docker /etc/group     # Get info you need about the docker daemon
Now define a docker image for Jenkins slaves

# Dockerfile
FROM openjdk:8-jdk
RUN apt-get update -y && apt-get install -y curl sudo
RUN curl -sSL https://get.docker.com/ | sh
...    # Essentially set up the environment for jenkins to run slave nodes

docker build . -t aimvector/jenkins-slave
Follow the jenkins UI to set up (port forwarding jenkins to get access) - same as before
Now jenkins is configured to work with EKS
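The PV/PVC above reference storageClassName: efs-sc, which also has to exist - a minimal StorageClass for the EFS CSI driver installed earlier (static provisioning, so no extra parameters needed):
# efs.storageclass.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: efs-sc
provisioner: efs.csi.aws.com

kubectl apply -f efs.storageclass.yaml    # Apply before the PV/PVC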
That Devops Guy... Create your own Github Actions Runner
https://www.youtube.com/watch?v=RcHGqCBofvw
Make a self-hosted github runner - which just runs all the processes for github actions
Should only run this on private repositories due to security concerns of what it does
# Make a lightweight k8 cluster inside docker - for local testing
kind create cluster --name githubactions --image kindest/node:v1.28
Install docker-cli inside the docker container

# Dockerfile
FROM debian:bookworm-slim
ARG RUNNER_VERSION="2.302.1"
ENV GITHUB_PERSONAL_TOKEN "..."
ENV GITHUB_OWNER "..."
ENV GITHUB_REPOSITORY "..."
RUN apt update && apt install -y ca-certificates curl gnupg
RUN install -m 0755 -d /etc/apt/keyrings
RUN ...    # Download gpg keys for docker apt repo
RUN ...    # Add docker repo to apt sources
RUN apt-get update
RUN apt-get install -y docker-ce-cli    # JUST the cli (run, build and push)

docker build . -t github-runner:latest
docker run -it github-runner bash    # Essentially gives you debian with docker inside
Go in a github repo - settings - runners - add new self-hosted runner - Linux
This gives instructions for downloading and configuring the runner (but use a personal token)
curl -o actions-runner.tar.gz -L URL        # Download runner
tar xzf ./actions-runner.tar.gz             # Extract runner
./config.sh --url GIT_REPO --token TOKEN    # Cannot run this as root
# Make a user that can run this command
apt-get install sudo jq
useradd -m github && usermod -aG sudo github && \
    echo "%sudo ALL=(ALL) NOPASSWD:ALL" >> /etc/sudoers; su github
# Now need ./bin/installdependencies.sh to install .net dependencies
Alternatively we can configure this automatically in the dockerfile as well

# (extending) Dockerfile
RUN apt install -y sudo jq
RUN useradd ...    # Add the user and add to sudoers
USER github
WORKDIR /actions-runner
RUN curl -Ls RUNNER_REPO | tar xz && sudo ./bin/installdependencies.sh
COPY --chown=github:github entrypoint.sh /actions-runner/entrypoint.sh
RUN sudo chmod u+x /actions-runner/entrypoint.sh
RUN sudo mkdir /work
ENTRYPOINT ["/actions-runner/entrypoint.sh"]    # Simple script to run the config script

# entrypoint.sh
registration_url="..."    # URL for the API to get a registration token
payload=`curl -sX POST -H "Authorization: token ${GITHUB_PERSONAL_TOKEN}" $registration_url`
export RUNNER_TOKEN=$(echo $payload | jq .token --raw-output)
./config.sh --name $(hostname) --token ${RUNNER_TOKEN} --labels my-runner \
    --url https://github.com/${GITHUB_OWNER}/${GITHUB_REPOSITORY} \
    --work "/work" --unattended --replace
remove() { ./config.sh remove --unattended --token "${RUNNER_TOKEN}"; }
trap 'remove; exit 130' INT; trap 'remove; exit 143' TERM    # Remove runner if pod dies
./run.sh "$*" &
wait $!

docker build . -t github-runner:latest
docker run -it -e GITHUB_PERSONAL_TOKEN="" -e GITHUB_OWNER=... \
    -e GITHUB_REPOSITORY=... github-runner
To get this working in kubernetes just wrap the container in a pod
Can run a "dind" sidecar-container (docker in docker) - used to build images

# kubernetes.yaml
spec:
  containers:
    - name: github-runner
      imagePullPolicy: Never    # Use local "kind" image
      image: github-runner:latest
      env:
        - name: GITHUB_OWNER
          ...
        - name: GITHUB_REPOSITORY
          ...
        - name: GITHUB_PERSONAL_TOKEN
          ...
        - name: DOCKER_HOST
          value: tcp://localhost:2375    # Point to the other container
    - name: dind
      image: docker:24.0.6-dind
      ...    # Mounts some storage volumes and limits resources

kind load docker-image github-runner:latest --name githubactions
kubectl create ns github
kubectl -n github create secret generic github-secret ...    # Add in github env variables
kubectl -n github apply -f kubernetes.yaml
# To scale, you could manually increase the number of runners
kubectl -n github scale deploy github-runner --replicas 3    # On github this updates the list
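To actually route a workflow onto this runner, target the label set in entrypoint.sh (my-runner) - a minimal sketch of a job that builds through the dind sidecar (repo layout and image name are hypothetical):
# .github/workflows/selfhosted.yml
name: Build on self-hosted runner
on: [push]
jobs:
  build:
    runs-on: [self-hosted, my-runner]    # Matches --labels my-runner from config.sh
    steps:
      - uses: actions/checkout@v2
      - name: docker build
        run: docker build . -t example-app:latest    # Talks to the dind sidecar via DOCKER_HOST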