INDEX
########################################################### 2024-01-02 21:10 ###########################################################
Cantrill Docker Fundamentals
https://www.youtube.com/playlist?list=PLTk5ZYSbd9Mg51szw21_75Hs1xUpGObDm

VM vs Physical vs Containers is a question of resource usage and isolation
Hypervisors (VMware, KVM, Hyper-V) manage VM hosts - they sit between the host and the VM OS
Can run a hypervisor across multiple VM hosts (allows VM migration)
Docker engine runs multiple apps and their libraries on a single host OS (lighter than VMs)
Docker itself does not isolate resource usage - a heavy container can impact the others

docker version
Docker Client: runs Docker Desktop and the CLI
Docker Host: runs dockerd (the main API) - manages containers and images
Docker Hub (registry): storage for images (e.g. ubuntu)

docker pull                # pulls images from the Hub to the host
docker build               # builds an image from a Dockerfile
docker run                 # runs a container (like an image but read-write)
docker push                # pushes an image to a registry
docker ps                  # lists containers running on the host (shows CONTAINER_ID)
docker images              # lists images on the host (shows IMAGE_ID)
docker run hello-world     # pulls the image, creates a container and runs it
docker ps -a               # lists containers on the host (running or exited)

An image is immutable (any change to it makes a fully new image)
A container is [an image + a writeable layer] (isolated storage space)
Images are made of combined independent data layers (i.e. linux, env/libs, app)
As layers are independent they can be reused and updated in pieces
The main docker registry is "hub.docker.com"

docker pull acantrill/containerofcats    # downloads the (latest) docker image
docker inspect IMAGE_ID                  # gives metadata on the image
docker run -p 8081:80 IMAGE              # runs the image, mapping port 8081 (outside) to 80 (inside)
# Go to http://localhost:8081 - CTRL-C on the terminal will kill the container/website
docker run -p 8081:80 -d IMAGE           # runs the image in detached mode (in the background)
docker port CONTAINER_ID                 # shows how ports are mapped for this container
docker exec -it CONTAINER_ID ps -aux     # runs "ps -aux" inside the container
docker exec -it CONTAINER_ID sh          # opens a shell inside the container
docker restart CONTAINER_ID              # similarly start/stop
docker rm CONTAINER_ID                   # deletes a container
docker rmi IMAGE_ID                      # deletes an image
docker logs CONTAINER_ID -t              # gets container logs with timestamps

Dockerfiles are used to build docker images - the resulting image is then run as a container
FROM        # Sets the base image to expand on (e.g. alpine linux)
LABEL       # Adds metadata (e.g. description/author)
RUN         # Runs commands in a new layer (e.g. install or configure something)
COPY        # Copies new files from the client machine into an image layer
ADD         # Same as COPY but can also fetch remote URLs and unpack archives
CMD         # Sets the default executable of a container (overridden by "docker run" arguments)
ENTRYPOINT  # Same as CMD but only overridable via --entrypoint (for single-purpose images)
EXPOSE      # Documents which ports are used (just documentation - not actual config)
# See the CMD/ENTRYPOINT sketch after the nginx example below

To run a basic website, have a website folder (e.g. "./2048/") and an nginx Dockerfile
# Dockerfile...
FROM nginx:latest                    # Useful for having basic website containers
COPY 2048 /usr/share/nginx/html      # Copy website files into the container
EXPOSE 80                            # Documents that port 80 will be used
CMD ["nginx", "-g", "daemon off;"]   # Sets what is run when you do "docker run"

docker build -t dock2048 .           # Builds an image tagged "dock2048" from the current directory (.)
docker run -d -p 8081:80 dock2048
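The CMD/ENTRYPOINT difference can be checked from the CLI alone - a quick sketch using the stock ubuntu image (which sets CMD to bash and no ENTRYPOINT):
docker run ubuntu                      # runs the image's default CMD
docker run ubuntu echo hi              # arguments after the image name override CMD
docker run --entrypoint date ubuntu    # an ENTRYPOINT needs the explicit --entrypoint flag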
Can similarly make a website using more customised OS interaction
# Dockerfile...
FROM redhat/ubi8                 # (layer 1) Download a full general-purpose redhat image
RUN yum -y install httpd         # (layer 2) Have to manually install httpd in the container
COPY index.html /var/www/html/   # (layer 3)
COPY *.jpg /var/www/html/        # (layer 4) much less efficient
ENTRYPOINT ["/usr/sbin/httpd", "-D", "FOREGROUND"]   # Run httpd

docker build -t cats .           # Runs a more complex build process
docker run -d -p 8081:80 cats

Container storage = (conceptual) image layer + writeable layer -> union file system
Data writes go into the writeable layer - the container sees this as one filesystem
A tmpfs is RAM storage inside the container (non-persistent, unshareable) - temporary storage
A bind mount is a folder on the host system mounted in the container (persistent, shareable)
  But data outside the container relies on the host being configured in an expected way
A volume is a filesystem managed by docker (persistent, shareable AND consistent)
  But volumes have no file locking, so mounting into multiple containers must be done carefully

Host networking = the container shares the same network as the host (80 inside = 80 outside)
  This is simple but can cause port conflicts (when multiple containers want the same port)
Bridge networking = containers on a host are each given their own private bridge IP
  Any containers on the same network can communicate with each other (using their bridge IPs)
  To access containers from outside the host, you need to publish them (-p 1338:1337)

Setting ENV variables on run lets you specify starting conditions for a program
docker run -e KEY=VALUE IMAGE   # Run a container with an environment variable
docker inspect IMAGE            # Look for "Env" where environment variables are set
docker inspect CONTAINER_ID     # Find bridge IPs for the container (e.g. 172.17.0.3)
# Containers can interact with each other using these bridge IPs without exposing ports
# e.g. can run phpMyAdmin and mariadb in separate containers and connect internally (sketch at the end of this section)

Can make a bind mount for a docker container to have persistent storage
--mount type=bind,source="$(pwd)"/data,target=/var/lib/mysql   # mounts the folder ./data
-v "$(pwd)"/data:/var/lib/mysql                                # Same as --mount but less verbose
# Now running the container writes to this ./data folder persistently

Volumes are the preferred method of persistent storage
docker volume create data         # Creates a volume called "data"
docker volume ls                  # Lists volumes
docker volume inspect VOLUME_ID   # Gives metadata about that volume
docker volume rm VOLUME_ID        # Deletes the volume
# These are made either explicitly (with "volume create") or by a relevant "docker run"
docker run ... --mount source=data,target=/var/lib/mysql   # creates a volume at runtime
# If this volume already exists, it uses the existing one
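A sketch tying the pieces together - the phpMyAdmin + mariadb pairing mentioned above, using a user-defined network so the containers find each other by name rather than raw bridge IPs (the network/volume names and password are placeholders; PMA_HOST is the phpmyadmin image's variable for the DB host, MYSQL_ROOT_PASSWORD is mariadb's):
docker network create dbnet
docker volume create dbdata
docker run -d --name db --network dbnet \
    -e MYSQL_ROOT_PASSWORD=secret \
    --mount source=dbdata,target=/var/lib/mysql mariadb
docker run -d --name admin --network dbnet -p 8082:80 \
    -e PMA_HOST=db phpmyadmin
# phpMyAdmin on localhost:8082 reaches the database as "db" - only the admin UI is published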
Docker compose is used to manage docker resources using a "compose.yaml" file
docker compose up -d            # Creates or adjusts docker resources using the compose.yaml config
docker compose -f FILENAME up   # To run a compose file with another name

# compose.yaml
services:
  db:                           # the name of the container
    image: mariadb:10.6.4-focal
    command: ...                # command to run in the container
    volumes:                    # list the volumes used by the container
      - mariadb_data:/var/lib/mysql
    restart: always             # restart policy - always restart (on-failure is also an option)
    environment:                # define environment variables
      - MYSQL_ROOT_PASSWORD=somewordpress
    expose:                     # ports to expose
      - 3306
      - 33060
  wordpress:                    # similarly, another container

volumes:                        # Define named volumes to use
  mariadb_data:                 # name of the volume - with any info about it

Can also use docker compose to clean up the system
docker compose down             # stops and deletes all the containers (but not the volumes)

Container registries are like GitHub for images: a cloud/registry of repositories
docker pull USER/REPO:TAG       # Downloads a repository image (e.g. TAG = "latest")
# To make a new repo, go on Docker Hub and "create repository" - make it public
docker login --username=USER    # then enter the password
# Can generate access tokens (used instead of passwords) for access-restricted actions
docker build -t IMAGE_NAME .         # Builds an image from a Dockerfile
docker tag IMAGE_ID USER/REPO:TAG    # Associates this local image with a dockerhub repo
# Actually just copies the image under a new name
docker push USER/REPO:TAG            # Pushes this image to dockerhub
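A start-to-finish sketch of the registry flow (the user "alice" and the repo/tag names are hypothetical):
docker build -t dock2048 .
docker tag dock2048 alice/dock2048:v1
docker login --username=alice
docker push alice/dock2048:v1
docker pull alice/dock2048:v1    # now works from any machine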
ArjanCodes Docker Local Development
https://www.youtube.com/watch?v=zkMRWDQV4Tg

Use the layering of a Dockerfile to your advantage - only rebuild layers when needed
COPY ./requirements.txt /app
RUN pip install --no-cache-dir --upgrade -r requirements.txt
COPY . /app
# If just the CODE changes then the requirements layer is not rebuilt

You still need to rebuild this every time you change your code - or use docker compose
services:
  app:
    build: .                  # Force it to build before starting
    container_name: channel-api
    command: uvicorn main:app --host 0.0.0.0 --port 80 --reload
    ports:
      - 8080:80
    volumes:
      - .:/app                # Map the folder instead of copying it
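For completeness, a minimal Dockerfile this compose file could sit on top of (assumed, not from the video; the python base tag is illustrative):
FROM python:3.11-slim
WORKDIR /app
COPY ./requirements.txt /app
RUN pip install --no-cache-dir --upgrade -r requirements.txt
COPY . /app                   # code changes only invalidate layers from here down
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "80"]
# The compose "command:" above overrides this CMD to add --reload, and the bind mount
# means code edits on the host appear in /app without a rebuild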
Hussein Nasser - Docker Networking
https://www.youtube.com/watch?v=OU6xOM0SE4o

From outside you may not be able to reach containers on the bridge network, but they can interact within it
docker network inspect bridge               # See all containers in the bridge network
docker exec -it CONTAINER ping google.com   # Can reach the external network
# Containers on the default bridge have no internal DNS, so cannot ping each other by hostname
docker exec -it A nslookup B                # This fails (the name resolves via the host's DNS instead)
Within the bridge network, containers can ping/curl each other by IP (insecure?)

docker network create backend --subnet 10.0.0.0/24   # Create a network called "backend"
docker network inspect backend      # "Internal=false" means traffic CAN leave the network
# Create with --internal to isolate it from bridge/internet
# Containers can be part of multiple networks
docker network connect backend CONTAINER      # Connect a container to a network
docker network disconnect bridge CONTAINER    # Remove a container from a network
# But now the containers have internal DNS - user-defined networks have internal DNS
docker exec -it A nslookup B        # This now resolves (can also use the container ID)

How can you make multiple networks interact?
docker network create frontend --subnet 10.0.1.0/24
# Add conA to frontend and conB to backend - how do they interact?
# Create a gateway container
docker run --name gw --network backend -d IMAGE
docker network connect frontend gw
# Can ping conA and conB from within gw, but conA/conB cannot yet interact THROUGH gw
docker run ... --cap-add=NET_ADMIN            # Rerun both conA and conB (needed to edit routes)
# Say conA is on 10.0.1.0/24 (frontend) and conB on 10.0.0.0/24 (backend)
ip route add 10.0.0.0/24 via 10.0.1.3         # ON conA, to route to conB via gw's frontend IP
ip route add 10.0.1.0/24 via 10.0.0.3         # ON conB, to route to conA via gw's backend IP
# But still cannot ping by hostname, as DNS is not also routed
So to link 2 networks you need a container joined to both, plus routes inside the containers
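A condensed sketch of the whole experiment (IMAGE stands for any image containing iproute2; the --sysctl line enabling forwarding in gw is an assumption - the talk may configure this differently):
docker network create backend --subnet 10.0.0.0/24
docker network create frontend --subnet 10.0.1.0/24
docker run -d --name conA --network frontend --cap-add=NET_ADMIN IMAGE
docker run -d --name conB --network backend --cap-add=NET_ADMIN IMAGE
docker run -d --name gw --network backend --sysctl net.ipv4.ip_forward=1 IMAGE
docker network connect frontend gw
docker exec conA ip route add 10.0.0.0/24 via GW_FRONTEND_IP
docker exec conB ip route add 10.0.1.0/24 via GW_BACKEND_IP
docker exec conA ping CONB_IP   # should now route through gw (by IP, not by name)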
########################################################### 2024-01-04 19:00 ###########################################################
Devops toolkit... Manage Container Images with Harbor
https://www.youtube.com/watch?v=f931M4-my1k

Harbor lets you manage Docker images with more control than a plain registry
In the website config, ensure "cosign" and "prevent vulnerable images" are ticked
docker login --username admin harbor.$INGRESS_HOST.nip.io   # Log in to harbor
... --insecure-registry                                     # To use without TLS

To use this, build, upload and download images as normal, just with a new URL
IMAGENAME="harbor.$INGRESS_HOST.nip.io/dot/silly-demo:v0.0.1"
docker image build --tag $IMAGENAME .     # Builds the image
docker image push $IMAGENAME              # Uploads the image
docker image pull $IMAGENAME              # Tries to download it - fails as not signed
cosign sign --key cosign.key $IMAGENAME   # Signs the image, proving the ID of the owner
# Still fails as this specific container has CVEs (vulnerabilities)
# Fix these, then rebuild, push, sign and pull again

Can also store helm charts in harbor (not sure what this is really for)
helm package helm
helm registry login harbor.$INGRESS_HOST.nip.io --insecure
helm push silly-demo-0.0.2.tgz oci://harbor.$INGRESS_HOST.nip.io/dot

So harbor is essentially a self-hosted alternative to things like dockerhub - with other uses too
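The signing step assumes a cosign keypair already exists; a sketch of generating and using one with the cosign CLI:
cosign generate-key-pair                      # writes cosign.key (private) and cosign.pub (public)
cosign sign --key cosign.key $IMAGENAME       # signs with the private key
cosign verify --key cosign.pub $IMAGENAME     # anyone can verify against the public key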
########################################################### 2024-01-05 11:30 ###########################################################
Devops toolkit... Docker multi-stage builds
https://www.youtube.com/watch?v=zpkqNPwEzac

Can you use a Dockerfile for more than just packaging a pre-built binary?
# Dockerfile-simple
FROM alpine:3.4
EXPOSE 8080
CMD ["demo"]
COPY demo /usr/local/bin/demo    # More likely to change = lower in the file
RUN chmod +x /usr/local/bin/demo

docker image build --tag demo --file Dockerfile-simple .
# Fails - COPY needs a demo binary already built on the host

Everything required to build the app should be part of the image build
# Dockerfile-fat
FROM golang:1.16 AS build
EXPOSE 8080
CMD ["demo"]
ADD . /src
WORKDIR /src
RUN go test --cover -v ./...     # Run the unit tests
RUN go build -v -o /usr/local/bin/demo
RUN chmod +x /usr/local/bin/demo

docker image build --tag demo --file Dockerfile-fat .
docker image list                # Shows a docker image that is massive for such a small binary

How can you make the final image smaller?
# Dockerfile
FROM golang:1.16 AS build
ADD . /src
WORKDIR /src
RUN go test --cover -v ./...
RUN go build -v -o demo
####
FROM alpine:3.4                  # The final image is based on the last FROM
EXPOSE 8080
CMD ["demo"]
COPY --from=build /src/demo /usr/local/bin/demo   # Copy the binary from the previous stage
RUN chmod +x /usr/local/bin/demo

docker image build --tag demo .
docker image list                # The final image has gone from 875MB to 17.2MB
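A related trick: individual stages can be built on their own with --target (a real docker build flag; the tag names here are illustrative):
docker image build --target build --tag demo-build .   # stops after the "build" stage, e.g. to just run the tests
docker image build --tag demo .                        # full multi-stage build, small final image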
########################################################### 2024-01-15 12:40 ###########################################################
Jessie Frazelle... Container Hacks
https://www.youtube.com/watch?v=cYsVvV1aVss

Alias normal commands like "chrome" to run docker containers - simplifies startup
OpenGL even works in chrome, with audio
Need to add a few Xresources items to get things to work
# An example for the clipboard
URxvt*clipboard.copycmd:  xclip -i -selection clipboard
URxvt*clipboard.pastecmd: xclip -o -selection clipboard

Get things like spotify working in a container by mounting the dev sound device
# Find what packages install a given file, to resolve dependencies
apt-file search GL.so            # e.g. looking for openGL

Can run VirtualBox, but it needs the kernel module: "sudo modprobe vboxdrv"
Can also run Tor, gimp, pulseaudio, skype (mount /dev/video0)
docker run -it --name tor --net host jess/tor   # Route net traffic through Tor
Can mount volumes over cache folders (e.g. ~/.cache/skype) to keep logins etc.
For basic GUIs you don't need any extra installs
For OpenGL you need whatever drivers are on your host system inside the container
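A sketch of the alias pattern (jess/chrome is the speaker's image; the X11/sound/GPU flags are the usual ones for GUI containers and are assumed, not taken from the talk):
# share the host X socket, display, audio and GPU devices into the container
alias chrome='docker run -d \
    -v /tmp/.X11-unix:/tmp/.X11-unix \
    -e DISPLAY=$DISPLAY \
    --device /dev/snd \
    --device /dev/dri \
    jess/chrome'
# typing "chrome" now starts the containerised browser on the host's display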