INDEX
2023-09-23 19:11 - How can I configure nginx inside docker?

Gonna try to run a basic site in docker to try it out - just following chatgpt for the basics

# ./nginx-site/index.html
<!DOCTYPE html>
<html><body>This is a test website</body></html>

Apparently "daemon off" because the process must stay in the foreground or docker exits the container by default

# ./nginx-site/Dockerfile
FROM nginx:latest
COPY . /usr/share/nginx/html
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]

Try to execute it - "permission denied" - run as root

# Build the image
docker build -t nginx-site .
# Run the container
docker run -d -p 8080:80 nginx-site

"localhost:8080" and it's there - but doesn't update live
Oh, it doesn't update because the Dockerfile just copies the files in at build time
Can mount instead with "-v FROM:TO_CONTAINER"

# Check all docker processes
docker ps -a
# Open a shell to check manually
docker exec -it NAME /bin/bash
# Stop a container and delete it
docker stop NAME; docker rm NAME
# Run with a shared folder (also remove COPY for that folder from the Dockerfile and rebuild)
docker run -d -p 8080:80 -v ./:/usr/share/nginx/html nginx-site
# Check logs - can use "-f" to follow, like tail
docker logs NAME

Okay so it isn't mounting as expected - going to try wiping that folder in the Dockerfile

# ./nginx-site/Dockerfile
FROM nginx:latest
RUN rm /usr/share/nginx/html/*
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]

Moved the Dockerfile outside the "nginx-site" folder as well

# Stop all containers and remove all stopped containers
docker stop $(docker ps -a -q); docker container prune
# Name the container, delete it when done and mount the folder directly
docker run --name nginx-test --rm -d -p 8080:80 \
    -v ./nginx-site:/usr/share/nginx/html nginx-site

Now I have docker working fine, how can I configure nginx to work?

# Set docker to mount the nginx config file directly and use /srv for hosting
docker run --name nginx-test --rm -d -p 8080:80 \
    -v ./nginx-test.conf:/etc/nginx/conf.d/nginx-test.conf -v ./nginx-site:/srv nginx-site
# Reset the container to its base state and rerun the original starting command
docker restart nginx-test
# Check the config is valid and reload it (instead of restarting the container)
docker exec nginx-test sh -c 'nginx -t && nginx -s reload'
# Wipe all docker images
docker rmi $(docker images -q)

Need to also get the Dockerfile to delete the default config (it has a server on port 80)
If two server blocks conflict (same port, no matching server_name), nginx falls back to the first one defined

# ./nginx-test.conf
server {
    listen 80;
    root /srv;
}

# ./Dockerfile
FROM nginx:latest
RUN rm /etc/nginx/conf.d/default.conf
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]

Now I have an nginx config I can change and test with - running inside a docker container

On a sidenote I want an index file that extracts unique words from notes and indexes them
I can use that one file as a reference for "which journals mention python" - no live grep needed
Could even get it to link to the files instead (a python sketch of this is at the end of this entry)

# Extract all keywords from journals [FILE:WORD]
grep -oE '\w+' 20* | awk '{print tolower($0)}' | sort | uniq > _keywordextract
# Compare to a reference word (run this in a loop for all keywords)
grep python _keywordextract | rev | cut -d':' -f2- | rev > _keywordindex_python

Dockerfile
FROM nginx:latest
RUN rm /etc/nginx/conf.d/default.conf
EXPOSE 80
CMD ["nginx", "-g", "daemon off;"]

nginx-test.conf
server {
    listen 80;
    root /srv;
}
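Untested sketch of the keyword index idea in python instead of looping grep - same word extraction as the pipeline above, but everything ends up in one reference file ("keywordindex.py" and "_keywordindex" are placeholder names, not from the notes)

keywordindex.py
import re
from collections import defaultdict
from pathlib import Path

# Map each lowercased word to the journal files (named 20*) that mention it
index = defaultdict(set)
for path in Path('.').glob('20*'):
    if not path.is_file():
        continue
    for word in re.findall(r'\w+', path.read_text(errors='ignore').lower()):
        index[word].add(path.name)

# One reference file: "word: journal1 journal2 ..." per line
# ("_keywordindex" is just a placeholder name)
with open('_keywordindex', 'w') as out:
    for word in sorted(index):
        out.write(f"{word}: {' '.join(sorted(index[word]))}\n")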
2023-11-21 19:25 - How can I build a recreatable python website in docker?

https://www.howtogeek.com/devops/how-to-run-multiple-services-in-one-docker-container/

First of all make a simple docker container which has all the relevant packages installed
Ask chatgpt for basic things like this

# Dockerfile
FROM debian:bullseye-slim
RUN apt-get update && \
    apt-get install -y python3 python3-pip nginx certbot
COPY app/ /app
WORKDIR /app
COPY etc/requirements.txt .
COPY etc/nginx.conf /etc/nginx/nginx.conf
RUN pip3 install --no-cache-dir -r requirements.txt
EXPOSE 80
EXPOSE 443

Now make etc/requirements.txt (empty) and etc/nginx.conf

# nginx.conf
events {}
http {
    server {
        listen 80;
        server_name localhost;
        location / {
            proxy_pass http://localhost:5000;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
        }
    }
}

Just have to make sure these are copied and running first

docker build -t pythonwebsite .
docker run -it --rm --name site -d -p 80:80 -v ./src:/app/src pythonwebsite /bin/bash

But how do I get both nginx and python running when I launch this? Apparently "supervisor"

# etc/supervisord.conf
[program:nginx]
command=/usr/bin/nginx -g "daemon off;"
[program:pythonsite]
command=/app/src/start.sh

Then I need a script to actually start the python program

# app/start.sh
#!/bin/sh
python3 main.py

I've changed the Dockerfile to also install supervisor and copy its config in

Dockerfile
FROM debian:bullseye-slim
ENV PATH /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
RUN apt-get update && \
    apt-get install -y python3 python3-pip nginx certbot supervisor
COPY etc/supervisord.conf /etc/supervisor/conf.d/supervisord.conf
COPY etc/nginx.conf /etc/nginx/nginx.conf
WORKDIR /app
COPY etc/requirements.txt .
RUN pip3 install --upgrade pip
RUN pip3 install --no-cache-dir -r requirements.txt
EXPOSE 80
EXPOSE 443
ENTRYPOINT ["/bin/sh"]
CMD ["/usr/bin/supervisord", "-n"]

etc/nginx.conf
events {}
http {
    server {
        listen 80;
        server_name localhost;
        location / {
            proxy_pass http://localhost:5000;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
        }
    }
}

etc/supervisord.conf
[supervisord]
nodaemon=true
loglevel=debug
[program:nginx]
command=/usr/bin/nginx -g "daemon off;"

src/run.sh
#!/bin/sh
python3 /app/src/main.py

src/main.py
print("test")
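The nginx config proxies to localhost:5000, but main.py above only prints "test", so nothing is actually listening there yet. Untested sketch of a stdlib-only stand-in for src/main.py that would give the proxy something to hit (response text is a placeholder)

src/main.py (sketch)
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Plain text is enough for a smoke test through the nginx proxy
        body = b"hello from python behind nginx\n"
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    # nginx.conf proxies to localhost:5000, so listen there
    HTTPServer(("0.0.0.0", 5000), Handler).serve_forever()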
2023-11-18 19:00 - How can I run prometheus in docker and monitor a fastapi site's traffic?

https://www.youtube.com/watch?v=tIvHAxs8Fec
https://prometheus.io/docs/prometheus/latest/getting_started/

Download prometheus and run it on 9090

docker pull prom/prometheus
docker run -p 9090:9090 prom/prometheus

http://localhost:9090/targets?search=
http://localhost:9090/metrics

Go to the graph page and use items from the metrics list

http://localhost:9090/graph
# Paste in an item and go to "graph" to see it
# e.g. prometheus_http_requests_total{code="200",handler="/graph"}
# Then refresh /graph and this should show an increase in the value
http://localhost:9090/status

Store the prometheus config in a .yml file, using the default from "getting_started"

global:
  scrape_interval: 15s
  external_labels:
    monitor: 'codelab-monitor'
scrape_configs:
  - job_name: 'prometheus'
    scrape_interval: 5s
    static_configs:
      - targets: ['localhost:9090']

Now run prometheus with this config

docker run -d --name prometheus -p 9090:9090 \
    -v ./prometheus.yml:/etc/prometheus/prometheus.yml prom/prometheus
# "unmarshal errors:\n line 6: field scrap_configs" - caused by a typo (scrap_configs vs scrape_configs)

So prometheus uses exporters like node_exporter to collect metrics - try to run it in docker

# https://github.com/prometheus/node_exporter
docker run -d --name prom_node --net="host" --pid="host" \
    -v "/:/host:ro,rslave" quay.io/prometheus/node-exporter:latest --path.rootfs=/host
# So now it's running on "localhost:9100" - see /metrics too
curl localhost:9100/metrics | grep "node_"

Add config for this node to prometheus.yml

scrape_configs:
  - job_name: 'node'
    static_configs:
      - targets: ['localhost:9100']

# Add --net="host" to the prometheus docker command and restart it (so they can reach each other)
# Now go to http://localhost:9090/targets to see if both are live
# Can now query something like "node_power_supply_capacity{power_supply="BAT0"}"

Could use the prometheus_client module in python to track metrics inside python.
Will look at this later - but that's how to let it track a website's traffic.
Essentially the python site hosts its own /metrics page for prometheus to look at - see the sketch below.
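Minimal sketch of what that self-hosted /metrics page could look like (based on the prometheus_client getting-started demo; the metric name and port 8000 are placeholders, not from these notes)

metrics_demo.py (sketch)
import random
import time
from prometheus_client import start_http_server, Counter

# Placeholder metric - the name isn't from these notes
REQUESTS = Counter('site_requests_total', 'Total requests handled by the site')

if __name__ == '__main__':
    # Expose /metrics on port 8000, then add localhost:8000 as a scrape target in prometheus.yml
    start_http_server(8000)
    while True:
        REQUESTS.inc()              # pretend a request was handled
        time.sleep(random.random())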
2023-12-21 10:00 - Adding python support to run in fastapi

First of all adding a new prometheus.yml entry for the python code

scrape_configs:
  - job_name: 'pythoncode'
    static_configs:
      - targets: ['localhost:9101']

Now set up the environment for prometheus_client

python3 -m venv .venv
source .venv/bin/activate
pip3 install prometheus-client
# Copy https://prometheus.github.io/client_python/getting-started/three-step-demo/
# Saved it to main.py (change port to 9101) and reboot the prometheus server - then run it
docker restart prometheus
# Again this doesn't work because the container isn't on the same network as the host process
docker stop prometheus; docker rm prometheus
docker run -d --name prometheus --net=host \
    -v ./prometheus.yml:/etc/prometheus/prometheus.yml prom/prometheus

Apparently you can link other docker containers by referencing the container name
https://stackoverflow.com/questions/68674413

Still having issues with networking - trying to add multiple target locations
So using 127.0.0.1 in prometheus.yml works but not "localhost"

# From http://127.0.0.1:9200/metrics
request_processing_seconds_count 221.0
request_processing_seconds_sum 103.67555012699995
request_processing_seconds_created 1.7031545478795354e+09

# Added this to main.py
from prometheus_client import Counter
testval = Counter('testing_counter', 'Description of counter')
testval.inc(5)  # inside process_request()

# From http://127.0.0.1:9200/metrics
testing_counter_total 25.0
testing_counter_created 1.7031549523476255e+09

So now I need to add fastapi and try to get metrics from that
Current issue is defining variables within the right scope without "duplicate timeseries" errors
https://github.com/prometheus/client_python/blob/3c91b3f7d19346e8a67a654ae7aa3d966dd60c88/README.md#multiprocess-mode-eg-gunicorn
Seems like prometheus_client and uvicorn are a bit at odds in how they load the python module
Will need to get back to this - there are ways to use prometheus_client differently

prometheus.yml
global:
  scrape_interval: 15s
  external_labels:
    monitor: 'codelab-monitor'
scrape_configs:
  - job_name: 'prometheus'
    scrape_interval: 5s
    static_configs:
      - targets: ['localhost:9090']
  - job_name: 'node'
    static_configs:
      - targets: ['localhost:9100']
  - job_name: 'pythoncode'
    scrape_interval: 5s
    static_configs:
      - targets: ['127.0.0.1:9200']

main.py (BROKEN)
from prometheus_client import start_http_server, Counter
import uvicorn
from fastapi import FastAPI

app = FastAPI()

@app.get("/")
def read_root():
    global testval
    testval.inc(5)
    return {"message": f"Variable: {X}"}

if __name__ == '__main__':
    start_http_server(9200)
    uvicorn.run("main:app", host='0.0.0.0', port=9201)
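Untested sketch of how main.py might be fixed: define the counter once at module level and pass the app object to uvicorn.run() instead of the "main:app" string, so uvicorn doesn't import the module a second time (that second import re-registers the counter and is what triggers the duplicate timeseries error). Ports and metric name are the ones from the notes above; the response message is a placeholder.

main.py (sketch)
from fastapi import FastAPI
from prometheus_client import start_http_server, Counter
import uvicorn

# Define the metric once at module level so every request shares the same object
testval = Counter('testing_counter', 'Description of counter')

app = FastAPI()

@app.get("/")
def read_root():
    testval.inc(5)
    return {"message": "counter incremented"}

if __name__ == '__main__':
    # /metrics for prometheus on 9200, the fastapi app itself on 9201
    start_http_server(9200)
    # Passing the app object (not the "main:app" string) avoids re-importing
    # the module, which would register the counter twice
    uvicorn.run(app, host='0.0.0.0', port=9201)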