INDEX
########################################################### 2024-01-13 17:40 ###########################################################
That Devops Guy... Service Mesh
https://www.youtube.com/watch?v=rVNPnHeGYBE
A service-based architecture needs service-to-service comms (services interacting)
If a call fails, it may want to retry (retry logic - can cause issues?)
Metrics, authentication and routing have issues with scaling
Dynamic tracing lets you find slowness in paths - but can be very complex
A service mesh lets services communicate via sidecar proxies injected alongside the services
In declarative config you set what features you want (TLS, auto retries, auth etc.)
Best to apply the service mesh only where needed and expand over time
# e.g. website <-> api(<->db) <-> api2(<->db2) - can cause a busy network
kind create cluster --name servicemesh --image kindest/node:v1.18.4 # k8 in container
kubectl apply -f website.yaml # (a sketch of a possible website.yaml is below)
kubectl port-forward svc/website 80:80 # Only website, not api
# Repeat for api1_db1 and api2_db2 - all microservices need to be running for the demo to work
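Sketch of what website.yaml might contain (assumed - the real manifests are in the video's repo; names and image are placeholders): a Deployment plus a ClusterIP Service so the port-forward above works.
# website.yaml (sketch, assumed names/image)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: website
spec:
  replicas: 1
  selector:
    matchLabels:
      app: website
  template:
    metadata:
      labels:
        app: website
    spec:
      containers:
      - name: website
        image: nginx:1.25 # placeholder image - the demo uses its own
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: website
spec:
  selector:
    app: website
  ports:
  - port: 80
    targetPort: 80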
That Devops Guy... Intro to Linkerd
https://www.youtube.com/watch?v=Hc-XFPHDDk4
Linkerd is very uninvasive - good minimal system to install and has good documentation
This example system has "servicemesh.demo/home" -> nginx ingress controller
From there it goes to either videos-web (website) or playlist-api (api)
Playlist-api then goes to a redis DB, or to videos-api which has its own redis DB
kind create cluster --name linkerd --image kindest/node:v1.19.1
kubectl create ns ingress-nginx
kubectl apply -f applications/* # Load ingress controller and all microservices
kubectl get pods # Make sure microservices are all running
kubectl -n ingress-nginx get pods # Make sure ingress controller is running
# Fake DNS by editing /etc/hosts or C:\Windows\System32\drivers\etc\hosts
127.0.0.1 servicemesh.demo
kubectl -n ingress-nginx port-forward deploy/nginx-ingress-controller 80 # Expose website
# Look in dev tools and can see requests to the api to get data
# Open a docker container to make changes to the cluster
docker run -it --rm -v ${HOME}:/root -v ${PWD}:/work -w /work --net host alpine sh
apk add --no-cache curl vim
curl -LO KUBECTL_URL; chmod +x ./kubectl; mv ./kubectl /usr/local/bin/
curl -Lo linkerd LINKERD_URL; chmod +x ./linkerd; mv ./linkerd /usr/local/bin/
linkerd check --pre # Do pre-flight checks to see if cluster is compatible
linkerd install > linkerd.yaml # Spits out a yaml file to set up linkerd in the cluster
kubectl apply -f linkerd.yaml # Creates "linkerd control plane" - our apps = "data plane"
linkerd check # Check if installed properly
Control plane is what linkerd actually is - CLI and web pods talk to the controller API
Tap pod lets you listen in on service-to-service communication
Destination pod is a lookup service. Identity pod is a certificate authority
Proxy injector pod injects proxies into other pods. Sp-validator checks service profiles
Automatically uses prometheus and grafana to give metrics about pods
kubectl -n linkerd port-forward svc/linkerd-web 8084 # Then go to localhost:8084
# By default, no pods are meshed - need to inject the proxy to enable it
kubectl get deploy DEPLOY_NAME -o yaml # Get the deployment config of a pod
... | linkerd inject - # Outputs a new yaml config with linkerd enabled (annotation)
... | kubectl apply -f - # Pipe directly to kubectl to apply it
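Instead of injecting each deployment by hand, linkerd also honours the inject annotation at namespace level - a sketch, assuming the demo apps live in the default namespace (not shown in the video):
kubectl annotate namespace default linkerd.io/inject=enabled # Mesh everything in the namespace
kubectl -n default rollout restart deploy # Restart deployments so new pods get the proxy injected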
kubectl edit DEPLOYMENT # Lets you manually edit and apply a deployment manifest
# deployment.yaml
template:
  metadata:
    annotations:
      linkerd.io/inject: enabled
Apply this to all pods, including the ingress controller - the linkerd dashboard shows what is meshed
Changing a deployment config restarts all the pods - so need to port forward again
Once traffic is going, linkerd lets you see diagrams/tables of traffic data
Each pod gets its own grafana dashboard
Say there is a problem with some endpoint on an api, you can see it in the traffic data
Use "tap" to listen to traffic streams live - or use the web dashboard
linkerd tap deploy/web # Tap into website deployment
linkerd tap ns/test --to ns/prod # Tap "test" namespace filtered by requests to "prod"
Service profiles tell linkerd how to handle service requests
# Can automatically profile all traffic and output something like a wireshark pcap for services
linkerd profile -n default videos-api --tap deploy/videos-api --tap-duration 10s
# Can write service profiles by hand and have them matched against live requests
# videos-profile.yaml
apiVersion: linkerd.io/v1alpha2
kind: ServiceProfile
metadata:
  creationTimestamp: null
  name: videos-api.default.svc.cluster.local
  namespace: default
spec:
  routes:
  - condition:
      method: GET
      pathRegex: /.* # Get all endpoints
    name: GET ALL
    isRetryable: true # Enable service retry on matching items
linkerd routes -n default deploy/playlist-api --to svc/videos-api -o wide # Get traffic info
# Can see actual (no retries) and effective (with retries) success rates
# This lets you narrow down where issues are coming from, as retries would otherwise hide upstream failures
Having too many retries can strain a network - use retry budgets to restrict this
# Solve by changing the service profile - tweak according to needs
# (extending) videos-profile.yaml
spec:
  retryBudget:
    retryRatio: 0.2
    minRetriesPerSecond: 10
    ttl: 10s
# Mutual TLS is enabled by default between services
linkerd -n default edges deployment # Has ticks for "secured"
linkerd -n default tap deploy # Looks like tcpdump - all tls enabled
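For reference, the two videos-profile.yaml fragments above combine into one manifest (a sketch; field names and values taken from the notes):
# videos-profile.yaml (combined sketch)
apiVersion: linkerd.io/v1alpha2
kind: ServiceProfile
metadata:
  name: videos-api.default.svc.cluster.local
  namespace: default
spec:
  routes:
  - condition:
      method: GET
      pathRegex: /.*
    name: GET ALL
    isRetryable: true
  retryBudget:
    retryRatio: 0.2 # Retries may add at most 20% extra load
    minRetriesPerSecond: 10
    ttl: 10s
kubectl apply -f videos-profile.yaml # Re-apply to update the profile in place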
########################################################### 2024-01-15 07:50 ###########################################################
That Devops Guy... Istio Service mesh
https://www.youtube.com/watch?v=KUHzxTCe5Uc
Look at platform setup guides on Istio docs - different k8 environments to run on
# Set up kube cluster, ingress controller, application etc + change DNS
# Open docker alpine container to do work in
curl -L ISTIO_URL | ISTIO_VERSION=1.6.12 TARGET_ARCH=x86_64 sh - # Download istio
chmod +x istio-...; mv istio-.../bin/istioctl /usr/local/bin/; mv istio-... /tmp/
# Istio comes with a bunch of default manifest files you can look at
istioctl x precheck # Check cluster is compatible
Istio config profiles (default, minimal, remote etc.) are install types
istioctl profile list
istioctl install --set profile=default
istioctl proxy-status # See "proxy state of cluster"
istiod pod is the discovery/config pod - injects sidecar proxies into pods (when opted in)
Pilot manages traffic. Citadel manages certificates. Galley translates k8 yaml for istio
To opt in, either label a namespace or use istioctl to inject the proxy (deployment-based)
kubectl label namespace/default istio-injection=enabled # Only affects new pods
kubectl -n ingress-nginx get deploy nginx-ingress-controller -o yaml | \
  istioctl kube-inject -f - | kubectl apply -f - # Inject istio into the ingress controller
Istio ships with prometheus and metrics tracking
ls /tmp/istio-1.6.12/samples/addons # These are addons for services like prometheus
kubectl apply -f /tmp/.../prometheus.yaml # Add prometheus - also deploy grafana for graphs
kubectl -n istio-system port-forward svc/grafana 3000 # Make grafana visible
# Similarly add the kiali dashboard (need to apply twice, with time between)
kubectl -n istio-system port-forward svc/kiali 20001 # Similar to linkerd
kubectl logs videos-api-POD -c videos-api --tail 50 # Get logs from a pod
A virtual service can mitigate an issue (like a linkerd service profile)
# retries/videos-api.yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: videos-api
spec:
  hosts:
  - videos-api
  http:
  - route:
    - destination:
        host: videos-api
    retries:
      attempts: 10
      perTryTimeout: 2s
# Apply this virtual service and retries now mask the playlist-api issues
# Once the issue causing the retries is fixed, can just unapply this service
kubectl delete virtualservice videos-api
Might want to do traffic splits to test, say, another version of an API
# traffic-splits/videos-web.yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: videos-web
spec:
  hosts:
  - servicemesh.demo
  http:
  - route:
    - destination:
        host: videos-web
      weight: 80 # 80% to main
    - destination:
        host: videos-web-v2
      weight: 20 # 20% to test
# This is usually best for APIs and not websites, as it can cause issues - good for tests
# Typically set a cookie for specific users to use the test version
# canary/videos-web.yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: videos-web-canary
spec:
  hosts:
  - servicemesh.demo
  http:
  - match:
    - uri:
        prefix: / # Match everything
      headers:
        cookie:
          regex: ^(.*?;)?(version=v2)(;.*)?$ # Check for the item in the cookie header
    route:
    - destination:
        host: videos-web-v2 # Route matching traffic to v2
  - match:
    - uri:
        prefix: / # Match everything, but without the cookie header
    route:
    - destination:
        host: videos-web # Route everything else to v1
# Apply this and set a cookie version=v2 - this will enable v2 of videos-web
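Quick sketch for checking the canary rule (assumes the /etc/hosts entry and the ingress port-forward from the linkerd notes are still in place):
curl -s http://servicemesh.demo/ # No cookie - routed to videos-web (v1)
curl -s --cookie "version=v2" http://servicemesh.demo/ # Cookie matches the regex - routed to videos-web-v2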