Central Nginx

For routing incoming requests to different IPs in the cluster.

I mainly chose to do it via a single deployment because my internet provider is too incompetent to assign me a second IPv4 address without me having to pay a fortune.

cert.yaml

Contains a CR (Custom Resource) from the cert-manager project. If you installed my PowerDNS deployment and followed the DNS-01 setup, it should generate an appropriate wildcard cert for your domain.
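For orientation, a minimal sketch of what such a Certificate CR can look like. The resource name, namespace, secret name, and issuer name are assumptions; adjust them to whatever your cert-manager / DNS-01 setup actually uses:

```yaml
# Hypothetical example; apiVersion matches current cert-manager releases.
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: wildcard-cert            # assumed name
  namespace: tobias-huebner      # assumed namespace
spec:
  secretName: wildcard-cert-tls  # secret where cert and key get stored
  issuerRef:
    name: letsencrypt-dns01      # assumed ClusterIssuer configured for DNS-01
    kind: ClusterIssuer
  dnsNames:
    - "tobias-huebner.org"
    - "*.tobias-huebner.org"
```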

deploy.yaml

Contains the entire ConfigMap for my nginx reverse proxy setup.

Also contains services that expose the HTTP and HTTPS ports bound to my floating keepalived IP. In short, that is an IP managed by multiple servers: keepalived takes care of keeping it alive. If the server that currently holds the IP dies and the others notice (1 second timeout), one of them will take it over.
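A sketch of what such a service can look like. The floating IP, names, and pod label here are assumptions, not the actual values from deploy.yaml:

```yaml
# Hypothetical example: expose nginx on the floating keepalived IP.
apiVersion: v1
kind: Service
metadata:
  name: central-nginx            # assumed name
  namespace: tobias-huebner      # assumed namespace
spec:
  selector:
    app: central-nginx           # assumed pod label
  externalIPs:
    - 10.1.0.100                 # assumed floating IP managed by keepalived
  ports:
    - name: http
      port: 80
      targetPort: 80
    - name: https
      port: 443
      targetPort: 443
```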

Proxy passes

In order to dynamically proxy-pass to a service in another namespace, we do the following:

  1. We explicitly define a resolver in the http block of our nginx config. The following config uses the CoreDNS resolver (which every pod has access to anyway). You can get its IP by inspecting the CoreDNS service in the kube-system namespace.

We also tell nginx that answers from this resolver are valid for 60 seconds. That means whatever DNS record nginx resolved through it is cached for only 60 seconds; any request beyond that requires a new lookup.

http {
    # get your static coredns ip from the service located in the kube-system namespace
    resolver 10.2.0.10 valid=60s;
    ...
}
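The resolver IP above can be looked up with kubectl. Note the service is usually named kube-dns even when CoreDNS is the implementation behind it; your cluster may differ:

```shell
# Print the ClusterIP of the cluster DNS service in kube-system.
kubectl get svc -n kube-system kube-dns -o jsonpath='{.spec.clusterIP}'
```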
  2. We do some shenanigans in our individual server configs to prevent nginx from permanently caching our upstream hostnames.
server {
    ...
    
    location / {
        set $endpoint http://wiki.tobias-huebner.svc.cluster.local;
        proxy_pass $endpoint$request_uri;
        ...
    } 
}

Notice the following things:

  • We define an $endpoint variable. Using a variable forces nginx to re-resolve the hostname at request time; a hostname written directly into proxy_pass is resolved only once at startup and never updated.
  • We give the resolver the full name wiki.tobias-huebner.svc.cluster.local, i.e. servicename.namespace.svc.cluster.local. This is based on the standard entries Kubernetes writes into the resolv.conf of each pod. From inside a pod you may be able to just do ping wiki; that only works because resolv.conf contains search domains such as tobias-huebner.svc.cluster.local and svc.cluster.local, which are appended automatically.
  • We pass $endpoint$request_uri to proxy_pass. When proxy_pass uses a variable, the request URI is not appended automatically, but since nginx always sets $request_uri we can simply concatenate the two.
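For reference, the resolv.conf inside a pod typically looks something like this. The nameserver and search domains depend on your cluster; the namespace here is assumed:

```
# /etc/resolv.conf inside a pod in the tobias-huebner namespace (example)
nameserver 10.2.0.10
search tobias-huebner.svc.cluster.local svc.cluster.local cluster.local
options ndots:5
```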

Logging to Graylog

We use the following log_format to pipe all access logs to our Graylog syslog UDP input.

http {
	log_format graylog2_json escape=json '{ "timestamp": "$time_iso8601", '
	  '"remote_addr": "$remote_addr", '
	  '"body_bytes_sent": $body_bytes_sent, '
	  '"request_time": $request_time, '
	  '"response_status": $status, '
	  '"request": "$request", '
	  '"request_method": "$request_method", '
	  '"host": "$host",'
	  '"upstream_cache_status": "$upstream_cache_status",'
	  '"upstream_addr": "$upstream_addr",'
	  '"http_x_forwarded_for": "$http_x_forwarded_for",'
	  '"http_referrer": "$http_referer", '
	  '"http_user_agent": "$http_user_agent" }';

	access_log syslog:server=logs.tobias-huebner.org:12401 graylog2_json;

}

On our Graylog server we want to create a dedicated syslog UDP input. We also want to add two extractors, the first one on the field “message” using the pattern nginx:\s+(.*);

Now all that's left is to add the default JSON extractor on the field generated by the previous extractor.
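To illustrate what those two extractors do, here is a small Python sketch. The sample message is made up; the exact syslog prefix Graylog sees depends on your nginx and input settings:

```python
import json
import re

# Hypothetical sample of the "message" field as Graylog might store it.
message = ('nginx: { "timestamp": "2024-01-01T12:00:00+00:00", '
           '"response_status": 200, "request": "GET / HTTP/1.1" }')

# First extractor: same pattern as above, grabs everything after "nginx:".
match = re.search(r'nginx:\s+(.*)', message)
payload = match.group(1)

# Second extractor: the default JSON extractor turns the captured
# string into individual Graylog fields.
fields = json.loads(payload)
print(fields["response_status"])  # prints 200
```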