Fluentd can send all Kubernetes or EKS logs to CloudWatch Logs, giving you a centralized and unified view of all the logs from the cluster, both from the nodes and from each container's stdout.
Installation
To send all node and container logs to CloudWatch, create a CloudWatch log group named kubernetes.
aws logs create-log-group --log-group-name kubernetes
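You can verify the log group was created before moving on:

aws logs describe-log-groups --log-group-name-prefix kubernetes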
Then install the fluentd-cloudwatch Helm chart. This will send logs from the nodes, containers, etcd, … to CloudWatch as defined in the default fluentd chart config.
helm install --name fluentd incubator/fluentd-cloudwatch \
  --set awsRegion=us-east-1,rbac.create=true
Each node needs permissions to write to CloudWatch Logs, so either add the permission using IAM instance profiles or pass the awsRole if you are using kube2iam.
helm install --name fluentd incubator/fluentd-cloudwatch \
  --set awsRole=arn:aws:iam::123456789012:role/k8s-logs,awsRegion=us-east-1,rbac.create=true,extraVars[0]="{ name: FLUENT_UID, value: '0' }"
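The same options can also be supplied through a values file instead of inline --set flags. A minimal sketch, assuming the parameter names used in the commands above and a hypothetical file called fluentd-values.yaml (check the chart's own values.yaml for the authoritative schema):

# fluentd-values.yaml -- mirrors the --set flags above
awsRegion: us-east-1
awsRole: arn:aws:iam::123456789012:role/k8s-logs
rbac:
  create: true
extraVars:
  - "{ name: FLUENT_UID, value: '0' }"

It is then installed with:

helm install --name fluentd incubator/fluentd-cloudwatch --values fluentd-values.yaml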
The k8s-logs role policy is configured as follows:
{ "Version": "2012-10-17", "Statement": [ { "Sid": "logs", "Effect": "Allow", "Action": [ "logs:CreateLogStream", "logs:PutLogEvents", "logs:DescribeLogGroups", "logs:DescribeLogStreams" ], "Resource": [ "arn:aws:logs:*:*:*" ] } ] }
Now you can go to CloudWatch and find your logs.
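To confirm from the command line that log streams are being created, you can list the most recently active streams in the kubernetes log group:

aws logs describe-log-streams \
  --log-group-name kubernetes \
  --order-by LastEventTime \
  --descending \
  --max-items 5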
I’m new to fluentd. Just deployed using helm for the first time. When I look at the pod logs, I see this:
`2018-12-10 21:16:37 +0000 [error]: unexpected error error_class=Errno::EACCES error=#`
I attached the CloudWatch Logs policy to the worker node IAM role.
Ever figure out the root cause?
Hi: your fluentd process needs to be run as root; see https://github.com/fluent/fluentd-kubernetes-daemonset/issues/173
Also, this should be installed via --values / a YAML file as opposed to inline, which doesn't work for some reason.
Doesn't work for me. It fails on the second step. It would be great if this were updated with the appropriate steps. I get this error: "Error: unknown flag: --name"
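"Error: unknown flag: --name" usually means Helm 3 is in use: Helm 3 removed the --name flag and takes the release name as a positional argument. A sketch of the equivalent Helm 3 invocation, keeping the chart and options from the original command:

# Helm 3 syntax: the release name is positional, --name no longer exists
helm install fluentd incubator/fluentd-cloudwatch \
  --set awsRegion=us-east-1,rbac.create=true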