To centralize pod logs in CloudWatch Logs with a Kubernetes cluster provisioned by kops:
$ kops edit cluster
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: kops/v1alpha2
kind: Cluster
spec:
  ...
  additionalPolicies:
    node: |
      [
        {
          "Effect": "Allow",
          "Action": ["logs:CreateLogGroup", "logs:CreateLogStream", "logs:PutLogEvents"],
          "Resource": ["*"]
        }
      ]
    master: |
      [
        {
          "Effect": "Allow",
          "Action": ["logs:CreateLogGroup", "logs:CreateLogStream", "logs:PutLogEvents"],
          "Resource": ["*"]
        }
      ]
  docker:
    logDriver: awslogs
    logOpt:
      - awslogs-region=eu-west-1
      - awslogs-group=k8s
  ...
$ kops update cluster --yes # applies the additional IAM policies
$ kops rolling-update cluster --yes --force # recreates every cluster member (the Docker logDriver and logOpt are added to /etc/sysconfig/docker at first boot)
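After the rolling update, each node's Docker daemon should come up with the new driver. A sketch of what /etc/sysconfig/docker might then contain (the exact variable name depends on the base image used by kops; this is an assumption, not captured output):

```
# /etc/sysconfig/docker (sketch, written by kops at first boot)
DOCKER_OPTS="--log-driver=awslogs --log-opt awslogs-region=eu-west-1 --log-opt awslogs-group=k8s"
```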
Caveat: the CloudWatch log stream names correspond to the container IDs, which is not very practical...
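To map a pod back to its CloudWatch stream, the container ID from the pod's status can be used: by default the awslogs driver names each stream after the bare container ID, while Kubernetes reports it with a `docker://` prefix in `containerStatuses[].containerID`. A minimal sketch (the helper name is mine; feed it the containerID values from `kubectl get pod -o json`):

```python
def stream_name_for(container_id: str) -> str:
    """Derive the CloudWatch log stream name from a pod's containerID.

    Kubernetes reports container IDs as "docker://<hex-id>"; the awslogs
    driver uses the bare container ID as the stream name by default, so
    stripping the runtime prefix yields the stream name.
    """
    return container_id.split("://", 1)[-1]


# Example with a hypothetical container ID as reported in pod status:
stream_name_for("docker://" + "ab" * 32)  # returns the bare 64-char ID
```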
EDIT: with this method, the logs are no longer accessible via the kubectl logs command. So I do not recommend this approach...