How To Log Docker Swarm On EC2 To CloudWatch

Adnan Sabanovic
4 min read · Mar 2, 2021

If you are building an application, whether a small one or an enterprise one, as long as you think it has value, it is worth protecting.

The tenth item on the OWASP Top 10 list is Insufficient Logging and Monitoring.

When you first install Docker Swarm, the default log driver is set to json-file. This means the logs are written to files on the instance itself, and you can only inspect them for as long as the containers exist.

And here is the key moment.

When your container crashes and Swarm replaces it, you have no way to determine what happened.

For that, you need to keep your logs outside of your EC2 instance so you can review them later.
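
As a side note, you can see exactly where the default json-file driver writes a container's logs on the host (the container ID below is a placeholder):

docker inspect --format '{{.LogPath}}' <container-id>
# Prints something like /var/lib/docker/containers/<id>/<id>-json.log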

Docker already comes with an array of supported logging drivers to choose from. (awslogs is one of them.)

Challenge #1 (Obtaining the right IAM permissions)

The first thing to do, before you can even touch the Docker configuration, is to set everything up outside of EC2. We are going straight to IAM.

Create a CloudWatch policy

You can call it CloudwatchAccessLog
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": "*"
    }
  ]
}
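
If you prefer the CLI over the console, the same policy can be created with a call like this, assuming you saved the JSON above as cloudwatch-policy.json:

aws iam create-policy \
  --policy-name CloudwatchAccessLog \
  --policy-document file://cloudwatch-policy.json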

Create IAM Role

You can call your role DockerSwarmManager

Then attach this role to your running EC2 instance.

You can go to your EC2 > select your instance > ACTIONS > Security > Modify IAM Role > Pick your new role.
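
The same attachment can be done from the CLI; the instance ID below is a placeholder, and the instance profile carrying the role's name is created for you automatically when you create the role in the console:

aws ec2 associate-iam-instance-profile \
  --instance-id <your-instance-id> \
  --iam-instance-profile Name=DockerSwarmManager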

Now your role is attached to the instance. This is THE SAFEST way to provide access credentials. If you have been researching this topic, you have probably run into examples that create an IAM user with programmatic access, take the AWS_KEY and AWS_SECRET, and then store them somewhere on the instance manually. (It could be automated.)

With a role, however, your EC2 instance looks in a couple of default places where it can read the KEY and the SECRET. You can read more about it here. Basically, the instance makes calls to the instance metadata service, backed by the instance profile (which was created automatically for you when you created your IAM role and attached it to the instance), to obtain a temporary KEY and SECRET, so the applications on your EC2 can make calls to other AWS resources.

However, keep in mind what it says at the beginning:

“When an IAM role is attached to the instance, the AWS CLI automatically and securely retrieves the credentials from the instance metadata.” This works like magic, and we don’t need to deal with custom profiles.
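
If you are curious, you can watch this mechanism from inside the instance. These are the standard metadata endpoints (IMDSv1 shown for brevity; instances that enforce IMDSv2 need a session token first):

# Lists the role attached to this instance
curl http://169.254.169.254/latest/meta-data/iam/security-credentials/
# Returns the temporary KEY, SECRET and session token for that role
curl http://169.254.169.254/latest/meta-data/iam/security-credentials/DockerSwarmManager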

Challenge #2 (Setting up AWS-CLI)

You will need to set up aws-cli so your instance can make calls to the instance metadata. That is what allows it to retrieve the temporary KEY and SECRET. (Again, this all happens automatically behind the scenes.)

Use this page to set up aws-cli following the instructions.

Once aws-cli is installed, verify that it works, then we can move on to CloudWatch.
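
Two quick checks, one for the installation and one for the role-based credentials; the second command should report your DockerSwarmManager role rather than an IAM user:

aws --version
aws sts get-caller-identity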

Challenge #3 (Creating CloudWatch log group and log stream)

This one is not that difficult. Just go to CloudWatch and create a Log Group with a Log Stream inside of it. Call them anything you want, for example docker-swarm-log-group and docker-swarm-log-stream.
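
If you would rather script it, the same two resources can be created from the CLI:

aws logs create-log-group --log-group-name docker-swarm-log-group
aws logs create-log-stream \
  --log-group-name docker-swarm-log-group \
  --log-stream-name docker-swarm-log-stream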

Challenge #4 (Configuring docker)

Go back to your EC2, ssh into it and then run sudo service docker stop. We want to stop the docker daemon from running.

By default, Docker doesn’t have an /etc/docker/daemon.json file, which is where you add extra configuration for the Docker daemon and, through it, your services and containers.

Create that file and put this in it:

{"log-driver": "awslogs","log-opts": {"awslogs-region": "us-west-2","awslogs-group": "docker-swarm-log-group","awslogs-stream": "docker-swarm-log-stream"}}

IMPORTANT NOTE

If you are running Docker Swarm with more than one task (more than one container, for example if you have scaled a service to two or more replicas), then you should remove the last line from daemon.json.

Remove this line (and the comma at the end of the line above it, so the JSON stays valid):

"awslogs-stream": "docker-swarm-log-stream"

The reason is that two different containers writing into the same stream will cause the awslogs driver to fail to write messages. You can see the resulting errors inside /var/log/syslog.
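
For a scaled service, the file then ends up looking like this (a sketch of the same config minus the stream entry; with no awslogs-stream set, Docker uses each container's ID as its stream name, so every task gets its own stream):

sudo tee /etc/docker/daemon.json > /dev/null <<'EOF'
{
  "log-driver": "awslogs",
  "log-opts": {
    "awslogs-region": "us-west-2",
    "awslogs-group": "docker-swarm-log-group"
  }
}
EOF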

Make a note of your awslogs-region, which does not have to be the region your instance runs in: it is the CloudWatch region that holds the log group and stream you created.

Now just start Docker back up:

sudo service docker start

If you go to CloudWatch, you will see that your application is now being logged there.

You can check which logging driver Docker has selected, and review your AWS configuration, using two helpful commands:

aws configure list
docker info --format '{{.LoggingDriver}}'
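
To confirm that events are actually arriving, you can pull the latest ones back without leaving the terminal (this stream name applies to the single-stream setup above):

aws logs get-log-events \
  --log-group-name docker-swarm-log-group \
  --log-stream-name docker-swarm-log-stream \
  --limit 10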
