AWS CloudFormation Custom Resource

Giuseppe Borgese
May 20, 2020


This article comes from a practical case: we deploy everything in our pipeline with CloudFormation across several environments, so I cannot perform any manual actions.

As of today, 20th May 2020, it is possible to mount AWS EFS volumes inside an ECS task definition, but for both the EC2 and Fargate launch types the CloudFormation resources are not yet available.

So the only solution is to use a CloudFormation Custom Resource.

I’m not an expert CloudFormation writer and I don’t want to be, because I prefer to use Terraform, but in the day-to-day you need to find solutions and fix issues in environments created before your arrival.

If you want to study this topic in more depth, here is the link to the official documentation; below I’ll explain very quickly what I did.

The Task Definition

Using the AWS CloudFormation Custom Resource, I rewrote the task definition (the part shown in red in the diagram) so that it can support the EFS mount points.

We are using a modified version of the osixia/openldap image.

I have adapted the code I found in this repository for my purposes.

To build the AWS CloudFormation Custom Resource you need to define three special resources:

A Lambda role:

CustomResourceRole:
  Type: 'AWS::IAM::Role'

A Lambda function with a NodeJS script; its purpose is to read a block of custom data (next point) in the template and create the resource:

CustomResourceFunction:
  Type: 'AWS::Lambda::Function'

The task definition written in a custom data format that integrates with the CloudFormation template and lets you implement things not yet supported:

CustomTaskDefinition:
  Type: 'Custom::TaskDefinition'

CustomTaskDefinition:
  Type: 'Custom::TaskDefinition'
  Version: '1.0'
  Properties:
    ServiceToken: !GetAtt 'CustomResourceFunction.Arn'
    TaskDefinition: {
      containerDefinitions: [
        {
          name: "openldapservice",
          image: "osixia/openldap",
          memoryReservation: 1500,
          logConfiguration: {
            logDriver: "awslogs",
            options: {
              awslogs-group: <your open ldap group>,
              awslogs-datetime-format: "%Y-%m-%d %H:%M:%S.%L",
              awslogs-region: !Ref 'AWS::Region',
              awslogs-stream-prefix: <your prefix>
            }
          },
          portMappings: [
            {
              hostPort: 389,
              protocol: "tcp",
              containerPort: 389
            }
          ],
          command: [],
          environment: [
            # define your variables here, like this one
            {
              name: "LDAP_TLS",
              value: "true"
            }
          ],
          mountPoints: [
            {sourceVolume: "var-lib-ldap", containerPath: "/var/lib/ldap"},
            {sourceVolume: "etc-ldap-slapd", containerPath: "/etc/ldap/slapd.d"}
          ]
        }
      ],
      family: "openldapservice",
      taskRoleArn: "", # required for EFS permissions
      cpu: "256",
      memory: "2048",
      networkMode: "awsvpc",
      volumes: [
        {
          name: "var-lib-ldap",
          efsVolumeConfiguration: {
            fileSystemId: <put your file system id here, like fs-xxxxxx>
          }
        },
        {
          name: "etc-ldap-slapd",
          efsVolumeConfiguration: {
            fileSystemId: <put your file system id here, like fs-xxxxxx>
          }
        }
      ]
    }
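Since the Lambda signals the registered task definition ARN back as the physical resource ID, other resources in the template can reference it with !Ref. A minimal sketch of an ECS service using it (the service name, cluster reference, and other properties here are placeholders, not from my stack):

```yaml
# Hypothetical ECS service wiring; only the TaskDefinition reference
# reflects the custom resource above, the rest is a placeholder sketch.
OpenLdapService:
  Type: 'AWS::ECS::Service'
  Properties:
    Cluster: !Ref 'YourCluster'   # assumption: a cluster defined elsewhere
    DesiredCount: 1
    # !Ref on a custom resource returns its PhysicalResourceId, which
    # the Lambda set to the registered task definition ARN
    TaskDefinition: !Ref 'CustomTaskDefinition'
```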
CustomResourceFunction:
  Type: 'AWS::Lambda::Function'
  Properties:
    Code:
      ZipFile: |
        const aws = require('aws-sdk')
        const response = require('cfn-response')
        const ecs = new aws.ECS({apiVersion: '2014-11-13'})
        exports.handler = function(event, context) {
          console.log("REQUEST RECEIVED:\n" + JSON.stringify(event))
          if (event.RequestType === 'Create' || event.RequestType === 'Update') {
            ecs.registerTaskDefinition(event.ResourceProperties.TaskDefinition, function(err, data) {
              if (err) {
                console.error(err)
                response.send(event, context, response.FAILED)
              } else {
                console.log(`Created/Updated task definition ${data.taskDefinition.taskDefinitionArn}`)
                response.send(event, context, response.SUCCESS, {}, data.taskDefinition.taskDefinitionArn)
              }
            })
          } else if (event.RequestType === 'Delete') {
            ecs.deregisterTaskDefinition({taskDefinition: event.PhysicalResourceId}, function(err) {
              if (err) {
                if (err.code === 'InvalidParameterException') {
                  console.log(`Task definition: ${event.PhysicalResourceId} does not exist. Skipping deletion.`)
                  response.send(event, context, response.SUCCESS)
                } else {
                  console.error(err)
                  response.send(event, context, response.FAILED)
                }
              } else {
                console.log(`Removed task definition ${event.PhysicalResourceId}`)
                response.send(event, context, response.SUCCESS)
              }
            })
          } else {
            console.error(`Unsupported request type: ${event.RequestType}`)
            response.send(event, context, response.FAILED)
          }
        }
    Handler: 'index.handler'
    MemorySize: 128
    Role: !GetAtt 'CustomResourceRole.Arn'
    Runtime: 'nodejs10.x'
    Timeout: 30
CustomResourceRole:
  Type: 'AWS::IAM::Role'
  Properties:
    AssumeRolePolicyDocument:
      Version: '2012-10-17'
      Statement:
        - Effect: Allow
          Principal:
            Service: 'lambda.amazonaws.com'
          Action: 'sts:AssumeRole'
    Policies:
      - PolicyName: 'customresource'
        PolicyDocument:
          Statement:
            - Effect: Allow
              Action:
                - 'ecs:DeregisterTaskDefinition'
                - 'ecs:RegisterTaskDefinition'
              Resource: '*'
            - Effect: Allow
              Action:
                - 'logs:CreateLogGroup'
                - 'logs:CreateLogStream'
                - 'logs:PutLogEvents'
              Resource: '*'
            - Effect: Allow
              Action:
                - 'iam:PassRole'
              Resource: '*' # replace with the value of taskRoleArn

Verification

To be sure the running container was using EFS, I did the following:

  1. SSH into the EC2 instance where the container is running
  2. Find the running container with docker ps
  3. Log in to the running container
  4. Check the file system

Below you can see the output (I have anonymized the data):

docker ps
CONTAINER ID        IMAGE                                                                    COMMAND                  CREATED             STATUS              PORTS               NAMES
288d7970a6f6        000000000.dkr.ecr.eu-west-1.amazonaws.com/myname:openldap-with-testdata  "/bin/sh -c 'start.s…"   7 minutes ago       Up 7 minutes                            ecs-openldapservice-20-openldapservice-aaaaaaa000000

docker exec -it ecs-openldapservice-20-openldapservice-aaaaaaa000000 /bin/bash

root@ip:/# df -h
Filesystem                               Size  Used Avail Use% Mounted on
overlay                                   30G  1.9G   28G   7% /
tmpfs                                     64M     0   64M   0% /dev
tmpfs                                    7.8G     0  7.8G   0% /sys/fs/cgroup
/dev/nvme0n1p1                            30G  1.9G   28G   7% /etc/hosts
shm                                       64M     0   64M   0% /dev/shm
fs-xxxxxx.efs.eu-west-1.amazonaws.com:/  8.0E     0  8.0E   0% /var/lib/ldap
fs-xxxxxx.efs.eu-west-1.amazonaws.com:/  8.0E     0  8.0E   0% /etc/ldap/slapd.d
tmpfs                                    7.8G     0  7.8G   0% /proc/acpi
tmpfs                                    7.8G     0  7.8G   0% /sys/firmware

root@ip:/# ll /var/lib/ldap/
bash: ll: command not found
root@ip:/# ls /var/lib/ldap/
data.mdb  lock.mdb
root@ip-10-141-82-25:/# ls /etc/ldap/slapd.d
cn=config  cn=config.ldif  docker-openldap-was-admin-password-set  docker-openldap-was-started-with-tls

I have also tested writing a file, to be 100% sure I have write permissions:

root@ip:/var/lib/ldap# echo "ciao" > test.txt
root@ip:/var/lib/ldap# cat test.txt
ciao
root@ip:/var/lib/ldap# rm test.txt
root@ip:/var/lib/ldap# ls
data.mdb  lock.mdb

AWS Support for Custom CloudFormation Resources

After I had implemented my solution, AWS support replied with some general information.

Here is a video where they create an S3 object using a Custom Resource.

As you know, CloudFormation doesn’t have an S3 object resource type (Terraform, on the contrary, has one), so if you want to create one with CloudFormation you need a Custom Resource. There is also this documentation page with some suggestions.

Update 3rd July 2020 EFS on Fargate

With this small change in the CustomTaskDefinition you can have a task definition on Fargate with an EFS mount.

It is enough to add the line requiresCompatibilities: ["FARGATE"],

      taskRoleArn: !Ref CustomTaskDefinitionRole,
      requiresCompatibilities: ["FARGATE"],
      cpu: "256",
      memory: "2048",
      networkMode: "awsvpc",

We hope that CloudFormation adds the EFS feature soon, but in the meantime you can use this workaround.
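If your CloudFormation version has meanwhile gained native EFS support on AWS::ECS::TaskDefinition, the same volumes could be declared directly, without the custom resource. The sketch below is an assumption on my part; verify the exact property names against the current resource reference:

```yaml
# Assumed native form of the volumes above; check property names
# (e.g. FilesystemId) against the official documentation.
NativeTaskDefinition:
  Type: 'AWS::ECS::TaskDefinition'
  Properties:
    Family: openldapservice
    Volumes:
      - Name: var-lib-ldap
        EFSVolumeConfiguration:
          FilesystemId: fs-xxxxxx   # your EFS file system id
```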

Feedback

If you like this article and you want to motivate me to continue writing, please leave a clap or a comment.


Giuseppe Borgese

AWS DevOps Professional Certified — Book Author — Terraform Modules Contributor — AWS Tech Youtuber