All my 7 Terraform Pipelines

Giuseppe Borgese
5 min read · Aug 12, 2020


In DevOps, everybody loves pipelines, and I have tested a lot of them over the years.
Here are all the pipelines with Terraform I have ever tried.

Disclaimer

This article is not original content: it gathers my 7 LinkedIn posts on the topic. If you want to read the original posts with their comments, search for #learningterraforminsmallchunks on LinkedIn.

If you would like to follow my posts, check https://www.linkedin.com/in/giuseppe-borgese-64181a7/detail/recent-activity/shares/ daily.

1 — Terraform Cloud

Terraform Cloud https://app.terraform.io/ is for sure the best solution you can have, but it is not always applicable. For example, there are companies where you cannot use a SaaS service, or where your code repository doesn't have a public API exposed (this was my case).
Small note: the Workspaces in Terraform Cloud are different from the ones you use at the command line.

2 — Gitlab CI

Terraform plan and apply inside a Gitlab CI:

I used the official Terraform Docker Hub image https://lnkd.in/dZqNfBB
This means the Terraform binary is updated automatically (with the image), and the providers are updated too, because every run performs a terraform init.

A private Runner with a role, plus a Python script to assume roles in different accounts.

The GitLab schedule feature to run the apply every night for each environment.

The GitLab environment feature to split previous runs by environment (dev/test/prod in my case).

An automatic plan on the dev environment at every commit.

Email notification in case of error, for example deleting a non-empty S3 bucket.

The article that inspired my pipeline: https://lnkd.in/d8sHzFQ
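The post doesn't include the assume-role script itself; a minimal sketch of that step, assuming the AWS CLI is available on the Runner (the account ID and role name below are hypothetical placeholders), could look like this:

```python
import json
import subprocess

def role_arn(account_id: str, role_name: str) -> str:
    """Build the ARN of the role to assume in the target account."""
    return f"arn:aws:iam::{account_id}:role/{role_name}"

def assume_role_env(account_id: str, role_name: str) -> dict:
    """Assume the role via the AWS CLI and return the temporary
    credentials as environment variables for a Terraform run."""
    out = subprocess.run(
        ["aws", "sts", "assume-role",
         "--role-arn", role_arn(account_id, role_name),
         "--role-session-name", "terraform-pipeline"],
        check=True, capture_output=True, text=True,
    )
    creds = json.loads(out.stdout)["Credentials"]
    return {
        "AWS_ACCESS_KEY_ID": creds["AccessKeyId"],
        "AWS_SECRET_ACCESS_KEY": creds["SecretAccessKey"],
        "AWS_SESSION_TOKEN": creds["SessionToken"],
    }
```

The returned dictionary can then be merged into the environment of the terraform plan/apply subprocess, one target account per job.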

3 — Terraform inside Codebuild (plan/apply)

I did it for the first time in October 2017, when the CodeBuild image was at version 1.0 (now it is 3.0).

You can see it in action on my channel https://lnkd.in/dEaTbx9.

The Terraform binary was downloaded on every run.

If I had to do it now, I would use the official Terraform Docker Hub image instead of the standard CodeBuild one, but custom Docker images weren't possible in 2017.

Also, it was necessary to use a trick to load and convert Access Keys from the Role (this is no longer necessary).
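The post doesn't spell out the trick. One mechanism available inside CodeBuild is the container credentials metadata endpoint, so a sketch along those lines (the function names here are mine, not from the original setup) might be:

```python
import json
import os
import urllib.request

def creds_to_env(creds: dict) -> dict:
    """Map the metadata-endpoint credential fields onto the
    environment variables Terraform's AWS provider reads."""
    return {
        "AWS_ACCESS_KEY_ID": creds["AccessKeyId"],
        "AWS_SECRET_ACCESS_KEY": creds["SecretAccessKey"],
        "AWS_SESSION_TOKEN": creds["Token"],
    }

def codebuild_role_env() -> dict:
    """Fetch the build role's temporary credentials from the
    container metadata endpoint inside a CodeBuild run."""
    uri = os.environ["AWS_CONTAINER_CREDENTIALS_RELATIVE_URI"]
    with urllib.request.urlopen(f"http://169.254.170.2{uri}") as resp:
        return creds_to_env(json.load(resp))
```

With a modern Terraform AWS provider none of this is needed, because it picks up the container credentials on its own, which matches the note above that the trick is no longer necessary.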

But the purpose, giving the delivery team a way to run Terraform code that was as simple as possible, was achieved.

If you want to know more, take a look at the GitHub repo https://lnkd.in/d7_Uj26

4 — Codebuild Monitor system

A CodeBuild monitor system to detect changes that are not reported in the code.

It runs Terraform with Python and parses the results to check whether there are infrastructure changes not present in the code.
This is not really a pipeline, because it only runs terraform init and plan and never a terraform apply, but it is, in my opinion, a wonderful way to monitor the infrastructure and enforce the "infrastructure as code" paradigm.

How it works:

It assumes that you modify your infrastructure only with Terraform; every manual change is a violation, or at least a warning.
After you modify the code, you commit to master.

A Python program assumes the roles in the different accounts, performs a terraform init and plan, and if there are changes sends an email notification that also describes what the terraform plan wants to change.

There is also the possibility to set a number of allowed changes that don't trigger a notification.
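The original program isn't published in the post; a minimal sketch of the plan-parsing idea, including the allowed-changes threshold, assuming Terraform's standard "Plan: X to add, Y to change, Z to destroy" summary line, could be:

```python
import re
import subprocess

PLAN_RE = re.compile(r"Plan: (\d+) to add, (\d+) to change, (\d+) to destroy")

def count_changes(plan_output: str) -> int:
    """Count the resource changes reported in `terraform plan` output;
    zero when the summary line is absent (no changes)."""
    m = PLAN_RE.search(plan_output)
    return sum(int(g) for g in m.groups()) if m else 0

def check_drift(workdir: str, allowed_changes: int = 0) -> bool:
    """Run terraform init and plan in workdir; return True when the
    plan reports more changes than the allowed threshold."""
    subprocess.run(["terraform", "init", "-input=false"],
                   cwd=workdir, check=True)
    plan = subprocess.run(
        ["terraform", "plan", "-input=false", "-no-color"],
        cwd=workdir, capture_output=True, text=True, check=True,
    )
    return count_changes(plan.stdout) > allowed_changes
```

When check_drift returns True, the monitor would send the email notification described above, attaching the plan output so the reader can see what drifted.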

5 — Atlantis inside AWS Fargate

Atlantis https://lnkd.in/eWsNiSq is the ancestor of Terraform Cloud, and it is still available. I ran it in 2018 or 2019 (I don't remember exactly), and at that time it was the only free tool (Terraform Enterprise costs a lot of money) offering a web interface to manage Terraform.

The idea of Atlantis is that you can visualize a commit and the plan that the commit or merge produces. You can also put a simple approval flow in place. It is well explained on the product's main page.

I ran it inside AWS Fargate, configured with the module created by Anton Babenko https://lnkd.in/ePx95s5, but there are other ways to run it on different services.

Terraform Cloud is for sure more advanced than Atlantis, but it runs inside HashiCorp's infrastructure, and that is not acceptable for many companies. Atlantis, instead, runs inside your own infrastructure with your IAM Role attached.

6 — Jenkins with Terraform Binary inside

For sure this is not my first choice, especially now in 2020 with all the other solutions available on the market, but in some cases it can be a solution.

In 2017 I wrote an article on Linux Academy with all the details on integrating Jenkins and AWS CodeCommit with Terraform: https://lnkd.in/dT3b4Zt
Now there is also a Terraform plugin for Jenkins: https://lnkd.in/dT5zTQR

7 — Lambda with Terraform binary inside.

This is not a good idea, and for this reason it is the last episode.
It is also the only pipeline I didn't try myself, but I list it here because there are a few cases where it can make sense.

Keep in mind that Lambda is built for microservices and has limitations on duration (15 minutes), disk space, and the size of the .zip file you can upload with your code and run instructions.

Feedback

If you like this article, and you want to motivate me to continue writing, please:


Written by Giuseppe Borgese

AWS DevOps Professional Certified — Book Author — Terraform Modules Contributor — AWS Tech Youtuber
