Terraform bootstrap with statefile on S3


Bootstrapping Terraform with the state file on S3 has a chicken-and-egg problem: how do you manage the S3 bucket that holds the state file when that bucket does not exist yet?

The trick is to keep the state file local when initializing. The S3 bucket can then be created, and the state migrated to S3 afterwards.

To view a video on how to do this, go here.

Initialize with local state file

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.0"
    }
  }
}

provider "aws" {
  region = "eu-central-1"
}

resource "aws_s3_bucket" "terraform_state" {
  bucket        = "BUCKET-NAME-FOR-TF-STATE"
  force_destroy = true
}

resource "aws_s3_bucket_versioning" "terraform_bucket_versioning" {
  bucket = aws_s3_bucket.terraform_state.id
  versioning_configuration {
    status = "Enabled"
  }
}

resource "aws_s3_bucket_server_side_encryption_configuration" "terraform_state_crypto_conf" {
  bucket        = aws_s3_bucket.terraform_state.bucket 
  rule {
    apply_server_side_encryption_by_default {
      sse_algorithm = "AES256"
    }
  }
}

resource "aws_dynamodb_table" "terraform_locks" {
  name         = "BUCKET-NAME-FOR-TF-STATE-LOCKING"
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "LockID"
  attribute {
    name = "LockID"
    type = "S"
  }
}

After this, terraform init followed by terraform apply can be executed to create the S3 bucket and the DynamoDB table, with the state persisted to a local state file for now.
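
A minimal sketch of this first phase, assuming the configuration above is saved in the current working directory:

terraform init    # download the AWS provider and set up the (still local) state
terraform apply   # create the state bucket, versioning, encryption config and lock table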

Migrate backend to S3 statefile

A backend block can now be added to the terraform {} block, which will let Terraform migrate the state to S3/DynamoDB:

backend "s3" {
  bucket         = "BUCKET-NAME-FOR-TF-STATE"
  key            = "terraform.tfstate"
  region         = "eu-central-1"
  dynamodb_table = "BUCKET-NAME-FOR-TF-STATE-LOCKING"
  encrypt        = true
}
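
Placed inside the existing terraform {} block, the result would look roughly like this (same provider version and placeholder names as above):

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.0"
    }
  }

  backend "s3" {
    bucket         = "BUCKET-NAME-FOR-TF-STATE"
    key            = "terraform.tfstate"
    region         = "eu-central-1"
    dynamodb_table = "BUCKET-NAME-FOR-TF-STATE-LOCKING"
    encrypt        = true
  }
}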

Running terraform init again will detect the backend change and migrate the state; depending on the Terraform version it either asks interactively whether to copy the existing state to the new backend, or requires the -migrate-state flag.
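
A sketch of the migration step (the exact behaviour depends on the Terraform version):

terraform init                  # older versions prompt to copy the existing state to the new backend
terraform init -migrate-state   # newer versions need this flag to copy the local state to S3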