Rego Rules with Duplicate Names

Rego is the language used by the Open Policy Agent. I use it with Conftest for infrastructure testing. Together, they’re great for writing policies that enforce requirements in things like Terraform HCL and Kubernetes YAML. Check out the conftest usage guide for an example.

In Rego, you write rules that check whether data structures look the way you expect. Rules are kind of like functions in Python: they contain the logic you write.

Two rules can have the same name. The way Rego handles that surprised me. I wrote some bugs before I got used to it.


In Python, if two functions have the same name:

def f1():
    return 'first'

def f1():
    return 'second'

print(f1())

The second definition replaces the first, so it’s the one that runs:

python ./test.py
second

In Rego, the value of a rule is a set containing the values of all the rules that have the same name.

If we use the Rego Playground to evaluate this rule:

even_or_odd[value] {
    contains(input.message, "one")
    value := "odd"
}

With this input:

{
    "message": "one two three"
}

We get a set with one value:

{
    "even_or_odd": [
        "odd"
    ]
}

We can confirm it’s a set with the built-in is_set function in a new rule:

even_or_odd[value] {
    contains(input.message, "one")
    value := "odd"
}

even_or_odd_is_set[value] {
    value := is_set(even_or_odd)
}

This evaluates to:

{
    "even_or_odd": [
        "odd"
    ],
    "even_or_odd_is_set": [
        true
    ]
}

If we add another rule that’s also named even_or_odd and that has a new value:

even_or_odd[value] {
    contains(input.message, "one")
    value := "odd"
}

even_or_odd[value] {
    contains(input.message, "two")
    value := "even"
}

We get the new value in the even_or_odd set:

{
    "even_or_odd": [
        "even",
        "odd"
    ]
}

Sets contain unique values, so if we add a new rule with an old value:

even_or_odd[value] {
    contains(input.message, "one")
    value := "odd"
}

even_or_odd[value] {
    contains(input.message, "two")
    value := "even"
}

even_or_odd[value] {
    contains(input.message, "three")
    value := "odd" # This value already exists in the set.
}

We get the same two values (not a third):

{
    "even_or_odd": [
        "even",
        "odd"
    ]
}
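This set behavior is what Conftest policies rely on: every rule named deny contributes its message to one set, and Conftest reports everything in that set. Here’s a minimal sketch (the rules and messages are illustrative, not from the Conftest docs):

package main

deny[msg] {
    input.kind == "Service"
    input.spec.type == "LoadBalancer"
    msg := "Services must not be type LoadBalancer"
}

deny[msg] {
    input.kind == "Service"
    not input.metadata.labels.app
    msg := "Services must have an app label"
}

If both bodies match, both messages end up in the deny set and Conftest reports both failures. Because it’s a set, two rules that produce the same message only report it once.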

These are small details, but they weren’t obvious to me when I started. Hopefully they save you some troubleshooting.


Just Use the Tags!

There I was at KubeCon, hanging out with some friends from Nebulaworks. We were talking about deploying container images.

“I always tell people to use the SHA256 digest of containers. That way you know exactly what you’re deploying,” said one of my buddies.

“Sure, but that’s a pain,” I replied, not thinking it through. “Just use the tags! That’s what everybody does.”


Ninety minutes later, we’re sitting in a talk about container registries and some of the risky things you can do with them.

“You should always deploy by digest,” said Jon, one of the presenters. “When you deploy by tag, you’re basically piping curl into bash in production. Don’t do that.”

To show what’s possible, they implemented a chat service using pushes and pulls from a registry. “It’s built on what Jon has advised us to never do: pull by tag,” Jon’s colleague explained. This was just a fun example, but it was also a scary illustration of the scope of what a malicious user could do under the right circumstances.

My friend leaned over and loud-whispered “SHA256!”

FAIL.

“Just use the tags!” was a naive response.

I should have asked more questions. “Everybody deploys by tags, is that just a common mistake? A lot of registries have a feature that makes tags immutable, is that equivalent?” That would have left room for me to learn something.

There’s so much knowledge out there. You’ll inevitably miss something. No matter how much you know, you’re still usually better off asking questions than making statements.


One Dollar Scissors

I bought a pair of scissors for a dollar. They were terrible, but they opened packages for ten years. I’d still be using them if I hadn’t lost them. After I lost them, I bought a better pair. They cost twenty dollars. I used them to open packages.

Tool quality is often defined by comparison. My one dollar scissors were lower quality than their twenty dollar replacement.


Tool quality should be defined by outcomes. My one dollar scissors opened my packages. My twenty dollar scissors opened my packages. They delivered the same outcome. I should have bought a one dollar replacement and saved myself nineteen bucks.

This is easy to see when you’re buying scissors. It can be hard to see when you’re shipping software.

Does your app really need to be redundant across multiple geographic regions? If the West Coast region goes offline for an hour, how much will that cost you? Because if it’s less than the cost of building a system that can automatically fail over to the East Coast, consider taking the risk. Maybe you only need one dollar scissors.


The Real Cost of Tech Debt

“Technical debt” is engineering jargon for the work left behind when you cut corners.

You cancel the automatic updates on your laptop because you don’t have time to download and restart. Going back and installing those updates is now tech debt.

You need a CI/CD pipeline for one of your apps. You’re in the middle of five other features and you don’t have time to write automation that creates one. You click around in Azure DevOps and set everything up manually. Recreating your pipeline with automation is now tech debt.

Engineering is full of these compromises. The debt they create is manageable if you don’t accumulate too much of it and if you pay it back fast enough. It’s hard to know how much is too much and how fast is fast enough. Bills come due suddenly and with bigger finance charges than you realized.


The updates you didn’t install turn out to be critical security patches. You get breached. There’s an investigation. Logs show you skipped the update that would have prevented the breach.

When you’re not looking, other engineers copy your temporary pipeline to deploy the primary apps of the company. You try to replace those pipelines with automation and discover it’ll require multiple outages that each impact revenue.

Even when it doesn’t create disasters, tech debt can be expensive. I’ve worked on several projects where 80% of my time was spent paying back tech debt. Most of what I was paid went to servicing debt instead of building features. The delays in feature development also delayed product launches, which led to lost revenue.

Tech debt feels like this:

Cutting corners increases the cost of development by 10% to 20%, but you deliver 10% to 20% faster.

Wrong

Tech debt is actually like this:

Cutting corners is ten times more expensive than doing it right, and sometimes it causes business disasters.

Reality

Even if you need to ship quickly and cheaply, quality is usually the right approach. You can cut a few corners, but you’re gambling every time. When the dice don’t land in your favor, you can suffer outages and breaches and high costs and all kinds of other problems.


What Code Review Can’t Do

When developers complete a feature, they (hopefully) submit it to their colleagues for review. GitHub does this with pull requests. Azure Repos has its own flavor. GitLab uses merge requests. There are many tools.

Code review increases the quality of contributions. It enables the team to have your back. Reviewers might catch mistakes you missed or share ideas you didn’t think of.

But reviewers can’t guarantee the code is as good as it would have been if they’d written it themselves. Review doesn’t substitute for seniority.

That’s easy to see if we scale up to an extreme example. Five interns plus one senior reviewer doesn’t add up to six seniors. They actually add up to zero seniors because the one you have will be so busy with reviews and helping that they won’t get any work done.

Development is dynamic and creative. You stare into an ocean of tooling and a blank screen and invent a way to implement a feature. There are always many approaches. Some work well. Some don’t. Some seem like they will and then don’t. Sometimes you get halfway through your second approach before you finally realize what you should have done. It takes time and a lot of willingness to rework your own work. Reviewers aren’t spending that time and doing those reworks. They’re looking from a distance at something they didn’t write. They can’t catch everything.

A skilled reviewer can raise the quality of the code they review by 10%-15%. That’s huge value! But it’s also only a little bit of the total. Most of the value still comes from the skills of the developer doing the initial implementation.


HTTP Downloads in Old and New PowerShell Versions

There are primarily two ways to download files over HTTP in PowerShell: the Invoke-WebRequest cmdlet and the .NET WebClient class. They’re similar to Linux tools like wget and curl. Which one you need depends on whether you’re using older or newer PowerShell versions. In older versions, it also depends on the file’s size.

In older Posh, Invoke-WebRequest can be slow. Downloading a 1.2 GB Ubuntu ISO file took 1 hour. Downloading the same file with the WebClient class took less than 5 minutes. We tested on 5.1, the version that came out-of-box with Windows 10 Enterprise.

In newer Posh, Invoke-WebRequest performed the same as the WebClient class. We tested on 7.1 running in both Windows 10 and OS X.

If you’re downloading a small file or you’re using the latest version of Posh, use Invoke-WebRequest. We prefer it because it’s idiomatic PowerShell: Invoke-WebRequest is a built-in cmdlet. If you’re downloading a large file with the Posh that came out-of-box with Windows, you may need the WebClient class.

Digging through release notes, old documentation, and some other sources didn’t turn up the exact point in PowerShell’s history where this changed, but it may have been this port. If you know where to find the specific change, we’d love to see it!

We’ll demonstrate each way with this URL and file name:

$Url = [uri]"https://releases.ubuntu.com/20.04.3/ubuntu-20.04.3-live-server-amd64.iso"
$FileName = $Url.Segments[-1]

For details on how we got the file name and what the [uri] prefix means, check out our article on splitting URLs.

Invoke-WebRequest (Preferred Way)

Invoke-WebRequest $Url -OutFile "./$FileName"

This creates an ubuntu-20.04.3-live-server-amd64.iso file in the current directory. It shows progress while it runs.

WebClient Class (Old Way for Large Files)

When we passed just a file name or a file name in the ./ relative path, WebClient downloaded to the $HOME folder. To avoid that, we constructed an absolute path using the directory of the script that ran our test code:

$LocalFilePath = Join-Path -Path $PSScriptRoot -ChildPath $FileName
(New-Object System.Net.WebClient).DownloadFile($Url, $LocalFilePath)

This creates an ubuntu-20.04.3-live-server-amd64.iso file in the same directory as the script that runs the code. It doesn’t show progress while it runs.
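If one script has to handle both old and new Posh, a simple option (a sketch we haven’t benchmarked, combining the two methods above) is to branch on the running PowerShell version:

# Assumes the $Url and $FileName variables from the examples above.
# PowerShell 6+ reports a major version of 6 or higher; Windows PowerShell 5.1 reports 5.
if ($PSVersionTable.PSVersion.Major -ge 6) {
    Invoke-WebRequest $Url -OutFile "./$FileName"
}
else {
    $LocalFilePath = Join-Path -Path $PSScriptRoot -ChildPath $FileName
    (New-Object System.Net.WebClient).DownloadFile($Url, $LocalFilePath)
}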

Happy automating!



PowerShell: Splitting URL Strings

Sometimes, you have a full URL string but you only need one of its components. PowerShell makes it easy to split them out. Suppose we need the name of the ISO file from here:

$UbuntuIsoUrlString = "https://releases.ubuntu.com/20.04.3/ubuntu-20.04.3-desktop-amd64.iso"

We want the substring ubuntu-20.04.3-desktop-amd64.iso. We could use the split operator:

> $($UbuntuIsoUrlString -split "/")[-1]                                                       
ubuntu-20.04.3-desktop-amd64.iso

This divides the string into the components between the / characters and stores each one in an array. The [-1] index selects the last element of that array. This works for the ISO name, but it fails in other cases. Suppose we need the scheme:

> $($UbuntuIsoUrlString -split "/")[0] 
https:

The scheme is https, but we got https: (with a colon). We were splitting specifically on / (slash) characters. The colon isn’t a slash, so split counted it as part of the first component. Split doesn’t understand URLs. It just divides the string whenever it sees the character we told it to split on. We could strip the colon off after we split, but there’s a better way.

.NET has a class that can be instantiated to represent URI objects. URLs are a type of URI. If we cast our string to a URI, we can use the properties defined by that class to get the scheme:

> $UbuntuIsoUri = [System.Uri]$UbuntuIsoUrlString
> $UbuntuIsoUri.Scheme
https

This class understands URLs. It knows the colon is a delimiter character, not part of the scheme. It excludes that character for us.

We can shorten this a bit with the URI type accelerator:

> $UbuntuIsoUri = [uri]$UbuntuIsoUrlString
> $UbuntuIsoUri.Scheme
https

If we want to get the ISO name from this object, we can use the Segments property:

> $UbuntuIsoUri.Segments[-1]
ubuntu-20.04.3-desktop-amd64.iso

Segments returns an array of all the path segments. We get the last one with the [-1] index.

Let’s make the whole operation a one-liner so it’s easy to copy/paste:

> ([uri]$UbuntuIsoUrlString).Segments[-1]
ubuntu-20.04.3-desktop-amd64.iso
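The same Uri object exposes the other URL components as properties too. For example, with the Ubuntu URL from above:

> $UbuntuIsoUri.Host
releases.ubuntu.com
> $UbuntuIsoUri.AbsolutePath
/20.04.3/ubuntu-20.04.3-desktop-amd64.iso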

That’s the Posh way to process URLs! Cast to a URI object, then read whatever data you need from that object’s properties. As always, PowerShell is all about objects.

Happy automating!



Which Way to Write IAM Policy Documents in Terraform

There are many ways to write IAM policy documents in terraform. In this article, we’ll cover the most common patterns and explain why we do or don’t use each one.

For each pattern, we’ll create an example policy using the last statement of this AWS example. It’s a good test case because it references both an S3 bucket name and an IAM user name, which we’ll handle differently.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::bucket-name/home/${aws:username}",
                "arn:aws:s3:::bucket-name/home/${aws:username}/*"
            ]
        }
    ]
}


Inline jsonencode() Function

This is what we use. You’ll also see it in HashiCorp examples.

resource "aws_s3_bucket" "test" {
  bucket_prefix = "test"
  acl           = "private"
}

resource "aws_iam_policy" "jsonencode" {
  name = "jsonencode"
  path = "/"

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Action = [
          "s3:*",
        ]
        Effect = "Allow"
        Resource = [
          "${aws_s3_bucket.test.arn}/home/$${aws:username}",
          "${aws_s3_bucket.test.arn}/home/$${aws:username}/*"
        ]
      },
    ]
  })
}
  • ${aws_s3_bucket.test.arn} interpolates the ARN of the bucket we’re granting access to.
  • $${aws:username} escapes interpolation to render a literal ${aws:username} string. ${aws:username} is an AWS IAM policy variable. IAM’s policy variable syntax collides with terraform’s string interpolation syntax, so we have to escape it; otherwise terraform expects a variable named aws:username.
  • If you need it, the rendered policy JSON can be referenced with aws_iam_policy.jsonencode.policy. A quick sketch follows.
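For example, an output like this (a sketch, not part of the original module) would let you inspect the rendered JSON with terraform output after an apply:

output "rendered_policy_json" {
  value = aws_iam_policy.jsonencode.policy
}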

Why we like this pattern:

  • It declares everything in one resource.
  • The policy is written in HCL. Terraform handles the conversion to JSON.
  • There are no extra lines or files like there are in the following patterns. It only requires the lines to declare the resource and the lines that will go into the policy.

aws_iam_policy_document Data Source

The next-best option is the aws_iam_policy_document data source. It’s 95% as good as jsonencode().

resource "aws_s3_bucket" "test" {
  bucket_prefix = "test"
  acl           = "private"
}

data "aws_iam_policy_document" "test" {
  statement {
    actions = [
      "s3:*",
    ]
    resources = [
      "${aws_s3_bucket.test.arn}/home/&{aws:username}",
      "${aws_s3_bucket.test.arn}/home/&{aws:username}/*",
    ]
  }
}

resource "aws_iam_policy" "aws_iam_policy_document" {
  name = "aws_iam_policy_document"
  path = "/"

  policy = data.aws_iam_policy_document.test.json
}
  • The bucket interpolation works the same as in the jsonencode() pattern above.
  • &{aws:username} is an alternate way to escape interpolation that’s specific to this data source. See the note in the resource docs. Like above, it renders a literal ${aws:username} string. You can still use $${} escaping in these declarations; the &{} syntax is just another option.

Why we think this is only 95% as good as jsonencode():

  • It requires two resources instead of one.
  • It requires several more lines of code.
  • The different options for escaping interpolation can get mixed together in one declaration, which makes for messy code.
  • The alternate interpolation escape syntax is specific to this resource. If it’s used as a reference when writing other code, it can cause surprises.

These aren’t big problems. We’ve used this resource plenty of times without issues. It’s a fine way to render policies; we just think the jsonencode() pattern is a little cleaner.

Template File

Instead of writing the policy directly in one of your .tf files, you can put it in a .tpl template file and render it later with templatefile(). If you don’t need to interpolate any variables, you could use file() instead of templatefile().

First, you need a template. We’ll call ours test_policy_jsonencode.tpl.

${jsonencode(
  {
    Version = "2012-10-17",
    Statement = [
      {
        Effect = "Allow",
        Action = "s3:*",
        Resource = [
          "${bucket}/home/$${aws:username}",
          "${bucket}/home/$${aws:username}/*"
        ]
      }
    ]
  }
)}

Then, you can render the template into your resources.

resource "aws_s3_bucket" "test" {
  bucket_prefix = "test"
  acl           = "private"
}

resource "aws_iam_policy" "template_file_jsonencode" {
  name = "template_file_jsonencode"
  path = "/"

  policy = templatefile(
    "${path.module}/test_policy_jsonencode.tpl",
    { bucket = aws_s3_bucket.test.arn }
  )
}
  • The interpolation and escape syntax is the same as in the jsonencode() example above.
  • The jsonencode() call wrapped around the contents of the .tpl file allows us to write HCL instead of JSON.
  • You could write a .tpl file containing raw JSON instead of wrapping HCL in jsonencode(), but then you’d be mixing another language into your module. We recommend standardizing on HCL and letting terraform convert to JSON.
  • templatefile() requires you to explicitly pass every variable you want to interpolate in the .tpl file, like bucket in this example.

Why we don’t use this pattern:

  • It splits the policy declaration across two files. We find this makes modules harder to read.
  • It requires two variable references for every interpolation: one to pass the value through to the template and another to resolve it into the policy. These are tedious to maintain.

In the past, we used these for long policies to help keep our .tf files short. Today, we use the jsonencode() pattern and declare long aws_iam_policy resources in dedicated .tf files. That keeps the policy separate but avoids the overhead of passing through variables.

Heredoc Multi-Line String

You can use heredoc multi-line strings to construct JSON. The HashiCorp docs specifically say not to do this, so we won’t include an example of constructing policy JSON with them. If you have policies rendered in blocks like this:

<<EOT
{
    "Version": "2012-10-17",
    ...
}
EOT

We recommend replacing them with the jsonencode() pattern.

Happy automating!



Allowing AWS IAM Users to Manage their Passwords, Keys, and MFA

We do these three things for IAM users that belong to humans:

  • Set a console access password and rotate it regularly. We don’t manage resources in the console, but its graphical UI is handy for inspection and diagnostics.
  • Create access keys and rotate them regularly. We use these with aws-vault to run things like terraform (there’s a sample command after this list).
  • Enable a virtual Multi-Factor Authentication (MFA) device. AWS accounts are valuable resources. It’s worthwhile to protect them with a second factor of authentication.
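Here’s the kind of aws-vault command we mean, as a sketch (the profile name is hypothetical):

aws-vault exec admin-profile -- terraform plan

aws-vault reads the stored access keys for that profile and hands the command short-lived credentials.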

There’s much more to managing IAM users, like setting password policies and enforcing key rotation. These are just three good practices we follow.

Users with the AdministratorAccess policy can do all three, but that’s a lot of access. Often, we don’t need that much. Maybe we’re just doing investigation and ReadOnlyAccess is enough. Maybe users have limited permissions and instead switch into roles with elevated privileges (more on this in a future article). In cases like those, we need a policy that allows users to manage their own authentication. Here’s what we use.

This article is about enabling human operators to responsibly manage their accounts. Service accounts used by automation and security policy enforcement are both topics for future articles.


Console Access Policy Statements

This one is easy. The AWS docs have a limited policy that works.

{
    "Sid": "GetAccountPasswordPolicy",
    "Effect": "Allow",
    "Action": "iam:GetAccountPasswordPolicy",
    "Resource": "*"
},
{
    "Sid": "ChangeSelfPassword",
    "Effect": "Allow",
    "Action": "iam:ChangePassword",
    "Resource": "arn:aws:iam::[account id without hyphens]:user/${aws:username}"
}

Access Key Policy Statements

This one is also easy. The AWS docs have a limited policy that works. We made a small tweak.

{
    "Sid": "ManageSelfKeys",
    "Effect": "Allow",
    "Action": [
        "iam:UpdateAccessKey",
        "iam:ListAccessKeys",
        "iam:GetUser",
        "iam:GetAccessKeyLastUsed",
        "iam:DeleteAccessKey",
        "iam:CreateAccessKey"
    ],
    "Resource": "arn:aws:iam::[account id without hyphens]:user/${aws:username}"
}
  • The AWS policy uses * in the account ID component of the ARN. We like to set the account ID so we’re granting the most specific access we can. Security scanning tools also often check for * characters, and removing them reduces the number of flags.
  • Like above, ${aws:username} is an IAM policy variable. See links there for how to handle this in terraform.
  • We changed the sid from “ManageOwn” to “ManageSelf” so it doesn’t sound like it allows taking ownership of keys for other users.

MFA Device Policy Statements

This one was trickier. We based our policy on an example from the AWS docs, but we made several changes.

{
    "Sid": "ManageSelfMFAUserResources",
    "Effect": "Allow",
    "Action": [
        "iam:ResyncMFADevice",
        "iam:ListMFADevices",
        "iam:EnableMFADevice",
        "iam:DeactivateMFADevice"
    ],
    "Resource": "arn:aws:iam::[account id without hyphens]:user/${aws:username}"
},
{
    "Sid": "ManageSelfMFAResources",
    "Effect": "Allow",
    "Action": [
        "iam:DeleteVirtualMFADevice",
        "iam:CreateVirtualMFADevice"
    ],
    "Resource": "arn:aws:iam::[account id without hyphens]:mfa/${aws:username}"
}
  • Like we talked about above, our goal is to enable users to follow good practices. We selected statements that enable but not ones that require.
  • The AWS example included arn:aws:iam::*:mfa/* in the resources for iam:ListMFADevices. According to the AWS docs for the IAM service’s actions, this permission only supports user in the resources list. We removed the mfa resource.
  • Also according to the AWS docs for the IAM service’s actions, iam:DeleteVirtualMFADevice and iam:CreateVirtualMFADevice support different resources than iam:ResyncMFADevice and iam:EnableMFADevice. We split them into separate statements that limit each one to its supported resources. This probably doesn’t change the level of access, but our routine is to limit resource lists as much as possible. That helps make it clear to future readers what the policy enables.
  • Like above, ${aws:username} is an IAM policy variable. See links there for how to handle this in terraform.
  • We continued our convention from above of naming sids for “self” to indicate they’re limited to the user who has the policy.

Complete Policy Document

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "GetAccountPasswordPolicy",
            "Effect": "Allow",
            "Action": "iam:GetAccountPasswordPolicy",
            "Resource": "*"
        },
        {
            "Sid": "ChangeSelfPassword",
            "Effect": "Allow",
            "Action": "iam:ChangePassword",
            "Resource": "arn:aws:iam::[account id without hyphens]:user/${aws:username}"
        },
        {
            "Sid": "ManageSelfKeys",
            "Effect": "Allow",
            "Action": [
                "iam:UpdateAccessKey",
                "iam:ListAccessKeys",
                "iam:GetUser",
                "iam:GetAccessKeyLastUsed",
                "iam:DeleteAccessKey",
                "iam:CreateAccessKey"
            ],
            "Resource": "arn:aws:iam::[account id without hyphens]:user/${aws:username}"
        },
        {
            "Sid": "ManageSelfMFAUserResources",
            "Effect": "Allow",
            "Action": [
                "iam:ResyncMFADevice",
                "iam:ListMFADevices",
                "iam:EnableMFADevice",
                "iam:DeactivateMFADevice"
            ],
            "Resource": "arn:aws:iam::[account id without hyphens]:user/${aws:username}"
        },
        {
            "Sid": "ManageSelfMFAResources",
            "Effect": "Allow",
            "Action": [
                "iam:DeleteVirtualMFADevice",
                "iam:CreateVirtualMFADevice"
            ],
            "Resource": "arn:aws:iam::[account id without hyphens]:mfa/${aws:username}"
        }
    ]
}

User Guide

  1. Replace [account id without hyphens] with the ID for your account in the policy above.
  2. Attach the policy to users (we like to do this through groups; a sample CLI sketch follows this list).
  3. Tell users to edit their authentication from My Security Credentials in the user dropdown. This policy won’t let them access their user through the IAM console. My Security Credentials may not appear in the dropdown if the user has switched into a role.
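If you do step 2 by hand instead of through terraform, the attachment might look something like this with the AWS CLI (the policy and group names are hypothetical):

aws iam create-policy \
    --policy-name self-manage-credentials \
    --policy-document file://self-manage-credentials.json

aws iam attach-group-policy \
    --group-name engineers \
    --policy-arn arn:aws:iam::[account id without hyphens]:policy/self-manage-credentials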

Happy automating!



Creating Terraform Resources in Multiple Regions

In most terraform modules, resources are created in one region using one provider declaration.

provider "aws" {
  region = "us-west-1"
}

data "aws_region" "primary" {}

resource "aws_ssm_parameter" "param" {
  name  = "/${data.aws_region.primary.name}/param"
  type  = "String"
  value = "notavalue"
}

Sometimes, you need to create resources in multiple regions. Maybe the module has to support disaster recovery to an alternate region. Maybe one of the AWS services you’re using doesn’t support your primary region. When this article was written, AWS Certificate Manager certificates had to be created in us-east-1 to work with Amazon CloudFront. In cases like these, terraform supports targeting multiple regions.

We recommend using this feature cautiously. Resources should usually be created in the same region. If you’re sure your module should target multiple, here’s how to do it.

  1. Declare a provider for the alternate region. You’ll now have two providers. The original one for your primary region, and the new one for your alternate.
  2. Give the new provider an alias.
  3. Declare resources that reference the new alias in their provider attribute with the format aws.[alias]. This also works for data sources, which is handy for dynamically interpolating region names into resource properties like their name.
provider "aws" {
  alias  = "alternate_region"
  region = "us-west-2"
}

data "aws_region" "alternate" {
  provider = aws.alternate_region
}

resource "aws_ssm_parameter" "alt_param" {
  provider = aws.alternate_region

  name  = "/${data.aws_region.alternate.name}/param"
  type  = "String"
  value = "notavalue"
}

terraform plan doesn’t show what regions it’ll create resources in, so this example interpolates the region name into the resource name to make it visible.

...
Terraform will perform the following actions:

  # aws_ssm_parameter.alt_param will be created
  + resource "aws_ssm_parameter" "alt_param" {
      + arn       = (known after apply)
      + data_type = (known after apply)
      + id        = (known after apply)
      + key_id    = (known after apply)
      + name      = "/us-west-2/param"
      + tags_all  = (known after apply)
      + tier      = "Standard"
      + type      = "String"
      + value     = (sensitive value)
      + version   = (known after apply)
    }
...

To confirm the resources ended up in the right places, we checked each region’s parameters in the AWS web console, using the region drop-down menu to switch between them.

We get one in us-west-1 and another in us-west-2, as expected.

Happy automating!

