Upgrading Terraform Modules to AWS Provider v6 with Confidence
- Oleksandr Kuzminskyi
- September 7, 2025
When HashiCorp releases a new major version of the AWS Terraform provider, engineering teams often brace themselves. Major upgrades bring new features and bug fixes, but they also come with breaking changes. A module that “just worked” under v5 might fail or drift silently under v6.
For teams moving fast - especially startups - an unexpected Terraform failure can mean lost time, broken deployments, and shaken trust.
At InfraHouse, we help startups build reliable AWS foundations. Recently, we upgraded our Terraform modules to support the AWS provider v6.* while maintaining version 5 compatibility.
Instead of crossing our fingers and hoping for the best, we relied on a testing methodology we’ve refined over years: pytest-driven infrastructure tests with full terraform apply runs.
The Challenge of Provider Upgrades
Terraform providers evolve alongside AWS services. That means new resource arguments, stricter validations, and the occasional removal of deprecated features. The AWS provider v6 release is no exception.
Without testing, teams risk:
- Silent drift when provider defaults change.
- Broken plans that block deployments.
- Downtime if critical modules no longer behave as expected.
For a startup building its first AWS environment, these risks can translate directly into lost customer trust and delayed launches.
Why You Should Upgrade to AWS Provider v6
Upgrading to AWS provider v6 isn’t just about staying current - it’s about ensuring long-term stability and alignment with AWS’s own service lifecycle. Deprecated services are removed before they become liabilities, resource schemas are streamlined, and new capabilities like multi-region support make it easier to manage complex architectures. By adopting v6 early, teams avoid building on features that are disappearing, reduce the pain of future migrations, and keep their infrastructure code compatible with both AWS and Terraform as they evolve.
One feature I particularly appreciate is the new region argument available on most resources. Since Terraform still doesn’t support passing a provider as a module input, being able to specify a region directly is a practical workaround. It makes multi-region configurations significantly more DRY (Don’t Repeat Yourself).
For example, instead of duplicating nearly identical modules with different provider blocks:
module "accessanalyzer-us-west-1" {
source = "./modules/accessanalyzer"
providers = {
aws = aws.aws-principal-uw1
}
organization = aws_organizations_organization.infrahouse.master_account_name
}
module "accessanalyzer-us-west-2" {
source = "./modules/accessanalyzer"
providers = {
aws = aws.aws-principal-uw2
}
organization = aws_organizations_organization.infrahouse.master_account_name
}
…you can pass the region argument directly to the resource, dramatically simplifying the module interface and reducing duplication.
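As a sketch of what this enables (the for_each fan-out and the module’s region input here are illustrative assumptions, not the exact InfraHouse module interface), the two module blocks above can collapse into a single one:

module "accessanalyzer" {
  source   = "./modules/accessanalyzer"
  for_each = toset(["us-west-1", "us-west-2"])

  # Hypothetical input: inside the module, this value is forwarded to the
  # resource-level region argument introduced in AWS provider v6.
  region       = each.value
  organization = aws_organizations_organization.infrahouse.master_account_name
}

The per-region provider aliases and their duplicated module blocks disappear, while the provider configuration itself stays in one place.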
InfraHouse’s Testing Methodology
We treat infrastructure like application code: it deserves automated tests. Our methodology centers on three principles:
- Run real applies. Instead of stopping at terraform plan, we run terraform apply inside tests. This ensures modules don’t just “look valid” but actually create working resources.
- Leverage pytest. Using pytest gives us parameterization, fixtures, and clean reporting. Infrastructure tests integrate seamlessly into the same workflows as application tests.
- Use our InfraHouse pytest plugin. Our open-source pytest-infrahouse plugin provides helpers like terraform_apply() and terraform_destroy(). This keeps tests concise, readable, and focused on behavior rather than Terraform boilerplate.
Here’s a simplified example:
import json
import logging
import os.path as osp
from textwrap import dedent

import pytest

# terraform_apply() is the helper from the pytest-infrahouse plugin;
# aws_region, test_role_arn, and keep_after are pytest fixtures provided
# by the test setup, and terraform_module_dir points at the Terraform
# module under test.

LOG = logging.getLogger(__name__)

PROVIDER_VERSIONS = ["~> 5.0", "~> 6.0"]


@pytest.mark.parametrize("aws_provider_version", PROVIDER_VERSIONS, ids=["aws5", "aws6"])
@pytest.mark.parametrize("profile_name", ["foo", "very-long-name" * 10])
def test_module(
    aws_provider_version, profile_name, aws_region, test_role_arn, keep_after
):
    # Write the provider constraint dynamically
    with open(osp.join(terraform_module_dir, "terraform.tf"), "w") as fp:
        fp.write(
            dedent(
                f"""
                terraform {{
                  required_providers {{
                    aws = {{
                      source = "hashicorp/aws"
                      version = "{aws_provider_version}"
                    }}
                  }}
                }}
                """
            )
        )

    # Write module input variables (also parametrized)
    with open(osp.join(terraform_module_dir, "terraform.tfvars"), "w") as fp:
        fp.write(
            dedent(
                f"""
                region = "{aws_region}"
                profile_name = "{profile_name}"
                """
            )
        )

    # Apply the module and capture its outputs
    with terraform_apply(
        terraform_module_dir,
        destroy_after=not keep_after,
        json_output=True,
    ) as tf_output:
        LOG.info("%s", json.dumps(tf_output, indent=4))
This single test verifies that the same module applies cleanly under both provider v5 and v6.
Why Parametrization Matters
One of pytest’s most powerful features is that stacked parametrize decorators produce the Cartesian product of their parameters. In the example above:
- One dimension varies the AWS provider version (~> 5.0 vs ~> 6.0).
- Another dimension varies module inputs (different profile names).
Pytest automatically runs all combinations, ensuring broad coverage without repetitive test code.
This approach shines in more complex modules, where you might want to test multiple CIDR block configurations, subnet layouts, or feature toggles. By adding provider versions as an independent parameter set, you automatically validate each configuration under both v5 and v6 constraints:
@pytest.mark.parametrize("aws_provider_version", ["~> 5.11", "~> 6.0"], ids=["aws-5", "aws-6"])
@pytest.mark.parametrize(
", ".join(
[
"management_cidr_block",
"vpc_cidr_block",
"subnets",
"expected_nat_gateways_count",
"expected_subnet_all_count",
"expected_subnet_public_count",
"expected_subnet_private_count",
"restrict_all_traffic",
"enable_vpc_flow_logs",
]
),
)
def test_vpc_module(...):
...
This layered parametrization is what allowed us to roll out the AWS v6 upgrade confidently across dozens of modules - every input scenario was validated under both provider versions.
Applying the Methodology to the AWS v6 Upgrade
When AWS provider v6 landed, we applied this methodology across multiple InfraHouse modules - from the foundational instance-profile module, to the GitHub actions-runner, to elasticsearch clusters.
Our CI/CD pipelines ran tests against both provider versions automatically. Any incompatibility surfaced immediately as a failed job, complete with Terraform logs for debugging.
A few modules needed tweaks: stricter validation of arguments in IAM resources, for example. But thanks to our parametrized test matrix, we fixed those issues once and rolled out the upgrade across all modules with confidence.
Migration of InfraHouse modules is still ongoing, but so far v6 has proven mostly compatible with v5. Tests revealed, for example, that the name attribute of the aws_region data source is deprecated.
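As an illustration (based on our reading of the v6 changelog; verify the attribute names against the provider documentation for your exact version), the fix is usually a one-line change wherever the data source is referenced:

data "aws_region" "current" {}

locals {
  # Deprecated under AWS provider v6 - emits a warning during plan/apply:
  # current_region = data.aws_region.current.name

  # v6 replacement attribute:
  current_region = data.aws_region.current.region
}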
Signals of Engineering Maturity
What sets this approach apart?
- Tests that apply, not just plan: catching real-world issues early.
- Consistency across dozens of modules: every InfraHouse module uses the same test harness.
- Automation: Dependabot/Renovate bumps dependent module versions, and our CI/CD test matrix ensures compatibility.
Every pull request triggers CI that runs the full pytest suite across both v5 and v6. For example, in PR #33 of the service-network module, GitHub Actions executed the complete test matrix, ensuring the module works seamlessly across both provider versions.
This is the kind of infrastructure maturity we bring to our clients. For startups, it means less time firefighting and more time shipping features.
Why This Matters for Startups
If you’re a startup building your AWS environment, you have enough challenges already: scaling your product, hiring engineers, finding product-market fit.
The last thing you need is Terraform surprises when AWS or HashiCorp release a new version. By partnering with InfraHouse, you get:
- Modules tested across provider versions.
- Infrastructure that keeps pace with AWS changes.
- A compliance-friendly foundation you can trust as you grow.
Lessons Learned
A few takeaways from our AWS v6 upgrade:
- Always run tests against multiple provider versions before upgrading.
- Don’t rely on terraform plan alone - only terraform apply validates defaults and runtime changes.
- Automate version bumps and test matrices - future you will thank you.
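One concrete habit that follows from these lessons, shown here as a sketch (the exact bounds depend on the features each module actually uses), is widening a module’s provider constraint to span both majors only after the test matrix passes for both:

terraform {
  required_providers {
    aws = {
      source = "hashicorp/aws"
      # Allow both v5 and v6 once both pass the parametrized test matrix.
      version = ">= 5.11, < 7.0"
    }
  }
}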
Conclusion
Upgrading to AWS provider v6 wasn’t just about chasing the latest version. It was an opportunity to prove the resilience of our testing methodology and the maturity of our Terraform practices.
For startups, this means confidence. With InfraHouse, your AWS foundation is built not just for today, but for the inevitable changes tomorrow.
If you’re about to build your AWS environment and want it done right the first time, let’s talk.