Terraform Modules
This page explains the reusable Terraform modules that make up the platform. Think of each module as a building block with a clear purpose, such as networking, monitoring, gateway access, or shared service deployment.
Module Overview
| Module | Purpose | Key Resources | Optional? |
|---|---|---|---|
| network | Virtual network subnets and network security rule configuration | Subnets, Network Security Groups | No |
| monitoring | Monitoring and observability with log collection and application insights | Log Analytics Workspace, App Insights | No |
| bastion | Secure virtual machine access without exposing a public internet address | Azure Bastion Host, Public IP | Yes (enable_bastion) |
| jumpbox | Development virtual machine with command-line tools | Linux VM, SSH Keys, Auto-shutdown | Yes (enable_jumpbox) |
| github-runners-aca | Self-hosted GitHub runners on Container Apps | Container Apps Environment, Container Job, ACR | Yes (github_runners_aca_enabled) |
| azure-proxy | Secure tunnel (chisel) for proxying to peered networks | App Service, Container Image | Yes (enable_azure_proxy) |
Optional modules are toggled through the variables shown above, set either in terraform.tfvars or as environment variables.
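For example, the toggles can be supplied as `TF_VAR_`-prefixed environment variables, which Terraform reads automatically (illustrative values; the variable names are the toggles from the table above):

```bash
# Equivalent to setting these values in terraform.tfvars:
export TF_VAR_enable_bastion=true
export TF_VAR_enable_jumpbox=true
export TF_VAR_enable_azure_proxy=false

# Terraform picks up any TF_VAR_<name> automatically on plan/apply.
echo "bastion toggle: $TF_VAR_enable_bastion"
```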
Deployment Script: deploy-terraform.sh
A wrapper script that simplifies Terraform operations with proper Azure authentication. Features auto-import of conflicting resources and intelligent retry logic.
Usage
```bash
./initial-setup/infra/deploy-terraform.sh [options] <command>

Commands:
  init      Initialize Terraform with backend configuration
  plan      Preview infrastructure changes
  apply     Apply infrastructure changes (with auto-import & retry)
  destroy   Destroy all infrastructure
  output    Display Terraform outputs
```
Smart Apply Logic
The apply command now includes:
- Auto-detection: Detects "already exists" errors from Azure
- Automatic import: Imports conflicting resources into Terraform state
- Retry on success: Reruns apply after successful import (max 3 attempts)
- Clear failure messages: Shows which resource was imported or why the apply failed
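The import-and-retry flow can be sketched as a small bash loop. This is a simplified illustration, not the actual script: `terraform_apply` and `terraform_import` here are mocks standing in for the real `terraform apply` and `terraform import` calls, and the resource address is invented.

```bash
# Simplified sketch of the auto-import retry loop.
attempt=0
imported=0
max_attempts=3

terraform_apply() {
  # Mock: fails with an "already exists" error until an import has happened.
  if [ "$imported" -eq 0 ]; then
    echo 'Error: a resource with this ID already exists'
    return 1
  fi
  echo "Apply complete!"
}

terraform_import() {
  # Mock: the real script would run `terraform import <address> <azure-id>`.
  echo "Imported $1"
  imported=1
}

while [ "$attempt" -lt "$max_attempts" ]; do
  attempt=$((attempt + 1))
  if output=$(terraform_apply); then
    echo "$output"
    break
  fi
  if echo "$output" | grep -q "already exists"; then
    # Conflicting resource detected: import it, then retry the apply.
    terraform_import "module.example.azurerm_resource.conflicting"
  else
    echo "Apply failed: $output"
    break
  fi
done
```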
Examples
Full Deployment
```bash
./initial-setup/infra/deploy-terraform.sh init
./initial-setup/infra/deploy-terraform.sh plan
./initial-setup/infra/deploy-terraform.sh apply
```
Target Specific Module
```bash
# Deploy only bastion
./initial-setup/infra/deploy-terraform.sh apply \
  -target=module.bastion

# Deploy only jumpbox
./initial-setup/infra/deploy-terraform.sh apply \
  -target=module.jumpbox
```
Monitoring Module
Provides centralized observability across all deployed resources using Azure Monitor services.
Resources Created
- Log Analytics Workspace: Central logs and metrics repository (default retention: 30 days, configurable)
- Application Insights: APM for applications, connected to Log Analytics workspace
Key Variables
| Variable | Default | Description |
|---|---|---|
| log_analytics_sku | PerGB2018 | Log Analytics pricing tier |
| log_analytics_retention_days | 30 | Data retention period in days |
Network Module
Configures subnets within an existing VNet and creates Network Security Groups (NSGs) for traffic control.
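The subnet ranges are derived from vnet_address_space (presumably with Terraform's cidrsubnet function). The same arithmetic can be previewed locally with Python's ipaddress module; 10.0.0.0/16 below is an assumed address space for illustration, not the real one:

```bash
# Preview how a /16 VNet splits into /24 subnets (first three shown).
VNET_CIDR="10.0.0.0/16"
python3 -c "
import ipaddress
net = ipaddress.ip_network('$VNET_CIDR')
# Roughly equivalent to cidrsubnet(var.vnet_address_space, 8, n) for n in 0..2
for n, subnet in zip(range(3), net.subnets(new_prefix=24)):
    print(n, subnet)
"
```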
Subnets Created
| Subnet | Purpose | Delegation |
|---|---|---|
| AzureBastionSubnet | Azure Bastion service | None (required name) |
| jumpbox-subnet | Jumpbox VM | None |
| privateendpoints-subnet | Private Endpoints for Azure services | None |
| app-service-subnet | App Service VNet integration | Microsoft.Web/serverFarms |
| container-instance-subnet | Azure Container Instances | Microsoft.ContainerInstance/containerGroups |
| container-apps-subnet | Azure Container Apps Environment | Microsoft.App/environments |
| apim-subnet | Azure API Management | Microsoft.Web/serverFarms |
Network Security Groups
Each subnet has a dedicated NSG with rules for:
- Bastion NSG: HTTPS inbound, Gateway Manager, SSH/RDP to VNet
- Jumpbox NSG: SSH/RDP from Bastion only, outbound to Private Endpoints
- App Service NSG: HTTP/HTTPS from internet, communication with Private Endpoints
- Container Apps NSG: Load Balancer, HTTP/HTTPS, Private Endpoint access
- APIM NSG: API Management traffic, backend service access
Bastion Module
Deploys Azure Bastion for secure, browser-based SSH/RDP access to VMs without exposing public IPs.
Resources Created
- Public IP: Static, Standard SKU (required for Bastion)
- Bastion Host: Standard SKU (tunneling enabled, minimum 2 instances)
Features
- Web-based SSH/RDP from Azure Portal
- No public IPs needed on VMs
- TLS encryption for all sessions
- NSG rules per Microsoft requirements
Use the add-or-remove-module.yml workflow to deploy or destroy Bastion on demand.
Jumpbox Module
Creates a Linux VM for development and administration tasks, pre-configured with essential CLI tools.
VM Configuration
| Setting | Value |
|---|---|
| OS | Ubuntu 24.04 LTS (Noble Numbat) |
| Size | B-series burstable (configurable) |
| Authentication | SSH Key (auto-generated, stored in sensitive/) |
| Public IP | None (access via Bastion only) |
| Identity | System-assigned managed identity |
Pre-installed Tools
The VM is bootstrapped with cloud-init to install:
- Azure CLI
- Terraform
- kubectl
- Docker & Compose
- GitHub CLI
- Python 3 & pip
- git, vim, tmux
- jq, curl, wget
- htop, net-tools
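A quick way to confirm the cloud-init bootstrap completed is to check each tool from a shell on the jumpbox. This is a generic check script, not part of the module:

```bash
# Report which of the expected CLI tools are on PATH.
for tool in az terraform kubectl docker gh python3 git vim tmux jq curl wget htop; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "ok: $tool"
  else
    echo "missing: $tool"
  fi
done
```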
Auto-Shutdown & Start
Cost optimization through scheduled operations:
| Schedule | Time | Method |
|---|---|---|
| Auto-Shutdown | 7:00 PM PST daily | Azure Dev/Test shutdown schedule |
| Auto-Start | 8:00 AM PST Mon-Fri | Azure Automation Runbook (Python3) |
Azure Proxy Module (Chisel Tunnel)
This module deploys a Chisel tunnel server so platform maintainers can reach private Azure resources from a local machine. It solves the problem of trying to access private endpoints while you are outside the virtual network.
Resources Created
- App Service Plan: Linux plan hosting the tunnel (configurable SKU)
- Web App: Runs containerized Chisel server with VNet integration
- Random Credentials: Auto-generated username/password for tunnel auth
- Application Insights: Connection logging and monitoring
Security Features
- HTTPS Only: TLS 1.3 minimum
- Mandatory Auth: Random credentials stored in App Settings
- Health Checks: Automatic restart on failure
- Logging: All connections logged to App Insights
Key Variables
| Variable | Default | Description |
|---|---|---|
| enable_azure_proxy | false | Enable/disable Chisel deployment |
| azure_proxy_image | - | Container image URI (built by pr-open workflow) |
| app_service_sku_name_azure_proxy | B1 | App Service plan SKU |
Usage Example
After deployment, connect from your laptop:
```bash
# Get credentials from Terraform output
terraform output azure_proxy_chisel_auth
terraform output azure_proxy_url

# Connect to private PostgreSQL database
docker run --rm -it -p 5432:5432 jpillora/chisel:latest client \
  --auth "tunnel:XXXXXXXX" \
  https://your-app.azurewebsites.net \
  0.0.0.0:5432:private-postgres.postgres.database.azure.com:5432

# Now connect locally
psql -h localhost -p 5432 -U admin -d mydb
```
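The tunnel can take a few seconds to come up; a small helper can wait for the local port before starting psql. This is a generic bash helper (assumes bash's /dev/tcp support), not part of the module:

```bash
# Wait until a local TCP port accepts connections, or time out.
wait_for_port() {
  local port="$1" timeout="${2:-30}" elapsed=0
  # Probe in a subshell so the file descriptor closes automatically.
  while ! (exec 3<>"/dev/tcp/127.0.0.1/$port") 2>/dev/null; do
    elapsed=$((elapsed + 1))
    if [ "$elapsed" -ge "$timeout" ]; then
      echo "timed out waiting for port $port" >&2
      return 1
    fi
    sleep 1
  done
  echo "port $port is ready"
}

# Usage, after starting the chisel client shown above:
# wait_for_port 5432 && psql -h localhost -p 5432 -U admin -d mydb
```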
See Chisel Tunnel Playbook for detailed instructions.
Choosing Your Access Method
The infrastructure provides four optional access methods for reaching Azure resources. You do not need all of them. Enable only the methods your team actually uses, because each one exists to solve a different access problem.
| Access Method | Toggle Variable | Control Plane | Data Plane | Best For |
|---|---|---|---|---|
| Public GitHub Runners | (default, no toggle) | ✓ Yes | ✗ No | Resource deployments only |
| Self-Hosted Runners | github_runners_aca_enabled | ✓ Yes | ✓ Yes | Tenant CI/CD pipelines needing VNet access |
| Bastion + Jumpbox | enable_bastion + enable_jumpbox | ✓ Yes | ✓ Yes | Admin access, debugging |
| Chisel Tunnel | enable_azure_proxy | ✓ Yes | ✓ Yes | Local dev access from laptop |
Example: Minimal (Control Plane Only)
```hcl
# terraform.tfvars
enable_bastion             = false
enable_jumpbox             = false
enable_azure_proxy         = false
github_runners_aca_enabled = false  # Uses public GitHub runners

# Good for: Resource-only Terraform
```
Example: Full Access
```hcl
# terraform.tfvars
enable_bastion             = true
enable_jumpbox             = true
enable_azure_proxy         = true
github_runners_aca_enabled = true  # All access methods available

# Good for: Platform maintainers
```
Recommended Setup by Role
| Role | Recommendation |
|---|---|
| Platform Maintainers | Enable all: Bastion + Jumpbox for admin, Chisel for local dev, Self-hosted for CI/CD |
| Project Developers | No direct access needed - use APIM/App Gateway endpoints provided by platform |
| Cost-Conscious | Chisel only (enable_azure_proxy) - cheapest option with data plane access |
See Access Methods Architecture Diagram for a visual overview.
Backend Configuration
Terraform state is stored in Azure Blob Storage with Azure AD authentication.
```hcl
# infra/backend.tf
terraform {
  backend "azurerm" {
    resource_group_name  = "your-resource-group"
    storage_account_name = "tfstateaccount"
    container_name       = "tfstate"
    key                  = "terraform.tfstate"
    use_azuread_auth     = true  # Required for OIDC
  }
}
```
The use_azuread_auth = true setting is required when using OIDC authentication. Without it, Terraform will fail to access the state file.
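The same setting can be supplied through the azurerm backend's environment variable instead of HCL, which can be convenient in CI (ARM_USE_AZUREAD is the documented environment-variable equivalent of use_azuread_auth):

```bash
# Equivalent to use_azuread_auth = true in backend.tf:
export ARM_USE_AZUREAD=true

# Then initialize as usual:
# ./initial-setup/infra/deploy-terraform.sh init
```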
Variables Reference
Key variables used across modules:
| Variable | Description | Source |
|---|---|---|
| app_name | Application name prefix for resources | Environment variable |
| app_env | Environment (dev, test, prod) | GitHub Environment |
| location | Azure region | Default: Canada Central |
| resource_group_name | Target resource group | Environment variable |
| vnet_name | Existing VNet name | GitHub Secret |
| vnet_address_space | VNet CIDR for subnet calculation | GitHub Secret |