LocalStack Integration Guide
KECS integrates with LocalStack to provide a complete local AWS environment. This guide covers setting up and using LocalStack with KECS.
Overview
LocalStack integration enables:
- Local AWS service emulation (S3, DynamoDB, SQS, etc.)
- IAM role simulation
- CloudWatch logs and metrics
- Secrets Manager and SSM Parameter Store
- Service discovery with Route 53
Configuring LocalStack Services
KECS automatically enables core AWS services required for ECS operations. You can also enable additional services based on your application needs.
Default Services
KECS always enables these essential AWS services:
| Service | Purpose |
|---|---|
| IAM | Identity and Access Management for task roles |
| CloudWatch Logs | Container log aggregation and streaming |
| SSM Parameter Store | Configuration parameter storage |
| Secrets Manager | Secure secrets storage |
| ELBv2 | Application and Network Load Balancers |
| S3 | Object storage |
These services are automatically available in every KECS instance without any configuration.
Enabling Additional Services
For applications that need other AWS services, use the --additional-localstack-services flag when creating an instance:
Via CLI:
# Enable S3 and DynamoDB for data processing
kecs start --instance data-pipeline --additional-localstack-services s3,dynamodb
# Enable Lambda and SNS for serverless applications
kecs start --instance serverless --additional-localstack-services lambda,sns
# Enable multiple services
kecs start --instance full-stack \
--additional-localstack-services dynamodb,sqs,sns,kinesis
Via TUI (Interactive Mode):
When using the TUI, you can configure LocalStack services through the instance creation dialog:

- Launch the TUI: kecs
- Navigate to "Create New Instance" and press Enter
- Fill in the instance name
- In the "Additional LocalStack Services" field, enter comma-separated service names, e.g. s3,dynamodb,sqs
- The UI shows helper text indicating which services are always enabled
- Press Tab to navigate to the "Create" button and press Enter
Available Services
In addition to the default services, you can enable:
Data Storage:
- dynamodb - NoSQL database
- rds - Relational databases (MySQL, PostgreSQL)
- elasticache - In-memory caching (Redis, Memcached)
Messaging & Streaming:
- sqs - Simple Queue Service
- sns - Simple Notification Service
- kinesis - Real-time data streaming
- kafka - Managed streaming for Apache Kafka
Serverless:
- lambda - Serverless compute
- stepfunctions - Workflow orchestration
- eventbridge - Event bus
Container & Compute:
- ec2 - Virtual machines
- ecr - Container registry
Networking:
- apigateway - API management
- route53 - DNS service
For the complete list of supported services, see the LocalStack Feature Coverage documentation.
Common Configuration Patterns
Microservices with Service Discovery:
kecs start --instance microservices \
--additional-localstack-services route53,servicediscovery
Data Processing Pipeline:
kecs start --instance data-pipeline \
--additional-localstack-services s3,dynamodb,kinesis,lambda
Event-Driven Architecture:
kecs start --instance event-driven \
--additional-localstack-services sqs,sns,eventbridge,lambda
Full-Stack Web Application:
kecs start --instance webapp \
--additional-localstack-services dynamodb,s3,apigateway,lambda
Verifying Enabled Services
Check which services are running in your LocalStack instance:
# Get the KECS endpoint
export AWS_ENDPOINT_URL=http://localhost:5373
# Check LocalStack health (shows all enabled services)
curl http://localhost:5373/_localstack/health | jq .
Expected output:
{
"services": {
"iam": "running",
"logs": "running",
"ssm": "running",
"secretsmanager": "running",
"elbv2": "running",
"s3": "running",
"dynamodb": "running" // if enabled via --additional-localstack-services
}
}
Using AWS Services
IAM Integration
KECS automatically maps ECS task roles to Kubernetes ServiceAccounts:
{
"family": "webapp",
"taskRoleArn": "arn:aws:iam::000000000000:role/webapp-task-role",
"containerDefinitions": [
{
"name": "app",
"image": "myapp:latest",
"environment": [
{
"name": "AWS_REGION",
"value": "us-east-1"
}
]
}
]
}
The container can now access AWS services using the task role.
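To confirm which identity the SDK resolves inside the container, here is a minimal sketch using STS (the returned account and ARN depend on your LocalStack setup):
import boto3

# Credentials and endpoint come from the environment KECS injects into the task.
sts = boto3.client('sts')
identity = sts.get_caller_identity()
print(identity['Account'], identity['Arn'])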
S3 Integration
Access S3 buckets from your containers without any endpoint configuration:
Your Application Code (same as production):
import boto3
# No endpoint_url parameter needed!
# KECS automatically injects AWS_ENDPOINT_URL environment variable
s3 = boto3.client('s3')
# List buckets
buckets = s3.list_buckets()
# Upload file
s3.upload_file('local.txt', 'my-bucket', 'remote.txt')
Task Definition (no environment variables needed):
{
"family": "s3-app",
"containerDefinitions": [
{
"name": "app",
"image": "myapp:latest"
// KECS automatically injects AWS_ENDPOINT_URL and credentials!
}
]
}
DynamoDB Integration
Use DynamoDB tables without any configuration:
Your Application Code:
import boto3
# No endpoint configuration needed in code!
dynamodb = boto3.resource('dynamodb')
table = dynamodb.Table('users')
# Put item
table.put_item(Item={
    'userId': '123',
    'name': 'John Doe',
    'email': 'john@example.com'
})
# Query
response = table.get_item(Key={'userId': '123'})
Task Definition (clean and production-ready):
{
"family": "dynamodb-app",
"containerDefinitions": [
{
"name": "app",
"image": "myapp:latest"
// No AWS_ENDPOINT_URL needed - KECS handles it automatically!
}
]
}
Secrets Manager
Store and retrieve secrets:
# Create secret via AWS CLI
aws secretsmanager create-secret \
--name prod/db/password \
--secret-string "mysecretpassword" \
--endpoint-url http://localhost:5373
# Use in task definition
{
"containerDefinitions": [
{
"name": "app",
"secrets": [
{
"name": "DB_PASSWORD",
"valueFrom": "arn:aws:secretsmanager:us-east-1:000000000000:secret:prod/db/password"
}
]
}
]
}
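Applications can also read the secret at runtime through the SDK. A minimal sketch, assuming the secret created above and the endpoint configuration KECS injects:
import boto3

# Endpoint and credentials come from the env vars KECS injects into the task.
sm = boto3.client('secretsmanager')
secret = sm.get_secret_value(SecretId='prod/db/password')
db_password = secret['SecretString']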
SSM Parameter Store
Store configuration parameters:
# Create parameter
aws ssm put-parameter \
--name /myapp/database/host \
--value "db.example.com" \
--type String \
--endpoint-url http://localhost:5373
# Use in task definition
{
"containerDefinitions": [
{
"name": "app",
"secrets": [
{
"name": "DB_HOST",
"valueFrom": "arn:aws:ssm:us-east-1:000000000000:parameter/myapp/database/host"
}
]
}
]
}
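Parameters can likewise be read at runtime. A minimal sketch, assuming the parameter created above:
import boto3

# Endpoint configuration is picked up from the injected environment variables.
ssm = boto3.client('ssm')
param = ssm.get_parameter(Name='/myapp/database/host')
db_host = param['Parameter']['Value']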
CloudWatch Logs
Container logs are automatically sent to CloudWatch:
{
"containerDefinitions": [
{
"name": "app",
"logConfiguration": {
"logDriver": "awslogs",
"options": {
"awslogs-group": "/ecs/myapp",
"awslogs-region": "us-east-1",
"awslogs-stream-prefix": "webapp"
}
}
}
]
}
View logs:
aws logs tail /ecs/myapp \
--follow \
--endpoint-url http://localhost:5373
Automatic AWS Endpoint Configuration
KECS automatically configures AWS SDK environment variables for all ECS tasks, enabling seamless LocalStack integration without requiring any endpoint configuration in your task definitions. This mirrors the real AWS ECS experience where applications naturally access AWS services.
How It Works
When you run an ECS task, KECS automatically:
- Detects ECS tasks: identifies pods created from ECS task definitions
- Injects environment variables: adds AWS SDK configuration automatically:
  - AWS_ENDPOINT_URL: points to the LocalStack endpoint
  - AWS_ACCESS_KEY_ID: LocalStack test credentials
  - AWS_SECRET_ACCESS_KEY: LocalStack test credentials
  - AWS_DEFAULT_REGION: configured region
  - AWS_REGION: configured region
- SDK auto-configuration: AWS SDK v2 automatically uses these environment variables (see the sketch below)
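A minimal sketch of what this looks like from a task's point of view (the printed endpoint value is illustrative):
import os
import boto3

# Injected by the KECS webhook; nothing is configured in the task definition.
print(os.environ.get('AWS_ENDPOINT_URL'))
# e.g. http://localstack.kecs-system.svc.cluster.local:4566

# Recent boto3/botocore releases honor AWS_ENDPOINT_URL automatically,
# so a plain client already talks to LocalStack.
s3 = boto3.client('s3')
print([b['Name'] for b in s3.list_buckets()['Buckets']])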
Key Benefits
- Zero Configuration: No need to set AWS_ENDPOINT_URL in task definitions
- Production-Ready Code: Same task definitions work in both KECS and real AWS ECS
- Automatic Detection: Works for all ECS tasks without any annotations
- No Code Changes: Existing applications work without modifications
Example: S3 Access Without Endpoint Configuration
Task Definition (no AWS_ENDPOINT_URL needed):
{
"family": "s3-processor",
"containerDefinitions": [
{
"name": "worker",
"image": "amazon/aws-cli:latest",
"command": [
"sh", "-c",
"aws s3 mb s3://my-bucket && aws s3 ls"
]
}
]
}
What Happens:
- KECS webhook intercepts pod creation
- Automatically injects AWS environment variables
- AWS CLI uses AWS_ENDPOINT_URL to connect to LocalStack
- S3 operations work seamlessly
Comparison with Standard LocalStack Usage
Without KECS (requires manual configuration):
{
"containerDefinitions": [
{
"name": "app",
"environment": [
{
"name": "AWS_ENDPOINT_URL",
"value": "http://localstack:4566"
},
{
"name": "AWS_ACCESS_KEY_ID",
"value": "test"
},
{
"name": "AWS_SECRET_ACCESS_KEY",
"value": "test"
}
]
}
]
}
With KECS (completely automatic):
{
"containerDefinitions": [
{
"name": "app"
// No environment variables needed!
}
]
}
How to Verify Auto-Configuration
Check that environment variables are injected:
# Get task ARN
TASK_ARN=$(aws ecs list-tasks --cluster default --endpoint-url http://localhost:5373 --query 'taskArns[0]' --output text)
# Extract task ID
TASK_ID=$(basename $TASK_ARN)
# Check environment variables in the pod
kubectl get pod $TASK_ID -n default-us-east-1 -o jsonpath='{.spec.containers[0].env[?(@.name=="AWS_ENDPOINT_URL")].value}'
You should see the LocalStack internal endpoint: http://localstack.kecs-system.svc.cluster.local:4566
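You can also check from inside the container itself, if it has a Python runtime. A quick sketch:
import os

# Print the AWS configuration KECS injected into this container.
for var in ('AWS_ENDPOINT_URL', 'AWS_ACCESS_KEY_ID', 'AWS_REGION', 'AWS_DEFAULT_REGION'):
    print(var, '=', os.environ.get(var))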
Service Discovery
Private DNS Namespace
Create a Route 53 private hosted zone:
aws servicediscovery create-private-dns-namespace \
--name prod.local \
--vpc vpc-12345 \
--endpoint-url http://localhost:5373
Register Service
{
"serviceName": "api",
"serviceRegistries": [
{
"registryArn": "arn:aws:servicediscovery:us-east-1:000000000000:service/srv-12345",
"containerName": "api",
"containerPort": 8080
}
]
}
Discover Services
Services can discover each other:
# In your application
api_endpoint = "http://api.prod.local:8080"Testing with LocalStack
Testing with LocalStack
Unit Tests
import unittest
import boto3

class TestS3Integration(unittest.TestCase):
    def setUp(self):
        # Test against the LocalStack endpoint exposed by KECS
        # (moto is unnecessary here; LocalStack provides the emulation)
        self.s3 = boto3.client('s3', endpoint_url='http://localhost:5373')

    def test_upload_file(self):
        # Create bucket
        self.s3.create_bucket(Bucket='test-bucket')
        # Upload file
        self.s3.upload_file('test.txt', 'test-bucket', 'uploaded.txt')
        # Verify
        objects = self.s3.list_objects(Bucket='test-bucket')
        assert len(objects['Contents']) == 1
Integration Tests
# Start LocalStack and KECS
docker-compose up -d
# Run tests
pytest tests/integration/
# Clean up
docker-compose down
Monitoring and Debugging
LocalStack Dashboard
Access the LocalStack UI:
- Open http://localhost:8080/localstack/dashboard
- View:
- Service health status
- API call logs
- Resource listings
- Configuration
Debugging AWS SDK Calls
Enable debug logging:
import logging
import boto3
# Enable debug logging
boto3.set_stream_logger('botocore', logging.DEBUG)  # wire-level API calls
# Your code here
s3 = boto3.client('s3')
Viewing Proxy Logs
Check sidecar proxy logs:
kubectl logs <pod-name> -c localstack-proxy -n <namespace>
Best Practices
1. Resource Initialization
Create resources on startup:
# init_resources.py
import boto3
def initialize():
    s3 = boto3.client('s3', endpoint_url='http://localhost:5373')
    # Create buckets idempotently
    buckets = ['uploads', 'processed', 'archive']
    for bucket in buckets:
        try:
            s3.create_bucket(Bucket=bucket)
        except (s3.exceptions.BucketAlreadyExists,
                s3.exceptions.BucketAlreadyOwnedByYou):
            pass
    # Create DynamoDB tables
    dynamodb = boto3.client('dynamodb', endpoint_url='http://localhost:5373')
    # ... create tables

if __name__ == '__main__':
    initialize()
2. Environment Parity
Keep local and production environments similar, as sketched after this list:
- Use same resource names
- Match IAM policies
- Replicate bucket structures
- Use consistent parameter paths
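One illustrative way to enforce consistent naming is to derive resource names from a single environment value (the APP_ENV variable and naming scheme below are assumptions, not KECS conventions):
import os

# Derive names from one env value so local (KECS/LocalStack) and production
# use the same scheme; only the value of APP_ENV changes between environments.
ENV = os.environ.get('APP_ENV', 'local')

def bucket_name(purpose: str) -> str:
    return f'myapp-{ENV}-{purpose}'   # e.g. myapp-local-uploads

def parameter_path(key: str) -> str:
    return f'/myapp/{ENV}/{key}'      # e.g. /myapp/local/database/host

print(bucket_name('uploads'), parameter_path('database/host'))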