AWS Lambda: The Complete Guide for 2026
AWS Lambda lets you run code without provisioning or managing servers. You upload your function, configure a trigger, and AWS handles everything else: scaling, patching, availability, and capacity planning. You pay only for the milliseconds your code actually executes, and nothing when it is idle.
Since its launch in 2014, Lambda has become the backbone of serverless architecture on AWS. Whether you are building REST APIs, processing file uploads, responding to database changes, or running scheduled tasks, Lambda eliminates the operational overhead of managing infrastructure so you can focus entirely on your application logic.
Table of Contents
- What Is AWS Lambda
- Creating Lambda Functions
- Handler Function Structure
- Event Sources
- Environment Variables
- Lambda Layers
- Cold Starts and Optimization
- Memory and Timeout Configuration
- Logging and Monitoring
- Error Handling
- Python and Node.js Examples
- Deployment Packages
- Lambda@Edge
- Cost Optimization
- Best Practices
- Frequently Asked Questions
1. What Is AWS Lambda
AWS Lambda is a serverless compute service that runs your code in response to events. You write a function, upload it, and AWS manages the infrastructure. There are no servers to provision, no operating systems to patch, and no capacity to plan. Lambda automatically scales from zero invocations to thousands of concurrent executions in seconds.
Lambda supports Python, Node.js, Java, Go, .NET, Ruby, and custom runtimes via container images. Functions can be triggered by over 200 AWS services and external sources. The pricing model is simple: you pay per request ($0.20 per million) and per compute duration measured in GB-seconds. The always-free tier includes 1 million requests and 400,000 GB-seconds per month.
| Feature | Limit |
|---|---|
| Max execution time | 15 minutes (900 seconds) |
| Memory allocation | 128 MB to 10,240 MB (10 GB) |
| Deployment package (ZIP) | 50 MB compressed, 250 MB unzipped |
| Container image | 10 GB |
| Concurrent executions | 1,000 default (can request increase) |
| Ephemeral storage (/tmp) | 512 MB to 10,240 MB |
| Environment variables | 4 KB total |
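The request-plus-duration pricing model is easy to estimate. The sketch below uses the published x86 rates at the time of writing ($0.20 per million requests, $0.0000166667 per GB-second); verify them against the current AWS pricing page for your region and architecture before relying on the numbers.

```python
# Rough monthly cost estimator for Lambda's pay-per-use model.
# Rates are the published x86 prices; check the AWS pricing page
# for your region and architecture before relying on these numbers.
REQUEST_PRICE_PER_MILLION = 0.20
GB_SECOND_PRICE = 0.0000166667

def estimate_monthly_cost(invocations, memory_mb, avg_duration_ms):
    """Return (request_cost, duration_cost, total) in USD, ignoring the free tier."""
    request_cost = invocations / 1_000_000 * REQUEST_PRICE_PER_MILLION
    gb_seconds = invocations * (memory_mb / 1024) * (avg_duration_ms / 1000)
    duration_cost = gb_seconds * GB_SECOND_PRICE
    return request_cost, duration_cost, request_cost + duration_cost

# Example: 10 million invocations at 256 MB averaging 200 ms each
req, dur, total = estimate_monthly_cost(10_000_000, 256, 200)
print(f"requests ${req:.2f} + duration ${dur:.2f} = ${total:.2f}")
# prints: requests $2.00 + duration $8.33 = $10.33
```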
2. Creating Lambda Functions
Via the AWS Console
The fastest way to create your first function is through the AWS Console. Navigate to Lambda, click "Create function," choose "Author from scratch," name your function, select a runtime, and click Create. The inline code editor lets you write and test immediately.
Via the AWS CLI
# Create a deployment package
mkdir my-function && cd my-function
cat > lambda_function.py <<'EOF'
def lambda_handler(event, context):
    return {
        'statusCode': 200,
        'body': 'Hello from Lambda!'
    }
EOF
zip function.zip lambda_function.py
# Create the function
aws lambda create-function \
    --function-name my-first-function \
    --runtime python3.12 \
    --role arn:aws:iam::123456789012:role/lambda-execution-role \
    --handler lambda_function.lambda_handler \
    --zip-file fileb://function.zip
# Invoke the function (the binary-format flag is required for
# raw JSON payloads with AWS CLI v2)
aws lambda invoke \
    --function-name my-first-function \
    --cli-binary-format raw-in-base64-out \
    --payload '{"key": "value"}' \
    output.json
cat output.json
Via AWS SAM (Serverless Application Model)
# template.yaml
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Description: SAM Lambda API example
Globals:
  Function:
    Timeout: 30
    Runtime: python3.12
    MemorySize: 256
Resources:
  HelloFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: app.lambda_handler
      CodeUri: src/
      Events:
        HelloApi:
          Type: Api
          Properties:
            Path: /hello
            Method: get
# Initialize a new SAM project
sam init --runtime python3.12 --name my-sam-app
# Build and deploy
sam build
sam deploy --guided
# Test locally before deploying
sam local invoke HelloFunction --event events/event.json
sam local start-api  # starts a local API Gateway on port 3000
3. Handler Function Structure
Every Lambda function has a handler: the entry point that AWS calls when your function is invoked. The handler receives two arguments: the event (input data from the trigger) and the context (runtime information about the invocation).
Python Handler
import json
import logging

logger = logging.getLogger()
logger.setLevel(logging.INFO)

# Code outside the handler runs once per execution environment (cold start).
# Use this for initialization: DB connections, SDK clients, config loading.

def lambda_handler(event, context):
    """
    event: dict containing trigger data (API Gateway request, S3 event, etc.)
    context: LambdaContext object with runtime information
    """
    logger.info(f"Request ID: {context.aws_request_id}")
    logger.info(f"Time remaining: {context.get_remaining_time_in_millis()}ms")
    logger.info(f"Memory limit: {context.memory_limit_in_mb}MB")

    # API Gateway sends null (None) when there are no query parameters,
    # so fall back to an empty dict before calling .get()
    params = event.get('queryStringParameters') or {}
    name = params.get('name', 'World')
    return {
        'statusCode': 200,
        'headers': {'Content-Type': 'application/json'},
        'body': json.dumps({'message': f'Hello, {name}!'})
    }
Node.js Handler
// ES module syntax (Node.js 18+)
export const handler = async (event, context) => {
  console.log('Request ID:', context.awsRequestId);
  console.log('Event:', JSON.stringify(event));

  const name = event.queryStringParameters?.name || 'World';
  return {
    statusCode: 200,
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ message: `Hello, ${name}!` })
  };
};
4. Event Sources
Lambda functions are triggered by events from AWS services or external sources. Here are the most common event sources and their event structures.
API Gateway (REST API)
import json

def lambda_handler(event, context):
    # API Gateway passes HTTP details in the event
    http_method = event['httpMethod']  # GET, POST, etc.
    path = event['path']               # /users/123
    query_params = event.get('queryStringParameters') or {}
    headers = event['headers']
    body = json.loads(event['body']) if event.get('body') else None

    if http_method == 'GET':
        return {'statusCode': 200, 'body': json.dumps({'users': []})}
    elif http_method == 'POST':
        return {'statusCode': 201, 'body': json.dumps({'created': True})}
    return {'statusCode': 405, 'body': json.dumps({'error': 'Method not allowed'})}
S3 Events (File Upload Trigger)
import urllib.parse

import boto3

s3_client = boto3.client('s3')

def lambda_handler(event, context):
    for record in event['Records']:
        bucket = record['s3']['bucket']['name']
        # Object keys arrive URL-encoded in S3 events (spaces become '+')
        key = urllib.parse.unquote_plus(record['s3']['object']['key'])
        size = record['s3']['object']['size']
        print(f"New file: s3://{bucket}/{key} ({size} bytes)")

        # Example: process uploaded image, generate thumbnail
        response = s3_client.get_object(Bucket=bucket, Key=key)
        content = response['Body'].read()
SQS (Queue Processing)
import json

def lambda_handler(event, context):
    failed_messages = []
    for record in event['Records']:
        try:
            body = json.loads(record['body'])
            process_message(body)  # your business logic
        except Exception as e:
            print(f"Error processing {record['messageId']}: {e}")
            failed_messages.append({'itemIdentifier': record['messageId']})
    # Partial batch failure reporting (requires ReportBatchItemFailures
    # enabled on the event source mapping)
    return {'batchItemFailures': failed_messages}
EventBridge (Scheduled and Event-Driven)
# Triggered by a schedule rule (cron) or custom events
def lambda_handler(event, context):
    # Scheduled event
    if event.get('source') == 'aws.events':
        print("Running scheduled task")
        run_daily_cleanup()

    # Custom application event
    if event.get('detail-type') == 'OrderPlaced':
        order = event['detail']
        send_confirmation_email(order)
DynamoDB Streams
def lambda_handler(event, context):
    for record in event['Records']:
        event_name = record['eventName']  # INSERT, MODIFY, REMOVE
        if event_name == 'INSERT':
            new_item = record['dynamodb']['NewImage']
            print(f"New item added: {new_item}")
        elif event_name == 'MODIFY':
            old_item = record['dynamodb']['OldImage']
            new_item = record['dynamodb']['NewImage']
            print("Item updated")
5. Environment Variables
Environment variables let you configure function behavior without changing code. They are encrypted at rest using AWS KMS and available as standard environment variables in your runtime.
# Set environment variables via CLI (the Variables list must be a single
# shorthand string with no embedded newlines or spaces)
aws lambda update-function-configuration \
    --function-name my-function \
    --environment "Variables={DATABASE_URL=postgresql://user:pass@host:5432/db,API_KEY=sk-abc123,ENVIRONMENT=production,LOG_LEVEL=INFO}"
import os

DATABASE_URL = os.environ['DATABASE_URL']
API_KEY = os.environ.get('API_KEY', 'default-key')
ENVIRONMENT = os.environ.get('ENVIRONMENT', 'development')

def lambda_handler(event, context):
    # Environment variables are available throughout your code
    if ENVIRONMENT == 'production':
        # production-specific logic
        pass
For sensitive values like database passwords and API keys, use AWS Secrets Manager or SSM Parameter Store instead of plain environment variables. Retrieve them during cold start initialization and cache them for subsequent invocations.
6. Lambda Layers
Layers are ZIP archives containing shared code and dependencies. Instead of bundling large libraries into every function, you package them once as a layer and attach it to any function that needs it.
# Create a layer with Python dependencies
mkdir -p python/lib/python3.12/site-packages
pip install requests boto3 pandas \
    -t python/lib/python3.12/site-packages
zip -r my-layer.zip python/
# Publish the layer
aws lambda publish-layer-version \
    --layer-name my-shared-dependencies \
    --zip-file fileb://my-layer.zip \
    --compatible-runtimes python3.12 python3.11 \
    --description "Shared Python dependencies"
# Attach the layer to a function
aws lambda update-function-configuration \
    --function-name my-function \
    --layers arn:aws:lambda:us-east-1:123456789012:layer:my-shared-dependencies:1
Each function can use up to five layers. The total unzipped size of the function code plus all layers must be under 250 MB. AWS also provides public layers for popular tools like the AWS SDK, pandas, NumPy, and the Lambda Insights monitoring agent.
7. Cold Starts and Optimization
A cold start occurs when Lambda creates a new execution environment for your function. This involves downloading your code, starting the runtime, and running your initialization code. Subsequent invocations reuse the warm environment and skip this overhead.
| Runtime | Typical Cold Start | Warm Invocation |
|---|---|---|
| Python 3.12 | 200-500ms | 1-5ms |
| Node.js 20 | 150-400ms | 1-5ms |
| Go | 50-100ms | <1ms |
| Java 21 | 2,000-6,000ms | 1-10ms |
| Java 21 (SnapStart) | 100-200ms | 1-10ms |
| .NET 8 | 500-1,500ms | 1-5ms |
Reducing Cold Starts
# BAD: Initializing inside the handler runs on every invocation
def lambda_handler(event, context):
    import boto3                       # slow import on every call
    client = boto3.client('dynamodb')  # new connection every call
    # ...

# GOOD: Initialize outside the handler (runs once per cold start)
import os

import boto3

client = boto3.client('dynamodb')  # reused across warm invocations
TABLE_NAME = os.environ['TABLE_NAME']

def lambda_handler(event, context):
    response = client.get_item(
        TableName=TABLE_NAME,
        Key={'id': {'S': event['id']}}
    )
    return response['Item']
Provisioned Concurrency
# Keep 5 environments warm at all times (eliminates cold starts)
aws lambda put-provisioned-concurrency-config \
--function-name my-function \
--qualifier my-alias \
--provisioned-concurrent-executions 5
# Note: Provisioned Concurrency incurs charges even when idle
# Use it for latency-sensitive functions like user-facing APIs
8. Memory and Timeout Configuration
Lambda allocates CPU power proportional to memory. A function with 1,769 MB gets one full vCPU. Increasing memory does not just give you more RAM; it also gives more CPU and network bandwidth, which can make your function run faster and cost less overall.
# Set memory and timeout
aws lambda update-function-configuration \
    --function-name my-function \
    --memory-size 512 \
    --timeout 30
# Test different memory settings to find the cost/performance sweet spot.
# Use AWS Lambda Power Tuning (open-source tool) to automate this:
# https://github.com/alexcasalboni/aws-lambda-power-tuning
Memory vs Cost Example (same CPU-bound workload at different memory sizes):

| Memory | Duration | Cost per invocation | Notes |
|---|---|---|---|
| 128 MB | 2,400 ms | $0.00000500 | CPU-bound, running slow |
| 256 MB | 1,200 ms | $0.00000500 | Same cost, 2x faster |
| 512 MB | 600 ms | $0.00000500 | Same cost, 4x faster |
| 1,024 MB | 350 ms | $0.00000583 | Slightly more, 7x faster |
| 1,769 MB | 200 ms | $0.00000583 | Full vCPU, best for compute |

Doubling memory often halves duration, resulting in similar total cost but a much better user experience.
9. Logging and Monitoring (CloudWatch)
Lambda automatically sends logs to CloudWatch Logs. Every print() statement in Python or console.log() in Node.js appears in CloudWatch. Each function gets its own log group at /aws/lambda/function-name.
import json
import logging

logger = logging.getLogger()
logger.setLevel(logging.INFO)

def lambda_handler(event, context):
    # Structured logging for easier searching and filtering
    logger.info(json.dumps({
        'action': 'process_order',
        'order_id': event.get('order_id'),
        'customer': event.get('customer_id'),
        'amount': event.get('total')
    }))
    try:
        result = process_order(event)
        logger.info(f"Order processed successfully: {result}")
        return {'statusCode': 200, 'body': json.dumps(result)}
    except Exception as e:
        logger.error(f"Order processing failed: {str(e)}", exc_info=True)
        raise
# View logs in real time
aws logs tail /aws/lambda/my-function --follow
# Search for errors in the last hour
aws logs filter-log-events \
    --log-group-name /aws/lambda/my-function \
    --filter-pattern "ERROR" \
    --start-time $(date -d '1 hour ago' +%s)000
# Set log retention to control costs (default is forever)
aws logs put-retention-policy \
    --log-group-name /aws/lambda/my-function \
    --retention-in-days 14
# Key CloudWatch metrics for Lambda:
# Invocations, Duration, Errors, Throttles, ConcurrentExecutions.
# Set alarms on Errors and Throttles for production functions.
10. Error Handling
How errors behave depends on the invocation type. Synchronous invocations (API Gateway) return errors to the caller immediately. Asynchronous invocations (S3, EventBridge) retry automatically: twice by default, with configurable dead-letter queues for failures.
import json
import traceback

class ValidationError(Exception):
    """Custom error for invalid input"""
    pass

def lambda_handler(event, context):
    try:
        # Validate input
        if not event.get('email'):
            raise ValidationError('Email is required')
        result = process_request(event)
        return {
            'statusCode': 200,
            'body': json.dumps({'data': result})
        }
    except ValidationError as e:
        # Client error: return 400
        return {
            'statusCode': 400,
            'body': json.dumps({'error': str(e)})
        }
    except Exception:
        # Server error: log full traceback, return 500
        print(f"Unhandled error: {traceback.format_exc()}")
        return {
            'statusCode': 500,
            'body': json.dumps({'error': 'Internal server error'})
        }
# Configure async invocation error handling
aws lambda put-function-event-invoke-config \
    --function-name my-function \
    --maximum-retry-attempts 2 \
    --maximum-event-age-in-seconds 3600 \
    --destination-config '{
        "OnFailure": {
            "Destination": "arn:aws:sqs:us-east-1:123456789012:dead-letter-queue"
        }
    }'
11. Python and Node.js Examples
Python: DynamoDB CRUD API
import json
import os
import uuid
from decimal import Decimal

import boto3

dynamodb = boto3.resource('dynamodb')
table = dynamodb.Table(os.environ['TABLE_NAME'])

class DecimalEncoder(json.JSONEncoder):
    """DynamoDB returns numbers as Decimal; convert them for JSON output."""
    def default(self, obj):
        if isinstance(obj, Decimal):
            return float(obj)
        return super().default(obj)

def lambda_handler(event, context):
    method = event['httpMethod']
    path = event.get('pathParameters') or {}

    if method == 'GET' and path.get('id'):
        result = table.get_item(Key={'id': path['id']})
        item = result.get('Item')
        if not item:
            return {'statusCode': 404, 'body': '{"error":"Not found"}'}
        return {'statusCode': 200, 'body': json.dumps(item, cls=DecimalEncoder)}
    elif method == 'POST':
        body = json.loads(event['body'])
        body['id'] = str(uuid.uuid4())
        table.put_item(Item=body)
        return {'statusCode': 201, 'body': json.dumps(body)}
    elif method == 'DELETE' and path.get('id'):
        table.delete_item(Key={'id': path['id']})
        return {'statusCode': 204, 'body': ''}
    return {'statusCode': 405, 'body': '{"error":"Method not allowed"}'}
Node.js: S3 Image Processor
import { S3Client, GetObjectCommand, PutObjectCommand } from '@aws-sdk/client-s3';
import sharp from 'sharp';

const s3 = new S3Client({});
const THUMBNAIL_WIDTH = 200;

export const handler = async (event) => {
  for (const record of event.Records) {
    const bucket = record.s3.bucket.name;
    // Object keys arrive URL-encoded in S3 events (spaces become '+')
    const key = decodeURIComponent(record.s3.object.key.replace(/\+/g, ' '));

    // Skip if already a thumbnail
    if (key.startsWith('thumbnails/')) continue;

    // Get the original image
    const { Body } = await s3.send(
      new GetObjectCommand({ Bucket: bucket, Key: key })
    );

    // Resize to thumbnail
    const imageBuffer = Buffer.from(await Body.transformToByteArray());
    const thumbnail = await sharp(imageBuffer)
      .resize(THUMBNAIL_WIDTH)
      .jpeg({ quality: 80 })
      .toBuffer();

    // Save thumbnail
    await s3.send(new PutObjectCommand({
      Bucket: bucket,
      Key: `thumbnails/${key}`,
      Body: thumbnail,
      ContentType: 'image/jpeg'
    }));
    console.log(`Created thumbnail for ${key}`);
  }
};
12. Deployment Packages
Lambda supports two deployment formats: ZIP archives (up to 50 MB compressed, 250 MB unzipped) and container images (up to 10 GB). ZIP is simpler for small functions; containers are better for large dependencies or custom runtimes.
ZIP Deployment with Dependencies
# Python: install dependencies into the package
pip install -r requirements.txt -t ./package/
cd package && zip -r ../deployment.zip .
cd .. && zip deployment.zip lambda_function.py
aws lambda update-function-code \
--function-name my-function \
--zip-file fileb://deployment.zip
# Node.js: include node_modules
npm install --production
zip -r deployment.zip index.mjs node_modules/
aws lambda update-function-code \
--function-name my-function \
--zip-file fileb://deployment.zip
Container Image Deployment
# Dockerfile for Python Lambda
FROM public.ecr.aws/lambda/python:3.12
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY app.py .
CMD ["app.lambda_handler"]
# Build and push to ECR
docker build -t my-lambda-function .
aws ecr get-login-password | docker login --username AWS \
    --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com
docker tag my-lambda-function:latest \
    123456789012.dkr.ecr.us-east-1.amazonaws.com/my-lambda-function:latest
docker push \
    123456789012.dkr.ecr.us-east-1.amazonaws.com/my-lambda-function:latest
# Create function from container image
aws lambda create-function \
    --function-name my-container-function \
    --package-type Image \
    --code ImageUri=123456789012.dkr.ecr.us-east-1.amazonaws.com/my-lambda-function:latest \
    --role arn:aws:iam::123456789012:role/lambda-role
13. Lambda@Edge
Lambda@Edge runs your code at CloudFront edge locations worldwide, closer to your users. It intercepts CloudFront requests and responses to add headers, rewrite URLs, perform authentication, or customize content based on the viewer's location.
// Add security headers to every response
export const handler = async (event) => {
  const response = event.Records[0].cf.response;

  response.headers['strict-transport-security'] = [{
    key: 'Strict-Transport-Security',
    value: 'max-age=63072000; includeSubDomains; preload'
  }];
  response.headers['x-content-type-options'] = [{
    key: 'X-Content-Type-Options', value: 'nosniff'
  }];
  response.headers['x-frame-options'] = [{
    key: 'X-Frame-Options', value: 'DENY'
  }];
  response.headers['content-security-policy'] = [{
    key: 'Content-Security-Policy',
    value: "default-src 'self'"
  }];

  return response;
};
Lambda@Edge has stricter limits than standard Lambda: 5 seconds for viewer-triggered events, 30 seconds for origin events, and a maximum of 128 MB memory for viewer events. It must be deployed in us-east-1 and automatically replicates to all edge locations.
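Lambda@Edge also supports Python runtimes. As a complementary sketch, a viewer-request function can answer known legacy paths with a redirect straight from the edge, before CloudFront checks its cache or contacts the origin. The `REDIRECTS` mapping below is a made-up example:

```python
# Viewer-request handler: return a 301 for known legacy paths at the
# edge, before CloudFront checks its cache or contacts the origin.
# The REDIRECTS mapping is a hypothetical example.
REDIRECTS = {'/old-blog': '/blog', '/home': '/'}

def lambda_handler(event, context):
    request = event['Records'][0]['cf']['request']
    new_uri = REDIRECTS.get(request['uri'])
    if new_uri:
        return {
            'status': '301',
            'statusDescription': 'Moved Permanently',
            'headers': {
                'location': [{'key': 'Location', 'value': new_uri}]
            }
        }
    return request  # pass the request through unchanged
```

Returning a response object from a viewer-request trigger short-circuits the request entirely; returning the request object lets it continue to the cache and origin as usual.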
14. Cost Optimization
Lambda is already cost-effective for most workloads, but these strategies can reduce costs further.
| Strategy | Savings | Effort |
|---|---|---|
| Right-size memory (Power Tuning) | 10-40% | Low |
| Use ARM64 (Graviton2) | 20% | Low (change architecture) |
| Minimize deployment package size | 5-15% | Medium |
| Batch processing (SQS batch size) | 30-50% | Low |
| Use Reserved Concurrency wisely | Varies | Low |
| Avoid VPC when not needed | Faster cold starts | Low |
| Cache external calls (global variables) | 20-40% | Low |
# Switch to ARM64 architecture (20% cheaper, often faster)
aws lambda update-function-configuration \
    --function-name my-function \
    --architectures arm64
# Check: ensure your dependencies support ARM64.
# Python, Node.js, and Go work seamlessly on ARM64;
# some compiled C extensions may need ARM-compatible builds.
For high-throughput workloads exceeding 1 billion invocations per month, compare Lambda costs against Fargate or EC2. At very high sustained concurrency, a reserved EC2 instance can be cheaper than Lambda. For variable or bursty workloads, Lambda almost always wins.
15. Best Practices
- Keep functions focused — each function should do one thing well. A single-purpose function is easier to test, debug, monitor, and scale independently.
- Initialize outside the handler — SDK clients, database connections, and configuration loading should happen in the global scope so they persist across warm invocations.
- Use environment variables for configuration — never hardcode connection strings, API keys, or table names. Use environment variables or AWS Systems Manager Parameter Store.
- Set appropriate timeouts — do not leave the default 3-second timeout or set it to the max 15 minutes. Measure your function's actual duration and set the timeout to 3x that value.
- Use structured logging — log JSON objects instead of plain strings. Structured logs are searchable in CloudWatch Logs Insights and compatible with monitoring tools.
- Implement idempotency — Lambda may retry your function on failure. Ensure processing the same event twice produces the same result without side effects (use idempotency keys in DynamoDB).
- Use Powertools for AWS Lambda — the official library (available for Python, TypeScript, Java, .NET) adds structured logging, tracing, metrics, idempotency, and input validation with minimal code.
- Monitor with X-Ray — enable active tracing to visualize the full request path through API Gateway, Lambda, DynamoDB, and other services.
- Use least-privilege IAM roles — each function should have its own IAM role with only the permissions it needs. Never use a shared role with broad permissions across functions.
- Test locally with SAM — use `sam local invoke` and `sam local start-api` to test functions on your machine before deploying to AWS.
Frequently Asked Questions
What is the maximum execution time for an AWS Lambda function?
AWS Lambda functions can run for a maximum of 15 minutes (900 seconds). This limit applies to all runtimes and cannot be increased. If your workload requires longer execution, consider breaking it into smaller functions chained together using AWS Step Functions, or use ECS/Fargate for long-running tasks. Most well-designed Lambda functions complete in under 30 seconds. For API Gateway-triggered functions, there is an additional 29-second timeout imposed by API Gateway itself, so API-facing functions should aim to respond within a few seconds.
How do I reduce Lambda cold start times?
Cold starts occur when Lambda creates a new execution environment. To minimize them: use lightweight runtimes like Python or Node.js instead of Java or .NET, which have significantly longer cold starts. Keep your deployment package small by excluding unnecessary dependencies. Use Provisioned Concurrency to keep environments warm. Initialize SDK clients and database connections outside the handler function so they persist across invocations. Avoid placing Lambda functions inside a VPC unless necessary, as VPC-attached functions add ENI creation time. Use Lambda SnapStart for Java functions to reduce cold starts from seconds to under 200 milliseconds.
How much does AWS Lambda cost?
AWS Lambda pricing has two components: requests and duration. You are charged $0.20 per 1 million requests, and duration is charged based on memory allocated and execution time at $0.0000166667 per GB-second. The Free Tier includes 1 million requests and 400,000 GB-seconds per month, and it never expires. For example, a function with 256 MB memory running for 200 milliseconds costs about $0.000000833 per invocation. At 10 million invocations per month, that totals roughly $10.33. Lambda is extremely cost-effective for sporadic or event-driven workloads because you pay nothing when your function is not running.
What are Lambda layers and when should I use them?
Lambda layers are ZIP archives containing libraries, custom runtimes, or other dependencies that you can share across multiple functions. Instead of bundling large libraries like NumPy, Pandas, or the AWS SDK into every deployment package, you package them once as a layer and attach it to any function that needs it. This reduces deployment package sizes, speeds up deployments, and keeps your function code clean. A function can use up to five layers, and the total unzipped size including the function and all layers must be under 250 MB. Common use cases include shared utility libraries, database drivers, monitoring agents, and custom runtimes.
Can I run a web application on AWS Lambda?
Yes. AWS Lambda combined with API Gateway can serve as the backend for web applications. You can run Express.js, Flask, FastAPI, or other web frameworks on Lambda using tools like AWS Lambda Web Adapter, Mangum (for Python ASGI apps), or serverless-express (for Node.js). API Gateway handles HTTP routing, SSL termination, and request validation, while Lambda runs your application code. This architecture scales automatically, costs nothing at zero traffic, and eliminates server management. However, Lambda is not ideal for applications requiring WebSocket connections, long-running requests over 29 seconds, or very high sustained throughput where a traditional server would be more cost-effective.
Conclusion
AWS Lambda has matured into the default choice for event-driven and API workloads on AWS. It eliminates server management, scales automatically, and costs nothing when idle. The key to success with Lambda is understanding its execution model: initialize expensive operations outside the handler, keep deployment packages lean, choose the right memory configuration, and design for idempotency.
Start with a simple function triggered by API Gateway or an S3 event. Use SAM for local testing and deployment. Add CloudWatch alarms for errors and throttles. As your architecture grows, introduce layers for shared dependencies, Step Functions for orchestration, and Provisioned Concurrency for latency-sensitive endpoints. That progression from a single function to a production-grade serverless architecture is exactly how every successful Lambda deployment evolves.
Related Resources
- AWS Cloud Fundamentals Guide — learn the core AWS services and concepts that Lambda builds on
- Python: Complete Beginner's Guide — get started with the most popular Lambda runtime
- Node.js: Complete Guide — build Lambda functions with JavaScript
- Docker: Complete Guide — package Lambda functions as container images
- REST API Design Guide — design APIs that work well with API Gateway and Lambda
- GitHub Actions CI/CD Guide — automate Lambda deployments