Python Logging Complete Guide: Handlers, Formatters & Best Practices
Every production Python application needs logging. Yet most developers reach for print() and only discover the logging module when things break at 2 AM and there is no record of what happened. Python's built-in logging module is powerful, flexible, and already included in the standard library — but its architecture can feel confusing at first. This guide walks you through every layer of the logging system: levels, loggers, handlers, formatters, and filters. You will learn structured JSON logging, framework integration with Django and Flask, production-ready patterns, and how to avoid the most common mistakes.
1. Why Use the Logging Module?
The logging module solves problems that print() cannot. It provides severity levels so you can filter noise from signal, configurable outputs that route messages to files, network sockets, or external services, timestamps and metadata for forensic debugging, and thread-safe operations. Most importantly, logging is hierarchical — you configure it once at the application entry point and every module in your codebase automatically inherits that configuration.
import logging
# Don't do this in production
print("User logged in") # No timestamp, no level, no context
# Do this instead
logger = logging.getLogger(__name__)
logger.info("User logged in", extra={"user_id": 42})
# 2026-02-12 10:30:00,123 INFO myapp.auth User logged in
The logging module has been part of the standard library since Python 2.3. Third-party libraries like structlog and loguru build on top of it rather than replacing it.
2. Log Levels: DEBUG Through CRITICAL
Python defines five standard log levels, each with a numeric value. A logger only processes messages at or above its configured level:
import logging
# Level Numeric Purpose
# DEBUG 10 Detailed diagnostic info (variable values, flow tracing)
# INFO 20 Confirmation that things work as expected
# WARNING 30 Something unexpected but the app still works (default level)
# ERROR 40 A function failed to perform its task
# CRITICAL 50 The application may not be able to continue
logger = logging.getLogger(__name__)
logger.debug("Processing item %d of %d", i, total) # Verbose tracing
logger.info("Order %s created successfully", order_id) # Normal operations
logger.warning("Disk usage at %d%%", usage) # Needs attention soon
logger.error("Failed to connect to database: %s", err) # Something broke
logger.critical("Out of memory, shutting down") # App cannot continue
Key rule: the default level is WARNING. If you call logging.info() without any configuration, nothing appears. This trips up beginners constantly. You must explicitly set a lower level to see DEBUG or INFO messages.
Use DEBUG for information only useful when diagnosing a bug. Use INFO for events that confirm normal operation. Use WARNING when something is unusual but not broken. Reserve ERROR for actual failures. CRITICAL is rare and signals the application is in jeopardy.
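The filtering rule is mechanical: a record is handled only when its numeric level meets the logger's threshold. A minimal sketch (the in-memory handler and logger name are illustrative, chosen so the effect is easy to inspect):

```python
import logging

records = []

class ListHandler(logging.Handler):
    """Collects level names in memory so we can inspect filtering."""
    def emit(self, record):
        records.append(record.levelname)

logger = logging.getLogger("demo.levels")
logger.setLevel(logging.WARNING)  # same threshold an unconfigured root logger uses
logger.addHandler(ListHandler())

logger.debug("dropped")   # 10 < 30: filtered out
logger.info("dropped")    # 20 < 30: filtered out
logger.warning("kept")    # 30 >= 30: passes
logger.error("kept")      # 40 >= 30: passes

print(records)  # ['WARNING', 'ERROR']
```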
3. Quick Start with basicConfig
logging.basicConfig() is the fastest way to get logging working. It configures the root logger with a handler, formatter, and level in a single call:
import logging
# Minimal: log to console at DEBUG level
logging.basicConfig(level=logging.DEBUG)
logging.info("Application started")
# Log to file with a custom format
logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s %(levelname)-8s %(name)s %(message)s",
    datefmt="%Y-%m-%d %H:%M:%S",
    filename="app.log",
    filemode="a",
)
# Log to both file and console
logging.basicConfig(
    level=logging.DEBUG,
    format="%(asctime)s %(levelname)s %(message)s",
    handlers=[logging.FileHandler("app.log"), logging.StreamHandler()],
)
Important: basicConfig() only works if the root logger has no handlers yet. Calling it a second time does nothing (unless you pass force=True, added in Python 3.8). This is why it must be called early, before any library triggers logging configuration.
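You can observe the no-op behavior directly. A quick sketch: the second call changes nothing because the root logger already has a handler, while force=True reconfigures from scratch:

```python
import logging

logging.basicConfig(level=logging.INFO)
root = logging.getLogger()
print(root.level == logging.INFO)  # True

# Second call is silently ignored: the root logger already has a handler
logging.basicConfig(level=logging.DEBUG)
print(root.level == logging.INFO)  # True: nothing changed

# Python 3.8+: force=True removes existing handlers and reconfigures
logging.basicConfig(level=logging.DEBUG, force=True)
print(root.level == logging.DEBUG)  # True
```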
4. The Architecture: Loggers, Handlers, Formatters, and Filters
Every log message flows through this pipeline: Logger (level check) → Filter (optionally reject/modify) → Handler (route to destination) → Formatter (convert to string).
Loggers
Loggers form a hierarchy based on dot-separated names. getLogger("myapp.db.queries") is a child of myapp.db, which is a child of myapp, which is a child of the root logger. Messages propagate upward through this hierarchy:
import logging
# Best practice: one logger per module, named after the module
logger = logging.getLogger(__name__) # e.g., "myapp.services.payment"
# Logger hierarchy
parent = logging.getLogger("myapp")
child = logging.getLogger("myapp.db")
# child.parent is parent, parent.parent is root
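Propagation means a child logger with no handlers of its own still gets its messages delivered through its ancestors' handlers. A small sketch (logger names and the capture handler are illustrative):

```python
import logging

captured = []

class CaptureHandler(logging.Handler):
    def emit(self, record):
        captured.append((record.name, record.getMessage()))

parent = logging.getLogger("shop")
parent.setLevel(logging.INFO)
parent.addHandler(CaptureHandler())

child = logging.getLogger("shop.db")  # no handlers, no explicit level

# The child inherits its effective level from "shop", and its message
# propagates up to the parent's handler
child.info("query ran")
print(captured)  # [('shop.db', 'query ran')]
```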
Handlers
Handlers determine where log messages go. A single logger can have multiple handlers, each with its own level and formatter:
logger = logging.getLogger("myapp")
logger.setLevel(logging.DEBUG)
# Handler 1: all messages to file
file_handler = logging.FileHandler("debug.log")
file_handler.setLevel(logging.DEBUG)
# Handler 2: errors only to console
console_handler = logging.StreamHandler()
console_handler.setLevel(logging.ERROR)
logger.addHandler(file_handler)
logger.addHandler(console_handler)
Formatters
Formatters control the string representation of each log record using %-style placeholders for LogRecord attributes:
# Common attributes: %(asctime)s, %(levelname)s, %(name)s,
# %(module)s, %(funcName)s, %(lineno)d, %(message)s, %(process)d, %(thread)d
formatter = logging.Formatter(
    fmt="%(asctime)s [%(levelname)s] %(name)s:%(funcName)s:%(lineno)d - %(message)s",
    datefmt="%Y-%m-%d %H:%M:%S",
)
handler = logging.StreamHandler()
handler.setFormatter(formatter)
# Output: 2026-02-12 10:30:00 [INFO] myapp.db:get_user:45 - Fetched user 42
Filters
Filters provide fine-grained control beyond levels. Attach them to loggers or handlers to suppress, modify, or enrich log records:
import logging, re
class SensitiveDataFilter(logging.Filter):
    """Redact credit card numbers from log messages."""
    def filter(self, record):
        record.msg = re.sub(r'\b\d{4}[- ]?\d{4}[- ]?\d{4}[- ]?\d{4}\b',
                            '****-****-****-****', str(record.msg))
        return True  # True = keep the record, False = drop it

logger = logging.getLogger("myapp")
logger.addFilter(SensitiveDataFilter())
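Since Python 3.2, addFilter() also accepts any callable that takes a record, so a one-off filter does not need a class. A sketch that drops health-check noise (the message pattern and handler are illustrative):

```python
import logging

class ListHandler(logging.Handler):
    """In-memory handler so the filter's effect is visible."""
    def __init__(self):
        super().__init__()
        self.messages = []
    def emit(self, record):
        self.messages.append(record.getMessage())

logger = logging.getLogger("demo.filters")
logger.setLevel(logging.INFO)
handler = ListHandler()
logger.addHandler(handler)

# A plain callable works as a filter: falsy return value drops the record
logger.addFilter(lambda record: "healthcheck" not in record.getMessage())

logger.info("GET /healthcheck 200")  # dropped by the filter
logger.info("GET /orders 200")       # kept
print(handler.messages)  # ['GET /orders 200']
```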
5. Handler Types: File, Stream, Rotating, and More
RotatingFileHandler
Prevents log files from growing indefinitely by rotating based on file size:
from logging.handlers import RotatingFileHandler
handler = RotatingFileHandler(
    "app.log",
    maxBytes=10 * 1024 * 1024,  # 10 MB per file
    backupCount=5,              # Keep app.log.1 through app.log.5
    encoding="utf-8",
)
TimedRotatingFileHandler
Rotates based on time intervals, ideal for daily or hourly log files:
from logging.handlers import TimedRotatingFileHandler
handler = TimedRotatingFileHandler(
    "app.log",
    when="midnight",  # Rotate daily at midnight
    interval=1,
    backupCount=30,   # Keep 30 days of logs
    encoding="utf-8",
    utc=True,
)
handler.suffix = "%Y-%m-%d"  # app.log.2026-02-12
Other Useful Handlers
The logging.handlers module also provides SocketHandler (TCP), SysLogHandler (syslog), SMTPHandler (email alerts), HTTPHandler (HTTP endpoints), QueueHandler/QueueListener (non-blocking via queue), and MemoryHandler (buffered flush). For example, to email on critical errors:
from logging.handlers import SMTPHandler
smtp_handler = SMTPHandler(
    mailhost=("smtp.example.com", 587),
    fromaddr="alerts@example.com",
    toaddrs=["oncall@example.com"],
    subject="CRITICAL: Application Error",
    credentials=("user", "password"),
    secure=(),
)
smtp_handler.setLevel(logging.CRITICAL)
6. Configuration with dictConfig
For production applications, logging.config.dictConfig() is the preferred approach. It separates configuration from code and supports all features:
import logging.config
LOGGING_CONFIG = {
    "version": 1,
    "disable_existing_loggers": False,
    "formatters": {
        "standard": {
            "format": "%(asctime)s [%(levelname)s] %(name)s: %(message)s",
            "datefmt": "%Y-%m-%d %H:%M:%S",
        },
        "detailed": {
            "format": "%(asctime)s [%(levelname)s] %(name)s:%(funcName)s:%(lineno)d %(message)s",
        },
    },
    "handlers": {
        "console": {
            "class": "logging.StreamHandler",
            "level": "INFO",
            "formatter": "standard",
            "stream": "ext://sys.stdout",
        },
        "file": {
            "class": "logging.handlers.RotatingFileHandler",
            "level": "DEBUG",
            "formatter": "detailed",
            "filename": "app.log",
            "maxBytes": 10485760,
            "backupCount": 5,
        },
    },
    "loggers": {
        "myapp": {"level": "DEBUG", "handlers": ["console", "file"], "propagate": False},
        "myapp.db": {"level": "WARNING"},
    },
    "root": {"level": "WARNING", "handlers": ["console"]},
}
logging.config.dictConfig(LOGGING_CONFIG)
Always set "disable_existing_loggers": False to avoid silencing loggers created before dictConfig() runs. Third-party libraries that call getLogger() at import time will lose their configuration if you leave it as True.
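Because the configuration is a plain dict, it can also live outside your code, for example in a JSON file loaded at startup. A sketch, with an inline JSON string standing in for a hypothetical logging.json file:

```python
import json
import logging
import logging.config

# Inline JSON standing in for a logging.json file shipped with the app
config_text = """
{
  "version": 1,
  "disable_existing_loggers": false,
  "formatters": {"plain": {"format": "%(levelname)s %(name)s %(message)s"}},
  "handlers": {"console": {"class": "logging.StreamHandler",
                           "formatter": "plain", "level": "INFO"}},
  "root": {"level": "INFO", "handlers": ["console"]}
}
"""
logging.config.dictConfig(json.loads(config_text))

logger = logging.getLogger("myapp")
print(logger.getEffectiveLevel())  # 20: inherited from the root logger
```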
7. Structured Logging and JSON Output
Plain text logs are hard to search at scale. Structured logging outputs each record as JSON, which aggregation systems like the ELK stack, Datadog, or Splunk can ingest directly.
Using python-json-logger
# pip install python-json-logger
import logging
from pythonjsonlogger import jsonlogger
logger = logging.getLogger("myapp")
handler = logging.StreamHandler()
formatter = jsonlogger.JsonFormatter(
    fmt="%(asctime)s %(levelname)s %(name)s %(message)s",
    rename_fields={"asctime": "timestamp", "levelname": "level"},
)
handler.setFormatter(formatter)
logger.addHandler(handler)
logger.setLevel(logging.INFO)
logger.info("Order created", extra={"order_id": "ORD-123", "user_id": 42, "total": 99.99})
# {"timestamp": "2026-02-12T10:30:00+0000", "level": "INFO",
# "name": "myapp", "message": "Order created",
# "order_id": "ORD-123", "user_id": 42, "total": 99.99}
Custom JSON Formatter (Zero Dependencies)
import json, logging
from datetime import datetime, timezone
class JSONFormatter(logging.Formatter):
    def format(self, record):
        log_data = {
            "timestamp": datetime.fromtimestamp(record.created, tz=timezone.utc).isoformat(),
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
            "module": record.module,
            "function": record.funcName,
            "line": record.lineno,
        }
        if record.exc_info and record.exc_info[0] is not None:
            log_data["exception"] = self.formatException(record.exc_info)
        return json.dumps(log_data, default=str)
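One way to sanity-check a JSON formatter is to format a hand-built record and parse the line back. A sketch using a condensed version of the formatter above (fewer fields, same technique):

```python
import json
import logging
from datetime import datetime, timezone

class JSONFormatter(logging.Formatter):
    """Condensed version of the zero-dependency formatter above."""
    def format(self, record):
        return json.dumps({
            "timestamp": datetime.fromtimestamp(record.created, tz=timezone.utc).isoformat(),
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        }, default=str)

# Format a hand-built record and parse it back to prove it round-trips
record = logging.LogRecord("myapp", logging.INFO, "app.py", 1,
                           "Order %s created", ("ORD-123",), None)
parsed = json.loads(JSONFormatter().format(record))
print(parsed["message"])  # Order ORD-123 created
print(parsed["level"])    # INFO
```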
8. Logging in Django
Django uses Python's logging module and configures it through the LOGGING setting in settings.py. For a deeper dive, see our Django complete guide:
# settings.py
LOGGING = {
    "version": 1,
    "disable_existing_loggers": False,
    "formatters": {
        "verbose": {
            "format": "{asctime} {levelname} {name} {module}:{lineno} {message}",
            "style": "{",
        },
    },
    "handlers": {
        "console": {"class": "logging.StreamHandler", "formatter": "verbose"},
        "file": {
            "class": "logging.handlers.RotatingFileHandler",
            "filename": "django.log",
            "maxBytes": 10485760,
            "backupCount": 5,
            "formatter": "verbose",
        },
    },
    "loggers": {
        "django": {"handlers": ["console"], "level": "INFO"},
        "django.db.backends": {
            "handlers": ["file"], "level": "WARNING", "propagate": False,
        },
        "myapp": {
            "handlers": ["console", "file"], "level": "DEBUG", "propagate": False,
        },
    },
}
# In your views:
import logging
logger = logging.getLogger(__name__)
def create_order(request):
    logger.info("Creating order for user %s", request.user.id)
    try:
        order = Order.objects.create(user=request.user)
        logger.info("Order %s created", order.id)
    except Exception:
        logger.exception("Failed to create order")
        raise
9. Logging in Flask
Flask provides a built-in logger via app.logger. Configure it using dictConfig before creating the app:
import logging
from logging.config import dictConfig
from flask import Flask, request
dictConfig({
    "version": 1,
    "disable_existing_loggers": False,
    "formatters": {"default": {"format": "%(asctime)s %(levelname)s %(name)s: %(message)s"}},
    "handlers": {
        "file": {
            "class": "logging.handlers.RotatingFileHandler",
            "filename": "flask.log",
            "maxBytes": 10485760,
            "backupCount": 5,
            "formatter": "default",
        },
    },
    "root": {"level": "INFO", "handlers": ["file"]},
})
app = Flask(__name__)
@app.before_request
def log_request():
    app.logger.info("Request: %s %s from %s",
                    request.method, request.path, request.remote_addr)

@app.after_request
def log_response(response):
    app.logger.info("Response: %s %s -> %d",
                    request.method, request.path, response.status_code)
    return response
10. Production Best Practices
Use Lazy String Formatting
# Bad: string is always formatted, even if DEBUG is filtered out
logger.debug(f"Processing user {user.id} with {len(items)} items")
# Good: formatting only happens if DEBUG is enabled
logger.debug("Processing user %s with %d items", user.id, len(items))
Log Exceptions Properly
try:
    result = process_payment(order)
except PaymentError:
    # Bad: loses the traceback
    logger.error("Payment failed")
    # Good: includes full traceback automatically
    logger.exception("Payment failed for order %s", order.id)
    # Also good: explicit exc_info on any level
    logger.warning("Payment retry needed for %s", order.id, exc_info=True)
Add Context with Extra Fields and LoggerAdapter
# Use extra for structured context
logger.info("Order shipped", extra={
    "order_id": order.id, "tracking_number": tracking, "carrier": "fedex",
})

# Or use LoggerAdapter for consistent context across multiple log calls
class RequestAdapter(logging.LoggerAdapter):
    def process(self, msg, kwargs):
        kwargs.setdefault("extra", {}).update(self.extra)
        return msg, kwargs

logger = RequestAdapter(logging.getLogger(__name__), {
    "request_id": "req-abc-123", "user_id": 42,
})
Logging in Libraries
Libraries should never configure logging. Add a NullHandler and let the application decide where logs go:
# In your library's __init__.py
import logging
logging.getLogger(__name__).addHandler(logging.NullHandler())
11. Common Patterns and Anti-Patterns
# ANTI-PATTERNS
logging.info("User logged in") # Root logger, no context
logging.info("User " + str(user_id) + " logged in") # Eager string concat
try:
    do_something()
except:   # Bare except
    pass  # Swallowed errors
logger.debug("user=%s password=%s", user, password) # Logging secrets
logging.basicConfig(level=logging.DEBUG) # Config in library code
# RECOMMENDED PATTERNS
logger = logging.getLogger(__name__) # Module-level logger
request_id = str(uuid.uuid4())[:8] # Correlation IDs
logger.info("Handling request", extra={"request_id": request_id})
if logger.isEnabledFor(logging.DEBUG):  # Guard expensive debug
    logger.debug("Full state: %s", expensive_state_dump())
# Contextual logging with contextvars (Python 3.7+)
import contextvars
request_id_var = contextvars.ContextVar("request_id", default="-")
class ContextFilter(logging.Filter):
    def filter(self, record):
        record.request_id = request_id_var.get()
        return True
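Wiring the contextvars pieces together: set the ContextVar at the start of each request, attach the filter to a handler, and reference %(request_id)s in the format string. A sketch with an illustrative request ID:

```python
import contextvars
import logging

request_id_var = contextvars.ContextVar("request_id", default="-")

class ContextFilter(logging.Filter):
    def filter(self, record):
        record.request_id = request_id_var.get()
        return True

handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter("[%(request_id)s] %(levelname)s %(message)s"))
handler.addFilter(ContextFilter())

logger = logging.getLogger("demo.ctx")
logger.setLevel(logging.INFO)
logger.addHandler(handler)

request_id_var.set("req-42")
logger.info("Handling request")  # stderr: [req-42] INFO Handling request
```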
12. Asyncio Logging Considerations
The standard logging module is thread-safe but not fully async-aware. Handlers that perform I/O (file writes, network calls) can block the event loop. The solution is the QueueHandler/QueueListener pattern. For a deeper understanding of Python's async model, see our asyncio complete guide:
import asyncio, logging
from logging.handlers import QueueHandler, QueueListener
from queue import Queue
def setup_async_logging():
    """Non-blocking logging for asyncio applications."""
    log_queue = Queue()
    file_handler = logging.FileHandler("async_app.log")
    file_handler.setFormatter(logging.Formatter(
        "%(asctime)s [%(levelname)s] %(name)s: %(message)s"
    ))
    # QueueListener runs handlers in a background thread
    listener = QueueListener(log_queue, file_handler, respect_handler_level=True)
    listener.start()
    # QueueHandler is fast and non-blocking
    root = logging.getLogger()
    root.addHandler(QueueHandler(log_queue))
    root.setLevel(logging.DEBUG)
    return listener  # Call listener.stop() on shutdown

async def main():
    listener = setup_async_logging()
    logger = logging.getLogger("myapp.async")
    try:
        logger.info("Async application started")
        await asyncio.gather(worker("task-1"), worker("task-2"))
    finally:
        listener.stop()

async def worker(name):
    logger = logging.getLogger(f"myapp.worker.{name}")
    logger.info("Worker started")
    await asyncio.sleep(1)
    logger.info("Worker finished")
The handler puts records into a queue instantly while the listener processes them in a dedicated thread, preventing file I/O from blocking your event loop.
13. Centralized Logging with the ELK Stack
For distributed systems, centralized logging is essential. The ELK stack (Elasticsearch, Logstash, Kibana) is one of the most popular solutions. See our Elasticsearch guide for cluster setup details:
# Option 1: Ship JSON logs via Filebeat to Elasticsearch
handler = logging.FileHandler("/var/log/myapp/app.json")
handler.setFormatter(jsonlogger.JsonFormatter(
    fmt="%(asctime)s %(levelname)s %(name)s %(message)s",
    rename_fields={"asctime": "@timestamp", "levelname": "level"},
))
# Option 2: Send logs directly to Logstash via TCP
from logging.handlers import SocketHandler
import json
class LogstashHandler(SocketHandler):
    """Send JSON logs directly to Logstash TCP input."""
    def makePickle(self, record):
        log_entry = {
            "@timestamp": self.formatter.formatTime(record) if self.formatter else "",
            "level": record.levelname, "logger": record.name,
            "message": record.getMessage(), "application": "myapp",
        }
        if record.exc_info:
            log_entry["exception"] = logging.Formatter().formatException(record.exc_info)
        return (json.dumps(log_entry) + "\n").encode("utf-8")
14. Practical Examples
Production Web Application Setup
"""Complete logging setup for a production web application."""
import logging, logging.config, os
def configure_logging(env: str = "production"):
    log_level = "DEBUG" if env == "development" else "INFO"
    log_dir = "/var/log/myapp"
    os.makedirs(log_dir, exist_ok=True)
    logging.config.dictConfig({
        "version": 1,
        "disable_existing_loggers": False,
        "formatters": {
            "json": {
                "()": "pythonjsonlogger.jsonlogger.JsonFormatter",
                "fmt": "%(asctime)s %(levelname)s %(name)s %(message)s",
            },
            "console": {
                "format": "%(asctime)s %(levelname)-8s %(name)s %(message)s",
            },
        },
        "handlers": {
            "console": {"class": "logging.StreamHandler", "formatter": "console", "level": log_level},
            "app_file": {
                "class": "logging.handlers.TimedRotatingFileHandler",
                "filename": f"{log_dir}/app.log", "when": "midnight",
                "backupCount": 30, "formatter": "json", "encoding": "utf-8",
            },
            "error_file": {
                "class": "logging.handlers.RotatingFileHandler",
                "filename": f"{log_dir}/error.log", "maxBytes": 52428800,
                "backupCount": 10, "level": "ERROR", "formatter": "json",
            },
        },
        "loggers": {
            "myapp": {"level": log_level, "handlers": ["console", "app_file", "error_file"], "propagate": False},
            "sqlalchemy.engine": {"level": "WARNING"},
        },
        "root": {"level": "WARNING", "handlers": ["console"]},
    })
CLI Tool Logging
"""Logging pattern for command-line tools with -v/-vv verbosity."""
import argparse, logging, sys
def setup_cli_logging(verbosity: int):
    levels = {0: logging.WARNING, 1: logging.INFO, 2: logging.DEBUG}
    level = levels.get(verbosity, logging.DEBUG)
    handler = logging.StreamHandler(sys.stderr)
    handler.setFormatter(logging.Formatter(
        "%(levelname)s: %(message)s" if level > logging.DEBUG
        else "%(asctime)s %(levelname)s %(name)s:%(lineno)d %(message)s"
    ))
    root = logging.getLogger()
    root.addHandler(handler)
    root.setLevel(level)

def main():
    parser = argparse.ArgumentParser()
    parser.add_argument("-v", "--verbose", action="count", default=0)
    args = parser.parse_args()
    setup_cli_logging(args.verbose)
    logger = logging.getLogger("mycli")
    logger.info("Starting processing")
    logger.debug("Config: %s", vars(args))

if __name__ == "__main__":
    main()
Testing Your Logging with Pytest
Pytest provides the caplog fixture to capture and assert log messages. For a complete testing reference, see our pytest guide:
import logging
def process_item(item_id):
    logger = logging.getLogger(__name__)
    if item_id < 0:
        logger.warning("Negative item ID: %d", item_id)
        return None
    logger.info("Processing item %d", item_id)
    return {"id": item_id, "status": "done"}

# test_processing.py
def test_negative_item_logs_warning(caplog):
    with caplog.at_level(logging.WARNING):
        result = process_item(-1)
    assert result is None
    assert "Negative item ID: -1" in caplog.text
    assert caplog.records[0].levelname == "WARNING"
15. Advanced Patterns
Logging Decorator
Decorators are a natural fit for logging function calls. See our decorators guide for more patterns:
import functools, logging, time
def log_calls(logger=None, level=logging.DEBUG):
    """Decorator that logs function entry, exit, and duration."""
    def decorator(func):
        nonlocal logger
        if logger is None:
            logger = logging.getLogger(func.__module__)
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            name = func.__qualname__
            logger.log(level, "Calling %s", name)
            start = time.perf_counter()
            try:
                result = func(*args, **kwargs)
                logger.log(level, "%s returned in %.3fs", name, time.perf_counter() - start)
                return result
            except Exception:
                logger.exception("%s raised after %.3fs", name, time.perf_counter() - start)
                raise
        return wrapper
    return decorator

@log_calls(level=logging.INFO)
def fetch_data(url):
    import urllib.request
    return urllib.request.urlopen(url).read()
Preventing Sensitive Data Leaks
import logging, re
class SanitizingFormatter(logging.Formatter):
    """Redact known sensitive patterns from all log output."""
    PATTERNS = [
        (re.compile(r'password["\s:=]+\S+', re.IGNORECASE), 'password=***REDACTED***'),
        (re.compile(r'token["\s:=]+\S+', re.IGNORECASE), 'token=***REDACTED***'),
        (re.compile(r'Bearer\s+\S+'), 'Bearer ***REDACTED***'),
    ]
    def format(self, record):
        message = super().format(record)
        for pattern, replacement in self.PATTERNS:
            message = pattern.sub(replacement, message)
        return message
Frequently Asked Questions
What is the difference between logging.warning() and logging.warn() in Python?
logging.warn() is a deprecated alias for logging.warning(). Always use logging.warning() in new code. The warn() method was kept for backward compatibility but has been formally deprecated since Python 3.3 and may be removed in a future release. Similarly, use logger.warning() instead of logger.warn() on Logger instances.
Should I use print() or logging in Python?
Use logging for any application beyond quick scripts. The logging module gives you severity levels, configurable output destinations, structured metadata, timestamps, and the ability to enable or disable messages without changing code. print() writes to stdout with no filtering, no timestamps, and no way to route output to files or external systems. In production, print() output often gets lost while properly configured logging persists to files, aggregation systems, or monitoring tools.
How do I log exceptions with full tracebacks in Python?
Use logger.exception() inside an except block, which automatically attaches the current traceback at ERROR level. Alternatively, pass exc_info=True to any logging call: logger.error('Something failed', exc_info=True). The exception() method is the most common approach and always logs at ERROR level with the full stack trace.
Why do I see duplicate log messages in Python?
Duplicate log messages typically happen because multiple handlers are attached to a logger or its ancestors. The most common causes are: calling basicConfig() multiple times, adding handlers inside functions that run repeatedly, or having both a parent and child logger with handlers (messages propagate up by default). Fix this by configuring logging once at startup, or setting propagate=False on child loggers when you attach handlers to them directly.
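The propagation duplicate is easy to reproduce and fix in a few lines (logger names and the counting handler are illustrative):

```python
import logging

hits = []

class CountHandler(logging.Handler):
    def emit(self, record):
        hits.append(record.getMessage())

parent = logging.getLogger("dup")
parent.setLevel(logging.INFO)
parent.addHandler(CountHandler())

child = logging.getLogger("dup.worker")
child.addHandler(CountHandler())  # child has its own handler too

child.info("hello")
print(len(hits))  # 2: once via the child handler, once via propagation

child.propagate = False
child.info("again")
print(hits.count("again"))  # 1: propagation stopped
```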
How do I configure different log levels for different modules in Python?
Use the logger hierarchy with __name__. Each module that calls logging.getLogger(__name__) gets a logger named after its dotted module path (e.g., myapp.db.queries). You can then configure levels per logger in dictConfig: set myapp.db to DEBUG for database diagnostics while keeping myapp.api at WARNING. Child loggers inherit their parent's level unless explicitly overridden.
Summary
Python's logging module is a mature, flexible system that scales from quick scripts to distributed production services. The key takeaways:
- Always use getLogger(__name__) for module-level loggers with proper hierarchy
- Configure once at the entry point using dictConfig for production applications
- Use lazy formatting (logger.info("msg %s", val)) to avoid unnecessary string operations
- Log exceptions with logger.exception() to capture full tracebacks
- Use structured JSON logging for production systems that feed into log aggregation
- Libraries should never configure logging; just add a NullHandler
- Use QueueHandler for async apps to avoid blocking the event loop
- Sanitize sensitive data with custom filters or formatters
Good logging is an investment that pays off during every debugging session, every incident response, and every performance investigation. Set it up properly from the start and your future self will thank you.