Why Proper Logging Matters
Before diving into the technical details, let's look at why proper logging matters:
- Enables effective debugging in production
- Provides insight into application behavior
- Facilitates performance monitoring
- Helps track security events
- Supports compliance requirements
- Improves maintenance efficiency
Quick Start with Python Logging
For those new to Python logging, here is a basic example using logging.basicConfig:
```python
# Simple Python logging example
import logging

# Basic configuration
logging.basicConfig(
    level=logging.INFO,
    format='%(asctime)s - %(name)s - %(levelname)s - %(message)s'
)

# Create a logger
logger = logging.getLogger(__name__)

# Use the logger
logger.info("This is an information message")
logger.warning("This is a warning message")
```
This example demonstrates the basics of the logging module and shows how to use a Python logger in an application.
Getting Started with the Python Logging Module
Basic Setup
Let's start with a simple logging configuration:
```python
import logging

# Basic configuration
logging.basicConfig(
    level=logging.INFO,
    format='%(asctime)s - %(name)s - %(levelname)s - %(message)s'
)

# Your first logger
logger = logging.getLogger(__name__)

# Using the logger
logger.info("Application started")
logger.warning("Watch out!")
logger.error("Something went wrong")
```
Understanding Log Levels
Python logging has five standard levels:
| Level | Numeric Value | When to Use |
|---|---|---|
| DEBUG | 10 | Detailed information for diagnosing problems |
| INFO | 20 | General operational events |
| WARNING | 30 | Something unexpected happened |
| ERROR | 40 | A more serious problem |
| CRITICAL | 50 | The program may not be able to continue |
Beyond print() Statements
Why choose logging over print statements?
- Severity levels for better categorization
- Timestamp information
- Source information (file and line number)
- Configurable output destinations
- Production-ready filtering
- Thread safety
Configuring Your Logging System
Basic Configuration Options
```python
import logging

# Log to a file with a timestamped format
logging.basicConfig(
    filename='app.log',
    filemode='w',
    format='%(asctime)s - %(name)s - %(levelname)s - %(message)s',
    level=logging.DEBUG,
    datefmt='%Y-%m-%d %H:%M:%S'
)
```
Advanced Configuration
For more complex applications:
```python
import logging
import logging.config

config = {
    'version': 1,
    'formatters': {
        'detailed': {
            'format': '%(asctime)s - %(name)s - %(levelname)s - %(message)s'
        }
    },
    'handlers': {
        'console': {
            'class': 'logging.StreamHandler',
            'level': 'INFO',
            'formatter': 'detailed'
        },
        'file': {
            'class': 'logging.FileHandler',
            'filename': 'app.log',
            'level': 'DEBUG',
            'formatter': 'detailed'
        }
    },
    'loggers': {
        'myapp': {
            'handlers': ['console', 'file'],
            'level': 'DEBUG',
            'propagate': True
        }
    }
}

logging.config.dictConfig(config)
```
Working with Advanced Logging
Structured Logging
Structured logging provides a consistent, machine-readable format that is essential for log analysis and monitoring. For a comprehensive overview of structured logging patterns and best practices, see a dedicated structured logging guide. Let's implement structured logging in Python:
```python
import json
import logging

class JSONFormatter(logging.Formatter):
    def __init__(self):
        super().__init__()

    def format(self, record):
        # Create the base log record
        log_obj = {
            "timestamp": self.formatTime(record, self.datefmt),
            "name": record.name,
            "level": record.levelname,
            "message": record.getMessage(),
            "module": record.module,
            "function": record.funcName,
            "line": record.lineno
        }

        # Add exception info if present
        if record.exc_info:
            log_obj["exception"] = self.formatException(record.exc_info)

        # Add custom fields passed via `extra`
        if hasattr(record, "extra_fields"):
            log_obj.update(record.extra_fields)

        return json.dumps(log_obj)

# Usage example
logger = logging.getLogger(__name__)
logger.setLevel(logging.INFO)
handler = logging.StreamHandler()
handler.setFormatter(JSONFormatter())
logger.addHandler(handler)

# Log with extra fields
logger.info(
    "User logged in",
    extra={"extra_fields": {"user_id": "123", "ip": "192.168.1.1"}}
)
```
Error Management
Proper error logging is essential for debugging production issues. Here is a comprehensive approach:
```python
import logging
import sys
import traceback
from contextlib import contextmanager

class ErrorLogger:
    def __init__(self, logger):
        self.logger = logger

    @contextmanager
    def error_context(self, operation_name, **context):
        """Context manager for error logging with additional context."""
        try:
            yield
        except Exception:
            # Capture the current exception and stack trace
            exc_type, exc_value, exc_traceback = sys.exc_info()

            # Collect error details
            error_details = {
                "operation": operation_name,
                "error_type": exc_type.__name__,
                "error_message": str(exc_value),
                "context": context,
                "stack_trace": traceback.format_exception(exc_type, exc_value, exc_traceback)
            }

            # Log the error with full context
            self.logger.error(
                f"Error in {operation_name}: {exc_value}",
                extra={"error_details": error_details}
            )

            # Re-raise the exception
            raise

# Usage example
logger = logging.getLogger(__name__)
error_logger = ErrorLogger(logger)

with error_logger.error_context("user_authentication", user_id="123", attempt=2):
    # Your code that might raise an exception
    authenticate_user(user_id)
```
Concurrent Logging
When logging from multi-threaded applications, you need to ensure thread safety:
```python
import logging
import threading
from queue import Queue
from logging.handlers import QueueHandler, QueueListener

def setup_thread_safe_logging():
    """Set up thread-safe logging with a queue."""
    # Create the queue
    log_queue = Queue()

    # Create the real handlers
    console_handler = logging.StreamHandler()
    file_handler = logging.FileHandler('app.log')

    # Create the queue handler and listener
    queue_handler = QueueHandler(log_queue)
    listener = QueueListener(
        log_queue,
        console_handler,
        file_handler,
        respect_handler_level=True
    )

    # Configure the root logger
    root_logger = logging.getLogger()
    root_logger.setLevel(logging.INFO)
    root_logger.addHandler(queue_handler)

    # Start the listener in a separate thread
    listener.start()
    return listener

# Usage
listener = setup_thread_safe_logging()

def worker_function():
    logger = logging.getLogger(__name__)
    logger.info(f"Worker thread {threading.current_thread().name} starting")
    # Do work...
    logger.info(f"Worker thread {threading.current_thread().name} finished")

# Create and start threads
threads = [threading.Thread(target=worker_function) for _ in range(3)]
for thread in threads:
    thread.start()
```
Logging in Different Environments
Different application environments call for specific logging approaches. Whether you are working with web applications, microservices, or background tasks, each environment has its own logging requirements and best practices. Let's look at how to implement effective logging across various deployment scenarios.
Web Application Logging
Django Logging Configuration
Here is a comprehensive Django logging setup:
```python
# settings.py
LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'formatters': {
        'verbose': {
            'format': '{levelname} {asctime} {module} {process:d} {thread:d} {message}',
            'style': '{',
        },
        'simple': {
            'format': '{levelname} {message}',
            'style': '{',
        },
    },
    'filters': {
        'require_debug_true': {
            '()': 'django.utils.log.RequireDebugTrue',
        },
    },
    'handlers': {
        'console': {
            'level': 'INFO',
            'filters': ['require_debug_true'],
            'class': 'logging.StreamHandler',
            'formatter': 'simple'
        },
        'file': {
            'level': 'ERROR',
            'class': 'logging.FileHandler',
            'filename': 'django-errors.log',
            'formatter': 'verbose'
        },
        'mail_admins': {
            'level': 'ERROR',
            'class': 'django.utils.log.AdminEmailHandler',
            'include_html': True,
        }
    },
    'loggers': {
        'django': {
            'handlers': ['console'],
            'propagate': True,
        },
        'django.request': {
            'handlers': ['file', 'mail_admins'],
            'level': 'ERROR',
            'propagate': False,
        },
        'myapp': {
            'handlers': ['console', 'file'],
            'level': 'INFO',
        }
    }
}
```
Flask Logging Setup
Flask provides its own logging system that can be customized:
```python
import logging
from logging.handlers import RotatingFileHandler
from flask import Flask, jsonify, request

app = Flask(__name__)

def setup_logger():
    # Create the formatter
    formatter = logging.Formatter(
        '[%(asctime)s] %(levelname)s in %(module)s: %(message)s'
    )

    # Rotating file handler
    file_handler = RotatingFileHandler(
        'flask_app.log',
        maxBytes=10485760,  # 10MB
        backupCount=10
    )
    file_handler.setLevel(logging.INFO)
    file_handler.setFormatter(formatter)

    # Formatter that adds request context
    class RequestFormatter(logging.Formatter):
        def format(self, record):
            record.url = request.url
            record.remote_addr = request.remote_addr
            return super().format(record)

    # Configure the app logger
    app.logger.addHandler(file_handler)
    app.logger.setLevel(logging.INFO)
    return app.logger

# Usage in routes
@app.route('/api/endpoint')
def api_endpoint():
    app.logger.info(f'Request received from {request.remote_addr}')
    # Your code here
    return jsonify({'status': 'success'})
```
FastAPI Logging Practices
FastAPI can build on Python's logging with a few middleware enhancements:
```python
import logging
import time
from typing import Callable

from fastapi import FastAPI, Request

app = FastAPI()

# Configure logging
logging.basicConfig(
    level=logging.INFO,
    format='%(asctime)s - %(name)s - %(levelname)s - %(message)s'
)
logger = logging.getLogger(__name__)

# Middleware for request logging
@app.middleware("http")
async def log_requests(request: Request, call_next: Callable):
    start_time = time.time()
    response = await call_next(request)
    duration = time.time() - start_time

    log_dict = {
        "url": str(request.url),
        "method": request.method,
        "client_ip": request.client.host,
        "duration": f"{duration:.2f}s",
        "status_code": response.status_code
    }
    logger.info(f"Request processed: {log_dict}")
    return response

# Example endpoint with logging
@app.get("/items/{item_id}")
async def read_item(item_id: int):
    logger.info(f"Retrieving item {item_id}")
    # Your code here
    return {"item_id": item_id}
```
Microservices Logging
For microservices, distributed tracing and correlation IDs are essential:
```python
import contextvars
import logging
from uuid import uuid4

# Context variable holding the current trace ID
trace_id_var = contextvars.ContextVar('trace_id', default=None)

class TraceIDFilter(logging.Filter):
    def filter(self, record):
        trace_id = trace_id_var.get()
        record.trace_id = trace_id if trace_id else 'no_trace'
        return True

def setup_microservice_logging(service_name):
    logger = logging.getLogger(service_name)

    # Formatter that includes the trace ID
    formatter = logging.Formatter(
        '%(asctime)s - %(name)s - [%(trace_id)s] - %(levelname)s - %(message)s'
    )

    # Handler with the trace ID filter
    handler = logging.StreamHandler()
    handler.setFormatter(formatter)
    handler.addFilter(TraceIDFilter())

    logger.addHandler(handler)
    logger.setLevel(logging.INFO)
    return logger

# Usage in a microservice
logger = setup_microservice_logging('order_service')

def process_order(order_data):
    # Generate or propagate the trace ID for this request
    trace_id_var.set(str(uuid4()))

    logger.info("Starting order processing", extra={
        'order_id': order_data['id'],
        'customer_id': order_data['customer_id']
    })
    # Process the order...
    logger.info("Order processed successfully")
```
Background Task Logging
For background tasks, we need to ensure proper log handling and rotation:
```python
import logging
import os
import threading
from datetime import datetime
from logging.handlers import RotatingFileHandler

class BackgroundTaskLogger:
    def __init__(self, task_name):
        self.logger = logging.getLogger(f'background_task.{task_name}')
        self.setup_logging()

    def setup_logging(self):
        # Create the logs directory if it doesn't exist
        os.makedirs('logs', exist_ok=True)

        # Rotating file handler
        handler = RotatingFileHandler(
            filename=f'logs/task_{datetime.now():%Y%m%d}.log',
            maxBytes=5 * 1024 * 1024,  # 5MB
            backupCount=5
        )

        # Formatter including the thread name
        formatter = logging.Formatter(
            '%(asctime)s - [%(threadName)s] - %(levelname)s - %(message)s'
        )
        handler.setFormatter(formatter)

        self.logger.addHandler(handler)
        self.logger.setLevel(logging.INFO)

    def log_task_status(self, status, **kwargs):
        """Log task status with additional context."""
        extra = {
            'thread_id': threading.get_ident(),
            'timestamp': datetime.now().isoformat(),
            **kwargs
        }
        self.logger.info(f"Task status: {status}", extra=extra)

# Usage example
def background_job():
    logger = BackgroundTaskLogger('data_processing')
    try:
        logger.log_task_status('started', job_id=123)
        # Do some work...
        logger.log_task_status('completed', records_processed=1000)
    except Exception as e:
        logger.logger.error(f"Task failed: {e}", exc_info=True)
```
Common Logging Patterns and Solutions
Request ID Tracking
Implement request tracking in your application:
```python
import logging
import threading
import uuid
from contextlib import contextmanager

# Store the request ID in thread-local storage
_request_id = threading.local()

class RequestIDFilter(logging.Filter):
    def filter(self, record):
        record.request_id = getattr(_request_id, 'id', 'no_request_id')
        return True

@contextmanager
def request_context(request_id=None):
    """Context manager for request tracking."""
    if request_id is None:
        request_id = str(uuid.uuid4())

    old_id = getattr(_request_id, 'id', None)
    _request_id.id = request_id
    try:
        yield request_id
    finally:
        if old_id is None:
            del _request_id.id
        else:
            _request_id.id = old_id

# Set up logging with the request ID
def setup_request_logging():
    logger = logging.getLogger()
    logger.setLevel(logging.INFO)

    formatter = logging.Formatter(
        '%(asctime)s - [%(request_id)s] - %(levelname)s - %(message)s'
    )
    handler = logging.StreamHandler()
    handler.setFormatter(formatter)
    handler.addFilter(RequestIDFilter())
    logger.addHandler(handler)
    return logger

# Usage example
logger = setup_request_logging()

def process_request(data):
    with request_context() as request_id:
        logger.info("Processing request", extra={
            'data': data,
            'operation': 'process_request'
        })
        # Process the request...
        logger.info("Request processed successfully")
```
User Activity Logging
Track user actions securely:
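A minimal sketch of what this could look like; the `log_user_action` helper and `SENSITIVE_KEYS` set are illustrative names, not standard-library features, and sensitive fields are masked before anything is written:

```python
import logging

# Hypothetical set of fields that must never appear in logs
SENSITIVE_KEYS = {"password", "token", "credit_card"}

activity_logger = logging.getLogger("user_activity")
activity_logger.setLevel(logging.INFO)
handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter('%(asctime)s - USER_ACTIVITY - %(message)s'))
activity_logger.addHandler(handler)

def log_user_action(user_id, action, **details):
    """Log a user action, masking any sensitive fields."""
    safe_details = {
        key: ("***" if key in SENSITIVE_KEYS else value)
        for key, value in details.items()
    }
    activity_logger.info(f"user={user_id} action={action} details={safe_details}")

# Usage
log_user_action("123", "login", ip="192.168.1.1", password="hunter2")
```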
Troubleshooting and Debugging
Troubleshooting logging effectively requires understanding common problems and their solutions. This section covers the challenges developers most often face when implementing logging and offers practical ways to debug logging configurations.
Common Logging Issues
Missing Log Entries
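The most common cause is a level mismatch: the root logger defaults to WARNING, so lower-severity records are silently dropped unless you raise the level on the logger (and handler) you actually use. A small sketch of the problem and the fix:

```python
import logging

logger = logging.getLogger("myapp")

# Problem: with the default configuration the effective level is WARNING,
# so this DEBUG message is dropped.
logger.debug("You will not see this")

# Fix: attach a handler and set the level on both logger and handler.
handler = logging.StreamHandler()
handler.setLevel(logging.DEBUG)
logger.addHandler(handler)
logger.setLevel(logging.DEBUG)

logger.debug("Now this message appears")
```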
Performance Bottlenecks
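Logging calls in hot paths become a bottleneck when they build expensive messages that are then discarded by level filtering. One common remedy is to guard costly work with `isEnabledFor()`; a brief sketch (the `expensive_summary` function is illustrative):

```python
import logging

logger = logging.getLogger(__name__)

def expensive_summary(data):
    # Imagine this walks a large structure or serializes a big object
    return str(sorted(data))

data = list(range(1000))

# Guard expensive work so it only runs when DEBUG is actually enabled
if logger.isEnabledFor(logging.DEBUG):
    logger.debug("Data summary: %s", expensive_summary(data))
```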
Common Logging Pitfalls and Solutions
Configuration Issues
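A frequent configuration surprise is that `dictConfig()` disables loggers created before it runs unless `disable_existing_loggers` is explicitly set to `False`. A minimal sketch of the pitfall and the fix:

```python
import logging
import logging.config

# This logger exists before dictConfig() runs; by default it would be silenced.
early_logger = logging.getLogger("myapp.startup")

logging.config.dictConfig({
    "version": 1,
    "disable_existing_loggers": False,  # keep pre-existing loggers working
    "handlers": {
        "console": {"class": "logging.StreamHandler", "level": "INFO"},
    },
    "root": {"handlers": ["console"], "level": "INFO"},
})

early_logger.info("Still visible because existing loggers were not disabled")
```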
Memory and Resource Issues
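Unbounded log files and handlers that are never closed are the typical resource leaks. A short sketch showing a size-capped `RotatingFileHandler` and explicit handler cleanup:

```python
import logging
from logging.handlers import RotatingFileHandler

logger = logging.getLogger("myapp")
logger.setLevel(logging.INFO)

# Rotate files so a single log cannot grow without bound
handler = RotatingFileHandler("app.log", maxBytes=5 * 1024 * 1024, backupCount=3)
logger.addHandler(handler)

logger.info("Application event")

# Release the file descriptor when a handler is no longer needed
logger.removeHandler(handler)
handler.close()
```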
Format Strings and Performance Issues
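F-strings build the message eagerly even when the record is later filtered out, whereas %-style arguments are only interpolated if the record is actually emitted. A quick comparison:

```python
import logging

logger = logging.getLogger(__name__)
user_id, order_id = "123", "A-42"

# Eager: the f-string is built even if INFO is disabled
logger.info(f"Order {order_id} created for user {user_id}")

# Lazy: arguments are only interpolated when the record is emitted
logger.info("Order %s created for user %s", order_id, user_id)
```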
Handler Configuration Pitfalls
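A classic pitfall is attaching a new handler every time a setup function runs, which makes each message appear multiple times. One defensive pattern (the `get_logger` helper here is illustrative):

```python
import logging

def get_logger(name):
    logger = logging.getLogger(name)
    # Guard against re-adding handlers on repeated calls,
    # which would duplicate every log line.
    if not logger.handlers:
        handler = logging.StreamHandler()
        handler.setFormatter(logging.Formatter('%(levelname)s %(name)s %(message)s'))
        logger.addHandler(handler)
        logger.setLevel(logging.INFO)
    return logger

# Calling this twice no longer produces duplicated output
log = get_logger("myapp")
log = get_logger("myapp")
log.info("Logged exactly once")
```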
Thread Safety Considerations
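The standard handlers serialize writes with internal locks, so sharing one logger across threads is safe; the harder cases are multiple processes writing to one file, where a queue-based setup like the one shown earlier is preferable. A small sketch of multi-threaded logging:

```python
import logging
import threading

logging.basicConfig(
    level=logging.INFO,
    format='%(asctime)s - %(threadName)s - %(levelname)s - %(message)s'
)
logger = logging.getLogger(__name__)

def worker(n):
    # Standard handlers acquire a lock per record, so concurrent calls
    # from multiple threads do not interleave within a single line.
    logger.info("Worker %s running", n)

threads = [threading.Thread(target=worker, args=(i,)) for i in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```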
Configuration File Issues
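When configuration lives in an INI-style file, the usual stumbling blocks are relative paths and the fact that `fileConfig()` also disables existing loggers by default. A minimal sketch, with an illustrative `logging.conf` shown in the comments:

```python
# logging.conf -- a minimal INI-style configuration (illustrative)
# [loggers]
# keys=root
#
# [handlers]
# keys=consoleHandler
#
# [formatters]
# keys=simpleFormatter
#
# [logger_root]
# level=INFO
# handlers=consoleHandler
#
# [handler_consoleHandler]
# class=StreamHandler
# level=INFO
# formatter=simpleFormatter
# args=(sys.stdout,)
#
# [formatter_simpleFormatter]
# format=%(asctime)s - %(name)s - %(levelname)s - %(message)s

import logging
import logging.config

# Pitfall: fileConfig() disables existing loggers by default; pass
# disable_existing_loggers=False to keep loggers created before this call.
logging.config.fileConfig("logging.conf", disable_existing_loggers=False)

logging.getLogger(__name__).info("Configured from logging.conf")
```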
Testing Your Logging
Unit Testing with Logging
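`unittest` can assert on emitted log records directly with `assertLogs`. A small sketch (the `create_user` function is illustrative):

```python
import logging
import unittest

def create_user(name):
    logging.getLogger("myapp").info("User %s created", name)
    return {"name": name}

class CreateUserTests(unittest.TestCase):
    def test_logs_user_creation(self):
        # assertLogs captures records emitted on the named logger
        with self.assertLogs("myapp", level="INFO") as captured:
            create_user("alice")
        self.assertEqual(len(captured.records), 1)
        self.assertIn("User alice created", captured.output[0])

if __name__ == "__main__":
    unittest.main()
```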
Testing with Mock Loggers
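Alternatively, the logger method itself can be replaced with a mock so the test asserts on how it was called. A sketch using `unittest.mock.patch.object` (the `charge` function is illustrative):

```python
import logging
import unittest
from unittest.mock import patch

logger = logging.getLogger("payments")

def charge(amount):
    logger.info("Charging %s", amount)
    return True

class ChargeTests(unittest.TestCase):
    @patch.object(logger, "info")
    def test_charge_logs_amount(self, mock_info):
        charge(42)
        # The mock records how the code under test called it
        mock_info.assert_called_once_with("Charging %s", 42)

if __name__ == "__main__":
    unittest.main()
```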
Alternative Logging Solutions
Loguru
Loguru offers a simpler logging interface with powerful features out of the box:
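A minimal sketch of typical Loguru usage; the file name and rotation settings are illustrative:

```python
from loguru import logger

# One call sets up a rotating file sink with retention
logger.add("app.log", rotation="10 MB", retention="7 days", level="INFO")

logger.info("Application started")
logger.warning("Low disk space")

# Exceptions are captured with full tracebacks
@logger.catch
def risky_division(x):
    return 1 / x

risky_division(0)
```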
Structlog
Structlog is well suited to structured logging with context:
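A minimal sketch of typical structlog usage with JSON output and bound context; the processor choice is illustrative:

```python
import structlog

# Configure structlog to emit timestamped JSON events
structlog.configure(
    processors=[
        structlog.processors.TimeStamper(fmt="iso"),
        structlog.processors.add_log_level,
        structlog.processors.JSONRenderer(),
    ]
)

log = structlog.get_logger()

# Key-value pairs become structured fields in the output
log.info("user_logged_in", user_id="123", ip="192.168.1.1")

# bind() attaches context to every subsequent event
log = log.bind(request_id="abc-123")
log.info("order_created", order_id="A-42")
```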
python-json-logger
For JSON-formatted logging:
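A minimal sketch using python-json-logger's `JsonFormatter`; the field selection is illustrative:

```python
import logging
from pythonjsonlogger import jsonlogger

logger = logging.getLogger(__name__)
logger.setLevel(logging.INFO)

handler = logging.StreamHandler()
handler.setFormatter(jsonlogger.JsonFormatter(
    "%(asctime)s %(name)s %(levelname)s %(message)s"
))
logger.addHandler(handler)

# Extra fields are merged into the JSON document
logger.info("User logged in", extra={"user_id": "123", "ip": "192.168.1.1"})
```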
Best Practices and Guidelines
Logging Standards
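One widely used convention is to centralize configuration in a single startup module and have every other module request a namespaced logger with `getLogger(__name__)`. A minimal sketch (the module name `logging_setup.py` is illustrative):

```python
# logging_setup.py -- hypothetical shared module enforcing one project-wide standard
import logging

LOG_FORMAT = "%(asctime)s - %(name)s - %(levelname)s - %(message)s"

def configure_logging(level=logging.INFO):
    """Configure the root logger once, at application startup."""
    logging.basicConfig(level=level, format=LOG_FORMAT)

# In every other module, follow the same convention:
# import logging
# logger = logging.getLogger(__name__)
```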
Performance Optimization
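Beyond levels and lazy formatting, high-volume services sometimes sample low-severity records rather than writing every one. A sketch of an illustrative sampling filter (not a standard-library feature):

```python
import logging
import random

class SamplingFilter(logging.Filter):
    """Illustrative filter that keeps only a fraction of DEBUG records."""
    def __init__(self, sample_rate=0.1):
        super().__init__()
        self.sample_rate = sample_rate

    def filter(self, record):
        if record.levelno > logging.DEBUG:
            return True  # always keep INFO and above
        return random.random() < self.sample_rate

logger = logging.getLogger("myapp")
logger.setLevel(logging.DEBUG)

handler = logging.StreamHandler()
handler.addFilter(SamplingFilter(sample_rate=0.1))
logger.addHandler(handler)
```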
Case Studies
Real-World Implementation: E-commerce Platform
Microservices Architecture Example
Conclusion
Key Takeaways
- Fundamentals first: start with a correct basic configuration
  - Set appropriate log levels
  - Configure meaningful formats
  - Choose suitable handlers
- Structured approach: use structured logging for better analysis
  - Consistent log formats
  - Contextual information
  - Machine-parseable output
- Performance matters: optimize logging for production
  - Implement log rotation
  - Use asynchronous logging where needed
  - Consider sampling strategies
- Security awareness: protect sensitive information
  - Filter sensitive data
  - Implement appropriate access controls
  - Meet compliance requirements
Implementation Checklist
Additional Resources
- Official documentation:
  - Python Logging HOWTO
  - Logging Cookbook
- Tools and libraries:
  - Loguru documentation
  - Structlog documentation
  - python-json-logger
This guide has covered the essential aspects of Python logging, from basic setup to advanced implementations. Remember that logging is an integral part of application observability and maintenance. Implement it thoughtfully and maintain it regularly for the best results.
Keep in mind that your logging implementation should be reviewed and updated regularly as your application evolves and new requirements emerge.