Code snippet: Tracing PostgreSQL queries with OpenTelemetry (Python)
from opentelemetry import trace
from opentelemetry.instrumentation.psycopg2 import Psycopg2Instrumentor
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter
import psycopg2

# Configure the tracer provider and print finished spans to the console
trace.set_tracer_provider(TracerProvider())
tracer = trace.get_tracer(__name__)

span_processor = BatchSpanProcessor(ConsoleSpanExporter())
trace.get_tracer_provider().add_span_processor(span_processor)

# Auto-instrument psycopg2 so every executed statement produces a span
Psycopg2Instrumentor().instrument()

conn = psycopg2.connect("dbname=test user=postgres")
cur = conn.cursor()

with tracer.start_as_current_span("run-heavy-query"):
    cur.execute("SELECT * FROM large_table WHERE condition = 'value';")
    results = cur.fetchall()
Tracing identifies cross-service bottlenecks that impact query speed.
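The console exporter above is useful for local debugging; to actually correlate spans across services, export them to a shared backend instead. A minimal sketch, assuming an OpenTelemetry Collector reachable on the default OTLP gRPC port (localhost:4317) and the opentelemetry-exporter-otlp-proto-grpc package installed:
Code snippet: Exporting spans to an OpenTelemetry Collector (sketch)
from opentelemetry import trace
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor

# Send spans to a collector instead of the console so traces from every
# instrumented service land in one backend (the endpoint here is an assumption)
trace.set_tracer_provider(TracerProvider())
otlp_exporter = OTLPSpanExporter(endpoint="http://localhost:4317", insecure=True)
trace.get_tracer_provider().add_span_processor(BatchSpanProcessor(otlp_exporter))
Only the exporter changes; the psycopg2 instrumentation and span code above stay the same.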
Proactive anomaly detection in query latency
Setting dynamic alerting thresholds based on observability data enables rapid detection of performance degradation. The snippet below uses a fixed threshold for clarity; a dynamic variant is sketched at the end of the section.
Code snippet: Python alerting for slow queries
import psycopg2

# Flag any statement whose average execution time exceeds the threshold
LATENCY_THRESHOLD_MS = 500

conn = psycopg2.connect("dbname=test user=postgres")
cur = conn.cursor()

# mean_time is exposed by the pg_stat_statements extension
# (the column is named mean_exec_time on PostgreSQL 13+)
cur.execute("""
    SELECT query, mean_time
    FROM pg_stat_statements
    WHERE mean_time > %s;
""", (LATENCY_THRESHOLD_MS,))

for query, latency in cur.fetchall():
    print(f"WARNING: Query exceeding latency threshold: {latency} ms\n{query}")
Automating this helps maintain SLAs and avoid user impact.
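To make the threshold dynamic rather than hard-coded, it can be derived from the workload's own statistics. A minimal sketch, assuming the pre-PostgreSQL-13 pg_stat_statements column name mean_time (mean_exec_time on 13+) and treating anything more than three standard deviations above the average statement latency as anomalous:
Code snippet: Dynamic latency threshold from pg_stat_statements (sketch)
import psycopg2

conn = psycopg2.connect("dbname=test user=postgres")
cur = conn.cursor()

# Derive the threshold from the workload itself: average statement latency
# plus three standard deviations, falling back to 500 ms if no data exists
cur.execute("SELECT avg(mean_time) + 3 * stddev(mean_time) FROM pg_stat_statements;")
row = cur.fetchone()
dynamic_threshold_ms = row[0] if row and row[0] is not None else 500.0

# Flag statements whose mean latency exceeds the derived threshold
cur.execute("""
    SELECT query, mean_time
    FROM pg_stat_statements
    WHERE mean_time > %s;
""", (dynamic_threshold_ms,))

for query, latency in cur.fetchall():
    print(f"WARNING: Query exceeding dynamic threshold ({dynamic_threshold_ms:.0f} ms): "
          f"{latency:.0f} ms\n{query}")
Recomputing the threshold from recent statistics keeps the alert meaningful as the workload shifts, at the cost of requiring enough history in pg_stat_statements for the average and standard deviation to be representative.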