OpenTelemetry: Zero-Code vs Classic Instrumentation

Zero-code vs classic instrumentation: the true metrics showdown


This article is also available in French.


Introduction: the pain point

We've all been there: an app lagging in production, an impatient client, and us staring at a screen, digging through mountains of unreadable logs.
Setting up good observability is often seen as the ultimate chore. But with OpenTelemetry (OTel), the game has changed. We are faced with a very practical choice: go for the efficiency of "zero-code" (auto-instrumentation) or get our hands dirty with a classic (manual) approach to extract true business value.

So, why has this become indispensable nowadays, and how far can we go? Let's break it down.

Auto-instrumentation (zero-code): day one coverage

The huge advantage of zero-code is the promise of having functional observability without touching a single line of your core business logic.

🐍 Concrete example in Python (the most mature ecosystem)

Python uses an agent based on monkey patching: at startup, this agent dynamically modifies the libraries you use. It's a remarkably effective method.

Here are the installation steps:

# 1. Installation (only once)
pip install opentelemetry-distro opentelemetry-exporter-otlp

# 2. Automatic installation of instrumentations for your libs (Flask, FastAPI, requests, psycopg2, redis, kafka, etc.)
opentelemetry-bootstrap -a install

# 3. Running your app with the agent (pure zero-code)
export OTEL_SERVICE_NAME=my-python-app
export OTEL_TRACES_EXPORTER=otlp
export OTEL_METRICS_EXPORTER=otlp
export OTEL_EXPORTER_OTLP_ENDPOINT=http://your-collector:4317   # or your backend (Tempo, Jaeger, Signoz, etc.)

opentelemetry-instrument python app.py
# or for Flask:
opentelemetry-instrument flask run

And there you go! Your app.py runs fully instrumented, without a single code change.
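The OTLP endpoint in the environment variables above assumes something is listening on port 4317, typically an OpenTelemetry Collector. A minimal Collector configuration for local testing might look like this (using the standard otlp receiver and the built-in debug exporter; swap the exporter for your real backend in production):

```yaml
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: 0.0.0.0:4317

exporters:
  debug:
    verbosity: basic

service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [debug]
    metrics:
      receivers: [otlp]
      exporters: [debug]
```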

What you get out-of-the-box:

  • Distributed tracing (HTTP server and client Spans, databases, queues, automatic context propagation).
  • Technical metrics (Number of HTTP requests, latency, errors, system/runtime metrics).
  • Enriched logs with correlation identifiers (trace_id or span_id).
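For context, the "automatic context propagation" mentioned above travels over HTTP as a W3C Trace Context traceparent header. A minimal, standard-library-only sketch of what that header carries (the example header value comes from the W3C spec; real headers are generated by the agent for you):

```python
def parse_traceparent(header: str) -> dict:
    """Parse a W3C Trace Context `traceparent` header.

    Format: version-trace_id-parent_span_id-flags, all lowercase hex.
    """
    version, trace_id, span_id, flags = header.split("-")
    return {
        "version": version,          # "00" in the current spec
        "trace_id": trace_id,        # 16 bytes -> 32 hex chars
        "span_id": span_id,          # parent span id: 8 bytes -> 16 hex chars
        "sampled": int(flags, 16) & 0x01 == 1,  # sampling decision bit
    }

# A header as it would be injected by an auto-instrumented HTTP client:
ctx = parse_traceparent("00-0af7651916cd43dd8448eb211c80319c-b7ad6b7169203331-01")
print(ctx["trace_id"], ctx["sampled"])
```

This trace_id is exactly the correlation identifier that shows up in your enriched logs, which is what lets a backend stitch logs and spans together.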

The limitations: the approach shines on the technical layer but stops at the boundary of your business logic. You won't get a span for a complex calculate_price() function, and attaching crucial business attributes like user_tier is impossible without advanced hooks or falling back to manual instrumentation.

Concrete example in Rust (the modern approach 🦀)

Rust is a compiled language, which makes dynamic monkey patching much less natural. But the ecosystem is evolving rapidly.

The true zero-code approach (OpenTelemetry eBPF Instrumentation, or OBI):
It's an eBPF agent running at the Linux kernel level. It instruments your network calls without touching the Rust binary, without recompilation, and with no additional dependencies in your Cargo.toml.

  • Pros: Very lightweight, zero downtime, perfect for tracing incoming and outgoing HTTP/gRPC flows.
  • Cons: Less depth. No spans for your internal functions, and no easy access to business variables.

The "almost zero-code" approach (Framework Middleware):
For frameworks like Actix-web or Axum, adding a middleware only requires a few lines of setup at startup:

# Cargo.toml
[dependencies]
opentelemetry = { version = "0.28", features = ["trace"] }
opentelemetry_sdk = { version = "0.28", features = ["rt-tokio"] }
opentelemetry-otlp = "0.28"
opentelemetry-instrumentation-actix-web = "0.1"   
tracing-opentelemetry = "0.1"

The classic (manual) approach: when tech meets "business"

So, can you have a dashboard showing the "number of validated orders > €100" using only zero-code? The answer is no.

This is where manual instrumentation shines. Instantiating metrics (Counter, Histogram, Gauge) directly in your code lets you link application performance to business goals with hard numbers.

Always start with zero-code to instantly secure an 80% baseline coverage. Then, manually instrument only your critical business paths. The effort is minimal for a maximum return on investment.


💡 The Opsvox touch: observability to anticipate and build the future

At Opsvox, we systematically set up the technical baseline (system metrics, global logs, network traces) for all our clients. But we never stop there.

What separates merely keeping production running from truly mastering its evolution is the integration of targeted metrics. With this layer of manual instrumentation, we no longer just see a "CPU or RAM spike"; we instantly identify which type of operation generates that load.

This gives us the keys to anticipate scaling, ensure relevant High Availability, and plan intelligent capacity. That is the very essence of our operational mastery.


The field verdict

OpenTelemetry finally unifies how we observe our systems. Zero-code is the perfect tool to bootstrap monitoring: it offers solid technical metrics, vital for infrastructure teams. But to create real value and deeply understand your users' behavior, targeted manual instrumentation remains indispensable.

Combine both: start with auto-instrumentation, then surgically complement it with manual instrumentation where it's critical. Nothing will slip past you in production.