The Easiest Way to Do OpenTelemetry in .NET: OTLP + Collector
Decouple your observability configuration from your application code with OTLP and collectors
19 minutes to read

- The Problem with Vendor-Specific Exporters
- Enter OTLP: The Universal Solution
- The Magic of the OTLP Collector
- Real-World Configuration: The Memorizer Example
- Production Deployment Patterns
- Why This Approach Wins in Production
- Debugging and Troubleshooting
- The Business Case: Why CTOs Love This Approach
- Getting Started: Your First OTLP Setup
- Wrapping Up
We know OpenTelemetry deeply at Petabridge. We’ve built Phobos, an OpenTelemetry instrumentation plugin for Akka.NET, so we understand the low-level bits. Beyond that, we’ve been using OpenTelemetry in production for years on Sdkbin, and we’ve helped over 100 customers implement OpenTelemetry configurations very similar to our own. Through all this experience, one thing has become crystal clear: the easiest, most production-ready approach to OpenTelemetry in .NET is using OTLP (the OpenTelemetry Protocol) with a collector.
In this post, I’ll walk you through why this approach beats vendor-specific exporters every time, show you exactly how to configure it, and demonstrate the real-world benefits we’ve experienced at Petabridge. This is the companion piece to my recent YouTube video on the topic.
The Problem with Vendor-Specific Exporters
When you’re getting started with OpenTelemetry in one of your projects, you already know your team uses DataDog, or New Relic, or Application Insights. So naturally, the first thing you’ll try is connecting your application directly to that specific tool.
You end up with something that looks like this:
builder.Services
    .AddOpenTelemetry()
    .WithTracing(tracing => // can't reuse the name "builder" here - it shadows the outer WebApplicationBuilder
    {
        tracing
            .AddHttpClientInstrumentation()
            .AddAspNetCoreInstrumentation()
            // Coupling our app to vendor-specific implementations
            // (illustrative extension methods - the actual APIs vary by vendor)
            .AddDatadogTracing()       // Application code now depends on DataDog SDK
            .AddNewRelicTracing()      // And New Relic SDK
            .AddAzureMonitorTracing(); // And Azure Monitor SDK
    });
You’re going to get frustrated doing this, for several reasons:
- Vendor Coupling: Your application code is now directly coupled to vendor-specific SDKs and their implementation details
- Violated Separation of Concerns: Observability configuration is tangled with application logic
- Configuration Complexity: Every vendor has wildly different configuration requirements living in your app
- Missing or Incomplete Packages: Not all vendors have first-party OpenTelemetry exporters, and community packages vary in quality
- Deployment Headaches: Want to try a different APM tool? You have to redeploy your entire application
- Resource Overhead: Your application does the heavy lifting of formatting and shipping telemetry to multiple endpoints
We learned this lesson the hard way during our early Phobos development. Every time we wanted to add support for a new APM vendor, we had to modify our application code, create new exporters, test new configurations, and redeploy everything. Our observability choices were coupled to our application releases - a clear violation of separation of concerns.
Enter OTLP: The Universal Solution
The OpenTelemetry Protocol (OTLP) is the native wire protocol for OpenTelemetry data. It’s vendor-neutral, efficient, and designed specifically for telemetry data transport. Instead of using vendor-specific exporters, you configure your .NET application to export everything via OTLP to a single endpoint.
Here’s how the same configuration looks with OTLP:
builder.Services
    .AddOpenTelemetry()
    .UseOtlpExporter() // That's it. One exporter, infinite possibilities.
    .WithMetrics(metrics => metrics
        .AddAspNetCoreInstrumentation()
        .AddRuntimeInstrumentation())
    .WithTracing(tracing => tracing
        .AddAspNetCoreInstrumentation()
        .AddHttpClientInstrumentation()
        .AddSource("YourApp")); // Your custom activity source
The beauty of this approach is in its simplicity. Your .NET application doesn’t need to know or care about Datadog, New Relic, Grafana, or any other vendor. It just exports standardized OTLP data to a single endpoint.
The Magic of the OTLP Collector
But wait - how does your OTLP data get to your actual monitoring platforms? That’s where the OpenTelemetry Collector comes in. The collector is a separate process that receives OTLP data and can export it to any number of backend systems.
Here’s the architecture we use in production:
graph TD
    A[.NET Application] -->|OTLP| B[OpenTelemetry Collector]
    B --> C[DataDog]
    B --> D[Grafana]
    B --> E[New Relic]
    B --> F[Azure Monitor]
    B --> G[Local Files/Debug]

    style A fill:#e1f5fe
    style B fill:#fff3e0
    style C fill:#f3e5f5
    style D fill:#e8f5e8
    style E fill:#fff8e1
    style F fill:#e3f2fd
    style G fill:#fce4ec
The collector configuration is where the real magic happens. Here’s a real collector configuration we use:
# otel-collector.yaml
receivers:
  otlp:
    protocols: # At least one protocol must be declared explicitly
      grpc:
        endpoint: 0.0.0.0:4317 # Standard OTLP gRPC port
      http:
        endpoint: 0.0.0.0:4318 # Standard OTLP HTTP port

processors:
  batch: # Batches telemetry data for efficiency

exporters:
  # Debug output for local development
  debug:
    verbosity: detailed

  # Send to Prometheus for metrics
  otlphttp/prometheus:
    endpoint: ${env:PROMETHEUS_ENDPOINT}
    tls:
      insecure: true

  # Send to Seq for logs and traces
  otlphttp/seq:
    endpoint: ${env:SEQ_ENDPOINT}

  # Send to Grafana Cloud (or your APM of choice)
  otlp/grafana:
    endpoint: ${env:GRAFANA_ENDPOINT}
    headers:
      x-api-key: ${env:GRAFANA_API_KEY}

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlp/grafana, otlphttp/seq]
    metrics:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlp/grafana, otlphttp/prometheus]
    logs:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlp/grafana, otlphttp/seq]
Real-World Configuration: The Memorizer Example
Let’s look at how we’ve implemented this approach in our Memorizer MCP Server, a vector-search powered agent memory system we built with .NET and Akka.NET.
First, the OTLP configuration in our .NET application:
// Simple OTLP configuration - that's all you need!
services
    .AddOpenTelemetry()
    .UseOtlpExporter() // The magic line - OTLP everywhere
    .WithMetrics(metrics => metrics
        .AddAspNetCoreInstrumentation()
        .AddRuntimeInstrumentation())
    .WithTracing(tracing => tracing
        .AddAspNetCoreInstrumentation()
        .AddHttpClientInstrumentation()
        .AddSource("YourApp")); // Your custom activity source
And here’s how we configure it via environment variables in production:
# OpenTelemetry Configuration
export OTEL_SERVICE_NAME="memorizer"
export OTEL_SERVICE_VERSION="1.0.0"
export OTEL_RESOURCE_ATTRIBUTES="service.namespace=petabridge,deployment.environment=production"
export OTEL_EXPORTER_OTLP_ENDPOINT="http://otel-collector:4317"
# That's it! No vendor-specific configuration needed.
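For local development, the same values can live in your project's `launchSettings.json` instead of shell exports, so every team member gets a working setup out of the box. A minimal sketch (the profile name `YourApp` is a placeholder for your own launch profile):

```json
{
  "profiles": {
    "YourApp": {
      "commandName": "Project",
      "environmentVariables": {
        "OTEL_SERVICE_NAME": "memorizer",
        "OTEL_SERVICE_VERSION": "1.0.0",
        "OTEL_EXPORTER_OTLP_ENDPOINT": "http://localhost:4317"
      }
    }
  }
}
```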
Production Deployment Patterns
Over the years, we’ve identified three main deployment patterns for OTLP collectors that work well in production:
Pattern 1: Sidecar Deployment
Deploy the collector as a sidecar container alongside your application. This provides isolation and makes it easy to manage collector resources independently:
# docker-compose.yml
version: '3.8'
services:
  your-app:
    image: your-app:latest
    environment:
      # 4317 = gRPC, the .NET OTLP exporter's default protocol
      - OTEL_EXPORTER_OTLP_ENDPOINT=http://otel-collector:4317
      - OTEL_SERVICE_NAME=your-app
    depends_on:
      - otel-collector

  otel-collector:
    image: otel/opentelemetry-collector-contrib:latest
    command: ["--config=/etc/otelcol-contrib/config.yaml"]
    volumes:
      - ./otel-collector-config.yaml:/etc/otelcol-contrib/config.yaml:ro
    environment:
      - PROMETHEUS_ENDPOINT=http://prometheus:9090
      - SEQ_ENDPOINT=http://seq:5341
      - GRAFANA_ENDPOINT=${GRAFANA_ENDPOINT}
      - GRAFANA_API_KEY=${GRAFANA_API_KEY}
Pattern 2: Kubernetes DaemonSet
For Kubernetes environments, deploy the collector as a DaemonSet to get one collector per node, reducing network hops and providing better resource efficiency:
# otel-collector-daemonset.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: otel-collector-config
data:
  config.yaml: |
    receivers:
      otlp:
        protocols:
          grpc:
            endpoint: 0.0.0.0:4317
          http:
            endpoint: 0.0.0.0:4318
    processors:
      batch:
    exporters:
      otlp/grafana:
        endpoint: ${env:GRAFANA_ENDPOINT}
        headers:
          x-api-key: ${env:GRAFANA_API_KEY}
    service:
      pipelines:
        traces:
          receivers: [otlp]
          processors: [batch]
          exporters: [otlp/grafana]
        metrics:
          receivers: [otlp]
          processors: [batch]
          exporters: [otlp/grafana]
        logs:
          receivers: [otlp]
          processors: [batch]
          exporters: [otlp/grafana]
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: otel-collector
spec:
  selector:
    matchLabels:
      app: otel-collector
  template:
    metadata:
      labels:
        app: otel-collector
    spec:
      containers:
        - name: otel-collector
          image: otel/opentelemetry-collector-contrib:latest
          ports:
            - containerPort: 4317
              hostPort: 4317 # Expose on the node's IP so application pods can reach it
              name: otlp-grpc
            - containerPort: 4318
              hostPort: 4318
              name: otlp-http
          env:
            - name: GRAFANA_ENDPOINT
              value: "your-grafana-endpoint:4317"
            - name: GRAFANA_API_KEY
              valueFrom:
                secretKeyRef:
                  name: otel-secrets
                  key: grafana-api-key
          volumeMounts:
            - name: config
              mountPath: /etc/otelcol-contrib
      volumes:
        - name: config
          configMap:
            name: otel-collector-config
            items:
              - key: config.yaml
                path: config.yaml
This gives you a collector on every node that your applications can reach via localhost:4317. For more advanced configurations and Helm charts, see the OpenTelemetry Helm Charts repository.
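On the application side, pods can discover the node-local collector by injecting the node IP through the Kubernetes Downward API - a common companion pattern to DaemonSet collectors. A sketch of the relevant fragment of your application's Deployment spec:

```yaml
# Fragment of the application container spec (Deployment/StatefulSet)
env:
  - name: NODE_IP
    valueFrom:
      fieldRef:
        fieldPath: status.hostIP # The IP of the node this pod landed on
  - name: OTEL_EXPORTER_OTLP_ENDPOINT
    # $(NODE_IP) is expanded by Kubernetes because NODE_IP is declared above it
    value: "http://$(NODE_IP):4317"
```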
Why This Approach Wins in Production
After four years of production OpenTelemetry deployments, here’s why the OTLP + Collector approach consistently outperforms vendor-specific exporters:
1. True Separation of Concerns
Your application’s job is to produce telemetry, not to know where it goes. Want to switch from DataDog to Grafana? Just update the collector configuration. Your application code remains untouched:
# Before: DataDog only
exporters:
  datadog:
    api:
      key: ${env:DATADOG_API_KEY}

# After: Added Grafana, keeping DataDog
exporters:
  datadog:
    api:
      key: ${env:DATADOG_API_KEY}
  otlphttp/grafana:
    endpoint: ${env:GRAFANA_OTLP_ENDPOINT}
    headers:
      authorization: "Basic ${env:GRAFANA_BASIC_AUTH}"
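One caveat worth remembering: declaring an exporter isn't enough on its own - every pipeline that should use it must also list it by name. A sketch of the matching pipeline change for the "after" config above:

```yaml
service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      # The same trace data now fans out to both vendors
      exporters: [datadog, otlphttp/grafana]
```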
2. Simplified Network Architecture
Your applications only need outbound HTTP/gRPC access to the collector. The collector handles all the complex vendor-specific network requirements, authentication, and retry logic.
graph TD
    subgraph "Application Network Zone"
        A[App 1] --> C[Collector]
        B[App 2] --> C
        D[App 3] --> C
    end

    subgraph "External Vendor Zone"
        C --> E[DataDog API]
        C --> F[Grafana Cloud]
        C --> G[New Relic API]
        C --> H[Azure Monitor]
    end

    style C fill:#fff3e0
3. Better Resource Management
The collector can batch, compress, and optimize telemetry data before sending it to vendors. Your applications just fire-and-forget via OTLP, while the collector handles the heavy lifting:
processors:
  batch:
    timeout: 10s              # Wait up to 10 seconds
    send_batch_size: 512      # Or until we have 512 spans
    send_batch_max_size: 1024 # Never exceed 1024 spans
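The batch processor pairs naturally with the collector's memory_limiter processor, which sheds load before the collector itself becomes a stability risk. The values below are illustrative, not tuned recommendations:

```yaml
processors:
  # memory_limiter should be the first processor in each pipeline, before batch
  memory_limiter:
    check_interval: 1s    # How often memory usage is checked
    limit_mib: 512        # Soft ceiling on collector memory
    spike_limit_mib: 128  # Headroom reserved for sudden bursts
  batch:
    timeout: 10s
```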
4. Development/Production Parity
The same OTLP configuration works in development, staging, and production. Just point OTEL_EXPORTER_OTLP_ENDPOINT to different collector instances:
# Development
export OTEL_EXPORTER_OTLP_ENDPOINT="http://localhost:4317"
# Staging
export OTEL_EXPORTER_OTLP_ENDPOINT="http://otel-collector.staging.internal:4317"
# Production
export OTEL_EXPORTER_OTLP_ENDPOINT="http://otel-collector.prod.internal:4317"
5. Perfect .NET Aspire Integration
If you’re using .NET Aspire (and you should be), OTLP is the natural choice. Aspire embraces this separation of concerns - your app emits OTLP, and infrastructure handles the routing.
For a complete example of .NET Aspire with OpenTelemetry and OTLP collector, check out our Petabridge.Phobos.Web project on GitHub, which includes:
- Custom OTLP collector resource configuration
- Integration with Grafana and Prometheus
- Complete Docker Compose setup
- Production-ready patterns
Debugging and Troubleshooting
One of the biggest advantages of the OTLP approach is debugging visibility. The collector can log everything it’s doing, making it easy to troubleshoot telemetry pipeline issues.
Here’s how we set up debugging in development:
# Development collector config with debugging
receivers:
  otlp:
    protocols: # Declare the protocols you need; defaults bind to localhost
      grpc:
      http:

exporters:
  debug:
    verbosity: detailed

service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [debug] # See everything in console
    metrics:
      receivers: [otlp]
      exporters: [debug]
    logs:
      receivers: [otlp]
      exporters: [debug]
And in your .NET application, add a test endpoint to verify OTLP is working:
// Debug endpoint in your application (requires using System.Diagnostics;)
// Reuse the same source name you registered via .AddSource("YourApp") -
// Activity has no static StartActivity method, so we need an ActivitySource
var activitySource = new ActivitySource("YourApp");

app.MapGet("/otel-test", (ILoggerFactory loggerFactory) =>
{
    using var activity = activitySource.StartActivity("test-activity");
    activity?.SetTag("test.tag", "test-value");
    activity?.SetStatus(ActivityStatusCode.Ok);

    var logger = loggerFactory.CreateLogger("OtelTest");
    logger.LogInformation("OTEL test endpoint called - this should appear in collector logs");

    return Results.Ok(new { message = "OTEL test completed", activityId = activity?.Id });
});
Hit /otel-test and then check your collector logs. You should see the trace and log data flowing through.
The Business Case: Why CTOs Love This Approach
From a business perspective, the OTLP + Collector approach delivers several key advantages:
- Clean Architecture: Observability infrastructure is decoupled from application logic
- Vendor Flexibility: Switch or evaluate APM vendors without touching application code
- Cost Control: Easy to compare vendors by routing the same data to multiple platforms
- Operational Independence: Update observability configuration without application releases
- Compliance: Centralized control over where telemetry data gets sent and how it’s processed
We’ve seen teams reduce their observability operational overhead by 60-70% after migrating from vendor-specific exporters to OTLP + Collector. The configuration is simpler, deployments are more predictable, and switching costs approach zero.
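The cost-control point above is concrete: during a vendor evaluation you can mirror the same traffic to the incumbent and the candidate purely in collector config, with no application change. A sketch (the exporter names and environment variables are placeholders):

```yaml
exporters:
  otlp/incumbent:
    endpoint: ${env:INCUMBENT_ENDPOINT}
  otlp/candidate:
    endpoint: ${env:CANDIDATE_ENDPOINT}

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      # Both vendors receive identical data - compare dashboards side by side
      exporters: [otlp/incumbent, otlp/candidate]
```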
Getting Started: Your First OTLP Setup
Ready to try this approach? Here’s the minimal setup to get you started:
Step 1: Configure Your .NET Application
Add the OpenTelemetry packages:
<PackageReference Include="OpenTelemetry.Extensions.Hosting" />
<PackageReference Include="OpenTelemetry.Instrumentation.AspNetCore" />
<PackageReference Include="OpenTelemetry.Instrumentation.Http" />
<PackageReference Include="OpenTelemetry.Exporter.OpenTelemetryProtocol" />
Configure OpenTelemetry with OTLP:
builder.Services
    .AddOpenTelemetry()
    .UseOtlpExporter() // This is the key line
    .WithTracing(tracing => tracing
        .AddAspNetCoreInstrumentation()
        .AddHttpClientInstrumentation())
    .WithMetrics(metrics => metrics
        .AddAspNetCoreInstrumentation()
        .AddHttpClientInstrumentation());
Step 2: Set Environment Variables
export OTEL_SERVICE_NAME="your-app-name"
export OTEL_EXPORTER_OTLP_ENDPOINT="http://localhost:4317" # Default gRPC endpoint
# Optional: Only specify protocol if you need HTTP instead of gRPC
# export OTEL_EXPORTER_OTLP_PROTOCOL="http/protobuf" # Forces HTTP on port 4318
By default, the OTLP exporter uses gRPC (port 4317), which is what you’ll want in most cases. You only need to set OTEL_EXPORTER_OTLP_PROTOCOL if you specifically need HTTP (for example, behind certain proxies or firewalls that don’t support HTTP/2).
Step 3: Start a Local Collector
Create a simple collector configuration:
# otel-collector-local.yaml
receivers:
  otlp:
    protocols:
      # Bind to 0.0.0.0 so the dockerized collector accepts connections
      # from outside the container
      grpc:
        endpoint: 0.0.0.0:4317
      http:
        endpoint: 0.0.0.0:4318

exporters:
  debug:
    verbosity: detailed

service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [debug]
    metrics:
      receivers: [otlp]
      exporters: [debug]
    logs:
      receivers: [otlp]
      exporters: [debug]
Run the collector:
docker run -p 4318:4318 -p 4317:4317 \
  -v $(pwd)/otel-collector-local.yaml:/etc/otelcol-contrib/config.yaml:ro \
  otel/opentelemetry-collector-contrib:latest
Step 4: Test It Out
Start your .NET application and make some HTTP requests. You should see traces and metrics appearing in the collector logs.
Wrapping Up
After years of working with OpenTelemetry in production environments, from our early Phobos development to modern .NET applications, the OTLP + Collector approach has proven itself as the most maintainable, flexible, and production-ready way to implement observability.
It properly separates concerns - your application produces telemetry, and your infrastructure decides where it goes. This decoupling simplifies both application development and operations, reduces deployment complexity, and gives you the flexibility to evolve your monitoring strategy independently of your application releases.
The code examples in this post are all drawn from real production deployments. If you want to see more of the implementation details, check out the Memorizer repository on GitHub or watch the full video walkthrough.
Have questions about implementing OTLP in your .NET applications? Drop them in the comments below - I’d love to help you get started with this approach.
Want to learn more about building production-ready .NET applications with Akka.NET? Check out our Akka.NET Bootcamp or get in touch with our team at Petabridge for consulting and training services.
If you liked this post, you can share it with your followers or follow us on Twitter!