OpenTelemetry Collector: What It Is, When You Need It, and When You Don't
Posted 4 months ago · Active 3 months ago
Source: oneuptime.com · Tech story
Tone: calm/mixed · Debate · 60/100
Key topics: OpenTelemetry, Observability, Distributed Tracing
The article discusses the OpenTelemetry collector, its use cases, and when it's necessary, sparking a discussion among commenters about its benefits and drawbacks in various scenarios.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
First comment: 40m after posting
Peak period: 26 comments in 0-12h
Avg / period: 9.3 comments
Comment distribution: 37 data points (based on 37 loaded comments)
Key moments
- Story posted: Sep 18, 2025 at 1:29 PM EDT (4 months ago)
- First comment: Sep 18, 2025 at 2:09 PM EDT (40m after posting)
- Peak activity: 26 comments in 0-12h (hottest window of the conversation)
- Latest activity: Sep 23, 2025 at 7:09 AM EDT (3 months ago)
ID: 45292475 · Type: story · Last synced: 11/20/2025, 6:30:43 PM
- Consider decoupling your collector from whatever is consuming your traces with something like Kafka. Traces can be pretty heavy and it can be tricky to scale collectors. If something goes down, it's probably a good idea to keep writing the traces to a queue or topic.
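A minimal sketch of that decoupling, assuming the contrib distribution's `kafka` exporter (broker addresses and the topic name are placeholders); a second collector tier would read the same topic back via the `kafka` receiver:

```yaml
# Edge collector: receive OTLP, buffer spans into Kafka
receivers:
  otlp:
    protocols:
      grpc:
exporters:
  kafka:
    brokers: ["kafka-1:9092"]   # placeholder broker address
    topic: otlp_spans           # placeholder topic name
    encoding: otlp_proto
service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [kafka]
```

This way the edge collectors stay thin, and the consumers of the topic can be scaled or restarted independently of trace ingest.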
- https://www.otelbin.io is a nice little tool to help with collector configuration
My ideal setup would be to just write SQL on telemetry data and plot dashboards / set alerts.
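That kind of query is roughly what column stores backing several of these tools support. A hedged, ClickHouse-flavoured sketch, with a hypothetical `spans` table and column names that are purely illustrative:

```sql
-- Hypothetical schema: spans(timestamp, service_name, duration_ns, status_code)
SELECT
    toStartOfMinute(timestamp) AS minute,
    service_name,
    quantile(0.99)(duration_ns) / 1e6 AS p99_ms,
    countIf(status_code = 'ERROR') / count() AS error_rate
FROM spans
WHERE timestamp > now() - INTERVAL 1 HOUR
GROUP BY minute, service_name
ORDER BY minute
```

The result of a query like this is exactly the shape a dashboard panel or threshold alert wants.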
Also, thoughts on Vector vs otel agent?
HyperDX is just a lot better, sure a few papercuts but they got all the important stuff right imo.
Can you share which version of SigNoz you tried, or in what time frame? We recently made a lot of improvements in how you can host SigNoz, including support for Postgres and better docs for self-hosting correctly - https://signoz.io/docs/collection-agents/get-started/
It's been solid, but the UI is kind of clunky and a little buggy here and there. Dashboards are tricky to set up too. But it has no dependencies, it was easy to set up, and I couldn't find anything else that handled logs too.
PS: I am one of the maintainers
Also, I wasn't sure if Zookeeper was mandatory even for a single-server SigNoz install?
SigNoz UI certainly looks more polished tho!
ClickStack/HyperDX is a polished OOTB stack with an all-in-one image you can deploy to get started, so you don't need to worry about the ClickHouse side until you really need to scale (which is where ClickHouse shines).
The UI is predictably an annoying mess, but that's the case with EVERY tracing solution I've tried. Very much including SigNoz.
Another issue is the complexity of switching between filtered views. A very useful primitive that you and Uptrace are missing: "show this event within the surrounding context". CloudWatch has it.
The other main overarching issue is ease of navigation and switching between contexts. You are actually somewhat better than Uptrace because I can actually cut&paste URLs on most of the pages and send them to my colleague over Slack.
But you make up for that by having bad search in traces (e.g. I can't just search all the traces with the word "UploadDoc" somewhere in them). Here's how Uptrace works: https://imgur.com/a/UWSdIEt
Your "Trace View" is ridiculous: I can't resize columns, I can't drag them to change the order, I can't even _show_ additional columns even though I can sort by them: https://www.loom.com/share/d5fa401384d94959978c0bb2be9010a5?...
Then you also are freaking annoying with the UI. I don't even care about everything getting extra-bloated. It's just par for the course for the modern UI vibe-based design.
But I get almost physically sick from these ridiculous popups: https://www.loom.com/share/21f5efdae8b84b12ba09c45cd2fa0855?...
Honestly, I think that most observability stacks (very much including SigNoz) are focusing on looking hip with cool dashboards. They totally suck when I need to dig deep into logs to find what happened.
> My first "sniff test" for observability platforms is a tool to quickly jump to a given trace/span by ID.
You should be able to do this in SigNoz https://www.loom.com/share/71a2a95b76584b3983d9eeebb60ac420?...
> "show this event within the surrounding context"
we have this in the context logs. does this solve your use case or you mean something else? https://www.loom.com/share/9039afd5c4bf45e7b357a22c9943bb32?...
>But you make up for that by having bad search in traces
Did you mean for this to search across all attributes in spans, or when you know which attribute you want to search in? If the latter, then you can do this through our query builder even today.
Your feedback on "Trace View" is fair. We are planning some improvements on that
Don’t use Vector or otel-agent. Add a materialized view in ClickHouse to transform data and swap HyperDX to load from your view (in the UI).
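A rough sketch of that pattern, assuming the source table's column names follow the OTel ClickHouse exporter's defaults; the target table, view name, and exact columns are hypothetical:

```sql
-- Target table for the transformed rows (illustrative schema)
CREATE TABLE logs_transformed
(
    ts      DateTime64(9),
    service LowCardinality(String),
    level   LowCardinality(String),
    body    String
)
ENGINE = MergeTree
ORDER BY (service, ts);

-- Materialized view that rewrites incoming rows on insert
CREATE MATERIALIZED VIEW logs_transform_mv TO logs_transformed AS
SELECT
    Timestamp    AS ts,
    ServiceName  AS service,
    SeverityText AS level,
    Body         AS body
FROM otel_logs;
```

ClickHouse materialized views run at insert time, so the transformation happens inside the database rather than in a separate agent.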
This isn't a lot to go on.
The important thing is what you're trying to instrument - hosts, applications, network, microservices, all of the above? (And then whether you want a few weeks retention, or keeping years worth.)
Grafana in front of Prometheus with node-exporter or telegraf (it can expose in prometheus mode) on the clients -- will tick a lot of boxes and is fast to get going.
Grafana in front of InfluxDB + telegraf is similar, but personally I find PromQL easier than InfluxQL.
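For the Prometheus variant, the scrape side is just a few lines of `prometheus.yml` (hostnames here are placeholders, and node-exporter's default port is 9100):

```yaml
# Minimal prometheus.yml scraping node-exporter on two hosts
global:
  scrape_interval: 15s
scrape_configs:
  - job_name: node
    static_configs:
      - targets: ["host1:9100", "host2:9100"]
```

Point Grafana at Prometheus as a data source and you have host dashboards within minutes.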
> ... write SQL on telemetry data and plot dashboards / set alerts.
Read up about the design of TSDBs and log / tracing datastores - their design & intent heavily influences their query languages.
IMO, with the current tech, it entirely depends on what data you're talking about.
For metrics and traces, I would use the OTel collector personally. You will have much more flexibility and it's pretty easy to write custom processors in Go. Support for traces is quite mature and metrics isn't far off. We've been running collectors for production scale of metric and trace ingest for the past couple of years, on the order of 1m events/sec (metric datapoints or spans). You mentioned low volume so that's less important, but I just wanted to mention in case others find this comment.
Logs are a bit different. We looked in to this in the past year. Vector has emerging support for OTLP but it's pretty early. Still, I bet it's pretty straightforward if your backend can ingest via OTLP. Our main concern with running the otel-collector as the log ingest agent was around throughput/performance. Vector is battle-tested, otel is still a bit early in this space. I imagine over time the gap will be closed but I would probably still reach for Vector for this use-case for higher scale. That said, YMMV and as with any technical decision, empirical data and benchmarking on your workloads will be the best way to determine the tradeoffs.
For your scale you could probably get away with an OTel collector daemonset and maybe a deployment with the Target Allocator (to allocate Prometheus scrapes) and call it a day :)
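A hedged sketch of the Target Allocator piece using the OpenTelemetry Operator CRD (the operator typically pairs the allocator with statefulset mode; names, the backend endpoint, and API version are assumptions):

```yaml
# OpenTelemetry Operator resource with the Target Allocator enabled
apiVersion: opentelemetry.io/v1beta1
kind: OpenTelemetryCollector
metadata:
  name: scrape-collector          # placeholder name
spec:
  mode: statefulset
  targetAllocator:
    enabled: true                  # shards Prometheus scrape targets across replicas
  config:
    receivers:
      prometheus:
        config:
          scrape_configs: []       # targets are filled in by the allocator
    exporters:
      otlp:
        endpoint: backend:4317     # placeholder backend
    service:
      pipelines:
        metrics:
          receivers: [prometheus]
          exporters: [otlp]
```

The daemonset collector then handles per-node telemetry while this one owns the cluster-wide scrapes.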
If it gave more fine-grained control over write-only access, I would probably just write directly and let it handle the load.
We agree that fine-grained access control is important. A read-only user role will be available in the next major release.
Or maybe I'll contribute this piece myself when I have time :)
p.s.: btw, I love Greptime so far, thank you for the product!
Having stats is nice, but I am not choosing your product because of stats. I actually think GreptimeDB is exactly what I am looking for, i.e. a Humio / Falcon LogScale alternative. But I had to do some digging to actually infer that.
Your material doesn't highlight what sets you apart from the competition - at least if you want to target developers, which you might not. I don't know.
I want to debug issues using freetext search, and I want to be able to aggregate the stats I care about on demand.
It's a shame the docs on it are still quite bad. The example config in the article here looks almost identical to the one we use everywhere, just without the redact, and should probably be pasted somewhere into the official docs.
Every provider seems to produce their own soft fork of the collector for branding (e.g. Alloy, ADOT, etc.) and slightly changes the configuration, which doesn't help.
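For reference, the redaction step mentioned above is a processor in the contrib distribution; a hedged sketch (the allowed keys and the regex are illustrative, not a recommended policy):

```yaml
# contrib "redaction" processor: drop unknown attribute keys, mask matching values
processors:
  redaction:
    allow_all_keys: false
    allowed_keys:
      - http.method
      - http.route
      - http.status_code
    blocked_values:
      - "[0-9]{3}-[0-9]{2}-[0-9]{4}"   # example pattern (SSN-like values)
```

Wire it into a pipeline's `processors` list and attributes outside the allowlist never leave the collector.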
I've dabbled in building a project that collects metrics from the logs for smaller projects. Everyone tells me it's a bad idea, but it seems to work well for me.
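In the collector world the closest built-in equivalent is the contrib `count` connector, which bridges a logs pipeline into a metrics pipeline; a minimal sketch (receiver/exporter choices are placeholders):

```yaml
# Derive a log-record count metric from a logs pipeline via the "count" connector
receivers:
  otlp:
    protocols:
      grpc:
exporters:
  prometheus:
    endpoint: "0.0.0.0:8889"
connectors:
  count:
service:
  pipelines:
    logs:
      receivers: [otlp]
      exporters: [count]     # connector acts as the logs pipeline's exporter...
    metrics:
      receivers: [count]     # ...and as the metrics pipeline's receiver
      exporters: [prometheus]
```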
Eventually it'll have successors that are better in some way, more efficient, or whatever, but right now there are no alternatives at all. OpenTelemetry is the first common standard that multiple vendors have signed up to.
And while all the tracing providers speak the OTel protocol, the way you do auth is not the same. Sometimes you need to specify it in a header, sometimes it's part of the URL.
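Both shapes live in the exporter config; a hedged example with made-up vendor endpoints (the header name and URL layout vary by vendor, and `${env:...}` is the collector's environment-variable substitution):

```yaml
exporters:
  # Vendor A: token goes in a request header
  otlphttp/vendor-a:
    endpoint: https://ingest.vendor-a.example
    headers:
      authorization: "Bearer ${env:VENDOR_A_TOKEN}"
  # Vendor B: token is embedded in the endpoint URL instead
  otlphttp/vendor-b:
    endpoint: "https://ingest.vendor-b.example/${env:VENDOR_B_TOKEN}"
```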