Hacker News | annanay's comments


sweet, will use this the next time :)


Yeah, I just used off-the-shelf base material and didn't have the right equipment to make rounded corners, hence the current diamond shape :(


Ah yeah, I should have mentioned that in the post somewhere, but reflective would definitely be a nice next step. I have seen TheSignGuy use reflective material in his videos, I'll have to find the equivalent on the Cricut store.


Do you have sample images of what this would look like? I imagine the texture and finish won't be as smooth as vinyl but curious nevertheless.


Grafana Tempo also switched from a Protobuf storage format to Apache Parquet last year. It's fully open source, and the proposal (from April 2022) is here: https://github.com/grafana/tempo/blob/main/docs/design-propo...

The relevant code for parquet storage backend can be found here: https://github.com/grafana/tempo/tree/main/tempodb/encoding
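To illustrate why a columnar format like Parquet helps trace search, here's a minimal stdlib-only Python sketch (not Tempo's actual code, and the span attributes are made up): a search on one attribute only has to scan that attribute's column, instead of deserializing every full span record.

```python
# Sketch of row-oriented (Protobuf-like) vs. column-oriented
# (Parquet-like) span storage. Illustrative only.

spans_rows = [  # row-oriented: one full record per span
    {"trace_id": "a1", "name": "GET /cart", "duration_ms": 12},
    {"trace_id": "b2", "name": "GET /checkout", "duration_ms": 950},
    {"trace_id": "c3", "name": "GET /cart", "duration_ms": 7},
]

# Column-oriented: one array per attribute.
spans_cols = {key: [s[key] for s in spans_rows] for key in spans_rows[0]}

# A search on duration reads only the duration column...
slow = [i for i, d in enumerate(spans_cols["duration_ms"]) if d > 500]
# ...and trace IDs are looked up only for the matching rows.
matches = [spans_cols["trace_id"][i] for i in slow]
print(matches)  # -> ['b2']
```

On disk, the columnar layout also compresses much better, since each column holds values of one type.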

disclosure: I work for Grafana!


Cool thanks for sharing. Can you say something about how it's worked out? Has it reduced bandwidth or CPU usage?


The Parquet backend helped unlock trace search for large clusters (>400MB/s data ingestion) and over longer periods of time (>24h). It also helped unlock TraceQL (a query language for traces similar to PromQL/LogQL). There are more details in this blog post: https://grafana.com/blog/2023/02/01/new-in-grafana-tempo-2.0...

I don't have the exact CPU/bandwidth numbers on me right now, but CPU usage went up by roughly 50% on our "Ingester" and "Compactor" components (you can read up about the architecture here - https://grafana.com/docs/tempo/latest/operations/architectur...). But this trades write-path CPU for read performance, which improved significantly.


This is really interesting, thanks for sharing. What's also cool is the low effort needed for this setup (Java autoinstrumentation + Clickhouse exporter + Grafana Clickhouse Plugin).


Since Tempo is a k/v store that can retrieve traces given a traceID, we need either a metrics system that can store traceIDs in exemplars, OR a logging framework that logs traceIDs which can be copied over to the Tempo query UI.
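As a sketch of the logging route: any logger that emits the trace ID alongside the message is enough. This Python example uses only the stdlib; the field name `trace_id` and the hardcoded ID are placeholders (in practice the ID would come from your tracing context):

```python
import io
import logging

# Attach a handler whose format includes the trace ID, so it can be
# copied from the log line into Tempo's query UI.
buf = io.StringIO()
handler = logging.StreamHandler(buf)
handler.setFormatter(logging.Formatter("trace_id=%(trace_id)s %(message)s"))

log = logging.getLogger("app")
log.addHandler(handler)

# The trace_id here is a placeholder value.
log.error("checkout failed", extra={"trace_id": "0af7651916cd43dd8448eb211c80319c"})

print(buf.getvalue().strip())
# -> trace_id=0af7651916cd43dd8448eb211c80319c checkout failed
```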


The semi-structured nature of logs works to the advantage of Tempo, because as developers we have the flexibility to log _anything_: high-cardinality values like cust-id, request latency, gobble-de-gook .. the equivalent of span tags. Instead of indexing these as tags, we get advanced search features through a powerful query language landing in Loki (LogQLv2).
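For example, a LogQL query over such unindexed, high-cardinality fields might look like this (the label and field names are illustrative):

```
{app="checkout"} | logfmt | cust_id="1234" | latency > 200ms
```

Only the `app` label is indexed; `cust_id` and `latency` are parsed out of the log line at query time, which is what sidesteps the cardinality limits of indexing them as tags.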


But the data starts out structured. It becomes semi-structured when you log it.

I'm telling you, from first hand experience, this does not end well.

There's no reason that your tracing system should not be indexing your tags in an engine that provides advanced search features through a powerful query language.


I agree, if anything the eventual goal should be to invert it. In applications I work on right now, trace tags contain the richest and best-described request metadata. Tags are indexed differently depending on their cardinality, and there is no cardinality limit.

Tempo's implementation seems pragmatic as a short to medium term solution though. Log engines still have a lot more investment and maturity than trace engines. In my work, even though the trace tags contain the best data quality, the tracing system is currently worse at answering a good deal of my questions. It's simply that Splunk has many tools that work well, and the tracing system is behind.


But Jaeger, as an example, will let you choose what back-end engine you want to store your traces in. There is no need to reinvent the wheel just for tracing. You can just leverage what is already out there.

