Neywiny's comments

I guess modern compilers (meaning anything Arduino era and up, at least from when I first got into them in the mid 2010s) abstract that away, because while it's true that it's doing that under the hood, we at least don't have to worry about it.

This is similar to nighthawkinlight's videos on phase change materials. It was very cool to see how his Ziploc bags of homemade goo helped regulate temperature.

In this work the authors use a ceramic-coated extruded aluminum heat spreader to improve thermal conductivity through the bulk PCM, but I wonder if the graphite flake+powder additive demonstrated recently by Tech Ingredients[1] would be a viable alternative? It might need a stabilizer (thickener) to prevent the ingredients from separating.

[1] https://www.youtube.com/watch?v=s-41UF02vrU


Is geothermal not the opposite of that? My understanding was that the geothermal MO is that there's virtually infinite thermal mass in the earth, so it won't heat/cool, not that you heat/cool your local chunk.

To a certain extent, yes. The reason the water is there is that the thermal flux of the ground is low, so the large mass of water provides a strong buffer. But you can’t cheat physics. You would need a crap ton of salt hydrate to accommodate a whole season of heat needs, even if you don’t factor in thermal loss from the container.
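Rough numbers, just to put "crap ton" in perspective (the latent heat and seasonal heating demand below are round-number assumptions, not figures from the article):

    # Back-of-envelope: mass of salt hydrate needed to buffer a heating season.
    # Assumed numbers: ~250 kJ/kg latent heat (typical salt hydrate ballpark)
    # and ~10,000 kWh of seasonal heating demand for a house.
    latent_heat_j_per_kg = 250e3
    seasonal_heat_j = 10_000 * 3.6e6          # kWh -> J

    mass_kg = seasonal_heat_j / latent_heat_j_per_kg
    print(f"~{mass_kg / 1000:.0f} tonnes of salt hydrate")   # ~144 tonnes

So even before container losses you're well over a hundred tonnes, which matches the "crap ton" intuition.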

So just to confirm, the actual cause for the controls not working is still unknown to the reader but the reason the measurements didn't make sense was swapped labels?

The controls weren't working because we had wired them up according to the labels, which were wrong (which is also why the measurements didn't make sense to us).

Ah. A lesson from somebody who's built hardware that I'm sure you've now learned: make sure connectors can't plug into each other unless they're supposed to. Even if they're different connectors, different keying, whatever, sometimes they can still be forced together.

#2 is crucial, and it's something I've learned has to be well defined. Just this past week I heard a task was done. I checked it. Only half was done. They didn't check the issue ticket before asserting completion. Not a big deal, but it does mean I should walk through that when we start tickets next time.

I like this. BitBake has a steep learning curve and is nowhere near as simple as Buildroot. But I maintain that if you can get over the first few slopes, the payoff is worth it.

However, I don't like new files as patches. I really prefer to have my device tree be a dts file that I bring in instead of bundled into a patch. Maybe I'm not following the guidelines, but I think it's nicer to be able to search for dts things in .dts files and I get nice syntax highlighting and whatnot.

I also like their stance that you only need one layer. I've had people push for a layer per machine. Not needed as shown here and most other places.


> However, I don't like new files as patches. I really prefer to have my device tree be a dts file that I bring in instead of bundled into a patch. Maybe I'm not following the guidelines, but I think it's nicer to be able to search for dts things in .dts files and I get nice syntax highlighting and whatnot.

This is what I do on custom boards. It's better to "look" at files and link to others when they're files and not patches.


Would there be any advantage in using Yocto if you only ever have one target (x86 in my case)? Been happily using Buildroot but wondering just how much greener the grass is on the other side.

The advantages aren't strictly about how many architectures you have. There's more facility to put things in the layer as proper steps instead of hacky surrounding scripts, and I've never had it get confused about what needs to be rebuilt.

So the way I see it, fundamentally there was an issue with receiving signals on the spacecraft, and that's what caused the problems. I'd really like to know more about that. They mention Doppler shift, but that's bidirectional, so even without the spacecraft knowing how fast it's going, they should be able to account for it based on the received signal. Common issues could be reduced receive sensitivity, interference, oscillator drift or instability, or plenty of other things, but there's no mention of even one that I've been able to find.

Didn’t they say the initial issue was some compatibility issue with the base station they were working with? Although throughout the article it sounds like they had a ton of software problems and maybe the spacecraft wasn’t quite as baked as they thought at the time of launch.

Yeah, but that's kind of a meaningless description. Unless they literally had zero planning on how they'd talk to the base station, presumably they tested this to some extent or had some assumption it would work that was misguided for a specific reason. As is, this could vary from "we pointed an AM radio station transmitter at it but it turns out it only listens to 20 GHz upconverted WiFi" to "we needed to adjust the transmit frequency by 1 ppm due to vibration during launch shifting our oscillator." One is moronic, the other is a plausible oversight.
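Just for scale (the carrier frequency and velocity here are my own round-number assumptions, not anything from the article), LEO Doppler is tens of ppm, so a 1 ppm oscillator error is small but not negligible next to it:

    # Rough scale of LEO Doppler shift vs. a 1 ppm oscillator error.
    # Assumed numbers: ~2.2 GHz S-band carrier, ~7.5 km/s worst-case radial velocity.
    c = 3e8                          # speed of light, m/s
    f_carrier_hz = 2.2e9
    v_radial_ms = 7_500

    doppler_hz = f_carrier_hz * v_radial_ms / c   # ~55 kHz, i.e. ~25 ppm
    ppm_offset_hz = f_carrier_hz * 1e-6           # ~2.2 kHz for a 1 ppm error
    print(f"Doppler ~{doppler_hz/1e3:.0f} kHz, 1 ppm ~{ppm_offset_hz/1e3:.1f} kHz")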

I'm sure I don't need to say it, but what got declassified and the work you did are very, very different things. Pretty much everything in the notice is included in this article, so anything you're not reading here... best to keep to yourself.

Not to worry. Unlike Trump, I didn't remove classified info from the SCIF and store it in my bathroom or share it with Russian dinner guests. I take my oaths seriously.

"I have a lifetime obligation to not talk about this," so I'm gonna post about it on the Internet.

Maybe I'm not understanding the problem. I just saw mention of Obsidian being the only paid app a user had on their laptop. I'd never heard of it. For me, I keep a notes.txt either local to the project folder (not the repo), or named similarly. To find something I grep through them all. It's not perfect but it's very easy. If it becomes collaborative, I push it as a README or a wiki page. I don't feel a need for anything more. Maybe a slightly better search, but that's it.
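For what it's worth, the whole "search layer" is roughly the Python equivalent of this (the root path and keyword are placeholders; in reality I just use grep):

    # Walk project folders and grep the notes files for a keyword.
    import pathlib, re

    root = pathlib.Path.home() / "projects"                   # placeholder root
    pattern = re.compile("power sequencing", re.IGNORECASE)   # placeholder query

    for path in root.rglob("notes*.txt"):
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            if pattern.search(line):
                print(f"{path}:{lineno}: {line.strip()}")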

That makes sense, and your setup is honestly the “good enough” baseline for a lot of people: notes.txt near the work, grep when needed, and promote to README/wiki when it becomes shared.

Also to clarify: I’m not focused on Obsidian specifically. “Notes” here includes anything you stash for later—notes.txt files, links, emails, chat snippets, tickets, bookmarks, random scratchpad windows. The thing I’m exploring is whether there’s demand for making that scattered reference material easier to resurface when it matters, without forcing a heavier system.

If all you want is slightly better search, what would “better” mean for you?

1. fuzzy/semantic search (find it without the exact keyword)

2. ranking by project/context (show what’s relevant to the folder you’re in)

3. cross-format search (txt + markdown + links + email/chat)

4. fast local-only indexing with zero setup

Details in my HN profile/bio if you’re curious what I’m validating, but your “grep + promote when needed” workflow is exactly the kind of counterexample I’m trying to understand.


I think options 1 and 4. I like the idea of 4. I was trying out one of these projects that indexes a codebase with AI to make asking questions about it easier. I ran the numbers and it was going to take 24 hours of crunching on my 7900 XTX. I just gave up instead. Zero setup should include not needing to do that.

3 would be the hardest but most useful thing. The problem is that it's scattered around different computers and networks that don't talk to each other. We could have a file in SharePoint on one system referencing a file on an SMB share on a completely different network. It's a big pain and very difficult to work with, but it's not something I expect software running on my computer with access to a subset of the information to be able to solve.


That’s a really important definition of “zero setup”: no long-running indexing jobs, and no “crunch for 24 hours on my laptop” just to make search usable.

And I hear you on cross-network fragmentation — in a lot of real environments the hardest part isn’t search quality, it’s that data lives on different machines, different networks, and you only have partial visibility at any given time.

If you had to pick, would you rather have:

1. instant local indexing over whatever is reachable right now (even if incomplete), or

2. a lightweight distributed approach that can index in-place on each machine/network and only share metadata/results across boundaries?

I’m exploring this “latency + partial visibility” constraint as a first-class requirement (more context in my HN profile/bio if you want to compare notes).


To be very clear, there are networks that exist that do not share anything across the boundary. I'm maybe not your prime customer, but some people get very hung up on such things and we go in circles about feasibility. So given that 2 is an impossibility at times, I'd prefer 1.

Indexing everything becomes unbounded fast. Shrink scope to one source of truth and a small curated corpus. Capture notes in one repeatable format, tag by task, and prune on a fixed cadence. That keeps retrieval predictable and keeps the model inside constraints.
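A tiny sketch of what "repeatable format + prune on a cadence" can look like (the YYYY-MM-DD-task.md naming, the "tags: keep" convention, and the 90-day cutoff are all just assumptions for illustration):

    # Archive untagged notes older than a cutoff; run on a fixed cadence (e.g. cron).
    # Assumes notes named YYYY-MM-DD-task.md with an optional "tags: keep" line.
    import pathlib, datetime, shutil

    notes_dir = pathlib.Path("notes")
    archive_dir = notes_dir / "archive"
    archive_dir.mkdir(parents=True, exist_ok=True)
    cutoff = datetime.date.today() - datetime.timedelta(days=90)

    for note in notes_dir.glob("*-*-*-*.md"):
        date = datetime.date.fromisoformat("-".join(note.stem.split("-")[:3]))
        if date < cutoff and "tags: keep" not in note.read_text():
            shutil.move(str(note), archive_dir / note.name)   # out of the active corpus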

That’s another strong point, and I think it’s the pragmatic default: shrink scope, keep one source of truth, enforce a repeatable format, and prune on a cadence. It’s basically how you keep both retrieval and any automation predictable.

The tension I’m trying to understand is that in a lot of real setups the “corpus” isn’t voluntarily curated — it’s fragmented across machines/networks/tools, and the opportunity cost of “move everything into one place” is exactly why people fall back to grep and ad-hoc search.

Do you think the right answer is always “accept the constraint and curate harder”, or is there a middle ground where you can keep sources where they are but still get reliable re-entry (even if it’s incomplete/partial)?

I’m collecting constraints like this as the core design input (more context in my HN profile/bio if you want to compare notes).


I guess I'm not sure I understand the solution. I use a low value (idk, 15 minutes maybe?) because I don't have a static IP and I don't want that to cause issues. It's just me to my home server, so I'm not adding noticeable traffic like a real company or something, but what am I supposed to do? Is there a way for me to send an update such that all online caches get updated without needing to wait for them to time out?

For a private server with not many users this is mostly irrelevant. Use low ttl if you want to, since you're putting basically 0 load on the DNS system.

> such that all online caches get updated

There's no such thing. Apart from millions of dedicated caching servers, each end device will have its own cache. You can't invalidate DNS entries at that scope.
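If you're curious what a given cache is actually holding, you can watch the remaining TTL count down between queries (needs dnspython; the hostname and resolver IP below are placeholders):

    # Ask a specific resolver and print the remaining TTL it reports for the record.
    import dns.resolver

    res = dns.resolver.Resolver()
    res.nameservers = ["1.1.1.1"]                 # whichever cache you want to inspect

    answer = res.resolve("home.example.com", "A")
    print(answer.rrset.ttl, [r.address for r in answer])

Run it twice a minute apart against the same resolver and the TTL should drop by roughly 60, which is the only "update" mechanism you get: waiting for expiry.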

