sally_glance's comments | Hacker News

To me the phrasing seems objective. Making your binaries available to the public is good (though source would be better).

Replace [firmware] with [random popular GitHub repo] and nobody would blink. Replace [firmware] with [customer email address] and it would be a legal case. Differentiating here is important.


I think it fails to be objective because of the repetition. It's an open S3 bucket; there's no need to also state that no authentication was required, since "open" already implies that. This isn't about economy of writing: the repetition emphasizes the point, elevating its perceived significance and signaling what the author wants the reader to take away.

Furthermore, the repeated use of "every" when discussing the breadth of access seems like it would easily fall into the "absolutes are absolutely wrong" way of thinking. Without some careful auditing, it reads like another narrative flourish, marveling at this treasure trove (candy store) of firmware images left without adequate protection. But most here seem to agree that such protection is without merit, so why does it warrant this emphasis? I'm left with the thought that the author considered it significant.


If someone DDoSes an open S3 bucket, the owner will get a huge bill. If there is something in front of it, they might not.

An 'open S3 bucket' sounds really bad. If it were posted on an HTTPS site without authentication, like the firmware for most devices, it wouldn't sound so bad.

Sure an open bucket is bad, if it's stuff you weren't planning on sharing with the whole world anyway.


Since firmware is supposed to be accessible to users worldwide, making it easier to get it is good.

But how is an open, read-only S3 bucket worse than a read-only HTTPS site hosting exactly the same data?

The only difference I can see is that an S3 bucket is much easier to make writable by accident (with an HTTPS web site or API, that takes quite some implementation effort).


No, wait, I agree with you. I think "open S3 bucket" is bad framing when people would totally understand an open website :)

> An 'open S3 bucket' sounds really bad.

Only to gullible, clueless types.

Full-blown production SPAs are served straight from public-access S3 buckets. The only hard requirement is that the S3 bucket enforces read-only access over HTTPS. That's it.

Let's flip it the other way around and make it a thought experiment: what requirement do you think you're fulfilling by enforcing any sort of access restriction?

When you feel compelled to shit on a design trait, the very least you should do is spend a couple of minutes thinking about what problem it solves and what the constraints are.
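For the curious, "public read-only over HTTPS" is just a bucket policy. A minimal sketch (bucket name hypothetical, built as a Python dict for illustration): anonymous GetObject is allowed, and any request not made over TLS is denied.

```python
import json

# Hypothetical bucket name, purely for illustration.
BUCKET = "example-firmware-bucket"

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {  # anyone may read objects, nothing else
            "Sid": "PublicRead",
            "Effect": "Allow",
            "Principal": "*",
            "Action": ["s3:GetObject"],
            "Resource": f"arn:aws:s3:::{BUCKET}/*",
        },
        {  # refuse any plain-HTTP request
            "Sid": "HttpsOnly",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [
                f"arn:aws:s3:::{BUCKET}",
                f"arn:aws:s3:::{BUCKET}/*",
            ],
            "Condition": {"Bool": {"aws:SecureTransport": "false"}},
        },
    ],
}

print(json.dumps(policy, indent=2))
```

Note there is no Allow for s3:PutObject or s3:ListBucket anywhere, which is the whole point: "open" here means readable, not writable.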


No, I agree with you. I think "open S3 bucket" is bad framing when people would totally understand an open website :)

I'm not shitting on anything except the wording in the article.

I guess I didn't word it clearly.

In our company we don't really serve directly from open buckets but through CloudFront, though this is more because we are afraid of buckets marked open by mistake, so they are generally not allowed. But I agree there's nothing bad about it per se. I just meant it sounds much worse (at least to someone in cybersec like me), and I don't like that effect being used as such in the article.


No, it clearly has a gloating tone to it. 'A reverse engineer's candy store' is clearly meant as a slur.

When in fact TP-Link is doing the right thing by keeping older versions available. So this risks some higher-up there thinking "fuck it, we can't win, might as well close it all off".


I just meant that it was very convenient to have the firmware images there on S3, nothing else :D Many vendors make the process of even just obtaining a copy of the firmware much harder than that, so for once I was glad it has been much easier. Also being able to bindiff two adjacent versions of the same firmware is great ... all in all I was just expressing my happiness :D

Having observed an average of two management rotations at most of the clients our company works for, this comes as absolutely no surprise to me. Engineering acts perfectly reasonably, optimizing for cost and time within the constraints they were given. Then the constraints are updated on a (marketing or investor-pleasing) whim without consulting engineering; cue disaster. It doesn't even surprise me anymore...


I enjoy following academic discourse; review and collaboration give me the feeling that actual progress is being made.

So I love that you linked the rebuttal paper. In the last paragraph the authors mention that some ideas could lead to "fruitful analytic or empirical starting points" - did anyone follow up on these? From your perspective, what are the most interesting directions in this area of research today?


I honestly have no idea; I left academia 12 years ago now. I do know that game research continued (e.g. the conference I published that paper in continues: http://fdg2025.org/ and the workshop I started at ICSE continues on as well: https://sites.google.com/view/icsegasworkshop2025/home), but I'm not aware of anyone working in the patterns work right now.

My read was that what Deterding was getting at in his rebuttal was that my paper was getting really popular to cite (now over 500 citations) when really it was some Stuff Made Up By Some Guys. And it was! We all had backgrounds in pattern research, but even things like the Gang of Four are just Stuff Made Up By Some Guys. He reviewed the book I spun off from my thesis, which contained the patterns, so he was intimately aware of it all. We were friendly, if not capital-F friends, and I was interested in what he wrote for my academic career. He's a smart guy.

My co-authors and I never intended for the paper to be a be-all-and-end-all in 2013. Much of the non-AI research work in games at that time was "well, what if we poked at this avenue of research? what if we poked at that avenue?" And we did that by coming up with papers that were supposed to trigger conversation. It was not a good idea to go down a research avenue for 5 years only to find out no one cared, or that someone had an idea that would have changed the direction dramatically had you just gotten something out there in year 1. So we thought hard about what we wrote, but we didn't do the legwork tying it back to behavioral economics or something like that (my thesis attempted that, to varying degrees of success).

I gave up some time ago trying to track where all the citations were coming from, but it did seem it was being cited because other people cited it. It wasn't really related to many of the papers, and certainly I didn't see anything directly building from it. And that's really what the rebuttal was saying: stop citing this paper unless you're building on it and making its foundations more rigorous. It doesn't have the strong analytic/empirical basis that science is about. Which is 100% true, but was also 100% known, and somewhat by design.


Thanks for the insights! A bit disappointing that this avenue didn't turn out to be the one worth pursuing at the time, although I don't think the ball was completely dropped. Some light prodding surfaces recent research into dark patterns with empirical data based on player perception [1] and attempts to create frameworks for categorization [2].

[1] https://www.researchgate.net/publication/390642492_Dark_Patt...

[2] https://www.researchgate.net/publication/396437975_All_'Dark...


Theoretically you're supposed to assign lower priority to issues with known workarounds, but then there should also be reporting for product management (which weights issues by age of first occurrence and total count of similar occurrences).

Amazon is mature enough for processes to reflect this, so my guess for why something like this could slip through is either too many new feature requests or many more critical issues to resolve.
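A toy sketch of what such a weighting might look like (function name and coefficients entirely made up, just to make the idea concrete): older and more frequent issues weigh more, and a known workaround discounts the score rather than burying it.

```python
from datetime import date

def issue_weight(first_seen, occurrences, has_workaround, today=None):
    """Toy priority score for product-management reporting.

    Older issues and issues with more similar occurrences score higher;
    a known workaround halves the weight instead of zeroing it out, so
    long-standing annoyances still surface. All numbers are illustrative.
    """
    today = today or date.today()
    age_days = (today - first_seen).days
    weight = age_days + 10 * occurrences
    return weight // 2 if has_workaround else weight
```

The point of the halving (rather than dropping to the bottom of the queue) is exactly the failure mode above: a workaround-only fix that lingers for years still accumulates enough weight to get looked at eventually.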


Interesting, I had a similar setup with smart bulbs, dumb switches and HA. My experience was that when the bulbs lose connectivity (Zigbee or Wifi in my case) you could maybe still turn them on but they would start flashing like crazy or use different colors (as indicators for their "reconnecting" state). Also Zigbee doesn't really love losing mesh nodes periodically, so turning the bulb completely off using the switch would cause the whole network to fall into broken states that had to be manually fixed from time to time.


None of the bulbs I've had (which have been a pretty wide mixture: Proprietary clown, proprietary local wifi, matter wifi, esphome wifi, zigbee) have that problem.

I just turn them off and back on one time using the switch, and the light bulb's state goes to some variation of "on" within no more than a second or two (maybe not an ideal variation of "on", but good enough to get through a dark hallway). Turn back off with the switch, and it's obviously off. On the next "on" cycle of the switch, it goes back to "on".

And while it is freshly "on", it's trying to reconnect to whatever its programmed mothership is (whether local or afar). This works every time, so far in my experience, as long as that mothership is reachable.

The only time blinky-mode has been imparted is when I've reset things, which takes rapid iterations of off-on cycling of the light switch. (I test this all the time with the Zigbee bulb in my pantry because the light switch in there sure is convenient. It works fine, even if it has been completely off for hours or days. I just tested it again after pulling the USB Zigbee dongle from HA, and the pantry light still worked fine with the switch on the wall.)

I've moved these bulbs and other widgets between houses. No issues (other than renaming things after a move). It's really been OK.

Additional background: For Zigbee in particular, I'm doing that in what is probably the least-preferred, least-effort method: I've got a cheap Chinese CC2531 dev kit that is flashed with different firmware (because that was the cheapest approach ~5 years ago), and I'm using it with ZHA in HA (because that's the easiest approach). All of my Zigbee devices have been buttons or light bulbs, all of those bulbs have been from Sengled, and none of any of them support Zigbee router mode at all. There is no Zigbee "mesh" here to speak of at all, so there's no weird interconnections to break: Endpoints talk directly with the CC2531 and that's that.

Other than some range issues (which were broadly resolved by using an old-school non-3.0 USB extension that I found on Amazon in iMac-esque coloration for a dollar), Zigbee has really been OK for me.

---

But I've been migrating to wifi, anyway. My favorite light bulbs, from Athom, actually come to me with open-source ESPHome already installed...but Matter-wifi light bulbs are often a bit less expensive than those are. (Tradeoffs.)

This migration started on the basis that my old Zigbee bulbs are -- well -- old. They simply don't produce the same quality CRI that even very cheap dumb department store LEDs do these days.

Besides, I've also already built a quite lovely wifi network for my home, wherein I do not care at all about the performance of the 2.4GHz radios, so they may as well focus their energy on a sea of IoT devices.

I like the idea of having only one set of wireless networking gear to futz with and optimize instead of having multiples of them. (But I'll probably goof around with Matter-Thread, too, if/when that makes sense to me. I'm by no means done tinkering or learning new things.)


Thanks for the response! Refreshing to hear that it can actually work. I think the main difference between your setup and mine might be that I actually needed the mesh, because I had bulbs behind a couple of steel-reinforced concrete walls. I installed always-on Zigbee outlets thinking that the bulbs would route over those, but never actually got around to debugging why they didn't.

Currently I'm also mostly on Tasmota-powered Athom bulbs. They work well, but after not powering them on for longer timeframes (presumably after their internal battery or whatever runs out) they forget my wifi and switch to setup mode.

After these experiences I'll probably go with dumb bulbs and smart switches/relays for our new apartment. Still keeping an eye on the market and open for recommendations though, mainly because I like being able to control light color through HA.


I think you've nailed the key difference for zigbee, indeed. And I'd love to share some first-hand insight about how Zigbee works with either intermediate repeaters or routers scattered around, but I just don't have any to share.

You did remind me of a thing, though: My Athom bulbs, with ESPHome, do have an annoying mode they drop into when their Home Assistant mothership is unavailable. They still work mostly like dumb light bulbs in this state, but they do a periodic blinky-thing (with a cadence in minutes, not seconds) that is annoying until the HA rig comes back.

But since they're running a copy of ESPHome that I compiled locally, that's almost certainly an ESPHome function that I can hack out/turn off/modify/whatever.

I don't have any direct experience with Tasmota. I remember looking into it with some giddiness several years ago (just because hacking on home electronics does that to me), but by the time it came to start actually buying hardware I decided to go in a different direction.

But I don't recall the Athom bulbs, with ESPHome, ever dropping out and not coming back. Even after the last move where some of them were in a box for weeks: If there was any difficulty, it wasn't something that took a lot of steps to resolve. I think I'd remember if it were challenging in some way.

So I'm led to wonder what mechanism it is that makes your stuff go goofy with Tasmota.

Inside of these things is just a small power supply, an ESP, some MOSFETs, and some LEDs. On-device configuration data is stored in flash right alongside the firmware itself. There's no battery, nor any real-time clock (if the time is useful, it is set over the network).

Athom does publish steps for switching [some of] their hardware back and forth between Tasmota and ESPHome, if that's ever useful to you: https://github.com/athom-tech/athom-configs

---

More broadly, having smart switches and/or dimmers with dumb bulbs does sound appealing. I've got all of the lights in my garage on one smart switch, for instance, and it works well for that environment.

Smart switches would also Grandpa-proof the installation: If a dumb bulb goes out and Grandpa is watching the place, he can just swap it out and things would work fine. (Knowing my own old man, he'd probably use a dusty incandescent bulb that he's had in the glovebox of the car since he stopped to pick it up along the side of an unpaved road somewhere outside of Lincoln Nebraska in 1973...but it'll still work fine.)

But smart switches and relays alike want neutral wires. It's not always straight-forward to integrate them, as I've written extensively about elsewhere here.

And right now, I've got the usual lights in the common areas downstairs set (via the Adaptive Lighting integration) to smoothly adjust their color temperature based on the position of the sun. And I really like that function: I get intense 6000k light during the day that more-or-less emulates the ambient sunlight that comes in through the windows, and a much more serene 3000k light when it's ~dark outside. And nobody has to think about it at all on a day-to-day basis; it Just Works.

This is, quite frankly, pretty glorious to me in ways that I don't think I ever want to give up...so I'm stuck with smart bulbs in lots of places.
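The core of that behavior is just mapping sun elevation to a color temperature. A rough sketch (my own numbers and function name, not Adaptive Lighting's actual curve): warm below the horizon, full daylight above some elevation, linear in between.

```python
def color_temp_for_elevation(elevation_deg, min_k=3000, max_k=6000,
                             ramp=(0.0, 20.0)):
    """Map sun elevation (degrees) to a color temperature in Kelvin.

    At or below ramp[0] the light stays warm (min_k); at or above
    ramp[1] it is full daylight (max_k); in between we interpolate
    linearly. All thresholds here are illustrative defaults.
    """
    lo, hi = ramp
    if elevation_deg <= lo:
        return min_k
    if elevation_deg >= hi:
        return max_k
    frac = (elevation_deg - lo) / (hi - lo)
    return round(min_k + frac * (max_k - min_k))
```

Feed it the sun elevation from HA's sun integration on a schedule and push the result to the bulbs, and nobody ever has to think about it.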


I actually didn't try ESPHome yet, thx for mentioning it. That will be my next experiment then. The adaptive lighting also sounds really cool, will try that as well.

Do you know if the Athom bulbs even have some kind of persistent memory that can survive longer timeframes without power?


Yeah, give it a whirl.

There are only two kinds of memory in an Athom bulb: the RAM that is built into the ESP MCU (temporary, fast -- like RAM in a PC), and the flash ROM (permanent, much slower -- like an SSD in a PC).

Data in RAM doesn't survive for even a moment without power. Data in flash should be good for years and years with or without power.


Nothing, but looking at the current results, either no one has tried yet or it didn't work very well. And the pelican benchmark has been around for a while, so the opportunity was there.


From what I understood they provide a kind of shared platform where anyone can run things, and it was one of their clients/users performing the commits.


So they don't set reasonable expectations with customers and accept any and all garbage. As an Ops person, I can say this is a path to Ops hell: customers throw more and more garbage at you, and the toil of dealing with customer problems becomes unbearable.

This is a case of the Product Team not working with customers, finding out what is reasonable, and allowing the system to set reasonable limits.


I would give them some leeway; sometimes you have to learn the hard way. But I was also kind of surprised they didn't mention contacting the client anywhere.


fwiw I recently bootstrapped a small Debian image for myself, originally intended to sandbox coding agents I was evaluating. Shortly after, I got annoyed by baseline vim and added my tmux & nvim dotfiles; now I find myself working inside the container regularly. It definitely works, and is actually not the worst experience if your workflow is CLI-focused.


My experience is that if the tooling is set up right it's not painful; it's the fiddling around with volume mounts, folder permissions, debug points, and "what's inside the container and what isn't" that is always the big pain point.


Very accurate - that was one of the steps that caused me to fiddle quite a bit. I had to add an entrypoint to chown the mounts, and also some BuildKit cache volumes for all the package managers.

You can skip the uid/chown stuff if you work with userns mappings, but this was my work machine, so I didn't want to globally touch the Docker daemon.


Even putting GUI apps in a container isn't too bad once one develops the right incantation for X11/Wayland forwarding.
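For X11, the usual incantation amounts to passing DISPLAY through and bind-mounting the X socket. A sketch that just assembles the `docker run` command (image name hypothetical; Wayland needs different mounts, and you may also need `xhost` adjustments on the host):

```python
import os

def x11_docker_cmd(image, display=None):
    """Build a docker run argv for an X11 GUI app.

    Forwards the host DISPLAY and the X11 unix socket read-only.
    This is the common minimal recipe; hardened setups would add
    Xauthority handling instead of relying on xhost.
    """
    display = display or os.environ.get("DISPLAY", ":0")
    return [
        "docker", "run", "--rm", "-it",
        "-e", f"DISPLAY={display}",
        "-v", "/tmp/.X11-unix:/tmp/.X11-unix:ro",
        image,
    ]
```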


Would be really nice if we had more "just pay" options. As it is, the "just pay" options mostly can't be trusted any more than the free(-mium) options; both will try their best to "squeeze every dime of revenue".


Or "diversify", basically don't put all of your eggs in one basket. Can be done at any scale too, from storing backup copies of important documents at your parents house to buying a few apartments in Indonesia.

