Can you speak a little bit more to the stats in the OP?
* 135k+ OpenClaw instances are publicly exposed
* 63% of those run zero authentication. Meaning the "low privilege required" in the CVE = literally anyone on the internet can request pairing access and start the exploit chain
Is this accurate? This is definitely a very different picture than the one you paint.
That’s surprising, as the OpenClaw installation makes it pretty difficult to run without auth and explicit device pairing (I don’t even know if that’s possible).
The problem is that a lot of OpenClaw users have a chatbot set it up for them, and chatbots have a habit of killing safety features when user requests run into roadblocks. This makes installations super heterogeneous.
I agree—it looks like the OP didn't provide any sources for these numbers either. That's why I would have hoped that the original maintainer had a better set of metrics to dispute them. It doesn't seem like he does though :(
Those numbers aren't in the CVE. You introduced them, attributed them to a source that doesn't contain them, and now you're disclaiming them. Where did they come from, and what was the goal of sharing them?
The numbers were in the post when I clicked through and when I made the comment. It looks like the HN moderators have since changed the link for the post to go to the CVE entry. However, my comment was about the reddit thread, not the CVE entry.
Honestly that seems like total guesswork. There's a lot of FUD going around, or people running portscans and assuming just because they detect a gateway on a port, that they can connect to it. That’s not the case.
When evaluating the complete bun install improvements, the speed came out to about the same as the existing git usage: networking is the big time bottleneck, even though more cases were slightly faster with ziggit across multiple benchmarks. The difference is that it's done in 100% Zig, and those internal improvements pile up as projects accumulate more git dependencies. All in all, it seems like a sensible upstream contribution.
So you have to maintain a completely separate git implementation and keep that up to date with upstream git, all for the benefit of being indistinguishable on benchmarks. Oh well!
The commit I linked shows that it didn't even read the user name and email from git's config file, but used a test name, which means it's woefully incomplete.
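For contrast, here's a minimal Python sketch (hypothetical helper, not the project's actual code) of what honoring the user's configured identity involves: reading user.name and user.email out of gitconfig-style text instead of hard-coding a test name. Real git also merges system/global/local scopes and environment overrides, which this deliberately skips.

```python
def read_git_identity(config_text: str):
    """Pull user.name / user.email from gitconfig-style text.

    Sketch only: real git additionally merges system, global, and
    local config scopes plus GIT_AUTHOR_NAME-style env overrides.
    """
    name = email = None
    section = None
    for raw in config_text.splitlines():
        line = raw.strip()
        if not line or line.startswith(("#", ";")):
            continue  # skip blanks and comments
        if line.startswith("[") and line.endswith("]"):
            section = line[1:-1].strip().lower()
        elif section == "user" and "=" in line:
            key, _, value = line.partition("=")
            key, value = key.strip().lower(), value.strip()
            if key == "name":
                name = value
            elif key == "email":
                email = value
    return name, email
```

Even a stripped-down version like this is a handful of edge cases, which is the point: a reimplementation that skips it isn't comparable on benchmarks.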
It's just one giant function. Sometimes big functions are necessary, but this one is clearly AI generated and not very readable for a human. And that's just from a quick glance.
Surely "the commits are attributed to the user who creates them" is a pretty basic feature of the git CLI, not something you can add in as a fix later, after posting your project to GitHub and writing a blog post about how much faster than git it is.
It's very easy to be faster than git's CLI if you don't have to do any of the things that git's CLI does!
Now we just need that AI booster guy to join this thread and tell us that actually this is super impressive. He was doing that for that worthless “browser” that Cursor built.
If git were a rapidly evolving project, I'd think this would be a stronger objection.
With git being more of an established protocol that projects from GitHub to jj piggy-back on, filling in a library in a new language seems like a genuine contribution.
> I think most people would interpret “scanning your computer” as breaking out of the confines the browser and gathering information from the computer itself.
Yes, but I also think that most people would interpret "Getting a full list of all the Chrome extensions you have installed" as a meaningful escape/violation of the browser's privacy sandbox. The fact that there's no getAllExtensions API is deliberate. The fact that you can work around this by scanning for extension IDs is not something most people know about, and the Chrome developers patched it once it became common. So I don't think it's correct to describe this as something everybody would expect, or as totally fine and normal for browsers to allow.
> I also think that most people would interpret "Getting a full list of all the Chrome extensions you have installed" as a meaningful escape/violation of the browser's privacy sandbox
I think that’s a far more reasonable framing of the issue.
> I don't think describing it as something everybody would expect is totally fine and normal for browsers to allow is correct.
I agree that most people would not expect their extensions to be visible. I agree that browsers shouldn’t allow this. I, and most privacy/security focused people I know, have been sounding the alarm about Chrome itself as unsafe if you care about privacy for a while now.
This is still a drastically different thing than what the title implies.
> Yes, but I also think that most people would interpret "Getting a full list of all the Chrome extensions you have installed" as a meaningful escape/violation of the browser's privacy sandbox.
I don't think so, because most people understand that extensions necessarily work inside of the sandbox. Accessing your filesystem is a meaningful escape. Accessing extensions means they have identification mechanisms unfortunately exposed inside the sandbox. No escape needed.
It's extremely unfortunate that the sandbox exposes this in some way.
Microsoft should be sued, but browsers should also figure out how to mitigate revealing installed extensions.
Y'all are letting "most people" carry an awful lot of water for this scummy behavior here.
In my experience, most people - even most tech people - are unaware of just how much information a bit of script on a website can snag without triggering so much as a mild warning in the browser UI. And tend toward shock and horror on those occasions where they encounter evidence of reality.
The widespread "Facebook is listening to me" belief is my favorite proxy for this ... Because, it sorta is - just... Not in the way folks think. Don't need ears if you see everything!
> The widespread "Facebook is listening to me" belief is my favorite proxy for this ... Because, it sorta is - just... Not in the way folks think. Don't need ears if you see everything!
Getting folks to install “like” and “share” widgets all over their websites was a genius move.
No, they propose just concatenating it with the data received from the network
> it makes a concatenation of the domain separator (@0x92880d38b74de9fb) and the serialization of the object, and then feeds the byte stream into the signing primitive. Similarly, verification of an object verifies this same reconstructed concatenation against the supplied signature.
> Note that the domain separator does not appear in the eventual serialization (which would waste bytes), since both signer and receiver agree on it via this shared protocol specification. Encrypt, HMAC, and hash work the same way
You are, of course, right. And this distinction is important for this chain of comments.
Though, in fairness, that is /kind of/ like transmitting it---in the sense that it impacts the message that is returned. It's more akin to sending a checksum of the magic number, rather than the magic number itself. But conceptually, that is just an optimization. The desire is for the client to ensure the server is using the same magic number, we just so happen to be able to overload the signature to encode this data without increasing the message size.
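To make that concrete, here's a minimal Python sketch of the scheme as described in the quoted spec. The MAC primitive is my assumption (the quote doesn't name one); the point is just that both sides prepend the shared domain separator before MACing, and only the tag travels on the wire:

```python
import hashlib
import hmac

# Shared via the protocol spec, never transmitted on the wire.
DOMAIN_SEP = bytes.fromhex("92880d38b74de9fb")

def sign(key: bytes, serialized: bytes) -> bytes:
    # The signer feeds domain-separator || serialization into the
    # primitive (HMAC-SHA256 assumed here for illustration).
    return hmac.new(key, DOMAIN_SEP + serialized, hashlib.sha256).digest()

def verify(key: bytes, serialized: bytes, tag: bytes) -> bool:
    # The receiver rebuilds the same concatenation from the spec,
    # so a mismatched separator (or payload) fails verification.
    return hmac.compare_digest(sign(key, serialized), tag)
```

This is the "overloading the signature" idea: the separator constrains what verifies without ever adding bytes to the message.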
Yes, this is a trend I've noticed strongly with Claude code—it really struggles to explain why. Especially in PR descriptions, it has a strong bias to just summarize the commits and not explain at all why the PR exists.
No, I think a lot of humans can explain why they're adding a new button to the checkout page, or why they're removing a line from the revenue reconciliation job. There's always a reason a change gets made, or else nobody would be working on it at all :)
The minute between December 31, 2016 23:59 and January 1st 2017 is 61 seconds, not 60 seconds. The hour that contains that minute is 3601 seconds, the day that contains that hour is 86401 seconds, etc. If you assume a fixed duration and simply multiply by 86400, your math will be wrong compared to the rest of the world.
Daylight savings time makes a day take 23 hours or 25 hours. That makes a week take 601200 seconds or 608400 seconds. Etc.
That’s what I mean by calendar units. These aren’t issues if you don’t try to apply durations to the “real” calendar.
(This is all in the context of cooldowns, where I’m not convinced there’s any real ambiguity risk in allowing the user to specify a duration in day or hour units rather than seconds. In that context a day is exactly 24 hours, regardless of what your local savings time rules are.)
"exactly 24 hours" could still be anywhere between 86399 and 86401 seconds, depending on leap seconds. At least if by an hour you mean an interval of 60 minutes, because a minute that contains a leap second will have either 59 or 61 seconds.
You could specify that for the purposes of cooldowns you want "hour" to mean an interval of 3600 seconds. But the fact that you have to specify that should illustrate how ambiguous the concept of an hour is. It's not a useless concept by any means, and I far prefer to specify durations in hours and days, but you have to spend a sentence or two defining which definition of hours and days you are using. Or you don't, and just hope nobody cares enough about the exact cooldown duration.
If you say "wait 1 day without using a calendar+locale" then the duration is unambiguously 86400s, but if you say "wait 1 day using a calendar+locale" or "wait until this time tomorrow" then the duration is ambiguous until you've incorporated rules like leap/DST. I think GP's point is that "wait 1 day" unambiguously defaults to the former, and you disagree, but perhaps it's a reasonable default.
Yep, this is exactly my point. Durations are abstract spans of "stopwatch time," they don't adhere to local times or anything else we use as humans to make time more useful to us. In that context there's no real ambiguity to using units like hours/days/weeks (but not months, etc.) because they have unambiguous durations.
Now you've got me wondering something: if a "stopwatch month" can't exist since everyone agrees that different months have different durations (and therefore you must select one like "the month of January" to know how long to run the stopwatch), isn't there an argument that a "stopwatch year" has the same need to select one since everyone agrees that different years have different day counts (unless we mean a solar year in seconds, not quantized to the nearest day, but that's probably a Bad Default)?
The collective human decision to make days-per-year vary (requiring leap rules to calculate days) seems similar to the collective human decision to make days-per-month vary (requiring month names to calculate days). So if we say a "stopwatch year" suffers the same fate as the "stopwatch month" then it's a slippery slope to saying the "stopwatch minute" is no different than a "stopwatch year" (requiring leap rules to calculate seconds) even if, for all practical purposes, it seems exempt.
I guess this is why we make "second" the SI unit, and none of our human-convenience rules mess with the duration of a second. A leap second changes the duration of a minute (and above), and a leap year changes the duration of a month (and above). Which, oddly enough, demonstrates an inconsistency: we ought to say "leap day" instead of "leap year" if the duration being added shall follow the word "leap" as is the case for leap seconds.
Leap seconds are their own nightmare. UNIX time ignores them, btw, so a Unix timestamp is 86400 × the number of days since 1970-01-01, plus the number of seconds since midnight. The behavior at the instant of a leap second is undefined.
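You can check that formula directly with the stdlib: Unix time acts as if every day were exactly 86400 seconds, so the 27 leap seconds inserted between 1972 and the end of 2016 never show up in a timestamp.

```python
from datetime import date, datetime, timezone

# Midnight UTC on 2017-01-01, immediately after the
# 2016-12-31 23:59:60 leap second.
dt = datetime(2017, 1, 1, tzinfo=timezone.utc)
days = (date(2017, 1, 1) - date(1970, 1, 1)).days

# Unix time pretends every one of those days had 86400 seconds,
# so the day count alone reproduces the timestamp exactly.
print(days)                  # 17167
print(int(dt.timestamp()))   # 1483228800
print(days * 86400)          # 1483228800
```

If Unix time counted real SI seconds since the epoch, the two values would differ by the accumulated leap seconds; instead they match exactly.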
Presumably because the DOM order of the elements is not the actual order of the lines (you can see this with e.g. the blockquotes), so it would be confusing if the user tried to copy the text and saw that all the lines were jumbled up
It's only the blockquotes that are out of order. If this were a valid reason to disable user selection, then no website with a sidebar would have it enabled. Besides, you could just disable user selection on the blockquotes if that were the reason (not that I'd ever recommend that)
No idea how I'm supposed to read the end of this. But it seems kinda interesting? Not that, like, require('fontmetrics') doesn't exist, but it's definitely true that most JS needs more font rendering than the browser seems capable of giving us these days.
I've never had that issue with Github—I think their account mixing setup reduces the amount of work I have to do to sign in 100x compared to other SSO systems I use.
The only explanation I have is that you must have used some weird other SSO systems.
GitHub has all the same SSO steps as anything else we use, but layered on top of the GitHub-specific account login. Everywhere else I just log in via SSO; with GitHub I first log in to GitHub itself (with its own MFA) and then go through the same SSO step as anywhere else.
I've never had to log in to Github as part of my daily flow. Only once to set up a new computer. Are you logging in using an incognito window or something?
Interesting. Perhaps it's because I'm not using GitHub daily, we're migrating to GitHub so I still do work in repos which live in the old system. Also, perhaps I'm more affected because I'm doing org admin stuff as well.