Hacker News | derefr's comments

No; if a phone has both a non-removable battery and a baseband modem, then various laws require that modem to be wired directly to that battery (and to the phone's microphone) and to be able to be activated in response to a legal wiretap order, even when the phone itself is nominally "powered off."

(And this doesn't even require that the phone stay connected to the network after the wiretap-enable packet is received. Rather, while the phone is "powered off", the baseband modem might sit there passively acting as a bug, capturing conversation through the microphone onto a bit of NAND onboard the modem; and then, once the phone is powered on again, the baseband modem will take the opportunity to silently play back to the tower whatever it has recorded.)

> if your use case is that sensitive even carrying a smartphone seems questionable.

The issue is that, if you're an actual honest-to-god spy (or investigative journalist...) trying to poke your nose into the goings-on of some government, then you want to draw as little suspicion to yourself as possible; and it's much more suspicious to be going around without the subject government's favorite citizen-surveillance tool on your person. In fact, to blend in, you need to be constantly using your state-surveillance-device to communicate with (decoy) friends and coworkers, doom-scroll, etc.

This is why spies are fans of the few remaining Android phone brands that offer designs with removable batteries. When meeting with a contact, they'll still slip their could-be-bugged-phone into a faraday bag, to cut off its network connectivity; but they'll also remove the phone's battery before putting the phone into the faraday bag, to inhibit this class of "powered-off" record-to-NAND-style baseband wiretap attacks.

(Of course, these are just ways to secure a phone you own + can determine wasn't subject to a supply-chain attack. If two people are meeting who aren't within the same security envelope, then either of them might be trying to surreptitiously record the conversation, and so their phones (or anything else on them) might contain a tiny bug with its own power source, that would stay active even if the macro-scale device's battery was removed. For such meetings, you therefore want to leave all electronic devices in a soundproof safe, in another room. Which will also implicitly act as a faraday cage.)


> if a phone has both a non-removable battery and a baseband modem, then various laws require that modem to be wired directly to that battery (and to the phone's microphone) and to be able to be activated in response to a legal wiretap order, even when the phone itself is nominally "powered off."

Could you link to such a law?


I have seen phone schematics for many generic Androids, and at least for them, this comment is complete BS. The AP loads the firmware for the modem when it's turned on and boots it, and completely powers off the modem when asked to turn it off, e.g. in airplane mode. No idea about Apple though, they tend to Think Different™.


> I have seen phone schematics

Documentation is insufficient for protection.

Stuxnet happened, despite correct documentation of Siemens PLCs.


That is a complete non-sequitur.


Architects aren't generally brutalists themselves, but rather, brutalist architecture proposals win contracts because their TCO is lower. Facades have maintenance costs; bare concrete just requires power-washing now and then.

Well, it's even cheaper if you skip the wash and let it become completely drab and awful.

I don't think the exciting thing here is the technology powering it. This isn't a story about OpenClaw being particularly suited to enabling this use-case, or of higher quality than other agent frameworks. It's just what people happen to be running.

Rather, the implicit/underlying story here, as far as I'm concerned, is about:

1. the agentic frameworks around LLMs having evolved to a point where it's trivial to connect them together to form an Artificial Life (ALife) Research multi-agent simulation platform (a minimal sketch of such a loop appears at the end of this comment);

2. that, distinctly from most experiments in ALife Research so far (where the researchers needed to get grant funding for all the compute required to run the agents themselves — which becomes cost-prohibitive when you get to "thousands of parallel LLM-based agents"!), it turns out that volunteers are willing to allow research platforms to arbitrarily harness the underlying compute of "their" personal LLM-based agents, offering them up as "test subjects" in these simulations, like some kind of LLM-oriented folding@home project;

3. that these "personal" LLM-based agents being volunteered for research purposes, are actually really interesting as research subjects vs the kinds of agents researchers could build themselves: they use heterogeneous underlying models, and heterogeneous agent frameworks; they each come with their own long history of stateful interactions that shapes them separately; etc. (In a regular closed-world ALife Research experiment, these are properties the research team might want very badly, but would struggle to acquire!)

4. and that, most interestingly of all, it's now clear that these volunteers feel little if any need to reserve their agents as test subjects only for an established university running a large academic study (as they would if they were, e.g., offering their own bodies as test subjects for medical research); rather, they're willing to offer up their agents to basically any random nobody who's decided that they want to run an ALife experiment — whether or not that random nobody even realizes/acknowledges that what they're doing is an ALife experiment. (I don't think the Moltbook people know the term "ALife", despite what they've built here.)

That last one's the real shift: once people realize (from this example, and probably soon others) that there's this pool of people excited to volunteer their agent's compute/time toward projects like this, I expect that we'll be seeing a huge boom in LLM ALife research studies. Especially from "citizen scientists." Maybe we'll even learn something we wouldn't have otherwise.
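
To make point 1 above concrete, here's a minimal sketch of the kind of shared-board loop such a platform boils down to. The call_agent() wrapper is hypothetical, standing in for whatever model/framework a given volunteer's agent happens to run; only the message-passing structure is the point, and none of this is Moltbook's actual code:

    # Minimal multi-agent "shared world" loop (a sketch, not anyone's real platform).
    import random

    def call_agent(agent_id: str, prompt: str) -> str:
        # Hypothetical placeholder: in practice this would call out to each
        # volunteer's own agent endpoint (different models, frameworks, memories).
        return f"[{agent_id}] reply to: {prompt[:40]}..."

    agents = [f"agent-{i}" for i in range(5)]
    board = []  # the shared "world state" every agent can observe

    for step in range(10):
        actor = random.choice(agents)
        context = "\n".join(board[-20:])  # each agent sees the recent public posts
        post = call_agent(actor, f"Here is the board so far:\n{context}\nWrite your next post.")
        board.append(post)

The interesting part is how little of the loop is the researcher's own compute: the expensive step (call_agent) is donated by the volunteers.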


Yeah, I think that's why I don't find this super interesting. It's more a viral social media thing than an AI thing.

In countries with public healthcare + doctor shortages (e.g. Canada), good luck even getting a family doctor, let alone having a request to switch your family doctor "when you already have one!" get taken seriously.

Everyone I know just goes to walk-in clinics / urgent-care centres. And neither of those options gives doctors any "skin in the game." Or any opportunities for follow-up. Or any ongoing context for evaluating treatment outcomes of chronic conditions, with metrics measured across yearly checkups. Or the "treatment workflow state" required to ever prescribe anything that's not a first-line treatment for a disease. Or, for that matter, the willingness to believe you when you say that your throat infection is not in fact viral, because you've had symptoms continuously for four months already, and this was just the first time you had enough time and energy to wake up at 6AM so you could wait out in front of the clinic at 7:30AM before the "first-come-first-served" clinic fills up its entire patient queue for the day.


The US has the same issue.

Because the Republican party turned out to be a bunch of fascist fucks, there's no real critique of Obamacare. One of the big changes with the ACA is that it allowed medical networks to turn into regional cartels. Most regions have 2-3 medical networks, which have gobbled up all of the medical practices and closed many.

Most of the private general practices have been bought up and consolidated into giant practices, with doctors paid to quit and replaced by other providers at half the cost. Specialty practices are being swept up by PE.


Perhaps you are not aware, but Obamacare is actually Romneycare. It is set up exactly in the way Republicans wanted, instead of as the single-payer system that the general public, and especially Democratic voters, wanted. So why would Republicans critique the system that gave insurance companies even more money?

Republicans wouldn't. Fascist cult members would be against ice cream if Obama was giving it away.

I think the implication is that you should own multiple client devices capable of SSHing into things, each with their own SSH keypair; and every SSH host you interact with should have multiple of your devices’ keypairs registered to it.

Right, and to never back up the keys, which means losing all of your devices means you can't possibly recover.

Tuna-Fish said that instead of backing up the keys from your devices, you should create a specific backup key that is only ever used in case you lose access to all your devices.

This is indeed best practice because it allows you to alert based on key: if you receive a login on a machine with your backup key, but you haven't lost your devices, then you know your backup was compromised. If you take backups of your regular key then it would be much more difficult to notice a problem.
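
The alerting side of that can be quite simple. As a sketch (not the only way to do it): watch sshd's auth log for successful logins made with the backup key's SHA256 fingerprint, the value `ssh-keygen -lf backup_key.pub` prints. Log path and format vary by distro; this assumes a Debian-style auth.log readable by the monitoring user, and the fingerprint below is a placeholder:

    import time

    BACKUP_FPR = "SHA256:REPLACE_WITH_YOUR_BACKUP_KEY_FINGERPRINT"  # placeholder
    LOG = "/var/log/auth.log"  # assumption: Debian-style sshd logging

    with open(LOG) as f:
        f.seek(0, 2)  # start at end of file, like `tail -f`
        while True:
            line = f.readline()
            if not line:
                time.sleep(1)
                continue
            if "Accepted publickey" in line and BACKUP_FPR in line:
                # Swap this print for whatever alerting channel you actually use.
                print("ALERT: backup key was used to log in:", line.strip())

If that fingerprint ever shows up while the card is still supposed to be in the safe, you know the backup leaked.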


My point was that one of the devices would be your (cold) backup — you'd e.g. get an (ideally passphrase-protectable) smart-card; read off its pubkey; register that pubkey with all your remote systems/services; and then put the smart-card itself into a fire safe / safe-deposit box at a bank / leave it in trust with your lawyer / etc.

Note that you would never need to go get the smart-card just to perform incremental registration between it and a new remote host/service. You just need its pubkey, which can live in your password manager or wherever.
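
As a concrete (hypothetical) illustration of that pubkey-only registration step, something like the following works without the card ever leaving the safe. The hostnames and key string are placeholders, and it assumes you already have ordinary SSH access to each host:

    import shlex, subprocess

    BACKUP_PUBKEY = "ssh-ed25519 AAAA... backup-smartcard"  # from your password manager
    HOSTS = ["user@host1.example.com", "user@host2.example.com"]  # placeholders

    for host in HOSTS:
        # Append the backup pubkey to authorized_keys, skipping hosts that already have it.
        cmd = (
            "mkdir -p ~/.ssh && "
            f"grep -qxF {shlex.quote(BACKUP_PUBKEY)} ~/.ssh/authorized_keys 2>/dev/null || "
            f"echo {shlex.quote(BACKUP_PUBKEY)} >> ~/.ssh/authorized_keys"
        )
        subprocess.run(["ssh", host, cmd], check=True)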

And yet, if your house burns down, you can go get that smart-card, and use it to get back into all your services.

And yet also, unlike a backup of another of your keys, if you find out that someone broke into your house and stole your safe, or robbed your bank, etc., then you can separately revoke the access of the pubkey associated with the smart-card, without affecting / requiring the rolling of the keys associated with your other devices. (And the ideal additional layer of passphrase protection for the card gives you a time window to realize your card has been taken, and perform this revocation step, before the card can be cracked and used.)

Indeed, as the sibling comment mentions, this is vaguely similar to a (symmetrically passphrase-encrypted) backup of a unique extra PKI keypair onto a USB stick or somesuch.

The major difference, though, is that because a backup of a key is truly "just data", an attacker can copy off the encrypted file (or image the raw bytes of the encrypted USB disk), and then spawn 10000 compute instances to attempt to crack that encrypted file / disk image.

Whereas, even when in possession of the smart-card, the attacker can't make 10000 copies of the data held in the smart-card. All they can do is attack the single smart-card they have — where doing so may in turn cause the smart-card to delete said data, or to apply exponential-backoff to failed attempts to activate/use the key material. The workflow becomes less like traditional password cracking, and more like interrogating a human (who has been explicitly trained in Resistance-to-Interrogation techniques.)


To me that just sounds like creating obstacles for myself to get access to my system when I desperately need to. I keep a backup of my work pc keys on Google Drive and I have zero anxiety about that.

I believe that will result in Google locking you out of your Google account, including Gmail, YouTube, any Google Cloud projects, etc.

This is exactly what will happen, you have no recourse. Technofeudalism is real.

I've done it in the past (~2015). Honestly if Google locked me out of all of those other purchases it'd be great grounds to sue them. If everyone started doing this it would prevent them from doing this in the first place and may be additional fodder for (hopefully) continued anti-trust losses in court. If your life is tied to Google in that way then it's a risk no matter what you do and you should probably think about how to reduce that risk. I don't have anything other than purchases tied to my Google accounts anymore.

It's likely down in the ToS somewhere that they are free to close your account if you do a chargeback, otherwise they wouldn't be so eager to do it.

Peanuts to an elephant.

It’d be difficult to use in any automated process, because judging how good one of these renditions is remains a very qualitative exercise.

You could try to rasterize the SVG and then use an image2text model to describe it, but I suspect it would just “see through” any flaws in the depiction and describe it as “a pelican on a bicycle” anyway.
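
The plumbing for that is straightforward; it's the judging that's weak. A sketch, assuming cairosvg is installed and an OpenAI-compatible vision endpoint/API key is configured ("pelican.svg" and the model name are placeholders):

    import base64
    import cairosvg
    from openai import OpenAI

    # Rasterize the SVG to PNG bytes, then pass it to a vision model as a data URL.
    png_bytes = cairosvg.svg2png(url="pelican.svg")
    data_url = "data:image/png;base64," + base64.b64encode(png_bytes).decode()

    client = OpenAI()
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this image in one sentence."},
                {"type": "image_url", "image_url": {"url": data_url}},
            ],
        }],
    )
    print(resp.choices[0].message.content)

And, per the suspicion above, the description will probably come back as "a pelican on a bicycle" regardless of how mangled the drawing is, which is exactly why it makes a weak automated judge.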


I would argue that an LLM is a perfectly sensible tool for structure-preserving machine translation from another language to English. (Where by "another language", you could also substitute "very poor/non-fluent English." Though IMHO that's a bit silly, even though it's possible; there's little sense in writing in a language you only half know, when you'd get a less-lossy result from just writing in your native tongue, and then having it translate from that.)

Google Translate et al were never good enough at this task to actually allow people to use the results for anything professional. Previous tools were limited to getting a rough gloss of what words in another language mean.

But LLMs can be used in this way, and are being used in this way; and this is increasingly allowing non-English-fluent academics to publish papers in English-language journals (thus engaging with the English-language academic community), where previously those academics may have felt "stuck" publishing in what few journals exist for their discipline in their own language.

Would you call the use of LLMs for translation "shoddy" or "irresponsible"? To me, it'd be no more and no less "shoddy" or "irresponsible" than it would be to hire a freelance human translator to translate the paper for you. (In fact, the human translator might be a worse idea, as LLMs are more likely to understand how to translate the specific academic jargon of your discipline than a randomly-selected human translator would be.)


Autotranslating technical texts is very hard. After the translation, you must check that every technical word was translated correctly, and not replaced with a fancy synonym that does not make sense.

(A friend has an old book translated a long time ago (by a human) from Russian to Spanish. Instead of "complex numbers", the book calls them "complicated numbers". :) )


I remember one time when I had written a bunch of user facing text for an imaging app and was reviewing our French translation. I don't speak French but I was pretty sure "plane" (as in geometry) shouldn't be translated as "avion". And this was human translated!

You'd be surprised how shoddy human translations can be, and it's not necessarily because of the translators themselves.

Typically what happens is that translators are given an Excel sheet with the original text in a column, and the translated text must be put into the next column. Because there's no context, it's not necessarily clear to the translator whether the translation for plane should be avion (airplane) or plan (geometric plane). The translator might not ever see the actual software with their translated text.


The convenient thing in this case (verification of translation of academic papers from the speaker's native language to English) is that the authors of the paper likely already 1. can read English to some degree, and 2. are highly likely to be familiar specifically with the jargon terms of their field in both their own language and in English.

This is because, even in countries with a different primary spoken language, many academic subjects, especially at a graduate level (masters/PhD programs — i.e. when publishing starts to matter), are still taught at universities at least partly in English. The best textbooks are usually written in English (with acceptably-faithful translations of these texts being rarer than you'd think); all the seminal papers one might reference are likely to be in English; etc. For many programs, the ability to read English to some degree is a requirement for attendance.

And yet these same programs are also likely to provide lectures (and TA assistance) in the country's own native language, with the native-language versions of the jargon terms used. And any collaborative work is likely to also occur in the native language. So attendees of such programs end up exposed to both the native-language and English-language terms within their field.

This means that academics in these places often have very little trouble in verifying the fidelity of translation of the jargon in their papers. It's usually all the other stuff in the translation that they aren't sure is correct. But this can be cheaply verified by handing the paper to any fluently-multilingual non-academic and asking them to check the translation, with the instruction to just ignore the jargon terms because they were already verified.


> with the native-language versions of the jargon terms used

It depends on the country. Here in Argentina we use a lot of loaned words for technical terms, but I think in Spain they like to translate everything.


When reading technical material in my native language, I sometimes need to translate it back to English to fully understand it.

idk, I think Gemini 2.5 did a great job translating almost all research math papers from French to English...

To that point, I think it's lovely how LLMs democratize science. At ICLR a few years ago I spoke with a few Korean researchers who were delighted that their relative inability to write in English was no longer being held against them during the review process. I think until then I had underestimated how pivotal this technology was in lowering the barrier to entry for the non-English-speaking scientific community.

If they can write a whole draft in their first language, they can easily read the translated English version and correct it. The errors described by the GP/OP arose when authors asked the LLM to generate a full paragraph of text directly. Look at my terrible English; I really have gone through the full process from draft to English version before :)

We still do not have a standardized way to refer to machine learning concepts. For example, in vision models, I see lots of papers confuse "skip connections" and "residual connections": when they concatenate channels they call it a "residual connection", which shows they haven't understood why we call them "residual" in the first place. In my humble opinion, each conference (or better, a confederation of conferences) should work together to provide a glossary, a technical guideline, and also a special machine-translation tool, to correct unclear, grammatical-error-filled English like mine!
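
Even without a conference-blessed glossary, the checking half of such a tool could start out very small. A sketch; the glossary entries and file name here are made up for illustration:

    import re

    GLOSSARY = {
        # canonical term -> variants that should be flagged for review
        "residual connection": ["residual link", "remainder connection"],
        "skip connection": ["jump connection", "skipping link"],
    }

    def check_terms(text: str) -> list[str]:
        warnings = []
        for canonical, variants in GLOSSARY.items():
            for bad in variants:
                if re.search(r"\b" + re.escape(bad) + r"\b", text, re.IGNORECASE):
                    warnings.append(f"found '{bad}'; did you mean '{canonical}'?")
        return warnings

    paper = open("translated_paper.txt").read()
    for w in check_terms(paper):
        print(w)

The hard part, of course, is agreeing on what goes in GLOSSARY, which is the coordination problem the comment above is pointing at.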

I'm surprised by these results. I agree that LLMs are a great tool for offsetting the English-speaking world's advantage. I would have expected non-Anglo-American universities to rank at the top of the list. One of the most valuable features of LLMs from the beginning has been their ability to improve written language.

Why is their use more intense in English-speaking universities?


Good point. There may be a place for LLMs in science-writing translation (hopefully neither adding nor subtracting anything) when you're not fluent in the language of a venue.

You need a way to validate the correctness of the translation, and to be able to stand behind whatever the translation says. And the translation should be disclosed on the paper.


> Coming from a software background, this seems bizarre, as if C++ compilers rejected valid programs unless they stuck to easy constructs with obvious assembly implementations.

To my understanding, isn’t it more like there being a perfectly good IR instruction coding for a feature, but with no extant ISA codegen targets that recognize that instruction? I.e. you get stuck at the step where you’re lowering the code for a specific FPGA impl.

And, as with compilers, one could get around this by defining a new abstract codegen target implemented only in the form of a software simulator, and adding support for the feature to that. Though it would be mightily unsatisfying to ultimately be constrained to run your FPGA bitstream on a CPU :)


The non-synthesizable features of Verilog not only work in current simulators, they were expressly developed for that purpose. Verilog has those features to describe conditions that might exist in a semiconductor as manufactured, but aren't part of any design, so that they can be more accurately simulated. For example, a pin state can be unknown, or two pins can be connected with a delay line. These allow a real-life semiconductor to be characterized well enough to insert into a simulation of the electronics circuit as a whole.

It's more akin to directives than instructions. Debug instructions can also serve a similar purpose, although they actually run on the hardware, whereas compiler directives and non-synthesizable Verilog constructs are never executed.


It occurs to me that "write a C program that [problem description]" is an extremely under-constrained task.

People are highly aware that C++ programmers are always using some particular subset of C++; but it's less obvious that any actual C programmer is likewise going to be using a particular dialect on top of C.

Since the C standard library is so anemic for algorithms and data structures, any given "C programmer" is going to have a hash map of choice, a b-tree of choice, a streams abstraction of choice, an async abstraction of choice, etc.

And, in any project they create, they're going to depend on (or vendor in) those low-level libraries.

Meanwhile, any big framework-ish library (GTK, OpenMP, OpenSSL) is also going to have its own set of built-in data structures that you have to use to interact with it (because it needs to take and return such data-structures in its API, and it has to define them in order to do that.) Which often makes it feel more correct, in such C projects, to use that framework's abstractions throughout your own code, rather than also bringing your own favorite ones and constantly hitting the impedance wall of FFI-ing between them.

It's actually shocking that, in both FOSS and hiring, we expect "experienced C programmers" who've worked for 99% of their careers with a dialect of C consisting of abstractions from libraries E+F+G, to also be able to jump onto C codebases that instead use abstractions from libraries W+X+Y+Z (that may depend on entirely different usage patterns for their safety guarantees!), look around a bit, and immediately be productively contributing.

It's no wonder an AI can't do that. Humans can barely do it!

My guess is that the performance of an AI coding agent on a greenfield C project would massively improve if you initially prompt it (or instruct it in an AGENTS.md file) in a way that entirely constrains its choices of C-stdlib-supplemental libraries. Either by explicitly listing them; or by just saying e.g. "Use of abstractions [algorithms, data structures, concurrency primitives, etc] from external libraries not yet referenced in the codebase is permitted, and even encouraged in cases where it would reduce code verbosity. Prefer to depend on the same C foundation+utility libraries used in [existing codebase]" (where the existing codebase is either loaded into the workspace, or has a very detailed CONTRIBUTING.md you can point the agent at.)

