EE84M3i's comments


I explicitly stopped this habit so that I don't accidentally do it with sensitive data I don't want going to my search engine provider's autocomplete API.

Disabling remote search autocomplete is one of the first things I do when I set up a new browser instance. It's a privacy and security nightmare I don't want.

Same here. And I just noticed yesterday that Firefox had added and enabled a "Suggestions from sponsors" feature. Which I've now disabled, but presumably it's been sending anything I type into the address bar to Mozilla since 2021. I am tired of Mozilla but Chrome is very much worse.

ETA: I only noticed yesterday because a "sponsored suggestion" popped up when I was typing, which I've not seen before. So either they actually enabled it recently, or advertisers don't bid on the kinds of things I usually type.


> Disabling remote search autocomplete

I've always had a suspicion that even with autocomplete off, some sort of telemetry or obscure feature is still leaking browser address bar text.


ctrl-k is for the search box

ctrl-l is for the address box

The most I want the address box to do is look up a DNS name. Even that can be a risk: if I hit "enter" with sensitive information, it could in some cases get pushed out to my DNS provider (which is me, but then it's possible the address would be forwarded to another resolver, and logged in an unexpected place).


I've never really understood why it's a thing to use a telnet client for transmitting text over a socket for purposes other than telnet. My understanding is that telnet is a proper protocol with escape sequences/etc, and that even HTTP/SMTP/etc require things like \r\n for line breaks. Are these protocols just... close enough that it's not a problem in practice for text data?
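(For what it's worth, HTTP does formally require CRLF line endings; here's a minimal bash sketch of what a hand-built request's raw bytes look like, with example.com as a placeholder host:)

```shell
# HTTP requires CRLF ("\r\n") line terminators; bash's $'...' quoting
# lets us spell them out explicitly. example.com is only a placeholder.
request=$'GET / HTTP/1.0\r\nHost: example.com\r\n\r\n'

# dump the raw bytes so the \r \n pairs are visible
printf '%s' "$request" | od -c
```

Part of why telnet gets away with it in practice: most telnet clients send CRLF when you press Enter, and many servers tolerate a bare LF anyway.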

Because for a long time, on most computers, the telnet client was the closest thing to an "open a tcp socket to this ip/port and connect the i/o from it to stdin/stdout" application you can get without installing something or coding it up yourself.

These days we have netcat/socat and others, but they're not reliably installed, while telnet used to be generally available because telnetting to another machine was more common.

These days, the answer would be to use a netcat variant. In the past, telnet was the best we could be confident would be there.


You don't even need netcat or socat for that, probing /dev/tcp/<host>/<port> from the shell is enough.
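To sketch that (with the caveat that /dev/tcp/<host>/<port> is interpreted by bash itself, not a real /dev entry, so it won't work in a plain POSIX sh):

```shell
# Poor-man's port probe using bash's /dev/tcp pseudo-path (bash-only).
probe() {
  # try to open fd 3 to host:port in a subshell, then let it close on exit
  if (exec 3<>"/dev/tcp/$1/$2") 2>/dev/null; then
    echo "open"
  else
    echo "closed"
  fi
}

# e.g.: probe example.com 80
```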

Telnet was available in the 90s. I reckon /dev/tcp is way more recent. GP did say a long time ago.

That's some GNU bash shenanigans. There is no /dev/tcp in Unix.

Lots of shops didn't have GNU installed: telnet was what we had.


In corporate environments, netcat was often banned as it was seen as a "hacking" tool. Having it installed would sometimes get the attention of the security folks, depending how tightly they controlled things.

Same reason that people use vi. It's always there.

The telnet protocol with escapes, etc. is only used by the telnet client if you’re connecting to the telnet port. If you’re connecting to HTTP, SMTP or something else, the telnet protocol is not enabled.

Because it's there.

It hasn't been, for most of the last two decades.

The telnet client comes with MS Windows, Linux and macOS. The only platforms where you need to install some extra component are Android and iOS.

Many companies have been preventing its execution or removing the package by default for a number of years.

Also, most Linux containers do not ship with such binaries, to save on image size and reduce vuln-management overhead.


> to save on img size

    $ ls --human-readable --size --dereference $(which telnet)
    144K /usr/bin/telnet

The point is not that this particular binary is huge; the point is that we tend to strip images of anything that is not useful for the actual application shipped. So we strip everything. Also: small things add up. One AI prompt can be handled reasonably by a single machine; millions of concurrent ones involve huge datacenters and whole power plants being restarted or built.

The point of reducing the number of binaries shipped with the image is also to reduce the number of CVEs/vulns in your reports that wouldn't be relevant for your app but would still be raised by their presence.


Telnet client is an optional feature in Windows that needs to be enabled/installed.

telnet hasn’t shipped with macOS since 10.12 Sierra, ten years ago.

Debian also isn’t shipping telnet in the base install since Debian 11.


Thanks, sounds like a recent development. I don't use macOS, but on other people's macOS computers it was always there, even when they were not developers. But it could very well be that these computers are ten years old.

I mean, technically MS Windows 10 is ten years old, but the big upgrade wave to 10 only happened like 4 years ago, which is quite recent. Maybe it is similar for macOS users; I don't know.


In the days of yore, Windows had telnet installed. Most hackers used telnet in the 90's and early 2000's.

Anki also regularly takes local backups.

For me, it brings to mind the SR-71 speedcheck story just as a similar classic. https://www.thesr71blackbird.com/Aircraft/Stories/sr-71-blac...


Doesn't it ask you if you trust a folder when you open it?


You are right that the computer asks you. But people click yes because they are used to ignoring warning signs. The software relies on people making perfect choices every time and that never happens.


It should tell me what I should look at before I trust it. Not trusting the workspace means I might as well use Notepad to open it. I wouldn't have thought that tasks.json includes autorun tasks in addition to build actions.


Who remembers autorun.exe?


I always wondered why. Now I finally know that it auto runs code in that folder.

Who thought this is a good idea and why wasn't it specified in ALL CAPS in that dialog?

Is it even documented anywhere?

Very infrequent vscode user here, beginning to think it's some kind of Eclipse.


I mean it's not in caps, but it's literally the first line in the dialog after the header:

https://code.visualstudio.com/docs/editing/workspaces/worksp...

I'm big on user first; if that dialog had sirens blaring, a gif, and ten arrows pointing at "THIS MAY EXECUTE CODE" and people still didn't get the idea, I'd say it needs fixing. It can't be said that they didn't try, or that they hid it, though.


>"THIS MAY EXECUTE CODE"

So at the end of the day it's still unclear whether it executes code or not? Just say "this WILL execute code" and specify exactly which code it tries to execute by default.


I don't know about you people, but I always read this as "it may execute code if you run a build step".

Not "I will execute autorun.inf like an idiot."

And NO. I do not want my IDE to execute code when I open files for editing. I want it to execute code only as part of an explicit step that I initiate.


Yeah but it's one of those useless permission requests along the lines of "Do you want this program to work or not?"

They're pawning off responsibility without giving people a real choice.

It's like the old permission dialog for Android that was pretty much "do you want to use this app?". Obviously most people just say yes.

There's a reason Google changed that.

To be fair I'm sure Microsoft would switch to a saner permission model if they could but it's kind of too late.


It's not a false choice - "Trust" and "don't trust" are both perfectly viable options. The editor works fine in restricted mode, you just won't have all your extensions enabled.


> there is no doubt that the proof is correct.

Do you have any links to reading about how often lean core has soundness bugs or mathlib has correctness bugs?


IIRC a lot of NYC Taxis have them? (or at least, a mark on the side saying "Induction Loop")


It's interesting that the US Navy apparently uses a regular gmail address for the vet clinic on the base in Bahrain, according to the linked country instructions[1]. One would imagine that would be prohibited by some policy.

[1]: https://www.navsup.navy.mil/Portals/65/HHG/Documents/Oversea...


It is interesting, for sure, that they are using a gmail.com email address for a role account, for which the recipient is apparently CPT John Hutchison as of May 2025 [0]. But that's not what actually inspired me to write this reply, which I thought some of you may enjoy reading.

Incidentally, the dot in the local recipient part of that NSA veterinarian address brings a fond anecdote to mind. Since, for a gmail SMTP address at delivery time (excluding organizationally managed Workspace addresses), "dots" do not matter in the LHS of a recipient address [1], this gmail account address (since it is in the gmail.com domain) is effectively just "nsabahrainvetclinic[at]gmail.com"; the dot seems only to be a visual cue to make its meaning clearer for the human reader/sender. But that's just a preface to my actual anecdote.
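That dot-insensitivity (together with the "+suffix" rule I'll mention below) can be sketched as a tiny bash helper. canonical_gmail is just my own hypothetical name for it, and it mirrors only the documented consumer-gmail.com matching, not Workspace domains:

```shell
# Hypothetical sketch of gmail.com delivery-time address matching:
# dots in the local part are ignored and any "+suffix" is dropped
# (consumer gmail.com only, per Google's docs).
# Requires bash 4+ for the ${var,,} lowercasing expansion.
canonical_gmail() {
  local lhs=${1%%@*} domain=${1#*@}
  lhs=${lhs%%+*}    # drop "+anything"
  lhs=${lhs//./}    # dots do not matter
  printf '%s@%s\n' "${lhs,,}" "${domain,,}"
}

# canonical_gmail NSA.BahrainVetClinic@gmail.com -> nsabahrainvetclinic@gmail.com
```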

More preface: Gmail account names (the LHS) must be at least six characters in length when the account is submitted for creation. [2]

As an early adopter from the Gmail invite-only beta stage, I was able to obtain my long-held, well-known (by my peers) 7-character UNIX login name @gmail.com without issue, which consists of my five-letter first name followed immediately by a two-letter abbreviation of my lengthy Dutch surname, as had been used for years as my Unix login (and thus email address) and sometimes as my online forum handle.

In those early days of gmail, I wanted to "reserve" similarly short, memorable, and tradition-preserving usernames for my children, who would soon be reaching ages where having an email account would be relevant, and my allotment of invites put me in a position to secure such "good" addresses for them. For my daughter this worked out easily, as her first name plus surname abbreviation came to exactly six characters. For my son, it did not seem possible: his given name was only three letters long, and 3+2 being 5, a gmail account following my newly imposed family naming scheme appeared to be out of reach.

So, on a hunch that there was something to exploit here (and slightly influenced by the burgeoning corporate trend of first.last[at]domain address standardization), I hypothesized that the gmail web front-end's letter-counting might let me satisfy the six-character minimum while violating its spirit. I followed through and got my son's address past that check by creating it as his three-letter first name, followed by a "dot", with our two-letter surname abbreviation at the end; something like abc.xy@gmail.com. The hunch paid off: as described in [1], the dot was simply ignored at SMTP address-parsing and delivery time (and perhaps also at username creation/storage time, but that's just a guess; I never worked at Google, so I'm unsure how it actually worked at a technical level). This effectively gave my son a five-letter gmail "username", in the intended "first name followed by two-letter surname short form" I had created for my progeny, simply by omitting the '.' from his username when sending him email. :-) (My son has sadly since passed - RIP my sweet boy Ryk; I miss you terribly every day.) I have no idea if this technique is still exploitable today.

I did later wonder if I could have done similar using the fact that "+anything" is ignored in the LHS when parsing a gmail delivery address to maybe pull off creating a three-letter username for a gmail account for my son back then, but never actually tried it when it could have been trivial to try to exploit that sort of front-end-validation vs backend implementation technique for gmail addresses. shrug

I hope y'all don't mind my little off-topic tangent and enjoy the story of this afaik little-known feat that could be pulled off, at least for a time.

[0] https://www.cusnc.navy.mil/Portals/17/NSA%20BAHRAIN%20IMPORT...

[1] https://support.google.com/mail/answer/7436150?hl=en

[2] https://support.google.com/mail/answer/9211434?hl=en


I just wanted to say that I enjoyed your story and I am deeply sorry for your loss.


Thank you, on both counts.


Would be curious to hear your hypothesis on what's the remaining 10-20% that might be out of reach? Business logic bugs?


Honestly I'm just trying to be nice about it. I don't know that I can tell you a story about the 90% ceiling that makes any sense, especially since you can task 3 different high-caliber teams of senior software security people on an app and get 3 different (overlapping, but different) sets of vulnerabilities back. By the end of 2027, if you did a triangle test, 2:1 agents/humans or vice/versa, I don't think you'd be able to distinguish.

Just registering the prediction.


I would take the other side of that bet.

  # if >10 then was_created_by_agent = true
  $ grep -oP '\p{Emoji}' vulns.md | wc -l


I don't understand what you're trying to say here.


Just that the superficial details of how AI communicate (e.g. with lots of emojis) might give them away in any triangle test :)


I see this emoji thing being mentioned a lot recently, but I don't remember ever seeing one. Granted I rarely use AI and when I do it's on duck.ai. What models are (ab)using emojis?


Ah! Touche.

