Hacker News | nixpulvis's comments

I looked into this a bit for a Rust project I'm working on; it's surprisingly difficult to be confident once you get all the way down to the CPU.

https://github.com/rust-lang/rust/issues/17046

https://github.com/conradkleinespel/rpassword/issues/100#iss...
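For what it's worth, here's a minimal sketch of the usual best-effort approach in Rust (the `wipe` helper is illustrative, not taken from rpassword): volatile writes stop the compiler from eliding the zeroing, but they still say nothing about copies left in registers, caches, or swapped-out pages, which is exactly why it's hard to be confident.

```rust
use std::sync::atomic::{compiler_fence, Ordering};

// Best-effort zeroing: write_volatile keeps the optimizer from removing
// the "dead" stores, and the fence keeps them from being reordered.
// No guarantees about register or cache copies of the secret.
fn wipe(buf: &mut [u8]) {
    for b in buf.iter_mut() {
        unsafe { core::ptr::write_volatile(b as *mut u8, 0) };
    }
    compiler_fence(Ordering::SeqCst);
}

fn main() {
    let mut secret = *b"hunter2";
    wipe(&mut secret);
    assert_eq!(secret, [0u8; 7]);
}
```

Crates like `zeroize` wrap this same idea up with less ceremony, but the fundamental uncertainty below the language level remains.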


The author tries to avoid a database for storing tokens while the client is disconnected, and ends up storing them in a pub/sub provider.

There's no solution other than to store the tokens somewhere, or drop them. You have to make a choice about how long you want to allow reconnects. And this is all pretty independent of the transport layer; as the author mentions, you can resume even in a new session as long as you have a prompt ID or something to tie it back to the original request.
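As a sketch of that choice (all names here are hypothetical, not any provider's API): buffer output tokens per prompt ID, let clients resume from an offset, and evict buffers after a TTL; that TTL is the "how long do you allow reconnects" decision made explicit.

```rust
use std::collections::HashMap;
use std::time::{Duration, Instant};

struct TokenBuffer {
    tokens: Vec<String>,
    last_seen: Instant,
}

struct ResumeStore {
    ttl: Duration, // how long we allow reconnects
    buffers: HashMap<String, TokenBuffer>,
}

impl ResumeStore {
    fn new(ttl: Duration) -> Self {
        ResumeStore { ttl, buffers: HashMap::new() }
    }

    // Record a generated token under its prompt ID.
    fn push(&mut self, id: &str, token: &str) {
        let buf = self.buffers.entry(id.to_string()).or_insert(TokenBuffer {
            tokens: Vec::new(),
            last_seen: Instant::now(),
        });
        buf.tokens.push(token.to_string());
        buf.last_seen = Instant::now();
    }

    // A reconnecting client asks for everything past what it already has.
    fn resume(&self, id: &str, from: usize) -> Option<&[String]> {
        self.buffers
            .get(id)
            .map(|b| &b.tokens[from.min(b.tokens.len())..])
    }

    // The unavoidable policy: drop buffers older than the TTL.
    fn evict(&mut self) {
        let ttl = self.ttl;
        self.buffers.retain(|_, b| b.last_seen.elapsed() < ttl);
    }
}

fn main() {
    let mut store = ResumeStore::new(Duration::from_secs(60));
    store.push("p1", "The");
    store.push("p1", "answer");
    // Client reconnects having already received one token.
    assert_eq!(store.resume("p1", 1).unwrap(), ["answer"]);
    assert!(store.resume("p2", 0).is_none());
}
```

Whether the map lives in process memory, Redis, or a pub/sub provider is an implementation detail; the store-or-drop tradeoff is the same.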

I don't know enough about how the LLM providers stream results, but the original claim that inference is more expensive than transport is a good point, and caching tokens seems like a smart move. Unfortunately, we pay by the token, so I don't see the incentive for providers to spend time and money doing this for us.


> Unfortunately, we pay by the token, so I don't see the incentive for providers to spend time and money doing this for us.

Providing a better service, for one. Plenty of providers do offer caching, both input and output tokens, and usually give you a cheaper price for it too. Examples from two of them: https://platform.claude.com/docs/en/build-with-claude/prompt... & https://api-docs.deepseek.com/guides/kv_cache


I feel like caching duplicate parts of the input is slightly different from storing outputs when a connection drops.


It seems like a good use case for a caching layer. You could probably set this up for agentic systems more simply and cheaply on Hetzner than by trying to cobble together a bunch of fragmented APIs.


How is this time efficient at all? It takes upwards of 40 seconds to compute for large 32-bit values.

It's a joke post with some interesting bits and details.


It's a constant number of lookups, and all good Computer Scientists know that it is therefore an O(1) algorithm.

It is hard to imagine better efficiency than O(1)!

Indeed we could improve it further by performing all evaluations even when we find the answer earlier, ensuring it is a true Constant Time algorithm, safe for use in cryptography.


> This is time efficient* but rather wasteful of space.

You're saying that the blog's solution is time efficient. Which it is not. Your solution may be O(1) but it is also not efficient. As I'm sure you are aware.

I can tell you a practical solution which is also O(1) and takes up maybe 2 or 3 instructions of program code and no extra memory at all.

`x & 1` or `x % 2 != 0`

This blog post was taking a joke and running with it, and your comment is in that spirit as well. I just wanted to point out that it's by no means time efficient when we have two's or one's complement numbers, which make this check trivial.
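A quick sanity check, in Rust, that the two forms agree, including on negatives (note that Rust's `%` is a remainder, so `-3 % 2 == -1`, which is why the comparison is `!= 0` rather than `== 1`):

```rust
// Bitwise form: the low bit is set exactly for odd values
// in two's complement.
fn is_odd_and(x: i32) -> bool {
    x & 1 != 0
}

// Remainder form: compare against 0, not 1, so negatives work.
fn is_odd_rem(x: i32) -> bool {
    x % 2 != 0
}

fn main() {
    for x in [-3, -2, -1, 0, 1, 2, 3, i32::MIN, i32::MAX] {
        assert_eq!(is_odd_and(x), is_odd_rem(x));
    }
    assert!(is_odd_and(-3)); // -3 % 2 == -1 in Rust, hence `!= 0`
}
```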


You need to read their entire comment as a joke.


I guess I should have been more clear that I was just pointing out the obvious in case some confused reader missed the joke.

lol


Which was also obvious, but maybe also needed pointing out, which says something about online discussion. Something obvious, probably.


Explaining the joke spoils the joke; such is social convention.


Forgive me for not being funny.


It's alright. I don't make the rules.


> I just wanted to point out that

We already know. Everybody knows. That's the joke. There's no need to point out anything.


How are you able to recognize a joke post but not a joke comment?


I may have missed the * meaning. I got that the bloom filter was an extension of the joke as I mentioned below. I was just clarifying in case someone else missed the joke.


You're absolutely right. The obvious solution would have been to create a boolean table containing all the pre-computed answers, and then simply use the integer you are testing as the index of the correct answer in memory. Now your isEven code is just a simple array lookup! Such an obvious improvement, I can't believe the OP didn't see it.

And with a little extra work you can shrink the whole table's size in memory by a factor of eight, but I'll leave that as an exercise for the interested reader.
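In the spirit of the exercise, a sketch of the eightfold shrink (built honestly from `n % 2`, even though for evenness every table byte happens to come out as 0x55): pack eight precomputed answers into each byte, then index with a shift and a mask.

```rust
// Pack eight boolean answers per byte: bit (n & 7) of byte (n >> 3)
// holds the precomputed isEven answer for n.
fn build_table(limit: usize) -> Vec<u8> {
    let mut table = vec![0u8; (limit + 7) / 8];
    for n in 0..limit {
        if n % 2 == 0 {
            table[n >> 3] |= 1u8 << (n & 7);
        }
    }
    table
}

// Lookup is a shift, a mask, and one byte load.
fn is_even(table: &[u8], n: usize) -> bool {
    table[n >> 3] & (1u8 << (n & 7)) != 0
}

fn main() {
    let t = build_table(1000);
    assert!(is_even(&t, 0));
    assert!(!is_even(&t, 7));
    assert!(is_even(&t, 998));
}
```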


If the "exercise" is to strictly rely on if-else statements, then the obvious speedup is to perform a binary search instead of a linear one. The result would still be horrifically space inefficient, but the speed would be roughly the time it takes to load 32x 4KB pages randomly from disk (the article memory-mapped the file). On a modern SSD a random read is 20 microseconds, so that's less than a millisecond for an even/odd check!

"That's good enough, ship it to production. We'll optimise it later."


Maybe we can even find some correlation in the bit pattern of the input and the Boolean table!


Perhaps, but I fear you’re veering way too much into “clever” territory. Remember, this code has to be understandable to the junior members of the team! If you’re not careful you’ll end up with arcane operators, strange magic numbers, and a general unreadable mess.

The comment you're replying to is also a joke, with some interesting bits and details.


I think I'll just avoid commenting on jokes from now on.


r/whoosh


I remember when this first became an issue; then they tweaked something and I noticed it a lot less. Something changed again recently (in the last couple of years) and this is happening a lot again.

I appreciate how Apple pioneered the touchscreen mobile device, largely due to the implementation of the keyboard, but it needs to be more stable than this.


We've built stacks so high we're afraid to jump off.

Nobody is really competing because nobody can build a complete product, so there's less pressure to fix the little irritations. Users are mostly satisfied, and problems get worse slowly enough that the average user doesn't notice right away how bad things are getting. So they stay, because it's too hard or outright impossible to leave.


I think the bigger issue is the update model. In the past, if a new version sucked, people wouldn't upgrade. Now, with subscriptions and continuous delivery, there's less ability to vote with one's wallet or feet.


That's related.

If you're dependent on updating your OS for security fixes and basic compatibility, you are also forced to update the things you may not want to. It's all bundled together.


But it's not just the OS; it's apps too, to say nothing of web SaaS products.

How many times have you launched something only to find the UI had been redone, some feature was now gone or changed, something that worked was now broken, etc.

But it's fine, you see, because we have telemetry and observability and robust CI/CD.

Users and their work are nothing more than ephemeral numbers on a metrics dashboard.


100%

Ownership is a critical and fading concept for software. And it makes me really sad and frustrated.


Except if you use an OS that respects you, e.g. Debian, where security updates can be installed independently. On phones, there is Mobian.


This does not always hold for specific programs that don't separate their updates. Even then, there are updates you might want, other than security fixes, without updating other parts of the same program. Separate programs can usually be updated individually, but when everything is bundled into one program it becomes more difficult (sometimes configuration can work around this, but not always; sometimes they change things that break that too).


100% this. And cars are going down this road as well. For example, my Tesla Model 3's radio goes bonkers every so often and refuses to change the channel, no matter what I do. Tapping a new channel icon changes the "currently playing" view, but audio from the original channel continues to play. This goes on until you restart the entire UI (by turning off the car or rebooting the display).

But, hey, they managed to add a Tron cross-over tie-in feature, and maybe some new fart noises!

Undoubtedly when they fix that radio bug, something else will fail. Like the SRS (supplemental restraint system, aka airbag) error message that was introduced at some point in the past six months, then silently got fixed with a more recent firmware update.


> But, hey, they managed to add a Tron cross-over tie-in feature, and maybe some new fart noises!

And, you know, FSD 14.2. :)


Quick, give everyone colors to indicate their rank here and ban anyone with a grade less than C-.

Seriously, while I find this cool and interesting, I also fear how these sorts of things will work out for us all.


I want to read a study that compares how much effort readers estimate was put into producing the same page of text in two contemporary, basic serif and sans-serif fonts. My hypothesis is that the serif font is viewed as more polished or refined, and therefore the result of more hours of work. But I could be wrong.

This is in line with the advice here to use serif for long-form text and sans for short. When you're making signs and things like that, readers don't have the repeated forms to help them interpret letters, so the serifs act to confuse; in long-form text, they add flair, which can be more artistic and tasteful.


i3/sway are so much snappier and simpler. I spend basically no time rearranging things with them and I don't have to do awkward drag and drop operations to get things where I want them.


It's been a long time since Apple was really the home for sane defaults for me.

An easy example is how the workspaces rearrange themselves by most recent use, and worse, on iOS there's some seemingly random time interval after which they move themselves following use.


I keep wondering why they aren't doing a "pro" edition of macOS, especially one focused on creatives vs. developers. That's already the case with the hardware.


This is what I am hoping Apple does with iOS and the Mac: iOS becomes the mainstream operating system and the Mac is the "pro" operating system.


Is there a notion of tier 1 and tier 2 certificates? Like, if I set up paid, contract-backed agreements with a cert provider, does this give users more confidence that the lock icon in their browser actually means they are talking to who they think they are?

It's one thing to provide a cert for secure encrypted TLS; it's another to establish identity with the user. Though most users would never notice either way.


There are Extended Validation (EV) certificates, and for a couple of years browsers gave them special treatment (typically a green lock indicator instead of gray, sometimes accompanied by the validated business name). However, they were eventually demoted to the same appearance as ordinary Domain Validation (DV) certificates for a couple of reasons:

1) This is not as useful as it sounds. Business names are not unique, and the legal entity behind a legitimate business may have a different name that no one has ever heard of.

2) Validation gets dicier as the world opens up and as laws and customs change. The higher tier confers greater prestige and legitimacy, but the only discriminator really backing it is money.


Yeah, this was what I thought I'd dealt with before, but I couldn't remember.

It's too bad the same hasn't happened to software notarization and signing systems.

People will argue that having payments enforced some accountability, but I'm not really convinced.

