
> need to make it crystal clear

That's not an upside unique to LLM-written code vs human-written code. When writing it yourself, you also need to make it crystal clear; you just do it in the language of implementation.


And programming languages are designed for clarifying the implementation details of abstract processes, whereas human language is this undocumented, half grandfathered-in, half adversarially-designed instrument for making apes get along (as in, move in the same general direction) without excessive stench.

The humane and the machinic need to meet halfway - any computing endeavor involves not only specifying something clearly enough for a computer to execute it, but also communicating to humans how to benefit from the process thus specified. And that's the proper domain not only of software engineering, but also of the related disciplines (such as the various non-coding roles you'd have in a project team - if you have any luck, that is).

But considering the incentive misalignments that easily come to dominate in this space even when multiple supposedly conscious humans are ostensibly keeping their eyes on the ball, no matter how good the language machines get at doing the job of any of those roles, I will still intuitively mistrust them, exactly as I mistrust any human or organization to responsibly wield the kind of pre-LLM power required for coordinating humans well enough to produce industrial-scale LLMs in the first place.

What's said upthread about the wordbox continually trying to revert you to the mean, as you're trying to prod it with the cowtool of English into outputting something novel, rings very true to me. It's not an LLM-specific selection pressure, but one that LLMs are very likely to have 10x-1000xed as the culmination of a multigenerational gambit of sorts; one whose outset I'd place with the ever-improving immersive simulations that got the GPU supply chain going.


> that's because it was defined in decimal from the start

I mean, that's not quite it. By that logic, had memory been defined in decimal from the start (happenstance), we'd have 4000-byte pages.

Now Ethernet is interesting ... the data rates are defined in decimal, but almost everything else about it is in octets! Starting with the preamble. And the payload is capped at an annoying 1500 (decimal) octets. The _minimum_ frame length is defined so that CSMA/CD works, but the max could have been anything.
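
To make the contrast concrete, a quick Python sketch (my own illustration, not from any spec text): binary addressing forces power-of-two sizes, while Ethernet's figures are just decimal constants someone picked.

    # Sizes that fall out of binary addressing: an n-bit offset field
    # naturally yields a power-of-two size.
    page_offset_bits = 12
    page_size = 2 ** page_offset_bits   # 4096 bytes, never 4000

    # Ethernet's figures, by contrast, are plain decimal constants:
    ETHERNET_MIN_FRAME = 64          # octets, sized so CSMA/CD collision detection works
    ETHERNET_MAX_PAYLOAD = 1500      # octets, a round decimal number
    DATA_RATE_10BASE_T = 10_000_000  # bits/s, decimal "10 Mb/s", not a power of two

    print(page_size, ETHERNET_MIN_FRAME, ETHERNET_MAX_PAYLOAD, DATA_RATE_10BASE_T)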


Looking around their website, they appear to be an enthusiastic novice. I looked around because I wondered: isn't a hardware architecture course part of any first-year syllabus? The author clearly hasn't a clue about hardware or how memory is implemented.

> "The author clearly hasn't a clue about hardware, how memory is implemented."

I'm the author. Actually, I'm quite familiar with how memory addressing works, including concepts related to virtual memory / memory paging. Yes, I'm not a "low-level nerd" with deep knowledge of OS internals, hardware, or machine code / assembly, but I know the basics. And yes, I already mentioned that binary addressing makes more sense for RAM (and most hardware), and yes, I would not expect 4000-byte memory pages or disk clusters.

My main points are:

1) Kilo, mega, etc. prefixes are supposed to be base 10 instead of base 2, but in the tech industry they are often base 2.

2) But this isn't the worst part. While we could agree on a 1024 magnitude for memory, the problem is that the prefixes are still used inconsistently. Sometimes a kilobyte is 1024 bytes, sometimes it's 1000. And this causes confusion. In some contexts, such as a RAM stick or a disk cluster, you can assume base 2, but in other contexts, such as file sizes, it's ambiguous. For example, would it be good if Celsius meant different things? I don't think so; it would certainly complicate things.
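
A quick Python sketch of the arithmetic (my own example, just to illustrate the ambiguity):

    size_bytes = 1_500_000

    # SI (decimal) convention: 1 kB = 1000 bytes
    kb_decimal = size_bytes / 1000   # 1500.0

    # Binary convention: 1 KiB = 1024 bytes (often still labeled "KB")
    kb_binary = size_bytes / 1024    # ~1464.8

    # Same ambiguity at scale: a "500 GB" disk (decimal, per the vendor)
    # shows up as roughly 465 "GB" (really GiB) in many OS utilities.
    disk_bytes = 500 * 10**9
    print(kb_decimal, round(kb_binary, 1), round(disk_bytes / 2**30, 2))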


I would guess this would be a contingency case, which would typically mean a 40% fee.

What about the criminal lawyers they needed when they were charged with crimes? Did they get any money?

When Gmail downloads the image it identifies itself as GoogleImageProxy, and the request will come from a GCP/Google ASN.

A similar signal will be there for any email provider or server-side filter that downloads the content for malware inspection.

Pixel trackers are almost never implemented in-house, because it's basically impossible to run your own email these days. So the tracker is a function of the batteries-included sending email provider. Those guys do this for a living, so they are sophisticated, and they filter out provider downloads of images.
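
For illustration, a rough Python sketch of that kind of filtering. "GoogleImageProxy" really does appear in Gmail's proxy User-Agent, but the second marker and the matching logic here are my own assumptions, not any provider's actual code:

    # Rough sketch: classify a tracking-pixel hit as an automated provider
    # fetch rather than a likely human open, based on the User-Agent header.
    # A real filter would also check whether the source IP belongs to a
    # Google/GCP ASN, as noted above.
    PROXY_UA_MARKERS = (
        "GoogleImageProxy",    # Gmail's image proxy identifies itself with this
        "ExampleMailScanner",  # hypothetical marker, purely for illustration
    )

    def is_proxy_fetch(user_agent: str) -> bool:
        """Return True if the request looks like a proxy/scanner download."""
        return any(marker in user_agent for marker in PROXY_UA_MARKERS)

    ua = "Mozilla/5.0 ... (via ggpht.com GoogleImageProxy)"
    print(is_proxy_fetch(ua))  # True: probably not a real human "open"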


> massive step forward

Umm, anti-glare/matte used to be the norm for LCDs. Around 2005-2006 that changed: as laptops became more of a consumer product, and DVD watching became an important use case, glossy screens became the norm.

https://forum.thinkpads.com/viewtopic.php?t=26396

So I would call it a massive step backwards! The 2006 MBP had an optional glossy screen, and the 2008 model was the first with glossy by default. Around 2012 Apple dropped the matte option altogether.


The screen has an oleophobic coating. That is the danger of alcohol: it strips the coating. For your phone, absolutely don't do this. For your laptop it should be fine.


I've always wondered how long that coating stays on. Most people I know, including myself, use a screen protector anyway.


Not at all. Sensitive strain gauges are commonplace.


I think if you actually tried to build this, you'd find that highly sensitive strain gauges are not sufficient. The human hand is extremely sensitive and can, for example, detect tiny clicks and vibrations that you're not going to get from a simple strain gauge.


> brute force

Rather inelegant; similar to an autodialer for safes.

I was hoping to see something that worked like a human lockpicker!


Yeah! Pretty sure the person silhouette in the first photo is fake, placed there so we can grasp the scale. Great touch.

