Hacker News | uxhacker's comments

Can somebody explain how the software actually adds value? How is it really that amazing? Or is it just a buzzword?


There was a great blog post by an early employee on here a while back that did a deep dive. It's a good read: https://nabeelqu.substack.com/p/reflections-on-palantir


No. That's the point. There's nothing groundbreaking except their branding and lobbying. Every other enterprise software company has data analytics platforms similar to Palantir's - including the big ones like Microsoft and Google. Palantir might even be the best by a big margin, but none of that matters; nothing would justify this valuation. What matters is that governments around the world will buy Palantir to show their allegiance to the US/Trump/Thiel. European far-right politicians will happily push Palantir everywhere.


It’s pretty scary. According to Barron’s, MicroStrategy alone makes up about 5% of the U.S. convertible bond market. That’s remarkable given that it isn’t a typical tech or biotech growth company issuing convertibles, but essentially a Bitcoin treasury company.


Not even AI!


And when will it do class designs? Maybe this would solve many an AI coding issue.


Yes, to loosen the model, but not to have no model. The new idea needs to be reintegrated into the existing world models.

An example would be improvised jazz: the musicians need to bend the rules, but they still need some sense of key and rhythm to make it coherent.


But that world model must have an allowance for doubt inheritance, so that new, better world models can branch off and supplant the old. Nothing about this world might be permanent in the long run, not even physics.


Yes, everyone has a world model; even a toddler has a causal model (“cry → mum comes”).


This is the opposite of a world model and is very much how machine learning works. It's purely correlative, without some underlying theory (the world model). The article's (absurd) example of a person not being a mushroom gives a simple example of a "violation" that having a world model about the kinds of organisms that exist would catch. It's not at all pattern matching, nor is it science in any sense of understanding things from first principles; it's about having an understanding of how you imagine things work that you can sanity-check against.


Maybe the question is about how much of the world model they're conscious of.


What stops imports?


One point is that the US fire services seem to like really massive trucks, but fire trucks overseas are generally smaller.


Roads in the US have more space and homes in the US are more flammable


MAN makes enormous fire trucks, and they are quite popular throughout continental Europe.


Surely, the government and legal liability play no part in this, only bad businesses.


Regulation most likely.


Darn capitalism and its regulation


It’s likely that regulation is too high a barrier to entry for foreign products, and it’s also likely that the regulation was steered by lobbying from the currently dominant vendor.



Capitalism that has defects introduced via regulation is sort of like Communism that has defects introduced by authoritarians: the actual version that gets implemented.


Totally unregulated capitalism only works in a world full of perfectly ethical and moral people. Sounds like an Ayn Rand novel to me!


Probably the government; the also-capitalist buyers would buy cheaper if they could.


Does using AI kill the dopamine loop?

I don’t think so.

I still spend the hours — because it needs to sound original. It needs to feel authentic. I have to add my own personal parts to the story.

I still struggle writing it.

The AI helps, but it doesn’t replace the work. The dopamine’s still there — because I’m still in the loop.


AI kills the fun here for me. Writing is fun. Writing using AI to help is horrible. Same with coding to some extent.


Same, kills the fun. It’s actually made it harder to get started on anything, because I know the starting point and most of the work is just prompts, which I don’t find fun at all. Handcrafting feels more tedious knowing that prompts could do it so much faster. So I end up just disengaging from the activity altogether. This is the second year since about 1995 that my side projects folder has practically nothing new (I’ve built a few things with AI, but I lose interest very fast - like a day or two).

FWIW my context is coding as a hobby/entrepreneur. It’s not my job.


The mainstream writing assistants are dog-shite, but so is everything else! If your idea of writing with AI is ChatGPT with no harness, you're only making a statement about the lowest common denominator of AI tooling, from a position of ignorance. I'd previously helped multiple pen-pals of mine properly harness AI tooling with low-code platforms such as Dify. I'm sure there's plenty more out there, but re: Dify specifically, they took to it rather well. When carefully prompted, some models excel at "editing" more so than at writing from scratch. Not having to rely on professional editors is a huge advantage for aspiring authors who would otherwise struggle to keep on form. In my experience, progressively refining ideas, maintaining notes on the development of characters in long-winded stories, and, soon enough, persistent agents with proactivity and interruptible-work capabilities will vastly reduce the cognitive load that writers have to deal with all the time and that has very little to do with "creativity".

You cannot blame "AI" for your own lack of trying...


Actually, the fact of the matter is that a lot of people derive joy from being the "sole creator" of what they do, or, if they collaborate, from the human relationships that collaboration enriches. So AI fundamentally takes away that joy, because it's outside the parameters of normal creation.


What you allude to is not so much the "fact" as the "heart" of the matter. The availability of AI tooling takes away nothing; you elect either to use it or not. I personally hate having to deal with human editors! Most of them fit into two broad categories: guns-for-hire and genuine collaborators. The "fact" of the matter is that AI does not prevent me from collaborating with any of my peers; however, it does allow me to pseudo-collaborate with writers long dead! In fact, I happen to maintain a collection of theatrical play-journals, riddled with conversations I had at the time with various historical figures vis-à-vis AI. This is the single most valuable source of inspiration, enabling my writing in ways that my peers never could. "AI-assisted writing" is a misnomer: it's not about writing so much as reading, and even more so playing, which is how we get creative.

Wittgenstein would absolutely love it!

It doesn't surprise me that those of us who have failed to keep up with the constantly evolving AI tooling would also make that part of their newly refined, all-human identity. IMHO, similarly to how hating popular things does not make you cool, not using AI does not make you a joyous independent creator bravely holding the post in the treacherous world of AI slop! It sounds more like a fantasy than a coherent creative position. We're still in the early days when it comes to creative-writing comprehension in AI. You may or may not be surprised that there's very little to show for it in terms of evals. Unlike coding and maths, fiction is yet to be recognised as a verifiable domain. (Probably because the probability distribution over fictional outputs doesn't necessarily converge towards related objective rewards!) However, some labs are working on it! There's a huge market for creative writing aids, as they're necessary to everything from education (story-telling is what makes studying worthwhile) to political work.


I think that generalizes to 'creation is fun, using AI to help is horrible'.


I think that generalizes to ‘AI killed the dopamine loop’ XD


AI is an amazing tool that can boost productivity and help with creative inspiration. If an app is making you feel sad, stop using the app.


Can it really help with creative inspiration in the long term? I'd say the answer is no for most people.

And some people need a certain number of others who are also doing the same thing for the love of it. We are a social species after all. AI is taking that away.


I’m very dyslexic, so having AI in the loop is incredibly useful — especially for feedback.

But I have to guide it: “just list the changes,” “use English English,” and so on.

The fun’s still there — because the thinking is still mine.


I wrote a "funny" email to a colleague who asked for a formal request to do a task I asked him for. I took it seriously and wrote extremely formal ("Dearest Steven... " Etc). He laughed and said "did chatgpt write that?".

It made me irrationally angry. No, I spent two minutes of my own brain power to come up with those five sentences. This kind of thing happens constantly now; everyone assumes everyone else uses GPTs for everything, and I find it a bit depressing to be honest.


You've either let AI help you with the 'struggle' of this post, or you've spent so much time with ChatGPT that you've internalized its cadence and patterns. This is straight ChatGPT.


It also reduces the enjoyment of a finished product. You used to write a story or report and be proud of the work. Now your neighbors have done the same with AI, and it feels like it isn't worth it.


This has been the effect of technology for a while, at least mass communications technology. It exposes you to a pseudo-anonymous world of millions of people doing things, but with no context for their creation, only their output.

AI however brings it to a horrific next level, and really emphasizes the mass production of art.


> The AI helps, but it doesn’t replace the work. The dopamine’s still there — because I’m still in the loop.

The problem is that most people need to feel that they are doing something original, and AI takes that away. AI doesn't help anything, except in the short term and maybe for some people who can compartmentalize it. But those people are few and far between indeed.


This reads like it was written by AI.


I've recently also been thinking about Jef Raskin’s book The Humane Interface. It feels increasingly relevant now.

Raskin was deeply concerned with how humans think in vague, associative, creative ways, while computers demand precision and predictability.

His goal was to humanize the machine through thoughtful interface design—minimizing modes, reducing cognitive load, and anticipating user intent.

What’s fascinating now is how AI changes the equation entirely. Instead of rigid systems requiring exact input, we now have tools that are themselves fuzzy and probabilistic.

I keep thinking that the gap Raskin was trying to bridge is closing—not just through interface, but through the architecture of the machine itself.

So AI makes Raskin’s vision more feasible than ever but also challenges his assumptions:

Does AI finally enable truly humane interfaces?


"no" .. intelligent appliance was the product that came out of Raskin's thinking..

I object to the framing of this question directly -- there is no definition of "AI". Secondly, the humane interface is a genre that Jef Raskin shaped and rethought over years. A one-liner here definitely does not embody the works of Jef Raskin.

Off the top of my head, it appears that "AI" enables one-to-many broadcast, service interactions, and knowledge retrieval in a way that was not possible before. The thinking of Jef Raskin was very much along the lines of an ordinary person using computers for their own purposes. "AI" in the supply-side format coming down the road appears to be headed towards societal interactions that depersonalize and segregate individual people. It is possible to engage "AI", whatever that means, to enable individuals as an appliance. This is by no means certain at this time, IMHO.


> Does AI finally enable truly humane interfaces?

Perhaps, but I don't think we're going to see evidence of this for quite a while. It would be really cool if the computer adapted to how you naturally want to use it, though, without forcing you through an interface where you talk/type to it.


"Does AI finally enable truly humane interfaces?"

I think it does; LLMs in particular. AI also enables a ton of other things, many of them inhumane, which can make it very hard to discuss these things as people fixate on the inhumane. (Which is fair... but if you are BUILDING something, I think it's best to fixate on the humane so that you conjure THAT into being.)

I think Jef Raskin's goal with a lot of what he proposed was to connect the computer interface more directly with the user's intent. An application-oriented model really focuses so much of the organization around the software company's intent and position, something that follows us fully into (most of) today's interfaces.

A magical aspect of LLMs is that they can actually fully vertically integrate with intent. It doesn't mean every LLM interface exposes this or takes advantage of it (quite the contrary!), but it's _possible_, and it simply wasn't possible in the past.

For instance: you can create an LLM-powered piece of software that collects (and allows revision of) some overriding intent. Just literally take the user's stated intent and put it in a slot in all following prompts. This alone will have a substantial effect on the LLM's behavior! And importantly, you can ask for their intent, not just their specific goal. Maybe I want to build a shed, and I'm looking up some materials... the underlying goal can inform all kinds of things, like whether I'm looking for used or new materials, aesthetic or functional, etc.
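
As a rough sketch of that intent-slot idea (purely illustrative: `call_llm` and `IntentfulAssistant` are made-up names, not any real API):

    # Rough sketch of "intent slotting": keep the user's stated intent and
    # inject it into every prompt, so each specific request carries the goal.
    # `call_llm` is a made-up stand-in for whatever chat API you actually use.

    def call_llm(messages: list[dict]) -> str:
        raise NotImplementedError("wire up your chat-completion client here")

    class IntentfulAssistant:
        def __init__(self, intent: str):
            self.intent = intent  # e.g. "I'm building a shed on a tight budget"

        def revise_intent(self, new_intent: str) -> None:
            self.intent = new_intent  # the user can restate the goal at any time

        def ask(self, request: str) -> str:
            messages = [
                {"role": "system",
                 "content": f"The user's overriding intent: {self.intent}. "
                            "Keep every answer aligned with that intent."},
                {"role": "user", "content": request},
            ]
            return call_llm(messages)

    # assistant = IntentfulAssistant("build a cheap shed from used materials")
    # assistant.ask("suggest roofing options")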

To accomplish something with a computer we often thread together many different tools. Each of them is generally defined by its function (photo album, email client, browser-that-contains-other-things, and so on). It's up to the human to figure out how to assemble these, and at each step it's easy to become distracted or confused, or to lose track of context. And again, an LLM can engage with the larger task in a way that wasn't possible before.


Tell me, how does doing any of the things you've suggested help with the huge range of computer-driven tasks that have nothing to do with language? Video editing, audio editing, music composition, architectural and mechanical design: the list is vast and nearly endless.

LLMs have no role to play in any of that, because their job is text generation. At best, they could generate excerpts from a half-imagined user manual ...


Because some LLMs are now multimodal—they can process and generate not just text, but also sound and visuals. In other words, they’re beginning to handle a broader range of human inputs and outputs, much like we do.


Those are not LLMs. They use the same foundational technology (pick what you like, but I'd say transformers) to accomplish tasks that require entirely different training data and architectures.

I was specifically asking about LLMs because the comment I replied to only talked about LLMs - Large Language Models.


At this point in time calling a multimodal LLM an LLM is pretty uncontroversial. Most of the differences lie in the encoders and embedding projections. If anything I'd think MoE models are actually more different from a basic LLM than a multimodal LLM is from a regular LLM.

Bottom line is that when folks are talking about LLM applications, multimodal LLMs, MoE LLMs, and even agents are all in the general umbrella.


Multimodal LLMs are absolutely LLMs, the language is just not human language.


Everything has to do with language! Language is a way of stating intention, of expressing something before it exists, of talking about goals and criteria. Every example you give can be described in language. You are caught up in the mechanisms of these tools, not the underlying intention.

You can describe your intention in any of these tools. And it can be whatever you want... maybe your intention in an audio editor is "I need to finish this before the deadline in the morning but I have no idea what the client wants" and that's valid, that's something an LLM can actually work with.

HOW the LLM is involved is an open question, something that hasn't been done very well, and may not work well when applied to existing applications. But an LLM can make sense of events and images in addition to natural language text. You can give an LLM a timestamped list of UI events and it can actually infer quite a bit about what the user is actually doing. What does it do with that understanding? We're going to have to figure that out! These are exciting times!


What if you could pilot your video editing tool through voice? Have a multimodal LLM convert your instructions into some structured data instruction that gets used by the editor to perform actions.
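
A rough sketch of what that structured-instruction layer might look like (everything here is hypothetical: the command set, `transcribe_audio`, and `llm_complete` are illustrative stand-ins, not real APIs):

    # Hypothetical voice -> structured command -> editor action pipeline.
    # The helpers below are stand-ins; none of them name a real library call.
    import json

    EDITOR_COMMANDS = {"cut", "trim", "color_grade", "add_transition"}

    def transcribe_audio(audio_bytes: bytes) -> str:
        raise NotImplementedError("plug in a speech-to-text model here")

    def llm_complete(prompt: str) -> str:
        raise NotImplementedError("plug in a multimodal LLM call here")

    def voice_to_command(audio_bytes: bytes) -> dict:
        """Turn a spoken instruction into a JSON command the editor can run."""
        transcript = transcribe_audio(audio_bytes)
        prompt = (
            "Convert this editing instruction into JSON with keys "
            "'command', 'clip_id', and 'params'.\nInstruction: " + transcript
        )
        cmd = json.loads(llm_complete(prompt))
        if cmd.get("command") not in EDITOR_COMMANDS:
            raise ValueError(f"unsupported command: {cmd!r}")
        return cmd  # e.g. {"command": "trim", "clip_id": 7, "params": {"seconds": 2}}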


Compare pinch zoom to the tedious scene in Bladerunner where Deckard is asking the computer to zoom in to a picture.


Zooming is a bad example (because pinch zoom is just so much better than that scene hah.) Instead "go back 5 frames, and change the color grading. Make the mood more pensive and bring out blues and magentas and fewer yellows and oranges." That's a lot faster than fiddling with 2-3 different sliders IMO.


> Zooming is a bad example (because pinch zoom is just so much better than that scene hah.) Instead "go back 5 frames, and change the color grading. Make the mood more pensive and bring out blues and magentas and fewer yellows and oranges." That's a lot faster than fiddling with 2-3 different sliders IMO.

Eh. That's not as good as being skilled enough to know exactly what you want and have the tools to make that happen.

There's something to be said for tools that give you the power to manipulate something efficiently, rather than systems that do the manipulation for you.


> Eh. That's not as good as being skilled enough to know exactly what you want and have the tools to make that happen.

I mean, do you know that? A tool that offers this audible fluent experience needs to exist before you can make that assessment right? Or are vibes alone a strong enough way to make this judgement? (There's also some strong "Less space than a Nomad. Lame" energy in this post lol.)

Moreover why can't you just have both? When I fire up Lightroom, sure I have easy mode sliders to affect "warmth" but then I have detailed panels that let me control the hue and saturation of midtones. And if those panels aren't enough I can fire up Photoshop and edit to my heart's content.

Nothing is stopping you from taking your mouse in hand at any point and saying "let me do it" and pausing the LLM to let you handle the hard bits. The same way programmers rely on compilers to generate most machine or VM code and only write machine code when the compiler isn't doing what the programmer wants.

So again, why not?


> So again, why not?

Because at my heart I'm a humanist, and I want tools that allow and encourage humans to have and express mastery themselves.

> Nothing is stopping you from taking your mouse in hand at any point and saying "let me do it" and pausing the LLM to let you handle the hard bits. The same way programmers rely on compilers to generate most machine or VM code and only write machine code when the compiler isn't doing what the programmer wants.

IMHO, good tools are deterministic, so a compiler (to use your example) is a good tool, because you can learn how it functions and gain mastery over it.

I think an AI easy-button is a bad tool. It may get the job done (after a fashion), but there's no possibility of mastery. It's making subjective decisions and is too unpredictable, because it's taking the task on itself.

And I don't think bad tools should be built, because of the weaknesses of human psychology. Something is stopping you "from taking your mouse in hand at any point and saying 'let me do it'," and it's those weaknesses. You either take the shortcut or have to exercise continuous willpower to decline it, which can be really hard and stressful. I don't think we should build bad tools that put people in that situation.

And you're not going to make any progress with me by arguing based on precedent of some widely-used bad tool. Those tools were likely a mistake too. For a long time, our society has been putting technology for its own sake ahead of people.


> And you're not going to make any progress with me by arguing based on precedent of some widely-used bad tool. Those tools were likely a mistake too. For a long time, our society has been putting technology for its own sake ahead of people.

Your comment is pretty frustrating. HN has definitely become more of a "random internet comments" forum over the years, drifting from its more grounded focus. But even when "random internet comments" talk to each other, you expect a forthrightness to discuss and talk. My reading of your comment is that you have a strong opinion, you're injecting that opinion, but you're not open to discussion of your opinion. This statement makes me feel like my time spent replying to you was a waste.

Moreover I feel like an attitude of posting but not listening when using internet forums is corrosive. In fact, when you call yourself a humanist, this confuses and frustrates me even more because I feel it's human to engage with an argument or just stop discussing when engagement is fruitless. Stating your opinion constantly without room for discussion seems profoundly inhuman to me, but I also suspect we're not going to have a productive discussion from here so I will heed my own feelings and disengage. Have a nice day.


> My reading of your comment is that you have a strong opinion, you're injecting that opinion, but you're not open to discussion on your opinion. This statement makes me feel like my time spent replying to you was a waste.

Eh, whatever. I was just trying to prevent the possibility of a particularly tiresome cookie-cutter "argument" I've seen a million times around here. I don't know if you were actually going to make it, but we're in the context where it's likely to pop up, and it'd just waste everyone's time.

Also this isn't really opinion territory, it's more values territory.


Training LLMs to generate some internal command structure for a tool is conceptually similar to what we've done with them already, but the training data for it is essentially non-existent, and would be hard to generate.


My experience has been that generating structured output with zero-, one-, and few-shot prompts works quite well. We've used it at $WORK for zero-shot stuff and it's been good enough. I've done few-shot prompting for some personal projects and it's been solid. JSON Schema-based enforcement of responses with temperature-0 settings works quite well. Sometimes LLMs hallucinate their responses, but if you keep output formats fairly constrained (e.g. structured dicts of booleans) it decreases hallucinations, and even when they do hallucinate, at temperature 0 it seems to stay within < 0.1% of responses even with zero-shot prompting. (At least with the datasets and prompts I've considered.)

(Though yes, keep in mind that 0.1% hallucination = 99.9% correctness which is really not that high when we're talking about high reliability things. With zero-shot that far exceeded my expectations though.)
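
For what it's worth, the basic pattern looks roughly like this (a sketch, not production code; `call_model` is a placeholder for whichever LLM client you use, and the schema is invented for the example):

    # Sketch of schema-constrained output: ask for JSON that matches a tight
    # schema, validate it, and retry/reject anything that doesn't conform.
    import json
    from jsonschema import ValidationError, validate  # pip install jsonschema

    SCHEMA = {
        "type": "object",
        "properties": {
            "is_spam": {"type": "boolean"},
            "contains_pii": {"type": "boolean"},
        },
        "required": ["is_spam", "contains_pii"],
        "additionalProperties": False,
    }

    def call_model(prompt: str, temperature: float = 0.0) -> str:
        raise NotImplementedError("placeholder for your LLM client")

    def classify(text: str, retries: int = 2) -> dict:
        prompt = (
            "Return only JSON matching this schema:\n"
            + json.dumps(SCHEMA)
            + "\nText to classify: " + text
        )
        for _ in range(retries + 1):
            raw = call_model(prompt, temperature=0.0)
            try:
                result = json.loads(raw)
                validate(instance=result, schema=SCHEMA)
                return result  # constrained dict of booleans, as described above
            except (json.JSONDecodeError, ValidationError):
                continue  # malformed or hallucinated shape: try again
        raise RuntimeError("model never produced schema-valid output")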


Deckard. Blade Runner.


> Does AI finally enable truly humane interfaces?

This is something I keep tossing over in my head. Multimodal capabilities of frontier models right now are fantastic. Rather than locking into a desktop with peripherals or hunching over a tiny screen and tapping with thumbs we finally have an easy way to create apps that interact "natively" through audio. We can finally try to decipher a user's intent rather than forcing the user to interact through an interface designed to provide precise inputs to an algorithm. I'm excited to see what we build with these things.


Highly recommended, timeless read!


Dude is responsible for one-button mouse ...


I’m not a mathematician (just a programmer), but reading this made me wonder—doesn’t this kind of dimensional weirdness feel a bit like how LLMs organize their internal space? Like how similar ideas or meanings seem to get pulled close together in a way that’s hard to visualize, but clearly works?

That bit in the article about knots only existing in 3D really caught my attention. "And dimension 3 is the only one that can contain knots — in any higher dimension, you can untangle a knot even while holding its ends fast."

That’s so unintuitive… and I can't help thinking of how LLMs seem to "untangle" language meaning in some weird embedding space that’s way beyond anything we can picture.

Is there a real connection here? Or am I just seeing patterns where there aren’t any?


> That’s so unintuitive…

It's pretty simple, actually. Imagine you have a knot you want to untie. Lay it out in a knot diagram, so that there are just finitely many crossings. If you could pass the string through itself at any crossing, flipping which strand is over and which is under, it would be easy, wouldn't it? It's only knotted because those over/unders are in an unfavorable configuration. Well, with a 4th spatial dimension available, you can't pass the string through itself, but you can still invert any crossing by using the extra dimension to move one strand around the other, in a way that wouldn't be possible in just 3 dimensions.

> Or am I just seeing patterns where there aren’t any?

Pretty sure it's the latter.


That makes sense for a 2D rope in 4D space, but I’m not convinced the same approach holds for a 3D ”hyperrope” in 4D space.


Your intuition is correct, it doesn't! A "3D hyperrope" is in fact just the surface of a ball[1], and it turns out that you can actually form non-trivial knots of that spherical surface in a 4-dimensional ambient space (and analogously they can be un-knotted if you then move up to 5-dimension ambient space, although the mechanics for doing so might be a little trickier than in the 1d-in-4d case). In fact, if you have a k-dimensional sphere, you can always knot it up in a k+2 dimensional ambient space (and can then always be unknotted if you add enough additional dimensions).

[1] note that a [loop of] rope is actually a 1-dimensional object (it only has length, no width), so the next dimension up should be a 2-dimensional object, which is true of the surface of a ball. a topologist would call these things a 1-sphere and a 2-sphere, respectively


Any time I am tempted to feel smart, I try to go and study some linear algebra and walk away humbled. I will be spending 20-30 minutes probably trying to understand what you said (and I think you typed it out quite reasonably), but first I have to figure out how... a 3D hyperrope is the same as a surface of a ball...


I'm not sure what you mean here. This is discussing a 1-dimensional structure embeded in 4-dimensional space. If you're not sure it works for something else, well, that isn't what's under discussion.

If you just mean you're just unclear on the first step, of laying the knot out in 2D with crossings marked over/under, that's always possible after just some ordinary 3D adjustments. Although, yeah, if you asked me to prove it, I dunno that I could give one, I'm not a topologist... (and I guess now that I think about it the "finitely many" crossings part is actually wrong if we're allowing wild knots, but that's not really the issue)


There is a real connection insofar as the internal space of an LLM is a vector space so things which hold for vector spaces hold for the internal space of an LLM. This is the power of abstract algebra. When an algebraic structure can be identified you all of a sudden know a ton of things about it because mathematicians have been working to understand those structures for a while.

The internal space of an LLM would also have things in common with how, say, currents flow in a body of water, because that too is a vector space. When you study this stuff you get this sort of zen sense of everything getting connected to everything else. E.g. in one of my textbooks you look at how pollution spreads through the Great Lakes, and then literally the next example looks at how drugs are absorbed into the bloodstream through the stomach, and it's exactly the same dynamic matrix and set of differential equations. Your stomach works the same as the Great Lakes on a really fundamental level.
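
To make that concrete, both examples boil down to a linear compartmental model of the same shape (a generic two-compartment sketch, not the textbook's exact numbers):

    \frac{d\mathbf{x}}{dt} = A\,\mathbf{x}, \qquad
    A = \begin{pmatrix} -k_{1} & 0 \\ k_{1} & -k_{2} \end{pmatrix}, \qquad
    \mathbf{x}(t) = e^{At}\,\mathbf{x}(0)

Whether x holds the amount of pollutant in each lake or the amount of drug in the stomach and bloodstream, only the rate constants k_1 and k_2 change; the dynamics and the solution have the same form.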

The spaces being described here are a little more general than vector spaces, so some of the things which are true about vector spaces wouldn’t necessarily work the same way here.


> The spaces being described here are a little more general than vector spaces

You probably mean considerably more special than a general vector space. We do have differentiable manifolds here.


If you're holding a hammer, everything looks like a nail ...


I would be careful about drawing any analogies which are “too cute”. We use LLMs because they work, not because they are are theoretically optimal. They are full of lossy tradeoffs that work in practice because they are a good match for the hardware and data we have.

What is true is that you can get good results by projecting lower dimensional data into higher dimensions, applying operations, and then projecting it back down.


> "And dimension 3 is the only one that can contain knots — in any higher dimension, you can untangle a knot even while holding its ends fast."

Maybe you could create "hyperknots", e.g. in 4D a knot made of a surface instead of a string? Not sure what "holding one end" would mean though.


Yes, circles don't knot in 4D, but the 2-sphere does: https://en.wikipedia.org/wiki/Knot_theory#Higher_dimensions

Warning: If you get too deep into this, you're going to find yourself dealing with a lot of technicalities like "are we talking about smooth knots, tame knots, topological knots, or PL knots?" But the above statement I think is true regardless!


Yep — you can always “knot” a sphere of two dimensions lower, starting with a circle in 3D and a sphere in 4D.


It's not just LLMs. Deep learning in general forms these multi-d latent spaces


When you untie a knot, its ends are fixed in time.

Humans also unravel language meaning from within a hyper dimensional manifold.


I don't think this is true, I believe humans unravel language meaning in the plain old 3+1 dimensional Galilean manifold of events in nonrelativistic spacetime, just as animals do with vocalizations and body language, and LLM confabulations / reasoning errors are fundamentally due to their inability to access this level of meaning. (Likewise with video generators not understanding object permanence.)


> Or am I just seeing patterns where there aren’t any?

Meta: there are patterns to seeing patterns, and it's good to understand where your doubt springs from.

1: hallucinating connections/metaphors can be a sign you're spending too much time within a topic. The classic is binging on a game for days, and then resurfacing back into a warped reality where everything you see relates back to the game. Hallucination is the wrong word, sorry: because sometimes the metaphors are deeply insightful and valuable, e.g. new inventions or unintuitive cross-discipline solutions to unsolved maths problems. Watch when others see connections to their pet topics: eventually you'll learn to internally discern your valuable insights from your more fanciful ones. One can always consider whether a temporary change to another topic would be healthy. However, sometimes diving deeper helps. How to choose??

2: there's a narrow path between valuable insight and debilitating overmatching. Mania and conspiratorial paranoia find amazing patterns; however, they tend to be rather unhelpful overall. Seek a good balance.

3: cultivate the joy within yourself and others; art and poetry are fun. Finding crazy connections is worthwhile and often a basis for humour. Engineering is inventive, and being a judgy killjoy is unhealthy for everyone.

Hmmm, I usually avoid philosophical stuff like that. Abstract stuff is too difficult to write down well.


A lot of innovation is stealing ideas from two domains that often don’t talk to each other and combining them. That’s how we get simultaneous invention. Two talented individuals both realize that a new fact, when combined with existing facts, implies the existence of more facts.

Someone once asserted that all learning is compression, and I’m pretty sure that’s how polymaths work. Maybe the first couple of domains they learn occupy considerable space in their heads, but then patterns emerge, and this school has elements from these other three, with important differences. X is like Y except for Z. Shortcut is too strong a word, but recycling perhaps.


I'm unsure if I misunderstand you or your writing ingroup!

> learning is compression

I don't think I know enough about compression to find that metaphor useful

> occupy considerable space in their heads

I reckon this is a terribly misleading cliche. Our brains don't work like hard drives. From what I see we can keep stuffing more in there (compression?). Much of my past learning is now blurred but sometimes it surfaces in intuitions? Perhaps attention or interest is a better concept to use?

My favorite thing about LLMs is wondering how much of people's (or my own) conversations are just LLMs. I love the idea of playing games with people to see if I can predictably trigger phrases from them, but unfortunately I would feel like a heel doing that (so I don't). And catching myself doing an LLM reply is wonderful.

Some of the other sibling replies are also gorgeously vague-as (and I'm teasing myself with vagueness too). Abstracts are so soft.


If you have some probability distribution over finite sequences of bits, a stream of independent samples drawn from that distribution can be compressed so that the number of bits in the compressed stream per sample from the original stream is (in the long run) the (base-2) entropy of the distribution. Likewise, if instead of independent samples from a distribution there is a Markov process or something like that, with some fixed average rate of entropy.
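
In symbols, that per-sample limit is just the Shannon entropy (the standard source-coding bound from any information theory textbook, nothing specific to this thread):

    H(X) = -\sum_{x} p(x)\,\log_2 p(x) \quad \text{bits per sample}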

The closer one can get to this ideal, the closer one has to a complete description of the distribution.

I think this is the sort of thing they were getting at with the compression comment.


I think LLM layers are basically big matrices, which are one of the most popular many-dimensional objects that us non-mathematician mortals get to play with.


The BBC article seems to overstate the uniqueness of the service at Great Ormond Street Hospital. While it’s true that they are the first in the UK to offer a UKAS-accredited clinical metagenomics service (UKAS being the United Kingdom Accreditation Service, which certifies labs to meet medical testing standards like ISO 15189), metagenomics itself is already being used in several other places across the UK.

For example, the Earlham Institute, the University of Oxford, and the UK Health Security Agency are all actively involved in metagenomics research and surveillance.

See, for example: https://www.phgfoundation.org/blog/metagenomic-sequencing-in...

https://www.earlham.ac.uk/events/nanopore-metagenomics-sampl...


It might be so that the plebs don't start demanding it from the NHS.

