I don't know why, it's just an irrational form of first-principles admiration for me.
This is especially true in the age of LLMs (but the same applies to social media forums and the like). Sure, we should "just judge arguments on their merit," but there's something... suspicious. A thought experiment: what if something arrived at a very reasonable-seeming argument in 10 minutes, versus 10 hours? I can't help but feel suspicious that I'm being tricked by some ad-hoc framing that is completely bogus in reality. "Obvious" conclusions can be quietly shaped by well-hidden premises; things can be "locally logically correct" but horrible from a global view.
Maybe I'm way too cynical from seeing the same arguments over and over: people stripping out their view of the elephant that they intuited in 5 minutes, treating it as an authoritative slice, and stubbornly refusing to admit that that constraint is, well, a constraint, and not an "objective" slice. Like, yes, within your axioms and model, sure, but pretending you found a grand unification in 5 minutes is absurd, and in practice people behave this way online.
(Point being that, okay, even if you don't buy that argument when it comes to LLMs, when it comes to a distributed internet setting, my intuition holds much more strongly, for me at least. Even if everybody were truly an expert, argument JITing would still be a problem.)
Of course, in practice, when I do decide something is "valuable" enough for me to look at, I take apart the argument logically to the best of my ability, etc., but I've been filtering what to look at a lot more aggressively based on this criterion. And yes, it's a bit circular, but I think I've realized that a lot of really complicated, wishy-washy things are hard for a reason :)
All that to say: yeah, the human element is important for me here :D. I find that, when it comes to consumption, if the source is a singular human, it's much harder to run into that issue. They at least have some semblance of consistency, and it's "real/emergent" in a sense. The more you learn about someone, the more they're truly unique. You can't just JIT a reductionist argument in 10 minutes.
You took a very specific argument, abstracted it, then posited your worldview.
What do you have to say about the trillions of circular dollars going around 7 companies building huge data centers, all while expecting smaller players to just subsidize them?
Sure, you can elide the argument by saying, "actually that doesn't matter because I am really smart and understood what the author really was talking about, let me reframe it properly".
I don't really have a response to that. You're free to do what you please. To me, something feels very wrong with that and this behavior in general plagues the modern Internet.
I don't think Knuth does modern TCS stuff; the "old guard" (80s-ish) focused on either classical algorithms/combinatorics or the start of systems programming (databases, networks, operating systems). Yes, Knuth did quite a bit of math in TAOCP, but those are very much "old" techniques.
Modern TCS is about unifying a lot of the ad-hoc approaches of old, as well as analyzing different models of computation that better model reality (external memory, streaming, distributed, etc.).
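To make "streaming" concrete: the model assumes one pass over the data and memory far smaller than the input, and you prove guarantees under that constraint. Here's a toy sketch of the classic Misra-Gries heavy-hitters algorithm (my illustration, not anything from the parent comment):

    # Misra-Gries heavy hitters: one pass over the stream, at most k-1
    # counters in memory, no random access -- the streaming model's rules.
    def misra_gries(stream, k):
        counters = {}
        for item in stream:
            if item in counters:
                counters[item] += 1
            elif len(counters) < k - 1:
                counters[item] = 1
            else:
                # Item matched nothing and we're out of space: decrement
                # every counter and evict those that hit zero.
                for key in list(counters):
                    counters[key] -= 1
                    if counters[key] == 0:
                        del counters[key]
        return counters  # contains every item occurring more than n/k times

    print(misra_gries("aabacaadaaab", 3))  # 'a' dominates -> {'a': 6}

The point is the provable guarantee under the resource constraint, which is exactly the kind of analysis the older toolbox wasn't built around.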
The issue is that real life is not that adaptable. Resources and capital move slowly.
That's the whole issue with monopolies for example, innit? We envision "ideal free market dynamics" yet in practice everybody just centralizes for efficiency gains.
Right, and my point is that "ideal free market dynamics" conveniently ignores the failure state that always seems to emerge as a logical consequence of its tenets.
I don't have a better solution, but it's a clear problem. Also, for some reason, more and more people (not you) will praise state A (the ideal equilibrium) and attack anyone who doesn't defend it, leaving no room to point out that state B is a logical consequence of A and requires intervention.
The definition of a monopoly basically resolves to "those companies that don't get pressured to meaningfully compete on price or quality", it's a tautology. If a firm has to compete, it doesn't remain a monopoly. What's the point you're making here?
My terminal is the only user-friendly way to interact with a variety of system resources on Linux (generally, implicitly, because of the filesystem API). I don't go looking for named pipes in VS Code; I go to the terminal and understand the problem structure there, especially when autogenerated pipelines come into the mix. If I need to look at daemons or tune perf, I also reach for the terminal.
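As a concrete illustration of "the filesystem is the interface": named pipes are just files with a mode bit set, so a few lines against the stdlib (or `find -type p` in a shell) will enumerate them. A minimal sketch; the `/tmp` root is an arbitrary assumption on my part:

    import os
    import stat

    def find_fifos(root):
        """Yield paths of named pipes (FIFOs) under root."""
        for dirpath, _dirs, files in os.walk(root):
            for name in files:
                path = os.path.join(dirpath, name)
                try:
                    # lstat reads metadata only; it never opens (or blocks on) the FIFO
                    mode = os.lstat(path).st_mode
                except OSError:
                    continue  # file vanished or permission denied
                if stat.S_ISFIFO(mode):
                    yield path

    for p in find_fifos("/tmp"):
        print(p)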
Any solution has to address this use case first, IMO. There are some design constraints here, like:
- I don't care about video game levels of graphics
- I generally want things to feel local, as opposed to say some cloud GUI
- byte stream model: probably bad? But how would I do better? (one stab at this in the sketch after this list)
as just a few examples I thought of in 10 seconds; there's probably way more.
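On the byte-stream bullet: one existing alternative is structured output, where the tool emits records instead of text to scrape. iproute2's `ip` really does take a `-json` flag, so a sketch (assuming Linux with iproute2 installed) looks like:

    import json
    import subprocess

    # Ask `ip` for machine-readable output instead of scraping columns.
    result = subprocess.run(
        ["ip", "-json", "addr"], capture_output=True, text=True, check=True
    )
    for iface in json.loads(result.stdout):
        addrs = [a["local"] for a in iface.get("addr_info", [])]
        print(iface["ifname"], addrs)

Whether that generalizes into a better terminal-wide model is exactly the open question.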
I've thought about the author's exact complaints for months, as an avid tmux/neovim user, but the ability to interact with system primitives on a machine that I own and understand is important.
But hey, those statements are design constraints too - modern machines are somewhat tied to Unix, but not really. Sysadmin stuff? It got standardized into things like systemd, so maybe it's a bit easier.
So it's not just a cynical mess of "everything is shit, so let's stick to terminals!" But I'd like to see more actual consideration of the underlying systems you are, fundamentally, operating on, rather than immediately jumping to "how do we design the best terminal?" (effectively UI). The actual workflow of being a systems plumber happens to be aided very well by tmux and vim :)
(And to be fair, I only make this critique because I had this vague feeling for a while about this design space, but couldn't formalize it until I read this article).
The Bell Labs we look back on was only the result of government intervention in the telecom monopoly. The 1956 consent decree forced Bell to license thousands of its patents, royalty free, to anyone who wanted to use them. Any patent not listed in the consent decree was to be licensed at "reasonable and nondiscriminatory rates."
The US government basically forced AT&T to use revenue from its monopoly to do fundamental research for the public good. Could the government do the same thing to our modern megacorps? Absolutely! Will it? I doubt it.
There used to be Google X. Not sure at what scale it operated.
But if any state/central bank were clever, it would subsidize this.
That's a better trickle down strategy.
Until we get to AGI and all new discoveries are autonomously led by AI, that is :p
> Google X is a complete failure
- Google Brain
- Google Watch/Wear OS
- Gcam/Pixel Camera
- Insight (indoor GMaps)
- Waymo
- Verily
It is a moonshot factory, after all, not a "we're only going to do things that are likely to succeed" factory. It's an internal startup space, which comes with high failure rates. But these successes seem pretty successful. Even the failed Google Glass seems to have led to learning, though they probably should have kept the team going, considering the success of Meta Ray-Bans and things like Snap's glasses.
Didn't the current LLMs stem from this...? Or it might be Google Brain instead. For Google X, there is Waymo? I know a lot of stuff didn't pan out. This is expected. These were 'moonshots'.
But the principle is there. I think that when a company sits on a load of cash, that's what it should do. Either that or become a kind of alternative-investments allocator. These are risky bets, but companies should be incentivized to take those risks, from a fiscal-policy standpoint for instance.
Well it probably is the case already via lower taxation of capital gains and so on.
But there should probably exist a more streamlined framework to make sure incentives are aligned.
And/or assigned government projects?
Besides implementing their Cloud infrastructure that is...
It seems DeepMind is the closest thing to a well-funded blue-sky AI research group, even after the merger with Google Brain and the shift toward more of a product focus.
Google DeepMind is the closest lab to that idea because Google is the only entity big enough to get close to the scale of AT&T. I was skeptical that the DeepMind and Google Brain merger would be successful, but it seems to have worked surprisingly well. They are killing it with LLMs and image-editing models. They are also backing the fastest-growing cloud business in the world and collecting Nobel Prizes along the way.
I thought that was Google. Regulators pretend not to notice their monopoly, they probably get large government contracts for social engineering and surveillance laundered through advertising, and the "don't be evil" part is that they make some open-source contributions.
I'd argue SSI and Thinking Machines Lab resemble the environment you're thinking about: industry labs that focus on research without immediate product requirements.
I don't think that quite matches because those labs have very clear directions of research in LLMs. The theming is a bit more constrained and I don't know if a line of research as vague as what LeCun is pursuing would be funded by those labs.
> A pipe dream sustaining the biggest stock market bubble in history
This is why we're losing innovation.
Look at electric cars, batteries, solar panels, rare earths, and many more. Bubble, or struggle for survival? Right, because if the US has no AI, the world will have no AI? That's the real bubble - being stuck in an ancient worldview.
Meta's stock has already tanked for "over"-investing in AI. Bubble, where?
> 2 Trillion dollars in Capex to get code generators with hallucinations
You assume that's the only use of it.
And are people not using these code generators?
Is this an issue with a lost generation that forgot what Capex is? We've moved from Capex to Opex and now the notion is lost, is it? You can hire an army of software developers but can't build hardware.
Is it better when everyone buys DeepSeek or a non-US version? Well, then you don't need to spend the Capex, but you won't have revenue either.
And that $2T you're referring to includes infrastructure like energy, data centers, servers, and many other things. DeepSeek rents from others. Someone is paying.
Man, why did no one tell the people who invented bronze that they weren’t allowed to do it until they had a correct definition for metals and understood how they worked? I guess the person saying something can’t be done should stay out of the way of the people doing it.
>> I guess the person saying something can’t be done should stay out of the way of the people doing it.
I'll happily step out of the way once someone simply tells me what it is you're trying to accomplish. Until you can actually define it, you can't do "it".
The big tech companies are trying to make machines that replace all human labor. They call it artificial intelligence. Feel free to argue about definitions.
I'm not sure what 'inventing bronze' is supposed to be here. 'Inventing' AGI is pretty much equivalent to creating new life from scratch. And we don't have any idea how to do that either, or how life came to be.
Intelligence and human health can't be defined neatly; they are what we call suitcase words. If there's a physiological tradeoff in medical research between, say, living to 500 years and being able to lift 1000 kg in one's youth, those are different dimensions/directions along which we can make progress. The same goes for intelligence. I think we are on the right track.
I don't think the bar exam is scientifically designed to measure intelligence so that was an odd example. Citing the bar exam is like saying it passes the "Game of thrones trivia" exam so it must be intelligent.
As for IQ tests and the like, to the extent they are "scientific," they are designed based on empirical observations of humans. They are not designed to measure the intelligence of a statistical system containing a compressed version of the internet.
Or does this just prove lawyers are artificially intelligent?
Yes, a glib response, but think about it: we define an intelligence test for humans, which by definition is an artificial construct. If we then get a computer to do well on the test, we haven't proved it's on par with human intelligence, just that both meet some of the markers the test makers use as rough proxies for human intelligence. Maybe this helps signal or judge whether AI is a useful tool for specific problems, but it doesn't mean AGI.
Hi there! :) Just wanted to gently flag that one of the terms (beginning with the letter "r") in your comment isn't really aligned with the kind of inclusive language we try to encourage across the community. Totally understand it was likely unintentional - happens to all of us! Going forward, it'd be great to keep things phrased in a way that ensures everyone feels welcome and respected. Thanks so much for taking the time to share your thoughts here!
I became interested in the matter reading this thread and vaguely remember reading a couple of the articles. Saved them all in NotebookLM to get an audio overview and to read later. Thanks!
I always take a bird's eye kind of view on things like that, because however close I get, it always loops around to make no sense.
> is massively monopolistic and have unbounded discretionary research budget
That is the case for most megacorps, if you look at all the financial instruments.
Modern monopolies are not equal to single-corporation domination; modern monopolies are portfolios that do business using the same methods and strategies.
The problem is that private interests strive mostly for control, not money or progress. If they have to spend a lot of money to stay in control of (their (share of the)) segments, they will do that, which is why stuff like the current graph of investments of, by, and for AI companies and their industries works.
A modern equivalent, with the "breadth" and R&D speed of a Bell Labs (et al.), could not be controlled and would 100% result in actual artificial intelligence, vs. all those white-labelababbebel (sry) AI toys we get now.
Post-WWI and WWII "business psychology" has built a culture that cannot thrive in a free world (free as in undisturbed and left to all devices available), for a variety of reasons, but mostly because of elements with a medieval/dark-age kind of aggressive tendency to come to power and to maintain it that way.
In other words: no longer having a Bell Labs kind of setup ensures that the variety of approaches taken at large scales (industry-wide or systemic) remains narrow enough.
Weird angle, but isn't "believing there will be a crash" sort of framing it as if this were still normal market dynamics?
OpenAI, and AI in general, has posed itself as an existential threat and tightly integrated itself (how well? let's argue later) with so many facets of society, especially government, that, like, realistically there just can't be a crash, no?
Or is this too doomsday / conspiratorial?
I just find it weird that we're framing it as crash/not crash when it seems pretty clear to me they really genuinely believe in AGI, and if you can get basically all facets of society to buy in... well, airlines don't "crash" anymore, do they?
If OpenAI were to shut down today, would anything in society really change? It seems all valuations are based on future integration into society and our daily lives. I don't think it has really happened yet.
A crash in the stock market doesn't necessarily mean a crash in the real economy. Whether the AI bubble bursting plays out dot-com style or as a GFC-style debacle depends on how much critical financial infrastructure is at risk during the debt deleveraging. If you look at GDP growth during those two periods, the dot-com era was a mild stagnation compared to the GFC's actual GDP decline.