
He's (unsurprisingly) making an analogy to the dotcom bubble, which seems to me correct. There was a bubble, many non-viable companies got funded and died, and nevertheless the internet did eventually change everything.


The biggest problem is that the infrastructure left behind by the Dotcom boom, which laid the path for the current world (the high speed fiber), doesn't translate to computer chips. Are you still using Intel chips from 1998? And the chips are a huge cost, they're being backed by debt, and they depreciate in value exponentially. It's not the same, because so much of the current debt-fueled spending is on an asset with a very short shelf life. I think AI will be huge; I don't doubt the endgame once it matures. But the bubble now, spending huge amounts on these data centers using debt without a path to profitability (and inordinate spending on these chips), is dangerous. You can think AI will be huge and still see how dangerous the current manifestation of the bubble is. A lot of people will get hurt very, very badly. This is going to maim the economy in a generational way.


And a lot of the gains from the Dotcom boom are being paid back as negative value for the average person at this point. We have automated systems that waste our time when we need support, product features that should have a one-time cost being turned into subscriptions, a complete usurping of the ability to distribute software or build compatible replacements, etc.

The Dotcom boom was probably good for everyone in some way, but it was much, much better for the extremely wealthy people that have gained control of everything.


If you've ever been to a third world country, you'd see how this is completely untrue. The dotcom boom has revolutionized the way of life for people in countries like India.

Even for the average person in America, consider the ability to do so many activities online that would have taken hours otherwise (e.g. shopping, research, DMV/government tasks). The fact that we see negative consequences of this, like social network polarization or brainrot, doesn't negate the positives that have been brought about.


I think you’re putting too much weight on cost (time, money), and not enough weight on “quality of life”, in your analysis.

For sure, we can shop faster, and (attempt) research and admin faster. But…

Shopping: used to be fun. You'd go with friends or family, discuss the goods together, gossip, bump into people you knew, stop for a sandwich, maybe mix shopping with a cinema or dinner trip. All the while, you'd be aware of other people's personal space, see their family dynamics. Queuing for event tickets brought you shoulder to shoulder with the crowd before the event began... Today, we do all this at home; strangers (and communities) are separated from us by glass, cables and satellites, rather than by air and shouting distance. I argue that this time saving is reducing our ability to socialise.

Research: this is definitely accelerated, and probably mostly for the better. But… some kinds of research were mingled with the “shopping” socialisation described above.

Admin: the happy path is now faster, and functioning bureaucracy is smoother in the digital realm. But it's the edge cases which are now more painful. Elderly people struggle with digital tech and prefer face to face. Everyone is exposed to more subtle and challenging threats (identity theft, fraud); we all have to learn complex and layered mitigation strategies. Also: digital systems are very fragile: they leak private data, they're open to wider attack surfaces, they need more training and are harder to intuit without that training; they're ripe for capture by monopolists (Google, Palantir).

The time and cost savings of all these are not felt by the users, or even the admins of these systems. The savings are felt only by the owners of the systems.

Technology has saved billions of person-hours of individual cost, in travel, in physical work. Yet we're working longer, using fewer ranges of motion, are less fit, less able to tolerate others' differences, and the wealth gap is widening.


> I think you’re putting too much weight on cost (time, money), and not enough weight on “quality of life”, in your analysis.

"Quality of life" is a hugely privileged topic to be zooming in on. For the vast majority of people both inside and outside the US, Time and Money are by far the most important factors in their lives.


Costs (time/money) are metrics to help analyse your situation/progress. They are not quality of life.

If you had a huge pile of money but still lived in a shack in a slum, you’d still have a terrible quality of life.

If your argument were true, and people are saving time (or money) due to these new systems, why is the wealth gap widening?


Setting aside time, is money not downstream from quality of life? Meaning, in a better world one might not need to care as much about money? I believe that time and quality of life are congruent - good quality of life means control over one’s own time.


Yes, this is true. There used to be a lot of local book stores, for example. Amazon optimized that away, while ruining social fabric.


Independent book stores are booming in quantity since the initial Internet crash.

Be careful about making narratives that don’t line up with industry data.

There’s a lot of brick and mortar retail going on. It just doesn’t look like the overbuilt mall infrastructure of the 1970s-1980s.


Two decades ago, in the Bay Area we used to have a lot of book stores: specialized, chains, children's, grade school, college slugbooks, etc. Places like Fry's had a coffee shop and a book store inside. The population grew; the number of book stores went down to near zero.


RIP literacy in America. We hardly knew ye.


Recall that the arXiv was established in 1991, many years before the dotcom bubble. Many scientists still use arXiv prominently for research.


It seems the crux is that we needed X people to produce goods, and we had Y demand.

Now we need X*0.75 people to meet Y demand.

However, those savings are partially piped to consumers, and partially piped to owners.

There is only so much marginal propensity to spend that rich people have, so that additional wealth is not resulting in an increase in demand, at least not one commensurate enough to absorb the 25% who are unemployed or underemployed.

Ideally that money would be getting ploughed back into making new firms, or creating new work, but the work being created requires people with PhDs and a few specific skills, which means that entire fields of people are not in the work force.

However, all that money has to go somewhere, and so asset classes are rising in value, because there is nowhere else for it to go.


> Now we need X*0.75 people to meet Y demand.

This is how GDP/person has increased 30x over the last 250 years.

What always happens is that the no longer needed X*0.25 people find new useful things to do and we end up 33% richer.


"we end up."

It's actually, "they end up" and the 33% gains you're talking about aren't realized en masse until all the coal miners have black lung. It's really quite the, "dealy" as Homer Simpson would say. See, "Charles Dickens" or, "William Blake" for more. #grease


> partially piped to consumers, and partially piped to owners.

Or, the returns on capital exceed the rate of economic growth (r > g), if you like Piketty's Capital in the Twenty First Century.

One of the central points is about how productivity and growth gains increasingly accrue to capital rather than labor, leading to capital accumulation and asset inflation.


Yep, that’s the source of the point. The effort is in finding a way to make it easy to convey. Communication of an idea is almost as critical as its verification now.


I just need to read it one of these days. I keep reading parts of it but never sitting down with it.


You're in luck! A couple of years ago he released an "abridged version" of sorts: A Brief History of Equality is the name. Much more accessible than the 700 pages of Capital in the Twenty-First Century.

https://en.m.wikipedia.org/wiki/Special:BookSources/97806742...


Oh hell, don’t make me feel bad. I’ve got a meager capacity to trudge through major papers. I cheer you on.


With telecom, we benefited from skipping generations. I got into a telecom management program because in 2001-ish, I was passed by on a village street by a farmer bicycling while talking on his cellphone. Mind you my family could not afford cellphone call rates at the time.

In fact, the technology was introduced out here assuming corporate / elite users. The market reality became such that telcos were forced kicking and screaming to open up networks to everybody. The Telecom Regulatory Authority of India (back then) mandated rural <> urban parity of sorts. This eventually forced telcos to share infrastructure costs (share towers etc.) The total call and data volumes are eye-watering, but low-yield (low ARPU). I could go on and on but it's just batshit crazy.

Now UPI has layered on top of that---once again, benefiting from Reserve Bank of India's mandate for zero-fee transactions, and participating via a formal data interchange protocol and format.

Speaking from India, having lived here all my life, and occasionally travelled abroad (USAmerica, S.E. Asia).

We, as a society and democracy, are also feeling the harsh, harsh hand of "Code is Law", and increasingly centralised control of communication utilities (which the telecoms are). The left hand of darkness comes with a lot of darkness, sadly.

Which brings me to the moniker of "third world".

This place is insane, my friend --- first, second, third, and fourth worlds all smashing into each others' faces all the time. In so many ways, we are more first world here than many western countries. I first visited USAmerica in 2015, and I could almost smell an empire in decline. Walking across twitter headquarters in downtown SF of all the places, avoiding needles and syringes strewn on the sidewalk, and avoiding the completely smashed guy just barely standing there, right there in the middle of it all.

That was insane.


That kind of extreme poverty juxtaposed with extreme wealth, and all of the social ills that come along with it, has always been a fixture of the American experience. I don't think it's a good barometer of whether the USA is in decline when there have long been pockets of urban decay, massive inequality, drug use, etc. Jump back to any point in American history and you'll find something similar if not much, much worse. Even in SF of all places, back in the wild west era gold rush or in the 1970s... America has always held that contradiction.


Yeah, I sort of recounted a stark memory. That juxtaposition was a bit too much.

However, it wasn't just that, and the feeling has only solidified in three further visits. It isn't rational, very much a nose thing, coming from an ordinary software programmer (definitely not an economist, sociologist, think tank).


AI itself is a manifestation of that too, a huge time waster for a lot of people. Getting randomly generated information that is wrong but sounds right is very frustrating. Start asking AI questions you already know the answer to and the issues become very obvious.


I know HN, and most younger people or people with certain political leanings, always push narratives that amount to "rich people bad", but I feel a lot of tech has made our lives easier and better. It's also made them more complicated and worse in some ways. That effect has applied to everyone.

In poor countries, they may not have access to clean running water but it's almost guaranteed they have cell phones. We saw that in a documentary recently. What's good about that? They use cell phones not only to stay in touch but to carry out small business and personal sales. Something that wouldn't have been possible before the Internet age.


Don’t say that, you’ll hasten the Antichrist!



> The Dotcom boom was probably good for everyone in some way, but it was much, much better for the extremely wealthy people that have gained control of everything.

You are describing platform capture. Be it Google Search, YouTube, TikTok, Meta, X, App Store, Play Store, Amazon, Uber - they have all made themselves intermediaries between public and services, extracting a huge fee. I see it like rent going up in a region until it reaches maximum bearable level, making it almost not worth it to live and work there. They extract value both directions, up and down, like ISPs without net-neutrality.

But AI has a different dynamic: it is not easy to centrally control ranking, filtering and UI with AI agents. You can download an LLM; you can't download a Google or a Meta. Now it is AI agents that have the "ear" of the user base.

It's not like it was good before - we had a generation of people writing slop to grab attention on the web and social networks, from the lowest porn site to CNN. We all got prompted by the Algorithm. Now that Algorithm is being replaced by many AI agents that serve users more directly than before.


>You can download a LLM, can't download a Google or Meta.

You can download a model. That doesn't necessarily mean you can download the best model and all the ancillary systems attached to it by whatever service. Just like you can download a web index but you probably cannot download google's index and certainly can't download their system of crawlers for keeping it up to date.


That's true for the GPUs themselves, but the data centers with their electricity infrastructure and cooling and suchlike won't become obsolete nearly as quickly.


This is a good point, and it would be interesting to see the relative value of this building-and-housing 'plumbing' overhead vs the chips themselves.

I guess another example of the same thing is power generation capacity, although this comes online so much more slowly I'm not sure the dynamics would work in the same way.


The data centers built in 1998 don't have nearly enough power or cooling capacity to run today's infrastructure. I'd be surprised if very many of them are even still in use. Cheaper to build new than upgrade.


How come? I'd expect that efficiency gains would lower power and thus cooling demands - are we packing more servers into the same space now or losing those gains elsewhere?


Power limitations are a big deal. I haven't shopped for datacenter cages since Web 2.0, but even back then it was a significant issue. Lots of places couldn't give you more than a few kW per rack. State-of-the-art servers can be 2 kW each, so you start pushing 60 kW per rack. Re-rigging a decades-old data center for that isn't trivial. Remember you need not just the raw power but cooling, backup generator capacity, enough battery to cover the transition, etc.

It's hugely expensive, which is why the big cloud infrastructure companies have spent so much on optimizing every detail they can.
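
A rough back-of-envelope version of that rack math, with every number (servers per rack, watts per server, PUE) an illustrative assumption rather than anything from a real facility:

    # Back-of-envelope rack power budget; all figures are illustrative assumptions.
    servers_per_rack = 30          # assumed density
    watts_per_server = 2_000       # "state of the art servers can be 2 kW each"
    pue = 1.4                      # assumed power usage effectiveness (cooling/overhead)
    legacy_budget_w = 5_000        # assumed "a few kW per rack" legacy cage

    it_load_w = servers_per_rack * watts_per_server      # 60,000 W of IT load
    facility_w = it_load_w * pue                         # ~84,000 W including cooling/overhead
    print(f"IT load per rack: {it_load_w / 1000:.0f} kW")
    print(f"Facility load per rack at PUE {pue}: {facility_w / 1000:.0f} kW")
    print(f"Gap vs legacy budget: {(facility_w - legacy_budget_w) / 1000:.0f} kW per rack")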


Yes - blade servers replacing what used to be 2 or 3 rack-mount servers. Both air exchange and power requirements are radically different if you want to fill that rack the way it was filled before.


It's just an educated guess, but I expect that power density has gone up quite a bit as a form of optimization. Efficiency gains permit both lower power (mobile) and higher compute (server) parts. How tightly you pack those server parts in is an entirely different matter. How many H100s can you fit on average per 1U of space?


How much more centralized data center capacity do we actually need outside AI? And how much would we need if we spent slightly more time doing things more efficiently?


This is true. Its depreciation timeline is probably 2-3 times as long as a GPU's. But it's still probably half or a quarter of the depreciation timeline of a carrier fiber line.


Even if the building itself is condemnable, what it took to build it out is still valuable.

To give a different example, right now, some of the most prized sites for renewable energy are former coal plant sites, because they already have big fat transmission lines ready to go. Yesterday's industrial parks are now today's gentrifying urban districts, and so on.


That’s true. The permitting is already dealt with, and that’s substantial lead time on any data center build.

Even more so for carrier lines, of course. Nimbyism is a strong block on right-of-way needs (except for the undersea ones, obviously).


There are probably a lot of cool and useful things you could do with a bunch of data centers full of GPUs.

- better weather forecasts

- modeling intermittent generation on the grid to get more solar online

- drug discovery

- economic modeling

- low cost streaming games

- simulation of all types


- cloud gaming service? :D


Eh, not really. Maybe retro cloud gaming services. But games haven't stopped getting more demanding every year. Not only are AI GPUs focused on achieving clusters with great compute performance per watt and per dollar rather than on singular GPUs with great raster performance; even the GPUs which are powerful enough for current games won't be powerful enough for games in 5 years.

Not to mention that we're still nowhere near close to solving the broadband coverage problem, especially in less developed countries like the US and most of the third world. If anything, it seems like we're moving towards satellite internet and cellular for areas outside of the urban centers, and those are terrible for latency-sensitive applications like game streaming.


> But games haven't stopped getting more demanding every year.

This is not particularly true.

Even top of the line AAA games make sure they can be played on the current generation consoles which have been around for the last N years. Right now N=5.

Sure you’ll get much better graphics with a high end PC, but those looking for cloud gaming would likely be satisfied with PS5 level graphics which can be pretty good.


If you look at year over year chip improvements in 2025 vs 1998, it's clear that modern hardware just has a longer shelf life than it used to. The difficulties in getting more performance for the same power expenditure are just very different than back in the day.

There's still depreciation, but it's not the same. Also look at other forms of hardware, like RAM, and the bonus electrical capacity being built.


In 1998 it was expensive to transfer a megabyte over telephone lines, and 5 years later it was almost free.

I have not seen the prices of GPUs, CPUs or RAM going down; on the contrary, they get more expensive every day.


In 1998, 16 MiB of RAM was ~$200; in 2025, 16 GiB of RAM is about $50. A Pentium II at 450 MHz was $600 in 1998. Today, an AMD Ryzen 7 9800X can be had for $500. That Ryzen is maybe 100 times as powerful as the Pentium II. What's available at what price point has changed, but it's ridiculous how much computing I can get for $150 at Best Buy, and it's also ridiculous how little I can do with that much computing power. Wirth's law still holds: software is getting slower more rapidly than hardware is getting faster.
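
For what it's worth, the per-unit arithmetic behind those RAM figures, taking the prices above at face value (they're rough recollections, not sourced data):

    # Price-per-MiB comparison using the rough figures quoted above.
    price_1998, mib_1998 = 200, 16            # ~$200 for 16 MiB in 1998
    price_2025, mib_2025 = 50, 16 * 1024      # ~$50 for 16 GiB in 2025

    per_mib_1998 = price_1998 / mib_1998      # $12.50 per MiB
    per_mib_2025 = price_2025 / mib_2025      # ~$0.003 per MiB
    print(f"1998: ${per_mib_1998:.2f}/MiB, 2025: ${per_mib_2025:.4f}/MiB")
    print(f"Roughly {per_mib_1998 / per_mib_2025:,.0f}x cheaper per MiB")  # ~4,096x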


But why not measure dollars per unit of intelligence? In 2020 you'd have needed a billion dollars to get your computer to write good code; now it is practically free.


> This is going to maim the economy in a generational way.

Just as I'm getting to the point where I can see retirement coming from off in the distance. Ugh.


One thing to note about modern social media is that the most negative comment tends to become the most upvoted.

You can see that all across this discussion.


> the current debt fueled spending

Honestly I think the most surprising thing about this latest investment boom has been how little debt there is. VC spending and big tech's deep pockets keep banks from being too tangled in all of this, so the fallout will be much more gentle imo.


We don't have Moore's law anymore. Why are the chips becoming obsolete so quickly?


FLOP/s/$ is still increasing exponentially, even if the specific components don't match Moore's original phrasing.

Markets for electronics have momentum, and estimating that momentum is how chip producers plan for investment in manufacturing capacity, and how chip consumers plan for depreciation.
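
A minimal sketch of why that matters for the depreciation question upthread: if FLOP/s/$ keeps improving, the cost of replacing the compute an existing accelerator provides falls at the same rate. The 30%/year improvement rate and the $30k price below are assumptions for illustration, not measured figures.

    # Sketch: exponential FLOP/s/$ improvement implies exponential loss of replacement value.
    # The improvement rate and purchase price are illustrative assumptions.
    annual_flops_per_dollar_gain = 0.30   # assumed 30% better FLOP/s/$ each year
    purchase_price = 30_000               # hypothetical accelerator price, in dollars

    for year in range(1, 6):
        # Cost to buy the same FLOP/s falls as FLOP/s/$ rises.
        replacement_value = purchase_price / (1 + annual_flops_per_dollar_gain) ** year
        print(f"Year {year}: replacement cost of equivalent compute ~= ${replacement_value:,.0f}")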


They kind of aren't. If you actually look at "how many dollars am I spending per month on electricity", there's a good chance it's not worth upgrading even if your computer is 10 years old.

Of course this does make some moderate assumptions that it was a solid build in the first place, not a flimsy laptop, not artificially made obsolete/slow, etc. Even then, "install an SSD" and "install more RAM" is most of everything.

Of course, if you are a developer you should avoid doing these things so you won't get encouraged to write crappy programs.


Companies want GW-scale data centers, which are a new thing that will last decades, even if GPUs are consumable and have high failure rates. Also, depending on how far it takes us, it could upgrade the electric grid and make electricity cheaper.

And there will also be software infrastructure which could be durable. There will be improvements to software tooling and the ecosystem. We will have enormous pre-trained foundation models. These model weight artifacts could be copied for free, distilled, or fine tuned for a fraction of the cost.


About 40% of AI infrastructure spending is the physical datacenter itself and the associated energy production. 60% is the chips.

That 40% has a very long shelf life.

Unfortunately, the energy component is almost entirely fossil fuels, so the global warming impact is pretty significant.

At this point, geoengineering is the only thing that can earn us a bit of time to figure...idk, something out, and we can only hope the oceans don't acidify too much in the meantime.


Interesting. Do you have any sources for this 60/40 split? And while I agree that the infrastructure has a long shelf life, it seems to me like an AI bubble burst would greatly depreciate the value of this infrastructure as the demand for it plummets, no?


Intel chips from 2008, as there have been no real improvements.


They're only replacing GPUs because investors will give "free" money to do so. Once the bubble pops people will realize that GPUs actually last a while.


While yes, I sure look forward to the flood of cheap graphics cards we will see 5-10 years from now. I don't need the newest card, and I don't mind five-year-old top-of-the-line at discount prices.


I think you partially answer yourself, though. Is the value in the depreciating chips, or in the huge datacenters, with cooling, energy supply at that scale, etc.?


I am not still using the same 1 Mbps token ring from 1998 or the same dial-up connecting to some 10 Mbps backbone.

I am using x86 chips though.


A lot of the infrastructure made during the Dotcom boom was shortly discarded. How many dial-up modems were sold in the 90s?

The current AI bubble is leading to trained models that won't be feasible to retrain for a decade or longer after the bubble bursts.


The wealth the Dotcom boom left behind wasn't in dial up modems or internet over the telephone, it was in the huge amounts of high speed fiber optic networks that were laid down. I think a lot of that infrastructure is still in use today, fiber optic cables can last 30 years or more.


> fiber optic cables can last 30 years or more

The trenches for the cables even longer than that.


How much of it is still dark?


Honestly, not that many people had modems.


In the late 90s to 2001? Many people were still using modems at that time. Cable or DSL wasn't even an option for a considerable percentage of the population.


In 2000, 6% of the population had access to the internet.

In 2002 I was working building websites and setting up Linux servers, and I did not have internet at home.


This is pretty location specific—in the US 42% of households had home internet in 2000.

https://www.statista.com/statistics/189349/us-households-hom...


That website is not a good source.


Yes it is.

Low global penetration: only 361 million people had internet access worldwide in 2000, a small fraction of the global population. The US accounted for 31.1% of all global users in 2000, and its penetration rate was 43.1%.


Not still using - flat out modemless. Lots of guys got their hands on a mouse for the first time only after Windows XP launched, which was after the collapse.


Windows 95 didn't even have Internet access by default.

It had the Microsoft Network or whatever it was called.


My 486SX with math co-processor is long gone.


Personally I think people should stop trying to reason from the past.

As tempting as it is, it leads to false outcomes because you are not thinking about how this particular situation is going to impact society and the economy.

It's much harder to reason this way, but isn't that the point? Personally I don't want to hear or read analogies based on the past - I want to see and read stuff that comes from original thinking.


Doesn't that line of reasoning leave you in danger of being largely ignorant? There's a wonderful quote from Twain: "History doesn't repeat itself, but it often rhymes." There are two critical things I'd highlight in that quote. First, the contrast between repetition and rhyming draws attention to the fact that things are never exactly the same - there's just a gist of similarities. Second, it often rhymes but doesn't always - this sure looks like a bubble, but it might not be; it might be something entirely new. That all said, it's important to learn from history, because there are clear echoes of history in events: we, people in general, don't change that fundamentally.


IME the number of times where people have said "this time it's different" and been wrong is a lot higher than the number of times they've said "this time is the same as the last" and been wrong. In fact, it is the increasing prevalence of the idea that "this time it's different" that makes me batten down the hatches and invest somewhere with more stability.


With all due respect, I'm not here to teach people how to think.

This guy gets it - https://www.youtube.com/watch?v=kxLCTA5wQow

Instead of plainly jumping on the bubble bandwagon he actually goes through a thorough analysis.


This won’t even come close to maiming the economy, that’s one of the more extreme takes I’ve heard.

AI is already making us wildly more productive. I vibe coded 5 deep ML libraries over the last month or so. This would have taken me maybe years before when I was manually coding as an MLE.

We have clearly hit the stage of exponential improvement, and to not invest basically everything we have in it would be crazy. Anyone who doesn’t see that is missing the bigger picture.


Go ahead and post what you've done, fella, so we can critique it.

Why is it that all these kinds of posts never come with any attachments? We are all interested to see it, m8.


Because they are private and I can’t make them public. I have no self interest here, I’m an MLE by trade and this technology threatens my job.

I had been mostly opposed to vibe coding for a long time, autocomplete was fine but full agentic coding just made a mess.

That’s changed now, this stuff is genuinely working on really hard problems.


So you have nothing to show. Typical


You can go use Codex on high and try it yourself, but I'm sure you'll prove yourself right


ZZZ show something or stop posting fella.


The quality of your contributions is extremely low. You're making hacker news worse.


yes because all company libraries should be public lol


Video game crash followed by video games taking off and eclipsing most other forms of digital entertainment.

Dot com crash followed by the web getting pretty popular and a bit central to business.

To all those betting big on AI before the crash:

Careful, Icarus.


Bad comparison.

The leap of faith necessary in LLMs to achieve the same feat is so large it's very difficult to imagine it happening, particularly given the well-known constraints on what the technology is capable of.

The whole investment thesis of LLMs is that they will be able to a) be intelligent and b) produce new knowledge. If those two things don't happen, what has been delivered is not commensurate with the risk relative to the money invested.


Given they're referencing Icarus, they seem to agree with you.

Past bubbles leaving behind something of value is indeed no guarantee the current bubble will do so. For as many times as people post "but dotcom produced Amazon" to HN, people had posted that exact argument about the Blockchain, the NFT, or the "Metaverse" bubbles.


Many AI startups around LLMs are going to crash and burn.

This is because many people have mistaken LLMs for AI, when they're just a small subset of the technology - and this has driven a myopic focus in a lot of development, and has led to naive investors placing bets on golden dog turds.

I disagree on AI as a whole, however - as unlike previous technologies this one can self-ratchet and bootstrap. ML designed chips, ML designed models, and around you go until god pops out the exit chute.


> Careful, Icarus.

What does that even mean?

pets.com was a fat loser, only telling people they were going to fly.

Amazon was Icarus, they did something.

Versus weak commentators going on about the wax melting from their parents' root cellar while Icarus was soaring.

Most of Y Combinator are not using AI, they just say that - and you're worried about the people who actually do things?


> commentators going on about the wax melting from their parents' root cellar while Icarus was soaring.

Icarus drowned in the sea.

Even if you want to put the world into only two lumps - cellar dwellers and Icaruses - it is still a group of living people on one side and, on the other, a floating, semi-submerged pile of dead bodies that are literally only remembered for how stupid their deaths were.


Dotcom mania companies were not Internet providers. They tried making money on the internet, something people already saw as worth paying for.


Cisco, Level3 and WorldCom all saw astronomical valuation spikes during the dotcom bubble and all three saw their stock prices and actual business prospects collapse in the aftermath of it.

Perhaps the most famous implosion of all was AOL who merged (sort of) with TimeWarner gaining the lion's share of control through market cap balancing. AOL fell so destructively that it nearly wiped out all the value of the actual hard assets that TW controlled pre-merger.


This is not really true; e.g. Cogent was basically created by buying bankrupt dotcom-bubble network providers for cents on the dollar.


Also AOL was a mix of dialup provider and Dotcom service. There were many other popular examples of such.


I don't think it is fair to characterize AOL as a bubble baby - they were a thing before that.


Nor do I think Cogent, GX or Level 3 were dotcom companies.


I would add more metrics to think about. For example, very few people used the Internet in the dotcom era, while now AI use is spread across the entire population already using the Internet, which will probably not grow much more. In this case, if the Internet population is the driver and it will not grow significantly, we are just redistributing attention. Assuming "all" of society becomes more productive, we will all be on the same train at relatively the same speed.


And what were the societal benefits of the internet?


That everybody all over the world can instantly connect with each other?


The 90s bubble also had massive financial fraud, and it laid down capital that wasn't used at 100% utilization when it hit the ground, like what we are seeing now.

It’s different enough that it probably isn’t relevant.


> [At dotcom time] There was a bubble, many non-viable companies got funded and died, and nevertheless the internet did eventually change everything.

It did, but not for the better. Quality of life and standard of living both declined while income inequality skyrocketed and that period of time is now known as The Great Divergence.

> He's (unsurprisingly) making an analogy to the dotcom bubble, which seems to me correct.

He's got no downside if he's wrong or doesn't deliver; what he's promising is analogous to selling you a brand new bridge in exchange for half of your money... and you're ecstatic about it.


Thank you for acknowledging this. The internet was created around a lot of lofty idealism, and none of that has been realized other than opening up the world's information to a great many. It made society and the global economy worse (for the occidental west; the Chinese middle class might disagree) and has paralleled the destabilization of geopolitics. I am not a luddite, but until we can "get our moral shit together", new technologies are nothing but fuel on the proverbial fire.


Glad to be in agreement. The higher message here is that technology is no substitute for politics, cue crypto-hype which produced little more than crime and money-laundering. Without proper policies, corruption invades every strata of society.


The analogy to the dot com bubble is leaky at best. AI will hit a point of exponential improvement, we are already in the outer parts of this loop.

It will become so valuable so fast we struggle to comprehend it.


Then why has my experience with AI started to see such dramatically diminishing returns?

2022-2023 AI changed enough to convert me from skeptic to believer. I started working as an AI Engineer and wanted to be on the front lines.

2023-2024 Again, major changes, especially as far as coding goes. I started building very promising prototypes for companies, was able to build a laundry list of projects that were just boring to write.

2024-2025 My day to day usage has decreased. The models seem better at fact finding but worse for code. None of those "cool" prototypes from myself or anyone else I knew seemed to be able to become more than just that. Many of the cool companies I started learning about in 2022 started to reduce staff and are running into financial troubles.

The only area where I've been impressed is the relatively niche improvements in open-source text/image-to-video models. It's wild that you can make short animated films on a home computer now.

But even there I'm seeing no signs of "exponential improvement".


I vibe coded 5 deep ML libraries this month. I'm an MLE by trade and it would have taken me ages without AI. This wasn't possible even a year ago. I have no idea how anyone thinks the models haven't improved


> This wasn't possible even a year ago.

My experience has been that it was. I was using AI last year to build ML models about as well as I have been this year.

I'm not saying AI isn't useful, just that the progress certainly looks to be sigmoid not exponential in growth. By far the biggest year for improvement was 2022-2023. Early 2022 I didn't think any of the code assistants were useful, by 2023 I was able to use them more reliably. 2024 was another big improvement, but I honestly haven't felt the change (at least not for the better).

Some of the tooling may be better, but that has little to do with exponential progress in AI itself.


Wow, really? The agentic coding work that has come out in the last year is super impressive to me.

And before, it didn't seem to understand the fundamentals of Torch well, not well enough to do novel work. Now with Codex on high it absolutely does, and MLE-bench reflects that.


While this remains possible my main impression now is that progress seems to be slowing down rather than accelerating.


Not even remotely. In LLM land, the progress seems slow the past few years, but a lot has happened under the hood.

Elsewhere in AI however progress has been enormous, and many projects are only now reaching the point where they are starting to have valuable outputs. Take video gen for instance - it simply did not exist outside of research labs a few years ago, and now it’s getting to the point where it’s actually useful - and that’s just a very visible example, never mind the models being applied to everything from plasma physics to kidney disease.


> the progress seems slow the past few years, but a lot has happened under the hood.

The claim is "exponential" progress, exponential progress never seems "slow" after it has started to become visible.

I've worked in the research part of this space, there's neat stuff happening, but we are very clearly in the diminishing returns phase of development.


We continue to see steady progress; look at any benchmark and it's basically a linear increase.


If you keep up with the research this isn't the case, ML timelines have always been slower than anyone likes


I'm not so sure about this.

First were the models. Then the APIs. Then the cost efficiencies. Right now the tooling and automated workflows. Next will be a frantic effort to "AI-Everything". A lot of things won't make the cut, but absolutely many tasks, whole jobs, and perhaps entire subsets of industries will flip over.

For example you might say no AI can write a completely tested, secure, fully functional mobile app with one prompt (yet). But look at the advancements in Cline, Claude code, MCPs, code execution environments, and other tooling in just the last 6 months.

The whole monkeys-with-typewriters-writing-Shakespeare thing starts to become viable.


Maybe once we get another architecture breakthrough, it won't feel so slow.


Very few people predicted LLMs, yet lots of people are now very certain they know what the future of AI holds. I have no idea why so many people have so much faith in their ability to predict the future of technology, when the evidence that they can't is so clear.

It's certainly possible that AI will improve this way, but I'd wager it's extremely unlikely. My sense is that what people are calling AI will later be recognized as obviously steroidal statistical models that could do little else than remix and regurgitate in convincing ways. I guess time will tell which of us is correct.


If those statistical models are helping you do better research, or basically doing most of it better than you can, does it matter? People act like models are implicitly bad because they are statistical, which makes no sense at all.

If the model is doing meaningful research that moves along the state of the ecosystem, then we are in the outer loop of self-improvement. And yes, it will progress, because that's the nature of it doing meaningful work.


> If the model is doing meaningful research that moves along the state of the ecosystem, then we are in the outer loop of self improvement.

That's a lot of vague language. I don't really see any way to respond. I suppose I can say this much: the usefulness of a tool is not proof of the correctness of the predictions we make about it.

> And yes it will progress because thats the nature of it doing meaningful work.

This is a non sequitur. It makes no sense.

And I never said there's anything bad about or wrong with statistical models.



