andy12_'s comments | Hacker News

The most important context is this image[1] from the Guardia Civil. Using Google Maps, and taking the tree, post and yellow connection box in the image as reference points, we can place it on the Iryo train's tracks about 180 m before the accident site. The image appears to show a track welding failure, which would match reports from some passengers[2] that the "train started shaking violently" before the accident.

Photo at 38.00771000519087, -4.565435982666953

Accident at 38.009292813090475, -4.564960554581273

[1] https://img2.rtve.es/im/16899875/?w=900

[2] https://x.com/eleanorinthesky/status/2012961856520917401?s=2...


The first image looks like sabotage to me. Continuous welded rail sections are much longer than this gap.

Just a few weeks ago, terrorists twice tried to sabotage rail lines in Poland, endangering a passenger train with hundreds of people.

> "[Prime Minister Donald] Tusk said that a military-grade C4 explosive device had been detonated on 15 November at about 21:00 (20:00 GMT) near the village of Mika."

> "The explosion, which happened as a freight train was passing, caused minor damage to a wagon floor. It was captured on CCTV."

> "Tusk said the train driver had not even noticed the incident."

> "A previous attempt to derail a train by placing a steel clamp on the rail had failed, he added."

> "The second act of sabotage, on 17 November, involved a train carrying 475 passengers having to suddenly brake because of damaged railway infrastructure, said Tusk."

https://www.bbc.com/news/articles/c4gknv8nxlzo


So, was it the Russians or the Ukrainians (as is the case with the Nordstream pipelines)?

Wouldn't the gap simply be the result of loss of tension after the weld broke? Steel expands with heat (about 1 cm per °C per km of rail). Weather data shows it got down to around 0 °C in Córdoba last night, while the summer record is around 47 °C, so one would expect a fairly large gap once the tension is released.
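
(A rough back-of-the-envelope check of that figure and of the gap size, assuming a linear expansion coefficient for rail steel of about 1.15e-5 per °C; the anchor length and stress-free installation temperature below are illustrative guesses, not measured values:)

    # Back-of-the-envelope rail contraction estimate (illustrative numbers only).
    ALPHA = 1.15e-5  # approx. linear expansion coefficient of rail steel, per deg C

    def contraction_mm(rail_length_m: float, delta_temp_c: float) -> float:
        """Free contraction of an unrestrained rail length for a given temperature drop."""
        return rail_length_m * ALPHA * delta_temp_c * 1000  # metres -> millimetres

    # Sanity check of the "about 1 cm per degree C per km" figure:
    print(contraction_mm(1000, 1))   # ~11.5 mm, i.e. roughly 1 cm per deg C per km

    # Guess: rail laid stress-free at ~25 C, night temperature ~0 C, and roughly
    # 1.5 km of rail free to pull back once the weld lets go:
    print(contraction_mm(1500, 25))  # ~430 mm, on the order of the visible gap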

That's not the way stuff like this is built nowadays. The thermal expansion and shrinkage of rails is considered and accounted for (or should be).

That's why things like these are built in:

https://en.wikipedia.org/wiki/Breather_switch


As I understand it, breather switches are rarely used in high-speed rail systems. The ride on Spanish high-speed trains is very smooth; at 300 km/h (5 km per minute) you'd notice going over a breather switch. It'd be like taking Amtrak's Acela.

The gap looks to be about 50 cm, which is maybe 1.5 km of rail contracting from installation tension.


I disagree. Though I've never ridden Acela, I did ride the Intercity Express at 330 km/h. Having been a rail fan in my youth, I still look out for rail-related stuff, even if it's 'only infrastructure'. Meaning I notice that stuff in pictures in reports about building or opening new HSR track, no matter where. Seems like they are mandatory. You just don't notice them, even when looking out of the window onto the other track, because it's all just a blur. You need to be on an overpass looking down onto where they are, for instance, or watching from the side during construction or maintenance, seeing how the machines operate, and wondering what they are doing there. Because it's an interruption :-)

Some better pictures:

https://www.eisenbahn.gerhard-obermayr.com/produzenten/vae/s...

https://slabtrackaustria.com/our-technology/red/

https://www.voestalpine.com/railway-systems/en/products/rail...

https://upload.wikimedia.org/wikipedia/commons/7/7d/Oelzetal...

https://upload.wikimedia.org/wikipedia/commons/b/bf/Schienen...

https://cmi-promex.com/wp-content/uploads/2023/06/CurvedRail...

https://cmi-promex.com/wp-content/uploads/2023/06/Sound-Tran...

https://www.hsrail.org/wp-content/uploads/2024/06/HSR_Track_...


Your pictures show breather switches installed at a tunnel portal, where they are necessary to handle the large differences in temperature, and on what look like various bridges, which can be subject to their own thermal expansion. But at least as I understand it, there's normally no need for them elsewhere on continuously welded rail.

We will know once the report is public. In Poland, they explicitly left C4 behind to make sure everybody understands who did it.


Looks like a pull-apart: bad weld, then cold weather caused contraction from both sides making a gap. Pretty massive for a pull-apart but not impossible.

If it was sabotage it will be plain as day to a trained eye. I await the report. That break could also be explained by the rail heading away in that photo snapping at that point because the train pushed it out; note the rail has rotated 90 degrees clockwise -- something did that work, and it was probably the train going out and over. I'm not a rail tie expert (nor is anyone on HN likely to be), so I don't know if this is an unusual failure mode. But there was a line-change point intersection immediately south of the crash. My money is on there having been a fault (accidental or deliberate) there, not at this snapping point.

Because AGI is still some years away even if you are optimistic, and OpenAI must avoid going under in the meantime due to lack of revenue. Selling ads and believing that AGI is reachable in the near future are not incompatible.

> Because AGI is still some years away

For years now, proponents have insisted that AI would improve at an exponential rate. I think we can now say for sure that this was incorrect.


> For years now, proponents have insisted that AI would improve at an exponential rate.

Did they? The scaling "laws" seem at best logarithmic: double the training data or model size for each additional unit of... "intelligence?"
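
(Rough numbers on that claim, assuming a Chinchilla-style power-law fit of loss against training tokens; the coefficients below are illustrative, not the published fit. Each doubling of data shrinks the reducible loss by the same factor, so progress is roughly linear in log-compute:)

    # Toy illustration of power-law scaling: loss(D) = E + B / D**beta.
    # E, B and beta are made-up illustrative values, not a real fit.
    E, B, beta = 1.7, 410.0, 0.28

    def loss(tokens: float) -> float:
        return E + B / tokens**beta

    for d in [1e9, 2e9, 4e9, 8e9, 16e9]:
        print(f"{d:.0e} tokens -> loss {loss(d):.3f}")

    # Each doubling multiplies the reducible loss (loss - E) by 2**-beta (~0.82),
    # so equal absolute improvements demand exponentially more data/compute.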

We're well past the point of believing in creating a Machine God and asking Him for money. LLMs are good at some easily verifiable tasks like coding to a test suite, and can also be used as a sort-of search engine. The former is a useful new product; the latter is just another surface for ads.


Yes, they did, or at least some of them did. The claim was that AI would become smarter than us, and therefore be able to improve itself into an even smarter AI, and that the improvement would happen at computer rather than human speeds.

That is, shall we say, not yet proven. But it's not yet disproven exactly, either, because the AIs we have are definitely not yet smart enough to meet the starting threshold. (Can you imagine trying to let an LLM implement an LLM, on its own? Would you get something smarter? No, it would definitely be dumber.)

Now the question is, has AI (such as we have it so far) given any hint that it will be able to exceed that threshold? It appears to me that the answer so far is no.

But even if the answer is yes, and even if we eventually exceed that threshold, the exponential claim is still unsupported by any evidence. It could be just making logarithmic improvements at machine speed, which is going to be considerably less dramatic.


The original AGI timeline was 2027-2028; ads are an admission that the timeline is further out.

I think the problem is the formulation "If so, AGI can't be far behind". I think that if a model were advanced enough such that it could do Einstein's job, that's it; that's AGI. Would it be ASI? Not necessarily, but that's another matter.

The phone in your pocket can perform arithmetic many orders of magnitude faster than any human, even the fringe autistic savant type. Yet it's still obviously not intelligent.

Excellence at any given task is not indicative of intelligence. I think we set this sort of false goalpost because we want something that sounds achievable but is just out of reach at a given moment in time. For instance, at one time it was believed that a computer playing chess at the level of a human would be proof of intelligence. Of course it sounds naive now, but it was genuinely believed. It ultimately not being so is not us moving the goalposts, so much as us setting artificially low goalposts to begin with.

So for instance what we're speaking of here is logical processing across natural language, yet human intelligence predates natural language. It poses a bit of a logical problem to then define intelligence as the logical processing of natural language.


The problem is that so far, SOTA generalist models are not excellent at just one particular task. They are good at a very wide range of tasks, and a good score on one particular benchmark correlates very strongly with good scores on almost all other benchmarks, even esoteric benchmarks that AI labs certainly didn't train against.

I'm sure, without any uncertainty, that any generalist model able to do what Einstein did would be AGI, as in, that model would be able to perform any cognitive task that an intelligent human being could complete in a reasonable amount of time (here "reasonable" depends on the task at hand; it could be minutes, hours, days, years, etc).


I see things rather differently. Here's a few points in no particular order:

(1) - A major part of the challenge is in not being directed towards something. There was no external guidance for Einstein - he wasn't even a formal researcher at the time of his breakthroughs. An LLM might be able to be handheld towards relativity, though I doubt it, but given the prompt of 'hey find something revolutionary' it's obviously never going to respond with anything relevant, even with substantially greater precision specifying field/subtopic/etc.

(2) - Logical processing of natural language remains one small aspect of intelligence. For example - humanity invented natural language from nothing. The concept of an LLM doing this is a nonstarter since they're dependent upon token prediction, yet we're speaking of starting with 0 tokens.

(3) - LLMs are, in many ways, very much like calculators. They can indeed achieve some quite impressive feats in specific domains, yet they will also completely hallucinate nonsense on relatively trivial queries, particularly on topics where there isn't extensive data to drive their token prediction. I don't entirely understand your extreme optimism towards LLMs given this proclivity for hallucination. Their ability to produce compelling nonsense makes them particularly tedious to use for anything you don't already effectively know the answer to.


> I don't entirely understand your extreme optimism towards LLMs given this proclivity for hallucination

Simply because I don't see hallucinations as a permanent problem. I see that models keep improving more and more in this regard, and I don't see why the hallucination rate can't be arbitrarily reduced with further improvements to the architecture. When I ask Claude about obscure topics, it correctly replies "I don't know" where past models would have hallucinated an answer. When I use GPT 5.2-thinking for my ML research job, I pretty much never encounter hallucinations.


Hahah, well you working in the field probably explains your optimism more than your words! If you pretty much never encounter hallucinations with GPT then you're probably dealing with it on topics where there's less of a right or wrong answer. I encounter them literally every single time I start trying to work out a technical problem with it.

Well the "prompt" in this case would be Einstein's neurotype and all his life experiences. Might a bit long for the current context windows though ;)

Is it that weird that AI agents (and arguably also humans) are faster and more efficient to use if standardized APIs/UIs are available?

This is probably related to this [1] if anyone is wondering.

https://news.ycombinator.com/item?id=46527950


Most likely, as Adam directly "credited" their revenue issues to AI (which makes sense: Tailwind was making money by selling pre-made components, but now AI can generate those for you).


The AI issue was that their docs advertise their paid offerings. When AI plagiarizes the docs it doesn't include the ads.

When you say plagiarizes, do you mean they are publishing their own docs without ads? Or you mean when the AI is reading the docs instead of a person they ignore the ads?

People don't just ask AI to produce a Tailwind app, they also ask AI specific questions that are answered in the docs. When the AI regurgitates the answers from the docs they don't visit the actual docs. Like the Google answer box in search results stealing clicks from the pages that produce the content.

This would also be true if someone wrote a book about Tailwind. Are they "stealing" clicks too, then?

The answer is "it depends". If someone printed out the documentation and bound it together to sell without permission? Yes. The mere act of converting from one medium to another usually isn't transformative.

The test for writing a book is whether the author applied their own judgement in the creation of the book. Even if some explanations of concepts are inevitably similar the structure of the book, the example code, etc. will reflect the author's judgement and experience.

An LLM is incapable of authorial intent. It's not synthesizing the docs with a career of experience and the input of an editor. It's playing madlibs with the work of one or more prior authors.


I think it's the latter.

It was a problem with their revenue stream, which was documentation website -> banner for lifetime payment.

All customers already had lifetime access and couldn't pay more. Plus no one was reading the docs on the webpage anymore.

Recurring subscriptions, ads in AI products (think a Tailwind MCP server telling you about subscription features). Those are just two things I pulled out of the hat in a minute.


I can understand recurring subscriptions and ads in MCP being a bright line that the team doesn't want to cross. You will probably say it's a bad business model to not make everything a recurring charge and packed full of ads.

I've experienced this in my own life - I ran my own business and I had to choose between doing a worse job and enshittifying the product to make more money, or doing a good job but risking bankruptcy. I chose bankruptcy, because I believed strongly in doing a good job and not enshittifying the product. I don't regret it.


And since AI knows every Tailwind page, you probably do not need the paid offer for a decent looking page.

Well you always could just read the docs instead of using the paid offer. Took longer. Not anymore.


It’s both.

In which case one has to wonder if we need Tailwind at all anymore. To me, years ago, Tailwind was a great sell as a tool to work faster by typing less. The tradeoff is that the "inline styles" look awful and become a mess real quickly when too many of them are placed together (such-and-such has precedence or whatever, a media query for each single property, constantly translating between CSS and the Tailwind equivalent, etc.).

Now? Well, AI solves the entire issue of time spent typing. Class names always looked cleaner too. Additionally, CSS doesn't lag behind browser features and comes with the full power of the language.

Why bother with Tailwind anymore whatsoever?

They were extremely lucky that AI picked up Tailwind and kept it relevant; they should be keeping up with the times if they want to stay relevant. Instead their actions are those of someone cowering in fear, making sure they can put the last of the revenue into the coffer (rejecting a PR because they don't want AI to do better with Tailwind, while firing engineers, not to mention the big tantrum).

Let's go back to actual CSS. It is easier to read anyway, and it's now a modern tool with variables and all that; there's no longer a need to dumb it down.

Besides, if I wanted to pay for pre-made components, I would go with DaisyUI, which is agnostic to the frontend framework, unlike the paid components from Tailwind Labs, which strictly require you to use one of the JavaScript frameworks.


> then you have an extremely biased sample to gauge the overall mood of their populace.

I think that if a good chunk of the people who don't agree with their government are basically forced to emigrate, you don't get to turn around and say "See, everyone that remains loves the government!"


Because the general idea here is that image and video models, when scaled way up, can generalize like text models did[1], and eventually be treated as "world models"[2]: models that can accurately simulate real-world processes. These "world models" could then be used to train embodied agents with RL in a scalable way[3] (a toy sketch of that idea follows the references below). The video-slop and image-slop generators are just a way to take advantage of the current research in world models and get more out of it.

[1] https://arxiv.org/pdf/2509.20328

[2] https://deepmind.google/blog/genie-3-a-new-frontier-for-worl...

[3] https://arxiv.org/pdf/2509.24527
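
(To make the "use a world model to drive an agent" idea concrete, here is a minimal, hypothetical numpy sketch: a toy 1-D environment, a tiny dynamics model fitted to logged transitions, and a planner that chooses actions by imagining rollouts inside that learned model rather than in the real environment. The names real_step, model_step and plan are made up for illustration; this is a sketch of the general idea only, not the method in any of the papers above:)

    import numpy as np

    rng = np.random.default_rng(0)

    # "Real" environment: 1-D point that drifts by the chosen action plus noise.
    def real_step(state, action):
        return state + 0.1 * action + rng.normal(0, 0.01)

    # 1) Fit a tiny world model (here just linear regression) on logged transitions.
    states  = rng.uniform(-1, 1, 500)
    actions = rng.uniform(-1, 1, 500)
    nexts   = np.array([real_step(s, a) for s, a in zip(states, actions)])

    X = np.stack([states, actions, np.ones_like(states)], axis=1)
    w, *_ = np.linalg.lstsq(X, nexts, rcond=None)  # learned dynamics parameters

    def model_step(state, action):
        return w[0] * state + w[1] * action + w[2]

    # 2) Plan *inside the model*: random-shooting over imagined 5-step rollouts,
    #    with no queries to the real environment while deciding.
    def plan(state, goal, horizon=5, candidates=256):
        seqs = rng.uniform(-1, 1, (candidates, horizon))
        best, best_cost = None, np.inf
        for seq in seqs:
            s = state
            for a in seq:
                s = model_step(s, a)
            cost = abs(s - goal)
            if cost < best_cost:
                best, best_cost = seq, cost
        return best[0]  # execute only the first action (MPC-style)

    # 3) Act in the real environment, replanning at every step.
    s, goal = -0.8, 0.5
    for t in range(20):
        s = real_step(s, plan(s, goal))
    print(f"final state {s:.3f}, goal {goal}")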


This is extremely similar to Karpathy's idea of a "cognitive core"[1]: an extremely small model with near-zero encyclopedic knowledge but basic reasoning and tool-use capabilities.

[1] https://x.com/karpathy/status/1938626382248149433


I don't care about the supposed ecological consequences of AI. If we need more water, we build more desalination plants. If we need more electricity, we build more nuclear reactors.

This is purely a technological problem and not a moral one.


There were people before "AI", in other industries, who were like "I don't care about the ecological consequences of my actions". We as a society have turned them into law-abiding citizens. You will be there too. Don't worry. The time will come. You will be regulated. Same as cryptocurrencies, chemicals, oil and gas, …


If you were capable of time travel and you could go to the past and convince world governments of the evils of the oil and gas industries, and that their expansion should be prevented, would you have done it? Would you have prevented the technological and societal advances that came from oil and gas to avoid their ecological consequences?

If you answer yes, I don't think we can agree on anything. If you answer no, I think you are a hypocrite.


Oil and gas is regulated. Took some time but we have gotten there.


In what sense is it regulated? Is it regulated in any way that matters for this discussion? Have its ecological consequences been avoided by regulation? The oil and gas industries continue to be the biggest culprits of climate change, and that cannot be changed by law.

If data centers were "regulated" would that make you happy? Even if those data centers continued to use the same amount of electricity and the same amount of water?


Clean water is a public good; it is required for basic human survival, and it is needed to grow crops to feed people. Both of these uses depend on fairly cheap water, and in many, many places the supply of sufficiently cheap water is already constrained. This is causing a shortage for both basic human needs and agriculture.

Who will pay for the desalination plant construction? Who will pay for the operation?

If the AI companies are ready to pay the full marginal cost of this "new water", and not free-load on the already insufficient supply needed for more important uses, then fine. But I very much doubt that is what will happen.


The data center companies frequently pay for upgrades to the local water systems.

https://www.hermiston.gov/publicworks/page/hermiston-water-s... - "AWS is covering all construction costs associated with the water service agreement"

https://www.thedalles.org/news_detail_T4_R180.php - "The fees paid by Google have funded essential upgrades to our water systems, ensuring reliable service and addressing the City's growing needs. Additionally, Google continues to pay for its water use and contributes to infrastructure projects that exceed the requirements of its facilities."

https://commerce.idaho.gov/press-releases/meta-announces-kun... - "As part of the company’s commitment to Kuna, Meta is investing approximately $50 million in a new water and sewer system for the city. Infrastructure will be constructed by Meta and dedicated to the City of Kuna to own and operate."


For desalination, the important part is paying the ongoing cost. The opex is much higher, and it's not fair to just average that into the supply for everyone to pay.


Are any data centers using desalinated water? I thought that was a shockingly expensive and hence very rare process.

(I asked ChatGPT and it said that some of the Gulf state data centers do.)

They do use treated (aka drinking) water, but that's a relatively inexpensive process which should be easily covered by the extra cash they shovel into their water systems on an annual basis.

Andy wrote a section about that here: https://andymasley.substack.com/i/175834975/how-big-of-a-dea...


Read the comment I replied to: they proposed that since desalination is possible, there can be no meaningful shortage of water.

And yes, many places have plenty of water. After some capex improvements to the local system, a datacenter is often net-helpful, as it spreads the fixed cost of the water system out over more gallons delivered.

But many places don't have lots of water to spare.


Sorry, I'm not much impressed. First of all, their presence has a huge impact on the existing infrastructure, and let's face it, these corporations are not doing improvements out of the goodness of their hearts or because they care about the people who live in those areas. They do it because, pragmatically, they need those upgrades to function in the longer run. Secondly, it is indicative of the state of the current infrastructure and the fact that city halls don't have the money to improve the infrastructure just for the sake of the locals. I've seen a documentary about a family affected by a data centre built by Meta: https://www.youtube.com/watch?v=X3_YYf0ty4s

I totally identify with Rob Pike's reaction, because that's how I feel about generative AI every day, especially as more and more articles are published about the negative impact generative AI has on normal people, and especially on kids, who are very vulnerable to manipulation detrimental to their very existence. The tech bros don't give a rat's tail about us. You can reason and analyse all you want about who did what and whose fault it is, but at the end of the day it would not have happened if this technology didn't exist or hadn't been pushed so aggressively by all the big corporations. Personally, I hope the bubble will burst and generative AI will crawl back into the hellhole it came from.


That documentary is about the Georgia data center where construction errors damaged local water supplies - it's bad, but it's not evidence that continued operation of data centers is harmful to local water.


There was no indication that things will somehow be repaired. The video stated that the data center consumes 10% of the whole county's water needs. Here is another video for you: https://youtu.be/DGjj7wDYaiI . Locals are experiencing higher electricity rates, almost double, because of the data centres. And this is ongoing. I don't think these power companies will roll back rates any time soon.


I was trying to remember where I'd heard of "More Perfect Union" and then I realized they were the outfit Andy Masley specifically called out for misleading people on AI data center use a few months ago: https://andymasley.substack.com/p/more-perfect-union-is-dece...

Do you find his writing there unconvincing?


You just have to extrapolate the improvements in consistency of image models from the last couple of years and apply them to these kinds of video models. When in a couple of years they can generate videos of many physical phenomena that are nearly indistinguishable from reality, you'll see why they are called "world models".

