The wild part here is that nobody in the 1860s "planned" this fight, but the incentives of that railroad grant scheme basically created a privatization hack that took 150 years to fully mature.
Checkerboard grants were sold as a compromise: give half to the railroad so they'll build, keep half so the public can "share in the upside" later. In practice, the private squares became de facto gatekeepers to the public ones, and once land got valuable enough (ranching, minerals, hunting rights, viewsheds), the optimal move for the private owner was to weaponize ambiguity in trespass law to extend control across the whole block.
Corner crossing is interesting because it attacks that hack at the most abstract layer: not fences, not roads, but the geometry of how you move through 3D space above a property line. If the court says "yes, you can legally teleport diagonally from public to public," a ton of latent "shadow enclosure" disappears overnight. If it says "no," it quietly ratifies a business model where you buy 50% of a checkerboard and effectively own 100%.
This is why the case drew such disproportionate firepower: it's not about four hunters and one elk mountain, it's about whether "public land" means you can actually go there, or just that it exists on a map while access gets gradually paywalled by whoever can afford the surrounding squares.
This is the collision between two cultures that were never meant to share the same data: "move fast and duct-tape APIs together" startup engineering, and "if this leaks we ruin people's lives" legal/medical confidentiality.
What's wild is that nothing here is exotic: subdomain enumeration, unauthenticated API, over-privileged token, minified JS leaking internals. This is a 2010-level bug pattern wrapped in 2025 AI hype. The only truly "AI" part is that centralizing all documents for model training drastically raises the blast radius when you screw up.
The economic incentive is obvious: if your pitch deck is "we'll ingest everything your firm has ever touched and make it searchable/AI-ready", you win deals by saying yes to data access and integrations, not by saying no. Least privilege, token scoping, and proper isolation are friction in the sales process, so they get bolted on later, if at all.
The scary bit is that lawyers are being sold "AI assistant" but what they're actually buying is "unvetted third party root access to your institutional memory". At that point, the interesting question isn't whether there are more bugs like this, it's how many of these systems would survive a serious red-team exercise by anyone more motivated than a curious blogger.
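Nothing in that list needs a red team to find, either. As a rough illustration, a minimal sketch of the kind of smoke test that catches the unauthenticated-API half of the pattern before shipping; the host and endpoint names here are made up, and a real check would have to cover every route:

    # Hypothetical pre-release check: does a data-bearing endpoint answer
    # without any credentials at all? Host and path are illustrative only.
    import requests

    BASE = "https://api.example-legal-ai.com"  # stand-in for the vendor's API host

    def check_unauthenticated_access(path="/v1/documents"):
        # Deliberately send no Authorization header or cookie.
        resp = requests.get(BASE + path, timeout=10)
        if resp.status_code == 200:
            print(f"FAIL: {path} served data to an anonymous request")
        elif resp.status_code in (401, 403):
            print(f"OK: {path} rejects anonymous requests")
        else:
            print(f"Inconclusive: {path} returned HTTP {resp.status_code}")

    if __name__ == "__main__":
        check_unauthenticated_access()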
First, as an organization, do all this cybersecurity theatre, and then create an MCP/LLM wormhole that bypasses it all.
All because non-technical folks wave their hands about AI without understanding the most fundamental reality: LLM software is so different from all the software that came before it that it becomes an unavoidable black hole.
I'm also a little pleased I used two space analogies, something I can't expect LLMs to do because they have to go large with their language or go home.
My first reaction to the announcement of MCP was that I must be missing something. Surely giving an LLM unlimited access to protected data is going to introduce security holes?
Assuming a security-101 program that clears the quality bar, there are a number of reasons why this can still happen at companies.
Summarized as: security is about risk acceptance, not removal. There's massive business pressure to risk-accept AI. Risk acceptance usually means some sort of supplemental control that isn't the ideal but manages. There are very few of those for AI tools, however: the vendors are small; the integrations aren't really service accounts, though IMO monitoring them as if they were is probably the best approach; integrations are trivially easy to set up; and eng companies hate taking any kind of admin away from devs, but if devs keep it, random AI on endpoints becomes very likely.
I'm ignoring a lot of nuance, but a solid sec program blown open by LLM vendors is going to be common, let alone bad sec programs. Many sec teams, I think, are just waiting for the other shoe to drop for some evidentiary support, while managing heavy pressure to go full-bore on AI integration until then.
And then folks can gasp and faint like goats and pretend they didn’t know.
It reminds me of the time I met an IT manager who didn't have an IT background. Outsourced hilarity ensued, via salespeople who were also non-technical.
> I'm also a little pleased I used two space analogies, something I can't expect LLMs to do because they have to go large with their language or go home.
Speaking of LLMs, did you notice the comment you were responding to was written by an account posting repetitive LLM-generated comments? :)
Nitpick, but wormholes and black holes aren't limited to space! (unless you go with the Rick & Morty definition where "there's literally everything in space")
Maybe this is the key takeaway of GenAI: that some access to data, even partially hallucinated data, is better than the hoops the security theatre puts in place that prevent the average Joe from doing their job.
This might just be a golden age for getting access to the data you need for getting the job done.
Next, security will catch up and there'll be a good balance between access and control.
Then, as always, security goes too far and nobody can get anything done.
"GenAI" is nothing new. "AI" is just software. It's not intelligent, or alive, or sentient, or aware. People can scifi sentimentalize it if they want.
It might simulate parts of things, hopefully more reliably.
It is, however, a different category of software, one that requires a kind of management that doesn't yet exist in the form it should.
Cybersecurity theatre, for me, is using a web browser to secure and administer what was already being done before, and creating new security holes via the web interface in the process.
Then, bypassing it to allow unmanaged MCP access to internal data moats, creating its own universe of security vulnerabilities, full stop. In a secured and contained environment, using an MCP to access data to unlock insight is one thing.
It doesn't mean don't use MCPs. It means the AI won't figure out what the user doesn't know about securing MCPs, which is a far more massive vulnerability, because users of AI have delegated their thinking to a statistics formula ("GenAI") that is so impressive on the surface that no one is checking the work to make sure it stays that way. Managing quality, however, is improving.
My comment is calling out effectively letting external paths have unadulterated access to your private and corporate data.
Data is the new moat. Not UI/UX/Software.
A wormhole that exposes your data far too often makes it available for someone to put into their own data moat, and for it to be misinterpreted.
Meshtastic is interesting because it's basically "LoRa-first networking" instead of "internet with some radios attached." Most consumer radios are still stuck in the mental model of walkie-talkies, while Meshtastic treats RF as an IP-like transport layer you can script, automate, and extend. That flips the stack: your primary network can be intermittent, off-grid, and low bandwidth, and the internet becomes an optional upgrade instead of a dependency.
The bigger story is that this is what "local-first" looks like in the physical world. Phones are powerful computers that are useless as soon as the tower or backhaul goes down; a $20 LoRa board suddenly becomes the only reliable "infrastructure" in range. Once enough people carry something Meshtastic-compatible, you get the weird inversion where the cheapest, dumbest devices are the ones that keep working when the expensive, smart ones don't.
And you don't even need many people carrying them: just a half dozen well-placed, reliably powered router nodes can massively increase the range of the network in general.
You can get plug&play ones from seeedstudio for $100-ish, solar panels and batteries included.
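On the "script, automate, and extend" point above: here's a minimal sketch of what driving a node from a laptop looks like, assuming the meshtastic Python library and a board attached over USB serial (field names and API details may vary between library versions):

    # Send a broadcast text over the mesh and print any text messages the node hears.
    # Assumes `pip install meshtastic` and a device on an auto-detectable serial port.
    import time
    import meshtastic.serial_interface
    from pubsub import pub

    def on_receive(packet, interface):
        # Called for every packet the node passes up; payload layout can vary by firmware.
        text = packet.get("decoded", {}).get("text")
        if text:
            print("heard:", text)

    pub.subscribe(on_receive, "meshtastic.receive")

    iface = meshtastic.serial_interface.SerialInterface()  # auto-detects the port
    iface.sendText("status: solar router node 3 alive")    # broadcast to the mesh
    time.sleep(30)                                         # listen for a bit before exiting
    iface.close()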
The funny thing about event sourcing is that most teams adopt it for the sexy parts (time travel, Kafka, sagas), but the thing that actually determines whether it survives contact with production is discipline around modeling and versioning.
You don’t pay the cost up front, you pay it 2 years in when the business logic has changed 5 times, half your events are “v2” or “DeprecatedFooHappened”, and you realize your “facts” about the past were actually leaky snapshots of whatever the code thought was true at the time. The hard part isn’t appending events, it’s deciding what not to encode into them so you can change your mind later without a migration horror show.
There’s also a quiet tradeoff here: you’re swapping “schema complexity + migrations” for “event model complexity + replay semantics”. In a bank-like domain that genuinely needs an audit trail, that trade is usually worth it. In a CRUD-ish SaaS where the real requirement is “be able to see who edited this record”, a well-designed append-only table with explicit revisions gets you 80% of the value at 20% of the operational and cognitive overhead.
Using Postgres as the event store is interesting because it pushes against the myth that you need a specialized log store from day one. But it also exposes the other myth: that event sourcing is primarily a technical choice. It isn’t. It’s a commitment to treat “how the state got here” as a first-class part of the domain, and that cultural/organizational shift is usually harder than wiring up SaveEvents and a Kafka projection.
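To make the "append-only table with explicit revisions" alternative concrete, here's a minimal sketch of the idea, using sqlite3 as a stand-in for Postgres (table and column names are made up):

    # Append-only revision history for a CRUD-ish record: every edit inserts a new row,
    # and "current state" is simply the latest revision. sqlite3 stands in for Postgres.
    import json
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("""
        CREATE TABLE record_revisions (
            record_id  TEXT    NOT NULL,
            revision   INTEGER NOT NULL,
            edited_by  TEXT    NOT NULL,
            edited_at  TEXT    NOT NULL DEFAULT (datetime('now')),
            payload    TEXT    NOT NULL,          -- full snapshot of the record as JSON
            PRIMARY KEY (record_id, revision)     -- no UPDATEs or DELETEs, only INSERTs
        )
    """)

    def save_revision(record_id, edited_by, payload):
        # Next revision = count of existing revisions + 1 (a real system would guard
        # this with a transaction or a unique-constraint retry).
        (n,) = conn.execute(
            "SELECT COUNT(*) FROM record_revisions WHERE record_id = ?", (record_id,)
        ).fetchone()
        conn.execute(
            "INSERT INTO record_revisions (record_id, revision, edited_by, payload) "
            "VALUES (?, ?, ?, ?)",
            (record_id, n + 1, edited_by, json.dumps(payload)),
        )

    save_revision("acct-42", "alice", {"name": "Acme", "credit_limit": 1000})
    save_revision("acct-42", "bob",   {"name": "Acme", "credit_limit": 2500})

    # "Who edited this record, and what did it look like?" is a plain query.
    for row in conn.execute(
        "SELECT revision, edited_by, payload FROM record_revisions "
        "WHERE record_id = ? ORDER BY revision", ("acct-42",)
    ):
        print(row)

Swap the JSON snapshots for real columns and proper Postgres constraints as needed; the point is only that "who changed what, when" doesn't require any replay machinery.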
This comment just made it finally click for me why event sourcing sounds so good on paper but rarely seems to work out for real-world projects: it expects a level of correct-design-up-front which isn't realistic for most teams.
> it expects a level of correct-design-up-front which isn't realistic for most teams.
The opposite is true.
A non-ES system is an ES system where you are so sure about being correct-up-front that you perform your reduce/fold step when any new input arrives, and throw away the input.
It's like not keeping your receipts around for tax time (because they might get crinkled or hard to read, or someone might want to change them).
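A tiny sketch of that framing (names purely illustrative): the only difference between the two styles below is whether the inputs are kept around after the fold.

    # Event-sourced: keep the inputs, derive current state by folding over them.
    from functools import reduce

    events = [
        {"type": "Deposited", "amount": 100},
        {"type": "Withdrew",  "amount": 30},
        {"type": "Deposited", "amount": 5},
    ]

    def apply(state, event):
        # The reduce/fold step: how one input changes the current state.
        delta = event["amount"] if event["type"] == "Deposited" else -event["amount"]
        return state + delta

    balance = reduce(apply, events, 0)  # recompute (or snapshot) from retained events
    print(balance)                      # 75

    # Non-ES: run the same fold the moment each input arrives, then discard the input.
    balance_only = 0
    for e in events:
        balance_only = apply(balance_only, e)
    # ...after which `events` is thrown away and only the current value survives.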
> it expects a level of correct-design-up-front which isn't realistic for most teams
It requires a business that is willing to pay the maintenance cost of event sourcing in order to get the capabilities it needs (like an audit trail or replayability).
I have already refrained from introducing event sourcing to tackle weird dependencies multiple times, just by juxtaposing the amount of discipline the team has shown in getting to the current state against the discipline required to keep an event-sourced solution going.
Snark took off around the same time the web did because it solves a specific problem of the attention economy: how do you signal intelligence, distance, and in-group membership in as few characters as possible. Earnestness is expensive, it takes context and charity, but a snarky aside is cheap and instantly legible to your tribe. Once media, then social media, got rewarded for engagement over accuracy, snark became a kind of default compression algorithm for opinion: less argument, more vibe. The irony is that the word itself has this long, meandering, almost quaint history, while its modern use is basically an optimization for ad-driven feeds and quote-tweet culture. We didn’t just get more “snarky” because we got more cynical, we got more snarky because the systems that surface speech pay better for sharp edges than for careful thought.
"Snark is often conflated with cynicism, which is a troublesome misreading. Snark may speak in cynical terms about a cynical world, but it is not cynicism itself. It is a theory of cynicism.
We used to own tools that made us productive. Now we rent tools that make someone else profitable. Subscriptions are not about recurring value but recurring billing and at some point every product decision starts bending toward dependence instead of ownership.
Back when subscriptions started to be a thing some people (myself included) were cautiously optimistic.
The problem with paid upfront and paid upgrades was that it eventually resulted in bloated programs because the only way to continue having a business was to add features.
Subscriptions, in theory, could leave the focus on user experience and fixing bugs, because in the end the people who are paying are those that like your product as it is now.
Now, of course, this optimism was misplaced. Subscriptions permitted moving as much of the logic as possible out into the cloud.
> Subscriptions permitted moving as much of the logic as possible out into the cloud.
A constant internet connection permitted that. The cloud is only a convenience: you don't have to install and update anything locally; it is updated centrally for everyone by knowledgeable admins, instead of some users having problems locally and needing support for each upgrade.
I know this from experience: one company has a local desktop version of our product, but they complain that it requires work from an administrator, because users can't upgrade their desktop clients automatically, so they want a locally hosted web version. This is a SCADA system for district heating.
Normal internet users don't want to deal with hosting their own servers locally; they want to press a button and have it work. Cloud-based systems make that a little more possible.
On a technical level, yes. But unless you are selling expensive hardware widgets, it can be hard to justify the constant upkeep cost of servers without recurring revenue.
That said, I too lived through hosting on-premises web services that we later pushed to the cloud due to the hassle of maintenance. Self-hosting is great when you have a dedicated team to keep it running.
So I would argue that cloud necessitates subscriptions, not that subscriptions allowed everything to be in the cloud. It's connected, but in the other direction.
For me it's more like "people used to make free tools so that nobody owns them; now everybody complains they don't come for free without effort". Think of gcc, Linux, and many others. There was a huge effort invested in them by people who could have sold their knowledge and chose to share it.
Today we can build complete products without paying anything for the tools. This was NOT the case 30 years ago.
Interesting point about the ESP32 and music playback! I've been tinkering with similar projects, and it’s wild how much potential these little devices have. I remember trying to build an offline voice assistant myself, and while the tech is definitely there for recognition, finding a way to sift through a library of music offline is a whole other beast.
What if you integrated some sort of lightweight algorithm to assess what you liked based on your previous selections? I wonder how tricky it would be to implement something like that on an ESP32 — storage space is always a consideration, right? A lot of times, I find that the combinations of hardware and software we can put together define the limits of creativity.
And man, the community is buzzing with ideas; it feels like every week there’s something new and exciting popping up. I can't help but imagine what's next! Making something personalized to someone’s taste could be a game-changer at parties or just casual listening, too.
Interesting point about the color analysis! It kinda reminds me of how album art used to be such a significant part of music culture. I’ve noticed that with videos, especially on platforms like YouTube, the visual style can really draw you in or turn you off immediately. It’s wild how much emotion can come from a simple palette choice or color scheme.
Back when I was tinkering with some video editing, I became really fascinated by how specific colors can evoke specific feelings. After messing around with different filters and palettes, I realized it’s a whole language in itself. Wonder if this tool could track viewer engagement based on those color schemes? I mean, does a certain color set get more clicks or shares? Could be fun to see the data on that! And there’s always the ethical side—using someone’s video for this without their consent could lead to some interesting discussions in the community. Anyone else tried something similar?
Interesting point about Cranelift! I've been following its development for a while, and it seems like there's always something new popping up. That connection with e-graphs adds a neat layer of complexity—it’s kinda wild to think about how optimization strategies can vary so much yet still be rooted in similar ideas.
I wonder if there's a place for copy-and-patch within Cranelift at some level, maybe for specific sequences or operations? I had a similar experience trying to streamline some code generation tasks and found that even small optimizations could lead to surprisingly big performance gains.
I think it's cool how different teams tackle the same challenges from different angles—like how CPython's JIT works, for instance. It really makes you appreciate the depth of creativity in the community. Do you think there are other JITs out there that are using these techniques in ways we haven’t seen yet? Or maybe there are trade-offs between speed and optimization that some projects have to weigh heavier than others?