People always like telling stories. Books, comic strips, movies, they're all just telling a story with a different amount of it left up to the viewer's imagination. Lowering the barrier to entry for this type of stuff is so cool.
I think you have to be pretty pessimistic to not just think it's really cool. You can find issues with it for sure, and maybe argue that those issues outweigh the benefits, but it's hard to say it's not going to be fun for some people.
>Books, comic strips, movies, they're all just telling a story with a different amount of it left up to the viewer's imagination. Lowering the barrier to entry for this type of stuff is so cool.
This response just never feels true to me. Many of the most successful web comics are crude drawings of just stick figures and text[1], with potentially a little color thrown in[2], and about half of the videos I see on TikTok are just a person talking into the front-facing camera of their phone. The barrier to entry in the pre-AI world isn't actually that high if you have something interesting to say. So when I see this argument about lowering the barrier to entry, I can't stop myself from thinking that maybe the problem is that these people have nothing interesting to say, but no one can admit that to themselves, so they blame it on the production values of their content, which surely will be improved by AI.
I think people have a mistaken view of what makes some form of storytelling interesting. Perhaps this is my own bias, but something could be incredibly technically proficient or realistic and I could still find it utterly uninteresting. This is because the interesting part is what is unique about the perspective of the people creating it and the ideas they want to express, in relation to their own viewpoint and background.
Like you pointed out, many famous and widely enjoyed pieces of media are extremely simple in their portrayal.
>Perhaps this is my own bias, but something could be incredibly technically proficient or realistic and I could still find it utterly uninteresting. This is because the interesting part is what is unique about the perspective of the people creating it and the ideas they want to express, in relation to their own viewpoint and background.
I completely agree. And now that you mention this, I realize I didn't even point to the most obvious and famous examples of this sort of thing with artists like Picasso and Van Gogh.
If someone criticizes Picasso's or Van Gogh's lack of realism, they are completely missing the point of their work. They easily could have gone for a more photorealistic look, and occasionally did, but that isn't what made them important artists. What set them apart was the way they eschewed photorealism in order to communicate something deeper.
Similarly, creating art in their individual styles isn't interesting, because it shifts the primary goal from communication to emulation. That is all AI art really is: attempts at imitation, and imitation without iteration just isn't interesting from an artistic or storytelling perspective.
This "barrier of entry" rhetoric reads like a pure buzzword dreamed up by AI pushers with no actual meaning to it. The barrier has NEVER been lower to produce books or comic strips or anything else like that. Hell, look at xkcd, there's nothing technically challenging about it, it's quite literally just stick figures, yet it's massively popular because it's clever and well thought out.
What exactly is this enabling, other than the mass generation of low quality, throwaway crap that exists solely to fatten up Altman's wallet some more?
What about the era of flash cartoons? Remember "End of Ze World"? In a way that's throwaway crap. Or it could have been written as a comic strip, or animated manually. But Flash kinda opened up this whole new world of games and animation. AI is doing the same.
One that comes to mind is a sort of podcast-style series where two cats have a conversation, and in each "episode" there's some punchline where they end up laughing about some cat stereotype. Definitely low-quality garbage, but I guess what I mean by "barrier of entry" (sorry for the buzzword) is just that this is going to enable a new generation of content, memes, whatever you want to call it.
> People always like telling stories. Books, comic strips, movies, they're all just telling a story with a different amount of it left up to the viewer's imagination.
It's not just different amounts, but different kinds. A (good) comic strip isn't just the full text of a book plus some pictures.
I think it’s really cool… and I’m still concerned about the long-term implications of it. We’ve already seen a lot of TV get worse and worse (e.g. more reality TV) in a quest to reduce costs. It’s not difficult to imagine a reality where talented people can’t make great content because it’s cheaper to churn out bargain-basement AI slop.
The democratization of storytelling is probably the best argument in favor, I'd agree. Thank you for the response!
I do find the actual generation of video very cool as a technical process. I would also say that I can find a lot of things cool or interesting that I think are also probably deleterious to society on the whole, and I worry about the possibility of slop feeds that are optimized to be as addictive as possible, and this seems like another step in that direction. Hopefully it won't be, but definitely something that worries me.
But don't you have to have a monopoly, though? Farms provide the most value to the world, but there's so much competition that farming is commoditized, so as far as I know there are no super-valuable farms... Hopefully the same thing happens with autonomous cars, cloud computing, etc.
That's how competition should work. Every layer should have multiple providers until the companies get all of their profits squeezed away and users get the best possible price.
A "layer" itself represents costs that can be eliminated which can lead to lower prices for buyers.
That is why vertically integrated businesses can peel away business from existing non-vertically-integrated businesses with lower prices (legal liability notwithstanding). Sometimes it pays to have the business with more to lose insulated from liability by having a layer without much to lose.
I sometimes prefer to enjoy the benefits of vertical integration, like Apple being able to codevelop hardware and software unrestricted from having to provide a public API at every layer (e.g. airpods device switching), and being able to unilaterally dictate user experience guidelines to app developers (e.g. ask app not to track).
For sure! It's an interesting point. But from an economic point of view, it's better for consumers if there are clean boundaries and every layer is commoditized.
It feels less fair though. When everyone is driving x mph over the limit but only you get pulled over, it sucks. So I agree for efficiency of enforcement, but I'd rather see 100% enforcement (automated if possible), with more warnings and lower penalties.
That's a pretty extreme example, maybe the idea doesn't hold as much there. But yeah, if 99% of murders weren't prosecuted, the 1% who get charged might feel like they were singled out (and maybe they were, because of some bias or discrimination). Again, 100% enforcement is better.
It doesn't just "feel" less fair, it often is -- bc it's not truly random, it's selective enforcement which leads to things like "driving while black".
Unpopular opinion, but I actually like traffic enforcement cameras. They don't know what race you are, and they never end up escalating to using lethal force.
The problem with 100% enforcement is that it doesn't allow law enforcement any discretion, and then you end up having to officially change the speed limit, which would probably never happen.
Definitely true in practice, but I don't think we want discretion. What I mean, though, is that as a deterrent you can either have a "fair" fine that's enforced 100% of the time, or 2x the "fair" amount with 50% enforcement, etc. When it's 100x the "fair" amount with 1% enforcement, and you see everyone else getting away with it, it feels unfair.
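To put toy numbers on it (the $100 "fair" fine is made up purely for illustration): all three schemes have the same expected penalty per violation, they just distribute it very differently across drivers.

    # Expected penalty = enforcement rate x fine; same deterrent on
    # paper, very different variance for the individual driver.
    fair_fine = 100  # hypothetical dollar amount
    for rate, fine in [(1.00, fair_fine), (0.50, 2 * fair_fine), (0.01, 100 * fair_fine)]:
        print(f"{rate:.0%} enforcement, ${fine} fine -> expected ${rate * fine:.0f}")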
Traffic rules do require some discretion, though. If, e.g., you don’t allow crossing a double yellow line, but a car is broken down blocking the lane, does that mean the road is now effectively unusable until that car is towed? There are lots of examples.
But I’m with you on more enforcement. I’m totally fine with automated traffic cameras, and they were working great when I was in Shanghai: seemingly overnight, everyone stopped speeding on the highways, because your chances of getting a ticket were super high.
I haven't used Cursor or Claude much, how different is it from Copilot? I bounce between desktop ChatGPT (which can update VS Code) and copilot. Is there an impression that those have fallen behind?
IME, one of execution. Copilot is like having your cousin who works at Best Buy try to help you code: it knows what a computer is, and speaks English, but is pretty bad at both.
The story I've heard is that Cursor is making all their money on context management and prompting, to help smooth over the gap between "you know what I meant" and getting the underlying model to "know what you meant".
I haven't had enough experience with Claude or Claude Code to speak to those, but my colleagues speak highly of them.
I thought it was a nice clear summary, I wouldn't care if it was from an LLM. This summary should be at the top of their page. Maybe it'd be nice if there were a Chrome HN plugin that adds a summary like this to all articles on HN so you can get the context before either reading the article or skimming the comments.
It's weird that each paragraph starts with "Precious Plastics". It may be a sign of AI, or just an unusual writing style. The author claims it's not AI, and their previous comments don't look AI-generated, so I guess it's not.
> the software implementation is much less trivial
Aren't most geospatial tools just doing simple geometry? And therefore need to work on some sort of projection?
If you can do the math on the spheroidal model, OK, you get better results and it's easier to intuit, like you said, but it's much more complicated math. Can you actually do that today with tools like QGIS and GDAL?
Many do use simple geometry. This causes endless headaches for people who are not cartographers; they don’t expect that. The good geospatial tools usually support spheroidal models, but it is not the default; you have to know to explicitly make sure the tool uses them (many people assume that is the default).
An additional issue is that the spheroidal implementations have undergone very little optimization, perhaps because they are not the defaults. So when people figure out how to turn them on, performance is suddenly terrible. Now you have people that believe spheroidal implementations are terribly slow, when in reality they just used a pathologically slow implementation. Really good performance-engineered spheroidal implementations are much faster than people assume based on the performance of open source implementations.
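To make the gap concrete, here's a minimal sketch in Python using pyproj (the coordinates are just illustrative); the "planar" number is what you implicitly get when a tool treats projected coordinates as plain x/y:

    # Spheroidal vs naive planar distance (a sketch, not a benchmark).
    from math import hypot
    from pyproj import Geod, Transformer

    lon1, lat1 = -122.33, 47.61   # roughly Seattle
    lon2, lat2 = -0.13, 51.51     # roughly London

    # Spheroidal: geodesic distance on the WGS84 ellipsoid.
    _, _, geodesic_m = Geod(ellps="WGS84").inv(lon1, lat1, lon2, lat2)

    # Planar: Euclidean distance in Web Mercator (EPSG:3857).
    to_merc = Transformer.from_crs("EPSG:4326", "EPSG:3857", always_xy=True)
    x1, y1 = to_merc.transform(lon1, lat1)
    x2, y2 = to_merc.transform(lon2, lat2)
    planar_m = hypot(x2 - x1, y2 - y1)

    print(geodesic_m / 1000)  # ~7,700 km
    print(planar_m / 1000)    # ~13,600 km at these latitudes

Same two points, wildly different answers; the planar one is only meaningful in a projection suited to the area you're measuring.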
For what it's worth, you _can't_ use spherical approaches for most data. They're only used for points, in practice. Your spatial data is inherently stored/generated in ways that don't allow spherical approaches as soon as you start working with polygons, let alone things like rasters.
Yes, spherical representations of polygon data exist, but the data you import has already been "split" and undoing that is often impossible, or at best non-trivial. And then rasters are fundamentally impossible to represent that way.
Analysis uses projections for that reason. Spherical approaches aren't fundamentally "better" for most use cases. They're only strictly better if everything you're working with is a point.
A good point. Certainly for raster analysis it doesn't make sense.
But any type of vector data could be modeled on a sphere, right? Points, shapes, lines. And I say "better" because even the best-suited projection will have some small amount of distortion.
Either way, most things use planar geometry, so projections are necessary, and you need to have some understanding of how all that works.
You can model polygons on a sphere, but the issue is that the data you're starting with is already in a Cartesian representation. You actually can't easily convert between the two for complex geometries in cases where they cross the antimeridian/poles. So trying to do anything other than points is difficult in practice, unless you're natively generating data from scratch in a spherical representation, which is rare.
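A tiny illustration of the antimeridian case with shapely (which, like most planar libraries, reads coordinates as plain Cartesian x/y):

    # A box straddling the antimeridian: lon 170..-170, lat -5..5.
    # On the sphere this spans 20 degrees of longitude.
    from shapely.geometry import Polygon

    box = Polygon([(170, -5), (-170, -5), (-170, 5), (170, 5)])
    print(box.bounds)  # (-170.0, -5.0, 170.0, 5.0): spans nearly the whole map
    print(box.area)    # 3400.0 "square degrees" instead of 200

Which is why importers split such geometries in two at +/-180, and why undoing that split later is the hard part.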
This is not really a problem unless you’re trying to simulate 3D space orbits or physics. The crossover from geo INFORMATION systems to geo simulation systems is a bit rough, but projections and calculations on projected Cartesian space are enough for many typical questions, like distance, area, and routing. However, even topology support starts getting specialized, and the use cases are more niche. I think it’s asking a bit too much from a database/storage layer to do efficient calculations outside of those supported by GEOS. At this point, you might want to import the relevant data into higher-level applications.
Speaking for myself, I was not referring to any kind of simulation systems. This is a standard requirement of many operational geospatial data models, and there are a lot of these in industry. Anything that works from a projection is a non-starter; this causes demonstrable issues for geospatial analysis at scale or when any kind of precision is required. Efficient calculation just means efficient code; there is nothing preventing it from existing in open source beyond people writing it. Yes, you may be able to get away with projections if your data model is small, both geographically and in data size, but that does not describe every company.
It is entirely possible to do this in databases. That is how it is actually done. The limitations of GEOS are not the limitations of software; it is not a particularly sophisticated implementation (even PostGIS doesn’t use it for the important parts, last I checked). To some extent you are affirming that there is a lack of ambition in this part of the market in open source.
I wouldn't say it's correct to say that GEOS isn't particularly sophisticated. A lot of (certainly not all) GEOS algorithms are now ported from JTS, the primary author of which is Martin Davis (aka Dr JTS), who works at Crunchy Data, who provide the PostGIS extension. So the chain (again, mostly) goes JTS -> GEOS -> {PostGIS, Shapely} -> … . Martin's work is at the cutting edge of open-source GIS-focused computational geometry, and has been for a long time (of course, industry has its own tools, but that's not what we're talking about).
I can sort of see your point about the merits of global, spheroidal geometry, certainly from a user's perspective. But there's no getting around the fact that the geometry calculations are both slower (I'm tempted to say "inherently"…) and far more complex to implement (just look at how painful it is to write a performant, accurate r- or r*-tree for spherical coordinates) along every dimension. That's not going to change any time soon, so the projection workflow probably isn't going anywhere.
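For what it's worth, the projection workflow I mean usually looks something like this in the Python stack (a sketch; EPSG:32610 is UTM zone 10N, picked to suit the example point, and you'd choose whatever local CRS fits your data):

    # Project to a local CRS, do planar geometry there, come back.
    from pyproj import Transformer
    from shapely.geometry import Point
    from shapely.ops import transform

    to_utm = Transformer.from_crs("EPSG:4326", "EPSG:32610", always_xy=True)
    to_wgs84 = Transformer.from_crs("EPSG:32610", "EPSG:4326", always_xy=True)

    pt = Point(-122.33, 47.61)  # lon/lat
    pt_utm = transform(to_utm.transform, pt)

    # Planar ops are cheap and well-optimized in projected space; a
    # 1 km buffer here really is ~1 km, within the projection's distortion.
    buffered = transform(to_wgs84.transform, pt_utm.buffer(1000))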
You're right in that pretty much anything can be done via an API exposed via a database function. However, as they say... if it can be done, does that mean it should be? Now, I agree that having more sophisticated multi-dim calculations would be cool, but I've just rarely run into needing or even wanting to do this, over many projects, some involving accurate simulations. In practice, the database has always been for storing and querying data, which can be extremely accurate. I am probably the first person to abuse SQL, and I've written some 3D ECEF rotation code in SQL, but it was a hack for a deadline, not because it was the right thing to do. All the projects I've worked on had external models or components that did the "precise work" using complex code that I would never dare to make dependent on any database.
I'm actually curious, speaking for yourself, what kind of analysis you're doing where something like NAD83 or UTM does not give you enough precision? Is this actually "real world" geospatial data? If I have a soil model, I have a very localized analysis, and if I have a global climate model, we're talking kilometers for grid cells. In all these cases, the collected data has built-in geolocation error MUCH greater than that of most decent projections...
So, what analysis are you doing where you need centimeter precision at a global scale of thousands of kilometers? Sounds really interesting. The only time I've seen this is in space flight simulations, where the error really accumulates into the future.
Nice work! I also made a recipe app, but after seeing the recipes people generated on yours, I'm choosing not to share it at this time.
Seriously though, vibecoding is great. Even better (or maybe only feasible) for engineers who can dive in when we need to.
My app is iOS and I had never done any Swift. I do have AI generation but that was more of a fun afterthought. The main utility is extracting recipes from the web and having a synced shopping list that I can share with my wife.