Those big productions are production-design and above-the-line heavy. Most people on shoots are paid well. But look at the other side of the coin, the hardware and software supporting the industry, and the numbers are frankly "laughable". ARRI, the biggest name in the game on shoots, is worth ~$1b; RED was sold off for $85m; BMD could fetch as high as $3b; Autodesk's Media & Entertainment is <5% of its revenue and would, if it were standalone, also land at around a $1b valuation. Avid is the same at ~$1.5b, Grass Valley ~$1b-1.5b; Sony's ET&S is hard to gauge since it includes everything, but an estimate is ~$1.5b; Maxon ~$1.5b; all of Nikon $4b; Canon's camera division ~$15b...
And then you have Adobe, which gets ~65% of its revenue from the Creative segment ($14-15b of $23.77b for 2025), which would put it at a ~$70b-$100b valuation if it were standalone (5x-7x revenue).
That's how big Adobe is compared to literally everything else: its creative division alone is worth 3x-4x the entire rest of the industry combined.
You do have new contenders now with Epic (~$22b), Canva ($26b), and Figma ($20b), but I'm not convinced.. in certain segments for sure, but based on stock performance and revenue I'm still not confident.
Adobe may be the big dog, but that hasn't insulated it from Blackmagic eating more and more of its NLE market share with every passing year. BMD went from making a Hollywood color tool (niche for everyone else) to a full-blown NLE with almost 20% of the market share in less than a decade. Not to mention a very respectable camera line.
I remember hearing the phrase "round-tripping through Resolve" for years as some sort of magical incantation only somebody in post production understood. Now Resolve is fighting for Lightroom's space on top of being a full NLE. That's something!
Yeah, BMD has a fighting chance. It's interesting how they got there (software-wise): they bought cutting-edge, expensive tools and more or less gave them away for free-ish. Hardware sales gave them the war chest to do it with, and the small market for those niche tools let them buy them for peanuts. It's Autodesk's M&A playbook, but a very different strategy for capturing the market.
Even Apple had a horse in the race with Color, but once Resolve became free (or ridiculously cheap) it was game over.. even for more advanced tools like Lustre (which merged into Flame), FilmLight's Baselight, Scratch, etc. More than I can count died even before that.
Turns out if you can afford to give your tool to a wide audience with no budget, that's what they'll use (especially if it's any good), and they'll end up turning to you for more professional setups once/if they get into pro waters.
It does kind of make sense for Alphabet and Meta, considering their primary revenue drivers are advertising and communication platforms respectively. That would put them into Media & Communications; IT is just how they get there. For Amazon it's a bit more complicated, but it's still retail that drives its revenue, with AWS accounting for ~20% of it.. although in Amazon's case AWS also drives more than half of the profit.
Smoother ride; no wheels, so no road friction and fewer parts that wear; no need for shock absorbers either; no need for roads kept clear of snow and ice, which would make them both more practical and safer.. if we're talking Star Trek hovering, not rotor-blade / hovercraft noisy shit with rotating parts that waste a ton of energy.
You asked what advantage it would have over rolling rubber, not how one would do it (you wouldn't, with our current understanding of physics and energy density/portability). Advantages aside, a vehicle like that is still in the realm of scifi.
Yes, but it collides with our understanding of physics as well. Floating anything with significant weight in an air atmosphere requires constant power: at minimum you have to produce an upward force equal to the downward force, and depending on how efficiently that force is transferred you may need much more. A wheel made of rubber or steel (trains are freakishly efficient!) gives you that far more cheaply.
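A back-of-envelope sketch of why "just hover" is so expensive, using actuator-disk momentum theory (the mass, footprint area, and all numbers here are illustrative assumptions, not specs for any real vehicle):

```python
import math

def ideal_hover_power(mass_kg, disk_area_m2, rho=1.225, g=9.81):
    """Minimum (ideal, lossless) power to hover by pushing air down.
    Actuator-disk momentum theory: P = sqrt(T^3 / (2 * rho * A)),
    where T is the thrust needed, equal to the vehicle's weight."""
    thrust_n = mass_kg * g
    return math.sqrt(thrust_n**3 / (2 * rho * disk_area_m2))

# A ~1500 kg car-sized vehicle pushing air through a car-footprint area (~4 m^2):
p = ideal_hover_power(1500, 4.0)
print(f"{p / 1000:.0f} kW just to stand still")  # ~570 kW, before any losses
```

Compare that with the ~20 kW a wheeled car needs to cruise at highway speed: the wheel lets the road carry the weight for free, while the hover vehicle pays for it every second.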
Now, theoretically one could envision some energy source so abundant that constantly fighting gravity no longer matters, sure. But what most people seem to imagine is some magical tech that decouples the vehicle from the force of gravity while still coupling it to the planet (or whatever the next relevant frame of reference is) somehow. That kind of magical tech makes sense in films and scifi books, but if we just collect together what it would need to be, it's hard to envision any actual potential mechanism short of "we live in a Matrix and we learn to control the program".
That's a good question. When (if) we figure out how to practically travel at FTL speeds with a "warp drive", we might figure out the answer to this question too.
To be honest I think FTL is likelier than magical "sticks you to a fixed point in space relative to a rotating planet"-technology.
Sure, you can do that today by pushing air plus a global positioning system, so if we eventually invent some anti-gravity drive it might be used for the same thing. But whether such an entirely fictional device could then be made to (1) fit into a car-sized vehicle, (2) be powered by whatever the most powerful mobile energy source is at the time, and (3) become affordable to anyone outside the 0.01% is another question.
>To be honest I think FTL is likelier than magical "sticks you to a fixed point in space relative to a rotating planet"-technology.
I disagree. I'm no physicist, but given how gravity relates to the structure of spacetime in Einsteinian physics, and how many FTL ideas center on "warping" spacetime, I suspect the two are highly related; if FTL is possible at all, it'll also be related to artificial gravity.
At that level (Tier 2) we're basically talking plasmonics, right? Optics + antenna theory, for the uninitiated. SPR, quantum plasmonics, active nanophotonics.. that's some advanced shit from the (hopefully near) future, man. This is mostly in semiconductor research now, right? Maybe biology?
I know something about game engines, I know a lot more about real-time graphics, and I know even more about databases and their implementation.
While it's nice to get acquainted with all of them, especially easy to do with AI these days.. I do have to point out a few things visible outside the myopia induced by looking from one perspective into another, in this case from gamedev into the data world. In CliffsNotes form, since I don't have much time, but I also don't want this to be drive-by snark: there are hidden values here, imo, contrary to what I'm about to say.
What the gamedev-perspective myopia kind of ignores, in no particular order..
Persistence vs throughput - the goal of data management is not only execution speed. ACID, WAL, etc. are core features, not overhead per se. Sometimes you can forgo them, of course - if you understand what you're giving up.
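To make that "overhead" concrete, here's a toy write-ahead-log sketch (my own minimal illustration, not any real database's format): the cost is the append + fsync that must complete before a write is acknowledged, and that cost is exactly what buys durability across a crash.

```python
import os
import tempfile

# A fresh directory for our toy log file.
log_path = os.path.join(tempfile.mkdtemp(), "wal.log")

def log_write(record: bytes) -> None:
    """Append a record and force it to stable storage before returning.
    The fsync is the 'slow part' a game engine would never tolerate,
    and the part that lets a DB replay the log after a crash."""
    with open(log_path, "ab") as f:
        f.write(record + b"\n")
        f.flush()
        os.fsync(f.fileno())  # durable before we acknowledge the write

log_write(b"SET balance=100")
log_write(b"SET balance=90")

with open(log_path, "rb") as f:
    print(f.read().count(b"\n"))  # 2 records would survive a crash/replay
```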
The columnar fallacy, for lack of a better term - DoD and SoA are not secrets game engines use and everyone else forgot. This one in particular ignores the existence of OLAP: ClickHouse, Snowflake, and HN's darling DuckDB have been using SoA for _quite a while_. AoS is used in OLTP because updating a single record _is_ faster that way. Why one over the other - see OLAP vs OLTP.
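The layout trade-off in a few lines of illustrative Python (toy data, stand-in for real row/column stores):

```python
# Same three records, two layouts.
# AoS keeps a whole record together - cheap single-record update,
# like an OLTP row store. SoA keeps each field contiguous - cheap
# whole-column scans, like an OLAP column store.
aos = [{"id": 1, "x": 1.0, "y": 2.0},
       {"id": 2, "x": 3.0, "y": 4.0},
       {"id": 3, "x": 5.0, "y": 6.0}]

soa = {"id": [1, 2, 3],
       "x":  [1.0, 3.0, 5.0],
       "y":  [2.0, 4.0, 6.0]}

# OLTP-ish access: touch one record -> AoS is one locality-friendly write.
aos[1]["x"] = 99.0

# OLAP-ish access: aggregate one column -> SoA scans one contiguous array
# without dragging every other field through the cache.
total_x = sum(soa["x"])
print(total_x)  # 9.0
```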
Game engines obsess over the cache hierarchy (L1-L3), and in particular over avoiding cache misses via cache-line packing and prefetching. Databases operate in a different realm, that of I/O boundaries: disk and network, primarily. A database's bottleneck is often the speed of light (latency) or the bus (NVMe, for example). Packing structs gives you marginal benefit if you're waiting 10ms for a network packet or a disk read. What's being suggested are micro-optimizations in the context of a global system. Different realms of execution flows.
ECS is cool, everyone agrees on that. Relational databases are built on relational algebra, though. If you're running the same logic over many objects, ECS is going to be cool. If you want complex, arbitrary queries and set-based logic, you need some guarantees of consistency. Try complex many-to-many joins across disparate ECS components without a predefined _system_ and you're forgoing the promised performance. What DBs do is general-purpose; ECS ain't that.
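For instance, here's the kind of ad-hoc many-to-many question a relational engine answers declaratively with no predefined system, sketched with stdlib sqlite3 (schema and data are made up for illustration):

```python
import sqlite3

# "Which players share a guild with player 1?" - in an ECS you'd write
# a bespoke system for this; in SQL it's just a join the planner optimizes.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE players (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE guilds  (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE memberships (player_id INTEGER, guild_id INTEGER);
    INSERT INTO players VALUES (1,'ada'), (2,'bob'), (3,'cid');
    INSERT INTO guilds  VALUES (10,'red'), (20,'blue');
    INSERT INTO memberships VALUES (1,10), (2,10), (2,20), (3,20);
""")

rows = db.execute("""
    SELECT DISTINCT p2.name
    FROM memberships m1
    JOIN memberships m2 ON m1.guild_id = m2.guild_id
                       AND m2.player_id != 1
    JOIN players p2      ON p2.id = m2.player_id
    WHERE m1.player_id = 1
""").fetchall()
print(sorted(r[0] for r in rows))  # ['bob']
```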
Finally, yes, game engines manage thousands, even millions, of entities in a localized 3D space. Databases manage billions, trillions, of records across petabytes of distributed storage. So, what gives? The entity model does not scale to distributed systems because of the CAP theorem: there is no such thing as instant consistency in a globally distributed system without violating physics or sacrificing availability. TBH, some middle ground, localized to a single machine, might give the idea legs, but at what cost?
Don't let that shoot you down though. If there's still a kernel of an idea tingling after that chat with AI, go ahead and drive through that CAP wall!