I had a similar problem when I was making a tool that processes a lot of data in the browser. I'd naively made a large array of identical objects, each holding a bunch of numeric fields.
Turns out, this works completely fine in Firefox. However, in Chrome, it produces millions of individual HeapNumber allocations (why is that a thing??) in addition to the objects, uses GBs of RAM, and is slow to access, making the whole thing unusable.
Replacing it with a SoA structure using TypedArray made it fast in both browsers and fixed the memory overhead in Chrome.
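For anyone who wants a concrete picture, here's a minimal sketch of the change in plain JS (the field names are just illustrative):

    // Before: array-of-structs, one JS object per record. In Chrome each
    // double-valued field can end up as a separate HeapNumber allocation,
    // which is exactly the blow-up described above.
    const n = 1_000_000;
    const points = [];
    for (let i = 0; i < n; i++) {
      points.push({ x: Math.random(), y: Math.random(), mass: Math.random() });
    }

    // After: struct-of-arrays, one flat Float64Array per field. The doubles
    // live inline in the backing store, so there's nothing to box.
    const xs = new Float64Array(n);
    const ys = new Float64Array(n);
    const masses = new Float64Array(n);
    for (let i = 0; i < n; i++) {
      xs[i] = Math.random();
      ys[i] = Math.random();
      masses[i] = Math.random();
    }

    // Access becomes an index instead of a property lookup:
    let sum = 0;
    for (let i = 0; i < n; i++) sum += masses[i];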
As someone more familiar with systems programming than web, the concept of creating individual heap allocations for a single double baffles me beyond belief. What were they thinking?
Yeah, this is a historical design difference between Firefox's SpiderMonkey JS engine and Chrome's V8.
SpiderMonkey uses (I'm simplifying here; there are cases where this isn't true) a trick where all values are 64 bits wide, and anything that isn't a double-precision float gets smuggled inside the payload bits of a NaN. This means you can store a double, a float32, an int, or an object pointer all in a field of the same size. Great, but it creates some problems and complications for asm.js/wasm, because you can't rely on all the bits of a NaN surviving a trip through the JS engine.
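You can actually poke at the raw bits from JS itself. Here's a rough sketch of the idea; the tag value below is made up for illustration, it's not SpiderMonkey's real layout:

    // Alias a Float64Array with a BigUint64Array to see a double's 64 bits.
    const f64 = new Float64Array(1);
    const u64 = new BigUint64Array(f64.buffer);

    f64[0] = 3.14;
    console.log(u64[0].toString(16)); // just the IEEE 754 bits of 3.14

    // A quiet NaN sets all exponent bits plus the top mantissa bit, leaving
    // ~51 low bits of payload that real arithmetic never produces:
    f64[0] = NaN;
    console.log(u64[0].toString(16)); // typically 7ff8000000000000

    // NaN-boxing hides non-double values in that payload (illustrative tag):
    const TAG_INT = 0x7ff9_0000_0000_0000n; // made-up tag, not SpiderMonkey's
    const boxed = TAG_INT | 42n;            // an int smuggled inside a NaN
    console.log(boxed & 0xffff_ffffn);      // 42n -- recovered by masking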
V8 instead allocates doubles on the heap. I forget the exact historical reason why they do this. IIRC they also do some fancy stuff with integers - if your integer is 31 bits or less it counts as a "smi" in that engine, or small int, and gets special performance treatment. So letting your integers get too big is also a performance trap, not just having double-precision numbers.
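For illustration, the Smi trick is low-bit tagging rather than NaN payloads: a machine word with its low bit clear is an integer shifted left by one, and a set low bit marks a heap pointer. A toy sketch of the 31-bit encoding (not V8's actual code, and the real encoding differs between 32-bit and 64-bit builds):

    // Toy Smi encoding: the value lives in the upper 31 bits of a 32-bit
    // word, and a zero low bit means "integer", not "pointer".
    const isSmiRepresentable = (n) =>
      Number.isInteger(n) && n >= -(2 ** 30) && n < 2 ** 30;

    const encodeSmi = (n) => n << 1;        // tag bit is the implicit zero
    const decodeSmi = (w) => w >> 1;        // arithmetic shift keeps the sign
    const isSmi     = (w) => (w & 1) === 0; // pointers have the low bit set

    const w = encodeSmi(1234);
    console.log(isSmi(w), decodeSmi(w));      // true 1234
    console.log(isSmiRepresentable(2 ** 29)); // true: stays a fast Smi
    console.log(isSmiRepresentable(2 ** 31)); // false: boxed on the heap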
EDIT: I found something just now that suggests Smis are now 32 bits instead of 31 bits in 64-bit builds of V8, so that's cool!
I suspect it's just circumstantial - two different design approaches. Both of the approaches have their advantages and disadvantages.
IMHO the bigger issue with NaN-boxing is that on 64-bit systems it relies on the address space only needing <50 bits or so, as the discriminator is stored on the high bits. It's ok for now when virtual address spaces typically only need 48 bits of representation, but that's already starting to slip with newer systems.
On the other hand, I love the fact that NaN-boxing basically lets you eliminate all heap allocations for doubles.
I actually wrote a small article a while back on a hybrid approach called Ex-boxing (exponent boxing), which tries to get at the best of both worlds: decouple the boxing representation from virtual address significant bits, and also represent most (almost all) doubles that show up at runtime as immediates.
> IMHO the bigger issue with NaN-boxing is that on 64-bit systems it relies on the address space only needing <50 bits or so, as the discriminator is stored on the high bits.
Is this right? You get 51 tag bits, of which you must use one to distinguish pointer-to-object from other uses of the tag bits (assuming Huffman-ish coding of tags). But objects are presumably a minimum of 8-byte sized and aligned, and on most platforms I assume they'd be 16-byte sized and aligned, which means the low three (four) bits of the address are implicit, giving 53 (54) bit object addresses. This is quite a few years of runway...
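Spelling the arithmetic out (back-of-the-envelope, not any particular engine's actual layout):

    // Pointer bit budget under NaN-boxing, per the reasoning above:
    const payloadBits = 51;   // quiet-NaN payload usable for tags + data
    const pointerTag = 1;     // one bit to mark "this payload is a pointer"
    const align8 = 3;         // low bits implied by 8-byte alignment
    const align16 = 4;        // or by 16-byte alignment

    console.log(payloadBits - pointerTag + align8);  // 53-bit addresses
    console.log(payloadBits - pointerTag + align16); // 54-bit addresses
    // vs. the 48-bit virtual address spaces common today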
There's a bit of time, yes, but for an engine that relies on this format (e.g. SpiderMonkey), the assumptions associated with the value-boxing format will have leaked into the codebase all over the place. It's the kind of thing that's far less painful to take care of before you're forced to than after.
But fair point on the aligned pointers - that would give you some free bits to keep using, but it gets ugly.
You're right about the 51 bits - I always get mixed up about whether it's 12 bits of exponent, or the 12 includes the sign. Point is it puts some hard constraints on a pretty large number of high bits of a pointer being free, as opposed to an alignment requirement for low-bit tagging which will never run out of bits.
Release your movie in native 120 fps and I'll turn off motion interpolation. Until then, minor flickering artifacts when it fails to resolve motion, or minor haloing around edges of moving objects, are vastly preferable to unwatchable judder that I can't even interpret as motion sometimes.
Every PC gamer knows you need high frame rates for camera movement. It's ridiculous that the movie industry is stuck at 24 fps like it's the stone age, only because of some boomers screaming about a "soap opera" effect they invented in their brains. I'd imagine most Gen Z people don't even know what a "soap opera" is supposed to be; I had to look it up the first time I saw someone mention it.
My LG OLED G5 literally provides a better experience than going to the cinema, due to this.
I'm so glad 4k60 is being established as the standard on YouTube, where I watch most of my content now... it's just movies that are inexplicably stuck in the past...
> Every PC gamer knows you need high frame rates for camera movement.
Obviously not, because generations of people saw "movement" at 24 fps. You're railing against other people's preferences, but presenting your personal preferences as fact.
Also, there are technical limitations in cameras that aren't present in video games. The higher the frame rate, the less light that hits the sensor per frame. To compensate, not only do you need better sensors, but you probably need to change the entire way that sets, costumes, and lighting are handled.
The shift to higher frame rates will happen, but it's gonna require massive investment to shift an entire industry, and time to learn what looks good. Cinematographers have higher standards than random YouTubers.
> You're railing against other people's preferences, but presenting your personal preferences as fact.
It is a fact that motion is smoother at 120 fps than 24, and therefore easier to follow on screen. There are no preferences involved.
> Also, there are technical limitations in cameras that aren't present in video games.
Cameras capable of recording high-quality footage at this refresh rate already exist, and their cost is not meaningful compared to the full budget of a movie (and of course a camera can be used for more than one production).
> It is a fact that motion is smoother at 120 fps than 24
Yes, but that's not what you wrote. "unwatchable judder that I can't even interpret as motion sometimes" is false, unless you have some super-rare motion processing disorder in area MT of your brain.
> Cameras capable of recording high quality footage at this refresh rate already exist and their cost is not meaningful compared to the full budget of a movie
Yes, but that's not what I wrote. The cost to handle it is not concentrated in the camera itself. Reread my comment.
Recording and storing 120 fps video, and editing and rendering effects at that frame rate, is costly, and that cost is very meaningful to take into account when creating movies.
You've not met a real hater if you think this, and should consider yourself very lucky. That was just a frustrated user.
A real hater will obsessively use your product, yet simultaneously attempt to find any reason whatsoever to hate your product (or you), no matter how small, and be extremely vocal about it, to the point of founding new communities centered on complaining about you. Should you address the issue, they will silently drop that one from their regularly posted complaints and find or invent a new one. Any communication you send to them will be purposefully misinterpreted and combined with half truths and turned against you.
Some of these people probably have genuine mental illnesses that makes them act like this.
Just to be clear, this particular user didn't ever become a fountain of sweetness and light - they were pretty touchy and cranky at the best of times, if I remember right (it's been over a decade), but accepting them as they were let them become a contributor instead of toxic.
Honestly I have thick enough skin that I'm happy to let them be themselves as long as we can reach a basis of professionalism and get a positive result.
You're right that there are many people you can't reach, and trying is a waste of effort, but I think an appreciation for human dignity requires me to at least make the attempt, and sometimes you're rewarded.
Yeah, which is why I think it's important to draw a line between a frustrated user (has genuine issues with his use of the product, can be turned by fixing them), a casual troll (reposts some bad feedback because he thinks it's funny), and a hater (malicious, bad faith, communication not recommended).
With my old SaaS app (now sold, and then the new owner killed it) I used to love getting angry emails. Almost every time, the user ended up turning into an advocate and product champion. I don't know if they were "haters" per se, but they were almost always surprised to get an email back from a real person who cared about their concerns, and over time they changed their opinion. That may just be an artifact of early SaaS in 2010. Not sure if the same thing can happen these days.
I've seen pathological users like the sibling comment describes. I don't want to out any community in particular, but some of the subreddits surrounding open source games can get pretty yikes.
Not saying you're wrong to find silver linings, just wanted to corroborate that sometimes that is insufficient (as far as I can tell, given impassioned haterness germinating for years).
I completely agree, this is perhaps the least sensible part of common English syntax.
"Hello," he said.
"Hello", he said.
Only one of these makes actual sense as a hierarchical grammar, and it's not the commonly accepted one! If enough of us do it correctly perhaps we can change it.
I’ve always wondered about this. I guess typographically they should just occupy the same horizontal space, or at least be kerned closer in such a way as to prevent the ugly holes without cramming.
It’s true, though, that the hierarchically wrong option looks better, IMHO. The whitespace before the comma is intolerable.
This is an interesting case where I am of two autistic hearts, the logical one slowly losing vehemence as I get older and become more accepting of traditions.
Is indie music no longer indie if they sell really well? Is an indie film no longer indie if it gets too many awards? This kind of redefinition of indie as "poor people" is ahistorical and unhelpful. Indie means independent of a major studio. Which they are.
If your studio has enough resources that it could easily be its own publisher, the definition "independent from a publisher" is no longer of much use. It's also wrong: this project did have a publisher and various other investment in it.
The founders of this studio come from rich family backgrounds, to think they have anything in common with what the average person understands as an "indie game" developer is laughable. For example, they supposedly rented an office to work in, in a building owned by the founder's father's real estate firm, of course.
Projects like these used to be called AA games. It's a fantastic game, it doesn't have to be indie to be good.
I'm always a bit baffled by this project. While it's cool that he can create hundreds of hours of content for his puzzle game, does anyone actually want to play a single puzzle game for this long? Would it not be better to make a few different, shorter, higher quality experiences?
I agree. His first and second games are based on deep themes and unique concepts. He explores the medium of video games in new ways. The selling point of this game seems to be "largest puzzle game ever". I'm excited to see if there are deeper ideas once I play it, though.
One of Blow's favorite games is Stephen's Sausage Roll. I personally didn't enjoy it because the intellectual content of that kind of puzzle is, as far as I can tell, exploring a large tree of sausage-roll states. And while I had a few aha moments playing it, at the end of the day the way you do that is just to try all the possible states.