^- This lets us pass arbitrary starting data to a new thread.
I don't know whether this counts as "very few use cases".
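The classic instance is pthread_create, whose last parameter is exactly this void pointer, handed untouched to the new thread's start routine:

/* POSIX declaration (restrict qualifiers omitted): arg is passed
 * verbatim to start_routine in the new thread. */
int pthread_create(pthread_t *thread, const pthread_attr_t *attr,
                   void *(*start_routine)(void *), void *arg);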
The Memory Ownership advice may be good, but why are you allocating in the copy routine if the caller is responsible for freeing it anyway? This dependency on the global allocator makes the program design unnecessarily inflexible. I also don't see how the caller is supposed to know how to free the memory. What if the data structure is more complex, such as a binary tree?
It's preferable to have the caller allocate the memory.
void insert(BinTree *tree, int key, BinTreeNode *node);
^- this is preferable to the variant where it takes the value as the third parameter. Of course, an intrusive variant is probably the best.
If you need to allocate for your own needs, then allow the user to pass in an allocator pointer (I guessed on function pointer syntax):
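/* Sketch only; the ctx parameter is my own addition, so the caller can
 * thread allocator state (an arena, a pool, ...) through without globals: */
void insert(BinTree *tree, int key, int value,
            void *(*alloc)(size_t size, void *ctx), void *ctx);

In practice you'd probably bundle the function pointer and ctx into a small allocator struct.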
void* is a problem because the caller and callee need to coordinate across the encapsulation boundary, thus breaking it. (Internally it would be fine to use - the author could carefully check that qsort casts to the right type inside the .c file)
> What if the data structure is more complex, such as a binary tree?
I think that's what the author was going with by exposing opaque structs with _new() and _free() methods.
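Something like this is how I read it (names illustrative), with the struct definition never leaving the .c file:

typedef struct BinTree BinTree;       /* definition lives only in bintree.c */

BinTree *bintree_new(void);           /* allocates however it likes */
void     bintree_free(BinTree *tree); /* and knows how to undo it   */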
But yeah, his good and bad versions of strclone look more or less the same to me.
If you don't pass the size, the allocation subsystem has to track the size somehow, typically by either storing the size in a header or partitioning space into fixed-size buckets and doing address arithmetic. This makes the runtime more complex, and often requires more runtime storage space.
If your API instead accepts a size parameter, you can ignore it and still use these approaches, but it also opens up other possibilities that require less complexity and runtime space by relying on the client to provide this information.
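Concretely, the contrast is between these two shapes of API (names are mine):

#include <stddef.h>

/* The allocator must track the size itself (header, size classes, ...): */
void my_free(void *ptr);

/* The caller repeats the size it already knows, so the allocator
 * can skip per-allocation bookkeeping entirely: */
void my_free_sized(void *ptr, size_t size);

(C++14's sized operator delete and Rust's dealloc(ptr, layout) exist for exactly this reason.)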
The way I've implemented it now was indeed to track the size in a small header above the allocation, but this was only present in debug mode. I only deal with simple allocators like a linear, pool, and normal heap allocator. I haven't found the need for something super complex yet.
You can prevent buffer overflows even when you don't use a VM. E.g. it's perfectly legal for your C compiler to insert checks. But there are also languages like Rust or Haskell that demand an absence of buffer overflows.
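For illustration, this is what such a compiler-inserted check boils down to (hand-written here; Clang/GCC's -fsanitize=bounds is in the same spirit):

#include <stddef.h>
#include <stdlib.h>

/* Illustrative only: trap instead of reading out of bounds. */
int checked_index(const int *a, size_t len, size_t i)
{
    if (i >= len)
        abort();
    return a[i];
}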
You can design a VM that still allows for buffer overflows. E.g. you can compile C via LLVM (the Low Level Virtual Machine) and still get buffer overflows.
Any combination of VM (Yes/No) and buffer-overflows (Yes/No) is possible.
I agree that using a VM is one possible way to prevent buffer overflows.
The JVM being a stack-machine is probably the least controversial thing about it. Wasm, CPython and Emacs all also have a stack-based bytecode language. The value, of course, comes from having a generic machine that you can then compile down into whatever machine code you want. Having a register machine doesn't seem very useful, as it's completely unnecessary for the front-end compiler to minimize register usage (the backend compiler will do that for you).
Specifying classpath isn't fun, I agree with that. Launch performance isn't good either, largely as a consequence of the JVM's high degree of dynamism and its JIT compiler, though of course there are ways around that (Project Leyden).
> I've written entire programs in JVM bytecode, without a compiler, and I see very little of value in it
I agree, I also see very little value in manually writing JVM bytecode programs. However, compiling into the JVM classfile format? Pretty darn useful.
You're saying $8 billion to cover interest, another commenter said 80, but the actual article says "$8 trillion of CapEx means you need roughly $800 billion of profit just to pay for the interest". Eight HUNDRED billion. Where does the eight come from, from 90% of these companies failing to make a return? If a few AI companies survive and thrive (which tbh, sure, why not?) then we're still gonna fall face down into concrete.
I think it's in the realm of maybe-in-Silicon-Valley. That's 5000 dollars. Look at this statement:
> Let's say only about 1/3 of the world's adult population is poised to take advantage of paid tools enabled by AI
2/3 of the world's population is between 15 and 65 (roughly: 'working age'), so with those numbers that's half of the working-age world (1/3 ÷ 2/3 = 50%) poised to pay for AI tools. India's GDP per capita is USD 2,750, and now the price tag is even higher than 5k.
I don't know how to say this well, so I'll just blurt it out: I feel like I'm being quite aggressive, but I don't blame you or expect you to defend your statements or anything, though of course I'll read what you've got to say.
> Which, in a subject like algebra, is extremely suspicious ("how could both of them get the exact same WRONG answer?").
In Germany, the traditional sharp-tongued answer of pupils to the question "How could both of you get the exact same WRONG answer (in the test)?" is: "Well, we both have the same teacher." :-)
This is very similar to how Java's object monitors are implemented. In OpenJDK, the markWord uses two bits to describe the state of an Object's monitor (see markWord.hpp:55). On contention, the monitor is said to become inflated, which basically means spinning up a heavier-weight lock and recording how to find it.
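Roughly this, from memory rather than Hotspot's actual code (see markWord.hpp for the real layout):

#include <stdint.h>

/* The low two bits of the mark word tag the monitor state: */
enum {
    locked_value   = 0x0,  /* stack-locked: mark points at a lock record */
    unlocked_value = 0x1,  /* normal header, no lock                     */
    monitor_value  = 0x2,  /* inflated: mark points at an ObjectMonitor  */
    marked_value   = 0x3   /* used by the GC                             */
};

static unsigned lock_state(uintptr_t mark) { return mark & 0x3; }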
I'm a bit disappointed though, I assumed that you had a way of only using 2 bits of an object's memory somehow, but it seems like the lock takes a full byte?
It's just that if you use the WTF::Lock class, then you get a full byte, simply because the smallest possible size of a class instance in C++ is one byte. But there's a template mixin thing you can use to get it down to two bits (you tell the mixin which byte to steal the two bits from, and which two bits).
I suspect the same situation holds in the Rust port.
I am very familiar with how Java does locks. This is different. Look at the ParkingLot/parking_lot API. It lets you do much more than just locks, and there's no direct equivalent of what Java VMs call the inflated or fat lock. The closest thing is the on-demand-created queue keyed by address.
Are you familiar with the new LightweightSynchronizer approach, with an indirection via a table instead of overwriting the markWord? I'd say it has pushed the ParkingLot approach and Java's (Hotspot's, really) closer to each other than before. I think the table approach in Java could be encoded trivially into the ParkingLot API, and maybe the opposite too, though the latter would obviously be a lot more ham-fisted.
The idea is that six bits in the byte are free to use as you wish. Of course you'll need to implement operations on those six bits as CAS loops (which nonetheless allow for any arbitrary RMW operation) to avoid interfering with the mutex state.
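A minimal C11 sketch of such a loop; the mask names and bit layout are mine, not WTF::Lock's:

#include <stdatomic.h>
#include <stdint.h>

#define LOCK_MASK ((uint8_t)0x03)  /* two bits owned by the lock  */
#define USER_MASK ((uint8_t)0xFC)  /* six bits free for user data */

/* Store `value` into the six user bits without disturbing the lock bits.
 * The same shape works for any read-modify-write on those bits. */
static void set_user_bits(_Atomic uint8_t *word, uint8_t value)
{
    uint8_t old = atomic_load_explicit(word, memory_order_relaxed);
    uint8_t desired;
    do {
        desired = (uint8_t)((old & LOCK_MASK) | (value & USER_MASK));
    } while (!atomic_compare_exchange_weak_explicit(
                 word, &old, desired,
                 memory_order_release, memory_order_relaxed));
}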
In a single-threaded context, I think 'giant array of bytes' is still correct? Performance, not so much.
> This part of the blog didn't seem very accurate.
It was a sufficient amount of understanding to produce this allocator :-). I think that if beginner[0] projects get posted and upvoted, we have to accept that the author's grasp may be lacking some nuance.
[0] author might be a very good programmer, just not familiar with this particular area!