You said: "You won't get 'extreme performance' from C++ because it is buried under the weight of decades of compatibility hacks."
Now your whole comment is about vector behavior. You haven't talked about what 'decades of compatibility hacks' are holding back performance. Whatever behavior you want from a vector is not a language limitation.
You could write your own vector and be done with it, although I'm still not sure what you mean, since once you reserve capacity a vector still doubles capacity when you overrun it. The reason this is never a performance obstacle is that if you're going to use more memory anyway, you reserve more up front. This is what any normal programmer does and they move on.
I've never used ISPC. It's somewhat interesting although since it's Intel focused of course it's not actually portable.
I guess now the goal posts are shifting. First it was "C++ as a language has performance limitations", now it's "Rust has a vector with a function I want, and also I want SIMD stuff that doesn't exist. It does exist? Not like that!"
Try to stay on track. You said there were "decades of compatibility hacks" holding back C++ performance then you went down a rabbit hole that has nothing to do with supporting that.
Actually, my first comment explicitly talks about move, but you decided you don't care that C++ move has a perf leak and claimed that because you didn't notice it, it doesn't count. By that logic somebody could equally claim Python has extreme performance, or Visual Basic.
I deliberately picked two examples from ends of the spectrum, a core language feature and then a pure library type. Both in some ways of equal practical performance necessity.
> The reason this is never a performance obstacle is that if you're going to use more memory anyway, you reserve more up front.
We often don't know how big a growable container will finally be while we're adding things to it, so without travelling back from the future to tell ourselves how big it will grow, reserving "up front" is useless, even though we do know how much we're adding right now. This is the essence of the defect in std::vector's API.
Here's the demonstration I wrote about in my previous post. No, I am not going to build a replacement for std::vector with the correct API in C++, so this is Rust, where the appropriate API already exists. It provides a policy knob, so you can pick Bjarne, Cpp or Best in the main function and see what happens for yourself.
> you just decided you don't care that C++ move has a perf leak and claimed that you didn't notice so therefore it doesn't count, which I guess could equally apply to somebody who wants to claim Python has extreme performance, or Visual Basic.
Are you seriously implying that copying a pointer makes C++ like Python or Visual Basic in speed? Where did you even get these ideas? Show me any program anywhere, any GitHub ticket, any performance profile where a C++ move is somehow a performance problem.
I looked at your link and I still have no idea how doubling the size of std::vector "destroys" amortization. The std link you had before wasn't even about this, it was about memcpy.
Please link the origins of where you are getting this stuff. It sounds like there is some niche Rust forum where people are looking for anything, no matter how far-fetched, to pretend C++ has a performance problem.
Meanwhile literally everything that needs performance is being written in C++. Why aren't codecs and browsers and games written in something else?
> Are you seriously implying copying a pointer makes C++ like python or visual basic in speed?
No, but I get the feeling you really do think that "copying a pointer" is what's at stake here, which suggests you've badly misunderstood how move works in C++ in general.
> Show me any program anywhere, any github ticket any performance profile where a C++ move is somehow a performance problem.
There's a CppNow talk from 2018 or so in which Arthur O'Dwyer demonstrates a 3x perf difference. That's obviously an extreme case; you're not magically going to save most of your runtime by fixing this in most software, but it shows this is a real issue.
Since you're the one who believes in "extreme performance", you're probably surprised that fixing this wasn't a priority. P2137 gives a better indication of the status quo, or rather, what happened to the paper does, more than what's inside it. You can read the paper if you want (if you don't recognise at least most of the authors, you're way out of your depth here, for whatever that's worth): https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2020/p21...

WG21 gave a firm "No" to that paper, and eliciting that answer is exactly what the paper was for. C++ is not a language for "extreme performance". It is a language which prizes backwards compatibility. The authors, or rather their employers, needed to be told that explicitly. If you want performance you are in the wrong place, just as surely as if you want safety: use a different language. C++ is for backwards compatibility with yet more C++.
> It sounds like there is some niche Rust forum where people are looking for anything, no matter how far-fetched, to pretend C++ has a performance problem.
Indeed, this isn't the product of a "niche rust forum". It's WG21, the C++ Standards Committee.
I think the problem is on your end, C++ demands a very high price in terms of safety, ergonomics, learning curve - and to some people that means it ought to be really good. Otherwise why such a high cost? Those are the same people who figure if they paid $500 for a T-shirt that must mean it's a good T-shirt. Nah, it just means you're a sucker.
> Why aren't codecs [...] written in something else?
This is especially frustrating because C++ is an incredibly bad choice for this work, as you'll have seen with the incidents at Apple. Nobody should be choosing the unsafe language with less than stellar performance to write codecs in 2024, and yet here we are.
And yet, most of the time when somebody even realises they shouldn't write codecs (and file compression and various other similar technologies) in C++ their next guess is Rust which while obviously an improvement over C++ is hardly a good choice for this work either.
The sad truth is that often it's inertia. We used this crap C++ code in 2008, and we re-used it when we refreshed this product in 2018, so it's still C++ today because nobody changed that.
All these rants are a classic case of emotional investment that isn't about evidence. Your links either don't apply to what you're talking about or they are vague "find it yourself" gestures at something large.
First, you are now ignoring your own claim that "amortization is broken", when your own link just showed normal doubling and I asked what was supposed to be wrong with it. Your earlier link was about a trivial-copying attribute, not the doubling of a vector's size.
> There's a CppNow talk from 2018 or so in which Arthur demonstrates a 3x perf difference.
A "3x perf difference" compared to what? Prove it and link a timestamp. This barely makes sense. It's a vague claim with vague evidence.
> It's WG21, the C++ Standards Committee.
Now your evidence that "fixing this" (with no specifics about what "this" means) is a link to an entire 2020 ISO C++ paper? This is one step removed from the classic "I'm not going to do your homework, google it yourself".
> I think the problem is on your end, C++ demands a very high price in terms of safety, ergonomics, learning curve - and to some people that means it ought to be really good. Otherwise why such a high cost? Those are the same people who figure if they paid $500 for a T-shirt that must mean it's a good T-shirt. Nah, it just means you're a sucker.
This seems like some personal frustration. When I write C++ it's very simple and direct. Small classes, value semantics, vectors, hash maps and loops.
Then your 'answer' is that nothing is good enough and nothing works. You say C++ is a bad choice for codecs, yet half the planet is using video codecs written in C++ all day every day. You ignore browsers and games being written in C++ too. You have no solutions, it's just that everything sucks but you can't be bothered to even write your own vector.
This is just your frustrations wrapped in rants with some hand waving non evidence to act like it's based on something other than emotion.
> And yet, most of the time when somebody even realises they shouldn't write codecs (and file compression and various other similar technologies) in C++ their next guess is Rust which while obviously an improvement over C++ is hardly a good choice for this work either.
Show what you mean here:
https://godbolt.org/