Because the compiler optimizes based on the assumption that consecutive reads yield the same value. Reading from uninitialized memory may violate that assumption and lead to undefined behavior.
(This isn't the theoretical ivory tower kind of UB. Operating systems regularly remap a page that hasn't yet been written to.)
If you read from somewhere you haven't written, who cares whether the compiler optimizes on the assumption that reading it again yields the same value, even though that isn't true?
Anyone who wants to be able to sanely debug. Code is imperfect, mistakes happen. If the compiler can optimise so that any mistake anywhere in your program could mean insane behaviour anywhere else in your program, then you get, well, C.
(E.g. imagine doing a write to an array at offset x - this is safe in Rust, so the compiler turns that into code that checks that x is within the bounds of that array, then writes at that offset. If the value of x can change, then now this code can overwrite some other variable anywhere in your program, giving you a bug that's very hard to track down)
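(A minimal sketch of that pattern — this is ordinary safe Rust, not the compiler's actual output; the point is that the implicit bounds check and the write are only sound if `x` means the same value at both steps:

```rust
// Safe indexed write: `arr[x] = val` compiles to roughly
// "if x < arr.len() { write } else { panic }". The check only
// protects the write if `x` holds the same value at the check
// and at the write; if the memory backing `x` could change
// between the two reads, the check would guard nothing.
fn store(arr: &mut [u32], x: usize, val: u32) {
    arr[x] = val;
}

fn main() {
    let mut a = [0u32; 4];
    store(&mut a, 2, 7);
    assert_eq!(a[2], 7);
}
```
)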
I see what you're getting at: situations in which the compiler trusts that the location has not changed, but needs to re-load it because the cached value is not available. When the location is reloaded, the security test (like a bounds check) is not re-applied to it, yet the value being trusted is not the one that had been checked.
This is not exactly an optimization though, in the sense that it will mess up even thoroughly unoptimized code (with more likelihood, due to caching optimizations being absent).
So that is to say, even the generation of basic unoptimized intermediate code for a language construct relies on assumptions like that certain quantities will not spontaneously deviate from their last stored value.
That's baked into the code generation template for the construct that someone may well have written by hand. If it is optimization, it is that coder's optimization.
The intermediate code for a checked array access, though, should be indicating that the value of the indexing expression is to be moved into a temporary register. The code which checks the value and performs the access refers to that temporary register. Only if the storage for the temporary registers (the storage to which they are translated by the back end) changes randomly would there be a problem.
Like if some dynamically allocated location is used as an array index, e.g. array[foo.i] where foo is a reference to something heap-allocated, the compiler cannot emit code which checks the range of foo.i, and then again refers to foo.i in the access. It has to evaluate foo.i to an abstract temporary, and refer to that. In the generated target code, that will be a machine register, or a location on the stack. If the machine register or stack are flaky, all bets are off, sure. But we have been talking about memory that is only flaky until it is written to. The temporary in question is written to!
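(Spelled out as source code, the lowering being described looks something like this — `Foo` and `get` are made up for illustration; the point is that both the check and the access go through the temporary `t`, never through `foo.i` twice:

```rust
struct Foo { i: usize }

// Hypothetical lowering of a checked access to array[foo.i]:
// the index is loaded ONCE into a temporary, and both the bounds
// check and the access refer to that temporary, not to foo.i.
fn get(array: &[u32], foo: &Foo) -> Option<u32> {
    let t = foo.i;           // single load into a temporary
    if t < array.len() {     // check the temporary...
        Some(array[t])       // ...and index with the same temporary
    } else {
        None
    }
}

fn main() {
    let data = [10, 20, 30];
    assert_eq!(get(&data, &Foo { i: 1 }), Some(20));
    assert_eq!(get(&data, &Foo { i: 9 }), None);
}
```
)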
> The intermediate code for a checked array access, though, should be indicating that the value of the indexing expression is to be moved into a temporary register. The code which checks the value and performs the access refers to that temporary register. Only if the storage for the temporary registers (the storage to which they are translated by the back end) changes randomly would there be a problem.
You'd almost certainly pass it as a function parameter, prima facie in a register/on the stack, sure, and therefore in unoptimised code nothing weird would happen. But an optimising compiler might inline the function call, observe that the value doesn't escape, and then if registers are already full it might choose to access the same memory address twice (no reason to copy it onto the stack, and spilling other registers would cost more).
I don't know how likely this exact scenario is, but it's the kind of thing that can happen. Today's compilers stack dozens of optimisation passes, most of which don't know anything about what the others are doing, and all of which make basic assumptions like that the values at memory addresses aren't going to change under them (unless they're specifically marked as volatile). When one of those assumptions is broken, even compiler authors can't generally predict what the effects will be.
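(A toy illustration of that assumption — this is legal, safe Rust, and the optimizer is allowed to reuse the first load rather than reading memory again, precisely because it assumes the value at the address doesn't change underneath it:

```rust
// Two reads of the same address: the optimizer may legally collapse
// them into one load and fold the whole function to a constant 0,
// under the "memory doesn't change underneath us" assumption.
fn diff(p: &u32) -> u32 {
    let a = *p;
    let b = *p; // may be compiled as a reuse of `a`, not a reload
    a - b
}

fn main() {
    assert_eq!(diff(&5), 0);
}
```
)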
Makes sense. When a temporary is the result of a simple expression with no side effects that is expected to evaluate to the same value each time, the temporary can be eliminated. An obvious example of this is constant folding: we set a temporary t27 to 42. Well, that can just be 42 everywhere, so we don't need the temporary. The trust that it "evaluates to the same value each time" rests on assumptions, and if those are wrong, things are screwed.
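(In source form — the name `t27` here is just the thread's hypothetical temporary:

```rust
// Constant folding sketch: the temporary t27 = 42 can be replaced
// by the literal 42 everywhere it appears, because re-evaluating it
// always yields the same value -- the assumption under discussion.
fn folded() -> u32 {
    let t27 = 42;
    t27 + t27 // the compiler may fold this to the constant 84
}

fn main() {
    assert_eq!(folded(), 84);
}
```
)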
I run Windows 10 LTSC IoT edition on my gaming PC and it's been pretty great. No crapware and no annoying update notifications. Should be the default Windows version.
I don't like the trend where people put language models into everything, contributing to global warming. Samples from sound packs usually have most of the data in their names; it's just very unstructured. I have tons of regexps to figure out different info (like scale) from names.
That said, I'm working on actually analyzing the files themselves with the Apple Neural Engine, a pretrained local ML model, and some spectral analysis. It will be a huge lift, but this project is a marathon for me until I get somewhere where the price is well justified.
As an extra data point, my sample library is an absolute mess with regards to filenames even though 99% of it is from professional packs. I'd love to see some analysis built in, BPM / key / notes etc. For instance, I've got one folder where everything in it is named `STAB 0001` etc. and there's about a thousand of them. Kicks hasn't renamed any of them, but has tagged them with some not so useful tags: `0001`, `1030`, `sample`, `stab` and `stabs` - the first is the latter part of the filename, I've no idea where the `1030` came from but that's on everything from that folder now.
As for the price, it's maybe a little steep for this launch version, but if you get some good results going with the local ML it'll be cheap.
One last thing, it would be amazing if this was a universal app so I could use it on my iPad where I do the vast majority of my music making. Good luck, I like it so far!
Wow, thanks a lot for the feedback. I'd love to keep iterating on tags.
1030 is clearly an attempt to pick up the BPM. I try to find a number that's bigger than a simple sample number (like Kick_1 and Kick_2), but I completely forgot to set an upper limit. Will fix that!
What would be your ideal workflow for universal app? Would you like to selectively synchronize your sample library between devices? I've been thinking about it but wasn't sure if someone would use that.
> Does this do any analysis of the sample files themselves or just auto-tagging and searching based on the sample file name?
The site suggests the latter: "Kicks Pro figures instrument names, genres, BPM, scales and more from sample names." The lack of subscription pricing is also a signal that they're not using a paid API.
The privacy policy is not helpful in this regard. It's effectively, "We do not sell, trade, or otherwise transfer your information to outside parties, except when we do". https://www.kickspro.app/privacy-policy
Thanks for pointing that out. TBH, I set myself a hard deadline to release by the end of 2023, so the privacy policy turned out very generic. I'll improve it.
Yes! Kinda the only one that supports j,k for navigation haha.
Vim bindings are not yet implemented to the level I want them to be, so it's a bit of a hidden feature.
Nerves is a framework meant to run on relatively beefy embedded systems (not microcontrollers) and uses the normal BEAM. AtomVM is its own VM suitable for running on microcontrollers in much more resource constrained environments than Nerves could ever get close to running on.
It'd be nice to have an 'at a glance' comparison between standard Erlang and this, to get a better idea of what's going on.
I don't know if "kind of like language X, but cut down and missing some stuff" environments have a great track record. Especially if the runtime is not the tried and true one with a lot of the kinks worked out.
As someone who's yet to play with Erlang and Elixir, is the bytecode lean or fluffy? Microcontrollers often have somewhat limited Flash storage, and I see they have 512kB as bare minimum with 1MB recommended for the VM itself.
I'm not a BEAM developer, but a quick search found this [1] which was very informative. My classification would be "quite fluffy", but that's of course highly subjective and the language/platform has quite a lot of features that it needs to support so I wasn't expecting CHIP-8, exactly.
If I understand correctly, Elixir builds an Erlang syntax tree and then the Erlang toolchain takes over. If it runs Erlang, it should be able to run Elixir.
I think you have it the other way around. Elixir is built on top of Erlang, no? So if it runs Elixir it can run Erlang in the same way React can run JavaScript, but not necessarily the other way around.
No, the GP is correct: Erlang and Elixir are 100% equivalent on the BEAM, because Elixir compiles to Erlang AST, then that is compiled with Erlang compiler to BEAM bytecode. I honestly don't know what you mean by the React example, but it's more like JavaScript and TypeScript, or Java and Kotlin. At runtime, there's no difference between the code compiled directly from Erlang and code compiled from Elixir.
I think pedantically anchors are the markers in the document, and the URL fragments refer to the anchors to tell the browser to scroll there. If I'm wrong, someone will be along to correct me...
(If you’re talking about the scroll behavior) It can be set in two different places, but it’s the same setting underneath. Either you get natural everywhere or the other behavior everywhere.