Hacker News | bee_rider's comments

I suspect people who are motivated enough to contribute to the Wikipedia article are a bit over-interested in memorizing social rules.

There are lots of standards, but some contradict one another.

In the area I grew up in, caring too much about useless aesthetic stuff like “elbows on the table” would have a social cost.


Modern versions of Fortran are reasonably pretty.

It’s a fairly nice language. You can probably get better performance out of C/C++ with unlimited effort. But, it is really nice for allowing computational scientists to get, like, 95% of the way there.

I think it actually suffers from the reputation as this ancient/super hardcore performance language. The name comes from “Formula Translating System,” which implies… it was written for people who speak human languages first!


I've maintained simulation software where the core is written in Fortran. It uses some expensive Intel math library whose name I can't recall, does immense calculations, and produces faster binaries than C on every compiler we tried.

Have you tried using the restrict keyword everywhere you can in C?

In Fortran, arrays are not allowed to overlap, which allows some optimisations. C has rules in the spec about how memory accesses must occur, and the possibility of overlapping arrays prevents some compiler optimisations. The restrict keyword is you promising that the memory behind some pointer won't be accessed through another pointer.

You can compare two implementations in Fortran/C using godbolt to see how each of them compiles.


Interesting.

MKL is Intel’s famous numerical library (it includes things like BLAS, the Basic Linear Algebra Subprograms, and fast Fourier transforms). It is available for free, but IIRC they offered some paid support plans; maybe that’s what you are remembering?

It is closed source, but you can look at the source of the best open source competitor libflame/BLIS, and see that most of the performance comes from C and assembly.

It is difficult to beat “unlimited effort” C, but not many programs really justify that treatment.


MKL used to not be free. You had to buy it from a local reseller for a few hundred dollars per developer.

Which version of the language is it? It looks like you used Fortran 90 at least (modules are used), which is pretty old, but not totally ancient like Fortran 77.

Anyway there are also 2018 and 2023 versions…


I still make a living from Fortran77 work, though much of it is converting it to Matlab.

Building interfaces, or fully converting it? Either way sounds like a dream job, haha.

“Cloud” seems like a better comparison than stuff like cryptocurrency. AI seems totally over-hyped but with some obvious sensible use-cases.

.0037. IIRC it is possible to get a better score by looking around the screen, your peripheral vision might be somehow more sensitive.

I can see what they mean about .02 though. If I weren’t specifically looking for differences, that’s where the colors become less noticeable.


Looking around if you have an LCD also helps compensate for colors shifting off-axis.

What you’re doing is seeing changes limited to one of R, G, B, so instead of judging integral colors, you’re judging three different ones. The article explains how errors propagate, and those RGB subpixels will all shift differently because of the material science.

It is a tiny example, but it measures something. It doesn’t handle the other performance characteristics you mention, but it has the advantage of being a basically pure measurement of the memorization ability of the branch predictors.

The blog post is not very long—not much longer than some of the comments we’ve written here about it. So, I think it is reasonable to expect the reader to be able to hold the whole thing in their head, and understand it, and understand that it is extremely targeted at a specific metric.


Resilience against geopolitical disruption has always been a nice characteristic of renewables (of course, centralizing the production in China is a mistake from that point of view, if for no other reason than the general danger of centralization). It is unfortunate though, if we needed an actual event to see this advantage.

I guess the generate_random_value function uses the same seed every time, so the expectation is that the branch predictor should be able to memorize it with perfect accuracy.

But the memorization capacity of the branch predictor must be a trade-off, right? This generate_random_value function is presumably impossible to predict using heuristics, so the question is how often we encounter 30k-long branch patterns like that.

Which isn’t to say I have evidence to the contrary. I just have no idea how useful this capacity actually is, haha.


30k-long patterns are likely rare. However, in the real world there is a lot of code with 30k different branches that we run many times, so the same ability to memorize/predict 30k branches is useful. Even though this particular example isn't realistic, it still reflects well on the predictor.

Of course we can't generalize this to "Intel bad." This pattern seems unrealistic (at least at a glance; real experts should have real data/statistics on what real code does, not just my semi-educated guess), so perhaps Intel has better prediction algorithms for the real world that miss this example. Not being an expert in the branches real-world code takes, I can't comment.


Yeah, I’m also not an expert in this. Just had enough architecture classes to know that all three companies are using cleverer branch predictors than I could come up with, haha.

Another possibility is that the memorization capacity of the branch predictors is a bottleneck, but a bottleneck that they aren’t often hitting. As the design is enhanced, that bottleneck might show up. AMD might just have most recently widened that bottleneck.

Super hand-wavey, but to your point about data, without data we can really only hand-wave anyway.


https://chromium.googlesource.com/chromiumos/third_party/gcc... has some looong select/case things with lots of ifs in them, but I don't think they would hit 30k.

Does it do this for really cut-and-dried problems? I’ve noticed that ChatGPT will put a lot of effort into (retroactively) “discovering” a basically-valid alternative interpretation of something it said previously, if you object on good grounds. Like it’s trying to evade admitting that it made a mistake, but also find some way to satisfy your objection. Fair enough, if slightly annoying.

But I have also caught it on straightforward matters of fact and it’ll apologize. Sometimes in an over the top fashion…

