This is surprisingly true in a way. TypeScript is not a language(1); it's primarily a linter-assisting overlay atop an actual language, JavaScript. Also, there's a linter that outputs and bundles JS, shedding the alien type annotations and injecting its own, very partial runtime.
So, JSDoc is just a linter/documenter aid. And so is TypeScript.
(1) TS is not a language: it has no spec and no reference documentation. It defines no behaviors, in particular no runtime behaviors. It sits atop various JS versions, layering over them in unspecified ways. TS is a linting layer, and also a hack.
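A minimal sketch of what that erasure looks like in practice (a made-up function; the exact emitted shape depends on the target JS version and compiler options):

```ts
// What you write: the annotations exist purely for the checking layer.
function area(width: number, height: number): number {
  return width * height;
}

// Roughly what the emitted JS looks like once the types are shed
// (modulo target version and any injected helpers):
//
//   function area(width, height) {
//     return width * height;
//   }

console.log(area(3, 4)); // 12 - the runtime behavior is plain JavaScript's
```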
What changed is that the very things capable of eliciting interest in programming also offer overpowering content-consumption features, with huge, never-ending catalogs of games, movies, videos, short funny clips, etc.
As computing and "content" proliferate, the uncompetitiveness of creation, esp. symbolic creation such as programming, is increasing. At some point, broadening of access no longer offsets this effect, and the talent pool may start to shrink even if capability and penetration are a million times higher than they were.
Good points. A minor thing: generative art isn't necessarily abstract art, though currently it almost exclusively is. (Oh, and it's spelled Pollock.) Regarding validation, it might not matter for some, but might for others. I'm sure there were quite a few great artists who wouldn't have done their art at the same level had there not been validation of it as art. Something more mundane: recognition is likely a big factor in whether patrons seek out and are willing to pay for generative art. So it can matter even for those who aren't looking for the ego-lifting part.
> A minor thing: generative art isn't necessarily abstract art, though currently it almost exclusively is.
Thank you. There are a lot of people throwing around art terminology in this thread, and it doesn't always make sense.
Generative art only has to output through a photorealistic renderer and suddenly it's no longer "abstract", but "surrealism". This demonstrates how little sense it makes to apply such terms to generative art.
Let alone the fact that it's multimedia and the term really only applies to visual art. You could have "abstract music", I guess, but it means something completely different.
In the case of asymmetric competition, a company with vastly more resources might curate a list of undisclosed problems with competitors' products, so that when a major event like this strikes, it can drag the underdogs along with it. The optics are different if 1) the dominant player can pretend it's "working together" with the other vendors, 2) the dominant player condescendingly points out mildly related issues in competitors' products, and 3) the names of the alternatives keep getting linked in a "we're in the same boat" way.
I'm not suggesting this is the case here at all, as it's unlikely that Intel identified this precise issue in AMD's products long ago and hasn't checked its own vulnerability (though there's always a chance that an issue found by company A while analyzing the strengths and weaknesses of company B's products turns out to apply to its own products too). But I wouldn't be surprised if some loosely related AMD issues came to light now, and it's impossible to tell whether those are recent finds or older ones.
Given Intel's dominant position, it may come out ahead in P&L or gross-margin terms even if this turns out to be clearly an Intel issue, as the perceived or real loss of performance may trigger an upgrade spree, with sold unit counts inevitably dominated by Intel purchases.
AMD has just started to catch up in overall performance, and in the worst case of the bug's impact on Intel, it may even end up with competitive single-threaded performance. Also, there has been speculation that Apple has been evaluating ARM processors for some future laptops, and a sudden drop in the baseline is an interesting turn of events.
While this in theory benefits the underdogs, financially Intel may well come out ahead due to their market hegemony.
> Loop-Blinn is fine if you want fast rendering with medium quality antialiasing
For example, when using SVG's shape-rendering: optimizeSpeed? I truly hope that SVG is going to be part of this new magic, and that the shape-rendering presentation attribute is utilized. I don't think current SVG implementations get much of a speed boost from optimizeSpeed.
Speaking of which, to what extent will SVG benefit from this massive rewrite?
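For reference, shape-rendering can be given either as a presentation attribute or via CSS, and it is only a hint the engine may ignore, which is exactly the open question above. A minimal sketch of setting it from script; the element and values are just illustrative:

```ts
// Build a tiny SVG and ask the renderer to favor speed over crisp antialiasing.
// Engines are free to ignore the hint; how much it helps is implementation-defined.
const SVG_NS = "http://www.w3.org/2000/svg";

const svg = document.createElementNS(SVG_NS, "svg");
svg.setAttribute("width", "100");
svg.setAttribute("height", "100");

const circle = document.createElementNS(SVG_NS, "circle");
circle.setAttribute("cx", "50");
circle.setAttribute("cy", "50");
circle.setAttribute("r", "40");
circle.setAttribute("fill", "steelblue");
circle.setAttribute("shape-rendering", "optimizeSpeed"); // the hint in question

svg.appendChild(circle);
document.body.appendChild(svg);
```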
I think there are other reasons for dismissing BEM; it's quite an opinionated and shallow structure, good for some stuff and not for others. It's very HTML-tag oriented, for which we have, well, HTML tags already. Its claims are simply not true (e.g. "Reduces style conflicts by keeping CSS specificity to a minimum level."). Even in syntax, it saves on silly things (".btn"?) and wastes much more by, in effect, introducing Hungarian notation to CSS. I could go on but no time rn
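To make the syntax point concrete, here is a made-up illustration of the trade-off (the class names are invented, and the CSS sits in JS strings only so the comparison is self-contained):

```ts
// BEM spells everything out as block__element--modifier, so selectors grow
// rather than shrink (all names below are invented for illustration):
const bem = `
  .search-form__submit-button { padding: 0.5em 1em; }
  .search-form__submit-button--disabled { opacity: 0.5; }
`;

// ...versus the terse selectors it frowns upon:
const plain = `
  .btn { padding: 0.5em 1em; }
  .btn.disabled { opacity: 0.5; }
`;

console.log(bem.length > plain.length); // true - the verbosity is the point
```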
Just adding it here as well: if one likes editing CSS files in Chrome Dev Tools, then activating Workspaces will save such changes into the source file. This Dev Tools integration with the Svelte approach is so useful that it might actually tilt the choice in favor of CSS files over the more abstract, and thus Chrome-uneditable, CSS-in-JS (Workspaces work with Sass too, as per https://www.amazeelabs.com/en/How-to-write-Sass-within-Chrom...).
Worth mentioning that CSS-in-JS _and_ CSS-in-CSS hot reload in any modern build setup anyway. So Workspaces used to be super useful, but they're not a big selling point any more.
I'd agree with it were it not for the huge difference in view-refresh latencies. Within the browser, it's near-instantaneous. Hot module reloading (which is still experimental) is relatively fast, yet there's still a round trip of a couple of seconds; it also depends on what changes and how. A full page reload can take even longer: there's compilation time, bundling, and the unavoidable cost of reloading stuff in the browser. Sure, a couple of seconds doesn't sound bad, but it's still a couple of seconds longer than what it should be (immediate), and it breaks my (work)flow.
And you lose state, unless you've carefully designed your app around that problem by using something like Redux. Which is a fine thing to do, but not everyone wants to.
The Chrome dev tools API has a neat function to inject new source code by replacing only the functions from the new code, thereby keeping any state. I've used this when building wright (https://github.com/porsager/wright) to allow for hot reloading of anything (no need for Redux). CSS reload is also instantaneous with wright, so it might even make sense for you to use with the setup you described - no need for copy-pasting ;)
You can see it in action here: https://porsager.com/wright/example.mov - the start is editing JS code (a Mithril app) that goes through Rollup, but there's also editing of raw CSS files at the end.
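The general idea, sketched very roughly (this is not wright's actual mechanism nor the specific DevTools call it uses, just the concept): if module-level state lives outside the functions, freshly compiled functions can be swapped in while the state stays untouched.

```ts
// Conceptual sketch only: replace the code, keep the state.
interface AppState { count: number }
type Render = (state: AppState) => string;

const state: AppState = { count: 3 };          // survives the "reload"
let render: Render = s => `count: ${s.count}`;

// Pretend this gets called when new source arrives from the dev server.
function hotSwap(nextRender: Render): void {
  render = nextRender;                          // only the function is replaced...
  console.log(render(state));                   // ...the accumulated state is reused
}

hotSwap(s => `the count is now ${s.count}`);    // logs "the count is now 3"
```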
Insanely cool demo. Learning Wright has been near the top of my TODO list for longer than I want to admit — I just never manage to get through the other items on my open source TODO list :(
An example might be: a graphic design shop gets a contract to update the company 'style guide'. It gets done, and there are a lot of changes that cut across components, starting with fonts, borders, margins, border styling, etc.
Also, what's not fully clear to me: what if - as is usually the case - the components are nested? Would changing e.g. the font in the outer component leave the inner components fully intact?
Totally — I'll often have a small global CSS file for that exact reason. (Scoped CSS is about preventing your components from clobbering each other rather than preventing the desirable aspects of the cascade.)
If components are nested, inheritance still happens (unless you're compiling to web components with Shadow DOM, which Svelte allows), but cascading doesn't unless you opt in per-selector with the :global(...) modifier borrowed from CSS modules.
Re isolation, if I've understood correctly, then state in components is indeed isolated.
> Leave data processing and business logic to the back-end, on the server
In general, this is not good. Of course there are specific applications where it's useful but in the general space of applications it would be very limiting.
For example, go to http://square.github.io/crossfilter/ and imagine doing the filtering on the 5MB dataset while waiting for the histograms to be updated from the server. Instead of tens of milliseconds, it might take seconds or tens of seconds if the server is loaded, i.e. orders of magnitude slower, not to mention unpredictable.
You can think of the network as a data-flow constraint. It constrains latency, throughput, privacy, and security, and the constraints can be unpredictable (network outages, DoS, MitM attacks, etc.). There can be many good reasons for wanting part of your domain-specific logic to fall on the client side.
In particular, dynamic media (e.g. interactive data visualization, games, and most interactive things that use data or modeling) is best handled at least partly in the browser.
We're past the point where the rule of thumb was to do business logic on the server while the client only presented the view and acted as a controller.
Though both descend from WebKit, they use different renderers these days. Generally Chrome is better than Safari, except on iOS, where Apple forces other browser makers to use the Safari engine - I guess it's one way of keeping tabs on the mobile browser speed competition :-)