You could use a similar argument to claim that "brains do not think, understand, reason, reflect, comprehend and never shall." After all, there's nothing in there but neurons, synapses and other biological gunk, if you look at it that way.
That argument you posed does not follow. Brains do all of those things; at least I know mine does, because those are the most intimate experiences I have, more intimate even than my sense experiences. It's what led Descartes to exclaim 'cogito ergo sum', and Ibn Sina before him to note much the same.
Moreover, how the brain does what it does is an open academic question, and one of the most difficult; see the hard problem of consciousness, for example.
I can make certain determined judgments about things. My digital thermometer does not think or understand when it tells me the temperature, something I myself would be unable to determine. My digital LLM does not think, for the same reason. Importantly, my paper-and-pen version of that very same LLM would also not think.
What you are trying to minimize here is the error rate of the composite system, not the error rate of the individual modules. You take it as a given that all the teams are doing their human best to eliminate mistakes from their designs. The idea is to make it likely that the mistakes that remain are different from those made by the other teams.
Provided the errors are independent, it's better to have three subsystems with 99% reliability in a voting arrangement than one system with 99.9% reliability.
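To put rough numbers on that (a back-of-the-envelope sketch, assuming independent failures and simple 2-out-of-3 majority voting):

    // 2-out-of-3 majority voting with independent failures: the composite
    // only fails when at least two of the three subsystems fail at once.
    const p = 0.01; // per-subsystem failure probability (99% reliable)
    const pFail = 3 * p * p * (1 - p) + p ** 3; // exactly two fail, or all three
    console.log(pFail);     // ~2.98e-4
    console.log(1 - pFail); // ~0.9997, i.e. better than 99.9%

Which is also why correlated mistakes between the teams are the thing to worry about: the whole advantage rests on that independence assumption.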
This seems like it would need some referees who watch over the teams and intrude with, "no, that method is already claimed by the other team, do something else"!
Otherwise, I can easily see teams doing parallel construction of the same techniques. So many developments seem to happen like this, due to everyone being primed by the same socio-technical environment...
It's baffling from our perspective, but perhaps not so much if you try to look at it through the mindset of its proponents.
It's been sold as "for the children". A very substantial proportion of the population are natural authoritarians, and this is red meat for them. Never mind that "the children" they profess to be protecting are going to grow up in an increasingly authoritarian surveillance state; this is what authoritarians want for our future, and they not only see it as morally good, but see any opposition to it as indefensible.
The big problem is the Red Queen's Race nature of development in rapidly-evolving software ecosystems, where everyone has to keep pushing versions forward to deal with their dependencies' changes, as well as any actual software developments of their own. Combine that with the poor design decisions such ecosystems accumulate, where everyone assumes anything can be fixed in the next release, and you have a recipe for disaster.
I find this fascinating, as it raises the possibility of a single framework that can unify neural and symbolic computation by "defuzzing" activations into what are effectively symbols. Has anyone looked at the possibility of going the other way, by fuzzifying logical computation?
Yes, you can relax logic gates into continuous versions, which makes the system differentiable. An AND gate can be constructed with the function x*y, and NOT with 1-x (on inputs in the range [0,1]). From there you can construct a NAND gate, which is universal and can be used to build all other gates. A sigmoid can be used to squash the inputs into [0,1] if necessary.
This paper lists out all 16 possible logic gates in Table 1 if you're interested in this sort of thing: https://arxiv.org/abs/2210.08277
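For a concrete feel of the relaxation described above, here is a minimal sketch using the product t-norm; the linked paper's own construction may differ:

    // Continuous relaxations of Boolean gates on inputs in [0, 1].
    const softNot = (x: number): number => 1 - x;            // NOT(x)  = 1 - x
    const softAnd = (x: number, y: number): number => x * y; // AND(x, y) = x * y
    const softNand = (x: number, y: number): number => softNot(softAnd(x, y));
    // OR via De Morgan / NAND universality: x + y - x*y
    const softOr = (x: number, y: number): number => softNand(softNot(x), softNot(y));

    // At the corners {0, 1} these reproduce the Boolean truth tables,
    // and everywhere in between they are differentiable in x and y.
    for (const a of [0, 1]) {
      for (const b of [0, 1]) {
        console.log(a, b, softAnd(a, b), softOr(a, b), softNand(a, b));
      }
    }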
Unity is absolutely being squeezed between the two. I can't really see how it can compete with Godot at the low end; it's hard to compete with free, and most of the goodness in low end games is the gameplay logic, not graphics or animation. And Godot can only get better; look at how Blender ate the CGI tools market. This leaves Unity having to either compete with Unreal at the high end - a very high bar - or somehow finding a new business model. The switcheroo they tried to pull on their customer base can best be viewed in that light.
Godot isn't quite free if you want to release on consoles since those platforms are only supported by commercial forks, but I'm sure going down that route is still a hell of a lot cheaper than licensing Unity.
It's a bit of a weird edge-case, but the very popular Battlefield 6 is partially a Godot game. It's an odd hybrid of a proprietary in-house engine with Godot grafted onto it, which serves as a public-facing SDK for players to build their own content. I know that's not exactly what you meant but it is an interesting application in a major AAA title.
Battlefield 6 of all things includes Godot as the core of its Portal map-building. Cassette Beasts is what Pokemon wishes it was. The upcoming Planetenverteidigungskanonenkommandant looks gorgeous in the previews.
I don’t know if I could list something that matches, say, Cuphead or Silksong, but I do think that Godot is currently on a Clayton Christensen-style worse-is-better ascent right now.
Maybe a bit of an exaggeration. But I think at least 30%. Unreal is popular too. Unity seems to be more popular for indie/coop/single player/certain art styles. There seems to be many more unity games overall, but a lot of them are very small.
Monophasic waveforms are generally considered less safe than biphasic waveforms. That's why many TENS units have an output stage based around a pulse transformer, so they can deliver two pulses, one in each direction, shortly after each other. Leaving this out seems to me to be a false economy when you've gone to all the effort of building the rest of the system.
What makes trading such a special case is that as you use new technology to increase the capability of your trading system, other market participants you are trading against will be doing the same; it's a never-ending arms race.
The only applications of generative AI I can envisage for trading, systematically or otherwise, are the following:
- data extraction: It's possible to get pretty good levels of accuracy on unstructured data, e.g. financial reports, with relatively little effort compared to the days before decent LLMs
- sentiment analysis: Why bother with complicated sentiment analysis when you can just feed an article into an LLM for scoring? (a rough sketch follows the list)
- reports: You could use it to generate reports on your financial performance, current positions, etc.
- code: It can generate some code that might sometimes be useful in the development of a system
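A rough sketch of the sentiment-analysis idea above. Note that callLLM is a hypothetical stand-in for whatever model API you actually use, not a real library call:

    // callLLM is a hypothetical placeholder for your model API of choice.
    declare function callLLM(prompt: string): Promise<string>;

    // Ask for a single sentiment score in [-1, 1] for a news article.
    async function scoreSentiment(article: string): Promise<number> {
      const prompt =
        "Rate the sentiment of the following article toward the company " +
        "on a scale from -1 (very negative) to 1 (very positive). " +
        "Reply with the number only.\n\n" + article;
      const reply = await callLLM(prompt);
      const score = Number.parseFloat(reply.trim());
      // Guard against the model replying with prose instead of a number.
      return Number.isFinite(score) ? Math.max(-1, Math.min(1, score)) : 0;
    }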
The issue is that these models don't really reason, and they trade in what might as well be a random way. For example, a stock might have just dropped 5%. One LLM might say we should buy the stock now and follow a mean reversion strategy. Another may say we should short the stock and follow the trend. The same LLM may not even give the same output on a different call. A minuscule difference in price, time or other data will potentially change the output, when really a signal should be relatively robust.
And if you're going to tell the model, say, 'we want to look for mean reversion opportunities', then why bother with an LLM?
Another angle:
LLMs are trained on the vast swathe of scammy internet content and rubbish relating to the stock market. 90%+ of active retail traders lose money. If an LLM is fed losing/scammy rubbish, how could it possibly produce a return?
RL would reasonably be expected to work if the market had some sort of discoverable static behavior.
The reason why RL by backtesting cannot work is that the real market is continuously changing, as all the agents within it, both human and automated, are constantly updating their opinions and strategies.
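A toy illustration of that point, on entirely synthetic data (not a strategy): a mean-reversion rule that looks great when backtested on one regime loses money as soon as the regime flips, which is roughly what happens once the other participants adapt.

    // Synthetic returns: mean-reverting regime undoes yesterday's move,
    // trending regime continues it.
    function simulate(meanReverting: boolean, n: number, rand: () => number): number[] {
      const returns: number[] = [];
      let last = 0;
      for (let i = 0; i < n; i++) {
        const noise = rand() - 0.5;
        const r = (meanReverting ? -0.5 : 0.5) * last + noise;
        returns.push(r);
        last = r;
      }
      return returns;
    }

    // "Policy" fit to the backtest regime: long after a down move, short after an up move.
    function pnl(returns: number[]): number {
      let total = 0;
      for (let i = 1; i < returns.length; i++) {
        const position = returns[i - 1] < 0 ? 1 : -1;
        total += position * returns[i];
      }
      return total;
    }

    const rand = () => Math.random();
    const backtest = simulate(true, 10_000, rand);  // regime the rule was "trained" on
    const live = simulate(false, 10_000, rand);     // regime after participants adapt
    console.log("backtest PnL:", pnl(backtest).toFixed(1)); // typically positive
    console.log("live PnL:", pnl(live).toFixed(1));         // typically negative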
Good one!
The thing is, you are assuming a "perfect/symmetric distribution" of all known/available technologies across all market participants, and this is far off from reality.
Sure: Jane Street et al are on the same level, but the next big buckets are a huge variety of trading shops doing whatever proprietary stuff to get their cut; most of them may be aware of the latest buzz, but just don't deploy it yet.
Even more than 20 years on, people are finding new ways to improve kernel internals. This is the sort of subtle, intelligent work that distinguishes excellent software from the rest.
There is absolutely nothing to prevent anyone from generating arbitrary DOM content from XML using JS; indeed, there's nothing stopping them from creating a complete XSLT implementation. There's just no need to have it in the core of the browser.
You don’t need to generate anything with JavaScript, aside from one call to build an entire DOM object from your XML document. Boom, whole thing’s a DOM.
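Concretely, that one call is DOMParser, which browsers ship as part of the standard DOM API (a quick sketch, shown here in TypeScript):

    // One call parses an XML string into a fully queryable Document.
    const xml = "<feed><entry><title>Hello</title></entry></feed>";
    const doc: Document = new DOMParser().parseFromString(xml, "application/xml");

    // From here it's a normal DOM: querySelector, XPath via doc.evaluate, etc.
    const title = doc.querySelector("entry > title")?.textContent;
    console.log(title); // "Hello"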
I guess the fact that it’s obscure knowledge that browsers have great, fast tools for working directly with XML is why we’re losing nice things and will soon be stuck in a nothing-but-JavaScript land of shit.
Lots of protocols are based on XML and browsers are (though, increasingly, “were”) very capable of handling them, with little more than a bridge on the server to overcome their inability to do TCP sockets. Super cool capability, with really good performance because all the important stuff’s in fast and efficient languages rather than JS.
Which you shouldn't.