I’m sure I’m not the only one, but it seriously bothers me how highly ranked the discussion and comments under this post are about whether or not a model trained on data from this time period (or any other constrained period) could synthesize it and postulate “new” scientific ideas that we now accept as true. The answer is a resounding “no”. Sorry for being so blunt, but that is the consensus answer among experts, and you will come to the same answer after a relatively small amount of focus & critical thinking on the issue of how LLMs & other categories of “AI” work.
This is an assertion made without any supporting data or sources. It's nice to know your subjective opinion on the issue, but your voice doesn't hold much weight when you make such a bold claim devoid of any evidence or data.
I understand where you are coming from, but not every field is hard science. In many fields we deal with some amount of randomness and attribute causality to correlations even when we don't have so much as a speculative hypothesis for a mechanism of action behind the supposed causality.
LLMs trained on data up to a strictly constrained point are our best vehicle for getting a view (however biased) of something detached from its origins, and for escaping a local minimum. The speculation is that such LLMs could help us look at correlational links accepted as truths, and help us devise an alternative experimental path or craft arguments for such experiments.
Imagine you have an LLM trained on papers up to some threshold: you feed it your manuscript with its correlational evidence and have the LLM point out uncontrolled confounders or something like that.
Outside of science it would be an interesting pedagogic tool for many people. There is a tendency to imagine that people in the past saw the world much the same as we do. The expression "the past is a foreign country" resonates because we can empathise at some level that things were different, but we can't visit that country. "Talking" to a denizen of London in 1910 regarding world affairs, gender equality, economic opportunities, etc would be very interesting. Even if it can never be entirely accurate I think it would be enlightening.
> but that is the consensus answer among experts
Do you have any resources that back up such a big claim?
> relatively small amount of focus & critical thinking on the issue of how LLMs & other categories of “AI” work.
I don't understand this line of thought. Why wouldn't the ability to recognize patterns in existing literature or scientific publications result in potential new understandings? What critical thinking am I not doing?
> postulate “new” scientific ideas
What are your examples of "new" ideas that aren't based on existing ones?
When you say "other categories of AI", you're not including AlphaFold, are you?
I think it's pretty likely the answer is no, but the idea here is that you could actually test that assertion. I'm also pessimistic about it but that doesn't mean it wouldn't be a little interesting to try.
I'm sorry, but this is factually incorrect, and I'm not sure which experts you are referring to or where this supposed consensus comes from. I would love to know. Geoffrey Hinton, Demis Hassabis, and Yann LeCun all heavily disagree with what you claim.
I think you might be confusing creation ex nihilo with combinatorial synthesis, which LLMs excel at. The proposed scenario is a fantastic test case for exactly this. It doesn't cover verification, of course, but that's not the question here. The question is whether an already known, valid postulate can be synthesized.
This article is getting a lot of pushback from the SPA champions, deservedly so, but it makes some good points too. I can’t be the only one who is getting very tired of the number of websites where I have to sit and look at a skeleton loading for way too many seconds, only for the data to load and look nothing like the skeleton. There is an overabundance of really crappy SPAs out there. Sorry not sorry
I thought about your comment, and IMO the reason some (or most) SPAs are badly built comes down to the inexperience of developers and hiring managers. They don't know much about performance, they don't measure it, they don't handle errors properly, they ignore edge cases, and some are learning as they go.
Bottom line: they build the SPA, but leave behind a terrible UX and tech debt the size of Mount Everest.
Yeah, of course. But I was thinking more from a user-experience point of view. MPAs usually render pages server-side, which cuts down on frontend dependencies and expertise. That's why companies used to hire experienced engineers for the backend and CSS/jQuery devs for the frontend. It used to make sense, but not anymore. These days, apps built with MACH architecture rely heavily on client-side code. This means companies are supposed to hire experienced engineers to architect apps on the client side, but instead they end up hiring JS devs with little to no software architecture experience. For example, I've seen SPAs that don't log any error messages from the app, which means developers have no idea what problems users are running into.
We were just there in October. There is a ticketing system now with timed entry, so get your tickets in advance. It’s reasonable, like 10 euro or something. We got there later in the day and it was sold out, but my fiancée was able to negotiate her way in :) She loved it! The rest of the library looked really cool too.
I love nature documentaries so much! They are beautiful, entertaining, and informative. But this makes me wonder: how many other ideas have they misled me on? Do we need to worry about nature documentaries being a type of propaganda?
Silly question from a web dev who has never written a line of Lua before (or delved into game development): what is it about the language that makes it such a good fit for game development? Seems like every time I come across these engines, or read stuff about game development in general, Lua is always the language of choice. Can anyone explain it in a nutshell for me? Thanks
As others have mentioned, it's got a few things going for it.
1. It's super easy to embed in an existing code base
2. The language itself is simple and easy to learn
3. It has native coroutines, which allows game devs to write code without a spider web of callbacks
4. For what it does, it is fast
One point that wasn't mentioned: It has a performance-oriented alternative implementation, called LuaJIT.
With this, you can run JIT-compiled Lua code at speeds comparable to V8 (the JavaScript engine behind Node.js), and almost as fast as the Java VM (depending on the workload, YMMV). LuaJIT also includes a foreign function interface (FFI) for even closer integration with native code, which makes it almost trivial to use native libraries from Lua.
Essentially, this makes it easy to move logic to lower/higher levels of the stack when/if you need it: there are three layers that are increasingly difficult to use but faster (Lua -> FFI -> C/C++), and you can directly use gamedev-oriented C++ or even graphics APIs like WebGPU from Lua, without crippling performance or writing tons of glue code.
Note that I've worked with Lua for many years and I'm definitely biased. I've also worked with JavaScript/TypeScript-based engines, where my favorite is BabylonJS (it's great, but JS/browsers really aren't...). So if you don't want to learn Lua/C++ I can recommend looking into BabylonJS as a starting point - it will probably be easier to get something going thanks to the browser APIs.
My understanding is that it's simply historically fallen into the niche. Lua early on was relatively easy to embed in C codebases, making it a natural fit for scripting C (or C++) game engines and their editors. So many hugely popular game development tools are scripted with Lua at this point that it's somewhat breaking the mold to use anything that _isn't_ Lua in this domain.
I think it’s less that it’s particularly good for gamedev and more that it’s particularly easy to embed in an engine, and it’s faster than it has any right to be
The main Lua interpreter by PUC-Rio is among the fastest bytecode interpreters for a popular scripting language. Very efficient C code.
Wizard programmer Mike Pall then came along and wrote an even faster Lua bytecode interpreter in assembly language. And added JIT for even faster performance of hot functions.
I think it’s because Lua’s C API is very very closely coupled to its native keywords and standard library, so it’s not only built for trivial embedding but also for trivial / mechanical integration. Using the Lua API from within C or C++ is almost like the syntactic language isn’t even there and it’s just a convenient library.
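To make that concrete, here's a rough sketch of what that mechanical integration looks like against the stock Lua C API (assuming Lua 5.3/5.4 headers; the names `c_square` and `twice` are just made up for illustration):

    #include <stdio.h>
    #include <lua.h>
    #include <lauxlib.h>
    #include <lualib.h>

    /* A C function exposed to Lua: reads its argument off the stack, pushes the result back. */
    static int c_square(lua_State *L) {
        double n = luaL_checknumber(L, 1);
        lua_pushnumber(L, n * n);
        return 1;  /* number of results left on the stack */
    }

    int main(void) {
        lua_State *L = luaL_newstate();
        luaL_openlibs(L);

        lua_register(L, "square", c_square);  /* Lua scripts can now call square(x) */
        luaL_dostring(L, "function twice(x) return square(x) + square(x) end");

        lua_getglobal(L, "twice");   /* push the Lua function... */
        lua_pushnumber(L, 3);        /* ...then its argument... */
        lua_pcall(L, 1, 1, 0);       /* ...and call it: 1 arg, 1 result */
        printf("twice(3) = %g\n", lua_tonumber(L, -1));

        lua_close(L);
        return 0;
    }

Everything in both directions goes through that one stack, which is a big part of why the integration feels so mechanical and why small projects rarely need a binding generator.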
It’s also amazingly semantically simple and dynamic, almost like a Lisp; even though it doesn’t have classes, typelevel programming is pretty straightforward.
It also allows you to easily write and distribute native modules (as shared objects) without needing integration code.
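For anyone curious what a native module looks like: it's basically one exported function that require() looks up by name when it loads the shared object. A minimal sketch (the module name mymath and the clamp function are just placeholders; luaL_newlib is Lua 5.2+, while Lua 5.1/LuaJIT use luaL_register instead):

    /* mymath.c -- compile to mymath.so / mymath.dll, then require("mymath") just works. */
    #include <lua.h>
    #include <lauxlib.h>

    static int l_clamp(lua_State *L) {
        double x  = luaL_checknumber(L, 1);
        double lo = luaL_checknumber(L, 2);
        double hi = luaL_checknumber(L, 3);
        lua_pushnumber(L, x < lo ? lo : (x > hi ? hi : x));
        return 1;
    }

    static const luaL_Reg funcs[] = {
        {"clamp", l_clamp},
        {NULL, NULL}
    };

    /* require("mymath") loads the shared object and calls this symbol. */
    int luaopen_mymath(lua_State *L) {
        luaL_newlib(L, funcs);
        return 1;
    }

Build it with something like `cc -shared -fPIC mymath.c -o mymath.so` (plus the include path for your Lua distribution), make sure it's on package.cpath, and that's the entire integration.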
Probably since you can embed it in other languages quite trivially. For instance, in C, after about a dozen lines of code you can pass data into Lua and back to C, which gives you access to a scripting language with little fuss. It’s also a fairly small and simple language, so adding it in won’t add much to the overall footprint of the project.
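Roughly what that dozen lines looks like, as a sketch (assuming the standard Lua headers are available; the `answer`/`result` globals are just for illustration):

    #include <stdio.h>
    #include <lua.h>
    #include <lauxlib.h>
    #include <lualib.h>

    int main(void) {
        lua_State *L = luaL_newstate();           /* spin up an interpreter */
        luaL_openlibs(L);                         /* load Lua's standard library */

        lua_pushinteger(L, 42);                   /* C -> Lua: set a global */
        lua_setglobal(L, "answer");

        luaL_dostring(L, "result = answer * 2");  /* run some script code */

        lua_getglobal(L, "result");               /* Lua -> C: read a global back */
        printf("result = %d\n", (int)lua_tointeger(L, -1));

        lua_close(L);
        return 0;
    }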
It says there is a bill in the works, nothing definite yet. If it does pass, that seems very odd to me, especially coming from the US. I noticed the policy on €10K or more is already similar to what we have here for dollars: anything $10K or more in cash gets flagged, and you have to fill out a form.
This is the way I look at it. The productivity claims are probably a bit overhyped. For me Vim is just simply fun! It makes me think of the best way to do something, and there is always something new to learn. It makes me happy as well :)
While your perspective is totally valid, I think the idea that you always have to think about and learn how to use your editor seems to confirm the original commenter's point about the whole thing being too complicated.
To which I say, I've been using Vim (and nVim) for more than 20 years now, and the learning part is optional after a year or so of intense use.
I’ve had a lot of experience with another similar library, Kivy (kivy.org). These types of libraries can be quite effective for the right use case. I shipped a couple of mobile apps on both Google’s and Apple’s stores, so there’s that. I would check this out, but I do remember having to work through a lot of issues to get the mobile build just right. I wonder if a less mature project like this would have similar struggles?
I too have used Kivy, using Buildozer for Android builds. One day I did something trivial like change the color of a button. The otherwise untouched project suddenly refused to build for random, obscure Buildozer-related reasons I quickly lost the patience to figure out. Imagine if it had been a security vulnerability I was trying to patch quickly. What's more, I think even rebuilding the untouched source failed in the same way. And that's how I lost interest in pursuing Kivy further.
Yep. Kivy was a big part of my journey to usually wanting nothing to do with any niche or obscure tech.
It took me a long time to learn that no matter how awesome the concept seems, if it's not extremely popular (aside from very simple things) there's probably an issue somewhere.
At this point I would rather just make web apps for most everything, and I wish they'd just merge Android and ChromeOS so we'd have an easy way to make simple cross-platform stuff, while keeping the Android APIs for more powerful stuff.
This isn't intended as a dig at you, but in what world should an "Install Kivy" page take ~10 screens of text to display? If my goal is to build a simple Python app I can give to my friends and say, "here, I made this, give it a try," then that install page is screaming at me not to use Kivy to do that.
I'll add: the "getting started" collection of pages where I found that "install Kivy" monstrosity then goes on to an "A first app" page, which...doesn't walk you through building your first app. Instead, it links to a page in a different collection of pages. And that page describes how to build a Pong app (too complex to be a "build my first app" -- there's a reason to start with "Hello World")...except it doesn't, it says that it assumes you already know how to create a Kivy application and refers you to another page if you don't. And that page does "Hello World", but poorly.
Okay, one more and I'll shut up. The app is unsigned (on MacOS) so you can't double-click or command-O it -- you therefore have to right-click and select Open, and then acknowledge the dire warning about running unsigned code. I don't see any mention of this on the over-long install page, so anyone who doesn't know how to run unsigned code on a Mac is out of the game immediately.
And after all that, the app doesn't run. So yeah.
I put off learning Python for years because of this ridiculousness. Finally I decided to just bite the bullet, because it isn't getting any better.