"Aristotle integrates three main components: a Lean proof search system, an informal reasoning system that generates and formalizes lemmas, and a dedicated geometry solver"
It is far more than an LLM, and math != "language".
> Aristotle integrates three main components (...)
The second one being backed by a model.
> It is far more than an LLM
It's an LLM with a bunch of tools around it, and a slightly different runtime than ChatGPT. It's "only" that, but people - even here, of all places - keep underestimating just how much power there is in that.
Transformer != LLM. See my edited top-level post. Just because Aristotle uses a transformer doesn't mean it is an LLM, just as Vision Transformers and AlphaFold use transformers but are not LLMs.
LLM = Large Language Model. "Large" refers to the number of parameters (and in practice, depth) of the model, and implicitly to the amount of data used for training, and "language" means human (i.e. written, spoken) language. A Vision Transformer is not an LLM because it is trained on images, and AlphaFold is not an LLM because it is trained on molecular configurations.
Aristotle works heavily with formalized Lean statements and expressions. While you can certainly argue this is a language of sorts, it is not at all the same "language" as the "language" in LLMs. Calling Aristotle an "LLM" just because it has a transformer is more misleading than truthful, because every other aspect of it is far more clever and involved.
Sigh. If I start with a pre-trained LLM architecture, and then do extensive further training / fine-tuning with different data and loss functions, custom similarity metrics for specialized search, specialized training procedures, and feedback from other automated systems, we end up with far, far more than an LLM. That's the point. Calling something like this an LLM is as deeply misleading as calling AlphaFold an LLM. These tools go far beyond simple LLMs. The special losses and metrics are really important here and are why these tools can be so game-changing.
In this context, we're not even talking about "math" (as a broad, abstract concept). We're strictly talking about converting English to Lean. Both are just languages. Lean isn't just something that can be a language. It's a language.
There is no reasonable framing in which you can say Aristotle isn't a language model.
That's true, and a good fundamental point. But here it's much simpler than that: math is a language the same way code is, and if there's one thing LLMs excel at, it's reading and writing code and translating back and forth between code and natural language.
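For a concrete sense of what that translation target looks like, here's a toy example of my own (not something from Aristotle): the English claim "the sum of two even numbers is even" written as a Lean statement and proved. It assumes a recent Lean 4 toolchain where the `omega` tactic is available; `IsEven` is defined here just to keep the snippet self-contained.

```lean
-- "The sum of two even numbers is even", stated and proved in Lean 4.
def IsEven (n : Nat) : Prop := ∃ k, n = 2 * k

theorem even_add_even (m n : Nat) (hm : IsEven m) (hn : IsEven n) :
    IsEven (m + n) :=
  match hm, hn with
  | ⟨a, ha⟩, ⟨b, hb⟩ => ⟨a + b, by omega⟩  -- goal: m + n = 2 * (a + b)
```

The point is exactly the one above: the statement is just text in a (formal) language, and turning the English sentence into the `theorem` line is a translation task.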
I have zero success with most ads. Checking the session recordings in my UXWizz dashboard, I can see that 90%+ of visits are usually bots or fake users.
To scale UXWizz, I'll narrow down the niche I'm focusing on (targeting it more towards web agencies), and I'll go all-in on educational content (tutorials).
This month I will start creating tutorials for uxwizz.com (I got a Raspberry Pi 4, and I will write a blog post on how to set up a LAMP stack and run your own UXWizz dashboard locally on a Pi).
Most likely, as Adam directly "credited" their revenue issues to AI (which makes sense: Tailwind was making money by selling pre-made components, but now AI can generate those for you).
When you say plagiarizes, do you mean they are publishing their own docs without ads? Or you mean when the AI is reading the docs instead of a person they ignore the ads?
People don't just ask AI to produce a Tailwind app, they also ask AI specific questions that are answered in the docs. When the AI regurgitates the answers from the docs they don't visit the actual docs. Like the Google answer box in search results stealing clicks from the pages that produce the content.
It was a problem with their revenue stream, which was documentation website -> banner for lifetime payment.
All customers already had lifetime access and couldn't pay more. Plus no one was reading the docs on the webpage anymore.
Recurring subscriptions, ads in AI products (think Tailwind MCP server telling you about subscription features.) Those were just two things I pulled out of the hat in a minute.
I can understand recurring subscriptions and ads in MCP being a bright line that the team doesn't want to cross. You will probably say it's a bad business model to not make everything a recurring charge and packed full of ads.
I've experienced this in my own life - I ran my own business and I had to choose between doing a worse job and enshittifying the product to make more money, or doing a good job but risking bankruptcy. I chose bankruptcy, because I believed strongly in doing a good job and not enshittifying the product. I don't regret it.
In which case one has to wonder if we need Tailwind at all anymore. To me, years ago, Tailwind was a great sell as a tool to work faster by typing less. The tradeoff is that the "inline styles" look awful and become a mess real quickly when too many of them are placed together (which one has precedence or whatever, a media query for each single property, constantly translating between CSS and the Tailwind equivalent, etc.).
Now? Well, AI solves the entire issue of time spent typing. Class names always looked cleaner too. Additionally, plain CSS doesn't lag behind browser features and comes with the full power of the language.
Why bother with Tailwind anymore whatsoever?
They were extremely lucky that AI picked up Tailwind and kept it relevant; they should be keeping up with the times if they want to stay relevant. Instead their actions are those of someone cowering in fear, making sure they can put the last of the revenue into the coffer (rejecting a PR because they don't want AI to get better at Tailwind while firing engineers, not to mention the big tantrum).
Let's go back to actual CSS; it's easier to read anyway, it's now a modern tool with variables and all that, and there's no longer a need to dumb it down.
Besides, if I wanted to pay for pre-made components, I would go with DaisyUI, which is agnostic to the frontend framework, unlike the paid components from Tailwind Labs, which strictly require you to use one of the JavaScript frameworks.
I tried playing against it, I didn't have many expectations, but even though I blundered a bishop on move 3 due to a mouse-slip, I could still checkmate it in 6 moves. To me it seemed like it makes random moves.
It chooses the most aggressive move (in terms of piece value and checkmate); if none is aggressive, it picks one of the equally non-aggressive moves.
I don't remember the depth of the search, but it was very simple C code, so I could check quickly. It should be able to find a mate in 2 or 3 if it were in a position to have one.
I didn't check the correctness of the algorithm, just the intention.
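To make the intention concrete, something like this (a from-memory sketch, not the exact code; the `Move` type and the `gives_checkmate` / `captured_value` helpers are just placeholders for whatever the real engine uses): prefer a mating move, otherwise the highest-value capture, otherwise pick any remaining move at random.

```c
#include <stdlib.h>

typedef struct { int from, to; } Move;

/* Hypothetical helpers, stand-ins for the engine's real functions. */
extern int gives_checkmate(const Move *m);  /* 1 if the move delivers mate */
extern int captured_value(const Move *m);   /* value of captured piece, 0 if none */

Move choose_move(const Move *moves, int n)
{
    int best = -1, best_value = 0;

    for (int i = 0; i < n; i++) {
        if (gives_checkmate(&moves[i]))
            return moves[i];                 /* mate beats everything */
        int v = captured_value(&moves[i]);
        if (v > best_value) {                /* remember the biggest capture */
            best_value = v;
            best = i;
        }
    }
    if (best >= 0)
        return moves[best];                  /* most aggressive move found */
    return moves[rand() % n];                /* nothing aggressive: pick at random */
}
```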
I thought it played worse than random moves and couldn't understand how it could beat anyone (no offence to OP).
But if you intentionally hang your pieces, it tends to take them. And it will try to promote pawns in the endgame. So it is possible for it to stumble upon a checkmate, though in my effort where I gave away all my pieces, it instead found the only move to stalemate once it had K+Q+R vs K.