I'm very sorry, I should have phrased my original post in a kinder, less dismissive way, and kudos to you for not reacting badly to my rudeness. It is a cool repo and a great accomplishment. Implementing autograd is a great learning exercise, but in my opinion you're not going to match the performance or functionality of one of the large, mainstream autograd libraries. Karpathy, for example, throws away micrograd after implementing it and uses PyTorch in his later exercises. So it's great that you did this, but for others who want to learn how autograd works, Karpathy's material is usually a better route, because the concepts are built up one by one and explained thoroughly.
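For readers who haven't seen what "implementing autograd" involves, here is a minimal micrograd-style sketch of reverse-mode autodiff (illustrative only; all names are mine, not from the repo being discussed):

```python
# Minimal reverse-mode autograd sketch (micrograd-style; hypothetical names).
class Value:
    def __init__(self, data, parents=()):
        self.data = data
        self.grad = 0.0
        self.parents = parents
        self.grad_fn = None  # closure that pushes self.grad to parents

    def __add__(self, other):
        out = Value(self.data + other.data, (self, other))
        def grad_fn():
            self.grad += out.grad   # d(a+b)/da = 1
            other.grad += out.grad  # d(a+b)/db = 1
        out.grad_fn = grad_fn
        return out

    def __mul__(self, other):
        out = Value(self.data * other.data, (self, other))
        def grad_fn():
            self.grad += other.data * out.grad  # d(a*b)/da = b
            other.grad += self.data * out.grad  # d(a*b)/db = a
        out.grad_fn = grad_fn
        return out

    def backward(self):
        # Topologically sort the graph, then propagate gradients in reverse.
        topo, seen = [], set()
        def build(v):
            if v not in seen:
                seen.add(v)
                for p in v.parents:
                    build(p)
                topo.append(v)
        build(self)
        self.grad = 1.0
        for v in reversed(topo):
            if v.grad_fn:
                v.grad_fn()

x = Value(3.0)
y = Value(2.0)
z = x * y + x  # dz/dx = y + 1 = 3, dz/dy = x = 3
z.backward()
```

This fits in a page, which is exactly why it's a good exercise; the mainstream libraries add tensors, kernels, and graph optimizations on top of the same idea.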
Good article, but I think it misdiagnoses the problem. Chromium is complex because what it implements is complex. Dillo is smaller because it doesn't support as many features. It's a solution to a simpler problem. Still, great article.
Thanks for the feedback. I'm not sure I understand the misdiagnosis part. I think that complex free software reduces the ability of independent groups to modify it, so forcing it to stay small ensures it remains easy to modify.
Chromium is complex because it indeed solves a complex problem, and I don't think there is much room for a simpler software solution that implements the current web. So the problem is not the implementation, but the choice of what is being implemented. But I'm not sure how this conflicts with the above argument for preserving the ability to modify the software.
My main objective is to make sure that this problem doesn't happen in Dillo, which necessarily means we need to sacrifice many features. A benefit of using source size is that it is easy to measure (we do it in the CI), so we avoid a subjective metric.
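A source-size gate is simple enough to sketch. This is a hypothetical illustration of the kind of check a CI job could run, not Dillo's actual script; the budget and file extensions are assumptions:

```python
# Illustrative source-size budget gate (hypothetical; Dillo's real CI check
# may differ in how it counts and what threshold it uses).
from pathlib import Path

LINE_BUDGET = 50_000  # hypothetical line budget

def source_lines(root: Path) -> int:
    """Count lines across all .c/.h files under root."""
    total = 0
    for path in root.rglob("*"):
        if path.suffix in {".c", ".h"}:
            total += len(path.read_text().splitlines())
    return total

def check(root: Path) -> bool:
    """Return True if the tree fits the budget; CI would fail otherwise."""
    lines = source_lines(root)
    print(f"{lines} lines (budget {LINE_BUDGET})")
    return lines <= LINE_BUDGET

# Demo on a stand-in source tree.
import tempfile
demo = Path(tempfile.mkdtemp())
(demo / "main.c").write_text("int main(void){return 0;}\n")
ok = check(demo)
```

The appeal is exactly what the comment says: a line count is objective and trivially automated, unlike "is this code still understandable?".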
I like Markdown as is from a writing perspective. I wrote a recursive descent Markdown parser for a project recently, and I quickly realized how painfully ambiguous Markdown is. Lists (specifically nested lists) are the worst offenders.
Despite CommonMark, I find that many common Markdown parsers tend to "do it their own way" when it comes to edge cases. So I like this: it seems less ambiguous and easier to parse. But I don't think I'm going to be switching from regular Markdown anytime soon.
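To make the nested-list ambiguity concrete, here is the kind of input where parsers historically disagree (the behaviors described are my understanding of CommonMark versus older Markdown.pl-style parsers):

```markdown
- parent item
  - two-space indent: CommonMark nests this under "parent item"
    (content must line up after the marker), while Markdown.pl-era
    parsers required four spaces to nest
    - deeper levels compound the disagreement, since each parser
      measures the required indent from a different reference point
```

A parser has to pick one interpretation, which is why "common" Markdown implementations diverge exactly here.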
I agree with this completely; ChatGPT search is perfect for most use cases. In my experience it's better than OpenAI's deep research: it often uses 2-3x the sources and produces a more comprehensive, well-thought-out report. I'm sure there are still cases where deep research is preferable, but I haven't come across them yet.
I believe this is just the normal mode. In my experience, you don't have to select the web search option to make it search the web. I wonder why they still offer web search as an explicit option at this point (perhaps to force the LLM to search?).