Just look at the commit cadence: the bulk of the 8k lines of code was added in a couple of hours, with most commits 2-4 minutes apart. This is 100% vibe coded and it's pretty obvious.
> It doesn't show any obvious indications of being AI.
I agree that he probably asked the AI to omit some common AI tells, like excessive comments, verbose READMEs, etc.
I ran cloc on it and it does seem to have 800k lines of TypeScript. So unless they're vendoring dependencies, it's actually as insane as it sounds.
Christ, their repo is an absolute nightmare. There are new issues and PRs being posted practically every minute, and I assume 99% of them are from agents, given the target demographic. Just full-auto vibeslop from all barrels, 24/7.
Even if we count the repo's whole lifetime, including when it wasn't so active, the averages are still absurd.
96 days / (4,239+9,170) issues = one issue every 10 minutes
96 days / (5,082+10,221) pull requests = one PR every 9 minutes
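For anyone checking the arithmetic, a quick back-of-envelope sketch (taking the paired counts above as the totals):

```python
# Rough averages over the repo's 96-day lifetime,
# using the issue/PR counts quoted above.
minutes = 96 * 24 * 60  # 96 days expressed in minutes

issues = 4_239 + 9_170          # 13,409 issues total
pull_requests = 5_082 + 10_221  # 15,303 PRs total

print(f"one issue every {minutes / issues:.1f} minutes")      # ~10.3
print(f"one PR every {minutes / pull_requests:.1f} minutes")  # ~9.0
```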
5000+ open PRs is pretty insane, that's the highest I've seen. How do you even keep track of this? We'll really need trust management systems like vouch (https://github.com/mitchellh/vouch/tree/main) for open source projects in the future to help with reducing noise.
5 day old repo, 2000 stars on GitHub, 400 total weekly downloads on npm. Frontpage of hacker news with a bunch of weird comments. Moderation has been lacking recently.
You are jumping to conclusions. The author is the CTO of the largest online brokerage in India, but more importantly, they have created a lot of high-quality open source software. His website and blog are also excellent. Whether you think this library deserves more attention is your personal preference, but it is far from spam. I have no affiliation with them; I just like their work.
It's possible for both things to be true: this project is written by a developer well-known within India, AND this thread has a lot of bot (bought?) comments of praise in it.
That's pretty common in small companies. It's less common in large companies but can happen - you may use the "CTO" title for the founding engineer who still leads code and architecture, then hire someone under a different title (frequently "VP of Engineering") to handle the management / team growing side of the role.
That sounds like a reasonable split to me; so much so that I'm not sure I'd understand why you'd want the same person handling both code/architecture and management.
The CTO in my company* remains an SME on several components, commits to several production repositories (and expects the most stringent PR checks), and maintains a couple of small tools used by us and our customers.
It's not that rare, I think.
*small fintech with a couple of billion in the accounts; not a startup, not a Fortune 500 company
I'm not sure which comments you're finding weird, but I spot checked a bunch and didn't see anything that looked particularly bogus, other than https://news.ycombinator.com/item?id=47026348 and some trollish ephemera.
The upvotes on the submission look legit to me, as does the submission itself.
Sad that HN is now also getting botted by LLMs. People are just shameless. HN is one of the few places left where you can post / self-promote something you have made, only for people to take advantage of that.
I don't know if you're demonstrating reductio ad absurdum, but maybe that's because they are genuine? As people in the thread have pointed out, the author as well as their company are pretty well-known in software circles. They have had multiple projects discussed on HN in the past [1]. 2000 stars is not a lot given that [2].
I fail to understand why a ton of breathless blog posts about the process of AI-assisted coding are more interesting to HNers than some of the actual code (potentially, not claiming anything about it) written.
Maybe you or the GP could actually say what you think are "weird comments" and why you think this is being "botted"?
[2] Why are people obsessed with star counts? I at least only star things to bookmark them, not vouch for them in any way. It does not seem unreasonable to me that 5 times as many people bookmarked the repo in the early days than are using it on npm. Also, npm is not necessary, the author shows at least 2 other ways to use it (direct download, link to GitHub pages) which will not show up in npm stats.
They're probably just Indians using the framework saying "thanks." India has the largest population on Earth, they're close to 1.5 billion now. I think some people underestimate what that means.
An explanation that would fit both the old accounts and the artificial comments would be that they were encouraged by the author to comment (which is against the HN rules).
This seems like some pretty lazy analysis to be honest.
Following the first comment you quoted...
> I love it. We need to see more of this.
...shows that the author talks about using a “Chase card abroad” in a previous comment [1], which means they cannot be Indian as Chase doesn't issue cards or have substantial operations in India.
I don't want to run around following specific comment authors back through their threads, but as an Indian by birth it is pretty hurtful to see this kind of drive-by casual characterization of a population in a space like this. It also seems to be pretty contrary to the HN guidelines ("Please don't sneer, including at the rest of the community.")
It is probably not bots. The author's reach is pretty good. He actually has loyal fan followers in India. You can see the same when he shows up on a podcast or gives a talk.
I think there are a lot of Indian developers who are on Hacker News as well as on GitHub and other forums.
Perhaps stolen accounts? I doubt every user is practising good security hygiene with a unique password for each account. Password leaks from other sites might well allow a motivated individual to hijack some here.
I could speculate that someone in the past had the business mindset to create thousands of accounts over multiple sites and offers the ability to loan them out for a period of time.
I don't read a lot of papers, but to me this one seems iffy in spots.
> A1 cost $291.47 ($18.21/hr, or $37,876/year at 40 hours/week). A2 cost $944.07 ($59/hr, $122,720/year). Cost contributors in decreasing order were the sub-agents, supervisor and triage module. *A1 achieved similar vulnerability counts at roughly a quarter the cost of A2*. Given the average U.S. penetration tester earns $125,034/year [Indeed], scaffolds like ARTEMIS are already competitive on cost-to-performance ratio.
The statement about similar vulnerability counts seems like a straight-up lie. A2 found 11 vulnerabilities, 9 of them valid. A1 also found 11, but only 6 valid. Counting invalid vulnerabilities to claim the cheaper agent is just as good is a weird choice.
Also, the scoring is suspect and seems tuned specifically to give the AI a boost, relying heavily on severity scores.
Also kinda funny that the AIs were slower than all the human participants.
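Redoing the cost comparison on valid findings only (all figures from the excerpt quoted above) makes the "similar counts" framing easier to judge:

```python
# Cost per *valid* vulnerability, using the numbers quoted above.
a1_cost, a1_valid = 291.47, 6  # A1: 11 reported, 6 valid
a2_cost, a2_valid = 944.07, 9  # A2: 11 reported, 9 valid

print(f"A1: ${a1_cost / a1_valid:.2f} per valid finding")  # ~$48.58
print(f"A2: ${a2_cost / a2_valid:.2f} per valid finding")  # ~$104.90
```

A1 is still cheaper per valid finding, but A2 found 50% more valid vulnerabilities, which is exactly the gap the "similar counts" phrasing papers over.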
Length is enforced when submitting; titles that are too long generate a message saying how many characters over the limit they are, and the [Submit] button is disabled until the title is shortened.
Unlike you and your smears, I make my positions clear and cite my sources for them.
If you mean by "anti-vaxer (sic)" my opposition to the Covid shots and mandates, then so be it. Many, including me, who have had older vaccines, especially from before the 1986 US liability shield and subsequent problems, are "anti-vax". Even though we still vaccinate our children.
"Covid conspiracist" must mean I cited lab leak possibility and reasons for considering it. Now that federal agencies agree, you should reconsider this lame smear attempt.
Your use of (misspelled) "spell words" (Roger Scruton's phrase) to curse me marks you as superstitious and thoughtless. Do better!
> Even the infotainment system, which a blind person might want to use, for example when waiting for a sighted acquaintance in the car, does not have a screen reader and is not in any way usable.
It has really excellent voice commands for pretty much any function, though. Sadly, they can only be triggered by pressing the right scroll wheel on the steering wheel. While it's possible to just reach over, that's probably not optimal for your suggested use case.
> (On a side note, Bing chat already knows now that she won the prize. Color me impressed.)
It actually doesn't. Bing searches for your query and uses plain old search results as extra context for the actual LLM. GPT-4 still has the same knowledge cutoff as when the model was last trained.
Here's what it feeds to the model when searching for "nobel prize in physics 2023":