But there's the rub. If you had found the code on GitHub, you would have seen the "simple licence", which required you to give attribution, release your code under a specific licence, seek an alternative licence, or take some other appropriate action.

But if the LLM generates the code for you, you never see the conditions of that "simple licence", so you can't follow them. You are probably violating the original licence, but because you can say "I didn't copy that code, I just generated some new code with an LLM", people ignore the fact that it's based on some other code sitting in a GitHub repo somewhere.
> originally was intended to prevent owner-operators of mechanical printing presses from printing and selling copies of some author's books without paying them or getting permission.
We agree that that was its initial stated intention.
However, what we have seen in practice is that it has resulted in the owner-operators of those machines banding together to restrict access to the machines unless authors sign exploitative contracts assigning their rights to the operators (which they interpret as "getting permission").
The world has changed substantially since the 1710 Statute of Anne; there are a thousand things you could call the modern-day equivalent of mechanically printing a book, with myriad capital and operating costs and varying availability. Many ways an independent author or artist can publish their work are extremely cheap and effective. I'm relatively anti-copyright, but that doesn't mean that everyone currently benefiting from copyright law is rent-seeking in an exploitative way.
I used to volunteer for a local non-profit a few years ago.
From time to time, I would reflect on the fact that Microsoft and other commercial suppliers were getting paid for providing services to us, but I was expected to work for free.
> Funny how these things when done by a human is a positive and when done by an LLM is a negative.
> Humans generate perfect code on the first pass every time and it's only LLMs that introduce bad implementations.
That's not what the "anti-llm experts" are saying at all. If you think of LLMs as "bad first draft" machines, then you'll likely be successful in finding ways to use LLMs.
But that's not what is being sold. Altman and Amodei are not selling "this tool will make bad implementations that you can improve on". They are selling "this tool will replace your IT department". Pointing out that the tool isn't capable of doing that is not pretending that humans are perfect by comparison.
> QA has to somehow think of all the inane ways that a user will actually try using the thing, knowing that not all users are technically savvy at all.
The classic joke is (this variant from Brennan Keller[0]):
A QA engineer walks into a bar.
- Orders a beer.
- Orders 0 beers.
- Orders 99999999999 beers.
- Orders a lizard.
- Orders -1 beers.
- Orders a ueicbksjdhd.
First real customer walks in and asks where the bathroom is.
How does PBKDF2 prevent an offline decryption attack with unlimited attempts?

It doesn't prevent them; all it does is slow each attempt down. For the average person's easy-to-remember password, it's probably increasing the attacker's effort from milliseconds to a few days.
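To make the "slow down, not prevent" point concrete, here's a minimal sketch using Python's standard `hashlib.pbkdf2_hmac`. The iteration count and salt below are illustrative, not a recommendation; the key observation is that each brute-force guess must pay the full iteration cost, so the defender's chosen count multiplies the attacker's per-guess work linearly:

```python
import hashlib
import os

def derive_key(password: str, salt: bytes, iterations: int) -> bytes:
    # Each call runs `iterations` rounds of HMAC-SHA256, so an attacker
    # testing N candidate passwords pays N * iterations rounds total.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)

salt = os.urandom(16)          # per-password salt blocks precomputed tables
key = derive_key("correct horse battery staple", salt, 600_000)

# Derivation is deterministic for the same inputs: that's what lets the
# defender verify a login, and equally what lets an offline attacker
# test guesses without limit -- only slower.
assert key == derive_key("correct horse battery staple", salt, 600_000)
assert key != derive_key("wrong guess", salt, 600_000)
```

So a weak password with a tiny guess space still falls; PBKDF2 only stretches the time, it doesn't add entropy.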