Definitely not a big fan of it, but realistically speaking, it’s here to stay. It is wise for them to govern and regulate it rather than outright ban it, especially since with a project as big as this one, people will try anyway. Saying that the responsibility falls on the human is definitely the right move.
any resulting bugs or security flaws firmly onto the shoulders of the human submitting it.
Watch Americans and their companies pull some mad gymnastics apportioning blame for this
Well yea, it’s the human submitting the code, and using a tool known to be imperfect
Your comment is pretty dumb
At this point it’s 23 on -5 with opinions on that dumb comment, sunshine
Maintainers’ only responsibility is to ensure quality; they shouldn’t have to check for rogue AI submissions.
Tho I still miss consistent fucking weather, so year of the NetBSD?
Ensuring you don’t approve garbage, whether human- or AI-generated, is part of quality
Linux kernel being written by Microsoft’s AI.
Microsoft needs to try to ruin Linux somehow; it can’t just hurt Windows 11 with AI slop code, it needs to expand its efforts to other systems.
which is trained on free and open source code
That will definitely not introduce some weird things when it starts feeding on itself.
I am the c/fuck_ai person but at this point I have made peace with the fact that we can’t avoid it. I still don’t want it to do artsy stuff (image gen, video gen) or to be used blindly in critical stuff, because humans are the ones that should be doing it or have constant oversight. I think the team’s logic is correct here, because there is no way to know if the code is from an LLM or a human unless something there screams LLM or the contributor explicitly mentions it. Mandating the latter seems like a reasonable move for now.
I consider myself to be more pro-AI than not, but I’m certainly not a zealot and mostly agree with the take that it shouldn’t be used in artistic pursuits. However, I love using AI to help me create art. It can give great critiques, often good advice on how to improve, and is great for rapid experimentation and prototyping. I actually used it this weekend to see what a D&D mini might look like with different color schemes before painting it. I could have done the same with Gimp, but it would have taken much longer for worse results, and it was ultimately just for a brainstorming session. How do you feel about my AI usage from your perspective? I suppose from an energy conservation perspective, all of it was bad, but I’m more interested in a less trivial take.
Yes the energy consumption is bad. My main gripe about LLM generated art is that it will not be original. It will use its training data from uncredited artworks to generate it. Art usually is made by humans to express something or convey something in a creative way. LLMs fail at that. What LLMs can actually be helpful at is making learning art more accessible to everyone. Art schools or private art classes can be expensive. This lowers the barrier to entry.
As for you using generated art: it might be really beautiful, but it will be very difficult to maintain that style, and even more difficult to convince anyone that it is your style. The artist doesn’t get much recognition with LLM-generated art. Using it as a critic also seems stupid, because LLMs will always try to give an objective view rather than a subjective one. Your art won’t trigger an emotion in it, and it might say it is bad or tell you to “do this to make it more understandable”, and that’s where you lose as an artist.
My mom likes to paint as a hobby. What she does is search for stuff on Pinterest (which is mostly LLM-generated). She uses it as inspiration to do it in her own style and maybe give it some spin. She keeps all of it for herself.
I’m a writer. I’ve gotten paid to write a few things here and there, but mostly there are just huge barriers for people without connections.
I plan on using AI to turn my writing into a visual animated format for people to consume. I don’t much care about the style of art, I just want my work to be seen. I can’t afford to pay for artists. If I could, I would. But at least this would give me an opportunity to show my work without some execs saying no a hundred times.
When I look at the art for cartoons in the 70s/80s, there is so much crap animation with mistakes and duplications, you would think it’s “a.i. slop.” I understand that these were done overseas, pumped out quickly so quality control was overlooked for speed… but it wasn’t the animation I was interested in, it was the stories and characters.
I still think original artists will continue to exist. A.I. is just another tool. People will get bored of the same old stuff and want originality. I really hope it’ll make our lives better in the long run, but we’re just in the weird middle stage of A.I. crawling before running.
I can’t afford to pay for artists
You can afford LLMs right now because all of the LLM companies are losing money on them. If they decide they want to make a profit, they will raise their prices significantly. So you still end up in the same situation. You don’t have much control over what an LLM spits out, while with doing animation manually you have total control, or can at least sit with an actual animator to make it look how you envision it.
I plan on using AI to turn my writing into a visual animated format for people to consume.
What makes you think that people will respond the same way and in the same numbers to LLM-generated animation as they would if it were crafted by an artist? I reckon the response will be much lower. I see it on YouTube constantly. I watched a video about a topic, then I got recommended something related to it from a different channel. Guess what? The script and the animation were so damn similar, and the shit they were spewing wasn’t even true in the end. Everything that both the channels made was slop. Sure, they spit out more content than conventional methods and got a few thousand views each video and made decent money on it. But they aren’t gonna be sustainable for long if they care about audience retention.
Since then I have been more mindful of what videos I click on, even going to the extent of disabling recommendations and watch history.
I have downloaded my own LLM that can be used on my own computer… So the only cost is electricity, since I upgraded my computer before the prices went to shit. Newegg even gave me free RAM with the purchase of a motherboard, so I lucked out on that. Storage is not an issue either, since I got that back in 2024 knowing Trump would fuck everything up.
And no, people might not respond the same way to my work, but then again I’m not taking any work away from anyone else, because otherwise the work would not even exist. If you want to fund me and the artist for our work, then okay. Show me the money.
One thing I’ve noticed is that I see many more people complain about slop than slop itself. It’s so annoying at this point that it’s making me go in the opposite direction. Hey everyone, slop here… Microsoft slop here… Use Linux Linux Linux. Slop slop slop. Sloppy joes. It’s like candlestick makers complaining to Nikola Tesla.
my own LLM that can be used on my own computer
May I ask how many billion parameters it has? Because the paradox here is:
- If it is weak, then you will be getting much, much worse results than even the big models the corpos have (we don’t even know how much worse, tbh), let alone the quality of an actual artist.
- If you have a respectably powerful model, then your PC might cost thousands of dollars (even ignoring the price hikes), which eliminates the excuse of not being able to pay an actual artist.
Another great example of how AI is just wreaking havoc on people’s brains.
- Wants to show an enticing product to execs, doesn’t want to invest in paying an artist
- Realizes they have to have connections but doesn’t want to network
- Wants recognition of their hard work, hasn’t sought out a community or collaboration but states “show me the money”
AI will fix everything for me! Slop doesn’t exist! (Ignores the very article we’re in, any platform algorithm feed, the US president shitposting, all the slop that gets presented here.) Go get ’em, Nik, don’t let the haters stop your brilliance.
A very extreme takeaway, but okay.
Bad actors submitting garbage code aren’t going to read the documentation anyway, so the kernel should focus on holding human developers accountable rather than trying to police the software they run on their local machines.
“Guns don’t kill people. People kill people”
Torvalds and the maintainers are acknowledging reality: developers are going to use AI tools to code faster, and trying to ban them is like trying to ban a specific brand of keyboard.
The author should elaborate on how exactly AI is like “a specific brand of keyboard”. Last I checked a keyboard only enters what I type, without hallucinating 50 extra pages. And if AI, a tool that generates content, is like “a specific brand of keyboard”, does that mean my brain is also a “specific brand of keyboard”?
I get their point. If you want to create good code by having AI create bad code and then spending twice the time to fix it, feel free to do that. But I’m in favor of a complete ban.
You’re the one comparing AI and guns/killing people, and then saying their metaphorical comparison isn’t accurate? Lol
Wooting and Razer had a macro function that allowed Counter-Strike players to set up a function to always get a counter-strafe. Valve decided that was a bridge too far and banned “hardware-level” exploits.
So, Valve once banned a keyboard.
Torvalds and the maintainers are acknowledging reality: developers are going to use AI tools to code faster, and trying to ban them is like trying to ban a specific brand of keyboard.

The author should elaborate on how exactly AI is like “a specific brand of keyboard”. Last I checked a keyboard only enters what I type, without hallucinating 50 extra pages. And if AI, a tool that generates content, is like “a specific brand of keyboard”, does that mean my brain is also a “specific brand of keyboard”?
It’s about the heritage of code not being visible from the surface. I don’t know about your brain.
The (very obvious) point is that this cannot be enforced. So might as well deal with it upfront.
The keyboard thing is sort of a parable, it is as difficult to determine if code was generated in part by AI as it is to determine what keyboard was used to create it.
AI is a useful tool for coding as long as it’s being used properly. The problem isn’t the tool, the problem is the companies who scraped the entire internet, trained LLM models, and then put them behind paywalls with no options to download the weights so that they could be self-hosted. Brazen, unaccountable profiteering off of the goodwill of many open source projects without giving anything back.
If LLMs were community-trained on available, open-source code with weights freely available for anyone to host there wouldn’t be nearly as much animosity against the tech itself. The enemy isn’t the tool, but the ones who built the tool at the expense of everyone and are hogging all the benefits.
Eh, trust me, anti-AI people don’t think this much about it
Also, there are a lot of open weight models out there that are pretty good
There are hundreds of such LLMs with published training sets and weights available on places like HuggingFace. Lots of people run their own LLMs locally, it’s not hard if you have enough vram and a bit of patience to wait longer for each reply.
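With a tool like Ollama, for example, it’s basically two commands (the model name here is just an example, pick whatever fits in your VRAM):

```
ollama pull llama3.2
ollama run llama3.2
```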
Last I checked a keyboard only enters what I type
I’m assuming the author is talking about mobile keyboards, which have autocomplete and autocorrect.
Last I checked a keyboard only enters what I type
I’ve had (broken) keyboard “hallucinate” extra keystrokes before, because of stuck keys. Or ignore keypresses. But yeah, that means the keyboard is broken.
Out of curiosity how much code have you contributed to the Linux kernel?
The title of the article is extraordinarily wrong, which makes it clickbait.
There is no “yes to copilot”
It is only a formalization of what the Linux project said before: all AI is fine, but a human is ultimately responsible.
" AI agents cannot use the legally binding “Signed-off-by” tag, requiring instead a new “Assisted-by” tag for transparency"
The only mention of copilot was this:
“developers using Copilot or ChatGPT can’t genuinely guarantee the provenance of what they are submitting”
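For illustration, a patch trailer under the new rules would presumably look something like this (the exact wording after “Assisted-by:” is my guess; the article doesn’t show one):

```
Fix refcount leak in foo_release()

Signed-off-by: Jane Developer <jane@example.com>
Assisted-by: Claude Code (Anthropic)
```

The human still signs off and carries the responsibility; the tool only gets the transparency tag.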
This remains a problem that the new guidelines don’t resolve. Because even using AI as a tool and having a human review it still means the code the LLM output could have come from non-GPL sources.
That’s probably why they say “a human is responsible” not “a human must validate it.” I certainly agree that validation is not always possible. And this problem will get worse in time.
Yeah, that’s also my question. Partially because I am a former-lawyer-turned-software-developer… but, yeah. How are the kernel maintainers supposed to evaluate whether a particular PR contains non-GPL code?
Granted, this was potentially an issue before LLMs too, but nowhere near the scale it will be now.
(In the interests of full disclosure, my legal career had nothing to do with IP law or software licensing - I did public interest law).
They don’t, just like they don’t with human submitted stuff. The point of the Signed-off-by is the author attests they have the rights to submit the code.
Which I’m guessing they cannot attest, if LLMs truly have the 2-10% plagiarism rate that multiple studies seem to claim. It’s an absurd rule, if you ask me. (Not that I would know, I’m not a lawyer.)
Where are you seeing the 2-10% figure?
In my experience code generation is most affected by the local context (i.e. the codebase you are working on). On top of that a lot of code is purely mechanical - code generally has to have a degree of novelty to be protected by copyright.
Imagine how broken it would be otherwise. The first person to write a while loop in any given language would be the owner of it. Anyone else using the same concept would have to write an increasingly convoluted while loop with extra steps.
Anyone else using the same concept would have to write an increasingly convoluted while loop with extra steps.
Sounds like an origin story for recursion.
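Something like this, a minimal (and entirely tongue-in-cheek) sketch:

```python
# the increasingly convoluted while loop, with the loop finally removed
def count_down(n):
    if n <= 0:
        return  # base case instead of a loop condition
    print(n)
    count_down(n - 1)
```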
If it’s flagged as “assisted by <LLM>” then it’s easy to identify where that code came from. If a commercial LLM is trained on proprietary code, that’s on the AI company, not on the developer who used the LLM to write code. Unless they can somehow prove that the developer had access to said proprietary code and was able to personally exploit it.
If AI companies are claiming “fair use,” and it holds up in court, then there’s no way in hell open-source developers should be held accountable when closed-source snippets magically appear in AI-assisted code.
Granted, I am not a lawyer, and this is not legal advice. I think it’s better to avoid using AI-written code in general. At most use it to generate boilerplate, and maybe add a layer to security audits (not as a replacement for what’s already being done).
But if an LLM regurgitates closed-source code from its training data, I just can’t see any way how that would be the developer’s fault…
Pretty convenient.
This is how copyleft code gets laundered into closed source programs.
All part of the plan.
How would they launder it? Just declare it their own property because a few lines of code look similar? When there’s no established connection between the developers and anyone who has access to the closed-source code?
That makes no sense. Please tell me that wouldn’t hold up in court.
I believe what they’re referring to is the training of models on open source code, which is then used to generate closed source code.
The break in connection you mention makes it not legally infringement, but now code derived from open source is closed source. Because of the untested nature of the situation, it’s unclear how it would unfold, likely hinging on how the request was formed.
We have similar precedent with reverse engineering, but the non sentient tool doing it makes it complicated.
That makes sense. I see the problem with that, and I don’t have a good solution for it. It is a divergence of topic though, as we were discussing open-source programmers using LLMs which are potentially trained on closed-source code.
LLMs trained on open-source code is worth its own discussion, but I don’t see how it fits in this thread. The post isn’t about closed-source programmers using LLMs.
Besides, closed-source code developers could’ve been stealing open-source code all along. They don’t really need AI to do that.
Still, training LLMs on open-source code is a questionable practice for that reason, particularly when it comes to training commercial models on GPL code. But it’s probably hard to prove what code was used in their datasets, since it’s closed-source.
Please tell me that wouldn’t hold up in court.
First tell us how much money you have. Then we’ll be able to predict whether the courts will find in your favor or not
Sad but true…
First of all, who is going to discover the closed-source use of GPL code and bring a lawsuit anyway?
Second, the LLM ingests the code, and then spits it back out, with maybe a few changes. That is how it benefits from copyleft code while stripping the license.
Maybe a human could do the same thing, but it would take much longer.
Wait, did you just move the goalposts? I thought the issue we were talking about was open-source developers who use LLM-generated code and unwittingly commit changes that contain allegedly closed-source snippets from the LLM’s training data.
Now you want to talk about LLM training data that uses open-source code, and then closed-source developers commit changes that contain snippets of GPL code? That’s fine. It’s a change of topic, but we can talk about that too.
Just don’t expect what I said before about the previous topic of discussion to apply to the new topic. If we’re talking about something different now, I get to say different things. That’s how it works.
I was responding specifically to this part
But if an LLM regurgitates closed-source code from its training data, I just can’t see any way how that would be the developer’s fault…
showing what would happen when the LLM regurgitates open-source code into closed-source projects.
Sorry if you didn’t like that.
The title of the article is extraordinarily wrong, which makes it clickbait.
It’s a pain in the ass with some of those fucking tech/video/showbiz news outlets, and then there are rules in some fora where you cannot make “editorialized” post titles, even though it’s so tempting to correct the awful titling.
Because even using AI as a tool and having a human review it still means the code the LLM output could have come from non-GPL sources.
I get why they are passing this by though, since you don’t know the provenance of that Stack Overflow snippet, either.
Yup.
I would also just point out that this doesn’t change the Linux kernel’s legal exposure to infringing submissions, which existed from before the advent of LLMs.
the LLM output could have come from non-GPL sources
Fundamentally not how LLMs work, it’s not a database of code snippets.
“Derivative works”
Copilot? You mean the AI with terms of service that are in bold and explicit: “for entertainment purposes only”?
Which is why it’s in the title and not the article? EntertainBait?
Just legal stuff. Making a huge deal of it is dumb
I disagree.
Legal stuff would be “use at your own risk”, or “answers may not be correct”.
This is really strong language.
I suppose GitHub Copilot is meant, which is a different thing.
Different how? Isn’t GitHub owned by Microsoft?
Different in that it’s not an AI model, it’s just a tool you can use to run AI models like Claude.
see my reply here
There are like 70 copilots
The hell. How can they expect people to understand? They plan to sell 100 things under the same name and pass it off as one big AI, when it is a hundred different, unrelated things?
They’ve never been good at naming things, but they now seem to be going out of their way to be the worst with the names of their software. For instance, they named the successor to the already generically named “Remote Desktop” app “Windows App”.
This one is funny. Go google “windows app commands”. They just fucked sysadmins.
Most of those are bundled; no one is buying Copilot for OneNote, they just get it when they get the rest of that suite.
Ok, so there are 70-81 Copilots, and GitHub’s is one of them.
Why is GitHub Copilot a different thing in the context of the reply that was being responded to?
Copilot is the harness, Claude and GPT are the models
Copilot is by far the worst harness of all the major players
Yes, I get that: Copilot is like opencode or Cursor, though perhaps with less general access to models.
There was a reply
Copilot? You mean the AI with terms of service that are in bold and explicit: “for entertainment purposes only”?
followed by
I suppose GitHub Copilot is meant, which is a different thing.
I was asking why GitHub Copilot is different in that context.
AI is here, another tool to use…the correct way. Very reasonable approach from Torvalds.
I don’t have a problem with LLMs as much as the way people use them. My boss has offloaded all of his thinking to LLMs to the point he can’t fix a sentence in a slide deck without using an LLM.
It’s the people that try to use LLMs for things outside their domain of expertise that really cause the problems.
This is a big point. People need to understand that LLMs are more like a fancy graphing calculator: they are very good and can handle many things, but it’s on you to understand why the calculation is meaningful. At a certain point no one wants to see your long division or factorials. We want the results, and for students and professionals to focus on the concepts.
I get the metaphor but it’s not a great one for AI in mathematics especially. A statistical word generator is not going to perform reliable math and woe to anyone who acts otherwise.
I would call it an autistic sycophantic savant with brain damage. It’s able to perform apparent miraculous feats of memory and creativity but then be unable to tell reality from fiction, to tell if even the simplest response is valid, and likely will lie about it to make itself seem more competent to please you.
If you have a use for an assistant like that, then great. But a calculator - simple and cheap and reliable - it definitely is not.
It’s the people that try to use LLMs for things outside their domain of expertise that really cause the problems.
That seems too general. I’m a mobile developer and sometimes I need a simple script outside my knowledge area. I needed to scrape a website recently, not for anything serious, but to save me time. Claude wrote it and it works. It’s probably trash code, but it works and it helped. But you wouldn’t want me using Claude to do important work outside my specific area of focus either, or I’m sure I’d cause problems.
I’m also a mobile app dev and at my workplace they’re having non-mobile devs submit code to my codebases totally vibed with no understanding behind it. It’s absolutely causing problems, especially for me, who is one of the only lines of defense keeping stuff even remotely maintainable.
So yes basically you’re right. If people only used it to learn and do initial code review passes and other reasonable things we’d be totally fine. But that’s unfortunately not the reality 🙈
It’s absolutely causing problems, especially for me, who is one of the only lines of defense keeping stuff even remotely maintainable.
The next step is the CEO going, “Look at how good these non-mobile devs are, they’re submitting 10x the commits to the mobile repo compared to boraginoru, our mobile dev! We should fire him and just let the backend devs keep vibe coding it!”
I’m talking about people that are accountants who now think they can create software. Or engineers who think they can now write legal briefs for court.
Very frustrating for sure. Like any tool, it’s up to humans to know when the tool is useful.
Partly a marketing issue.
Companies keep advertising their new AI’s as destroyers of worlds, and something that’s too dangerous to even release.
As with anything else, the average user will have but the most surface-level understanding of the tool
Clickbait got me. No mention of “Yes copilot” which I assumed was a joke anyway.
👆🏻true
Seems like a reasonable approach. Make people be accountable for the code they submit, no matter the tools used.
If the accountability cannot be practically fulfilled, the reasonable policy becomes a ban.
What good is it to say “oh yeah you can submit LLM code, if you agree to be sued for it later instead of us”? I’m not a lawyer and this isn’t legal advice, but sometimes I feel like that’s what the Linux Foundation policy says.
What accountability has there been for bad code by humans?
But this was already the case. When someone submitted code to Linux they always had to assume responsibility for the legality of the submitted code, that’s one of the points of mandatory Signed-off-by.
But now, even the person submitting the license-breaching content may be unaware that they are doing that, so the problem is surely worse now that contributors can easily unwittingly be on the wrong side of the law.
That’s their problem. If they are using an LLM and cannot verify the output they shouldn’t be using an LLM
Problem is that broadly most GenAI users don’t take that risk seriously. So far no one can point to a court case where a rights holder successfully sued someone over LLM infringement.
The biggest chance is Getty and their case, with very blatantly obvious infringement. They lost in the UK, so that’s not a good sign.
Most GenAI users do not submit code to the Linux kernel project.
So why invite them to?
Nobody can verify that the output of an LLM isn’t from its training data except those with access to its training data.
It is their problem until the second they submit it; then it is the project’s problem. You can lay the blame for the bad actions wherever you want, but the reality is that the work of verifying the legality and validity of these submissions is being abdicated, crippling projects under increased workloads going through ever more submissions that amount to junk.
What is the solution for that? The fact that it is the fault of the lazy submitter doesn’t clean up the mess they left.
Frankly, I expect the kernel dudes to be pretty good about this; their style guides alone are quite strict, and any funny business in a PR that isn’t marked correctly likely means a ban from making PRs at all. How it worked beforehand, as already stated by others, is the author says “I promise this follows the rules” and that’s basically the end of it. Giving an official avenue for generated code is a great way to reduce the negatives of what will happen anyway. We know this from decades of real-life experience trying to ban things like alcohol or drugs: time after time, providing a legal avenue with some rules makes things safer. Why wouldn’t we see a similar effect here?
I do think that some projects will fare better than others, particularly ones like you mentioned, where the team is robust and capable of handling the filtering of increased submissions from these new sources.
I believe we are going to end up having to see some new mechanism for project submissions to deal with the growing imbalance between submission volume and work hours available for review, as became necessary when viruses, malware, and spam first came into being. It has quickly become incredibly easy for anyone to make a PR, but not at all easier to review them, so something is going to have to give in the FOSS world.
No, it’s not a reasonable approach. Make people be the authors of the code they submit is reasonable, because then it can be released under the GPL. AI generated code is public domain.
I suppose there should be no code generators, assemblers, compilers, linkers, or LSPs then either? Just etching 1s and 0s?
Also, having buttons on your clothes is an abomination. Hooks and eyes only.
The copyright office has made it explicitly clear that those tools do not interfere with the traditional elements of authorship, and that the use of LLMs does. So, if you don’t want to take my word for it, take the US Copyright Office’s word for it.
As the agency overseeing the copyright registration system, the Office has extensive experience in evaluating works submitted for registration that contain human authorship combined with uncopyrightable material, including material generated by or with the assistance of technology. It begins by asking “whether the ‘work’ is basically one of human authorship, with the computer [or other device] merely being an assisting instrument, or whether the traditional elements of authorship in the work (literary, artistic, or musical expression or elements of selection, arrangement, etc.) were actually conceived and executed not by man but by a machine.” In the case of works containing AI-generated material, the Office will consider whether the AI contributions are the result of “mechanical reproduction” or instead of an author’s “own original mental conception, to which [the author] gave visible form.” The answer will depend on the circumstances, particularly how the AI tool operates and how it was used to create the final work. This is necessarily a case-by-case inquiry.

If a work’s traditional elements of authorship were produced by a machine, the work lacks human authorship and the Office will not register it. For example, when an AI technology receives solely a prompt from a human and produces complex written, visual, or musical works in response, the “traditional elements of authorship” are determined and executed by the technology—not the human user. Based on the Office’s understanding of the generative AI technologies currently available, users do not exercise ultimate creative control over how such systems interpret prompts and generate material. Instead, these prompts function more like instructions to a commissioned artist—they identify what the prompter wishes to have depicted, but the machine determines how those instructions are implemented in its output. For example, if a user instructs a text-generating technology to “write a poem about copyright law in the style of William Shakespeare,” she can expect the system to generate text that is recognizable as a poem, mentions copyright, and resembles Shakespeare’s style. But the technology will decide the rhyming pattern, the words in each line, and the structure of the text. When an AI technology determines the expressive elements of its output, the generated material is not the product of human authorship. As a result, that material is not protected by copyright and must be disclaimed in a registration application.
In other cases, however, a work containing AI-generated material will also contain sufficient human authorship to support a copyright claim. For example, a human may select or arrange AI-generated material in a sufficiently creative way that “the resulting work as a whole constitutes an original work of authorship.” Or an artist may modify material originally generated by AI technology to such a degree that the modifications meet the standard for copyright protection. In these cases, copyright will only protect the human-authored aspects of the work, which are “independent of ” and do “not affect” the copyright status of the AI-generated material itself.
This policy does not mean that technological tools cannot be part of the creative process. Authors have long used such tools to create their works or to recast, transform, or adapt their expressive authorship. For example, a visual artist who uses Adobe Photoshop to edit an image remains the author of the modified image, and a musical artist may use effects such as guitar pedals when creating a sound recording. In each case, what matters is the extent to which the human had creative control over the work’s expression and “actually formed” the traditional elements of authorship.
What this makes clear is that it certainly isn’t black or white as you say. Nevertheless, automation converting an input to an output, simply cannot be the only mechanism used in determining authorship.
And that wouldn’t change my statement anyway, but rather supports it. The person submitting a patch must be accountable for its contents.
An outright ban would need to carefully define how an input gets converted to an output, and that may not be so clear. To be effective, one would potentially have to end the use of many tools that have been used for many years in the kernel, including snippet generation, spelling and grammar correction, and IDE autocompletion. So such a reductive view simply will not suffice.
Additionally, copyrightability and licensability are wholly different questions. And it does not violate the GPL to include public domain content, since the license applies to the aggregate work.
If a work’s traditional elements of authorship were produced by a machine, the work lacks human authorship and the Office will not register it. For example, when an AI technology receives solely a prompt from a human and produces complex written, visual, or musical works in response, the “traditional elements of authorship” are determined and executed by the technology—not the human user. Based on the Office’s understanding of the generative AI technologies currently available, users do not exercise ultimate creative control over how such systems interpret prompts and generate material. Instead, these prompts function more like instructions to a commissioned artist—they identify what the prompter wishes to have depicted, but the machine determines how those instructions are implemented in its output. For example, if a user instructs a text-generating technology to “write a poem about copyright law in the style of William Shakespeare,” she can expect the system to generate text that is recognizable as a poem, mentions copyright, and resembles Shakespeare’s style. But the technology will decide the rhyming pattern, the words in each line, and the structure of the text. When an AI technology determines the expressive elements of its output, the generated material is not the product of human authorship. As a result, that material is not protected by copyright and must be disclaimed in a registration application.
That seems very clear to me. Generative AI output is not human authored, and therefore not copyrighted.
The policy I use also makes very clear the definition of AI generated material:
https://sciactive.com/human-contribution-policy/#Definitions
I’m not exactly sure how you can possibly think there is an equivalence between a tool like a spelling and grammar checker and a generative AI, but there’s a reason the copyright office will register works that have been authored using spelling and grammar checkers, but not works that have been authored using LLMs.
Just read the next two paragraphs. Don’t just stop because you got to something that you like. The equivalence I draw is clear. You don’t like it, and that’s okay. But one would have to clarify exactly what the ban entails, and that wouldn’t be as clear as you might think. LLMs only? Transformers specifically? What about graph generation, or other ML models? Is it just ML? If so, is that because a matrix lattice was used to get from input to output? Could other deterministic math functions trigger the same ban? What if a spell checker used RNG to select the best replacement from a list of correct options? What if a compiler introduces an assembled output with an optimization not of the author’s writing?
Do you see why they say “The answer will depend on the circumstances, particularly how the AI tool operates and how it was used to create the final work. This is necessarily a case-by-case inquiry”?
And that still affects copyrightability, not license compliance.
Do you want to explain to me what, in those two paragraphs, means that the use of spell checkers and LLMs is equivalent with regard to copyrightability? It seems like those paragraphs make it clear that the use of spell checkers is not the same as LLMs.
The policy I use bans “generative AI model” output. Generative AI is a pretty well defined term:
https://en.wikipedia.org/wiki/Generative_AI
https://www.merriam-webster.com/dictionary/generative AI
If you have trouble determining whether something is a generative AI model, you can usually just look up how it is described in the promotional materials or on Wikipedia.
Type: Large language model, Generative pre-trained transformer
- https://en.wikipedia.org/wiki/Claude_(language_model)
I never said it violates GPL to include public domain code. I’m not sure where you got that from. What I said is that public domain code can’t really be released under the GPL. You can try, but it’s not enforceable. As in, you can release it under that license, but I can still do whatever I want with it, license be damned, because it’s public domain.
I did that with this vibe coded project:
https://github.com/hperrin/gnata
I just took it and re-released it as public domain, because that’s what it is anyway.
Isn’t that the rule? The author has to be a human?
The new guidelines mandate that AI agents cannot use the legally binding “Signed-off-by” tag, requiring instead a new “Assisted-by” tag for transparency. Ultimately, the policy legally anchors every single line of AI-generated code and any resulting bugs or security flaws firmly onto the shoulders of the human submitting it.
There is a difference between authoring and submitting, right?
If the author is an LLM, then the author is not a human.
There are so many reasons not to include any AI generated code.
“yes to copilot no to AI slop” lol lmfao
I’d still be highly sceptical about pull requests with code created by LLMs. Personally, what I’ve noticed is that the author of such a PR doesn’t even read the code, and I have to go through all the slop.
Ya, I’m finding myself being the bad code generator at work, as I’m scattered across so many things at the moment due to attrition and AI can do a lot of the boilerplate work. But it’s such a time and energy sink to fully review what it generates, and others have caught basic things I missed, which shows the sloppiness. I usually take pride in my code, but I have no attachment to what’s generated, and that’s exposing issues with trying to scale out using this.
Same. There’s reduction in workforce, pressure to move faster, and no good way to do that without sloppiness. I have never been this down on the industry before; it was never great, but now it’s terrible.
Some thought I had the other day: LLMs are supposed to make us more productive, say by 20%. Have you won a 20% pay rise since you adopted them? I haven’t.
Increases in productivity go to the owners, not the workers. Even imaginary increases in productivity.
Just fucking stop using it? Wtf? Tell your boss to pound sand! They’re going to blame you when it goes south anyway, so you might as well stay honest.
I suspect the answer will be that such large requests as you frequently see with LLM codegen will just be rejected.
Already I see changes broken up and suggested bit by bit, so I presume the same best practice applies.
Did we all forget about stackoverflow?
People blindly copy/pasted from there all the time.
A couple of years back I got a PR at work that used a block of code that read a CSV, used some stream method to convert it to binary, and then fed it to pandas to make a dataframe. I don’t remember the exact steps it did, but it was just crazy when pd.read_csv existed.
On a hunch I pasted the code into Google and found an exact match on Stack Overflow for a very weird use case on very early pandas.
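Roughly (from memory, so not the exact code, and the file name is made up), it was something like:

```python
import io
import pandas as pd

# what the PR did: read the file, re-encode it to bytes,
# wrap it in a stream, and only then hand it to pandas...
with open("data.csv") as f:
    buffer = io.BytesIO(f.read().encode("utf-8"))
df = pd.read_csv(buffer)

# ...when this has existed practically forever:
df = pd.read_csv("data.csv")
```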
I’m lucky, and if people send obvious shit at work I can just cc their manager, but I feel for the volunteers at large FOSS projects, or even paid employees.
Yeah people have not understood their code for centuries now
I agree. If AI becomes outlawed, it will simply be used without other people knowing about it.
This approach, at least, means that people will label AI-generated code as such.
Maybe. There’s still strong disapproval around it. I can imagine many will still hide it.
Ah, the solution that recognizes there’s no way to eliminate AI from the supply chain after it’s already been introduced.
You make it sound as if there were another choice, if only people had better principles. Pray tell us, what would you have done? Now. Not in the past, now.
That wasn’t my intent. This is me saying, “of course that’s what they’re going to do because there’s nothing else they can do.”
I completely misunderstood you. I’m sorry.
You’re agreeing with the comment you replied to. Why the fuck are you trying to be so smug???