[HN Gopher] If ChatGPT produces AI-generated code for your app, ...
___________________________________________________________________
If ChatGPT produces AI-generated code for your app, who does it
belong to?
Author : bookofjoe
Score : 21 points
Date : 2024-12-24 20:25 UTC (2 hours ago)
(HTM) web link (www.zdnet.com)
(TXT) w3m dump (www.zdnet.com)
| tommek4077 wrote:
| Plot twist: Nobody who is in charge should care.
|
| Leave the no to the naysayers.
|
| Ship your app, generate traffic, usage, income. Leave the
| discussions to other people.
| SavageBeast wrote:
| Commenting on this to mark it in my feed for later reference.
| Well said!
| suryajena wrote:
| Unless you end up in a lawsuit that asks for a hypothetical 50%
| of your income for using tech very similar to theirs, on the
| theory that it was stolen and not permitted by their license.
| Even if you know you are going to win, or that it doesn't
| really affect you, you still have to spend money on the
| lawyers fighting it.
| david-gpu wrote:
| Do that at $BigCorp and Legal will eat you alive, if you aren't
| fired outright.
|
| Long ago I went through the company-approved process to link to
| SQLite and they had such a long list of caveats and concerns
| that we just gave up. It gave me a new understanding of how
| much legal risk a company takes when they use a third-party
| library, even if it's popular and the license is not copyleft.
| SAI_Peregrinus wrote:
| Nothing particularly new, since none of the cases around this
| have concluded.
|
| If I were to guess, I'd say the output of an LLM isn't
| copyrightable (it's not the creation of a human), unless it's a
| verbatim copy of some copyrighted training data, in which case
| it belongs to the authors of the work(s) used in training. This
| creates the most annoying combination of legal problems around
| using it, so by Murphy's Law it must be correct!
| christkv wrote:
| If I pay a consultant to write code, it generally belongs to
| me. Why would a tool like an LLM be any different if you are a
| paying customer? If you are on the free model, shrug...
| xandrius wrote:
| Incorrect: it clearly belongs to you only if there is an
| agreement transferring those rights; otherwise you are in murky
| waters.
|
| LLMs are not individuals automatically covered by copyright
| law; they are simply tools built on other, often copyrighted,
| work. This means that the initial copyright infringement is
| still a valid concern, hence these discussions.
|
| If it were as easy and clear-cut as just shrugging, the
| conversation wouldn't be so prevalent.
| icedchai wrote:
| Unless you have a work-for-hire agreement, it belongs to them.
| I once explained this to a client (who still owed me money)
| and he got pretty angry.
| wodenokoto wrote:
| Isn't this article heavily overcomplicating matters? OpenAI
| grants you ownership of the output. Why do we even need to
| discuss authorless rights?
| PittleyDunkin wrote:
| Ownership of software never made any sense to begin with. We
| should abandon such a concept as belonging to the dark ages.
| Etheryte wrote:
| On one hand I agree with you: if I can build the same software
| you can, I should be able to sell it all the same. On the
| other, if there's no copyright or anything similar, what stops
| someone from stealing your source code and shipping an
| identical program with fresh branding?
| PittleyDunkin wrote:
| > I should be able to sell it all the same
|
| No, you should be compensated for your labor. This does not
| entail a market product.
| Etheryte wrote:
| So only the first company to make a search engine should be
| able to sell a search engine? I don't see how this stance
| makes any sense.
| thot_experiment wrote:
| I assume op means you should be compensated through a
| method other than pretending software is scarce and
| trying to assign value to it through a system that relies
| on equating scarcity with value.
| PittleyDunkin wrote:
| No, selling a search engine never made any sense to begin
| with. Services should be publicly funded and freely
| accessible. As a bonus we wouldn't have to put up with
| spam on every site on the internet.
| energy123 wrote:
| How will your system compensate people who write useful
| code if that code isn't allowed to be an excludable market
| product?
| itake wrote:
| Does ownership of books make sense? What is the difference
| between code and books? Code is translated to machine code,
| just like books can be translated to other languages.
| 1659447091 wrote:
| You own the medium the content is contained in (for books,
| the paper), not the content itself. You do not get to copy
| the book's content, place it in another container, and sell
| the new container as though the content within it was yours
| to distribute in the first place.
| kcb wrote:
| I own my hard drive too, not sure how that's any different
| than paper.
|
| And we can't ignore that ebooks exist.
| tibbon wrote:
| My understanding from a talk by an attorney at HOPE 2024 was that
| AI-generated materials cannot be defended/owned under copyright.
| outofpaper wrote:
| Yes, the Copyright Office has already published guidance on
| this issue, but journalists continue to skip over it as a
| primary source.
| Animats wrote:
| This varies widely by country. The US does not have "database
| copyright" or "sweat of the brow" copyright. See _Feist vs. Rural
| Telephone_, which was about telephone directories. This
| restriction comes directly from the U.S. Constitution and would
| require a constitutional amendment to change.[1]
|
| The UK and EU are different. The EU allows copyrights on
| databases.
|
| [1]
| https://constitution.congress.gov/browse/essay/artI-S8-C8-3-...
| badsectoracula wrote:
| I already mentioned this in another thread (which didn't get
| much discussion), but the recent EU AI Act addresses the source
| material used for training by essentially saying that you can
| train on copyrighted data _unless_ the author opts out. The
| text from the AI Act is:
|
| > General-purpose AI models, in particular large generative AI
| models, capable of generating text, images, and other content,
| present unique innovation opportunities but also challenges to
| artists, authors, and other creators and the way their creative
| content is created, distributed, used and consumed. The
| development and training of such models require access to vast
| amounts of text, images, videos, and other data. Text and data
| mining techniques may be used extensively in this context for the
| retrieval and analysis of such content, which may be protected by
| copyright and related rights.
|
| > Any use of copyright protected content requires the
| authorisation of the rightsholder concerned unless relevant
| copyright exceptions and limitations apply.
|
| > Directive (EU) 2019/790 introduced exceptions and limitations
| allowing reproductions and extractions of works or other subject
| matter, for the purpose of text and data mining, under certain
| conditions. Under these rules, rightsholders may choose to
| reserve their rights over their works or other subject matter to
| prevent text and data mining, unless this is done for the
| purposes of scientific research. Where the rights to opt out has
| been expressly reserved in an appropriate manner, providers of
| general-purpose AI models need to obtain an authorisation from
| rightsholders if they want to carry out text and data mining over
| such works.
|
| (note that the "appropriate manner" is meant to be some
| machine-readable way; AFAIK how this will happen is still being
| worked out - the Act's obligations won't fully apply until 2026
| anyway)
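|
| (For illustration only - this is not something the Act itself
| specifies: one machine-readable opt-out that already exists
| today is a robots.txt rule targeting a specific crawler, e.g.
| OpenAI's documented GPTBot:
|
|       User-agent: GPTBot
|       Disallow: /
|
| Whether the "appropriate manner" under the Act ends up looking
| like this or like something more formal is, AFAIK, still open.)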
|
| Under EU copyright law the machine-generated output (like code,
| etc.) cannot be copyrighted. Essentially this means that:
|
| 1. ChatGPT et al. can train on copyrighted code, text, etc.
| unless the authors opt out in some (machine-readable) way.
|
| 2. ChatGPT et al. can then reproduce a bunch of code from
| whatever it was trained on; that code _by itself_ is not
| copyrightable (but it can be modified and become part of a
| copyrighted work - think of it as combining public domain code
| with some other project).
|
| AFAIK the only muddy aspect is what happens when ChatGPT (or
| really any generative AI algorithm) reproduces already
| copyrighted works without the knowledge of the user. Again,
| AFAIK this is something that is currently being worked out.
|
| There was an AMA on Reddit recently[0] by someone who worked on
| the act and answered a bunch of questions. IMO it is a great AMA
| on the topic (at least if you ignore the trolls that ask "why do
| you want to destroy EU", etc).
|
| Also (unrelated to the above AMA) I think both the UK and the
| US are likely heading in a similar direction.
|
| [0]
| https://www.reddit.com/r/ArtificialInteligence/comments/1fqm...
| jjice wrote:
| This felt like a much bigger topic (LLM copyright in general)
| when ChatGPT first dropped, and now it seems no one cares. Are
| there ongoing cases on this, or did something get settled that
| set a copyright-free precedent that I missed?
| Wowfunhappy wrote:
| > In 2021, the Canadian agency ISED (Innovation, Science and
| Economic Development Canada) recommended three approaches to the
| question:
|
| > 1. Ownership belongs to the person who arranged for the work to
| be created.
|
| > 2. Ownership and copyright are only applicable to works
| produced by humans, and thus, the resultant code would not be
| eligible for copyright protection.
|
| > 3. A new "authorless" set of rights should be created for AI-
| generated works.
|
| It seems obvious to me that the answer should be #1. An artist
| who creates pieces out of random paint splatters (modern art!)
| didn't purposefully choose the location of their paint marks.
| However, they still own the copyright because they arranged for
| the creation of the work. You would never argue that a piece like
| this is uncopyrightable.
___________________________________________________________________
(page generated 2024-12-24 23:01 UTC)