The evolution of intellectual property (IP) law has historically mirrored technological progress. From the industrial revolution’s emphasis on mechanical invention to the digital age’s adaptation to intangible creations, IP frameworks have developed to protect human creativity. Today, the Fourth Industrial Revolution, characterised by data-driven automation and artificial intelligence (AI), poses unprecedented challenges to the foundations of copyright law, particularly in the context of generative AI.
While previous industrial revolutions centred on replacing human or animal physical effort with machines, and more recently with computers, we are now experiencing the Fourth Industrial Revolution, often referred to as Industry 4.0, which is characterised by humanity’s ability to collect, share and process huge amounts of data. These developments make it possible to automate previously unthinkable tasks on an unprecedented scale and with unprecedented reach. As machine-generated content becomes ever better at mimicking human effort and avoiding the uncanny valley1, we naturally face new questions about intellectual property protection, or perhaps more accurately, we are re-posing old questions in dramatically new contexts.
The terminology we use to refer to and define artificial intelligence is itself misleading. We use “AI” broadly and colloquially, but in doing so we encounter a fundamental issue of anthropomorphism: the attribution of intrinsically human qualities to non-humans, derived from the Greek ánthrōpos (“human”) and morphē (“form”).
The problem is significant: these anthropomorphised qualities can quickly become part of moral and legal judgment. Deep learning is likened to human learning, with neural networks designed to mimic the brain’s structure and function. We refer to AI’s “understanding,” “learning,” and “creating”—but this linguistic convenience hides the reality that these systems are performing mathematical exercises, i.e. statistical correlation and pattern matching, at an unprecedented scale.
At a prima facie level, anthropomorphism creates a challenge for intellectual property law, which has developed around human creativity and authorship. As AI systems become more sophisticated at generating content that appears creative and original, we frequently encounter the question of whether our existing legal frameworks are adequate or whether they need to be re-evaluated.
The InfoSoc2 Directive establishes the exclusive right “for authors, of their works,” whilst in Malta, the Copyright Act3 defines an author as “the natural person or group of natural persons who created the work.” In Eva-Maria Painer v Standard VerlagsGmbH4, the CJEU held that even a portrait photograph can be protected if it “is an intellectual creation of the author reflecting his personality and expressing his free and creative choices in the production of that photograph.”5 The court emphasised that “an intellectual creation is an author’s own if it reflects the author’s personality.”
This creates a two-pronged test: a work must be (1) the author’s own intellectual creation (originality) and (2) an expression of that intellectual creation. While the determination is generally left to national courts, the CJEU has made clear on other occasions that certain categories, such as sporting events6 or the taste of food products7, cannot qualify as copyrightable works.
Why AI-Generated Works Fail the Originality Test and the Problem of Anthropomorphism
The CJEU has established that originality requires “free and creative choices” by the author. In the BSA case (Bezpečnostní softwarová asociace – Svaz softwarové ochrany v Ministerstvo kultury)8, the CJEU held that “where the expression of those components is dictated by their technical function, the criterion of originality is not met, since the different methods of implementing an idea are so limited that the idea and the expression become indissociable.”9
This principle raises profound questions about AI-generated content. When we examine how GPTs work, processing prompts and generating outputs based on statistical patterns learned from training data, we encounter the fundamental question: where are the free and creative choices of a human author? Consider a practical example in Figure 1: when prompted to:
“Generate an image of a robot in a renaissance style painting scene. The scene should be in Monet style, oil paint brushwork on canvas,”
the AI system filled numerous gaps in the prompt by drawing on its training data. It applied Impressionist techniques while maintaining Renaissance compositional elements, created period-appropriate clothing for background figures, and balanced the composition according to classical art principles.

Figure 1
The system’s ability to handle these ambiguities demonstrates sophisticated pattern matching, but it also reveals a critical distinction: the AI was not making “free and creative choices” in the legal sense, but rather selecting the most statistically likely results based on its training data.
The Problem with Anthropomorphism
The challenge deepens when we consider how anthropomorphism distorts our understanding of AI. We describe AI as “understanding” artistic styles or “choosing” compositional elements, but these are metaphors drawn from human experience that obscure the underlying computational processes.
The AI system doesn’t necessarily “understand” that Renaissance and Impressionist styles are historically distinct—it simply matches patterns in labelled datasets. When the system generates period-appropriate clothing without specific prompts, it’s not exercising creative judgment but accessing statistical correlations from thousands of analysed artworks.
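To make this distinction concrete, the “choice” a generative model makes at each step can be caricatured as nothing more than picking the statistically most frequent continuation seen in its training data. The toy sketch below is purely illustrative (real systems use learned neural weights over vast corpora, not hand-built frequency tables; the corpus and function names here are invented for the example): it shows that what looks like selection is a correlation lookup.

```python
from collections import Counter, defaultdict

# Toy "training data" standing in for a real corpus (illustrative only).
corpus = "the robot paints the canvas the robot paints the scene".split()

# Count bigram frequencies: how often each word follows each other word.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def most_likely_next(word):
    """Return the statistically most frequent continuation --
    a frequency lookup, not a 'free and creative choice'."""
    return bigrams[word].most_common(1)[0][0]

print(most_likely_next("robot"))  # the word most often following "robot"
```

In this caricature, the output is fully determined by the statistics of the training corpus; nothing in the procedure corresponds to the “free and creative choices” the CJEU’s originality test demands.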
This anthropomorphic framing creates what might be called a “false analogy” problem in legal discourse. Professor Carys J. Craig makes this argument convincingly: “The notion that AI is doing something functionally equivalent to human authorship depends upon a vision of technology distorted by misunderstanding and romantic notions of machine creativity.”10
Unlimited Combinations and the Exhaustion Problem
AI generation also challenges copyright’s core assumptions about scarcity and effort. AI generation of new creations based on a training set can be unleashed at little marginal cost and can explore any combination and variation.11 Reports have even shown that AI has already generated more photos than the entire 150-year history of photography12. This raises the question: if AI production can theoretically exhaust all combinations of expressions of an idea, how can we still grant protection without undermining copyright’s traditional balance between protection and public domain access? And where do we draw the line when humans use AI tools with some level of creative direction?
Should AI-Generated Works Be Protected?
Proponents of AI copyright protection argue from a consequentialist perspective: “As human beings recede from direct participation in the creation of many works, continued insistence on human authorship as a prerequisite to copyright threatens the protection—and, ultimately, the production—of works that are indistinguishable in merit and value from protected works created by human beings.”13
This argument suggests that qualitative similarity should trump authorship requirements: if an AI-generated work appears as creative and valuable as a human-created work, why shouldn’t it receive protection? Others suggest sui generis protection specifically for AI-generated works, arguing, for instance, that investment in AI development deserves legal incentives similar to those provided by the EU’s sui generis database right, which affords limited protection for the skill and work invested in developing databases.
In Part 2: Copyrightability of its Copyright and Artificial Intelligence series, the US Copyright Office states: “To begin with, it is not clear that new incentives are needed. The developers of AI models and systems already enjoy meaningful incentives under existing law… These incentives include patent, copyright, and trade-secret protection for the machinery and software, as well as potential funding and first-mover advantages.”
More philosophically, the US Copyright Office emphasised: “If authors cannot make a living from their craft, they are likely to produce fewer works. And in our view, society would be poorer if the sparks of human creativity become fewer or dimmer.”
European Union member states have shared the view that AI-generated content may be eligible for copyright “only if the human input in [the] creative process was significant.” This suggests a middle path—not blanket protection for AI outputs, but potential protection for works where humans provide substantial creative direction.14
Technological Neutrality and Risks to Look Out For
The principle of technological neutrality, treating functionally similar activities equally regardless of the technology involved, becomes complex in an AI context. A restrictive approach might simply extend existing copyright law to AI outputs, but this risks treating substantially different processes as equivalent. Some technologies are paradigm-shifting, and in such cases, neutral legal treatment will not necessarily produce a substantively equivalent legal effect. The challenge is determining when technological differences are so fundamental that formally equal treatment produces substantively unequal results.
As a result, using generative AI tools in content pipelines carries a diverse set of risks. Immediate questions range from ownership of the generated works (including under the generative tool’s terms and conditions) to potential third-party infringement risks and whether you would even be entitled to commercially exploit the generated content at all. As previously discussed, AI-generated content cannot be created from nothing, and ownership of the results will also vary greatly depending on the laws governing the tool itself, the training data, and whether other external sources were used through retrieval-augmented generation (RAG). In fact, depending on the terms of use, the generated works may even be set contractually to belong to no one, i.e. to fall within the public domain.
Conclusion
The relationship between AI and intellectual property is not one of simple opposition, but of fundamental tension. AI doesn’t threaten to eradicate IP, but it challenges IP’s foundational assumptions in three critical ways: it tests the probative value of IP rights, questions traditional concepts of protectability, and dramatically increases the scale and sophistication of potential infringements.
The legal system’s response must be nuanced. While purely AI-generated content fails to meet established authorship and originality requirements, the growing prevalence of AI-assisted creation requires careful case-by-case analysis. The challenge lies in distinguishing between AI as a tool (like Photoshop filters or spell-check) and AI as a creative agent.
Legal professionals must prepare for a landscape where proving independent human authorship becomes more complex, where infringement detection requires new technological tools, and where traditional concepts of originality face unprecedented challenges. We need to ensure that the content we commission, build and create is capable of withstanding these challenges. The ultimate question is not whether AI will change intellectual property law—it already has. The question is whether legal systems can adapt quickly and thoughtfully enough to preserve the essential functions of IP while accommodating transformative new technologies. The answer will shape not only legal practice but the future of human creativity itself.
1 uncanny valley, theorized relation between the human likeness of an object and a viewer’s affinity toward it. https://www.britannica.com/topic/uncanny-valley
2 Directive 2001/29/EC of the European Parliament and of the Council of 22 May 2001 on the harmonisation of certain aspects of copyright and related rights in the information society
3 Chapter 415 of the Laws of Malta
4 Case C-145/10, Judgment of the Court (Third Chamber) of 1 December 2011
5 Vide note 4, para 94
6 Football Association Premier League (C-403/08 and C-429/08)
7 Levola (C-310/17)
8 CJEU, BSA, C-393/09
9 Vide note 8, para 49
10 Carys J. Craig, “The AI–Copyright Challenge: Tech-Neutrality, Authorship, and the Public Interest”
11 Sartor, Lagioia, Contissa (2018), “The Use of Copyrighted Works by AI Systems: Art Works in the Data Mill”
13 R. C. Denicola, “Ex Machina: Copyright Protection for Computer Generated Works”, 69 Rutgers University Law Review 251 (2016)
14 Council of the European Union, Policy questionnaire on the relationship between generative Artificial Intelligence and copyright and related rights – Revised Presidency summary of the Member States contributions, at 16–18 (Dec. 20, 2024), https://data.consilium.europa.eu/doc/document/ST-16710-2024-REV-1/en/pdf.