Intellectual Property and GenAI: Fair Use, Fair Game?
Marsya Amnee Mohammad Masri¹ & Lawrence Tan²
¹ Executive – Strategy and Communications, Sunway iLabs
² Founder of LAWENCO | Advocates & Solicitors, Director of IP Gennesis
When “Ghiblification” Goes Viral
Remember when ChatGPT rolled out its GPT-4o image generator earlier this year and suddenly half the internet became ghiblified? Social media feeds were flooded with AI-generated images that looked straight out of a Studio Ghibli film, from celebrities and politicians to movie scenes, memes, and even people’s pets.
If AI can imitate creative works so convincingly that it feels almost… too good, where do we draw the line between inspiration and imitation — and who owns it?
That question is no longer theoretical. It’s now being debated in courtrooms, policy discussions, and creative industries around the world.
AI, Copyright, and the Question That Really Matters
As AI-generated content increasingly resembles familiar artistic styles, courts and policymakers have begun grappling with how existing copyright rules apply when creativity is assisted by machines.
Over the past year, lawsuits involving AI and intellectual property (IP) have been popping up with increasing frequency, reflecting growing concerns about whether frameworks designed to protect human creativity still hold up.
Cases involving Anthropic, OpenAI, Google, and others have produced mixed outcomes across jurisdictions. One case, however, offers a particularly insightful lens into how courts are currently thinking about these issues: a lawsuit brought by a group of authors against Meta, commonly known as Kadrey v. Meta Platforms, Inc.
What the Meta Case Reveals
A group of authors led by Richard Kadrey sued Meta, accusing it of using books downloaded from “shadow libraries” to train its Llama AI models. The court ultimately ruled in Meta’s favour, deciding that training AI models qualifies as fair use under U.S. copyright law (Taft, 2025).
The reasoning centred on the idea that training AI models is transformative: the models do not copy or redistribute creative works as they are, but learn statistical patterns from vast amounts of data and generate new content based on probabilities. Students of copyright law have described this process as “highly creative in its own right” (Kadrey v. Meta Platforms, Inc., 2025).
A similar line of reasoning has appeared in other recent cases, including Bartz v. Anthropic, where the judge likewise recognised that training AI models can be highly transformative. In that case, however, Anthropic still faced consequences for how some of its training data was obtained, highlighting that while courts may accept the purpose of AI training, the methods used to acquire data still matter (Bartz v. Anthropic PBC, 2025).
Looking at the Broader Landscape
Interestingly, not all regions are relying on courtroom battles to resolve these questions. In parts of Asia, countries like Japan and Singapore have proactively amended their copyright laws to explicitly allow the use of copyrighted works for computational data analysis and data training. These changes were introduced to provide clearer legal ground and avoid the prolonged, case-by-case litigation seen in the United States.
While Japan and Singapore have amended copyright laws to allow the use of protected works for AI training, these reforms include clear boundaries, such as requiring lawful access to data and avoiding harm to copyright owners.
Copyright frameworks around the world aim to preserve human creativity. At its core, IP law is designed to protect creations of the human mind — someone capable of intent, judgement and responsibility. Not every creation however, is automatically protected by the law.
Even though AI models are capable of generating text, images, music, and video with minimal human input, the law has not evolved to recognise machines as copyright owners. “Under current Malaysian law, for example, AI has no legal personality — meaning it cannot own copyright on its own,” said Lawrence Tan, Founder of LAWENCO and Director of IP Gennesis.
In a recent case, computer scientist Dr. Stephen Thaler attempted to register copyright for an artwork generated entirely by his AI platform, The Creativity Machine, without any human creative input. The court rejected the application, ruling that works generated solely by AI are not eligible for copyright protection because the law does not recognise machines as authors (Thaler v. Perlmutter, 2025).
What this broader view reveals isn’t a clear rulebook but an ongoing adjustment, in which traditional IP principles still apply yet are being re-examined as AI changes how creative work is produced, used, and valued.
Where This Leaves Us
Taken together, responses to generative AI vary case by case, but across these approaches, the authority of existing IP frameworks remains intact.
Even as AI training and AI-generated content become increasingly common, the fundamental rule has not changed: if something closely imitates a protected work, it can still cross the line into infringement, no matter how it was created. This is where the grey area lives.
Moments like the wave of Ghibli-inspired AI images illustrate this complexity. Not because they are automatically problematic, but because they sit in a grey area that forces us to ask where inspiration ends and imitation begins — a question that still depends heavily on context rather than clear-cut rules.
At the same time, responsibility doesn’t rest with courts alone. As generative AI becomes part of everyday creative workflows, some industry players are choosing to act proactively. Recent partnerships, such as OpenAI’s licensing agreement with Disney permitting the use of Disney’s iconic characters in its Sora model, point to one way stakeholders are exploring consent and collaboration alongside evolving legal interpretations.
Ultimately, this moment isn’t about banning creativity or giving AI free rein. It’s about judgment. AI may make creation faster and easier, but deciding what is original, acceptable, or too close for comfort still rests with the people.
Coming Up Next
We’ll explore what that means in practice and how to navigate IP risks responsibly when using generative AI in our next article.
Disclaimer: This article is for general information only and does not constitute legal advice.
Acknowledgements
Thank you to the Sunway iLabs team for their invaluable contribution and insights in preparing this article.