UEL academic calls for ethical rethink of generative AI
Published
22 April 2025
The incredible outputs of generative AI may wow users, but behind the images, music and text lies a more uncomfortable truth: many of these systems are trained on unlicensed creative work. As the European Union (EU) prepares to implement the world’s first sweeping AI legislation, Dr Mohammad Hossein Amirhosseini, Associate Professor in Computer Science and Digital Technologies at the University of East London (UEL), is calling for urgent ethical reform in how generative AI systems are built and deployed.

The EU Artificial Intelligence Act, officially published in July 2024, entered into force on 1 August 2024 and introduces a phased compliance timeline. While the full Act will apply 24 months after that date, some provisions come into force earlier:
- The ban on AI systems posing unacceptable risks took effect on 2 February 2025
- Codes of practice will apply by 2 May 2025
- Transparency rules for general-purpose AI systems, including tools like ChatGPT, will apply from 2 August 2025
These transparency obligations require developers to disclose when content is AI-generated, prevent the generation of illegal content, and publish summaries of the copyrighted data used for training.
While these steps mark significant progress, Dr Amirhosseini argues they still fall short of protecting the rights of original creators.
“The rise of generative AI is one of the most exciting technological shifts of our time,” he said. “But we cannot ignore that it is often built on the creative labour of artists, musicians, writers and photographers, without their knowledge, consent, or compensation.”
He points to a broader industry pattern in which vast datasets, scraped from publicly accessible sources, include copyrighted material, often without any licensing agreements. One recent example involved a viral trend in which users turned to OpenAI’s tools to generate dreamlike scenes in the style of Studio Ghibli. Yet the studio’s co-founder, Hayao Miyazaki, has famously criticised AI-generated art, and there is no indication that the studio granted permission for its style to be emulated.
“This isn’t an isolated incident - it highlights a deeper problem,” said Dr Amirhosseini. “We’re seeing AI systems trained on the unique work of creators who were never asked and are never paid. That’s not innovation - it’s exploitation.”
He added that this pattern is made possible by scraping vast amounts of online data, often including copyrighted material, with little transparency or accountability.
“Generative AI has enormous potential to empower creativity, but it must be developed responsibly. We need to make sure artists and creators are not simply resources to be mined, but partners in the future of this technology,” he said.
Although the new EU legislation requires transparency, Dr Amirhosseini believes that without stronger enforcement, companies could still operate in ways that sideline ethical practices.
“Transparency is a starting point, not a solution. We need enforcement mechanisms that ensure compliance - and we need regulatory frameworks that make ethical practices viable, not optional,” he said.
“The companies that try to do the right thing, by licensing content, paying creators, and disclosing their training data, often find themselves at a disadvantage. Meanwhile, those willing to cut corners receive the most investment and scale the fastest.”
The EU AI Act will apply not just to companies based in the EU, but to any company whose AI systems are used within the Union. This extraterritorial scope is a step in the right direction, according to Dr Amirhosseini - but meaningful enforcement will be challenging.
“Creating effective oversight across borders, especially in fast-moving industries, is going to require international cooperation and strong technical capacity,” he said. “Otherwise, the rules risk being more symbolic than substantial.”
Dr Amirhosseini’s comments reflect a growing movement within the tech, legal and creative industries calling for AI development that respects the intellectual and artistic rights of those whose work fuels these systems.
Dr Mohammad Hossein Amirhosseini is an Associate Professor in Computer Science and Digital Technologies at the University of East London and a Fellow of the Higher Education Academy (FHEA). He serves as the Apprenticeship Lead within the School of Architecture, Computing and Engineering, and leads both the BSc and MSc Digital and Technology Solutions Apprenticeship programmes. Dr Amirhosseini has contributed to several national and international research projects and has published widely in respected journals and at international conferences.