Written by Global HR Leader, Derren Young
Over the past few years, since being directly affected by the application of AI in business (the creative industries more specifically), I've been studying, researching and trying to keep up. This short thought piece shares some insights to help HR & People Management colleagues, focusing on management and governance.
So, what is AI?
Let's start with my biggest frustration regarding AI – few people actually know what we are talking about. We need to take a moment to clarify what we each mean when the term 'AI' is used.
We've been living with and adopting AI, automation, machine learning and data science for decades. The AI in the media is mostly Large Language Models (LLMs) and the generative-AI tools and readily available apps built on them. These models digest huge amounts of data – words, images or music – to predict the most likely response to a query we pose.
Whether you ask ChatGPT to write a short piece on how we govern AI, to generate a silly picture for Facebook or even to make a song that sounds like Ed Sheeran, it's all readily possible today.
These thoughts are about how we govern all automation and AI, not just the fun and visible uses and applications.
Can we wait for legislation or has the AI governance train already left the station?
AI content slop is currently the most widespread use of generative-AI apps. The demands of infinite scrolling require content, with less and less of it created by humans. Think of the repetition on social media, the obviously fake images. In a world where automated bots talk to other bots, creating an online-tech-zombie existence, no one really knows what is happening today or what will happen in the future.
But the future is here, and we can't expect legislators or regulators to provide clarity soon. In fact, the Labour government's recent AI governance proposals do not suggest a tough stance. A further concern is the power that Meta, Google, Microsoft et al. currently hold inside the Trump administration, influencing policy, controls and the global agenda. Perhaps we are on our own: we must take care, proactively understand AI and guard against assuming it is benevolent technology.
So, what can we do? Build on and go beyond today's data and information protections and policies by taking a position and developing an operational plan and roadmap, considering:
- Human Oversight: Ensure critical decisions have a human-in-the-loop (HITL).
- Transparency: Ensure AI governance & systems are explainable, and decisions are traceable.
- Data Privacy: Handle personal data in accordance with regulations (e.g., GDPR, CCPA).
- Bias & Fairness: Audit AI systems for discriminatory outcomes.
- Accountability: Assign clear responsibility for AI system decisions.
- Security: Protect AI systems and training data from cyber threats.
AI Governance & Policy Recommendations for Businesses
As businesses we need to be responsible and take action, such as:
- Developing an AI Governance Framework
- Establishing an AI Ethics Board or Steering Committee
- Creating policies for:
- Ethical development and deployment
- Data governance
- Monitoring and audit procedures
- Introducing clear communications and training
And lastly, be ready for upcoming legislation…
Key Legislation and Regulatory Frameworks
EU: Artificial Intelligence Act (AI Act) – 2024
- Categorises AI systems by risk: Unacceptable, High, Limited, Minimal.
- High-risk systems require:
- Risk assessments
- Conformity assessments
- Record-keeping and transparency
- Human oversight
- Strict rules for biometric ID, critical infrastructure, and recruitment tools.
USA: Executive Orders & Emerging Frameworks
- Executive Order on AI (2023): Focus on safety, privacy, equity.
- NIST AI Risk Management Framework (AI RMF):
- Voluntary but widely adopted.
- Core pillars: Govern, Map, Measure, Manage.
UK: Pro-Innovation Approach (White Paper 2023)
- No AI-specific law yet; sector-based regulation (e.g., ICO, FCA).
- Emphasis on principles: safety, transparency, fairness, accountability.
Food for thought: Will AI make culture eat itself?
Recently, at an HR Directors networking event with an esteemed panel of experts and attendees, I asked the question: "Where is the legislation and regulation to protect creators' copyright?" (I was working in music at the time). The disappointing but empathetic reply was that the genie is out of the bottle – LLMs have already consumed everything, and your best hope now is litigation. I think they are right, given that 20% of all new music is AI-created or AI-enhanced.
A further comment that disturbed me is that AI will make culture eat itself – meaning that without economic benefit for creators, we are left to ask where innovation will come from.
Let me leave you with that thought and ask what is your experience?

About the Author: Derren Young, Global HR Leader & AI Student
A researcher and student on AI, Derren Young, is a highly accomplished Human Resources and People function leader with three decades of experience in media and creative industries. With a robust background working with renowned media & entertainment brands such as Electronic Arts (EA), Warner Music Group, and England Rugby, Derren has honed his expertise in people management on a global scale. Derren has held esteemed positions including Global Vice President of Human Resources at Visual Data and Global Head of L&D at Universal Music Group. A lifelong learner, Derren has completed a Certificate in Data Science, Machine Learning, and AI at the Massachusetts Institute of Technology, further solidifying his commitment to innovation and excellence in HR leadership.