Written by Nic Hart, Alice Head and Sophia Radford of Duane Morris LLP
If 2023 was the year that generative Artificial Intelligence (AI) entered mainstream consciousness, 2024 will be the year generative AI becomes part of the mainstream establishment, following an explosion of growth in users, both commercial and personal.
Full disclosure from the outset: this article is not a product of generative AI and does not discuss the technology and advancements of AI models. Rather, this article seeks to highlight some of the workplace issues that may be facing organisations as generative AI becomes an integral part of our working lives.
The discussions of the benefits and pitfalls of generative AI and models such as Google Bard, Microsoft Copilot, Perplexity, ChatGPT, and DALL-E have been widespread, and show no sign of abating. The number of organisations expanding or implementing the use of predictive AI and generative AI models is ever increasing, as is the number of employees becoming aware of the benefits of using AI models in their own daily tasks.
Whilst AI is by no means a new concept, the integration of generative AI models into organisations has been exponentially rapid. A survey undertaken by KPMG in March and June 2023 found that 20% of businesses were already using generative AI and 67% of executives confirmed that budget was allocated towards generative AI technology.
It is widely acknowledged that large language model (LLM) generative AI holds enormous potential across industries. Whilst organisations are considering how best to implement generative AI in the workplace, employers should be aware that, given the large number of AI models that are both freely available and easily accessible, many employees will already be using models such as ChatGPT or Microsoft Copilot without their employer’s knowledge. Employees are recognising that the speed with which generative AI can draft letters, undertake research, summarise documents and create presentations (to list a very small number of tasks) means increased productivity, which benefits them professionally and personally. As such, many employees are using generative AI models in the workplace without understanding the fallibility of AI or recognising how and where bias may be built into these models. Employers should be mindful that, even if they are not yet ready to embrace generative AI within their own organisations, it is almost a certainty that many of their employees will have done so already.
Hallucinations
The risks of generative AI have been widely discussed and inaccuracy, hallucinations and bias are the predominant reasons cited for caution.
There have been a number of widely publicised cases both here and in the US highlighting the risk of AI ‘hallucinations’ (where an AI model creates inaccurate or false information). The very plausible and authoritative presentation of information created by generative AI means that both users of generative AI and the recipients of the information produced often do not have any immediate reason to check the validity or accuracy of the information provided to them.
All organisations and their representatives using generative AI should ensure that human checks are built into its use to verify the accuracy of any information produced.
Bias and Discrimination
A fundamental point to remember when using AI models is that AI is only as good as the data it is trained on, and that training data can embed inherent bias within the models.
The risk of assuming AI is infallible and relying on it without human oversight has been acknowledged by Dr Gideon Christian, assistant professor at the Faculty of Law, University of Calgary, who researches the race, gender and privacy impacts of AI facial recognition technology. Dr Christian stated: “There is this false notion that technology, unlike humans, is not biased. That’s not accurate. Technology has been shown to have the capacity to replicate human bias”. This replication of human bias will affect content generation and decision making, such that the use of AI algorithms may create a risk of discrimination. Dr Christian has stressed that
“Diversity in design matters. The current industry’s success caters to a homogenous image… (and) it needs diverse perspectives and representative data to ensure fair technology.”
Where organisations are seeking to use any form of AI in automated decision-making, human intervention is key at some point in the process to check against any inherent bias in the systems used. Further, it is imperative that if an individual claims any form of AI being used by an organisation is incorrect or demonstrating bias, the organisation takes immediate action to review the claim.
Data Protection and Intellectual Property (IP)
The use of generative AI within the workplace has also raised concerns about how these models may be used in compliance with data protection legislation both within and outside the UK.
To assist in compliance with data protection legislation, employers should:
- advise employees about the data that they are permitted to input into generative AI tools;
- provide training and awareness of acceptable use;
- explain and justify the use of AI models when processing personal data;
- clearly set out information regarding use of AI models in their privacy notices;
- ensure that data protection impact assessments are completed prior to using any new AI model;
- put appropriate security measures and controls in place regarding acceptable use of personal data and AI; and
- where personal data is used in AI tools, consider the ability to locate, extract or amend this data in compliance with a data subject access request (DSAR).
With regard to IP, there are risks for organisations both in developing generative AI models and in using AI models in the workplace, concerning both the ownership of output produced by AI models and whether any outputs infringe protected works. There are also questions to be addressed regarding the IP rights that can be assigned to content produced by generative AI, given such work is not entirely created by humans.
The rapidly emerging questions regarding AI technologies and IP are beyond the scope of this article, but they are a key matter for any organisation using AI models given the importance of retaining control of intellectual property rights.
Transparency
Much like the regulations regarding data protection, a key theme in the regulation of AI is transparency. It is vital that organisations are transparent about their use of AI and require the same transparency from their workforce. Of particular importance will be transparency regarding employees’ use of generative AI models such as ChatGPT. Given AI technology will only become more prevalent and impactful across all industries, a prohibition on the use of generative AI would be unhelpful in encouraging a culture of openness and awareness amongst workers.
In a study undertaken by Salesforce in 2023, involving over 14,000 workers across 14 countries, more than half of those interviewed were using generative AI in the workplace without their employer’s approval. This raises concerns that employees are generating work using AI models without guidance on the ethical and legal issues, and without training in effective use or in alignment with the style and tone of the organisation.
It is essential that where employers are considering implementing AI in the workplace, they also involve their workforce in this process. If employees are open about their use of generative AI this will enable organisations to check for hallucinations and inaccuracy, maintain data protection compliance and avoid bias and IP infringement, all of which will protect a business in terms of both financial and reputational cost.
Governance
There are currently no explicit UK laws governing the use of AI in the workplace, nor any centralised UK regulator at present; however, the UK Government has set out a cross-sector framework for regulating AI underpinned by five principles to “drive safe, responsible AI innovation”. These are:
- Safety, security and robustness
- Appropriate transparency and explainability
- Fairness
- Accountability and governance
- Contestability and redress
Organisations should be aware that greater regulatory activity and legislation within the UK is inevitable, given the general consensus across industries that this will be required to support both the control and the safe use of AI.
Embrace or evade?
It is right that caution should be exercised given the infancy of many AI models, the risk of hallucinations (a risk that is likely always to be present), the concerns regarding bias and a simple lack of user knowledge.
For all employers, whether they have embraced generative AI or are approaching it with caution, the key is to be transparent about the use of AI both internally and externally and to provide a clear statement to employees regarding the use of generative AI models. All employers, regardless of size or industry, should have a policy in place to address their stance on the use of generative AI.
Where organisations are using AI models, they need to consider the risks in the workplace, such as inaccuracies, algorithmic bias, the impact on personal data and data protection compliance, and the potential infringement of intellectual property rights.
A clear AI policy will be vital in ensuring all workers are aware of acceptable use of AI, the inputting of personal data or sensitive or confidential information, and the potential for unintended copyright or IP infringement.
The policy should also be supplemented with training for workers in the effective and appropriate use of AI and in how to identify the risks associated with its use, to enable them to provide human intervention where required and to review AI-generated material for accuracy.
The rapid and ongoing evolution of AI will mean organisations will need to consider which AI tools to use and be prepared to change these tools as new technologies emerge. This in turn will mean policies and training will also need to be revisited regularly and amended as required.
In our view, evasion is simply not an option. Organisations should not consider AI models as a standalone option but as systems in partnership with their workforce. The human element of review and balance to mitigate risk must not be underestimated and must remain ever present, even as AI services become more independent. Whilst there is understandable caution, and some concern, regarding the use of AI in the workplace, organisations would be advised to think about how generative AI can support and enhance their businesses with processes that are human led but technology enabled.
Duane Morris would welcome an opportunity to assist organisations in embracing AI in their workplaces and can offer practical guidance and AI policies and training to support on this partnership with technology.

About the Authors
Nic Hart is Managing Partner of the London office of Duane Morris LLP, a law firm offering innovative solutions to today’s legal and business challenges across the US, the UK and internationally. Nic specialises in providing business-focused employment advice and litigation for a range of clients.
Nic is supported by Alice Head and Sophia Radford in the employment department.
Alice Head and Sophia Radford advise clients on a broad range of employment law issues. This includes practical support to employers at each stage of the employment cycle, providing employers with up-to-date guidance and policies and offering practical support to assist employers’ compliance with employment legislation.
Alice and Sophia also offer assistance in the implementation of GDPR-compliant policies, responding to data subject access requests and other litigious employee data protection compliance issues.
Nic and his team are always happy to provide customised training to clients on various employment law topics and on how to ensure best practice in the workplace.
