
AI is becoming ever present in our working lives, and the opportunities it presents are widely discussed. However, employers must proceed with care.
Artificial intelligence (AI) is a catch-all term that covers everything from automated processes to generative chatbots such as ChatGPT.
Automated processes present opportunities for employers to increase efficiency and reduce costs, with automated recruitment a frequently cited example.
However, while some argue that automation removes unconscious bias in such processes, there is evidence that human biases are adopted by AI systems and, consequently, discrimination may still occur.
Similar discrimination risks arise from the automation of performance processes. For example, if AI flags patterns of lateness or absence, leading to a disciplinary process, the employee may not be afforded the opportunity to explain “why” and, in turn, those with protected characteristics may be disproportionately affected.
These processes may also cause difficulty in defending unfair dismissal claims: if the employee has not been given the opportunity to explain “why” before disciplinary proceedings are instigated, the investigation stage of a fair process may be missed.
Additionally, ChatGPT and equivalent systems enable users to create content and undertake research.
These tools can be a useful resource for employees, enabling more efficient working and reducing workload and, in turn, pressure and workplace stress.
However, real care needs to be taken in relying on ChatGPT output, as it is not always accurate or reliable.
Employees’ use of ChatGPT therefore creates a potential liability for employers and, in turn, may result in conduct or capability questions for the employee.
Capability issues may also be relevant if an employee is relying on ChatGPT to mask their poor performance, lack of training, or lack of knowledge.
The use of ChatGPT therefore may make it more difficult for employers to identify capability issues at an early stage.
It may also be difficult to evidence an employee’s use of AI, making disciplinary and dismissal processes more challenging.
To mitigate the above risks, we recommend that employers put clear policies in place setting out how AI can and cannot be used in the workplace, and that these are communicated effectively to staff.
In addition to the employment considerations, there are also important data protection, confidentiality, and intellectual property risks that need to be appropriately understood and managed.
The Employment Team at Stone King are involved in wider ongoing work to educate organisations about AI. If you would like advice on such matters, please do get in touch.
Harriet Broughton
Partner
Stone King LLP

