Friday, 8th September 2023

Artificial Intelligence & Employment Law

Employers are increasingly using Artificial Intelligence (AI) technologies in the workplace. While AI offers enhanced productivity, streamlined processes, and cost savings, it is not without challenges. In addition to ethical and data privacy concerns, there is genuine trepidation that AI could lead to discrimination against employees. As such, AI raises some important legal questions when it comes to employment law.

In response, the House of Commons Library has recently published a report on Artificial Intelligence and Employment Law. In particular, the report looks at the thorny issue of algorithmic management.

What is algorithmic management?

Algorithmic management describes the use of AI and other technologies to make management decisions, for example to:

  • Automatically score tests as part of the recruitment process
  • Assist with performance management reviews
  • Allocate tasks and schedule shifts
  • Monitor the productivity of the workforce
  • Monitor health and safety in the workplace.

There is growing unease about how some employers are using AI and algorithmic management, and there have already been legal challenges.

In February 2021, Uber lost a judgment in the Netherlands over the ‘robo-firings’ of some of its drivers. The Court of Amsterdam ordered Uber to reinstate drivers who claimed they had been unfairly terminated by solely automated algorithmic decision-making, and to pay them compensation. The drivers claimed that Uber’s technology cost them their livelihoods because its software was incapable of recognising their faces.

Here in the UK, the Independent Workers’ Union of Great Britain (IWGB) and the App Drivers & Couriers Union (ADCU) have both taken legal action against Uber, alleging that its software is inherently racist because it has difficulty accurately recognising people with darker skin tones, and that drivers have been unfairly dismissed as a result.

What does the law say?

Currently, no UK laws exist to specifically govern the use of AI and other algorithmic management tools in the workplace. Instead, regulations built for other purposes attempt to cover these new technologies and restrict how they can be used. These include:

  • Common Law. The implied duty of mutual trust and confidence means employers must be able to explain how they make decisions that affect their employees. Solely automated decision-making can make this problematic, undermining the employment contract.
  • Equalities Law. The Equality Act 2010 prohibits employers from discriminating against their employees on the grounds of any protected characteristic. However, AI tools can exhibit bias, resulting in unlawful workplace decisions.
  • Employment Law. The Employment Rights Act 1996 protects employees with at least two years of continuous service from unfair dismissal. However, flaws in AI could lead to unfair dismissal decisions.
  • Privacy Law. This places restrictions on the use of surveillance tools to monitor workers.
  • Data Protection Law. Article 22 of the UK GDPR concerns “automated individual decision-making, including profiling”. Under this legislation, people “have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her”.

The Commons Library report goes into these laws in more detail.

What does the future of AI regulation look like?

In the ministerial foreword to the Government’s 2022 policy paper on AI regulation, Nadine Dorries, the then Secretary of State for Digital, Culture, Media and Sport, said that a ‘pro-innovation’ regulatory approach was key to translating AI’s potential into societal benefits. In short, the Government wants a non-statutory, ‘light touch’ approach to AI regulation.

The Opposition has criticised the Government’s stance and called for a more interventionist approach. The TUC, meanwhile, has recently launched a new AI task force that aims to publish a draft “AI and Employment Bill” with new legal protections for workers and employers. According to the TUC, the UK is “way behind the curve” on the regulation of AI, with UK employment law failing to keep pace with technological development.

Safeguard your business from the risk of AI

When it comes to employment law and AI use, the matter is far from settled. But there are steps employers can take now to safeguard their businesses. In particular, where AI has the potential to make or inform decisions about individuals, employers must understand how it could impact their legal obligations. In addition, we advise all employers to:

  • Conduct an impact assessment to identify and mitigate any risks before introducing new AI technology.
  • Review where AI is currently being used and how it affects employees/workers (and potential employees/workers).
  • Establish policies covering the use of AI in the workplace, setting out both appropriate uses of AI and situations where its use is not acceptable.
  • Make sure humans are involved in the decision-making process, with the final say in all determinations.

Underwoods’ employment and data protection solicitors will work with you to create an AI policy that protects your business from harmful claims. For straightforward advice and practical solutions, contact us today to find out more.

This article is for general information and guidance only and does not constitute legal advice. It should not replace legal advice tailored to your specific circumstances.
