Algorithms get taken to court by disgruntled workers
With Artificial Intelligence now used to determine whether you get hired or fired, is legislation needed to protect employees against the rise of the machines? writes Darren Gardner.
It is now possible to be hired and fired by a computer.
Global companies such as Hilton Hotels and Johnson & Johnson are using Artificial Intelligence (AI) to rapidly scan applications or videoed interviews of candidates to determine who they will hire.
A growing number of Silicon Valley start-ups also proclaim their algorithms to be free of bias, able to identify the right skills, fit and motivation for every role.
Companies are even beginning to use AI to determine who should receive a pay rise.
And in the United States a contractor had his role terminated by a computer.
Ibrahim Diallo started work as usual by swiping his security pass to the Los Angeles skyscraper of his employer. His swipe failed, and so a security guard let him in. As soon as he got to his floor, Mr Diallo let his supervisor know. She promised to order a new security pass straight away. Then, when Mr Diallo went to log into his computer, he was not recognised.
Eventually two people appeared at his desk, at the direction of an automated email, to escort him from the building.
The problem was, none of his human bosses knew why he had been fired.
After three weeks without pay it was discovered a manager, who had since left the firm, had neglected to put Mr Diallo’s employment contract renewal details into the new upgraded computer system. This set off a chain reaction of automated processes that determined Mr Diallo needed to be marched out the door.
Not surprisingly, Mr Diallo decided to look for another employer.
In Australia, employment contracts are typically between a human (natural person) and non-human (body corporate or some other organisation), so decisions of most employers are made vicariously through humans – but nothing in our law says they have to be. How well can our legal system deal with the Terminator scenario of machines saying “Consider yourself terminated” in employment?
To date, though, tribunals and courts here have taken a dim view of using technology to deal with employee issues. Sacking by text message was recently described by one Fair Work Commissioner as “brutal, gutless and outrageous”; and by another as not in “good conscience” because it denied a “face to face opportunity to be heard”.
These cases, though, have involved a strong element of human judgment, or the lack of it. Unfair dismissal or discrimination cases take on a whole new dimension when there is no obvious human judgment at all.
A court may be able to question an employer or manager on the process or rationale for an employee-related decision and then determine whether their actions were unlawful or discriminatory – it’s far harder for an algorithm to take the stand.
A growing number of leading US academics connected with major tech companies have also begun to counter the view that AI is somehow neutral and impartial in its hiring decisions.
They have raised concerns about minimal oversight in the coding of recruitment algorithms, which many companies treat as closely guarded intellectual property. Exactly what assumptions are being made, and how the bias of a coder or the manager they report to may influence who is selected, is seldom clear.
Added to this, AI is itself prone to learning bias.
Microsoft’s AI bot, Tay, had to be swiftly taken offline after only 24 hours living in an online world, having learnt to make racially and culturally inappropriate tweets from its interactions with other Twitter users – both human and robotic.
A number of commentators say such fears are overblown and that AI is in fact a far fairer way to recruit and retain a more diverse talent pool.
For many of us, though, that great opportunity or break in our career came because someone decided to give us a chance, not because we matched an algorithm.
What is certain is that our current workplace legislation is from the age of the ‘Pentium PC’ and hopelessly out of touch with the speed of change in this space.
Let’s hope it catches up before an automated court is hearing evidence of an AI manager as to why performance of a human worker was simply not up to machine standards.
Author: Darren Gardner