
A robot didn't take Ibrahim's job, but it did fire him

This article was originally published by Monique Ross and Damien Carrick for the Law Report on 14 August 2018.

Software developer Ibrahim Diallo found out the hard way that he'd been fired.

He turned up to work one day and his login failed. His account returned an 'inactive' message. The recruiter who'd landed him the job called to ask if he was OK.

She'd received an email saying his position had been "terminated" — but Mr Diallo's manager didn't know anything about it.

"I was fired. I could not identify who fired me," he says.

He didn't know yet that he'd been sacked by a machine.

He went to his director, who told him not to worry — he still had a job, and she'd sort out the system error.

"At the end of the day I went to ask her, 'should I come back the next day, is everything OK?' She is like, 'yes, don't worry about it, come back'," Mr Diallo says.

But the following day, his access issues only increased. He was progressively locked out of more and more computer systems.

A few days later two security guards came to his desk to escort him out of the building.

"Right away we got the director, we got my manager, everybody was there," Mr Diallo says.

"I guess the security guard showed them the emails that said I need to get out of the building, and they agreed. So I packed my stuff and left."

Mr Diallo, who's based in Los Angeles, was eight months into a three-year contract.

His managers started doing some digging into who was sending the emails — and discovered they'd come from an automated system.

"This company was going through a transition, it had just been acquired. My previous manager had been laid off. He was supposed to transfer my name into the new system," Mr Diallo explains.

"I guess he only partially did the job and never renewed the end of my contract.

"When that [end] date arrived the system didn't ask for anybody's opinion or confirmation, it just started the process: this employee is terminated now.

"It sent hundreds of emails to different people to disable different parts of my access to the company."

'I could tell my co-workers were suspicious'

It was three weeks before Mr Diallo was able to return to work, and he wasn't paid for that time.

"They let the system do its job to fire me thoroughly, and then hired me as a new employee," Mr Diallo says.

"I had to get a new key card mailed to me, I had to do my direct deposit, I had to do a bunch of other bureaucratic stuff to get back to normal."

But things didn't get back to normal for Mr Diallo, who felt as though other employees thought he'd done something wrong.

"I became the guy that was fired for an obscure reason. I could tell my co-workers were suspicious. They didn't believe the reason why I was fired," he says.

He ended up leaving, and later wrote a blog post about what happened. It went viral — and he says heaps of people got in touch to say they'd had a similar experience.

Mr Diallo says all automated systems should also come with "a big giant red button that says 'stop'".

"Everybody was telling me the same thing — the only thing that could have saved you was if there was a way to stop the process, which in my case there wasn't, we just had to let the system do its thing," he says.

"We have to remember that machines can make mistakes."

Could it happen to you?

Darren Gardner, an employment law expert with Sydney firm Bartier Perry, says he's not aware of an identical situation in Australia.

"But it's certainly something that could happen," he adds.

PHOTO: Darren Gardner says there's no law that says humans have to make decisions for a company (ABC RN: Sophie Kesteven)

He says if Mr Diallo's situation had unfolded in Australia, he would have had grounds for an unfair dismissal case.

"Decisions that are made by employers are usually made by humans — corporations don't have a mind of their own and they need to be directed and controlled by some human being," he says.

"But nothing in the law says that a human being has to make decisions for a company.

"What's interesting about cases such as Mr Diallo's is companies are now using technology to make decisions.

"There's no doubt at all that if a company relied on an automated process to fire someone, the company itself is making the decision."

Hiring by machine

So, what about the flip side of the coin — hiring? Could getting a job also come down to a machine?

Mr Gardner says it's now common for businesses to use artificial intelligence software to make decisions about job applications.

"One of the prime purposes of those sorts of systems is to rapidly scan applications or even videoed interviews," he says.

"That software takes the role of what traditionally was a human role of interviewing candidates."

Text-based algorithms analyse keywords and the use of language, while video systems look at facial responses to specific questions.
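A text screen of the sort Mr Gardner describes can be remarkably simple. The sketch below is a made-up illustration, not any vendor's product: it ranks applications purely on a hand-picked keyword list.

    # Invented keyword weights; commercial systems are more elaborate,
    # but the ranking principle is the same.
    KEYWORDS = {"python": 3, "sql": 2, "leadership": 1}

    def score(text):
        words = text.lower().split()
        return sum(w for kw, w in KEYWORDS.items() if kw in words)

    applications = {
        "candidate A": "Senior developer with Python and SQL experience",
        "candidate B": "Retail manager with leadership experience",
    }

    # Highest keyword score rises to the top of the pile.
    for name in sorted(applications, key=lambda n: score(applications[n]), reverse=True):
        print(name, score(applications[name]))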

One argument in favour of the systems is that they could help eliminate bias in the hiring process.

"A simple example could be a machine system is less likely to choose a candidate because of their gender or because of their physical appearance," Mr Gardner says.

Bias and discrimination

But Mr Diallo isn't convinced.

"Algorithms usually only learn from experience, based on the data that you feed it. So employees that it will most likely hire will be those that already are in the company," he says.

"If, let's say, a particular company has all white employees and that's what works, the moment you bring someone of African descent it will just dismiss it because it doesn't follow the particular norms."

He says it could also base decisions on surnames, addresses or where a person went to school.

"Surnames will play a major role in there, and usually you can identify a person's ethnicity based on their name," he says.

Mr Gardner knows a woman of Asian background who put the name theory to the test while submitting her resume online.

Using her real name, the woman was getting about a 40 per cent response rate. That shot up to 65 per cent when she used an Anglicised name.

Mr Gardner says there is a "real danger" of bias becoming ingrained in a machine system.

"It's simply relying on past assessments of success based on historical data about the success rates of people from certain schools, certain socio-economic backgrounds, particular cultural groups," he says.

"[That] may not be directly input into the algorithms, but indirectly assumptions are being made simply because the majority of the data being looked at tends to come from particular backgrounds, socio-economic groups or schools.

"Some might say the data doesn't lie, and others might say the data is not the truth because it is based on a past history of discrimination or disadvantage that needs to be overcome."

At the end of the day, Mr Diallo says a human being is always the best option for decisions like hiring and firing.

"It might not be as efficient or as fast, but a human being will have a better judgement than the machine," he says.