Artificial Intelligence Learning to Discriminate

By Michael Bidwell

Originally published in the Queensland Proctor.

September marked Brisbane Pride Month, now in its 28th year of supporting the LGBTI+ community. As an openly gay lawyer, Editor-in-Chief of The Legal Forecast and Events Coordinator for Pride in Law Australia, I wanted to write this article as a reminder of how far we must go to reach true equality, given that artificial intelligence has learned and expressed homophobia. As artificial intelligence (AI) is increasingly incorporated into the Human Resources (HR) practices of organisations, we must do better as employers and as human beings to ensure everyone gets a fair go.

AI and HR

In short, AI is intelligence demonstrated by machines. According to IBM’s 2017 survey of 6,000 executives, 66% of Chief Executive Officers believe AI can drive significant value in HR, and 54% of HR executives agreed, noting that AI will affect key roles in their departments.

Kate Guarino, director of HR operations for Pegasystems, said AI presents an opportunity for HR to automate “repetitive, low-value add tasks” and increase the focus on more strategic work. Kate cited the example of HR spending time processing the steps of onboarding a new employee (allocating space, provisioning a laptop, etc.). Saving time in those arenas can help HR teams pivot to making sure they focus on “value-add work like mentoring and continuous feedback.” Kate said companies have implemented “AI recruiters” to automate scheduling interviews, provide ongoing feedback to candidates and answer their questions in real time.

This all sounds exciting, right? However, we should consider where AI can go wrong if implemented in HR practices without appropriate supervision.

AI learning and homophobia

In 2016, Microsoft released an AI chatbot, Tay, designed to learn to talk like millennials by analysing conversations on Twitter, Facebook and the wider internet. Tay began by holding polite conversations, but it was built to learn from everyone’s posts. I will not repeat those posts in this article, but Microsoft apologised for the racist, sexist and homophobic posts Tay learned to create.

In 2017, Google revealed new software, the Cloud Natural Language API, built to help businesses test their messages and rate them on a scale from negative to positive. It turned out to react markedly negatively to words and phrases about homosexuality: for example, the AI rated the phrase “I’m straight” at 0.1 and the phrase “I’m homosexual” at -0.4.
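To see how a sentiment model can inherit bias like this, consider a deliberately simplified sketch. This is not Google’s actual system, and the corpus and scores below are invented for illustration: the scorer learns a per-word polarity from labelled snippets, so identity terms that appear mostly in hostile posts come out with negative scores through no fault of the words themselves.

```python
# Toy sentiment scorer showing how bias in training data propagates
# to phrase scores. Corpus, labels and scores are all hypothetical;
# this is NOT the Cloud Natural Language API.
from collections import defaultdict

# Labelled training snippets: (text, sentiment in [-1, 1]).
# Note the skew: the identity term appears only in an abusive post.
corpus = [
    ("great product love it", 0.8),
    ("terrible awful experience", -0.8),
    ("homosexual people are awful", -0.9),  # abusive post
    ("straight forward and great", 0.7),
]

# Learn each word's polarity as the mean label of snippets containing it.
totals, counts = defaultdict(float), defaultdict(int)
for text, label in corpus:
    for word in text.split():
        totals[word] += label
        counts[word] += 1
polarity = {w: totals[w] / counts[w] for w in totals}

def score(phrase):
    """Mean learned polarity of the known words; 0.0 if none are known."""
    known = [polarity[w] for w in phrase.lower().split() if w in polarity]
    return round(sum(known) / len(known), 2) if known else 0.0

print(score("homosexual"))  # negative, inherited from the abusive context
print(score("straight"))    # positive, inherited from a positive context
```

The model has no notion of what the words mean; it simply reproduces the statistical company each word kept in its training data, which is exactly how abuse directed at a group online becomes a “negative” rating for neutral statements of identity.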

Working with Jigsaw, a division of Google that creates tools that deal with abusive comments, Google plans to train future AI to recognise the difference between slurs against LGBTI+ people and legitimate terms. The plan was announced at SXSW and Jigsaw product manager CJ Adams explained that their “mission is to help communities have great conversations at scale. We can’t be content to let computers adopt negative biases from the abuse and harassment targeted groups face online.”

Who should learn from whom?

AI is learning these behaviours from humans, which shows we need to do better at treating one another with respect. On one hand, you may feel we must do better as a society so that our AI learns to be better too. On the other hand, you may feel AI can show us how to be better, and that we can learn from it as a society.

Either way, this article cannot explore the workplace legal liability that arises when humans or AI treat a prospective or current employee differently for being LGBTI+, but all solicitors should be aware of Rule 42: ‘A solicitor must not, in the course of practice, engage in conduct which constitutes discrimination, sexual harassment or workplace bullying.’

If your organisation is considering implementing AI in any of its practices, I suggest you consider all the risks and put appropriate management practices in place. I ask each of you to learn, engage and try to be better so that our AI can learn from us to be more inclusive. While our diversity in the profession provides great potential, it is only realised once we come together.

About Michael Bidwell
Michael Bidwell is an Executive Member of The Legal Forecast and Events Coordinator at Pride in Law, Australia. Special thanks to Benjamin Teng of The Legal Forecast for technical advice and editing. The Legal Forecast aims to advance legal practice through technology and innovation. TLF is a not-for-profit run by early career professionals passionate about disruptive thinking and access to justice.
