Safety, fairness and privacy in AI

11 December 2023

In short

  • Teachers must navigate the ethical and responsible use of artificial intelligence (AI) on a daily basis.
  • Though AI has the potential to offer many benefits, it also presents a range of issues.
  • One of the key concerns raised by AI is its capacity to entrench existing biases.

Teachers need to be aware of the ethical challenges in using any AI tool, says Jeannie Marie Paterson, Professor of Law and Co-Director of the Centre for AI and Digital Ethics (CAIDE), University of Melbourne.

These include respect for privacy and recognising the capacity for bias and error in the technology, she says.

“None of those are easy questions but they are questions that are core to trust in education. They are questions about fairness, values and policy.”

One of the issues is the potential for AI tools to gather student data without students’ knowledge.

The AEU’s AI position statement asserts that experiences during the COVID-19 pandemic “… demonstrated that the business model of the major international technology companies involves entrenching their products in education systems and schools, with little concern about their educational value and without transparency about the pedagogical, curriculum, and assessment and reporting algorithms integral to them”.

In 2022, global advocacy group Human Rights Watch analysed 164 edtech products and claimed 89 per cent of them appeared to engage in data practices that put children’s rights at risk, contributed to undermining them, or actively infringed on them. It found the products monitored children, or had the capacity to do so, often harvesting data such as identity, location and classroom activities; some installed tracking technologies that followed children outside the virtual classroom.

Paterson says the privacy issues are “huge”.

“If we take ChatGPT: it collects information from user prompts and then uses that information to retrain or refine the bot. That might sound bland, but it might be significant in terms of the resharing of sensitive information, intellectual property, and indeed what happens to the accuracy of the tool.

“If we think about social media: it’s collecting information. If we think about parents posting pictures of their children: that’s being reproduced and repurposed, perhaps infinitely. People sideline privacy because we can’t see any harm in sharing information,” she says.

Paterson says stronger privacy laws are needed. “We also need more discussions about the value of privacy and why the widespread hoovering up of personal information via digital technology isn’t a great thing for individuals or society.”

Safeguarding students also extends to the technology’s capacity to create fakes.

“What we’re seeing now is an explosion of digital fakes. This might be used for scam advertising or to manipulate the political process. Horrifyingly, we’re also seeing a lot of fake pornographic images and cyber bullying using fake images. This is because the capacity to create such images is increasingly available via a phone app or images scraped from social media,” says Paterson.

“I think the eSafety Commissioner is right on to that usage, but there’s still a way to go.”

Beware of bias

Another concern raised by AI is its potential to entrench existing biases or embed new ones.

The AEU submission to the House Standing Committee on Employment, Education and Training Inquiry into the use of Generative Artificial Intelligence in the Australian Education System states: “Considering AI generates based on popular or dominant thinking, the risk for perpetuating stereotypes, single perspective, and ultimately misinformation remains unacceptably high, especially taking into consideration perspectives on gender, non-Anglo cultures, First Nations cultures, non-binary and queerness, disability, people living outside urban centres, as well as intersectionality within underrepresented groups.”

The lack of Aboriginal and Torres Strait Islander stories and knowledge systems in the datasets mined by AI models effectively renders First Nations people invisible, “perpetuating historical patterns of exclusion”, writes Dharug man Dr Josh Tobin.

“In a world where AI plays a role in everything from generating text to informing policy decisions, the dearth of Aboriginal and Torres Strait Islander perspectives is questionable,” he says.

“Aboriginal and Torres Strait Islander communities possess unique cultures, languages, and histories that are not accurately or sufficiently represented in mainstream digital platforms and databases.”

Tobin says the solution begins with acknowledging Aboriginal and Torres Strait Islander data sovereignty.

The global Indigenous Data Sovereignty movement seeks to shape how data collected about First Nations people is governed and interpreted. While there is a long history of data being collected on Aboriginal and Torres Strait Islander people, little has been collected for or with them, write Professor Bronwyn Carlson, an Aboriginal woman born on D’harawal Country, and Wiradjuri woman Peita Richards.

“Many Indigenous people are concerned with how the data involving our knowledges and cultural practices is being used,” say Carlson and Richards.

Men in tweed coats

Says Paterson: “If you ask a generative image AI to draw some pictures of academics, you’ll get a whole lot of white men in tweed coats and if you ask them to draw cleaners, you’ll get a whole lot of middle-aged women of colour. That’s not right at all. But it comes about because the AI is reflecting a particular segment of the internet that it has been trained on.

“So, we need to understand how that bias occurs, where that occurs, and then be able to have the policy and value discussions about whether AI-driven representations or predictions are appropriate to be used in any particular context.”

Reducing bias can mean tackling structural gender imbalances in the AI workforce and the gender divide in digital and STEM skills. According to a 2019 UNESCO report, I’d Blush If I Could, only 12 per cent of AI researchers and 6 per cent of professional software developers are women. Furthermore, the proportion of women with AI skills in Australia sits below the OECD average.

It’s a gap that Women in AI is aiming to close. The non-profit do-tank, founded in 2016 by two female AI professionals in France, is working towards inclusive AI that benefits global society. It now has more than 8000 members in 140 countries and provides training and mentoring for women in the AI industry and those wishing to become involved.

Angela Kim is the Australian ambassador and global chief education officer for Women in AI.

“I’m really mindful that [by] 2030 almost 80 per cent of professions and jobs will be technology-related, which means if you don’t prepare women now there will be less opportunity and that is inequity,” she says.

She also sees a need to support refugee, immigrant, and First Nations communities.

Targeted STEM

In 2019 Women in AI partnered with Macquarie Group and IAG Group to host a school holiday AI camp for Year 9 STEM students at Cabramatta High School in Western Sydney.

Kim says 90 per cent of students there are immigrants: “Their parents can’t speak English and they work long hours. So, students do housework, they look after siblings and they cook. They are bright – and they love the STEM subjects.”

The following year, the same group of 47 girls and three boys attended more AI camps, says Kim. “We ran three AI camps for four days. They learned about coding, data, digital storytelling and AI. We also brought in tech executives with refugee and immigrant backgrounds. They shared their stories and how it was challenging but they didn’t give up.”

Eight girls who attended the camps went on to enrol in engineering studies at the University of NSW. Women in AI has also worked with NSW universities to offer tech literacy workshops for female students, including one on responsible AI.

In the US, not-for-profit organisation EqualAI is tackling unconscious bias from another angle: through the development of responsible AI governance.

Aware that bias can be embedded at every human touchpoint, from data collection through testing, development and deployment, EqualAI is working with industry leaders, experts, academia and government to develop standards and tools that raise awareness, reduce bias, and identify regulatory and policy solutions. It has introduced a Responsible AI Badge certification program for senior executives, and in August it released a white paper, An insider’s guide to designing and operationalising a responsible AI governance framework.

Support for teachers

Paterson says students and educators need support to demystify AI, something key to CAIDE’s work at the University of Melbourne. “They don’t need to be expert coders, but they need to understand enough about the technology to see where the pressure points and the worries are.”

That includes understanding how it may develop in the future.

Kim agrees: “We need to provide solid tech and data and AI literacy programs for teachers so that they can be well equipped. A lot of people complain about generative AI like ChatGPT and how useless it is because they have experienced incorrect answers as a result of the data being biased.

“But a lot of people don’t know that you need to learn how to use generative AI tools correctly. How do you prompt in such a way that you can guide ChatGPT to give you the best, most appropriate answer for you?”
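To see what that kind of guidance looks like in practice, here is a minimal sketch in Python comparing a vague prompt with one that spells out audience, scope and format. The openai client usage, model name and prompt wording are illustrative assumptions for this sketch, not tools or settings discussed in the article.

```python
# A minimal sketch of guiding a generative AI tool with a scoped prompt.
# Assumes the openai Python package is installed and the OPENAI_API_KEY
# environment variable is set; the model name below is illustrative.
from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    """Send a single prompt and return the model's reply."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name, not prescribed here
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# A vague prompt leaves the model to guess the audience, scope and format.
vague = "Tell me about photosynthesis."

# A scoped prompt states who the answer is for, what it must cover and
# what form it should take (the kind of guidance Kim describes).
scoped = (
    "You are helping a Year 7 science teacher in Australia. "
    "Explain photosynthesis in three short paragraphs suitable for "
    "12-year-olds, then list two common student misconceptions."
)

print(ask(vague))
print(ask(scoped))
```

The difference lies in the instruction rather than the tool: the scoped prompt tells the model who the answer is for, what it must cover and the form it should take.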

By Christine Long

This article was originally published in the Australian Educator, Summer 2023