AI Archives - Women's Agenda
https://womensagenda.com.au/tag/ai/
News for professional women and female entrepreneurs

Elon Musk made a meme about the sexual exploitation of women’s bodies online. So I made some memes about him.
Mon, 12 Feb 2024

At the frontline of global technological development is a man who treats generative AI as a game, one that is played at the expense of women’s bodies, with hundreds of millions of spectators watching on.

Just weeks after sexually explicit AI-generated images of Taylor Swift were circulated on his social media platform X and viewed more than 47 million times, Elon Musk trivialised what is a very real threat these technologies pose to women.

On Sunday night (Monday morning in Australia), Musk posted a meme that shows just how little he cares about this threat.

“Boobs rock, it’s a fact,” he wrote on the post.

Elon Musk’s post shows how men (still) disrespect women’s bodies – online and beyond.

It’s the billionaire, Silicon Valley bro version of an 11-year-old boy typing “5318008” on his calculator and turning it upside down so it spells “BOOBIES”. I can picture an adolescent-like giggle escaping from Musk as he put fake boobs on the woman in the meme, wrote the caption and posted it to his 172 million followers on X, his very own platform.

But the truth is – the meme was immature, tone-deaf and the clearest indication we have that Musk just isn’t funny.

Luckily, I am. And I want to show him how meme-making is really done.

The AI blame game

If there were any real accountability for Musk and people wanted to bring him down for making fun of a very real issue, I wonder if he would point the finger at AI.

Because that’s the pattern we’re seeing. When the pornographic images of Taylor Swift were distributed all over the platform, people vaguely blamed it on technology. No humans were accountable.

When an image of Victorian MP Georgie Purcell was digitally altered and aired on a national television news broadcast, Nine News director Hugh Nailon cited AI as the reason for the edit. No humans were accountable.

I’m tired of the AI blame game. It’s about time we point the finger at the real problem here – the people running the show.

Men are so quick to blame it on the robots.

SpaceX’s lawsuit

It’s an interesting choice Elon Musk has made to meme-ify image-based sexual harassment while his company SpaceX is facing a lawsuit for sexual harassment and discrimination.

In January, the California civil rights department informed SpaceX of seven complaints made by former employees at the rocket-making company. The complaints allege that managers fostered a hostile work environment in which jokes about sexual harassment went unchecked, that women were paid less than men at the organisation, and that any employee who complained about the conditions was dismissed.

Last week, Bloomberg broke the story that, as a result of those complaints, SpaceX is being sued for sexual harassment and discrimination.

Did Musk miss that memo? Because I don’t think prompting AI to alter an image to make a woman’s breasts bigger is helping the case, nor is making a meme about it.

Bad timing on that meme, bro.

Women’s bodies and AI technologies

Elon Musk owns one of the world’s biggest social media platforms. Ultimately, this guy gets to decide what goes on the platform and what stays off.

Last week, Women’s Agenda published an article about a woman who was kicked out of a shopping mall for wearing a midriff top. In the article, the main image showed a picture of her stomach.

When we posted the story on social media, we ran into a problem. The article was blocked and could not be posted on X.

Why? Because of the main image. Because of the woman’s stomach.

To be clear: AI-generated pornographic images of women can be widely distributed on the platform, seen 47 million times before Musk and the team at X notice a problem. But a woman’s belly? Not OK.

AI-generated porn? Yes. Tummies? Absolutely not.

Women’s bodies are still being regulated by men – online and beyond. The men running the online world don’t see a problem with deepfake images, because the images don’t affect them: rather, they see such content as curated for their pleasure and their pleasure only, because “boobs rock”, right? But content like this can ruin names, reputations, lives and so much more.

Of course, regulation on the technology itself is important. Giving people the ability to create this dangerous content gives people the choice to create this dangerous content. That’s why so many women in AI are calling for more regulation and a stronger gender lens in government regulation of AI.

But don’t try to tell me it’s a robot’s fault. Because Musk’s poor attempt at being funny speaks volumes to how men in these spaces (still) disrespect women. 

Maybe it’s not giving people the choice to create the dangerous content that is the problem. Maybe it’s the fact we’re letting them get away with it.

How Fei-Fei Li helped shape the AI revolution in a field dominated by alpha males
Tue, 06 Feb 2024

Fei-Fei Li is one of the most prominent women in the heavily male-dominated field of artificial intelligence. Jane Goodall, from Western Sydney University, shares this review of her book, The Worlds I See, in this article republished from The Conversation.

Public debate on artificial intelligence has escalated in the past six months, with an outpouring of opinion pieces on the risks and ethics of a science that is undergoing an exponential period of advance.

One of the key figures in this, as a contributor to both the science and the debate, is Fei-Fei Li, Sequoia Professor of Computer Science at Stanford, and co-director of AI4All, a non-profit organisation promoting diversity and inclusion in the field of AI.


Review: The Worlds I See – Fei-Fei Li (Flatiron Books)


Aside from one controversy during her tenure as Chief Scientist for Google Cloud, involving a proposed partnership between Google and the Pentagon, Li has been something of a role model, not least because of her prominence in an area dominated by alpha-male personalities.

Free of the influence of stylists and image-makers, she comes across in interviews with the fluency of someone who wants to think their way through ideas as they arise, rather than deliver platform statements.

Li describes The Worlds I See as “a double helix memoir”. One thread is the coming-of-age of the science of AI; the other is an account of her own coming-of-age as a scientist. The personal dimension came to the fore, she says, after what was initially a “very nerdy book” was given the thumbs-down by a colleague.

Matter becomes mind

The story begins in Chengdu in China’s Sichuan province. As the only child of a family “in a state of quiet upheaval”, Li had a sense that her elders had been through more than they could tell. Her academically trained maternal grandparents found themselves on the wrong side of history during the Cultural Revolution. Her mother’s intellectual energies were thwarted.

As if there were some braided version of Yin and Yang in her heritage, her father’s free-spirited personality provided a complementary, if antithetical, form of influence. He was, says Li, the kind of parent a child might design for themselves if left to their own devices. He was impulse-driven, possessed of miscellaneous fascinations, which took him on excursions through the rice fields looking for butterflies, stick insects, wild rodents.

Her mother, meanwhile, was determined to escape. This ambition was realised in 1992, when the family moved to the United States. They settled in Parsippany, New Jersey, where 15-year-old Li, grappling with the demands of high school in a foreign language, demonstrated a capacity for long hours of work directed towards the academic goals her mother valued.

Her father’s fascination with natural life forms transferred to the object world of garage sales. He continued to involve Li in the practice of “studying everything in sight”.

Throughout The Worlds I See, Li reflects on the influence of this parental binary on her advancing career as a scientist. Without the fierce intellectual determination of her mother, she could not have persevered with her high school studies, given the family’s ongoing struggle for economic survival. Without her father’s childlike capacity to pay total attention to random phenomena, her research might never have found its innovative path.

The braid of fascination and intellectual drive twists in unexpected ways. It eventually fuses into an almost visionary faith in what Li terms the North Star of her life: a vocation to shift the parameters of understanding by asking “audacious questions” of the kind pursued by the great physicists who inspire her: Albert Einstein, Roger Penrose, Erwin Schrödinger.

Her own audacious question – “what is vision about?” – came into focus by degrees. For someone given to describing her enterprise in terms of revelation and revolution, her actual research on vision seems anything but visionary.

Undergraduate study in physics and computational mathematics at Princeton yielded an opportunity for vacation work as an assistant to a neuroscience team at UC Berkeley. They were attempting to capture the neuronal responses of a cat to visual stimuli. The targeted area of the brain was probed by hairline electrodes to pick up signals.

These signals were translated first into sound waves, then back into visual patterns from which the team were able to recompose something approximating the original image shown to the animal.

Hardly the stuff of romance, yet Li comments: “Something transcendent happens. Matter somehow becomes mind.”

What is data?

This insight sustains Li through her protracted labours. She becomes convinced that the principle can be applied to machine learning.

Following evidence that visual recognition in the human brain moves from the general to the increasingly specific (bird, water bird, duck, mallard), Li and her postgraduate collaborator set out to feed the computer with a comprehensive range of examples in a limited set of categories.

New image technologies in other domains came to their assistance. Google Street View identified 2,657 models of car on the road in 2014. Amazon Mechanical Turk escalated the scale and speed of their research as categories multiplied, from the original ten to thirty, a hundred, a thousand.

But the project had all the burdens that faced Charles Darwin as he attempted a comprehensive taxonomy of pigeons, or James Murray compiling the Oxford English Dictionary.

For Li, the apparently humdrum conviction that learning should be driven by data rather than algorithms arrives as “a moment of epiphany”. The audacious questions “what is vision?” and “what is intelligence?” merge. They become associated with a third question: “what is data?”

A rapid thaw in the “AI winter” of the first decade of the 21st century commenced in 2012, when research into machine learning made a breakthrough in the direction of “big data”. It was all about scaling up, increasing the retention capacities of AI to incorporate the range and complexity of phenomena in the world itself.

Li found her approach converging with that of Geoffrey Hinton, the Toronto-based cognitive psychologist often credited with spearheading the AI paradigm shift. Data can be exponentially multiplied, Hinton proposed, when machines talk among themselves. Digital agents scan diverse areas of data and exchange what they have learned to generate more sophisticated modes of correlation.

Intelligence comes to be seen not as an inherent property of a machine or a human brain, but as something out there. It arises from interactions between objects, events, beings and environments. There is more of the gatherer than the hunter in its development.

Distributed intelligence

Distributed intelligence means distributed opportunities to participate in the co-evolution of human and machine intelligence. Big science and high technology cease to be the exclusive preserve of specialists whose modes of knowledge are beyond the understanding of ordinary people. Anyone who has had an exchange with OpenAI’s ChatGPT is contributing.

Li insists, however, that effective human learning requires education. The most important figure in her own education was her high-school maths teacher, Bob Sabella, who kept her on track as she struggled with the English language curriculum. He remained a friend and mentor through every stage of her academic advancement.

It is the dedicated school teacher, Li says, who is the real emblem of the future in human technology. She co-founded AI4All in 2017 with the aim of providing hands-on training for high-school students, especially girls, students of colour and those from immigrant families or low income communities. Li herself fits most of these categories.

The experiences Li recounts in The Worlds I See display an extraordinary capacity for persistence in the face of obstacles. She completed high school while supplementing the family income with a $2 an hour job in a Chinese restaurant. As a graduate student, she was running the family dry-cleaning business.

Her exams at Princeton were done by special arrangement at the hospital clinic where her mother was undergoing surgery for a deteriorating cardiovascular condition.

But it is as if everything she experiences is turned to account in the pursuit of the North Star. Recurring crises in her mother’s health gave her a familiarity with hospitals, which led her to explore how AI might be deployed, not to replace the vital role of human nurses and health workers, but to support them.

If Li’s efforts can be seen as a feminist enterprise, it is perhaps because the field in which she works is dominated by male celebrities, who persist in seeing the future as a Darwinian struggle between human and machine intelligence.

“Which is smarter?” is less an audacious question than one that needs to be consigned to the dustbin of history. Speaking in 2018 to a Congressional hearing on Power and Responsibility in the application of advanced technologies, Li said:

There’s nothing artificial about AI. It’s inspired by people, created by people, and most importantly it has an impact on people.

Explicitly distancing herself from those, like Hinton, who see the current breakthrough in AI potential as an existential crisis, Li is concerned with tangible social risks, and specific ways to address them.

In a recent discussion with former US Secretary of State Condoleezza Rice, now Head of the Hoover Institution at Stanford, Li expressed her belief that policy intervention can install the important safeguards in areas where the impact of AI is likely to be greatest.

These include its benign potential in health and education, as well as the dangers opening up through disinformation, the loss of privacy and the replacement of human work.

If there is an overriding theme in The Worlds I See, it is that human and artificial intelligence form a double helix. How this evolves, and with what consequences, will depend, Li says, on whether we create “a healthy ecosystem” in which talent, technology and public sector participation are co-ordinated.

Jane Goodall, Emeritus Professor, Writing and Society Research Centre, Western Sydney University

This article is republished from The Conversation under a Creative Commons license. Read the original article.


‘Egregious invasion of privacy’: Taylor Swift’s name blocked on X after sexually explicit deepfakes go viral
Mon, 29 Jan 2024

Sexually explicit deepfake images of Taylor Swift have been circulated on Elon Musk’s social media platform X, sparking grave concerns over the growth of artificial intelligence (AI).

X Corp. (formerly known as Twitter) responded to the incident on Sunday night by removing the images and the account that first published the deepfakes, as well as temporarily blocking users’ ability to search “Taylor Swift” on the platform.

“This is a temporary action and done with an abundance of caution as we prioritise safety on this issue,” said Joe Benarroch, head of business operations at X.

The pop star’s name is still blocked on X, resulting in an error when trying to search her name.

Typing “Taylor Swift” into the X search bar results in an error message. Credit: Women’s Agenda

According to a report from The New York Times, one of the several images that were in circulation was viewed 47 million times before the deepfake, along with the account that published it, was removed from X.

In a news briefing on Friday, White House press secretary Karine Jean-Pierre called on Congress to take legislative action against the abuse and misuse of AI technologies online, but also urged social media platforms to take greater measures to regulate content.

“This is very alarming. And so, we’re going to do what we can to deal with this issue,” Jean-Pierre said.

“We know that lax enforcement disproportionately impacts women and they also impact girls, sadly, who are the overwhelming targets.

“We believe they (the platforms) have an important role to play in enforcing their own rules to prevent the spread of misinformation and non-consensual, intimate imagery of real people.”

The creation and distribution of deepfake AI images has been widely regarded as a form of gender-based violence, as it disproportionately targets women and girls online.

In 2019, a study by Deeptrace, a cyber security company, found 96 per cent of deepfake videos online were of an intimate or sexual nature. The people depicted in the AI-generated content were primarily women actors, musicians and media professionals.

‘Extremely harmful content’

Australia’s eSafety Commissioner Julie Inman Grant spoke to Women’s Agenda, explaining how easy it is to create deepfakes and how devastating it can be for people.

“Deepfakes, especially deepfake pornography, can be devastating to the person whose image is hijacked and altered without their knowledge or consent, no matter who they are,” Commissioner Inman Grant said.

“Image-based abuse, including deepfake porn, is persistent online harm which also represents one of the most egregious invasions of privacy.”

Generative AI is user-friendly and widely accessible. Inman Grant said content that would previously have required substantial software and computing power can now be generated with the click of a button.

“As a result, it’s becoming harder and harder to tell the difference between what’s real and what’s fake. And it’s much easier to inflict great harm,” Inman Grant said.

Australia’s online safety regulatory body, eSafety, lists the use of AI to create sexually explicit deepfake images as “image-based abuse”. Online users can report image-based abuse on eSafety’s website.

While eSafety has a 90 per cent success rate in having deepfakes and other abusive material removed from online sites, including the social media platform X, Commissioner Inman Grant called on the “purveyors and profiteers of AI” to do more.

“We’re not going to regulate or litigate our way out of this – the primary digital safeguards must be embedded at the design phase and throughout the model development and deployment process,” she said.

“And platforms need to be doing much more to detect, remove and prevent the spread of this extremely harmful content.”

Earlier this month, the eSafety Commission released a transparency report, revealing massive staff cuts at X Corp around the world.

According to the report, the global Trust and Safety staff was reduced by 30 per cent, while the Trust and Safety staff in the Asia Pacific region, including Australia, had a 45 per cent reduction.

Between November 2022 and May 2023, 6,103 previously banned Twitter accounts were reinstated on X.

At the time of the report’s release, eSafety commissioner Julie Inman Grant said Elon Musk’s staff cuts at X Corp. had created a “perfect storm” for the platform.

eSafety urges those concerned about the non-consensual sharing of images to report to eSafety at www.esafety.gov.au/Report.

The government has released its interim response to AI growth – but where is the gender lens?
Thu, 18 Jan 2024

The Australian government has released its interim response to the Safe and Responsible AI in Australia consultation, outlining its short-term action plan for the growing technology. However, there are concerns that no gender lens was applied in the interim response.

The Department of Industry, Science and Resources published the interim response on Wednesday afternoon, in reply to public submissions to the Supporting Responsible AI discussion paper from June-August 2023.

The Minister for Industry and Science, Ed Husic, said the interim response is a start to ensuring safety and responsibility remain at the forefront of AI growth in Australian industries, particularly in high-risk settings.

“Australians understand the value of artificial intelligence, but they want to see the risks identified and tackled,” Minister Husic said.

“We have heard loud and clear that Australians want stronger guardrails to manage higher-risk AI.

“The Albanese government moved quickly to consult with the public and industry on how to do this, so we start building the trust and transparency in AI that Australians expect.”

The key principles of the AI interim response from the government include testing, transparency and accountability.

In its response, the government said it is considering implementing mandatory guardrails for AI in high-risk settings by either amending current AI laws or creating new laws specific to AI.

While this is a long-term action plan for the government, Minister Husic said its short-term plan is to work with industry to develop a voluntary AI Safety Standard, voluntary labelling and watermarking of AI-generated materials and to establish an expert AI advisory group.

“We want safe and responsible thinking baked in early as AI is designed, developed and deployed,” Minister Husic said.

According to research from McKinsey & Company, the growth of AI could boost Australia’s GDP by anywhere between $170 billion and $600 billion per year by 2030.

Considering AI with a gender lens

A recent report from the International Monetary Fund estimates that AI will impact almost 40 per cent of jobs around the world, and around 60 per cent of jobs in advanced economies like Australia.

The report also found that women have higher exposure rates to AI than their male counterparts. Therefore, women face greater opportunities, yet greater risks, in the growth of AI in Australia.

While the interim response paper mentions general “bias and discriminatory outputs” that currently exist in AI, there is no specific mention of women or of considering AI with a gender lens, something that concerns Tracey Spicer AM, author of Man-Made: How the bias of the past is being built into the future.

Speaking with Women’s Agenda, Spicer was disappointed the Albanese government did not consider the “safe and sensible approach” of the European Union’s AI Act, world-first legislation that ensures AI is “safe, transparent, traceable, non-discriminatory and environmentally friendly”.

Tracey Spicer AM, author of Man-Made: How the bias of the past is being built into the future.

“I can only assume that Big Tech has pulled the wool over the eyes of federal Ministers, who are solely focused on the potential benefits of AI to the economy,” Spicer said.

“The bias and discrimination deeply embedded in artificial intelligence is a human rights issue.”

Spicer said the “dirty data sets” that currently exist in the technology have unconscious bias embedded into its system, which worsens with machine learning. For example, when AI is used in the hiring process, studies show it discriminates against diverse people, including women, people with disabilities and older workers. It’s the same story in several other instances, Spicer said.

“If you’re applying for a home loan or credit card, you’re less likely to be approved by an automated system if you identify as a woman, or belong to a marginalised community,” she said.

“And medical algorithms regularly rob people over the age of 50 of access to life-saving treatment.

“It’s appalling that this government has failed to put a gender lens, let alone an intersectional one, over this area.”

When asked about the consequences for women if a gender lens is not applied to AI, Spicer referred to a quote from Dr Joy Buolamwini, a computer scientist and leader in advocacy against AI bias.

“We risk losing the gains made with the civil rights movement and women’s movement under the false assumption of machine neutrality,” Dr Buolamwini said.

Spicer said the government must take a gender lens when regulating AI, as they are “inextricably linked”.

“The government should put an intersectional lens on artificial intelligence, to create a fairer future for all,” Spicer said. 

“Women cannot be left behind in the fourth industrial revolution. Data indicates that AI will replace four times as many women’s jobs as men’s. The fact that a so-called Labor government has failed to recognise this is, frankly, an epic fail.”

Women are more exposed to AI and could reap the benefits, new research shows
Mon, 15 Jan 2024

Almost 40 per cent of jobs around the world are exposed to artificial intelligence (AI), with women and younger people expected to reap the benefits of AI automation, new research has found.

The International Monetary Fund (IMF) released its report Gen-AI: Artificial Intelligence and the Future of Work over the weekend, predicting the winners and losers in what some suggest will be a new Industrial Revolution with the growing prevalence of AI.

“AI promises to increase productivity, while threatening to replace humans in some jobs and to complement them in others,” the report says.

Unlike reports that have come before, the IMF’s report found women have greater exposure to AI technologies than their male counterparts. The international body suggested AI presents both greater risks and greater opportunities for women in the workforce.

The research also found college-educated workers and younger people are better prepared for AI automation in the workforce and will be better equipped to move between jobs that could be replaced or complemented with AI technologies. AI automation places older people and workers who are not college-educated at risk of job displacement.

The IMF’s research found AI is more prevalent in advanced economies: around 60 per cent of jobs involve some form of AI, compared to 26 per cent exposure in developing economies. Worldwide, AI is present in almost 40 per cent of jobs.

In its research, the IMF recognises both the benefits and the consequences of AI. On one hand, AI replacing or complementing workers will improve productivity and generate greater profits for workplaces. AI automation will also create thousands of new jobs and revolutionise the labour market.

On the other hand, AI technologies replacing workers poses great risks for employees who need to find new work. While previous waves of automation, such as the Industrial Revolution in the 19th century, predominantly displaced low-income and middle-income workers, the IMF suggests AI automation also risks replacing high-skill, high-income jobs.

How AI impacts the labour market depends on the strength of the economy, the IMF’s report found. Advanced economies will feel a short-term hit from AI automation, as many workers could be replaced or complemented by AI technologies imminently. In the long run, however, advanced economies are better positioned to benefit from AI automation, because their workers have greater opportunities to upskill technologically over time.

Meanwhile, in developing countries that still rely heavily on manual labour and traditional industries, AI automation will be less disruptive in the short term. Over time, however, AI automation will widen the gap between advanced and developing countries even further.

The IMF suggests AI in workplaces will benefit higher-wage earners more than lower-wage earners. This is expected to exacerbate labour income inequality within economies even further.

Last year, an Australian report from Roy Morgan found 57 per cent of survey respondents believe AI will produce more challenges than benefits in Australian society.

The report found women were the most sceptical of AI automation, with 62 per cent agreeing it creates more problems than it solves.

According to the Roy Morgan survey, one in five Australians believe that advancement of AI risks human extinction in just 20 years.

The Australian government’s AI ethics framework outlines eight principles to promote responsible use of AI in Australian society. These principles aim to achieve better outcomes with AI, reduce the risks of negative impacts and practice the highest standards of ethical business and good governance.

The post Women are more exposed to AI and could reap the benefits, new research shows appeared first on Women's Agenda.

]]>
https://womensagenda.com.au/latest/women-are-more-exposed-to-ai-and-could-reap-the-benefits-new-research-shows/feed/ 0
How I felt after ‘beautifying’ myself through an AI tool and the impossible standards for women https://womensagenda.com.au/latest/how-i-felt-after-beautifying-myself-through-an-ai-tool-and-the-impossible-standards-for-women/ https://womensagenda.com.au/latest/how-i-felt-after-beautifying-myself-through-an-ai-tool-and-the-impossible-standards-for-women/#respond Fri, 20 Oct 2023 00:00:47 +0000 https://womensagenda.com.au/?p=72306 The danger of deep fakes, the inherent biases and the high beauty standards society and now technology places on women.

The post How I felt after ‘beautifying’ myself through an AI tool and the impossible standards for women appeared first on Women's Agenda.

]]>
Did you know that you can generate artificial intelligence (AI) images on Canva now? It’s as easy as putting a prompt into the search box and Canva will wave a magic wand and produce an almost-realistic image for you.

This morning, when I found out about this feature, I let my imagination run wild. 

The first prompt I entered was “disco balls in Parliament House”. I just think it would brighten the place up a bit, you know? The image that Canva came back with was everything I’d imagined (Canberra, if you’re reading this, I really recommend hanging disco balls in Parliament House. You’ll thank me later). 

My next prompt was “labrador in crocs”. Adorable, right? Canva replied with the cutest image you will ever see.

The final prompt I entered was one simple word: “woman”.

After a few moments, Canva responded. She was thin, had pale skin and piercing blue eyes. Her blonde hair brushed her shoulders and framed a face without a blemish, pimple or wrinkle in sight. Eyebrows, lips, cheeks – it all looked perfect to me.

The “woman” Canva created for me through their AI-generated image tool. Credit: Canva

In a matter of moments, the generative AI tool had given me an image of a woman – that much is true. But it also shone a light on the danger of deep fakes, the inherent biases, and the high beauty standards that society, and now technology, place on women.

Deep Fakes

A few weeks ago, the Australian Financial Review (AFR) Magazine published its annual Power issue for 2023. Traditionally, the publication sends photographers out to capture portraits of those who made the top 10 lists for Australia’s most powerful people.

This year, however, the AFR hopped on the generative AI bandwagon and created their own portraits of some people who made this list, including Hollywood actor Margot Robbie and soccer legend Sam Kerr.

Apart from the captions indicating they were AI-generated, it is clear the portraits are fake. For starters, Margot Robbie’s and Sam Kerr’s hands both had the wrong number of fingers.

“They (the images) are both uncanny and yet slightly unreal,” Matthew Drummond, the AFR Magazine editor, wrote in a piece defending the use of AI-generated images last month.

“All have the distinctive fuzzy texture of AI images, as if they were drawn. Our prompts were very minimal and the output hints at the way AI is learning 21st-century human culture.”

As I discovered, it takes seconds to generate an image that is borderline realistic, which of course raises ethical questions of “deep fakes”, or false videos, images and even voice recordings. Deep fakes can depict high-profile people and can often be used against them.

“Such fakes are set to add a new dimension to misinformation campaigns, and we’re keen not to add to that problem,” Drummond continued in his piece in the AFR. 

“Our input images were all selected from what’s publicly available on the web; the fresh portraits that our photographers took for this issue were kept separate.”

The growing frequency of deep fakes is dangerously problematic, particularly in global issues such as the humanitarian crisis in Gaza. Just last week, a WhatsApp voice memo that supposedly gave insider information from the Israeli army was discovered to be a fake.

Reuters predict approximately 500,000 video and voice deep fakes will be shared globally on social media this year. However, a survey from 2022 found 43 per cent of respondents admitted they would not be able to detect a deep fake video.

Beauty Standards

I played with another AI tool this morning. The platform I used could take an existing photo and AI-ify the image.

I uploaded a picture of me taken at my graduation ceremony on Monday. I love this photo – it reminds me of a proud, happy, exciting occasion, and you can see on my face how cheerful I felt in that moment.

Then I clicked the filter “Beautify”.

I decided to use the “glam” setting, maintaining the natural lighting and tidying up the background image a bit. I hit “apply” and waited.

I zoomed in. I looked like me, but kind of different. My skin had lost the red-ish tone I always have and looked the clearest it has ever been. My eyebrows were perfectly shaped. My eyes were symmetrical – I always criticise how wonky they look in photos. The hair on my arms had vanished and my dimple was more pronounced on my left cheek.

Before (left) vs After (right) being “beautified” by AI. Credit: Olivia Cleal

Experts have warned that AI-generated images could worsen the problems people, particularly young people, face when it comes to self-esteem and body image. Already, 73 per cent of people wish they could change the way they look, according to statistics from Butterfly. What’s more, 41.5 per cent of people compare themselves to others on social media most of the time or always.

Analysing the pictures of Sam Kerr, Margot Robbie, the “woman” Canva created for me this morning and even the AI version of me, the threat AI poses of exacerbating issues of self-esteem and body image is clear.

The “beautification” of women through AI-generated images, as well as the risk of deep fakes, raises big ethical problems. It’s time that platforms and politicians start thinking up solutions.

Meanwhile, I have a feeling I will find myself looking back and forth between my original graduation photo and the “beautified” AI version throughout today.

For those still thinking about the disco balls in Parliament House, or the labrador in crocs – Happy Friday.

AI-generated images of disco balls hanging in Parliament House (left) and a labrador in crocs (right).

The post How I felt after ‘beautifying’ myself through an AI tool and the impossible standards for women appeared first on Women's Agenda.

]]>
https://womensagenda.com.au/latest/how-i-felt-after-beautifying-myself-through-an-ai-tool-and-the-impossible-standards-for-women/feed/ 0
Women are more sceptical of AI than men, new research finds https://womensagenda.com.au/latest/women-are-more-sceptical-of-ai-than-men-new-research-finds/ https://womensagenda.com.au/latest/women-are-more-sceptical-of-ai-than-men-new-research-finds/#respond Tue, 29 Aug 2023 23:40:25 +0000 https://womensagenda.com.au/?p=71090 More than half of Australians believe AI will create more problems than it solves, with women far more likely to be concerned.

The post Women are more sceptical of AI than men, new research finds appeared first on Women's Agenda.

]]>
More than half of Australians believe artificial intelligence (AI) will create more problems than it solves, with women far more likely to be concerned over the advanced technology.

New data from Roy Morgan, which surveyed 1,481 Australians aged 16 and over in the SMS Survey, found 57 per cent of respondents think the future of AI will produce more challenges than it will benefits in Australian society.

The survey also found women were more sceptical of AI than men: 62 per cent agreed it creates more problems than it solves, compared with 52 per cent of their male counterparts.

Roy Morgan conducted the research in conjunction with the Campaign for AI Safety, which works towards greater public awareness of AI safety and advocates for strong laws to regulate AI.

Nik Samouloc, the coordinator of the Campaign for AI Safety, said the results of the survey demonstrate the need for greater regulation of the advancing technology.

“Most Australians are pessimistic about artificial intelligence, especially when it comes to job security and opportunities for misuse,” he said.

“The poll suggests that people want government regulation to deal with these issues, including unknown consequences and new problems that AI will create.”

Roy Morgan’s survey found one in five Australians believe the advancement of AI risks human extinction in just 20 years, and Samouloc said the government should respond to this concern from the public.

“The Australian government does not have time to delay AI regulation, nor to delay banning the development of dangerous AI that can be misused or cause grave accidents,” he said.

Respondents’ reasons for scepticism, shared by the majority of women surveyed, include the risk of job losses and the need for greater regulation.

While AI-related jobs are on the rise, a UNESCO report in 2022 found, globally, women represent only 29 per cent of science research and development positions, including in AI. 

CEO of Roy Morgan Michele Levine agreed greater regulation is needed surrounding AI, particularly to protect the jobs of Australians.

“Australians are excited about the benefits that AI technology can bring to everyday life, but on the balance, the majority of us feel the potential for job losses, misuse and inaccuracy outweigh these benefits,” she said.

“Australians feel there is a clear need for regulation in the AI space, to ensure that these risks can be adequately managed.”

There are no specific laws in Australia to regulate AI. “General regulation” of the technology currently falls under the Privacy Act 1988 (Cth) or the Australian Consumer Law, while the use of AI in workplaces or particular sectors is managed under “sector-specific regulations”, like the Therapeutic Goods Act 1989 (Cth) when it is used in the medical sector.

In June this year, the Department of Industry, Science and Resources opened the Safe and Responsible AI in Australia Discussion Paper. The Department consulted on how the Australian government can “mitigate any potential risks of AI and support safe and responsible AI practices.” Submissions for the Discussion Paper closed on August 4. 

The Australian Human Rights Commission (AHRC) made a submission to the Discussion Paper with 47 recommendations, including establishing specific laws and policies for AI regulation, a government taskforce to prevent misuse of AI in Australia, public education on digital literacy and more.

The post Women are more sceptical of AI than men, new research finds appeared first on Women's Agenda.

]]>
https://womensagenda.com.au/latest/women-are-more-sceptical-of-ai-than-men-new-research-finds/feed/ 0
Can AI help the 1 in 3 working age Australians with depression or anxiety? https://womensagenda.com.au/life/womens-health-news/can-ai-help-the-1-in-3-working-age-australians-with-depression-or-anxiety/ https://womensagenda.com.au/life/womens-health-news/can-ai-help-the-1-in-3-working-age-australians-with-depression-or-anxiety/#respond Thu, 17 Aug 2023 22:22:03 +0000 https://womensagenda.com.au/?p=70648 New research shows 1 in 3 working age Australians have depression or anxiety, with AI-guided support cited as a key solution.

The post Can AI help the 1 in 3 working age Australians with depression or anxiety? appeared first on Women's Agenda.

]]>
New research shows one in three working age Australians are managing symptoms of moderate to severe depression or anxiety, with AI-guided support cited as a key solution. 

This comes from a survey of 2000 Australians aged 16 to 65 years old who took part in two standardised clinical screening questionnaires (GAD-2 and PHQ-2). 
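For context, the GAD-2 and PHQ-2 are brief two-item screens: each item asks how often a symptom has occurred over the past two weeks, scored 0 (“not at all”) to 3 (“nearly every day”), and a combined score of 3 or more is the commonly used positive-screen cutoff. A minimal sketch of how such a screen is scored (the function name and error handling here are illustrative, not part of the instruments themselves):

```python
def screen_positive(item1: int, item2: int, cutoff: int = 3) -> bool:
    """Score a two-item screen (e.g. GAD-2 or PHQ-2).

    Each item is an integer 0-3; a total at or above the cutoff
    (conventionally 3) indicates a positive screen.
    """
    for item in (item1, item2):
        if not 0 <= item <= 3:
            raise ValueError("each item must be scored between 0 and 3")
    return item1 + item2 >= cutoff

print(screen_positive(1, 1))  # False: total of 2 is below the cutoff
print(screen_positive(2, 1))  # True: total of 3 meets the cutoff
```

A positive screen is not a diagnosis; in practice it flags a respondent for fuller assessment, which is why surveys like this one report people who “screened positive for symptoms” rather than people diagnosed with a disorder.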

Published by Wysa, a leading global provider of mental health support, the study suggests that the numbers of working people currently suffering from these mental health issues are far higher than reported in the 2022 Australian Bureau of Statistics (ABS) study of all Australians, which found 16.8 per cent to be living with an anxiety disorder and 7.5 per cent with depression.

Wysa’s research found almost twice as many 16-24 year olds screened positive for symptoms of depression (46 per cent) or anxiety (46 per cent) as 55-64 year olds, for whom the figures were 24 per cent for anxiety and 23 per cent for depression.

Additionally, almost half (49 per cent) of students are experiencing significant symptoms of anxiety, compared to 31 per cent overall, and 52 per cent have depression symptoms that are moderate to severe, compared to 32 per cent of all participants.

The reasons for these numbers vary. Data shows that money is the top reason Australians feel depressed in 2023, with two thirds of people worried about the cost of living, including 79 per cent of full-time homemakers/parents.

Work is also a cause of stress for 4 in 10 full-time employed people. Symptoms of anxiety are highest for workers in real estate (44 per cent), social care (44 per cent) and engineering (42 per cent). Depression is highest for those in IT (47 per cent), engineering (41 per cent) and retail (41 per cent).

And despite Medicare support for mental health being available in Australia, many are managing symptoms without support. 

Nearly half (46 per cent) of those who screen positive for symptoms of moderate to severe depression or anxiety have not spoken to a healthcare professional. Thirty-one per cent said this was because they don’t believe their symptoms are serious enough, 16 per cent cited perceived cost as a barrier to healthcare and 15 per cent said embarrassment was preventing them from seeking support. 

CEO and Co-founder of Wysa, Jo Aggarwal says clinically safe AI-guided support could be the key to supporting Australians’ mental health. 

“Our experience from supporting over 6 million people in 95 countries has shown us that conversational AI as the first step of care can help bridge the shortage of qualified professionals, but more importantly, it overcomes the barriers to access people face related to stigma, cost, and the need to self-identify a need for support,” said Aggarwal.

Within Wysa’s survey, participants were asked who they’d rather go to for support with their mental health, and half of them selected ‘a mental health app with clinically proven self-help resources tailored to their needs’ over anyone in the workplace or school. Fifty per cent would also choose an app over HR or school services. 

“People open up in AI-guided therapy much faster than to a human therapist, and it creates equitable access to support, at scale,” said Aggarwal.

“Our research shows there’s an appetite for clinically safe AI-guided support and it could be the best opportunity we have to address the mental health crisis in Australia.”

The post Can AI help the 1 in 3 working age Australians with depression or anxiety? appeared first on Women's Agenda.

]]>
https://womensagenda.com.au/life/womens-health-news/can-ai-help-the-1-in-3-working-age-australians-with-depression-or-anxiety/feed/ 0
‘Embrace your different perspective’: Dr Fang Chen on the power of women in AI https://womensagenda.com.au/tech/embrace-your-different-perspective-dr-fang-chen-on-the-power-of-women-in-ai/ https://womensagenda.com.au/tech/embrace-your-different-perspective-dr-fang-chen-on-the-power-of-women-in-ai/#respond Wed, 26 Jul 2023 00:22:06 +0000 https://womensagenda.com.au/?p=70176 For Professor Fang Chen from UTS’ Data Science Institute, the most critical aspect of working towards transformative impact is embracing inclusivity and diversity in AI.

The post ‘Embrace your different perspective’: Dr Fang Chen on the power of women in AI appeared first on Women's Agenda.

]]>
Inclusive leadership and transformative impact need to occur across all industries, and involve all genders, races, ages and abilities. YWCA Canberra’s She Leads Conference, taking place on August 4, will bring together a host of incredible female thought leaders across advocacy, politics, academia and science.

When it comes to new technological advances, we can’t escape the growing impact of AI and its frightening reach.

For distinguished Professor Fang Chen from UTS’ Data Science Institute, the most critical aspect of working towards transformative impact is embracing inclusivity and diversity in AI.

Dr Chen believes that inclusive leadership needs to be the kind that is adaptive to changes while also enabling women to embrace their leadership potential.

“Today, not only do we see more challenges in leadership, we also see more technological challenges, and these transformations need to be made so that they no longer perpetuate inequality,” she said. 

“How are we going to disrupt the status quo to foster some positive changes?”

Dr Chen has combatted biased perceptions in the past by focusing on her own skills and capabilities rather than her gender. 

Several years ago, Chen was introduced by a male speaker on a panel with a question — “As a female, how you convince people to trust the business intelligence?”

Chen hit back, saying that such a question simply diminishes her complex personhood. 

“I’m not only female, I’m also Asian, and English is not my first language and I’m skinny as well, I’m short, relative to other people, but I prefer to focus on the skills I’ve acquired,” she said.

Despite this, she adds that women are perhaps more observant, and “better at looking at things from a different angle.”

Chen, who won the Australian Museum Eureka Prize in 2018 for Excellence in Data Science, started out as an electrical engineer before pursuing a PhD in AI focusing on speech recognition. 

She emigrated to Australia and is now one of the world’s leading AI and human-machine interaction experts, promoting ethical, human-centred AI.

In the years she has worked in the industry, she has seen a remarkable shift in the perception of women in AI. 

“Even back in 2013, I was often the only woman at the table talking to all male colleagues,” she said.

“Now, AI is absolutely a welcoming atmosphere, and the boundary of the profession is blurring. In the past, AI was very strictly just about computer science, but now…there are different skills involved, and women bring a range of very unique strengths to the field.”

Chen commends Australia’s tertiary education system for leading the way in global AI research, though admits that we are also “a very risk averse nation…we tend to follow the suit of a lot of people rather than trying to jump on being the leader.”

“We can do better,” Chen says. 

As an internationally-recognised leader in AI and data science, Chen has mentored many young scientists and budding PhD candidates. 

She tells her students the importance of never underestimating their abilities, and to notice self-doubt when it creeps up on them.

“Nothing can save you more if you keep your focus in doing what your heart wants, and trusting you can do it,” she said. “My advice is to trust yourself. You’re not less capable than anyone else.” 

You can see Dr Fang Chen speak at the YWCA She Leads Conference on August 4. Book your tickets here.

The post ‘Embrace your different perspective’: Dr Fang Chen on the power of women in AI appeared first on Women's Agenda.

]]>
https://womensagenda.com.au/tech/embrace-your-different-perspective-dr-fang-chen-on-the-power-of-women-in-ai/feed/ 0
Ageism, sexism, classism and more: 7 examples of bias in AI-generated images https://womensagenda.com.au/tech/ageism-sexism-classism-and-more-7-examples-of-bias-in-ai-generated-images/ https://womensagenda.com.au/tech/ageism-sexism-classism-and-more-7-examples-of-bias-in-ai-generated-images/#respond Mon, 10 Jul 2023 23:23:14 +0000 https://womensagenda.com.au/?p=69838 Regardless of the input, AI image generators will have a tendency to return certain kinds of results. This is where the potential for bias arises.

The post Ageism, sexism, classism and more: 7 examples of bias in AI-generated images appeared first on Women's Agenda.

]]>
The next time you see AI-generated imagery, ask yourself how representative it is of the broader population and who stands to benefit from the representations within, says T.J. Thomson, from RMIT University and Ryan J. Thomas, from University of Missouri-Columbia, in this article republished from The Conversation.

If you’ve been online much recently, chances are you’ve seen some of the fantastical imagery created by text-to-image generators such as Midjourney and DALL-E 2. This includes everything from the naturalistic (think a soccer player’s headshot) to the surreal (think a dog in space).

Creating images using AI generators has never been simpler. At the same time, however, these outputs can reproduce biases and deepen inequalities, as our latest research shows.

How do AI image generators work?

AI-based image generators use machine-learning models that take a text input and produce one or more images matching the description. Training these models requires massive datasets with millions of images.

Although Midjourney is opaque about the exact way its algorithms work, most AI image generators use a process called diffusion. Diffusion models work by adding random “noise” to training data, and then learning to recover the data by removing this noise. The model repeats this process until it has an image that matches the prompt.

This is different to the large language models that underpin other AI tools such as ChatGPT. Large language models are trained on unlabelled text data, which they analyse to learn language patterns and produce human-like responses to prompts.
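To make the diffusion idea concrete, here is a minimal, illustrative sketch in Python with NumPy of the noising process described above. The variance schedule, array size and step count are arbitrary toy choices for illustration, not Midjourney’s actual implementation; in a real system a neural network learns to predict the noise, whereas here we simply show that knowing the noise lets you recover the original data exactly.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D "image": the clean training data the model should learn to recover.
x0 = np.linspace(-1.0, 1.0, 8)

# A simple variance schedule: how much noise is mixed in at each step.
T = 10
betas = np.linspace(1e-4, 0.2, T)
alphas_bar = np.cumprod(1.0 - betas)  # cumulative signal retained at each step

def add_noise(x0, t, rng):
    """Forward diffusion: blend the clean data with Gaussian noise at step t."""
    eps = rng.standard_normal(x0.shape)
    xt = np.sqrt(alphas_bar[t]) * x0 + np.sqrt(1.0 - alphas_bar[t]) * eps
    return xt, eps

# During training, a network would be shown (xt, t) and trained to predict eps.
xt, eps = add_noise(x0, t=T - 1, rng=rng)

# Given a perfect noise prediction, the clean data is recoverable exactly —
# learning to remove the noise is what "recovering the data" means above.
x0_recovered = (xt - np.sqrt(1.0 - alphas_bar[T - 1]) * eps) / np.sqrt(alphas_bar[T - 1])
print(np.allclose(x0_recovered, x0))  # prints True
```

Generation runs this in reverse: starting from pure noise, the trained model repeatedly subtracts its predicted noise, step by step, until an image matching the text prompt emerges.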

How does bias happen?

In generative AI, the input influences the output. If a user specifies they only want to include people of a certain skin tone or gender in their image, the model will take this into account.

Beyond this, however, the model will also have a default tendency to return certain kinds of outputs. This is usually the result of how the underlying algorithm is designed, or a lack of diversity in the training data.

Our study explored how Midjourney visualises seemingly generic terms in the context of specialised media professions (such as “news analyst”, “news commentator” and “fact-checker”) and non-specialised ones (such as “journalist”, “reporter”, “correspondent” and “the press”).

We started analysing the results in August last year. Six months later, to see if anything had changed over time, we generated additional sets of images for the same prompts.

In total we analysed more than 100 AI-generated images over this period. The results were largely consistent over time. Here are seven biases that showed up in our results.

1 and 2. Ageism and sexism

For non-specialised job titles, Midjourney returned images of only younger men and women. For specialised roles, both younger and older people were shown – but the older people were always men.

These results implicitly reinforce a number of biases, including the assumption that older people do not (or cannot) work in non-specialised roles, that only older men are suited for specialised work, and that less specialised work is a woman’s domain.

There were also notable differences in how men and women were presented. For example, women were younger and wrinkle-free, while men were “allowed” to have wrinkles.

The AI also appeared to present gender as a binary, rather than show examples of more fluid gender expression.

AI showed women for inputs including non-specialised job titles such as journalist (right). It also only showed older men (but not older women) for specialised roles such as news analyst (left). Midjourney

3. Racial bias

All the images returned for terms such as “journalist”, “reporter” or “correspondent” exclusively featured light-skinned people. This trend of assuming whiteness by default is evidence of racial hegemony built into the system.

This may reflect a lack of diversity and representation in the underlying training data – a factor that is in turn influenced by the general lack of workplace diversity in the AI industry.

The AI generated images with exclusively light-skinned people for all the job titles used in the prompts, including news commentator (left) and reporter (right). Midjourney

4 and 5. Classism and conservatism

All the figures in the images were also “conservative” in their appearance. For instance, none had tattoos, piercings, unconventional hairstyles, or any other attribute that could distinguish them from conservative mainstream depictions.

Many also wore formal clothing such as buttoned shirts and neckties, which are markers of class expectation. Although this attire might be expected for certain roles, such as TV presenters, it’s not necessarily a true reflection of how general reporters or journalists dress.

6. Urbanism

Without specifying any location or geographic context, the AI placed all the figures in urban environments with towering skyscrapers and other large city buildings. This is despite only slightly more than half the world’s population living in cities.

This kind of bias has implications for how we see ourselves, and our degree of connection with other parts of society.

Without specifying a geographic context, and with a location-neutral job title, AI assumed an urban context for the images, including reporter (left) and correspondent (right). Midjourney

7. Anachronism

Digital technology was underrepresented in the sample. Instead, technologies from a distinctly different era – including typewriters, printing presses and oversized vintage cameras – filled the samples.

Since many professionals look similar these days, the AI seemed to be drawing on more distinct technologies (including historical ones) to make its representations of the roles more explicit.

AI used anachronistic technology, including vintage cameras, typewriters and printing presses, when depicting certain occupations such as the press (left) and journalist (right). Images by the authors via Midjourney

The next time you see AI-generated imagery, ask yourself how representative it is of the broader population and who stands to benefit from the representations within.

Likewise, if you’re generating images yourself, consider potential biases when crafting your prompts. Otherwise you might unintentionally reinforce the same harmful stereotypes society has spent decades trying to unlearn.

T.J. Thomson, Senior Lecturer in Visual Communication & Digital Media, RMIT University and Ryan J. Thomas, Assistant Professor, Journalism Studies, University of Missouri-Columbia

This article is republished from The Conversation under a Creative Commons license. Read the original article.

The post Ageism, sexism, classism and more: 7 examples of bias in AI-generated images appeared first on Women's Agenda.

]]>
https://womensagenda.com.au/tech/ageism-sexism-classism-and-more-7-examples-of-bias-in-ai-generated-images/feed/ 0
Five women pioneering AI in Australia https://womensagenda.com.au/tech/five-women-pioneering-a-i-in-australia/ https://womensagenda.com.au/tech/five-women-pioneering-a-i-in-australia/#respond Thu, 22 Jun 2023 00:52:45 +0000 https://womensagenda.com.au/?p=69387 AI has become ubiquitous in our modern lives, but we know it has gendered consequences. These five women are pioneering AI in Australia.

The post Five women pioneering AI in Australia appeared first on Women's Agenda.

]]>
Artificial Intelligence has become increasingly common in our modern lives, and our dependence on it will continue to soar in coming years. But despite its convenience, we know there are numerous unanswered questions and adverse impacts on women, with those at the fore of the industry mostly men.

This piece looks at the women in Australia who are pioneering AI and helping to shift the balance.

Professor Svetha Venkatesh

Since the early 90s, Professor Svetha Venkatesh has undertaken research in AI that has helped solve problems and make predictions more efficiently.

As an internationally renowned computer scientist, she has contributed to research in machine learning, probabilistic models, data mining, health analytics, multimedia and social media analysis. 

This research has led to new technologies in large-scale pattern recognition that sift through video data for anomalies that might represent security threats.

Her research has helped develop a health analytics program that enables doctors to predict suicide risk, and assisted scientists in producing new alloys and biomaterials to streamline their experimental designs. It has also contributed to the creation of the TOBY Playpad app, which provides therapy for children with autism by helping parents implement comprehensive therapy plans at home, both on the iPad and in real-world settings.

As the Director of the Strategic Research Centre for Pattern Recognition and Data Analytics at Deakin University, she continues to expand her research – this March, her faculty received a $10 million donation from a business networking and financing company, Scale Facilitation, to set up a collaborative AI and machine learning (ML) CoLab to develop next-generation AI technologies.

It is hoped that researchers at Scale Facilitation will collaborate with Deakin to design cutting-edge AI solutions.

Professor Svetha Venkatesh emigrated to Australia from India in 1983, after she obtained degrees in Electrical Engineering, Electronics and Telecommunications. 

In 2004, she was elected a Fellow of the International Association for Pattern Recognition for her research in the formulation and extraction of semantics in multimedia data. 

In 2021, she was elected a Fellow of the Australian Academy of Science. 

Kate Crawford 

Kate Crawford wears many hats. She’s a writer, composer, producer and academic. Her research on AI has appeared in TIME, The Washington Post and Nature, and in 2021 she published a book, “Atlas of AI”, which describes AI as a technology of extraction and examines how it centralises power. 

As a leading scholar of the social implications of AI, her work focuses on the political and social consequences of the misuse of technological advances, which can lead to discrimination, especially against women. 

The research professor at USC Annenberg created the first course studying digital media and its politics at the University of Sydney in 2002. 

She believes we are now experiencing AI’s most dramatic inflection point, as new AI technologies exploit workers and pollute the planet. 

“It’s really important to understand that there are people who do what’s called reinforcement learning with human feedback,” she told Spanish media El Pais last month. “These are workers, often in the Global South, who are really essentially doing content moderation for companies that make AI.”

“If we look at generative AI, doing a search uses five times more energy than traditional searches. So, that is a huge carbon footprint that in many cases is hidden and unseen by most people.”

Beyond her role as senior principal researcher at Microsoft Research, which she has held for the last decade, she also makes time to collaborate with artists on projects and critical visual design. 

In 2019, her project Anatomy of an AI System with Vladan Joler won the Beazley Design of the Year Award and was placed into the Museum of Modern Art's permanent collection. Her essay and app collaboration with the artist Trevor Paglen, Excavating AI: "The Politics of Images in Machine Learning Training Sets", won the Ayrton Prize from the British Society for the History of Science.

Professor Lyria Bennett Moses

AI has many social implications for how we live, which means there are also legal impacts we might need to consider.

This is something Professor Lyria Bennett Moses, the Director of the UNSW Allens Hub and Associate Dean of Research at UNSW Law & Justice, has been thinking about for many years. 

Professor Bennett Moses believes AI is playing a growing role in court processes. She says AI is becoming increasingly common in courts and tribunals, and that this brings both benefits and concerns about its compatibility with fundamental values.

Her research has proposed updates to the Australian curricula on statistics and modelling to include more recent examples of data processing, to educate young students on how this data is being used.

“While not every high school student needs to be able to code a machine learning algorithm, young people need to understand what’s going on behind these systems so they can properly assess their use as future citizens, consumers or in a professional capacity,” she said in 2019.

In February, she said the invention of ChatGPT has positive ramifications in the courtroom, comparing the technology to a calculator for a maths student.

As the co-lead of the Law and Policy theme in the Cyber Security Cooperative Research Centre and Faculty lead in the UNSW Institute for Cyber Security, she is also a published author, having written a book with Dr Michael Guihot in 2020 addressing the legal and policy issues associated with the use of AI.

Professor Mary-Anne Williams 

Since she received her PhD in Artificial Intelligence from the University of Sydney almost three decades ago, Professor Mary-Anne Williams has been a leading data scientist focusing on cognitive science, disruptive technologies and digital transformation. 

She uses a transdisciplinary approach, including human-centric methods from behavioural economics to machine learning to improve entrepreneurship, data analytics and social robots.

She was formerly the Director of the Magic Lab at the University of Technology Sydney (UTS), and a Fellow at Stanford University, where she led Social Robotics research groups and studied the risks and challenges of AI with industry leaders including Apple co-founder Steve Wozniak.

Since 2020, when she was named the Michael Crouch Chair in Innovation at the UNSW Business School, Professor Mary-Anne Williams has worked with the broader innovation community to grow entrepreneurship across the University and accelerate innovative thinking in Australia.

When she received her position, she said her main focus would be “to engage and help energise the UNSW community in innovation and entrepreneurship.” 

“There is an urgent imperative for universities, business and government to collaborate in multi-disciplinary efforts to intensify and scale innovation that can deliver societal benefits essential for a prosperous future,” she said.

“Our entrepreneurs will continue to play a critical role in driving economic activity and building societal resilience as they reimagine and radically transform people’s lives, business and society.”

In 2019, she won the Australasian Distinguished Artificial Intelligence Contribution Award. That same year, she led the UTS Social Robotics team to victory at the 2019 RoboCup World Championship in Social Robotics. 

Barb Hyman  

Described this month as heading “one of Australia’s most quietly successful artificial intelligence companies”, Barb Hyman started out as a lawyer at Herbert Smith Freehills in Melbourne, but quickly realised she wasn’t using her best skills.

She then became a strategy consultant at Boston Consulting Group before being transferred to an HR role at the firm. There, she saw how people were being recruited and didn’t like the system. That was when she conceived a new business idea to improve how both data and diversity were prioritised in hiring decisions.

In 2018, Hyman launched Sapia.ai – an AI-driven recruiting program designed to screen out biases such as gender, race, class and education history, and instead test the candidate’s strongest aptitude for a particular role.

It is already being used by some of Australia’s biggest companies including Qantas, Holland & Barrett, Suncorp, Bunnings, and Woolworths. Last year, the company won a prestigious innovation award for its “Ai Smart Interviewer” technology at the Viva Technology conference in Paris — where Hyman said, “There is nothing else that can remove bias, deliver a world-class consumer experience, and create F1 pit crew level efficiency.”

“Our steadfast approach to ethics and the scientific method has demonstrated that our product can truly innovate, and provide recruiters and companies with the peace of mind they need to trust automation.” 

The post Five women pioneering AI in Australia appeared first on Women's Agenda.

]]>
https://womensagenda.com.au/tech/five-women-pioneering-a-i-in-australia/feed/ 0
Emma Bromet appointed CEO of leading AI and advanced analytics company https://womensagenda.com.au/latest/appointments/emma-bromet-appointed-ceo-of-leading-ai-and-advanced-analytics-company/ https://womensagenda.com.au/latest/appointments/emma-bromet-appointed-ceo-of-leading-ai-and-advanced-analytics-company/#respond Wed, 21 Jun 2023 01:22:38 +0000 https://womensagenda.com.au/?p=69379 Emma Bromet has been appointed CEO of Eliiza, the artificial intelligence and advanced analytics brand owned by Mantel Group.

The post Emma Bromet appointed CEO of leading AI and advanced analytics company appeared first on Women's Agenda.

]]>
Emma Bromet has been appointed CEO of Eliiza, the artificial intelligence and advanced analytics brand owned by the Mantel Group.

As one of eight brands sitting under the Mantel Group consultancy, Eliiza specialises in data science and digital engineering, with expertise in artificial intelligence and machine learning.

Bromet is the third woman to be appointed CEO across Mantel Group's brands over the past three years. The appointment highlights the company's effort to recognise and promote female leaders working in Australia's technology sector.

Bromet, who has been employed by Mantel Group for two years, will officially commence her role as CEO on July 1st, 2023. Her experience in consultancy and management comes at a pivotal time for artificial intelligence governance and leadership.

“There’s never been a more exciting time to work in the data and AI space and the Eliiza team is working at the bleeding-edge of advanced analytics”, she said.

“The speed we are moving at is incredible. AI will transform what clients are asking for, for example how GenAI will transform the way we approach data governance and the importance of building responsible, scalable AI.”

Prior to working for Mantel Group, Bromet was the Engagement and Consumer Manager at Oliver Wyman from 2019 to 2021.

She also worked at the competing data science firm Quantium from 2017 to 2019 as the Executive Manager of global markets and growth analytics.

Mantel Group CEO Con Mouzouris expressed his satisfaction with the appointment, sharing, “we have a clear focus on nurturing and retaining talent and are proud to be able to support Emma to grow into her new role.”

Bromet’s appointment comes at a time of rapid change within the artificial intelligence sector. Her experience in the field, and her expertise in helping businesses create customer value through data strategy, will serve her well in her new role and in shaping the future of AI.

The post Emma Bromet appointed CEO of leading AI and advanced analytics company appeared first on Women's Agenda.

]]>
https://womensagenda.com.au/latest/appointments/emma-bromet-appointed-ceo-of-leading-ai-and-advanced-analytics-company/feed/ 0