Top AI leaders say they don’t want women to get left behind in the tech revolution



At Fortune’s Most Powerful Women Summit on Tuesday, AI leaders from Accenture, Salesforce, and Bloomberg Beta spoke about why many women aren’t using the tech, and how that exacerbates bias within the data. Karin Klein, founding partner at Bloomberg Beta, a venture capital firm, said she read that women are 20% less likely to use ChatGPT in their jobs—and that gap could be even greater. This is because they’re hesitant to use the technology, aware of its biases against them and distrustful of its impact. They’re hopping off the AI train. 

But Klein said women shouldn’t write off AI so quickly; at the pace it is scaling and being integrated, the technology is constantly changing. “You can’t just try it once and say, ‘Oh, I get it,’ or ‘It doesn’t work for me,’ or ‘Guess what, it gave me bad results.’ Well, try it again in six months. You might get better results.”

Klein wants women to test out AI on their own time and bring that expertise, along with practical uses like composing emails and scheduling meetings, to their jobs. She acknowledged that, yes, the tools carry potential hazards, but there is also an abundance of ways to leverage them. And if women don’t get on the AI bandwagon, they’ll fail to keep up with their male peers.

“I don’t want women or any community to be left behind, because we always hear the risks instead of hearing the opportunities,” Klein said.  

Lan Guan, chief AI officer at Accenture, echoed her sentiments. 

“Every woman needs to be in this movement of AI by being an early adopter of AI,” Guan said. “There’s a lot of fear at the beginning, and it’s every business leader’s responsibility to do this grassroots-level enablement, enabling everyone within your company to use the safe and trusted AI tools, because seeing is believing.”

But beyond executives taking the lead, women need to independently seize the moment, Guan said. She recommended that women not only become early adopters of AI, but also be enablers of the GenAI movement. 

And there’s a lot at risk if they don’t. Guan recalled an example in which a chatbot, asked about a woman taking on a new role, assumed she was becoming a housewife, while it assumed a man starting a new job was stepping in as a financial leader. The data set the AI was trained on was biased and produced sexist results, but there is room for change with more women leading AI creation and testing.

“Something’s wrong here. If we’re not taking an active role in enabling AI to be unbiased, starting with each of us, then this kind of problem will never go away,” Guan said.

Paula Goldman, chief ethical and humane use officer for Salesforce, agreed. She said underrepresented groups need to be part of AI development, and her company has been enlisting a diverse set of employees to break and improve its models. They are testing, guiding, and giving feedback on AI, which helps identify biases or weaknesses in the tools. Without their input, AI will continue down the path it’s already on.

“The feedback in using AI systems really changes its trajectory,” Goldman said.  
