
Responsible AI has a burnout problem

Breakneck speed

The rapid pace of artificial-intelligence research doesn't help either. New breakthroughs come thick and fast. In the past year alone, tech companies have unveiled AI systems that generate images from text, only to announce, just weeks later, even more impressive AI software that can create videos from text alone. That's impressive progress, but the harms potentially associated with each new breakthrough pose a relentless challenge. Text-to-image AI could violate copyrights, and it might be trained on data sets full of toxic material, leading to unsafe outcomes.

“Chasing whatever’s really trendy, the hot-button issue on Twitter, is exhausting,” Chowdhury says. Ethicists can’t be experts on the myriad problems that every single new breakthrough poses, she says, yet she still feels she has to keep up with every twist and turn of the AI information cycle for fear of missing something important.

Chowdhury says that working as part of a well-resourced team at Twitter has helped, reassuring her that she does not have to bear the burden alone. “I know that I can go away for a week and things won’t fall apart, because I’m not the only person doing it,” she says.

But Chowdhury works at a big tech company with the funds and desire to hire an entire team to work on responsible AI. Not everyone is as lucky.

People at smaller AI startups face a lot of pressure from venture capital investors to grow the business, and the contracts they sign with investors often don’t account for the extra work required to build responsible tech, says Vivek Katial, a data scientist at Multitudes, an Australian startup working on ethical data analytics.

The tech sector should demand more from venture capitalists to “recognize the fact that they need to pay more for technology that’s going to be more responsible,” Katial says.

The trouble is, many companies can’t even see that they have a problem to begin with, according to a report released by MIT Sloan Management Review and Boston Consulting Group this year. AI was a top strategic priority for 42% of the report’s respondents, but only 19% said their organization had implemented a responsible-AI program.

Some companies may believe they’re giving thought to mitigating AI’s risks, but they simply aren’t hiring the right people into the right roles, or giving those people the resources they need to put responsible AI into practice, says Gupta.
