Five ways to make AI a greater force for good in 2021

It’s not that large-scale models could never reach common-sense understanding. That’s still an open question. But there are other avenues of research that deserve greater investment. Some experts have placed their bets on neurosymbolic AI, which combines deep learning with symbolic knowledge systems. Others are experimenting with more probabilistic techniques that use far less data, inspired by a human child’s ability to learn from very few examples.

In 2021, I hope the field will realign its incentives to prioritize comprehension over prediction. Not only could this lead to more technically robust systems; the improvements would also have major social implications. The susceptibility of current deep-learning systems to being fooled, for example, undermines the safety of self-driving cars and raises dangerous possibilities for autonomous weapons. The inability of these systems to distinguish correlation from causation is also at the root of algorithmic discrimination.

Empower marginalized researchers

If algorithms codify the values and perspectives of their creators, a broad cross-section of humanity should be at the table when they are developed. I saw no better evidence of this than at NeurIPS in December 2019. That year, the conference had a record number of women and minority speakers and attendees, and the shift in the tenor of the proceedings was tangible. There were more talks than ever grappling with AI’s influence on society.

At the time I lauded the community for its progress. But Google’s treatment of Timnit Gebru, one of the few prominent Black women in the industry, showed how far there still is to go. Diversity in numbers is meaningless if those individuals aren’t empowered to bring their lived experience into their work. I’m optimistic, though, that the tide is changing. The flashpoint sparked by Gebru’s firing turned into a critical moment of reflection for the industry. I hope this momentum continues and converts into long-lasting, systemic change.

Center the perspectives of impacted communities

There’s also another group to bring to the table. One of the most exciting trends from last year was the emergence of participatory machine learning. It’s a provocation to reinvent the process of AI development to include those who ultimately become subject to the algorithms.

In July, the first conference workshop dedicated to this approach collected a wide range of ideas about what that could look like. It included new governance procedures for soliciting community feedback; new model auditing methods for informing and engaging the public; and proposed redesigns of AI systems to give users more control of their settings.

My hope for 2021 is to see more of these ideas trialed and adopted in earnest. Facebook is already testing a version of this with its external oversight board. If the company follows through on allowing the board to make binding changes to the platform’s content moderation policies, the governance structure could become a feedback mechanism worthy of emulation.

Codify guardrails into regulation

Thus far, grassroots efforts have led the movement to mitigate algorithmic harms and hold tech giants accountable. But it will be up to national and international regulators to set up more permanent guardrails. The good news is that lawmakers around the world have been watching and are in the midst of drafting legislation. In the US, members of Congress have already introduced bills to address facial recognition, AI bias, and deepfakes. Several of them also sent a letter to Google in December expressing their intent to continue pursuing this regulation.

So my last hope for 2021 is to see some of these bills pass. It’s time we codified what we’ve learned over the past few years and moved away from the fiction of self-regulation.


