AI Weekly: The case for regulation in 2021

2020 was an eventful year, not least because of a pandemic that shows little sign of easing. The AI research community weathered its own troubles, marked by Google’s dismissal of ethicist Timnit Gebru and a spat over AI ethics and “cancel culture” involving University of Washington professor emeritus Pedro Domingos. Yann LeCun, Facebook’s chief AI scientist, left Twitter (and later returned) after a heated debate about the origins of bias in AI models. Companies like Clearview AI, Palantir, and Rekor expanded their reach as they courted favor and business with law enforcement agencies. Meanwhile, AI continued to disadvantage (and in some cases actively harm) certain groups, whether in moderating content, predicting U.K. students’ grades, or cropping images in Twitter timelines.

With 2020 in the rearview mirror and New Year’s resolutions in mind, I think the AI community would do well to consider a proposal made earlier this year by Zachary Lipton, an assistant professor at Carnegie Mellon University. He advocated a one-year, industry-wide moratorium on new studies in order to encourage “thinking” as opposed to “sprinting/rushing/spamming” toward deadlines. “More rigorous presentation, science, and theory are critical both to scientific progress and to fostering productive discourse with the broader public,” wrote the University of California, Berkeley’s Jacob Steinhardt in a meta-analysis. “Moreover, as practitioners apply [machine learning] in critical areas such as health, law, and autonomous driving, a calibrated awareness of the abilities and limits of [machine learning] systems will help us deploy [machine learning] responsibly.”

Lipton and Steinhardt’s advice went unheeded by researchers at the Massachusetts Institute of Technology, the California Institute of Technology, and Amazon Web Services, who in a paper published in July proposed a method for measuring the algorithmic bias of facial analysis algorithms that one critic called “high-tech blackface.” Another study earlier this year, coauthored by researchers from Harvard and Autodesk, aimed to create a “racially balanced” database that captured only a subset of LGBTQ people, an approach University of Washington AI researcher Os Keyes called not just noninclusive but dangerous. More troubling was the August announcement of a study by Indiana parole officials that aimed to predict recidivism with AI, even in the face of evidence that recidivism prediction algorithms amplify racial and gender biases.

In conversations with VentureBeat’s Khari Johnson late last year, Anima Anandkumar, director of machine learning research at Nvidia; Soumith Chintala of Facebook (who created the AI framework PyTorch); and Dario Gil, director of IBM Research, predicted that finding ways for AI to better reflect the kind of society people want to build would become a central theme in 2020. They also expected the AI community to tackle issues of representation, fairness, and data integrity head-on, ensuring that the datasets used to train models account for different groups of people.

This did not happen. Between researchers criticizing Google for its opaque (and censorious) research practices, companies commercializing models whose training contributes to carbon emissions, and problematic language systems finding their way into production, 2020 was in many ways more a year of regression than of progress for the AI community. But at the regulatory level, there is hope of getting the ship back on track.

Spurred in part by the Black Lives Matter movement, more cities and states have raised concerns about facial recognition technology and its uses. Oakland and San Francisco in California and Somerville, Massachusetts, are among the major municipalities where law enforcement agencies are prohibited from using facial recognition. In Illinois, businesses must obtain consent before collecting any kind of biometric information, including facial images. New York recently passed a moratorium on the use of biometric identification in schools through 2022, and lawmakers in Massachusetts have pushed the government to suspend the use of biometric surveillance systems within the commonwealth. More recently, Portland, Maine approved a ballot initiative banning the use of facial recognition by police and city agencies.

In a similar development earlier this year, the European Commission proposed the Digital Services Act, which, if passed, would require companies to disclose information about how their algorithms work. Platforms with more than 45 million users in the European Union would have to offer at least one content recommendation option not based on profiling, and fines for noncompliance could reach up to 6% of a company’s annual revenue.

These examples do not suggest that the entire AI research community disregards ethics and therefore needs reining in. The Association for Computing Machinery’s fourth annual Conference on Fairness, Accountability, and Transparency, for example, will feature, among other work, Gebru’s research on the implications of large language models. Rather, recent history shows that AI production, guided by regulations such as moratoria and transparency laws, encourages fairer use of AI than would otherwise be possible. In one notable case, under pressure from lawmakers, Amazon, IBM, and Microsoft agreed to pause or end sales of facial recognition technology to police.

The interests of shareholders, and even of academia, are often at odds with the well-being of the disenfranchised. But the emergence of legal remedies to curb the abuse and misuse of AI signals a weariness with the status quo. In 2021, it is not unreasonable to expect the trend to continue and the AI community to adapt, whether by force or preemptively. For all of 2020’s failures, with a bit of luck the year laid the groundwork for a rethinking of AI and its effects.

For AI coverage, send news tips to Khari Johnson and Kyle Wiggers, as well as AI editor Seth Colaner – and be sure to subscribe to the weekly AI newsletter and bookmark our AI channel The Machine.

Thank you for reading,

Kyle Wiggers

AI Staff Writer


VentureBeat’s mission is to be a digital town square for technical decision-makers to gain knowledge about transformative technology and transact. Our site delivers essential information on data technologies and strategies to guide you as you lead your organization. We invite you to become a member of our community, to access:

  • up-to-date information on the subjects of interest to you
  • our newsletters
  • gated thought-leader content and discounted access to our prized events, such as Transform
  • networking features, and more

Become a member
