Artificial Intelligence Field is Dominated by White Males; Report Explores Diversity Crisis


The artificial intelligence industry is facing a “diversity crisis,” researchers from the AI Now Institute said in a report released yesterday, raising key questions about the direction of the field.

Women and people of color are deeply underrepresented, the report found, citing studies showing that about 80 percent of AI professors are men and that women make up just 15 percent of AI research staff at Facebook and 10 percent at Google.

People of color are also sidelined, making up only a fraction of staff at major tech companies. The result is a workforce frequently driven by white and male perspectives, building tools that often affect other groups of people.

Bringing Diversity to the AI Workforce

The lack of diversity is a hurdle across the tech industry, but it presents specific dangers in AI, where potentially biased technology, like facial recognition, can disproportionately affect historically marginalized groups.

Tools like a program introduced in 2017 that scans faces to determine sexuality echo injustices of the past, the researchers write. Rigorous testing is needed. But more than that, the makers of AI tools have to be willing not to build the riskiest projects. “We need to know that these systems are safe as well as fair,” AI Now Institute co-director Kate Crawford says.

Tech industry employees have taken a stand on some major AI issues, pressing their companies to drop or review the use of sensitive tools that could hurt vulnerable groups. Workers at Amazon have questioned executives about the company’s facial recognition product. Recently, Google workers pushed back against an AI review board that included the president of the Heritage Foundation, noting the group’s history of lobbying against LGBTQ rights. The company soon dissolved the board entirely.

What Steps Can the Industry Take to Address Bias and Discrimination in AI Systems?

The report lists 12 recommendations for AI researchers and companies to improve workplace diversity and address bias and discrimination in AI systems.

  1. Publish compensation levels, including bonuses and equity, across all roles and job categories, broken down by race and gender.
  2. End pay and opportunity inequality, and set pay and benefit equity goals that include contract workers, temps, and vendors.
  3. Publish harassment and discrimination transparency reports, including the number of claims over time, the types of claims submitted, and actions taken.
  4. Change hiring practices to maximize diversity: include targeted recruitment beyond elite universities, ensure more equitable focus on under-represented groups, and create more pathways for contractors, temps, and vendors to become full-time employees.
  5. Commit to transparency around hiring practices, especially regarding how candidates are leveled, compensated, and promoted.
  6. Increase the number of people of color, women and other under-represented groups at senior leadership levels of AI companies across all departments.
  7. Ensure executive incentive structures are tied to increases in the hiring and retention of underrepresented groups.
  8. For academic workplaces, ensure greater diversity in all spaces where AI research is conducted, including AI-related departments and conference committees.
  9. Remedying bias in AI systems is almost impossible when these systems are opaque. Transparency is essential, and begins with tracking and publicizing where AI systems are used, and for what purpose.
  10. Rigorous testing should be required across the lifecycle of AI systems in sensitive domains. Pre-release trials, independent auditing, and ongoing monitoring are necessary to test for bias, discrimination, and other harms (a minimal sketch of one such check follows this list).
  11. The field of research on bias and fairness needs to go beyond technical debiasing to include a wider social analysis of how AI is used in context. This necessitates including a wider range of disciplinary expertise.
  12. The methods for addressing bias and discrimination in AI need to expand to include assessments of whether certain systems should be designed at all, based on a thorough risk assessment.
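
To make recommendation 10 concrete, here is a minimal sketch of the kind of pre-release bias check the report calls for: comparing a model's rate of favorable outcomes across demographic groups. The metric, the four-fifths threshold, and the function names are illustrative assumptions, not drawn from the report itself.

```python
# Illustrative pre-release bias check: compare positive-outcome rates
# across demographic groups and flag large disparities.
from collections import defaultdict

def positive_rates(predictions, groups):
    """Return the fraction of positive predictions for each group."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred == 1)
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact_flags(predictions, groups, threshold=0.8):
    """Flag any group whose positive rate falls below `threshold` times
    the highest group's rate (the common 'four-fifths' heuristic, used
    here only as an example)."""
    rates = positive_rates(predictions, groups)
    highest = max(rates.values())
    flagged = {g: r for g, r in rates.items() if r < threshold * highest}
    return rates, flagged

if __name__ == "__main__":
    # Toy data: 1 = favorable outcome (e.g., a resume passing screening).
    preds  = [1, 1, 0, 1, 0, 0, 1, 0, 0, 0]
    groups = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]
    rates, flagged = disparate_impact_flags(preds, groups)
    print("positive rates by group:", rates)
    print("groups below threshold:", flagged)
```

A check like this is only a starting point; as the report's later recommendations note, it would sit alongside independent auditing, ongoing monitoring after deployment, and broader social analysis of how the system is used in context.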
