
Before We Move Forward With AI, We Must Address Diversity and Inherited Bias

Image Credit: pasiphae/Bigstockphoto.com

Recently, the European Union (EU) and 14 other countries (Australia, Canada, France, Germany, India, Italy, Japan, Mexico, New Zealand, South Korea, Singapore, Slovenia, the UK, and the US) formed a global forum tasked with overseeing responsible development and innovation in AI.

In a joint statement, members of the Global Partnership on AI said it aimed to “bridge the gap” between theory and practice, and foster “responsible and human-centric” development of the technology, rooted in human rights, fundamental freedoms and democratic values.

Together with international organizations and other partners, the group will bring together leading experts from industry, government, civil society and academia to explore responsible AI, data governance, the future of work, and innovation and commercialization. And that is a welcome sign!

Venture capital funding for AI startups

Venture capital funding for AI startups reached record levels in recent years, increasing 72% in 2018 from $9.33bn in 2017. The number of active AI startups in the US grew by 113% from 2015 to 2018. As more money and resources are invested in AI, companies have the opportunity to address the crisis as it unfolds, said Tess Posner, chief executive officer of AI4ALL, a not-for-profit that works to increase diversity in the AI field.

Yet despite an industry-wide push towards diversity and inclusion (D&I), female representation in the AI sector remains below average. A report by the AI Now Institute at New York University revealed a glaring diversity gap in the field of artificial intelligence, one that will affect how these systems approach data and make decisions. Even progressive companies such as Facebook and Google are lagging when it comes to AI diversity: women make up only 15% of AI research staff at Facebook, and the figure is even lower at Google (10%).

There is also significant variance among women of different ethnic backgrounds, with white women still favored over women from other groups. The overwhelming focus on ‘women in tech’ is too narrow and likely to privilege white women over others. We need to acknowledge how the intersections of race, gender, and other identities and attributes shape people’s experiences with AI.

Diversity in AI development?

As AI transforms how you manage your workforce and complete day-to-day tasks, it is essential to keep an eye on the possibility of bias at every level. AI requires a balance with human intelligence, but this human intelligence needs gender, cultural and racial diversity to create a solution that can consider multiple factors when making important decisions. The AI field, which is overwhelmingly white and male, is at risk of replicating or perpetuating historical biases and power imbalances, the report said. Examples cited include image recognition services making offensive classifications of minorities, chatbots adopting hate speech, and Amazon technology failing to recognize users with darker skin colors. The biases of systems built by the AI industry can be largely attributed to the lack of diversity within the field itself, the report said.

An experiment undertaken at the Massachusetts Institute of Technology (MIT), for example, tested three commercially available face-recognition systems, developed by Microsoft, IBM and the Chinese firm Megvii. The systems correctly identified the gender of white men 99% of the time, but the error rate rose to as much as 35% for darker-skinned women. Amazon's recognition software fared no better in a separate test, falsely matching 28 members of the US Congress with criminal mugshots.
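The kind of disparity found in such audits is straightforward to measure once predictions are broken down by demographic group. The sketch below is illustrative only: the function is a generic per-group accuracy calculation, and the toy records are hypothetical numbers echoing the gap described above, not the MIT study's raw data.

```python
from collections import defaultdict

def accuracy_by_group(records):
    """Compute classification accuracy separately for each demographic group.

    `records` is an iterable of (group, predicted_label, true_label) tuples.
    Returns a dict mapping group -> accuracy in [0, 1].
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, predicted, actual in records:
        total[group] += 1
        if predicted == actual:
            correct[group] += 1
    return {g: correct[g] / total[g] for g in total}

# Hypothetical toy data: the classifier is right 99 times out of 100 for
# one group but only 35 times out of 100 for another.
records = (
    [("lighter-skinned men", "male", "male")] * 99
    + [("lighter-skinned men", "female", "male")] * 1
    + [("darker-skinned women", "female", "female")] * 35
    + [("darker-skinned women", "male", "female")] * 65
)
print(accuracy_by_group(records))
```

An aggregate accuracy figure would hide this gap entirely, which is why audits of this kind always report per-group metrics.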

Biased AI is built on biased data

Over 70% of all computer programmers are white males, and despite the best attempts at neutrality, we were raised in a society that inherently devalues women and people of color (POC), teaching us both explicitly and implicitly that they are less capable than white men. This colors our worldview and, in turn, the technology we create; we aren’t necessarily actively misogynistic or racist, but our environment allows us to perpetuate the biases ingrained in us by society.

Amazon’s controversial Rekognition facial recognition AI struggled with darker-skinned women in particular, although separate analyses have found that other AIs face similar difficulties with non-white faces. Amazon also had to scrap a four-year-old recruitment matching tool because it had taught itself to favor male applicants over female ones. Equally qualified female candidates were ranked lower than their male counterparts, with some graduates of all-female colleges losing whole points because of their alma mater. The system was trained on applications submitted over a 10-year period, most of which came from men.
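The failure mode of that recruitment tool can be checked with a standard fairness screen: comparing selection rates between groups. A minimal sketch, using hypothetical outcome data shaped like the scenario above (the "four-fifths rule" threshold of 0.8 is a real convention from US employment-discrimination guidance, but the numbers are invented):

```python
def selection_rate(outcomes):
    """Fraction of candidates marked as selected (True)."""
    return sum(outcomes) / len(outcomes)

def disparate_impact(protected, reference):
    """Ratio of the protected group's selection rate to the reference
    group's. Values below 0.8 fail the common 'four-fifths rule'
    screening used in US hiring-discrimination analysis."""
    return selection_rate(protected) / selection_rate(reference)

# Toy screening results: a model trained on male-dominated historical
# data shortlists 60% of male applicants but only 30% of equally
# qualified female applicants.
male_outcomes = [True] * 60 + [False] * 40
female_outcomes = [True] * 30 + [False] * 70

ratio = disparate_impact(female_outcomes, male_outcomes)
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.50, well below 0.8
```

A check like this on held-out applicants would have flagged the tool's skew long before deployment.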

So it did not come as a surprise when, just over a month ago, Amazon decided to stop selling its facial recognition technology to police, though only for a year. “We’ve advocated that governments should put in place stronger regulations to govern the ethical use of facial recognition technology, and in recent days, Congress appears ready to take on this challenge,” Amazon said in a blog post. “We hope this one-year moratorium might give Congress enough time to implement appropriate rules, and we stand ready to help if requested.”

The move followed a similar announcement by IBM earlier in June: the company will no longer sell general-purpose facial recognition or analysis software, though it did not specify a timeline.

What can we all do to eliminate bias when building AI?

The report cautioned against addressing diversity in the tech industry by fixing the “pipeline” problem, or the makeup of who is hired, alone. Men currently make up 71% of the applicant pool for AI jobs in the US, according to the 2018 AI Index, an independent report on the industry released annually.

The AI Now Institute suggested additional measures, including publishing worker compensation levels, sharing harassment and discrimination transparency reports, and changing hiring practices to increase the number of underrepresented groups at all levels.

What about some government bills? Some additional efforts to increase transparency around how algorithms are built and how they work may be necessary to fix the diversity problems in AI. In April 2019, the US senators Cory Booker and Ron Wyden introduced the Algorithmic Accountability Act, a bill that would require algorithms used by companies that make more than $50m per year or hold information on at least 1 million users to be evaluated for biases.

In addition, we all can take the following steps:

  • Enhance awareness in the workplace and beyond to shift internal and public mindsets from a gender-biased industry to a more gender-neutral one. That way, the tech industry can become more appealing to all genders and deconstruct the prejudices that once shaped the sector.

  • Develop innovative AI software that can, for example, generate bias-free salary suggestions and projections based on a number of crucial factors, none of which take the employee’s appearance, ethnicity, or gender into account. Instead, the software will analyze variables such as education, certifications, experience, and performance to generate salary suggestions, as well as suggestions regarding bonuses and promotions.

  • Feed discrimination findings back into the AI so that any bias is corrected and future results improve.
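The salary-suggestion idea in the second bullet can be sketched in a few lines. Everything here is hypothetical: the field names, weights, and base salary are invented for illustration, and the point is simply that the scoring function refuses to read protected attributes even when they are present in the record.

```python
# Attributes the model must never use (hypothetical field names).
PROTECTED = {"gender", "ethnicity", "age", "appearance"}

# Illustrative per-unit salary weights for job-relevant variables.
WEIGHTS = {"education_years": 1200, "certifications": 800,
           "experience_years": 1500, "performance_rating": 2500}
BASE_SALARY = 40_000

def suggest_salary(candidate):
    """Suggest a salary from job-relevant fields only, dropping any
    protected attribute before scoring."""
    relevant = {k: v for k, v in candidate.items() if k not in PROTECTED}
    return BASE_SALARY + sum(WEIGHTS.get(k, 0) * v
                             for k, v in relevant.items())

alice = {"education_years": 4, "certifications": 2, "experience_years": 6,
         "performance_rating": 4, "gender": "female"}
bob = dict(alice, gender="male")
assert suggest_salary(alice) == suggest_salary(bob)  # gender has no effect
```

One caveat worth noting: dropping protected attributes does not by itself remove bias, since other variables can act as proxies for them, which is exactly why the feedback loop in the third bullet matters.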

Path forward

The problem of AI performing poorly with certain groups could be fixed if a more diverse set of eyes were involved in the technology’s development. And while tech companies say they are aware of the problem, they haven’t done much to fix it. Data collection and preparation should be done by teams with diverse experience, backgrounds, ethnicities, ages, and viewpoints. The view of someone from a developing country in Asia is going to differ from that of someone from a Western country. An illustrative example was a robotic vacuum cleaner in South Korea that sucked up the hair of a woman sleeping on the floor: the non-diverse team that assembled the training data did not anticipate people sleeping on the floor, although it is very common in some cultures.

Another important type of diversity is intellectual diversity. This includes academic discipline, risk tolerance, political perspective, collaboration style - any of the individual characteristics which make us all unique. This type of diversity is known to enhance creativity and productivity growth, but it also improves the likelihood of detecting and correcting bias. Intellectual diversity can even exist within a single human who has developed a multidisciplinary background and experiences dealing with a broad range of people. The value of such people will increase as AI continues to affect a great range of ventures.

The objective should not be to simply diversify the privileged class of technical workers engaged in developing AI systems in the hope that this will result in greater equity. Nor should it be to develop bespoke technical fixes for systemic problems of bias and error, hoping that new ones won’t come along. Instead, by broadening our frame of reference and integrating both social and technical approaches, we can begin to chart a better path forward.

We must make conscious decisions to elevate the POC and women around us to roles where they are part of the decision-making process. We have to listen when they tell us about the ways our privilege is clouding our judgment, and advocate for and work with them to fix the issues. We need to make sure our hiring strategies are deliberately diverse, because right now they’re passively biased, and that is not helping anyone.

Upskill the workforce with the knowledge of how AI works

This will allow employees to spot any instance of bias due to the lack of AI diversity and promptly address it. Without internal capabilities, issues like this could continue to be overlooked, perpetuating the adverse effects of low diversity in the AI sector.

Author

Eugina, a female executive and an immigrant, started her telecom career as a secretary and has gone on to become the CMO of the prominent industry organization, the Telecom Infra Project (TIP).

She has more than 20 years of strategic marketing leadership experience, leading marketing and communications for both small companies and Fortune 500 global technology companies such as Starent and Cisco.

Previously, she served as the VP of Marketing of the major telecom industry disruptor Parallel Wireless and was instrumental in creating the Open RAN market category.

She is a sought-after speaker at many technology and telecom events and webinars, and a well-known telecom writer contributing to publications such as The Fast Mode, RCR Wireless, Developing Telecoms and many others.

She is also an inventor, holding 12 patents covering 5G and Open RAN.

She is a founding member of the Boston chapter of CHIEF, an organization for women in the C-suite, created to strengthen their leadership, magnify their influence, pave the way for others, cross-pollinate power across industries, and effect change from the top down.

Her passion is to help other women in tech to realize their full potential through mentorships, community engagement, and workshops. Her leadership development book “Unlimited: How to succeed in a workplace that was not designed for you” is due for release in May 2023.

Ms. Jordan resides in Massachusetts with her husband, teenage son, and three rescue dogs. She loves theater and museums. She volunteers for dog rescues and programs that help underprivileged children and women.

Ms. Jordan has a Master’s in Teaching from Moscow Pedagogical University and completed undergraduate studies in computing at CDI College in Toronto, Canada.
