Using Bias Intentionally in Artificial Intelligence

Image Credit: Natali_Mis/BigStockPhoto.com

Every day, most people encounter artificial intelligence in some way. At home, your streaming service “learns” your viewing habits and suggests similar programs for your consideration. Enterprises use AI to achieve accuracy and efficiency, which can include keeping costs down and enabling better, faster delivery of goods or services. And AI can help save lives, for example through disease identification and screening, or by signaling caution on the road in certain conditions. The positive use cases for AI are abundant.

On the flip side, there can be negative AI experiences as well, which has led to the widespread, and incorrect, assumption that bias is always harmful. Trustworthy, data-driven AI systems must consider the effects of bias and ensure that harmful bias is mitigated, sometimes by intentionally using positive bias. For example, a retailer’s system may decide which credit applicants to approve based on credit scores that are themselves derived from biased data; this can reinforce systemic bias and intensify discrimination. Conversely, a retailer can recommend items that match your profile, giving you an improved online shopping experience.

Thus, it’s equally important to distinguish when bias is acceptable and when it is not. The rapid growth of algorithm-driven services has led to growing concerns among civil society, legislators, industry bodies and academics about potential unintended and undesirable biases within intelligent systems.

In response to these concerns, the IEEE Standards Association (IEEE SA) established a standards development project and Working Group, IEEE P7003 - Standard for Algorithmic Bias Considerations, to provide a development framework for creating algorithmic systems that avoid unintended, unjustified and inappropriately differential outcomes for users. IEEE SA also works with the IEEE CertifAIEd criteria for certification in algorithmic bias.

What is algorithmic bias in AI?

Algorithmic bias in AI occurs when an algorithm produces systematically prejudiced results due to assumptions made in the machine learning process. Bias typically arises when the data the algorithm is trained on is imbalanced. Unjustified bias can produce discriminatory decisions, reinforce systemic bias and intensify power imbalances. However, algorithmic bias does not disadvantage everyone equally. For example, bias can occur when there is far more data for one stakeholder group than another, or when the attributes recorded for a stakeholder group are themselves biased. Bias can be tied to non-human factors as well.
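
As a rough illustration of how such an imbalance can be surfaced before training even begins, here is a minimal Python sketch. The column name, group labels and the 10% threshold are illustrative assumptions, not values drawn from any IEEE standard; the sketch simply counts how many training records fall into each stakeholder group and flags groups that are badly under-represented.

```python
import pandas as pd

def report_group_imbalance(df: pd.DataFrame, group_col: str, min_share: float = 0.10) -> pd.DataFrame:
    """Summarize how training records are distributed across stakeholder groups.

    Groups whose share of the data falls below `min_share` are flagged as
    under-represented, a common precursor to algorithmic bias.
    """
    counts = df[group_col].value_counts()
    shares = counts / counts.sum()
    summary = pd.DataFrame({"records": counts, "share": shares.round(3)})
    summary["under_represented"] = summary["share"] < min_share
    return summary

# Hypothetical training set: 'group' is the stakeholder attribute of interest.
train = pd.DataFrame({"group": ["A"] * 900 + ["B"] * 80 + ["C"] * 20})
print(report_group_imbalance(train, "group"))
```

In this invented example, group C holds only 2% of the records, so any model trained on the data is likely to perform worse for that group.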

As an example, facial recognition software can be biased toward a specific group of people. If a system’s training data consists mostly of white, blue-eyed, blond men, the software will be positively biased for that group but negatively biased toward others who don’t share those features. A brown-eyed woman with darker skin and black hair might not be recognized at all due to algorithmic bias, so not everyone facially scanned at border security and customs may be registered. There are numerous examples in which this bias can have serious and negative consequences, such as delays for international travelers at borders.

As an example of bias tied to non-human factors, let’s say a German automobile manufacturer has developed an autonomous vehicle. The prototype was developed in Germany, and its system was trained on visual data collected there. In Germany, the vehicle performs as expected, but in other countries it encounters problems because the visual elements (streets, signage, buildings, etc.) look very different. Because it wasn’t trained to recognize these other countries’ environments, it is biased against them.

On the other hand, an example where intentional bias is appropriate is the development of a healthcare app designed to help men manage prostate health. Obviously, the app should be biased towards men. Conversely, an app for breast cancer should include both sexes because about 1 out of every 100 breast cancers diagnosed in the United States is found in a man.

Three sources of bias

At IEEE SA, the Algorithmic Bias Working Group sought to tackle bias issues and determined there are three basic sources of bias:

  • Bias by the algorithm developers: This type of bias stems from the choice of optimization target. As an example, a business’s worker-management algorithm optimizes processes for maximum worker output, but not for worker health.
  • Bias within the system itself: The system itself shows differences in performance levels for certain categories, such as higher failures in facial recognition based on race and gender (a sketch of how to measure such gaps follows this list).
  • Bias by users of the system: Users can interpret and act upon the output generated by the algorithm in a biased manner. For example, confirmation bias happens when a generative AI chatbot presents content that you already believe, or want to believe, and you accept it without checking its accuracy.
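
The second source, differences in system performance across categories, can be made measurable with a simple audit. The following is a hypothetical Python sketch (the group labels, prediction values and data are invented for illustration); it computes the error rate a model achieves for each group so that a large gap, such as higher facial-recognition failures for one group, becomes visible.

```python
from collections import defaultdict

def error_rate_by_group(records):
    """records: iterable of (group, predicted_label, true_label) tuples.

    Returns the error rate observed for each group, so that large gaps between
    groups (e.g., higher facial-recognition failures for one group) stand out.
    """
    errors = defaultdict(int)
    totals = defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        if predicted != actual:
            errors[group] += 1
    return {group: errors[group] / totals[group] for group in totals}

# Hypothetical evaluation results for a face-matching system.
results = (
    [("group_1", "match", "match")] * 95
    + [("group_1", "no_match", "match")] * 5
    + [("group_2", "match", "match")] * 70
    + [("group_2", "no_match", "match")] * 30
)
print(error_rate_by_group(results))
# {'group_1': 0.05, 'group_2': 0.3} -> a sixfold disparity worth investigating
```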

Additionally, the context of a biased decision matters greatly. What is fair in one situation may not be fair in another. For example, a credit system that is biased against applicants who cannot pay is essential to the business; a bias based on gender or race, however, whether intentional or unintentional, would be a serious issue.

The bottom line is to ensure that any bias in the system serves the function the system was designed to perform. If the system is trained appropriately, it will be trusted and adopted.

Evaluating and understanding bias risk

The goal of the IEEE P7003 Working Group is to raise awareness of bias in AI systems so that users and creators can evaluate whether a system is actually performing the tasks they want it to perform.

Recommendations for efficient development of AI systems:

  • Use a bias profile and supporting processes to determine the impact and risk of bias (a sketch of such a profile follows this list).
  • Fully understand the intention and context in which the system is being created and how the stakeholders are impacted.
  • Clearly define and understand the actual task you’re asking the system to do and make sure the results fit with the desired tasks.
  • Understand stakeholders who either use the system or will be impacted by it.
  • Repeat the evaluation process at regular intervals throughout the system life cycle, because usage and stakeholders often evolve over time.
  • Revisit the profile if the system is deployed into a new context.
  • Build diverse development and evaluation teams.
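
To make the first recommendation more concrete, here is a minimal, hypothetical sketch of what a bias profile record could look like in Python. The fields, defaults and review interval are assumptions for illustration only; IEEE P7003 does not prescribe this structure. The point is that the intended context, the defined task, the affected stakeholders, known data gaps and a review schedule are written down explicitly and revisited when the system is redeployed or its usage changes.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class BiasProfile:
    """A lightweight record of bias considerations for one system.

    Hypothetical structure for illustration; not prescribed by IEEE P7003.
    """
    system_name: str
    intended_context: str          # where and how the system is meant to be used
    defined_task: str              # the actual task the system is asked to perform
    last_reviewed: date
    stakeholder_groups: list[str] = field(default_factory=list)
    known_data_gaps: list[str] = field(default_factory=list)
    review_interval_days: int = 180  # re-evaluate at regular intervals

    def needs_review(self, today: date | None = None) -> bool:
        """True if the profile is due for another evaluation pass."""
        today = today or date.today()
        return (today - self.last_reviewed).days >= self.review_interval_days

profile = BiasProfile(
    system_name="loan-pre-screening",
    intended_context="online retail credit applications, single country",
    defined_task="rank applications for manual review, not final approval",
    last_reviewed=date(2024, 1, 15),
    stakeholder_groups=["applicants", "credit officers", "regulator"],
    known_data_gaps=["few records for applicants under 21"],
)
print(profile.needs_review(date(2024, 9, 1)))  # True: more than 180 days since last review
```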

About IEEE SA and the IEEE P7003 Working Group

The IEEE P7003 Working Group aims to provide individuals and organizations creating algorithms with certification-oriented methodologies that clearly articulate accountability and clarity around how algorithms target, assess and influence the users and stakeholders of autonomous or intelligent systems. IEEE P7003 will allow algorithm creators to communicate to regulatory authorities and users that up-to-date best practices are used in the design, testing and evaluation of algorithms to avoid unjustified differential impact on users.

The initiation of IEEE P7003 comes in conjunction with the recent release of the IEEE publication Ethically Aligned Design: A Vision for Prioritizing Human Wellbeing with Artificial Intelligence and Autonomous Systems, a document that encourages technologists to prioritize ethical considerations in the creation of autonomous and intelligent technologies. Both the document and IEEE P7003 are inspired by the work being done in The IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems.

The Working Group continues to need professionals from all industries and roles to focus on this issue. Learn how to join the IEEE Algorithmic Bias Working Group.

Author

Gerlinde Weger is a CM/AI Ethics consultant with over 28 years’ experience in advanced technology. With global cross-sector experience, Gerlinde focuses on building organizational capability, enabling people and the culture, processes and technology they work with to successfully adapt, change and grow. She provides direction and capability, supporting leadership in creating strategies and frameworks that guide teams in enhancing their organizational functioning and performance. Gerlinde’s deep and broad experience leading strategic change within organizations has put her at the forefront of creating and piloting AI ethical frameworks and models. As Chair of the Algorithmic Bias Working Group with the IEEE Standards Association, she is part of the core team creating the ethical criteria for the IEEE Ethics Certification program for Autonomous Intelligent Systems and is involved in the development of its standards for ethical AI design and use. Complementing Gerlinde’s international experience are her MBA, ITIL and Lean Six Sigma certifications.
