AI&U Episode 3 Bias in AI decisions by AI&U - Sharad Gandhi and Christian Ehl published on 2018-07-24T08:59:41Z

We expect decisions made by AI to be neutral and free of bias, because a machine has no emotions, opinions, or personal agenda. Yet AI decisions do carry bias. Why is that? Simple: AI inherits bias from the examples used in its training. A Deep Learning Neural Network (DLNN) develops its decision-making algorithm during supervised training on validated examples. The opinions and biases of the real people whose decisions those examples represent are baked into them, and the AI inherits that embedded bias.

The validated examples used for DLNN training are like pages from history textbooks. If the textbooks are biased, we acquire a biased view of the world, which in turn makes our decisions biased. The same happens with AI.

By itself, bias is neither good nor bad; it is simply a natural consequence of learning. All humans have biases, which develop over a lifetime of experiences. They represent the integration of our personal and cultural value systems and opinions.

The bias of an AI system can be reduced by training it on examples that represent a wide diversity of opinions and biases. Self-learning AI systems learn purely from their own observations rather than from examples of human decisions, but such systems are currently used only for games, not for real-life situations.

Bias is defined as: an inclination or prejudice for or against one person or group, especially in a way considered to be unfair.
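The way labels carry human bias into a trained model can be illustrated with a minimal sketch. The data, groups, and decisions below are entirely hypothetical, and the "model" is just a frequency estimate standing in for a real supervised learner; the point is only that a learner fitted to biased labels reproduces that bias.

```python
from collections import defaultdict

# Hypothetical training data: (group, qualified?, past human decision).
# Both groups are equally qualified on paper, but the historical
# reviewer approved group "A" more often -- the bias lives in the labels.
examples = [
    ("A", True, "hire"), ("A", True, "hire"), ("A", False, "hire"),
    ("A", False, "no"),
    ("B", True, "hire"), ("B", True, "no"), ("B", False, "no"),
    ("B", False, "no"),
]

# "Training": estimate P(hire | group) from the labeled examples,
# the way any supervised learner fits the label distribution it is shown.
counts = defaultdict(lambda: [0, 0])  # group -> [hires, total]
for group, _qualified, label in examples:
    counts[group][1] += 1
    if label == "hire":
        counts[group][0] += 1

def predict(group):
    hires, total = counts[group]
    return "hire" if hires / total > 0.5 else "no"

# The model reproduces the reviewer's bias: otherwise-identical
# candidates from different groups get different predictions.
print(predict("A"))  # -> "hire" (3/4 approvals in the training labels)
print(predict("B"))  # -> "no"   (1/4 approvals in the training labels)
```

Nothing in the code "intends" to discriminate; the disparity comes entirely from the historical decisions it was trained on, which is exactly the inheritance mechanism described above.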