AI&U Episode 8: AI and Human in the Loop - Guest Sara Wasif of Pactera
by AI&U - Sharad Gandhi and Christian Ehl, published on 2018-09-06

The quality of an AI is defined by the data that trains it. Any bias in the training data can affect the training of the ML algorithms and ultimately the accuracy of the AI. A human in the loop for algorithm training, tuning and testing not only ensures that the AI is working as intended; it also serves as a safeguard against bias, because human judges can make the ethical, cultural and emotional judgements that an AI should not be tasked with or trusted to make. However, this introduces another challenge: implicit bias. The biases of the people involved in developing an AI can be inadvertently transferred to the decision making of the system they help develop or train.

What is considered fair and acceptable in a society also changes over time. Predictive models that learn from historical data pose a serious problem when they are used in high-stakes situations. AI-powered systems can amplify the biases of society as a whole, not just those of individuals. A sophisticated AI-powered system that draws on a historical database for its decision making may be blind to bias caused by patterns reflecting centuries-old discrimination. Data that seems neutral may have correlations embedded in it that lead deep-learning programs to make decisions biased against minorities or under-represented groups.

Sampling bias can cause image-recognition programs to "ignore" groups that are under-represented in the data. Popular data sets used to train image-recognition AI have included gender bias, where a picture of a man cooking was misidentified as a woman, and racial bias, where lighter-skinned candidates were deemed more beautiful because they represented the majority. Based on the common characteristics of the majority of today's highest-ranking corporate executives, an AI program tasked with picking out the perfect candidates for senior roles may decide that white males fit the bill best. Oversampling can mitigate some of this challenge by assigning heavier statistical weights to under-represented data (see the sketch at the end of these notes).

Members of a dominant majority are often oblivious to the experiences of other groups. To fight bias, we need more diversity. Diversity among the humans in the loop of an AI helps ensure that no viewpoint is ignored. And as AI amplifies and brings to light the intrinsic biases in our society, we need to start expecting better from our society just as we do from AI.

Philosophical question (and perhaps a slight digression from the topic at hand): Do we expect AI to behave more ethically than most humans? If the task of an AI is to "do what humans can do", can we humans "oversample" in our minds when we are making decisions? I belong to a minority group. If I want an AI program to predict my professional success, would I not want it to take into account all the racial and gender biases that will affect my career in the real world? Maybe over time social values will evolve to a point where human decisions and actions become gender-neutral and bias-free, but for now, do I expect more from a machine than I do from the society I currently live in? Should an AI's decisions represent where society stands today, or where it should stand in an ideal world?

Thank you, Sara, for being the guest on this show! If you have any questions for Sara, please contact her at firstname.lastname@example.org.
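
A footnote for the technically curious: the sketch below illustrates the oversampling idea mentioned above, using plain Python and NumPy. The function name and the resampling scheme are illustrative assumptions for a simple binary-label case, not a specific method discussed in the episode.

import numpy as np

def oversample(X, y, rng=None):
    """Naively rebalance a data set by resampling minority classes
    with replacement until every class has as many examples as the
    majority class.

    X : (n_samples, n_features) feature matrix
    y : (n_samples,) array of integer class labels
    """
    rng = np.random.default_rng(rng)
    classes, counts = np.unique(y, return_counts=True)
    target = counts.max()  # size of the largest class
    index_parts = []
    for cls, count in zip(classes, counts):
        idx = np.flatnonzero(y == cls)
        if count < target:
            # Draw extra examples of the under-represented class
            # with replacement, so it reaches the target size.
            extra = rng.choice(idx, size=target - count, replace=True)
            idx = np.concatenate([idx, extra])
        index_parts.append(idx)
    all_idx = np.concatenate(index_parts)
    rng.shuffle(all_idx)  # mix the classes back together
    return X[all_idx], y[all_idx]

# Toy example: 90 majority-class rows, 10 minority-class rows.
X = np.arange(100, dtype=float).reshape(100, 1)
y = np.array([0] * 90 + [1] * 10)
X_bal, y_bal = oversample(X, y, rng=42)
print(np.bincount(y_bal))  # -> [90 90], classes are now balanced

An equivalent approach, and the one the "heavier statistical weights" phrasing alludes to, is to leave the data unchanged and instead pass larger per-example weights for the under-represented class to the training routine (many ML libraries accept a sample-weight argument). Both techniques make the model pay proportionally more attention to data it would otherwise tend to "ignore".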