Khoja scientist makes artificial intelligence ethical

 
A Khoja research scientist in California is trying to make artificial intelligence (AI) more ethical by investigating ways in which AI reasoning can be better explained and understood.

Nazneen Rajani, PhD, works as a Research Scientist at Salesforce Research, where she leads efforts on Explainable AI (XAI). What she loves most about her job is the huge impact her research could have on the lives of thousands of people. State-of-the-art AI systems currently being deployed, from Alexa and automatic video captioning to cancer diagnosis, all use deep learning models.

Deep learning

Deep learning refers to training large neural networks with many layers on large amounts of data. Although these deep learning models perform with near-human accuracy on many problems, they are considered black boxes because we do not know for certain how they arrive at their predictions.
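To make that concrete, here is a minimal sketch (my own illustration, not from the article) of such a model in PyTorch: several layers stacked on top of one another, whose intermediate computation is hard for a human to inspect.

```python
# A minimal sketch of the kind of "deep" model the text describes:
# a neural network with several stacked layers, trained on data.
import torch
import torch.nn as nn

class SmallDeepNet(nn.Module):
    def __init__(self, in_dim=784, hidden=256, n_classes=10):
        super().__init__()
        # "Many layers" stacked on top of each other is what makes it "deep".
        self.layers = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_classes),
        )

    def forward(self, x):
        return self.layers(x)

model = SmallDeepNet()
x = torch.randn(32, 784)   # a batch of 32 flattened images
logits = model(x)          # predictions come out, but the intermediate
print(logits.shape)        # computation is opaque: the "black box"
```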

The goal of XAI is to alleviate this problem by having deep learning models provide an explanation for their predictions, so that they are more trustworthy and so that humans can understand why they fail when they do.

“What color is that car?”

XAI is difficult because we would like the explanation to be in natural language, or another form humans can understand, while also being faithful to how the model actually reached its decision. For example, in the final year of her PhD in AI at UT Austin, Nazneen worked on visual question answering, in which researchers ask a model natural-language questions about an image.

She explains: “Suppose that you ask an AI model ‘what color is the car?’, an explanation would be the parts of the image the model was focusing on while making the answer prediction. So if it’s looking at the region of the image near the sky and not on the road, even if the model makes the right prediction, we know that it is not trustworthy and it has only memorized based on what it saw during training.”
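The idea can be illustrated with a toy example (my own sketch with made-up region names and attention weights, not Nazneen’s actual system): the attention weights say which image regions the model relied on, and a human can check whether that focus makes sense for the question.

```python
# Toy illustration of attention-based explanation in visual question answering:
# the weights show which image regions the model "looked at" when answering,
# so a person can judge whether the focus is sensible for the question asked.
import numpy as np

regions = ["sky", "tree", "road", "car"]        # hypothetical image regions
# Hypothetical attention weights produced while answering "What color is the car?"
attention = np.array([0.55, 0.20, 0.15, 0.10])  # sums to 1.0

explanation = sorted(zip(regions, attention), key=lambda p: -p[1])
print("Model attended most to:", explanation[0][0])

# If most of the weight falls on the "sky" region rather than the "car",
# the answer is not trustworthy even if it happens to be correct.
if explanation[0][0] != "car":
    print("Warning: the model's focus does not match the question.")
```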

Overturning bias against women and minorities

Earlier this year Nazneen worked on a project that probed common-sense reasoning in neural networks by way of explanations. By analyzing the explanations generated by her model, she found that they were also a way to detect bias in the models themselves. “The current big AI models are trained on data from the entire internet, and the internet as we know it is biased against women and minorities. Using models that are explainable means that I can evaluate whether they amplify the bias of the data and make predictions based on these biases,” she says.
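As a rough illustration of the idea (a hypothetical sketch, not the project’s actual method), one could scan a generated explanation for wording that suggests the model is leaning on gender rather than on evidence relevant to the task:

```python
# Hypothetical sketch: use a model's natural-language explanation
# to flag predictions that may be driven by bias in the training data.
GENDERED_TERMS = {"he", "she", "man", "woman", "his", "her"}

def flag_biased_explanation(explanation: str) -> bool:
    """Return True if the explanation leans on gendered words rather than
    task-relevant evidence, a crude signal of bias worth reviewing."""
    words = {w.strip(".,!?").lower() for w in explanation.split()}
    return bool(words & GENDERED_TERMS)

# Hypothetical explanation generated for the prediction "occupation: nurse"
print(flag_biased_explanation("Because she is a woman, she is likely a nurse."))  # True
```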

Big companies like Google, Microsoft and Facebook have run into trouble because their AI models do not follow ethical principles and can fail spectacularly. A notorious example she points to is Google’s photo-tagging system, which labeled images of Black people as gorillas.

Nazneen hopes her work will encourage AI researchers and engineers to incorporate explanations and make AI models more ethical. Her dedication and intelligence should be an inspiration to members of the Khoja community all over the world.

Read more about Nazneen’s fascinating work published by renowned media outlets:

VentureBeat

Datanami

ZDNet

SiliconANGLE