Every world-changing technological achievement has had one thing in common: the trust of its users. We get into cars and board airplanes every day because we trust they will work. Though artificial intelligence has already proven its worth, it is still not trusted by the broader public. There are still people who fear the power of AI, and more who don't understand it. One key step toward bridging this gap is to make AI explainable and transparent. Systems that can explain how they arrived at a conclusion earn an added layer of trust and confidence, and that trust will be key to AI's future.

With predicate learning, we can build an AI system that explains its reasoning, demonstrated here in an image recognition context.

Images are broken down into features and predicates. A predicate is a relational structure over multiple objects, which makes it extremely expressive. First, object-level predicates represent the individual features that make up the image. Next, relative-location predicates capture where each feature sits with respect to the others. Finally, a single image-level predicate holds all of the image's features together with their relative locations. Image-level predicates from different images can then be compared to classify a new image. Because of this novel predicate-on-predicate mechanism, the system can explain how it classified an image by outputting its features and relative-location predicates in an understandable way.
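To make the structure above concrete, here is a minimal sketch in Python. The names (`Feature`, `RelativeLocation`, `ImagePredicate`) and the Jaccard-overlap similarity are my illustrative assumptions, not the actual implementation: the point is only to show features, relative-location predicates over pairs of features, and an image-level predicate over both that can be compared and turned into a readable explanation.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Feature:
    """A detected feature in the image, e.g. 'eye' or 'nose' (hypothetical)."""
    name: str

@dataclass(frozen=True)
class RelativeLocation:
    """A predicate relating two features, e.g. above(eye, nose)."""
    relation: str   # e.g. 'above', 'left_of'
    a: Feature
    b: Feature

@dataclass(frozen=True)
class ImagePredicate:
    """A predicate over predicates: the whole image as its feature set
    plus the pairwise relative-location predicates between features."""
    label: str
    features: frozenset      # set of Feature
    relations: frozenset     # set of RelativeLocation

    def similarity(self, other: "ImagePredicate") -> float:
        """One simple comparison choice: Jaccard overlap of the shared
        features and relations between the two image predicates."""
        shared = len(self.features & other.features) + len(self.relations & other.relations)
        total = len(self.features | other.features) + len(self.relations | other.relations)
        return shared / total if total else 0.0

    def explain(self) -> str:
        """Output the features and relative-location predicates in a
        human-readable form, which is what makes the classification
        explainable."""
        feats = sorted(f.name for f in self.features)
        rels = sorted(f"{r.relation}({r.a.name}, {r.b.name})" for r in self.relations)
        return f"{self.label}: features={feats}, relations={rels}"

# Usage: build a tiny 'face' predicate and compare it to a query image.
eye, nose = Feature("eye"), Feature("nose")
face = ImagePredicate("face", frozenset({eye, nose}),
                      frozenset({RelativeLocation("above", eye, nose)}))
query = ImagePredicate("query", frozenset({eye, nose}),
                       frozenset({RelativeLocation("above", eye, nose)}))
print(face.similarity(query))   # identical structure, so overlap is 1.0
print(face.explain())
```

A real system would of course extract the features and relations from pixels with a learned model; the sketch only captures the symbolic layer that gets compared and verbalized.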

What inspired you (or your team)?

I’ve always been interested in the big unanswered questions of the world. Which came first: the chicken or the egg? Where did we come from? How is our brain able to think, imagine, create, and understand? This last question, especially in the context of artificial intelligence, has been my main focus over the past month. I’ve read countless articles on artificial general intelligence: the original, illustrious, and ambitious goal of AI to create human-level intelligence. You can check out an article I wrote that briefly describes one of the biggest problems facing AGI here: https://medium.com/@kevn.wanf/the-road-to-smarter-ai-4244d3453cb1
With this competition, I was offered the chance to make my own contribution to the field of AGI, and I did so eagerly. The ability of AI to explain itself would be a major step forward not only for the field of AGI, which I’m so passionate about, but also for public opinion and trust in AI. With explainable AI, people will start to see AI not as some obscure, incomprehensible enigma to be feared but as an incredible tool that will solve some of our world’s most pressing problems, and I’m really excited about that.