Explaining the Uncertainty in AI-Assisted Decision Making
Prof Tim Miller
Prof Liz Sonenberg
Dr Ronal Singh
School / Faculty:
School of Computing and Information Systems
Faculty of Engineering and Information Technology
My PhD aims to improve human decision-making using explainable AI techniques; specifically, how to explain the (un)certainty of an AI model. In human-AI interaction, users may want to understand why the algorithm is confident (or not confident) in order to decide whether to accept its prediction. Most existing research has used uncertainty measures only to promote trust and trust calibration; explaining why the AI model is confident (or not confident) in its prediction remains under-explored. By explaining model uncertainty, we can promote trust, improve understanding, and improve decision-making for users.
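As a rough illustration of the kind of uncertainty measure the paragraph above refers to (not the specific method developed in this thesis), one common choice is the entropy of a classifier's predicted class distribution: low entropy signals a confident prediction, high entropy an uncertain one. A minimal sketch:

```python
import numpy as np

def predictive_entropy(probs):
    """Shannon entropy of a predicted class distribution,
    a common scalar measure of a model's uncertainty."""
    probs = np.asarray(probs, dtype=float)
    # Small epsilon guards against log(0) for zero-probability classes.
    return float(-np.sum(probs * np.log(probs + 1e-12)))

# A confident prediction concentrates mass on one class (low entropy)...
confident = predictive_entropy([0.98, 0.01, 0.01])
# ...while a prediction spread evenly across classes is maximally
# uncertain (entropy approaches ln(num_classes)).
uncertain = predictive_entropy([0.34, 0.33, 0.33])
print(confident < uncertain)  # True
```

Explaining *why* such a score is high or low for a given input, rather than merely reporting it, is the gap the research targets.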
Q & A
Why did you decide to do a PhD?
I am passionate about doing research. It gives me a lot of freedom to do what I want and to work with many smart people. It is also an opportunity for me to pursue my interest in AI. I am working on a cool project with supportive supervisors, so I have really enjoyed my PhD.
What do you enjoy reading?
I do enjoy reading research papers (when they have interesting ideas). Other than that, I like reading fiction and manga. I have read the Harry Potter series many times.
What do you enjoy doing when you're not working on your PhD?
I like to go out for a walk, especially near the beach. I also like swimming.
Name one fun fact about you.
When I was a kid, I had six cats, many chickens and pigeons, two rabbits, a dog, and a fish tank. Many of them came to my house by accident and decided to stay. I spent most of my time taking care of them.