Research statement (Informal)
I'm currently happy to work on improving AI/ML models broadly in terms of safety, transparency, interpretability, and so on. What you will read below refers specifically to research I did during my PhD.
Explainable Artificial Intelligence. My research revolves around the broad themes of explainability, interpretability, and transparency in the application of machine learning and deep learning, often grouped under the umbrella of eXplainable Artificial Intelligence (XAI). The objective is to achieve fair, transparent, trustworthy, and unbiased algorithms.
There's a story to each project.
Deep learning for medical AI. We started with a project focused on automation in the ischemic stroke diagnosis pipeline. Medical image analysis is performed using deep neural networks, in particular the U-Net. Furthermore, we apply explainable AI methods to understand our results. Without going into details, I found that popular XAI methods have been rather lacking (some heatmaps they generated just didn't make sense!), and thus my study of interpretable AI began. It was a bumpy start to my medical AI research, but I have continued working on deep learning methods in the field. For example, in
this paper, a ResNet is fine-tuned for pneumonia classification. Given an X-ray image, the algorithm quickly tells us whether the patient suffers from pneumonia or not.
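To give a flavor of what that looks like in practice, here is a minimal fine-tuning sketch in PyTorch. The architecture variant, frozen layers, and hyperparameters below are illustrative placeholders, not the exact setup from the paper.

    import torch
    import torch.nn as nn
    from torchvision import models

    # Illustrative sketch only; the paper's actual setup may differ.
    # Start from an ImageNet-pretrained ResNet-50.
    model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)

    # Swap the 1000-class ImageNet head for a 2-class head
    # (pneumonia vs. normal).
    model.fc = nn.Linear(model.fc.in_features, 2)

    # Optionally freeze the backbone and train only the new head first.
    for name, param in model.named_parameters():
        if not name.startswith("fc."):
            param.requires_grad = False

    optimizer = torch.optim.Adam(
        (p for p in model.parameters() if p.requires_grad), lr=1e-4
    )
    criterion = nn.CrossEntropyLoss()

    def train_step(images, labels):
        # One supervised step on a batch of chest X-ray images.
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
        return loss.item()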
Research on the quality of XAI. Testing metrics. Many metrics have been proposed to measure the success of XAI, including faithfulness and fidelity. Localization has also been used as a measure of heatmap quality, and one of my earliest projects aimed to measure heatmap quality via the straightforward accuracy of localization. The results are shown in
this paper. We found that even when deep learning models perform very well at classification, the heatmaps that XAI methods generate from them can still look random! In a similar vein, we tested the MaxBoxAcc metric introduced by
Choe et al., and, surprise, surprise, we got some rather interesting results. We really hope that the research community doesn't blindly use heatmaps and claim that they "explain" something in a convenient, cherry-picked manner.
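For concreteness, a localization-accuracy score of this general kind can be computed by binarizing a heatmap and comparing it against a ground-truth region. The sketch below is a generic threshold-and-IoU version, not the exact formulation from our paper or from Choe et al. (MaxBoxAcc, roughly speaking, additionally uses bounding boxes and sweeps over thresholds).

    import numpy as np

    def iou(mask_a, mask_b):
        # Intersection over union of two boolean masks.
        inter = np.logical_and(mask_a, mask_b).sum()
        union = np.logical_or(mask_a, mask_b).sum()
        return inter / union if union > 0 else 0.0

    def localization_accuracy(heatmaps, gt_masks, threshold=0.5, iou_thresh=0.5):
        # Fraction of heatmaps whose thresholded region overlaps the
        # ground-truth region with IoU >= iou_thresh.
        # heatmaps: 2D arrays scaled to [0, 1]; gt_masks: 2D boolean arrays.
        hits, total = 0, 0
        for hm, gt in zip(heatmaps, gt_masks):
            pred = hm >= threshold  # binarize the explanation
            hits += int(iou(pred, gt) >= iou_thresh)
            total += 1
        return hits / total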
Extremity as novelty. The frustration with "working" XAI methods led me to develop methods with an extreme level of transparency. I wanted to demonstrate that it is possible to have meaningful parameters: not just senseless weights and parameter tuning, not just "bigger model is better." The following two papers, one on
universal approximation and another on a
reinforcement learning algorithm, are the products. I went as far as giving each neuron in a neural network a meaning while retaining the properties of recently popular neural network models. There was also a fair attempt in which I used general pattern theory (a rather old branch of mathematics). It's not too fleshed out, but I personally think a more modern, less convoluted theory may help with the transparency and fairness of neural networks.
Debugging. As an extension of my research on universal approximation, I thought one might be able to improve transparency with ordered data. The idea is to know which training data are "influential" to the decision of an algorithm (this is a research topic in itself). Furthermore, by allowing simple adjustment of the ordered data, we enable debugging and temporary fixes that help developers perform some quality control before properly tuning the main algorithm. Thus was born
kaBEDONN.
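As a rough illustration of the general idea (and emphatically not the kaBEDONN algorithm itself), example-based debugging can be as simple as storing training data in a fixed order in an embedding space, retrieving the nearest ones for a query, and letting a developer patch an offending example as a temporary fix:

    import numpy as np

    class ExampleBasedDebugger:
        # Generic sketch, NOT the actual kaBEDONN algorithm: keep the
        # training data (as embeddings) in a fixed order and retrieve
        # the nearest ones to "explain" a query decision.
        def __init__(self, train_embeddings, train_labels):
            self.embeddings = train_embeddings  # shape (n_samples, dim)
            self.labels = list(train_labels)

        def influential_examples(self, query, k=5):
            # Indices of the k training points closest to the query.
            dists = np.linalg.norm(self.embeddings - query, axis=1)
            return np.argsort(dists)[:k]

        def patch_label(self, index, new_label):
            # A "temporary fix": correct one stored example so later
            # retrievals reflect it, without retraining the main model.
            self.labels[index] = new_label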
My research has therefore been a series of well-motivated attempts at dealing with black-box issues in ML/AI models. From my experience, "explanation" and "interpretability" might be ill-defined because some problems are heavily context-dependent, so attempts to study XAI might be doomed from the start. But there's a lot more to be done, and the topic is both meaningful and exciting. Finally, throughout my PhD training, I've equipped myself with a general set of programming skills and more specialized skills in ML/AI.