Complex intelligent systems, such as self-driving cars and medical expert systems, rely on classifiers built using machine learning. These classifiers are often required to output a confidence score alongside each prediction, so that the system can fall back on a safer option whenever the confidence drops too low. It is therefore extremely important that the classifier is not over-confident, as over-confidence increases the risk of very costly errors.
Over-confidence results from a failure to account for all uncertainty about the context in which the classifier is applied. Our proposed project is the first to consider simultaneously the training-time and application-time uncertainty in costs and class proportions.
In this project we will develop theory and software that allow machine learning practitioners to optimise classifiers directly for their domain and its uncertainties while remaining protected against over-confidence.
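To make the idea of falling back on a safer option under cost uncertainty concrete, the following is a minimal illustrative sketch (not the project's actual method): a classic cost-sensitive reject option, where the uncertainty about misclassification costs is represented by samples from assumed cost distributions. The function name `optimal_action`, the gamma distributions, and all numeric values are illustrative assumptions.

```python
import numpy as np

def optimal_action(p, cost_samples, reject_cost):
    """Pick the minimum-expected-cost action given the predicted
    positive-class probability p, averaging over sampled
    (false-positive cost, false-negative cost) pairs that represent
    uncertainty about the true costs.
    Actions: 1 = predict positive, 0 = predict negative,
    'reject' = abstain and defer to a safer fallback."""
    c_fp, c_fn = np.asarray(cost_samples).T
    cost_pos = np.mean((1.0 - p) * c_fp)  # expected cost of predicting positive
    cost_neg = np.mean(p * c_fn)          # expected cost of predicting negative
    costs = {1: cost_pos, 0: cost_neg, "reject": reject_cost}
    return min(costs, key=costs.get)

# Illustrative cost uncertainty: false positives cost about 1,
# false negatives about 5, but neither is known exactly.
rng = np.random.default_rng(0)
samples = np.column_stack([rng.gamma(2.0, 0.5, 1000),   # c_fp draws
                           rng.gamma(2.0, 2.5, 1000)])  # c_fn draws

print(optimal_action(0.9, samples, reject_cost=0.3))  # confident -> commit
print(optimal_action(0.5, samples, reject_cost=0.3))  # unsure -> abstain
```

With a confident prediction the classifier commits to a class; near the decision boundary both actions carry a higher expected cost than abstaining, so the safer fallback is chosen. The project's aim, in contrast to this fixed-threshold sketch, is to account for such cost and class-proportion uncertainty already when optimising the classifier itself.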