Scalable Trustworthy AI

Creating scalable and trustworthy AI with human guidance

Overview

Artificial intelligence (AI) holds great promise for a positive future for humanity. For example, AI could help us fight climate change by optimising power usage. Just as fire and electricity fundamentally changed human life, AI may increase overall human productivity, helping to balance out the ageing population and improve humanity's overall welfare.

The status quo, however, is that AI is far from perfect. One of the greatest issues is that AI systems are not yet trustworthy: it is difficult to understand when and why they fail. This is especially true when models are deployed in environments that differ, even slightly, from the training environment. Even worse, naive or malicious applications of AI systems harm humanity by amplifying political polarisation, by treating minority groups unfairly, and by jeopardising human liberty and free will.

This leads to our study of Trustworthy AI. We aim to understand the trustworthiness of current AI systems and develop new technologies that enhance it. Among other important topics, we focus on three sub-topics:

Fortunately, we are not alone in this effort. Many other research labs around the world make important contributions to Trustworthy AI. Our group distinguishes itself by striving for working solutions that are widely applicable and can be deployed at scale. We thus name our group Scalable Trustworthy AI. To achieve this scalability, we commit ourselves to the following principles:

With these principles in mind, we conduct research on Scalable Trustworthy AI technologies to guide the field in the right direction. We hope to contribute to mitigating the negative side effects of AI and accelerating AI-led advances for the future of humanity.

For prospective students: You might be interested in our internal curriculum and guidelines for a PhD program: Principles for a PhD Program.

The STAI group is part of the Tübingen AI Center and the University of Tübingen. STAI is also part of the ecosystem of the International Max Planck Research School for Intelligent Systems (IMPRS-IS) and the ELLIS Society.


Members

Seong Joon Oh

Group Leader

Elisa Nguyen

PhD Student

Elif Akata

PhD Student

Michael Kirchhof

Collaborating PhD Student

Evgenii Kortukov

MSc Student

Arnas Uselis

PhD Student

Stefano Woerner

PhD Student

Ankit Sonthalia

PhD Student

Publications

Do Deep Neural Network Solutions Form a Star Domain?

arXiv

Benchmarking Uncertainty Disentanglement: Specialized Uncertainties for Specialized Tasks

arXiv

Pretrained Visual Uncertainties

arXiv

Exploring Practitioner Perspectives On Training Data Attribution Explanations

NeurIPS XAI in Action Workshop

A Bayesian Perspective On Training Data Attribution

NeurIPS

ID and OOD Performance Are Sometimes Inversely Correlated on Real-world Datasets

NeurIPS

URL: A Representation Learning Benchmark for Transferable Uncertainty Estimates

NeurIPS D&B

Neglected Free Lunch -- Learning Image Classifiers Using Annotation Byproducts

ICCV

Scratching Visual Transformer's Back with Uniform Attention

ICCV

Probabilistic Contrastive Learning Recovers the Correct Aleatoric Uncertainty of Ambiguous Inputs

ICML

URL: A Representation Learning Benchmark for Transferable Uncertainty Estimates

UAI-EAI Best Student Paper

Playing repeated games with Large Language Models

arXiv

ECCV Caption: Correcting False Negatives by Collecting Machine-and-Human-verified Image-Caption Associations for MS-COCO

ECCV

Dataset Condensation via Efficient Synthetic-Data Parameterization

ICML

Weakly Supervised Semantic Segmentation Using Out-of-Distribution Data

CVPR

Which Shortcut Cues Will DNNs Choose? A Study from the Parameter-Space Perspective

ICLR

Openings

Postdoc Opportunity: Scalable Trustworthy AI - Novel Dataset Development

We are seeking a highly motivated Postdoctoral Researcher to join our team at the University of Tübingen for an exciting two-year project on Scalable Trustworthy AI. The successful candidate will play a pivotal role in developing and collecting novel datasets that capture not only task outputs from human annotators but also valuable annotation byproducts, such as mouse traces, gaze patterns, click history, time to complete the task, and any corrections made during the process. Our goal is to leverage this rich data to better align AI systems with human cognitive mechanisms. Read the Annotation Byproducts paper for further details.

This unique opportunity will allow the selected applicant to enhance their research expertise, contribute to cutting-edge advancements in AI, and benefit from Tübingen's vibrant research ecosystem and extensive international network. The position comes with a competitive postdoc salary and German social benefits. The starting date is flexible, and the selected candidate will be based at the Tübingen AI Center.

We encourage candidates with a strong PhD degree in machine learning, natural language processing, computer vision, mathematics, statistics, human-computer interaction, or a related field to apply. To apply, please send your CV and research statement to coallaoh@gmail.com.
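To make the notion of annotation byproducts concrete, below is a minimal sketch of how a single annotated example together with its byproducts could be represented. The schema, field names, and values are hypothetical illustrations only; they are not taken from the Annotation Byproducts paper or from the project's actual data format.

```python
# Hypothetical sketch of an annotation-byproducts record.
# All field names and values are illustrative assumptions, not the project's schema.
from dataclasses import dataclass, field
from typing import List, Tuple


@dataclass
class AnnotationRecord:
    """One task output plus the byproducts generated while producing it."""
    item_id: str                                     # item being annotated
    label: str                                       # the task output itself
    mouse_trace: List[Tuple[float, float, float]] = field(default_factory=list)   # (time, x, y)
    gaze_points: List[Tuple[float, float, float]] = field(default_factory=list)   # (time, x, y)
    click_history: List[Tuple[float, str]] = field(default_factory=list)          # (time, UI element)
    corrections: List[Tuple[float, str, str]] = field(default_factory=list)       # (time, old, new)
    time_to_complete_s: float = 0.0                  # total time spent on the task


# Example: one record as an annotation tool might log it.
record = AnnotationRecord(
    item_id="example_000123",
    label="dog",
    mouse_trace=[(0.0, 120.0, 88.0), (0.4, 230.5, 140.2)],
    click_history=[(1.2, "label_button:dog")],
    corrections=[(2.0, "cat", "dog")],
    time_to_complete_s=2.4,
)
print(record.label, record.time_to_complete_s)
```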