My research focuses on reliable and trustworthy machine learning, with emphasis on uncertainty quantification, selective prediction, and out-of-distribution robustness.

Peer-Reviewed Publications

What Does It Take to Build a Performant Selective Classifier?

Stephan Rabanser, Nicolas Papernot

Advances in Neural Information Processing Systems (NeurIPS), 2025

Gatekeeper: Improving Model Cascades Through Confidence Tuning

Stephan Rabanser, Nathalie Rauschmayr, Achin Kulshrestha, Petra Poklukar, Wittawat Jitkrittum, Sean Augenstein, Congchao Wang, Federico Tombari

Advances in Neural Information Processing Systems (NeurIPS), 2025 (Best Poster Award at the TTODLer-FM Workshop)

Confidential Guardian: Cryptographically Prohibiting the Abuse of Model Abstention

Stephan Rabanser, Ali Shahin Shamsabadi, Olive Franzese, Xiao Wang, Adrian Weller, Nicolas Papernot

Proceedings of the International Conference on Machine Learning (ICML), 2025

Suitability Filter: A Statistical Framework for Model Evaluation in Real-World Deployment Settings

Angéline Pouget, Mohammad Yaghini, Stephan Rabanser, Nicolas Papernot

Proceedings of the International Conference on Machine Learning (ICML), 2025 (Oral)

Training Private Models That Know What They Don't Know

Stephan Rabanser, Anvith Thudi, Abhradeep Thakurta, Krishnamurthy Dvijotham, Nicolas Papernot

Advances in Neural Information Processing Systems (NeurIPS), 2023

Robust and Actively Secure Serverless Collaborative Learning

Olive Franzese, Adam Dziedzic, Christopher A. Choquette-Choo, Mark R. Thomas, Muhammad Ahmad Kaleem, Stephan Rabanser, Congyu Fang, Somesh Jha, Nicolas Papernot, Xiao Wang

Advances in Neural Information Processing Systems (NeurIPS), 2023

Failing Loudly: An Empirical Study of Methods for Detecting Dataset Shift

Stephan Rabanser, Stephan Günnemann, Zachary Lipton

Advances in Neural Information Processing Systems (NeurIPS), 2019

Selective Prediction Via Training Dynamics

Stephan Rabanser, Anvith Thudi, Kimia Hamidieh, Adam Dziedzic, Nicolas Papernot

Transactions on Machine Learning Research, 2025

Workshop Papers

The Effectiveness of Discretization in Forecasting: An Empirical Study on Neural Time Series Models

Stephan Rabanser, Tim Januschowski, Valentin Flunkert, David Salinas, Jan Gasthaus

7th KDD Workshop on Mining and Learning from Time Series (MiLeTS), 2020 (Oral)

Preprints & Technical Reports

Cascadia: A Cascade Serving System for Large Language Models

Youhe Jiang, Fangcheng Fu, Wanru Zhao, Stephan Rabanser, Nicholas D. Lane, Binhang Yuan

arXiv preprint arXiv:2506.04203, 2025

Intrinsic Anomaly Detection for Multi-Variate Time Series

Stephan Rabanser, Tim Januschowski, Kashif Rasul, Oliver Borchert, Richard Kurle, Jan Gasthaus, Michael Bohlke-Schneider, Nicolas Papernot, Valentin Flunkert

arXiv preprint arXiv:2206.14342, 2022

p-DkNN: Out-of-Distribution Detection Through Statistical Testing of Deep Representations

Adam Dziedzic, Stephan Rabanser, Mohammad Yaghini, Armin Ale, Murat A. Erdogdu, Nicolas Papernot

arXiv preprint arXiv:2207.12545, 2022

Denoising Spectral Clustering Through Latent Data Decomposition

Stephan Rabanser, Oleksandr Shchur, Stephan Günnemann

2018

Improving Online GMM Learning Via Covariance Weighting

Stephan Rabanser, Maksim Greiner

2018

Introduction to Tensor Decompositions and their Applications in Machine Learning

Stephan Rabanser, Oleksandr Shchur, Stephan Günnemann

arXiv preprint arXiv:1711.10781, 2017

For a complete list with citations, please see my Google Scholar profile.