Husband and father. Brings systems to life with artificial intelligence. Earned his degrees at Bielefeld University and continues to research adversarial robustness. Spends his spare time on fitness.
PhD in Machine Learning, 2022
Bielefeld University
MSc in Intelligent Systems, 2015
Bielefeld University
BSc in Mathematics & Computer Science, 2013
Bielefeld University
We use recently proposed robustness curves to show that point-wise measures of adversarial robustness fail to capture global properties that are essential for reliably comparing the robustness of different classifiers.
We extend k nearest neighbors with a method for learning locally adaptive metrics.
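A minimal sketch of the general idea, not the paper's method: each training point carries its own diagonal metric, here heuristically set to the inverse local feature variance among its nearest neighbors, so that distances adapt to the local data geometry. All function names and the weighting scheme are illustrative assumptions.

```python
import numpy as np

def local_metrics(X, k=5):
    # One diagonal metric (feature-weight vector) per training point:
    # inverse variance of each feature among the point's k nearest neighbors.
    metrics = np.empty_like(X)
    for i, x in enumerate(X):
        idx = np.argsort(np.linalg.norm(X - x, axis=1))[:k]
        metrics[i] = 1.0 / (X[idx].var(axis=0) + 1e-8)
    return metrics

def predict(X, y, metrics, x, k=3):
    # Distance from x to each training point under that point's own metric,
    # then a majority vote among the k nearest.
    d = np.sqrt(((X - x) ** 2 * metrics).sum(axis=1))
    idx = np.argsort(d)[:k]
    return np.bincount(y[idx]).argmax()
```

Because every training point weights features individually, directions that vary little in a point's neighborhood dominate the distance there, which is the effect a learned locally adaptive metric exploits.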
We propose intuitiveness as a property of machine learning algorithms, one that strongly influences how easily users can interact with a given algorithm without any explicit instruction or training.
We devise a technique for hiding adversarial attacks in regions of high complexity, such that they are imperceptible even to an astute observer.
We propose robustness curves as a more general view of the robustness behavior of a model and investigate under which circumstances they can qualitatively depend on the chosen norm.
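As a rough illustration of the concept (a sketch, not the paper's implementation): for a linear classifier the distance of a point to the decision boundary is known in closed form, so a robustness curve, the fraction of points flippable by a perturbation of norm at most ε, plotted over ε, can be computed exactly. The function name and signature are assumptions for this example.

```python
import numpy as np

def robustness_curve(w, b, X, epsilons, norm=2):
    # For f(x) = sign(w @ x + b), the smallest l_p perturbation that
    # crosses the boundary has size |w @ x + b| / ||w||_q (q the dual norm).
    q = {2: 2, np.inf: 1}[norm]
    margins = np.abs(X @ w + b) / np.linalg.norm(w, ord=q)
    # Fraction of points whose label flips under some perturbation <= eps.
    return np.array([(margins <= eps).mean() for eps in epsilons])
```

The dependence on the chosen norm enters through the dual norm of `w`, which is one way a curve's shape can change qualitatively between l_2 and l_∞ threat models.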
We quantitatively and qualitatively investigate three popular explanation methods for classifiers – classic saliency, guided backpropagation, and LIME – with respect to their ability to identify attacked regions as the explanatory regions for the (incorrect) prediction in representative examples from image classification.
We formalize two interpretations of the all-relevant problem and propose a polynomial-time method to approximate one of them for the important hypothesis class of linear classifiers, which also enables a distinction between weakly relevant features.
We extend two recent learning architectures for drift, the self-adjusting memory architecture (SAM-kNN) and adaptive random forests (ARF), to incorporate a reject option, resulting in highly competitive, state-of-the-art methods.
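A minimal sketch of the reject-option idea in its simplest form (plain kNN, not the SAM-kNN/ARF extension): the classifier returns a prediction only when the agreement among the nearest neighbors exceeds a confidence threshold, and abstains otherwise. Names and the threshold rule are illustrative assumptions.

```python
import numpy as np
from collections import Counter

def knn_with_reject(X_train, y_train, x, k=5, threshold=0.8):
    # Find the k nearest training points to x.
    dists = np.linalg.norm(X_train - x, axis=1)
    neighbours = y_train[np.argsort(dists)[:k]]
    # Confidence = fraction of neighbours agreeing with the majority label.
    label, count = Counter(neighbours.tolist()).most_common(1)[0]
    confidence = count / k
    # Abstain (return None) when the neighbourhood is too ambiguous.
    return label if confidence >= threshold else None
```

Rejecting low-confidence queries trades coverage for accuracy, which is the trade-off a reject option is meant to control, especially under drift, where ambiguous regions signal that the model may be outdated.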
We propose a method to compute feature relevance intervals for the special case of linear reject-option support vector machines, which can reject a data point when they are unsure about its label.
We systematically analyze how different types of variability in the training data affect the generalization properties of the network for 3-dimensional head reconstruction.