Excitation Pullbacks: Enhancing Human Perception with AI

Instead of generating hallucinations, our method amplifies the real predictive features in the data, transforming AI into a transparent tool for high-stakes decision-making. The current implementation relies on a single architectural hyperparameter (temp) shared across all hidden neurons. Future work will enable neuron-specific adjustment, which is expected to significantly enhance explanation quality. For details, check out our paper and its corresponding code repository.

Choose an input image - either sample from the Imagenette dataset, select a predefined example, or upload your own. Square images are resized directly to 224x224 pixels; all others are first resized to 256x256 and then center-cropped to 224x224 pixels.
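The resize-and-crop rule above can be sketched as a small helper. This is an illustrative, pure-Python sketch of the geometry only (the function name `preprocess_geometry` and its return convention are ours, not part of the demo's code): it returns the resize target plus the center-crop box, or `None` when no crop is needed.

```python
def preprocess_geometry(w, h):
    # Square images go straight to 224x224 (no crop needed).
    if w == h:
        return (224, 224), None
    # Non-square images: resize to 256x256, then center-crop 224x224.
    left = top = (256 - 224) // 2  # 16-pixel margin on each side
    return (256, 256), (left, top, left + 224, top + 224)
```

For example, a 640x480 upload is resized to 256x256 and then cropped to the box `(16, 16, 240, 240)`, yielding the final 224x224 input.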

Example images from Imagenette val (corresponding to Example classes)

Select a target class and amplify its features - via Projected Gradient Ascent along the Excitation Pullback. A very low temperature approximates vanilla gradients (noisy), while a very high temperature linearizes the model.

Target Class (ImageNet)

idx - class name

Model

ImageNet-pretrained ReLU model

Example classes (corresponding to Example images + "ostrich")