Exploring Compositionality in Vision Transformers using Wavelet Representations

Akshad Shyam Purushottamdas
Pranav K Nayak
Divya Mehul Rajparia
Deekshith Patel
Yashmitha Gogineni
Konda Reddy Mopuri
Sumohana S. Channappayya
Main: 7 pages · 6 figures · 8 tables · Bibliography: 2 pages
Abstract

While insights into the workings of transformer models have largely emerged from analysing their behaviour on language tasks, this work investigates the representations learnt by the Vision Transformer (ViT) encoder through the lens of compositionality. We introduce a framework, analogous to prior work on measuring compositionality in representation learning, to test for compositionality in the ViT encoder. Crucial to drawing this analogy is the Discrete Wavelet Transform (DWT), a simple yet effective tool for obtaining input-dependent primitives in the vision setting. By examining how well composed representations reproduce the original image representations, we empirically test the extent to which compositionality is respected in the representation space. Our findings show that primitives from a one-level DWT decomposition yield encoder representations that approximately compose in latent space, offering a new perspective on how ViTs structure information.
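The abstract does not give implementation details, but the core idea can be sketched: decompose an image with a one-level DWT into sub-band primitives, encode each primitive, and compare the composed representation with the encoding of the original image. The sketch below uses a hand-rolled one-level Haar DWT and a hypothetical *linear* encoder (a random matrix, standing in for the ViT); for a linear map the composition is exact, and the paper's question is how far the nonlinear ViT encoder deviates from this ideal. All names (`haar_dwt2`, `encode`, etc.) are illustrative assumptions, not the authors' code.

```python
import numpy as np

def haar_dwt2(x):
    """One-level 2-D Haar DWT: average/difference along rows, then columns."""
    a = (x[:, 0::2] + x[:, 1::2]) / 2.0   # row low-pass
    d = (x[:, 0::2] - x[:, 1::2]) / 2.0   # row high-pass
    LL = (a[0::2, :] + a[1::2, :]) / 2.0  # approximation sub-band
    LH = (a[0::2, :] - a[1::2, :]) / 2.0  # horizontal detail
    HL = (d[0::2, :] + d[1::2, :]) / 2.0  # vertical detail
    HH = (d[0::2, :] - d[1::2, :]) / 2.0  # diagonal detail
    return LL, LH, HL, HH

def haar_idwt2(LL, LH, HL, HH):
    """Exact inverse of haar_dwt2 (undo column step, then row step)."""
    a = np.empty((LL.shape[0] * 2, LL.shape[1]))
    a[0::2, :] = LL + LH
    a[1::2, :] = LL - LH
    d = np.empty_like(a)
    d[0::2, :] = HL + HH
    d[1::2, :] = HL - HH
    x = np.empty((a.shape[0], a.shape[1] * 2))
    x[:, 0::2] = a + d
    x[:, 1::2] = a - d
    return x

rng = np.random.default_rng(0)
img = rng.standard_normal((8, 8))
LL, LH, HL, HH = haar_dwt2(img)
assert np.allclose(haar_idwt2(LL, LH, HL, HH), img)  # perfect reconstruction

# Hypothetical linear "encoder" standing in for the ViT encoder.
W = rng.standard_normal((16, img.size))
def encode(x):
    return W @ x.ravel()

# Each primitive: the image reconstructed from a single sub-band.
z = np.zeros_like(LL)
primitives = [haar_idwt2(LL, z, z, z), haar_idwt2(z, LH, z, z),
              haar_idwt2(z, z, HL, z), haar_idwt2(z, z, z, HH)]

# Compose primitive representations by summation and compare.
composed = sum(encode(p) for p in primitives)
assert np.allclose(composed, encode(img))  # exact only because encode is linear
```

Because the DWT and its inverse are linear, the four single-sub-band reconstructions sum back to the original image, so a linear encoder composes exactly; the paper's empirical finding is that the ViT encoder respects this structure only approximately.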
