On the Complexity of Neural Computation in Superposition
Author(s)
Adler, Micah; Shavit, Nir
Abstract
Recent advances in the understanding of neural networks suggest that superposition, the ability of a single neuron to represent multiple features simultaneously, is a key mechanism underlying the computational efficiency of large-scale networks. This paper explores the theoretical foundations of computing in superposition, focusing on explicit, provably correct algorithms and their efficiency.
We present the first lower bounds showing that for a broad class of problems, including permutations and pairwise logical operations, a neural network computing in superposition requires at least Ω(m′ log m′) parameters and Ω(√(m′ log m′)) neurons, where m′ is the number of output features being computed. This implies that any “lottery ticket” sparse sub-network must have at least Ω(m′ log m′) parameters, regardless of the size of the initial dense network. Conversely, we show a nearly tight upper bound: logical operations such as pairwise AND can be computed using O(√(m′) log m′) neurons and O(m′ log² m′) parameters. There is thus an exponential gap between computing in superposition, the subject of this work, and representing features in superposition, which can require as few as O(log m′) neurons by the Johnson-Lindenstrauss Lemma.
We hope that these results open a path for using complexity-theoretic techniques in neural network interpretability research.
Date issued
2024-09-30
Keywords
superposition, neural network, neurons, complexity