Show simple item record

dc.contributor.author: Hitron, Y
dc.contributor.author: Lynch, N
dc.contributor.author: Musco, C
dc.contributor.author: Parter, M
dc.date.accessioned: 2021-11-05T18:28:40Z
dc.date.available: 2021-11-05T18:28:40Z
dc.date.issued: 2020
dc.identifier.uri: https://hdl.handle.net/1721.1/137566
dc.description.abstract: © Yael Hitron, Nancy Lynch, Cameron Musco, and Merav Parter. We study input compression in a biologically inspired model of neural computation. We demonstrate that a network consisting of a random projection step (implemented via random synaptic connectivity) followed by a sparsification step (implemented via winner-take-all competition) can reduce well-separated high-dimensional input vectors to well-separated low-dimensional vectors. By augmenting our network with a third module, we can efficiently map each input (along with any small perturbations of the input) to a unique representative neuron, solving a neural clustering problem. Both the size of our network and its processing time, i.e., the time it takes the network to compute the compressed output given a presented input, are independent of the (potentially large) dimension of the input patterns and depend only on the number of distinct inputs that the network must encode and the pairwise relative Hamming distance between these inputs. The first two steps of our construction mirror known biological networks, for example, in the fruit fly olfactory system [9, 29, 17]. Our analysis helps provide a theoretical understanding of these networks and lay a foundation for how random compression and input memorization may be implemented in biological neural networks. Technically, a contribution in our network design is the implementation of a short-term memory. Our network can be given a desired memory time t_m as an input parameter and satisfies the following with high probability: any pattern presented several times within a time window of t_m rounds will be mapped to a single representative output neuron. However, a pattern not presented for c · t_m rounds for some constant c > 1 will be “forgotten”, and its representative output neuron will be released, to accommodate newly introduced patterns. (en_US)
dc.language.iso: en
dc.relation.isversionof: 10.4230/LIPIcs.ITCS.2020.23 (en_US)
dc.rights: Creative Commons Attribution 4.0 International license (en_US)
dc.rights.uri: https://creativecommons.org/licenses/by/4.0/ (en_US)
dc.source: DROPS (en_US)
dc.title: Random sketching, clustering, and short-term memory in spiking neural networks (en_US)
dc.type: Article (en_US)
dc.identifier.citation: Hitron, Y, Lynch, N, Musco, C and Parter, M. 2020. "Random sketching, clustering, and short-term memory in spiking neural networks." Leibniz International Proceedings in Informatics, LIPIcs, 151.
dc.contributor.department: Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory
dc.relation.journal: Leibniz International Proceedings in Informatics, LIPIcs (en_US)
dc.eprint.version: Final published version (en_US)
dc.type.uri: http://purl.org/eprint/type/ConferencePaper (en_US)
eprint.status: http://purl.org/eprint/status/NonPeerReviewed (en_US)
dc.date.updated: 2021-01-29T15:14:10Z
dspace.orderedauthors: Hitron, Y; Lynch, N; Musco, C; Parter, M (en_US)
dspace.date.submission: 2021-01-29T15:14:13Z
mit.journal.volume: 151 (en_US)
mit.license: PUBLISHER_CC
mit.metadata.status: Authority Work and Publication Information Needed (en_US)
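
The abstract above describes a two-stage compression circuit: a random projection implemented by random synaptic connectivity, followed by winner-take-all (WTA) sparsification. The Python sketch below illustrates that pattern in spirit only; the dense 0/1 synapse matrix, the top-k WTA rule, and all sizes are illustrative assumptions, not the paper's construction or parameters.

```python
import numpy as np

def compress(x, n_hidden=200, k_winners=5, seed=0):
    """Sketch of random projection + winner-take-all sparsification.

    Maps a high-dimensional 0/1 pattern x to a sparse n_hidden-dimensional
    code. The dense 0/1 synapse model and all sizes are assumptions made
    for illustration, not the paper's parameters.
    """
    syn_rng = np.random.default_rng(seed)                # fixed seed = fixed "synapses"
    W = syn_rng.integers(0, 2, size=(n_hidden, x.size))  # random synaptic connectivity
    activation = W @ x                                   # random projection step
    winners = np.argsort(activation)[-k_winners:]        # WTA: only the top-k neurons fire
    code = np.zeros(n_hidden, dtype=int)
    code[winners] = 1
    return code

# Two random binary inputs (far apart in Hamming distance, with high
# probability) should map to sparse codes with little overlap.
rng = np.random.default_rng(1)
x1 = rng.integers(0, 2, size=10_000)
x2 = rng.integers(0, 2, size=10_000)
c1, c2 = compress(x1), compress(x2)
print("shared active output neurons:", int(c1 @ c2), "of", int(c1.sum()))
```

Because the projection matrix is fixed by the seed, re-presenting the same input yields the same sparse code, a determinism the paper's clustering and short-term-memory modules appear to rely on.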

