Andreas Stöckel, PhD candidate
David R. Cheriton School of Computer Science
The artificial neurons typically employed in machine learning and computational neuroscience bear little resemblance to biological neurons. They are often derived from the "leaky integrate-and-fire" (LIF) model, neglect spatial extent, and assume that inputs combine as a linear weighted sum. It is well known that these simplifications profoundly limit the family of functions that can be computed in a single-layer neural network.
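As a purely illustrative sketch of the distinction drawn above (the parameter values and the simplified dynamics are assumptions, not taken from the talk), the snippet below contrasts a standard current-based LIF neuron, whose input is a linear combination of presynaptic activities, with a conductance-based synapse model, where the effective input depends on the membrane potential and is therefore no longer linear in the inputs alone.

```python
import numpy as np

def lif_step(v, J, dt=1e-4, tau_m=20e-3, v_th=1.0, v_reset=0.0):
    """One Euler step of the LIF membrane potential v given input current J."""
    v += dt * (J - v) / tau_m              # leaky integration toward the input
    if v >= v_th:                          # threshold crossing
        return v_reset, True               # reset after a spike
    return v, False

def current_input(w, x):
    """Current-based input: a linear combination of presynaptic activities x."""
    return np.dot(w, x)

def conductance_input(w_e, w_i, x, v, E_e=4.0, E_i=-1.0):
    """Conductance-based input: conductances scale the driving force (E - v),
    so the input depends nonlinearly on x and v together."""
    g_e, g_i = np.dot(w_e, x), np.dot(w_i, x)
    return g_e * (E_e - v) + g_i * (E_i - v)

# Drive a single neuron with both input models for comparison.
rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, size=10)         # presynaptic activities
w = rng.uniform(0.0, 0.2, size=10)         # nonnegative synaptic weights
v_cur, v_cond = 0.0, 0.0
for _ in range(1000):
    v_cur, _ = lif_step(v_cur, current_input(w, x))
    v_cond, _ = lif_step(v_cond, conductance_input(w, 0.5 * w, x, v_cond))
print(v_cur, v_cond)
```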
We demonstrate that even small, biologically plausible extensions, such as conductance-based synapses and a passive dendritic tree, significantly increase the complexity of the functions that can be computed within a single-layer neural network. Furthermore, we present a trust-region-based algorithm that robustly solves for nonnegative synaptic weights approximating arbitrary nonlinear multivariate functions within a single neuron. Our work characterizes the extent to which biological neurons are computationally more powerful than artificial neurons and has direct applications in neuroscientific modelling and neuromorphic hardware design.
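To give a rough sense of the kind of constrained subproblem involved (this is not the algorithm presented in the talk, merely an assumed stand-in), one can solve for nonnegative weights with a trust-region reflective least-squares solver; here A and target are hypothetical placeholders for a matrix of presynaptic activities and the desired somatic input.

```python
import numpy as np
from scipy.optimize import lsq_linear

rng = np.random.default_rng(42)
A = rng.uniform(0.0, 1.0, size=(200, 10))   # presynaptic activities (samples x synapses)
target = np.prod(A[:, :2], axis=1)          # example nonlinear multivariate target

# Trust-region reflective ("trf") solve of  min ||A w - target||^2  subject to  w >= 0.
res = lsq_linear(A, target, bounds=(0.0, np.inf), method="trf")
w = res.x
print("nonnegative weights:", np.round(w, 3))
print("residual norm:", np.linalg.norm(A @ w - target))
```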