The connection between biological neurons and artificial neural networks is not metaphorical — it's historical. Artificial neural networks were explicitly designed to model how biological neurons compute. McCulloch and Pitts (1943) described the neuron as a logical computing unit before computer science was a formal discipline. Rosenblatt's perceptron (1958) was a direct implementation of simplified neuron dynamics.
Understanding real neurons fills in what the abstraction deliberately omitted: the richness of neural dynamics, the role of neural geometry, the mechanisms of synaptic plasticity, and the reasons why the brain is so much more energy-efficient and robust than silicon implementations. It also explains why neuroscience and machine learning are in an increasingly productive dialogue.
Structure of a Neuron
A neuron is a specialized cell optimized for receiving, integrating, and transmitting electrical signals. Its key structural features:
Dendrites: branching processes that extend from the cell body (soma). They receive synaptic inputs from thousands of other neurons. The complex dendritic tree allows different computations in different branches — evidence suggests individual dendritic branches can function as independent computational units.
Soma (cell body): contains the nucleus and most cellular machinery. Integrates inputs from all dendrites. The axon hillock — where the axon begins — is the site of action potential initiation; it has the highest density of voltage-gated sodium channels and the lowest threshold for firing.
Axon: a long, thin process that transmits the output signal to synaptic terminals. It ranges from millimeters (interneurons) to over a meter (motor neurons to the toes). Many axons are wrapped in myelin — a lipid insulating sheath produced by glial cells — which dramatically increases signal propagation speed.
Synaptic terminals (boutons): at the end of the axon, these specialized structures release neurotransmitters into the synaptic cleft in response to an action potential.
The Neuron as an Integrate-and-Fire Device
In the simplest functional description, a neuron:
- Integrates incoming signals over space (across dendrites) and time
- Fires (generates an action potential) when the integrated input exceeds a threshold
- Resets to resting state and can fire again
This is the biological basis of the artificial neuron: weighted sum of inputs → nonlinear activation function → output. The biological version is more complex, but this abstraction captures the essential logic.
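To make the abstraction concrete, here is a minimal sketch in Python contrasting the two views: the artificial neuron (weighted sum plus nonlinearity) and a leaky integrate-and-fire (LIF) neuron that integrates input over time and resets after each spike. All numbers (time constant, thresholds, input drive) are illustrative choices, not fitted values.

```python
import numpy as np

def artificial_neuron(x, w, b):
    """Weighted sum of inputs -> nonlinear activation (here a hard threshold)."""
    return 1.0 if np.dot(w, x) + b > 0 else 0.0

def lif_spike_times(I, dt=0.1, tau=10.0, v_rest=-70.0, v_thresh=-55.0, v_reset=-70.0):
    """Leaky integrate-and-fire: integrate the input, fire at threshold, reset.

    I is an input drive (mV-equivalent) sampled every dt milliseconds;
    tau is the membrane time constant in ms. Values are illustrative.
    """
    v = v_rest
    spikes = []
    for step, drive in enumerate(I):
        v += dt * (-(v - v_rest) + drive) / tau   # leak toward rest plus input
        if v >= v_thresh:                         # threshold crossing -> spike
            spikes.append(step * dt)
            v = v_reset                           # reset, ready to fire again
    return spikes

# A constant suprathreshold drive produces regular firing (rate coding).
spikes = lif_spike_times(np.full(2000, 20.0))     # 200 ms of input
print(len(spikes), "spikes; first at", spikes[0], "ms")
```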
A neuron operates in two modes simultaneously. The dendritic integration is analog: membrane potentials vary continuously, weighted inputs sum up, and there's a continuous relationship between input strength and membrane depolarization.
The axonal output is digital: either an action potential fires or it doesn't. All action potentials in a neuron have the same amplitude (~100 mV) and duration (~1–2 ms). Information about signal strength is encoded not in amplitude but in firing rate (frequency coding) or the precise timing of spikes (temporal coding).
This analog-to-digital conversion at the axon hillock is what makes the signal robust over long distances — the digital spike doesn't degrade as it travels down the axon, unlike a graded analog potential that attenuates with distance.
The Action Potential: The Spike
The action potential (AP) is the fundamental unit of neural communication — a rapid, stereotyped reversal of membrane potential.
Resting Potential
At rest, the neuron's interior is at approximately −70 mV relative to the outside. This is maintained by:
- K⁺ leak channels: K⁺ flows out down its concentration gradient, leaving negative charge inside
- Na⁺/K⁺-ATPase pump: exports 3 Na⁺ for every 2 K⁺ imported, maintaining the ionic gradients
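These gradients set each ion's equilibrium potential, given by the Nernst equation: E = (RT/zF) ln([ion]out / [ion]in). A quick check in Python, using typical mammalian concentrations (textbook-style values, assumed here for illustration):

```python
import math

def nernst(z, conc_out_mM, conc_in_mM, T=310.15):
    """Nernst equilibrium potential (mV) for an ion of valence z at body temperature."""
    R, F = 8.314, 96485.0   # gas constant (J/mol/K), Faraday constant (C/mol)
    return 1000.0 * (R * T) / (z * F) * math.log(conc_out_mM / conc_in_mM)

# Typical concentrations (mM), an illustrative assumption:
print(f"E_K  ~ {nernst(+1, 5.0, 140.0):.0f} mV")    # about -89 mV, close to rest
print(f"E_Na ~ {nernst(+1, 145.0, 12.0):.0f} mV")   # about +66 mV, pulls the spike upstroke
```

The resting potential sits near E_K because K⁺ leak channels dominate the resting conductance; during the spike, the membrane swings toward E_Na.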
Depolarization and the Voltage-Gated Channels
When synaptic inputs depolarize the membrane past threshold (~−55 mV at the axon hillock):
- Voltage-gated Na⁺ channels open (activation gate opens rapidly): Na⁺ rushes in, reversing the potential to +40 mV
- Repolarization: voltage-gated Na⁺ channels inactivate (a separate inactivation gate closes) + voltage-gated K⁺ channels open → K⁺ flows out, repolarizing the membrane
- Hyperpolarization (undershoot): K⁺ channels are slow to close → membrane briefly drops below resting potential
- Return to rest: K⁺ channels close; Na⁺/K⁺-ATPase restores ionic gradients
Total duration: ~1–2 ms. This is what makes neurons capable of firing hundreds of times per second.
Refractory period: immediately after an AP, Na⁺ channels are in the inactivated state and cannot reopen regardless of membrane potential (absolute refractory period, ~1–2 ms). This enforces a maximum firing rate and ensures APs propagate in only one direction (forward down the axon).
The Hodgkin-Huxley Model
Alan Hodgkin and Andrew Huxley described action potential dynamics using four differential equations that model the voltage- and time-dependent opening of Na⁺ and K⁺ channels. Their model (1952) not only reproduced action potential shape with extraordinary accuracy but also made testable predictions about channel properties that were later confirmed.
C_m * dV/dt = I_ext - g_Na * m³h * (V - E_Na) - g_K * n⁴ * (V - E_K) - g_L * (V - E_L)
dm/dt = α_m(V)(1-m) - β_m(V)m [Na activation gate]
dh/dt = α_h(V)(1-h) - β_h(V)h [Na inactivation gate]
dn/dt = α_n(V)(1-n) - β_n(V)n [K activation gate]
Where m, h, n are gating variables (probabilities of gate being open), and the α/β functions are empirically determined voltage-dependent rate constants. This was a landmark in quantitative biology — a mathematical model that precisely predicted biological behavior from physical principles.
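To see the model run, here is a minimal forward-Euler integration of the four equations in Python, using the standard parameter set in the modern voltage convention (rest near −65 mV). The time step, initial gating values, and injected current are illustrative choices.

```python
import numpy as np

C_m = 1.0                               # membrane capacitance, uF/cm^2
g_Na, g_K, g_L = 120.0, 36.0, 0.3       # maximal conductances, mS/cm^2
E_Na, E_K, E_L = 50.0, -77.0, -54.4     # reversal potentials, mV

# Empirical voltage-dependent rate constants (1/ms), modern convention.
def a_m(V): return 0.1 * (V + 40.0) / (1.0 - np.exp(-(V + 40.0) / 10.0))
def b_m(V): return 4.0 * np.exp(-(V + 65.0) / 18.0)
def a_h(V): return 0.07 * np.exp(-(V + 65.0) / 20.0)
def b_h(V): return 1.0 / (1.0 + np.exp(-(V + 35.0) / 10.0))
def a_n(V): return 0.01 * (V + 55.0) / (1.0 - np.exp(-(V + 55.0) / 10.0))
def b_n(V): return 0.125 * np.exp(-(V + 65.0) / 80.0)

def simulate(I_ext=10.0, t_max=50.0, dt=0.01):
    """Forward-Euler integration of the HH equations under a constant current."""
    V, m, h, n = -65.0, 0.05, 0.6, 0.32          # approximate resting values
    trace = []
    for _ in range(int(t_max / dt)):
        I_Na = g_Na * m**3 * h * (V - E_Na)      # transient sodium current
        I_K  = g_K * n**4 * (V - E_K)            # delayed-rectifier potassium current
        I_L  = g_L * (V - E_L)                   # passive leak
        V += dt * (I_ext - I_Na - I_K - I_L) / C_m
        m += dt * (a_m(V) * (1 - m) - b_m(V) * m)
        h += dt * (a_h(V) * (1 - h) - b_h(V) * h)
        n += dt * (a_n(V) * (1 - n) - b_n(V) * n)
        trace.append(V)
    return np.array(trace)

V = simulate()
print(f"peak ~ {V.max():.0f} mV, undershoot ~ {V.min():.0f} mV")
```

With this constant suprathreshold current, the voltage trace shows repetitive spiking, including the brief hyperpolarizing undershoot described above.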
Synaptic Transmission: Communication Between Neurons
Neurons communicate at synapses — specialized junctions between the presynaptic terminal (axon bouton) and postsynaptic membrane (dendrite or soma).
Chemical Synapses (the majority)
When an AP reaches the presynaptic terminal:
- Voltage-gated Ca²⁺ channels open; Ca²⁺ enters the terminal
- Ca²⁺ triggers vesicle fusion (SNARE proteins mediate this) → neurotransmitter released into the synaptic cleft
- Neurotransmitter diffuses across the 20–30 nm cleft
- Binds postsynaptic receptors → ion channels open (ionotropic) or G-protein signaling activated (metabotropic)
- Postsynaptic potential generated
- Neurotransmitter is cleared: reuptake transporters (for glutamate, dopamine, serotonin) or enzymatic degradation (acetylcholine by AChE)
Excitatory synapses (glutamate): open cation channels (AMPA, NMDA) → depolarize the postsynaptic membrane (EPSP: excitatory postsynaptic potential)
Inhibitory synapses (GABA, glycine): open Cl⁻ channels → hyperpolarize or stabilize the membrane (IPSP: inhibitory postsynaptic potential)
The neuron's output depends on the balance of EPSPs and IPSPs arriving at the axon hillock.
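A toy sketch of that balance: model each postsynaptic potential as an instant jump with exponential decay, sum the contributions at the hillock, and compare against threshold. The amplitudes, the 5 ms decay constant, and the −55 mV threshold are illustrative values.

```python
import numpy as np

def psp(t, t_on, amp, tau=5.0):
    """One postsynaptic potential: instant rise at t_on, exponential decay (toy model)."""
    return np.where(t >= t_on, amp * np.exp(-(t - t_on) / tau), 0.0)

t = np.arange(0.0, 40.0, 0.1)                       # time axis, ms
epsps = psp(t, 5.0, 10.0) + psp(t, 8.0, 10.0)       # two EPSPs summing in time
ipsp  = psp(t, 7.0, -8.0)                           # one well-timed IPSP

for label, v in [("EPSPs alone", -70.0 + epsps), ("EPSPs + IPSP", -70.0 + epsps + ipsp)]:
    crossed = (v >= -55.0).any()                    # threshold at the axon hillock
    print(f"{label}: {'fires' if crossed else 'stays subthreshold'}")
```

Two EPSPs arriving close together reach threshold; a single well-timed IPSP vetoes the spike.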
Key Neurotransmitters
| Neurotransmitter | Receptors | Function |
|---|---|---|
| Glutamate | AMPA, NMDA, mGluR | Excitatory; learning and memory (NMDA) |
| GABA | GABA-A (ionotropic), GABA-B (metabotropic) | Inhibitory; anxiety control |
| Dopamine | D1-D5 (metabotropic) | Reward, motivation, motor control |
| Serotonin | 5-HT1–7 (5-HT3 ionotropic, the rest metabotropic) | Mood, sleep, appetite |
| Acetylcholine | nAChR (ionotropic), mAChR (metabotropic) | Muscle control, attention, memory |
| Norepinephrine | α, β adrenergic | Arousal, attention, stress |
Most psychiatric drugs target neurotransmitter systems: SSRIs block serotonin reuptake; benzodiazepines potentiate GABA-A; antipsychotics block dopamine D2 receptors; stimulants increase dopamine and norepinephrine.
Neural Computation: Beyond Simple Threshold Logic
Real neurons perform more sophisticated computations than simple integrate-and-fire:
Coincidence detection (NMDA receptors): NMDA receptors require simultaneous presynaptic neurotransmitter release AND postsynaptic depolarization to open. They detect temporal coincidence — the neuron responds to correlated inputs, not just summed inputs. This coincidence-detection property is central to Hebbian learning and long-term potentiation (LTP).
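A toy model of this AND-like gate: the channel's open fraction is the product of a ligand-binding term and a voltage-dependent Mg²⁺-unblock term (the Jahr-Stevens unblock curve is borrowed here as a convenient assumption).

```python
import numpy as np

def nmda_open_fraction(glu_bound, V, mg_mM=1.0):
    """Toy NMDA gate: needs BOTH transmitter binding AND depolarization.

    glu_bound is 0 or 1 (ligand bound or not); the voltage term is the
    Jahr-Stevens Mg2+ unblock curve, used here as an assumption.
    """
    mg_unblock = 1.0 / (1.0 + (mg_mM / 3.57) * np.exp(-0.062 * V))
    return glu_bound * mg_unblock

for glu, V in [(0, -70), (1, -70), (0, 0), (1, 0)]:
    print(f"glutamate={glu}, V={V:4d} mV -> open fraction {nmda_open_fraction(glu, V):.2f}")
```

Only the transmitter-plus-depolarization case opens the channel appreciably, which is exactly the coincidence signal that drives Hebbian plasticity.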
Dendritic computation: Dendrites don't just passively sum inputs. They contain voltage-gated channels and can generate local "dendritic spikes." Certain dendritic configurations implement XOR logic — impossible for a single-layer perceptron but achievable with dendritic nonlinearities.
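A schematic sketch (logical, not biophysical) of how two thresholded dendritic subunits give a single neuron XOR: each branch receives one excitatory input plus cross-wise inhibition, and the soma fires if either branch produces a local spike.

```python
def branch_spike(exc, inh):
    """Local dendritic nonlinearity: the branch spikes if excitation beats inhibition."""
    return 1 if exc - inh >= 1 else 0

def two_branch_neuron(x1, x2):
    """Toy XOR from dendritic subunits (a sketch under the assumptions above)."""
    b1 = branch_spike(exc=x1, inh=x2)   # branch 1 computes x1 AND NOT x2
    b2 = branch_spike(exc=x2, inh=x1)   # branch 2 computes x2 AND NOT x1
    return 1 if b1 + b2 >= 1 else 0     # the soma ORs the branch spikes

for x1 in (0, 1):
    for x2 in (0, 1):
        print(f"{x1} XOR {x2} -> {two_branch_neuron(x1, x2)}")
```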
Rate coding vs. temporal coding: Different neural circuits use different coding schemes. Sensory systems often use rate coding (firing rate encodes stimulus intensity). Some circuits use temporal coding, where precise spike timing (relative to other spikes or neural oscillations) carries information.
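To illustrate the difference: the two synthetic spike trains below have the same mean rate, so a pure rate code cannot tell them apart, while a timing statistic (here the coefficient of variation of inter-spike intervals) can.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two spike trains over 1 s with the SAME mean rate but different timing.
regular   = np.arange(0.0, 1.0, 0.05)                  # 20 precisely timed spikes
irregular = np.sort(rng.uniform(0.0, 1.0, size=20))    # 20 randomly timed spikes

for name, spikes in [("regular", regular), ("irregular", irregular)]:
    rate = len(spikes) / 1.0                           # rate code: count per window
    isi = np.diff(spikes)                              # inter-spike intervals
    cv = isi.std() / isi.mean()                        # timing irregularity
    print(f"{name}: rate = {rate:.0f} Hz, ISI CV = {cv:.2f}")
```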
Inhibitory circuits: Local interneurons (inhibitory neurons) sculpt the responses of excitatory neurons. In feedforward inhibition, an afferent drives both an excitatory neuron and an inhibitory interneuron that in turn suppresses it, creating a precise temporal window for integration. Feedback inhibition limits firing rates and creates oscillatory dynamics.
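A sketch of the feedforward case, reusing the toy PSP model from above: the afferent excites the target directly and, one synapse later, inhibits it, so net depolarization survives only during the synaptic delay. The 2 ms delay and matched amplitudes are illustrative assumptions.

```python
import numpy as np

def psp(t, t_on, amp, tau=5.0):
    """One postsynaptic potential: instant rise at t_on, exponential decay (toy model)."""
    return np.where(t >= t_on, amp * np.exp(-(t - t_on) / tau), 0.0)

t = np.arange(0.0, 30.0, 0.1)                       # time axis, ms
# Direct excitation at 5 ms; disynaptic inhibition arrives ~2 ms later.
v = -70.0 + psp(t, 5.0, 10.0) + psp(t, 7.0, -10.0)
window = t[v >= -62.0]                              # span of strong depolarization
print(f"integration window: {window[0]:.1f} to {window[-1]:.1f} ms")
```

Inputs arriving inside that narrow window can sum toward threshold; later ones are clamped by the inhibition.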
Neuronal Types
The nervous system contains hundreds of distinct cell types. Key distinctions:
Excitatory vs. inhibitory: In the cortex, ~80% of neurons are excitatory (pyramidal neurons, granule cells using glutamate); ~20% are inhibitory interneurons (using GABA). The balance of E/I is tightly regulated — perturbations cause epilepsy (too excitatory) or coma (too inhibitory).
Projection neurons vs. interneurons: Projection neurons (pyramidal cells, Purkinje cells) send long axons to distant targets. Interneurons are local circuit elements.
Glial cells (not neurons but often forgotten): Astrocytes regulate synaptic transmission, provide metabolic support, and help maintain the blood-brain barrier. Oligodendrocytes produce myelin. Microglia are the brain's immune cells. Glia and neurons are roughly equal in number (about 1:1; the old "10:1 glia" ratio is a myth), and glia are increasingly recognized as active participants in neural computation.
Why This Matters for Computational Neuroscience
Understanding real neurons explains both why artificial neural networks work and where they diverge from biology:
- ANNs use continuous activations (not spikes); spiking neural networks (SNNs) are closer to biology and more energy-efficient on neuromorphic hardware
- Biological neurons have rich temporal dynamics (adaptation, bursting, rebound); most ANNs use memoryless activations
- Biological synapses are local and online learners (Hebbian, STDP); ANNs use backpropagation with global error signals
- Biology uses sparse representations (few neurons active at any moment); ANNs are typically dense
These differences motivate ongoing work in neuromorphic computing and biologically plausible learning rules — and explain why the brain remains a computationally interesting existence proof for very different approaches to machine intelligence.