How does experience change the brain? How can you hold a phone number in mind for 30 seconds, yet remember your first day of school 50 years later? Why do some memories form instantly while others require repetition? The answers lie in synaptic plasticity — the ability of synaptic connections to change their strength based on activity patterns.
Plasticity is the mechanistic bridge between neural activity and learned behavior. It is also the molecular basis for the computational power of neural circuits: a fixed-weight network can only implement functions encoded in its initial wiring; a plastic network can learn.
Hebb's Postulate
In 1949, Donald Hebb proposed a mechanism for learning:
"When an axon of cell A is near enough to excite cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A's efficiency, as one of the cells firing B, is increased."
Colloquially: "neurons that fire together, wire together."
This simple principle — strengthen connections between co-active neurons — has been remarkably productive. It captures the correlation-based learning that underlies associative memory, classical conditioning, and many other learning paradigms. And it has a direct molecular implementation: long-term potentiation (LTP).
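As a concrete illustration, here is a minimal Hebbian weight update in NumPy (a sketch; the learning rate and variable names are illustrative rather than taken from any specific model): the weight between two units grows in proportion to the product of their activities.

```python
import numpy as np

def hebbian_update(w, pre, post, lr=0.01):
    """One Hebbian step: strengthen each connection in proportion to
    the correlation between presynaptic and postsynaptic activity.

    w    -- (n_post, n_pre) weight matrix
    pre  -- (n_pre,)  presynaptic activity vector
    post -- (n_post,) postsynaptic activity vector
    lr   -- learning rate (illustrative value)
    """
    return w + lr * np.outer(post, pre)  # "fire together, wire together"
```

Left unchecked, this rule only ever strengthens co-active connections; bounded variants (such as Oja's rule) and the homeostatic mechanisms discussed later keep it stable.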
Long-Term Potentiation (LTP)
LTP is the sustained increase in synaptic strength following high-frequency stimulation — the best-studied cellular correlate of memory.
Induction
The canonical LTP mechanism at hippocampal CA3→CA1 synapses:
- A high-frequency burst of presynaptic activity releases glutamate
- Glutamate binds both AMPA receptors (which open immediately and carry depolarizing current) and NMDA receptors (blocked at rest by Mg²⁺ lodged in the channel pore)
- The sustained AMPA-mediated depolarization removes the Mg²⁺ block from NMDA receptors
- NMDA receptors open → Ca²⁺ influx into the postsynaptic spine
The NMDA receptor is the coincidence detector: it requires both presynaptic glutamate release AND sufficient postsynaptic depolarization to open. This implements Hebb's rule at the molecular level — the synapse is strengthened only when pre- and post-synaptic activity coincide.
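The voltage dependence of the Mg²⁺ block can be captured by a simple phenomenological function. The sketch below uses constants from the commonly cited Jahr and Stevens (1990) fit; treat the exact numbers as approximate.

```python
import numpy as np

def nmda_open_fraction(v_mv, mg_mM=1.0):
    """Fraction of NMDA receptor conductance not blocked by Mg2+,
    as a function of membrane potential (mV), using the
    Jahr & Stevens (1990) phenomenological fit."""
    return 1.0 / (1.0 + (mg_mM / 3.57) * np.exp(-0.062 * v_mv))

print(nmda_open_fraction(-70.0))  # near rest: ~0.04 (mostly blocked)
print(nmda_open_fraction(-20.0))  # depolarized: ~0.5 (block largely relieved)
```

The steep increase with depolarization is what makes the receptor behave as an AND gate on glutamate release and postsynaptic voltage.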
Expression
Ca²⁺ influx activates signaling cascades:
- CaMKII (Ca²⁺/calmodulin-dependent protein kinase II): phosphorylates AMPA receptors → increased conductance
- PKA, PKC: additional kinases that phosphorylate synaptic proteins
- AMPA receptor trafficking: more AMPA receptors are inserted into the synapse from internal stores → increased synaptic current
Early LTP (E-LTP): lasts 1–3 hours; depends on protein phosphorylation; doesn't require new protein synthesis.
Late LTP (L-LTP): lasts days to weeks; requires gene transcription and new protein synthesis (via CREB activation); structural changes to the synapse (spine enlargement, new spine formation).
LTD: The Flip Side
Long-term depression (LTD) weakens synapses. At the same CA1 synapses, low-frequency stimulation (1 Hz for 15 min) induces LTD. The molecular difference: a lower, more sustained Ca²⁺ rise (versus the high, transient rise during LTP induction) preferentially activates phosphatases (PP1, calcineurin) rather than kinases → AMPA receptor internalization → weakened synapse.
The balance between LTP and LTD allows bidirectional modification: synapses can be potentiated or depressed based on activity patterns.
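This Ca²⁺-level logic can be written as a toy bidirectional rule, loosely in the spirit of calcium-based plasticity models (the thresholds and rates below are illustrative, not measured values):

```python
def calcium_dependent_update(w, ca, theta_ltd=0.3, theta_ltp=0.6, lr=0.05):
    """Bidirectional plasticity driven by postsynaptic Ca2+ level (arbitrary units).

    ca < theta_ltd              -> no change
    theta_ltd <= ca < theta_ltp -> depression (phosphatase-dominated regime)
    ca >= theta_ltp             -> potentiation (kinase-dominated regime)
    """
    if ca < theta_ltd:
        return w
    if ca < theta_ltp:
        return w - lr * (ca - theta_ltd)  # moderate Ca2+: LTD
    return w + lr * (ca - theta_ltp)      # high Ca2+: LTP
```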
Spike-Timing-Dependent Plasticity (STDP)
A more precise form of Hebbian plasticity: the relative timing of pre- and postsynaptic spikes determines whether a synapse is potentiated or depressed.
- If presynaptic spike precedes postsynaptic spike (by 0–50 ms): LTP — "A caused B to fire"
- If postsynaptic spike precedes presynaptic spike (by 0–50 ms): LTD — "B fired before A; A doesn't cause B"
This asymmetric timing window implements a causal learning rule: synapses are strengthened when the presynaptic neuron predicts postsynaptic firing, and weakened when it doesn't.
STDP is believed to underlie sequence learning, predictive coding, and the temporal precision of neural representations. It requires millisecond-scale coordination — suggesting that spike timing (not just firing rate) carries information in some circuits.
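The asymmetric timing window is commonly modeled as a pair of exponentials. A minimal sketch (the time constant and amplitudes are typical illustrative choices, roughly 20 ms and about 1% weight change):

```python
import numpy as np

def stdp_dw(dt_ms, a_plus=0.01, a_minus=0.012, tau_ms=20.0):
    """Weight change as a function of spike-timing difference dt_ms = t_post - t_pre.

    dt_ms > 0: presynaptic spike came first -> potentiation
    dt_ms < 0: postsynaptic spike came first -> depression
    """
    if dt_ms > 0:
        return a_plus * np.exp(-dt_ms / tau_ms)
    return -a_minus * np.exp(dt_ms / tau_ms)

print(stdp_dw(+10.0))  # pre 10 ms before post: positive (LTP)
print(stdp_dw(-10.0))  # post 10 ms before pre: negative (LTD)
```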
Memory Consolidation: From Synapse to System
A single memory is not stored in a single synapse. Memories involve distributed patterns of synaptic weights across circuits. How do these transient activity patterns become durable memories?
Molecular Consolidation
Immediately after learning: early LTP (protein phosphorylation, receptor trafficking)
Hours later: gene transcription and new protein synthesis are required for long-term storage. Memory can be disrupted by blocking protein synthesis immediately after learning — the "consolidation window." Key molecular players: CREB (transcription factor), BDNF (brain-derived neurotrophic factor), Arc/Arg3.1 (immediate early gene, required for LTD and spine remodeling).
Systems Consolidation
The hippocampus is required for encoding new episodic memories but not for retrieving old ones. Over weeks to years, memories are transferred to the cortex (systems consolidation). Sleep plays a critical role: during slow-wave sleep, hippocampal sharp-wave ripples replay recently formed memories, driving cortical consolidation.
Standard consolidation theory: memory → hippocampus → (over weeks/months) → neocortex. The hippocampus gradually becomes dispensable as the neocortical representation becomes sufficient.
Multiple trace theory: contextually rich memories always require the hippocampus; only semantic abstractions become fully cortical.
Homeostatic Plasticity: Keeping Networks Stable
Hebbian plasticity is unstable by itself. If active synapses get stronger, they drive more activity, which strengthens synapses further — a positive feedback loop leading to runaway activity or saturation. How does the brain maintain stable function despite ongoing plasticity?
Homeostatic plasticity mechanisms counter-regulate activity to maintain a target firing rate:
Synaptic scaling: prolonged inactivity → all synapses on a neuron scale up proportionally (more AMPA receptors). Prolonged hyperactivity → all synapses scale down. This is multiplicative — it preserves the relative weights while rescaling the total.
Intrinsic excitability changes: chronic under-activity → reduced K⁺ channels (raises excitability). Chronic over-activity → increased K⁺ channels (reduces excitability).
These homeostatic mechanisms operate on timescales of hours to days, providing a slow stabilizing brake on fast Hebbian changes. The interplay between Hebbian (destabilizing, forms memories) and homeostatic (stabilizing, maintains function) plasticity is a major topic in theoretical neuroscience — the "Hebbian instability problem."
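A minimal sketch of multiplicative synaptic scaling (the target rate and gain below are illustrative): every input weight onto a neuron is multiplied by the same factor, nudging the neuron toward a target firing rate while preserving the relative weights that encode what it has learned.

```python
import numpy as np

def synaptic_scaling(w, observed_rate, target_rate=5.0, gain=0.1):
    """Multiplicative homeostatic scaling of a neuron's input weights.

    Below-target activity scales all weights up; above-target activity
    scales them down. Relative weights are preserved.
    """
    factor = 1.0 + gain * (target_rate - observed_rate) / target_rate
    return w * factor

w = np.array([0.2, 0.5, 1.0])
print(synaptic_scaling(w, observed_rate=2.0))   # under-active: weights scale up
print(synaptic_scaling(w, observed_rate=10.0))  # over-active: weights scale down
```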
Structural Plasticity
Beyond synaptic weight changes, the brain also changes its physical structure:
Dendritic spine dynamics: spines appear, enlarge, shrink, and disappear in response to activity. LTP is associated with spine enlargement; LTD with spine shrinkage. New spines form during learning. Two-photon in vivo imaging has revealed that ~5–10% of dendritic spines are replaced per month in the adult cortex.
Axonal sprouting: after injury or in response to sustained activity, axons can grow new branches and form new synapses.
Adult neurogenesis: in the hippocampus (dentate gyrus) and olfactory bulb, new neurons are born in adulthood in rodents and other mammals. These newborn neurons initially have high excitability and may be particularly important for encoding new memories. Evidence for meaningful adult neurogenesis in the human hippocampus is debated.
Relevance for AI
Plasticity mechanisms have directly inspired or can improve AI:
Hebbian learning rules are used in unsupervised and self-supervised learning — correlation-based updates without labeled data.
STDP has been implemented in spiking neural network simulations and on neuromorphic hardware; Intel's Loihi, for example, supports on-chip local plasticity rules of this kind (IBM's TrueNorth is another widely cited neuromorphic chip). STDP provides online, local learning: updates depend only on pre- and postsynaptic signals at each synapse, with no global error propagation.
Catastrophic forgetting: artificial neural networks trained on a new task typically forget old ones — they overwrite the weights encoding prior learning. The brain avoids this via multiple mechanisms: hippocampal-cortical complementary learning systems, inhibitory interneurons that protect specific weight patterns, and the slow cortical consolidation process. Continual learning in AI (avoiding catastrophic forgetting) is an active research area, increasingly drawing on neuroscientific principles.
Memory replay: reinforcement learning algorithms that replay past experiences to stabilize learning (experience replay in DQN) were inspired by the hippocampal sharp-wave replay that consolidates memories during sleep.
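A minimal sketch of such a replay buffer (structure and names are illustrative, not tied to any particular library): transitions are stored as the agent acts, and learning updates are driven by random minibatches drawn from the buffer, which decorrelates consecutive experiences much as offline hippocampal replay revisits them out of order.

```python
import random
from collections import deque

class ReplayBuffer:
    """Fixed-capacity store of past transitions; oldest entries are evicted first."""

    def __init__(self, capacity=10_000):
        self.buffer = deque(maxlen=capacity)

    def add(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size=32):
        # Uniform random sampling breaks the temporal correlations of online experience.
        return random.sample(self.buffer, min(batch_size, len(self.buffer)))
```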
The gap between biological and artificial learning is shrinking, partly because AI researchers are paying closer attention to the principles that have made biological neural networks so efficient at stable, continual, low-energy learning.