Part 7 · Chapter 7.2 · 12 min read

Biological Neural Networks

How neurons are organized into circuits, regions, and systems — and how connectivity, oscillations, and population dynamics give rise to cognition.

neuroscience · neural circuits · cortex · connectome

A single neuron computes almost nothing interesting. A neuron embedded in a circuit — receiving inputs from thousands of other neurons and projecting to thousands of targets — becomes part of systems that recognize faces, remember events, generate language, and regulate emotion. The leap from single-cell biophysics to cognition runs through neural circuits — the structured connectivity patterns that transform inputs into behavior.

This chapter covers how neurons are organized into circuits and brain regions, how those regions are interconnected, and what computational properties emerge from network-level dynamics. These concepts underlie computational neuroscience, connectomics, and the interpretation of neuroimaging data.

Cortical Organization: Columns and Layers

The cerebral cortex — the outermost layer of the brain, responsible for most of what we'd call "thinking" — has a remarkably consistent structure across mammals.

Laminar structure: the cortex is organized into 6 layers (I–VI), distinguished by cell density, cell types, and connectivity:

  • Layer IV: primary input from the thalamus (sensory relay station)
  • Layer V: output to subcortical structures (motor commands, brainstem, spinal cord)
  • Layer VI: output to the thalamus (feedback)
  • Layers II/III: intracortical connections (to other cortical areas)

Columnar structure: vertically, the cortex is organized into cortical columns — groups of neurons spanning all layers that process similar information. In primary visual cortex, columns respond to lines of the same orientation; in primary auditory cortex, to similar frequencies. This vertical arrangement is known as functional columnar organization.

The cortical column is sometimes called the basic computational unit of the cortex — though its precise definition and universality are debated.

Brain Areas and Their Functions

The brain is divided into specialized regions, each with characteristic cell types, connectivity patterns, and functions:

| Region | Function | Key cell types |
| --- | --- | --- |
| Primary sensory cortices (V1, A1, S1) | Basic sensory processing | Pyramidal neurons, stellate cells |
| Association cortices (PFC, PPC) | Integration, planning, working memory | Layer II/III pyramidal neurons |
| Hippocampus | Episodic memory formation, spatial navigation | CA1/CA3 pyramidal cells, granule cells |
| Cerebellum | Motor coordination, procedural learning | Purkinje cells, granule cells |
| Basal ganglia (striatum, GPe, GPi, SNr) | Action selection, reward learning | Medium spiny neurons, dopaminergic neurons |
| Amygdala | Fear learning, emotional responses | Pyramidal neurons, interneurons |
| Thalamus | Relay station between cortex and subcortex | Relay neurons, reticular neurons |
| Brainstem | Vital functions (breathing, heart rate), sleep | Diverse nuclei |

This list is a massive oversimplification — each region contains multiple subregions with distinct functions — but provides a map of the major systems.

Neural Circuits: Basic Motifs

Like gene regulatory networks, neural circuits use recurring structural motifs:

Feedforward excitation: A→B→C. Signal propagates from input to output. Sensory processing pathways use hierarchical feedforward circuits, each stage abstracting the representation further. V1 detects edges → V4 detects shapes → IT cortex detects objects.
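The feedforward motif above can be sketched numerically. The layer sizes and random weights below are purely illustrative (nothing anatomical): each stage applies a linear map and a nonlinearity, shrinking the representation as it abstracts.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

# Three hypothetical stages standing in for V1 -> V4 -> IT:
# each stage is a linear map plus a nonlinearity, and each
# reduces dimensionality, abstracting the representation further.
W1 = rng.normal(size=(64, 128))   # "V1": 128 inputs -> 64 features
W2 = rng.normal(size=(16, 64))    # "V4": 64 -> 16
W3 = rng.normal(size=(4, 16))     # "IT": 16 -> 4 "object" units

def hierarchy(stimulus):
    h1 = relu(W1 @ stimulus)      # A -> B
    h2 = relu(W2 @ h1)            # B -> C
    return relu(W3 @ h2)          # C -> output

out = hierarchy(rng.normal(size=128))
```

The point is structural, not functional: signal flows one way, and each stage sees only the previous stage's output.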

Feedback (recurrent) connections: Higher areas send projections back to lower areas. Most cortical connections are recurrent — roughly equal numbers of feedforward and feedback synapses. Feedback connections may carry predictions (predictive coding framework) or attention signals.

Lateral inhibition: Excitatory neurons activate local inhibitory interneurons, which then inhibit neighboring excitatory neurons. This creates a "winner-take-all" competition among nearby neurons, sharpening representations and creating contrast enhancement. Fundamental to sensory tuning.
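A toy rate-model sketch of this competition (all parameters are illustrative): each unit is driven by its input and inhibited in proportion to the summed activity of the other units, and iterating the dynamics leaves only the most strongly driven unit active.

```python
import numpy as np

def winner_take_all(drive, k=1.5, dt=0.2, steps=300):
    """Each unit is excited by its input and inhibited (via an implicit
    interneuron pool) by the summed activity of all other units."""
    r = drive.copy()
    for _ in range(steps):
        others = r.sum() - r                         # everyone else's activity
        r = np.maximum(r + dt * (drive - r - k * others), 0.0)  # rectified rates
    return r

drive = np.array([0.2, 1.0, 0.9, 0.1])
out = winner_take_all(drive)
print(out.round(2))   # only the most strongly driven unit survives
```

With weaker inhibition (smaller `k`) the same circuit merely sharpens the activity profile instead of silencing the losers — contrast enhancement rather than winner-take-all.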

Oscillatory circuits: excitatory-inhibitory loops generate rhythmic activity. PING (Pyramidal-Interneuron Network Gamma) oscillations arise from E-I dynamics in the gamma band (~30–80 Hz) and are thought to coordinate information binding.
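A linearized sketch of such an E-I loop (parameter values are illustrative): the excitatory population drives the inhibitory one, which feeds back negatively, and that delayed negative feedback produces a gamma-band rhythm whose frequency is set by the coupling strength.

```python
import numpy as np

# Linearized E-I loop: E drives I, I suppresses E. The delayed negative
# feedback produces a rhythm; coupling w sets the frequency (~40 Hz here,
# gamma band) and the leak rate sets how fast the oscillation damps.
w = 2 * np.pi * 40.0      # coupling strength -> ~40 Hz rhythm
leak = 5.0                # leak (damping) rate, 1/s
dt, T = 1e-4, 0.1         # simulate 0.1 s at 0.1 ms steps

E, I = 1.0, 0.0
E_trace = []
for _ in range(int(T / dt)):
    dE = -leak * E - w * I    # inhibition suppresses E
    dI = -leak * I + w * E    # excitation drives I
    E, I = E + dt * dE, I + dt * dI
    E_trace.append(E)

E_trace = np.array(E_trace)
# Estimate the frequency from zero crossings of the E trace.
crossings = np.sum(np.diff(np.sign(E_trace)) != 0)
freq_hz = crossings / (2 * T)
print(round(freq_hz))  # → 40
```

Real PING involves spiking neurons and nonlinear synapses; this linear caricature only shows why an E-I loop oscillates at all.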

The Connectome

The connectome is the complete map of all neural connections in a nervous system. It is to neuroscience what the genome is to genetics — a complete wiring diagram.

Current status:

  • C. elegans (302 neurons, ~7000 synapses): fully mapped in 1986 by White et al. The first and still most complete connectome.
  • Drosophila larva (~3,000 neurons): completed by the Zlatic and Cardona labs (2023)
  • Drosophila adult (~140,000 neurons): FlyWire connectome completed by the Seung and Murthy labs with a large consortium (2024) — a landmark
  • Mouse (~70 million neurons in the whole brain; full connectome): ongoing work at the Allen Institute and collaborators; cubic-millimeter portions mapped at nanometer resolution via electron microscopy
  • Human (~86 billion neurons): decades away; the challenge scales with neuron count

Connectomics and electron microscopy

Mapping synaptic-resolution connectomes requires electron microscopy (EM) — the synaptic cleft is only ~20 nm wide, far below the resolution of light microscopy. A cubic millimeter of mouse cortex contains ~50,000 neurons and ~500 million synapses. Mapping it generates on the order of a petabyte of image data.
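A back-of-envelope check on that data volume, assuming a typical EM voxel size of 4 × 4 × 40 nm at one byte per voxel (the exact resolution varies by project):

```python
# Volume of imaged tissue: 1 mm^3, expressed in nanometers.
mm_in_nm = 1_000_000

# Voxel grid at an assumed 4 x 4 x 40 nm EM resolution.
voxels = (mm_in_nm / 4) * (mm_in_nm / 4) * (mm_in_nm / 40)

bytes_total = voxels * 1          # 8-bit grayscale, 1 byte per voxel
petabytes = bytes_total / 1e15
print(f"{petabytes:.1f} PB")      # about 1.6 PB for one cubic millimeter
```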

The connectomics field uses machine learning to automatically segment neurons in EM image stacks — a massive annotation challenge that has driven development of computer vision and active-learning methods. Google's connectomics team and academic labs have produced landmark large-scale EM reconstructions in the last few years.

Neural Oscillations: The Brain's Rhythms

The brain exhibits rhythmic electrical activity at multiple frequency bands, detectable by EEG (electroencephalography):

| Band | Frequency | Associated states/functions |
| --- | --- | --- |
| Delta | 0.5–4 Hz | Deep sleep, slow-wave sleep |
| Theta | 4–8 Hz | Memory encoding, navigation (hippocampus), drowsiness |
| Alpha | 8–12 Hz | Relaxed wakefulness, visual cortex inhibition |
| Beta | 12–30 Hz | Motor activity, active concentration |
| Gamma | 30–100 Hz | Active processing, attention, binding |
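These bands can be separated computationally. A minimal sketch using an FFT on a synthetic signal (the band edges follow the table above; this is a toy, not a substitute for proper spectral estimation):

```python
import numpy as np

fs = 256.0                           # sampling rate, Hz
t = np.arange(0, 4.0, 1 / fs)        # 4 s of synthetic "EEG"

# A 10 Hz alpha rhythm plus weaker 40 Hz gamma and broadband noise.
rng = np.random.default_rng(0)
x = (np.sin(2 * np.pi * 10 * t)
     + 0.3 * np.sin(2 * np.pi * 40 * t)
     + 0.2 * rng.standard_normal(t.size))

freqs = np.fft.rfftfreq(t.size, 1 / fs)
power = np.abs(np.fft.rfft(x)) ** 2

bands = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 12),
         "beta": (12, 30), "gamma": (30, 100)}
band_power = {name: power[(freqs >= lo) & (freqs < hi)].sum()
              for name, (lo, hi) in bands.items()}
strongest = max(band_power, key=band_power.get)
print(strongest)  # alpha
```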

Neural oscillations arise from E-I circuit dynamics and serve several proposed functions:

  • Temporal binding: gamma oscillations may synchronize activity across brain areas, "binding" features of an object processed in separate areas (the binding problem)
  • Phase coding: information encoded in the phase of spikes relative to an oscillation cycle (place cells in hippocampus fire at specific theta phases)
  • Communication: oscillatory synchrony between areas may enable selective communication (coherence as a gate)
  • Memory consolidation: slow oscillations during sleep are thought to replay recently formed memories, transferring them from hippocampus to cortex
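Phase coding is easy to illustrate with synthetic numbers. The sketch below assigns each spike a phase within the ongoing theta cycle; the spike times are invented so that each spike arrives slightly earlier in the cycle than the last — the phase-precession pattern reported for hippocampal place cells. (A real analysis would extract the oscillation's instantaneous phase, e.g., via the Hilbert transform.)

```python
import numpy as np

theta_freq = 8.0              # Hz, hippocampal theta
period = 1.0 / theta_freq     # 125 ms cycle

# Invented spike times: one spike per theta cycle, each arriving a
# little earlier in the cycle than the last (phase precession).
spike_times = np.array([0.015, 0.135, 0.255, 0.375])   # seconds

# Phase of each spike within its theta cycle, in [0, 2*pi).
phases = 2 * np.pi * (spike_times % period) / period
```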

Population Coding: Representations in Ensembles

Individual neurons respond to specific features (a V1 neuron might respond to a line oriented at 45°), but representations are distributed across populations, which makes them far more reliable:

Population vector: the pattern of activity across many neurons encodes a stimulus. Even if each individual neuron is noisy, the population average can be very precise. A population of orientation-tuned V1 neurons collectively encodes orientation with much higher precision than any single neuron.
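A sketch of population-vector decoding for orientation (the tuning curves, noise level, and neuron count are all illustrative): each neuron votes with a unit vector at twice its preferred orientation — twice, because orientation is 180°-periodic — and the angle of the summed vote is the estimate.

```python
import numpy as np

rng = np.random.default_rng(0)
n_neurons = 100
# Preferred orientations tile [0, pi).
preferred = np.linspace(0, np.pi, n_neurons, endpoint=False)

def population_response(theta, noise=0.3):
    # Bell-shaped tuning on the doubled angle (orientation is pi-periodic),
    # plus independent noise on every neuron.
    clean = np.exp(np.cos(2 * (theta - preferred)) / 0.5)
    return clean + noise * rng.standard_normal(n_neurons)

def decode(r):
    # Population-vector readout: sum complex "votes" at twice the
    # preferred orientation, then halve the resulting angle.
    z = np.sum(r * np.exp(2j * preferred))
    return (np.angle(z) / 2) % np.pi

theta_true = np.deg2rad(45)
estimates = np.array([decode(population_response(theta_true))
                      for _ in range(200)])
err = np.rad2deg(np.mean(np.abs(estimates - theta_true)))
```

Despite per-neuron noise comparable to the tuning-curve baseline, the pooled estimate lands within a few degrees of the true orientation.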

Dimensionality reduction: recording from many neurons simultaneously (multi-electrode arrays, calcium imaging) produces high-dimensional activity vectors. Principal component analysis (PCA), t-SNE, and UMAP are used to visualize low-dimensional structure in neural population activity. These manifolds often have interpretable structure: in motor cortex, the activity trajectory during arm movement traces out a smooth curve in PC space.
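A minimal PCA sketch on simulated population activity (the latent loop and random embedding are invented for illustration): a smooth 2-D trajectory embedded in 80 "neurons" is recovered as the top two principal components.

```python
import numpy as np

rng = np.random.default_rng(1)
T, n_neurons = 500, 80
t = np.linspace(0, 2 * np.pi, T)

# Latent 2-D trajectory (a smooth loop), embedded in 80-D "neural" space
# through a random linear map, plus observation noise.
latents = np.column_stack([np.cos(t), np.sin(t)])      # (T, 2)
mixing = rng.normal(size=(2, n_neurons))               # random embedding
activity = latents @ mixing + 0.1 * rng.standard_normal((T, n_neurons))

# PCA via SVD of the mean-centered activity matrix.
X = activity - activity.mean(axis=0)
U, S, Vt = np.linalg.svd(X, full_matrices=False)
var_explained = S**2 / np.sum(S**2)
print(f"first two PCs explain {var_explained[:2].sum():.0%} of variance")
```

The high-dimensional recording is effectively two-dimensional — the situation dimensionality-reduction methods exploit in real population data.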

Decoders: given population activity, can you decode what the animal was seeing, hearing, or planning? Machine learning decoders trained on neural data can do this with remarkable accuracy — the basis of brain-computer interfaces (covered in Chapter 7.5).
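A sketch of the simplest possible decoder — nearest-centroid classification on simulated population responses to two stimuli (all numbers invented). Real BCI decoders are far more sophisticated, but the logic is the same: learn each condition's mean response, then assign new trials to the closest mean.

```python
import numpy as np

rng = np.random.default_rng(2)
n_neurons, n_trials = 50, 100

# Two stimulus conditions with different mean population responses.
mu_a = rng.normal(size=n_neurons)
mu_b = rng.normal(size=n_neurons)
trials_a = mu_a + 0.8 * rng.standard_normal((n_trials, n_neurons))
trials_b = mu_b + 0.8 * rng.standard_normal((n_trials, n_neurons))

# Fit on half the trials, test on the held-out half.
train_a, test_a = trials_a[:50], trials_a[50:]
train_b, test_b = trials_b[:50], trials_b[50:]
c_a, c_b = train_a.mean(axis=0), train_b.mean(axis=0)

def decode(r):
    return "A" if np.linalg.norm(r - c_a) < np.linalg.norm(r - c_b) else "B"

correct = (sum(decode(r) == "A" for r in test_a)
           + sum(decode(r) == "B" for r in test_b))
accuracy = correct / 100
```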

The Default Mode Network and Brain-Wide Connectivity

Functional MRI measures blood-oxygen-level-dependent (BOLD) signals as a proxy for neural activity. Correlating activity across brain regions during rest reveals resting-state networks — sets of regions that fluctuate together even without explicit tasks:

  • Default Mode Network (DMN): medial prefrontal cortex, posterior cingulate, angular gyrus — active during rest and self-referential thought; suppressed during focused attention
  • Salience Network: anterior insula, anterior cingulate — detects salient stimuli and switches between DMN and task networks
  • Frontoparietal Network: lateral prefrontal cortex, posterior parietal — working memory, cognitive control

These networks are disrupted in psychiatric disorders: DMN hyperconnectivity in depression; salience network dysfunction in schizophrenia; reduced frontoparietal connectivity in ADHD.

Graph theory of the connectome: treating brain regions as nodes and functional correlations as edges allows calculation of network metrics — hubs (highly connected regions), clustering coefficients, path lengths. The brain shows "small-world" topology: high local clustering (efficient local processing) and short global path lengths (efficient global integration), similar to other efficient real-world networks.
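These metrics are straightforward to compute. The sketch below builds a small ring lattice with a few long-range shortcuts — the Watts-Strogatz small-world recipe — and measures its clustering coefficient and mean shortest-path length (graph size and shortcut choices are arbitrary illustrations):

```python
from collections import deque
import itertools

def clustering(adj):
    """Average local clustering coefficient."""
    total = 0.0
    for v, nbrs in adj.items():
        if len(nbrs) < 2:
            continue
        links = sum(1 for a, b in itertools.combinations(nbrs, 2)
                    if b in adj[a])
        total += links / (len(nbrs) * (len(nbrs) - 1) / 2)
    return total / len(adj)

def avg_path_length(adj):
    """Mean shortest-path length over all node pairs, via BFS."""
    total, pairs = 0, 0
    for src in adj:
        dist = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += sum(d for node, d in dist.items() if node != src)
        pairs += len(dist) - 1
    return total / pairs

# Ring lattice (high clustering) plus a few long-range shortcuts
# (short global paths): the small-world combination.
n, k = 20, 4
adj = {i: set() for i in range(n)}
for i in range(n):
    for j in range(1, k // 2 + 1):
        adj[i].add((i + j) % n); adj[(i + j) % n].add(i)
        adj[i].add((i - j) % n); adj[(i - j) % n].add(i)
for a, b in [(0, 10), (3, 14), (6, 17)]:   # shortcut edges
    adj[a].add(b); adj[b].add(a)

print(round(clustering(adj), 2), round(avg_path_length(adj), 2))
```

Clustering stays high (each node's neighbors are mostly neighbors of each other), while the shortcuts keep the average path between any two nodes short — the signature the brain's functional networks share.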

Why Neural Network Architecture Matters for AI

The hierarchical feedforward architecture of sensory cortex directly influenced deep convolutional neural networks (CNNs). Hubel and Wiesel's work on V1 simple and complex cells (Nobel Prize, 1981) showed that visual processing builds invariant representations by hierarchically pooling and combining simpler features — precisely the strategy convolutional architectures adopt.

Current AI research draws from neuroscience in:

  • Predictive coding: the brain as a prediction machine that suppresses expected input and passes only prediction errors — implemented in some generative model architectures
  • Working memory: the prefrontal cortex maintains information over delays — possibly through sustained activity maintained by recurrent connections; LSTM and attention mechanisms address similar computational problems
  • Reinforcement learning: dopamine signals in the basal ganglia implement something remarkably like temporal difference learning (the Schultz experiments showing dopamine encodes reward prediction errors were pivotal for RL theory)
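The dopamine analogy in the last point can be sketched with tabular TD(0) learning (a toy episodic task, not a model of any specific experiment): a cue at step 0 is followed by a reward at step 4. Early in training the TD error — the "dopamine-like" signal — spikes at the reward; as the value function learns, the reward-time error vanishes and a transient error appears at the cue, qualitatively the Schultz pattern.

```python
import numpy as np

alpha, n_steps = 0.2, 5
rewards = np.zeros(n_steps); rewards[-1] = 1.0   # reward at the last step
V = np.zeros(n_steps + 1)                        # V[n_steps] is terminal (0)

def run_trial():
    """One pass through the trial with online TD(0) updates (gamma = 1)."""
    deltas = np.zeros(n_steps)
    for t in range(n_steps):
        deltas[t] = rewards[t] + V[t + 1] - V[t]  # TD prediction error
        V[t] += alpha * deltas[t]
    return deltas

first = run_trial()                       # errors on the very first trial
cue_errors = [first[0]] + [run_trial()[0] for _ in range(499)]
```

On trial 1 the error is zero at the cue and maximal at the reward; over training a transient error passes backward through the trial to the cue and then fades as the reward becomes fully predicted.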

The dialogue between neuroscience and ML is increasingly bidirectional: neural-network techniques help analyze neural data, and neuroscientific findings suggest new architectural principles for AI.