
Computational Neuroscience: Theoretical Insights into Brain Function - P. Cisek et al. (Elsevier, 2007)


DOCUMENT INFORMATION

Basic information

Title: Theoretical insights into brain function
Authors: L.F. Abbott, A.P.L. Abdala, D.E. Angelaki, J. Beck, Y. Bengio, C. Cadieu, C.E. Carr, P. Cisek, C.M. Colbert, E.P. Cook, P. Dario, A.G. Feldman, M.S. Fine, D.W. Franklin, W.J. Freeman, T. Gisiger, S. Giszter, V. Goussev, R. Grashow, A.M. Green, S. Grillner, S. Grossberg, J.A. Guest, C. Hart, M. Hawken, M.R. Hinder, G.E. Hinton, A. Ijspeert, J.F. Kalaska, M. Kerszberg
School: Columbia University
Field: Computational Neuroscience
Type: Book
Year of publication: 2007
City: New York
Pages: 549
File size: 17.82 MB


Contents

Here we present a discussion of different models for visual cortex and orientation selectivity, and then discuss our own experimental findings about the dynamics of orientation selectivity.


J. Beck, Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY, USA

Y. Bengio, Department IRO, Université de Montréal, P.O. Box 6128, Downtown Branch, Montreal, QC H3C 3J7, Canada

C. Cadieu, Redwood Center for Theoretical Neuroscience and Helen Wills Neuroscience Institute, University of California, Berkeley, CA, USA

C.E. Carr, Department of Biology, University of Maryland, College Park, MD 20742, USA

P. Cisek, Groupe de Recherche sur le Système Nerveux Central, Département de Physiologie, Université de Montréal, Montréal, QC H3C 3J7, Canada

C.M. Colbert, Biology and Biochemistry, University of Houston, Houston, TX, USA

E.P. Cook, Department of Physiology, McGill University, 3655 Sir William Osler, Montreal, QC H3G 1Y6, Canada

P. Dario, CRIM Laboratory, Scuola Superiore Sant'Anna, Viale Rinaldo Piaggio 34, 56025 Pontedera (Pisa), Italy

A.G. Feldman, Center for Interdisciplinary Research in Rehabilitation (CRIR), Rehabilitation Institute of Montreal, and Jewish Rehabilitation Hospital, Laval, 6300 Darlington, Montreal, QC H3S 2J4, Canada

M.S. Fine, Department of Biomedical Engineering, Washington University, 1 Brookings Dr., St. Louis, MO

S. Grillner, Nobel Institute for Neurophysiology, Department of Neuroscience, Karolinska Institutet, Retzius väg 8, SE-171 77 Stockholm, Sweden

S. Grossberg, Department of Cognitive and Neural Systems, Center for Adaptive Systems, and Center for Excellence for Learning in Education, Science and Technology, Boston University, 677 Beacon Street, Boston, MA 02215, USA


J.A. Guest, Biology and Biochemistry, University of Houston, Houston, TX, USA

C. Hart, Neurobiology and Anatomy, Drexel University College of Medicine, 2900 Queen Lane, Philadelphia, PA 19129, USA

M. Hawken, Center for Neural Science, New York University, 4 Washington Place, New York, NY 10003, USA

M.R. Hinder, Perception and Motor Systems Laboratory, School of Human Movement Studies, University of Queensland, Brisbane, Queensland 4072, Australia

G.E. Hinton, Department of Computer Science, University of Toronto, 10 Kings College Road, Toronto, M5S 3G4 Canada

A. Ijspeert, School of Computer and Communication Sciences, École Polytechnique Fédérale de Lausanne (EPFL), Station 14, CH-1015 Lausanne, Switzerland

J.F. Kalaska, GRSNC, Département de Physiologie, Faculté de Médecine, Pavillon Paul-G. Desmarais, Université de Montréal, C.P. 6128, Succursale Centre-ville, Montréal, QC H3C 3J7, Canada

M. Kerszberg, Université Pierre et Marie Curie, Modélisation Dynamique des Systèmes Intégrés UMR CNRS 7138 - Systématique, Adaptation, Évolution, 7 Quai Saint Bernard, 75252 Paris Cedex 05, France

U. Knoblich, Center for Biological and Computational Learning, McGovern Institute for Brain Research, Computer Science and Artificial Intelligence Laboratory, Brain and Cognitive Sciences Department, Massachusetts Institute of Technology, 43 Vassar Street #46-5155B, Cambridge, MA 02139, USA

C. Koch, Division of Biology, California Institute of Technology, MC 216-76, Pasadena, CA 91125, USA

J.H. Kotaleski, Computational Biology and Neurocomputing, School of Computer Science and Communication, Royal Institute of Technology, SE 10044 Stockholm, Sweden

M. Kouh, Center for Biological and Computational Learning, McGovern Institute for Brain Research, Computer Science and Artificial Intelligence Laboratory, Brain and Cognitive Sciences Department, Massachusetts Institute of Technology, 43 Vassar Street #46-5155B, Cambridge, MA 02139, USA

A. Kozlov, Computational Biology and Neurocomputing, School of Computer Science and Communication, Royal Institute of Technology, SE 10044 Stockholm, Sweden

J.W. Krakauer, The Motor Performance Laboratory, Department of Neurology, Columbia University College of Physicians and Surgeons, New York, NY 10032, USA

G. Kreiman, Department of Ophthalmology and Neuroscience, Children's Hospital Boston, Harvard Medical School and Center for Brain Science, Harvard University

N.I. Krouchev, GRSNC, Département de Physiologie, Faculté de Médecine, Pavillon Paul-G. Desmarais, Université de Montréal, C.P. 6128, Succursale Centre-ville, Montréal, QC H3C 3J7, Canada

I. Kurtzer, Centre for Neuroscience Studies, Queen's University, Kingston, ON K7L 3N6, Canada

A. Lansner, Computational Biology and Neurocomputing, School of Computer Science and Communication, Royal Institute of Technology, SE 10044 Stockholm, Sweden

P.E. Latham, Gatsby Computational Neuroscience Unit, London WC1N 3AR, UK

M.F. Levin, Center for Interdisciplinary Research in Rehabilitation, Rehabilitation Institute of Montreal and Jewish Rehabilitation Hospital, Laval, QC, Canada

J. Lewi, Georgia Institute of Technology, Atlanta, GA, USA

Y. Liang, Biology and Biochemistry, University of Houston, Houston, TX, USA

W.J. Ma, Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY 14627, USA

K.M. MacLeod, Department of Biology, University of Maryland, College Park, MD 20742, USA

L. Maler, Department of Cell and Molecular Medicine and Center for Neural Dynamics, University of Ottawa, 451 Smyth Rd, Ottawa, ON K1H 8M5, Canada

E. Marder, Volen Center MS 013, Brandeis University, 415 South St., Waltham, MA 02454-9110, USA

S.N. Markin, Department of Neurobiology and Anatomy, Drexel University College of Medicine, 2900 Queen Lane, Philadelphia, PA 19129, USA


N.Y. Masse, Department of Physiology, McGill University, 3655 Sir William Osler, Montreal, QC H3G 1Y6, Canada

D.A. McCrea, Spinal Cord Research Centre and Department of Physiology, University of Manitoba, 730 William Avenue, Winnipeg, MB R3E 3J7, Canada

A. Menciassi, CRIM Laboratory, Scuola Superiore Sant'Anna, Viale Rinaldo Piaggio 34, 56025 Pontedera (Pisa), Italy

T. Mergner, Neurological University Clinic, Neurocenter, Breisacher Street 64, 79106 Freiburg, Germany

T.E. Milner, School of Kinesiology, Simon Fraser University, Burnaby, BC V5A 1S6, Canada

P. Mohajerian, Computer Science and Neuroscience, University of Southern California, Los Angeles, CA 90089-2905, USA

L. Paninski, Department of Statistics and Center for Theoretical Neuroscience, Columbia University, New York, NY 10027, USA

V. Patil, Neurobiology and Anatomy, Drexel University College of Medicine, 2900 Queen Lane, Philadelphia, PA 19129, USA

J.F.R. Paton, Department of Physiology, School of Medical Sciences, University of Bristol, Bristol BS8 1TD, UK

J. Pillow, Gatsby Computational Neuroscience Unit, University College London, Alexandra House, 17 Queen Square, London WC1N 3AR, UK

T. Poggio, Center for Biological and Computational Learning, McGovern Institute for Brain Research, Computer Science and Artificial Intelligence Laboratory, Brain and Cognitive Sciences Department, Massachusetts Institute of Technology, 43 Vassar Street #46-5155B, Cambridge, MA 02139, USA

A. Pouget, Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY 14627, USA

A. Prochazka, Centre for Neuroscience, 507 HMRC University of Alberta, Edmonton, AB T6G 2S2, Canada

R. Rohrkemper, Physics Department, Institute of Neuroinformatics, Swiss Federal Institute of Technology, Zürich CH-8057, Switzerland

I.A. Rybak, Department of Neurobiology and Anatomy, Drexel University College of Medicine, Philadelphia, PA 19129, USA

A. Sangole, Center for Interdisciplinary Research in Rehabilitation, Rehabilitation Institute of Montreal and Jewish Rehabilitation Hospital, Laval, QC, Canada

S. Schaal, Computer Science and Neuroscience, University of Southern California, Los Angeles, CA

R. Shadmehr, Laboratory for Computational Motor Control, Department of Biomedical Engineering, Johns Hopkins School of Medicine, Baltimore, MD 21205, USA

R. Shapley, Center for Neural Science, New York University, 4 Washington Place, New York, NY 10003, USA

J.C. Smith, Cellular and Systems Neurobiology Section, National Institute of Neurological Disorders and Stroke, National Institutes of Health, Bethesda, MD 20892-4455, USA

C. Stefanini, CRIM Laboratory, Scuola Superiore Sant'Anna, Viale Rinaldo Piaggio 34, 56025 Pontedera (Pisa), Italy

J.A. Taylor, Department of Biomedical Engineering, Washington University, 1 Brookings Dr., St. Louis, MO 63130, USA


K.A. Thoroughman, Department of Biomedical Engineering, Washington University, 1 Brookings Dr., St. Louis, MO 63130, USA

L.H. Ting, The Wallace H. Coulter Department of Biomedical Engineering at Georgia Tech and Emory University, 313 Ferst Drive, Atlanta, GA 30332-0535, USA

A.-E. Tobin, Volen Center MS 013, Brandeis University, 415 South St., Waltham, MA 02454-9110, USA

E. Torres, Division of Biology, California Institute of Technology, Pasadena, CA 91125, USA

J.Z. Tsien, Center for Systems Neurobiology, Departments of Pharmacology and Biomedical Engineering, Boston University, Boston, MA 02118, USA

D. Tweed, Departments of Physiology and Medicine, University of Toronto, 1 King's College Circle, Toronto, ON M5S 1A8, Canada

D.B. Walther, Beckman Institute for Advanced Science and Technology, University of Illinois at Urbana-Champaign, 405 N. Mathews Ave., Urbana, IL 61801, USA

A.C. Wilhelm, Department of Physiology, McGill University, 3655 Sir William Osler, Montreal, QC H3G 1Y6, Canada

D. Xing, Center for Neural Science, New York University, 4 Washington Place, New York, NY 10003, USA

S. Yakovenko, Département de Physiologie, Université de Montréal, Pavillon Paul-G. Desmarais, C.P. 6128, Succ. Centre-ville, Montréal, QC H3C 3J7, Canada

D. Zipser, Department of Cognitive Science, UCSD 0515, 9500 Gilman Drive, San Diego, CA 92093, USA


In recent years, computational approaches have become an increasingly prominent and influential part of neuroscience research. From the cellular mechanisms of synaptic transmission and the generation of action potentials, to interactions among networks of neurons, to the high-level processes of perception and memory, computational models provide new sources of insight into the complex machinery which underlies our behaviour. These models are not merely mathematical surrogates for experimental data. More importantly, they help us to clarify our understanding of a particular nervous system process or function, and to guide the design of our experiments by obliging us to express our hypotheses in a language of mathematical formalisms. A mathematical model is an explicit hypothesis, in which we must incorporate all of our beliefs and assumptions in a rigorous and coherent conceptual framework that is subject to falsification and modification. Furthermore, a successful computational model is a rich source of predictions for future experiments. Even a simplified computational model can offer insights that unify phenomena across different levels of analysis, linking cells to networks and networks to behaviour. Over the last few decades, more and more experimental data have been interpreted from computational perspectives, new courses and graduate programs have been developed to teach computational neuroscience methods, and a multitude of interdisciplinary conferences and symposia have been organized to bring mathematical theorists and experimental neuroscientists together.

This book is the result of one such symposium, held at the Université de Montréal on May 8-9, 2006 (see: http://www.grsnc.umontreal.ca/XXVIIIs). It was organized by the Groupe de Recherche sur le Système Nerveux Central (GRSNC) as one of a series of annual international symposia held on a different topic each year. This was the first symposium in that annual series that focused on computational neuroscience, and it included presentations by some of the pioneers of computational neuroscience as well as prominent experimental neuroscientists whose research is increasingly integrated with computational modelling. The symposium was a resounding success, and it made clear to us that computational models have become a major and very exciting aspect of neuroscience research. Many of the participants at that meeting have contributed chapters to this book, including symposium speakers and poster presenters. In addition, we invited a number of other well-known computational neuroscientists, who could not participate in the symposium itself, to also submit chapters.

Of course, a collection of 34 chapters cannot cover more than a fraction of the vast range of computational approaches which exist. We have done our best to include work pertaining to a variety of neural systems, at many different levels of analysis, from the cellular to the behavioural, from approaches intimately tied with neural data to more abstract algorithms of machine learning. The result is a collection which includes models of signal transduction along dendrites, circuit models of visual processing, computational analyses of vestibular processing, theories of motor control and learning, machine algorithms for pattern recognition, as well as many other topics. We asked all of our contributors to address their chapters to a broad audience of neuroscientists, psychologists, and mathematicians, and to focus on the broad theoretical issues which tie these fields together.

The conference, and this book, would not have been possible without the generous support of the GRSNC, the Canadian Institute of Advanced Research (CIAR), the Institute of Neuroscience, Mental Health and Addiction (INMHA) of the Canadian Institutes of Health Research (CIHR), the Fonds de la Recherche en Santé Québec (FRSQ), and the Université de Montréal. We gratefully acknowledge these sponsors as well as our contributing authors who dedicated their time to present their perspectives on the computational principles which underlie our sensations, thoughts, and actions.

Paul Cisek
Trevor Drew
John F. Kalaska


P. Cisek, T. Drew & J.F. Kalaska (Eds.)
Progress in Brain Research, Vol. 165
ISSN 0079-6123
Copyright © 2007 Elsevier B.V. All rights reserved

CHAPTER 1

The neuronal transfer function: contributions from voltage- and time-dependent mechanisms

Erik P. Cook¹, Aude C. Wilhelm¹, Jennifer A. Guest², Yong Liang², Nicolas Y. Masse¹

¹Department of Physiology, McGill University, 3655 Sir William Osler, Montreal, QC H3G 1Y6, Canada
²Biology and Biochemistry, University of Houston, Houston, TX, USA

Abstract: The discovery that an array of voltage- and time-dependent channels is present in both the dendrites and soma of neurons has led to a variety of models for single-neuron computation. Most of these models, however, are based on experimental techniques that use simplified inputs of either single synaptic events or brief current injections. In this study, we used a more complex time-varying input to mimic the continuous barrage of synaptic input that neurons are likely to receive in vivo. Using dual whole-cell recordings of CA1 pyramidal neurons, we injected long-duration white-noise current into the dendrites. The amplitude variance of this stimulus was adjusted to produce either low subthreshold or high suprathreshold fluctuations of the somatic membrane potential. Somatic action potentials were produced in the high-variance input condition. Applying a rigorous system-identification approach, we discovered that the neuronal input/output function was extremely well described by a model containing a linear bandpass filter followed by a nonlinear static gain. Using computer models, we found that a range of voltage-dependent channel properties can readily account for the experimentally observed filtering in the neuronal input/output function. In addition, the bandpass signal processing of the neuronal input/output function was determined by the time-dependence of the channels. A simple active channel, however, could not account for the experimentally observed change in gain. These results suggest that nonlinear voltage- and time-dependent channels contribute to the linear filtering of the neuronal input/output function and that channel kinetics shape temporal signal processing in dendrites.

Keywords: dendrite; integration; hippocampus; CA1; channel; system-identification; white noise

The neuronal input/output function

What are the rules that single neurons use to process synaptic input? Put another way, what is the neuronal input/output function? Revealing the answer to this question is central to the larger task of understanding information processing in the brain. The past two decades of research have significantly increased our knowledge of how neurons integrate synaptic input, including the finding that dendrites contain nonlinear voltage- and time-dependent mechanisms (for review, see Johnston …) on the precise structure of the rules for synaptic integration.

Corresponding author. Tel.: +1 514 398 7691; Fax: +1 514 398 8241; E-mail: erik.cook@mcgill.ca


Early theoretical models of neuronal computation described the neuronal input/output function as a static summation of the synaptic inputs … that cable theory could account for the passive electrotonic properties of dendritic processing … integration has been extremely useful because it encompasses both the spatial and temporal aspects of the neuronal input/output function using a single quantitative framework. For example, the passive model predicts that the temporal characteristics of dendrites are described by a lowpass filter with a cutoff frequency that is inversely related to the distance from the soma.

The recent discovery that dendrites contain a rich collection of time- and voltage-dependent channels has renewed and intensified the study of dendritic signal processing at the electrophysiological level (for reviews, see Hausser et al., 2000; Magee, 2000; …). One goal has been to understand how these active mechanisms augment the passive properties of dendrites. These studies, however, have produced somewhat conflicting results as to whether dendrites integrate synaptic inputs in a linear or nonlinear fashion. A goal of these electrophysiological studies has also been to identify the conditions in which dendrites initiate action potentials (… 2004), to understand how dendrites spatially and temporally integrate inputs (Magee, 1999; Polsky …), and to explore aspects of local dendritic computation (Mel, 1993; Hausser …).

Although these past studies have shed light on many aspects of single-neuron computation, most studies have focused on quiescent neurons in vitro. A common experimental technique is to observe how dendrites process brief "single-shock" inputs, either a single EPSP or the equivalent dendritic current injection, applied with no background activity present (but see Larkum et al., 2001; Oviedo …). Given the average spike rate of central neurons, it is unlikely that dendrites receive single synaptic inputs in isolation. A more likely scenario is that dendrites receive constant time-varying excitatory and inhibitory synaptic input that together produces random fluctuations in the membrane potential (Ferster … 2004). The challenge is to incorporate this type of temporally varying input into our study of the neuronal input/output function. Fortunately, system-identification theory provides us with several useful tools for addressing this question.

Using a white-noise input to reveal the neuronal input/output function

The field of system-identification theory has developed rigorous methods for describing the input/output relationships of unknown systems (for reviews, see Marmarelis and Marmarelis, 1978; …). These methods have been used to describe the relationship between external sensory inputs and neuronal responses in a variety of brain areas (for reviews, see Chichilnisky, …). A central concept in system-identification is the use of a "white-noise" stimulus to characterize the system. Such an input theoretically contains all temporal correlations and power at all frequencies. If the unknown system is linear, or slightly nonlinear, it is a straightforward process to extract a description of the system by correlating the output with the random input stimulus. If the unknown system is highly nonlinear, however, this approach is much more difficult.

One difficulty of describing the input/output function of a single neuron is that we lack precise statistical descriptions of the inputs neurons receive over time. Given that a typical pyramidal neuron has over ten thousand synaptic contacts, one might reasonably estimate that an input arrives on the dendrites every millisecond or less, producing membrane fluctuations that are constantly varying in time. Thus, using a white-noise input has two advantages: (1) it affords the use of quantitative methods for identifying the dendrite input/output function, and (2) it may represent a stimulus that is statistically closer to the type of input dendrites receive in vivo.
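A stimulus of the kind described here can be sketched in a few lines of Python; the sampling step and the two variance levels below are illustrative assumptions, not the authors' recording parameters.

```python
import numpy as np

# Sketch: a zero-mean, Gaussian-distributed "white-noise" current, as used
# to drive the dendrites. Sampling step and amplitudes are assumed values.
def make_white_noise_current(duration_s, dt_s, sigma_pa, seed=0):
    rng = np.random.default_rng(seed)
    n = int(round(duration_s / dt_s))
    return rng.normal(0.0, sigma_pa, size=n)  # zero mean, variance sigma^2

dt = 1e-4                                               # 0.1 ms step (assumed)
low = make_white_noise_current(50.0, dt, sigma_pa=20.0)   # low-variance trial
high = make_white_noise_current(50.0, dt, sigma_pa=80.0)  # high-variance trial

# The sample mean is near zero and the high-variance trial fluctuates more.
print(abs(low.mean()) < 1.0, high.std() > low.std())
```

Alternating `low` and `high` segments on successive trials would mimic the variance-switching protocol described in the text.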

We applied a system-identification approach to reveal the input/output function of hippocampal CA1 pyramidal neurons in vitro (Fig. 1). We used standard techniques to perform dual whole-cell patch clamp recordings in brain slices (Colbert and …). We injected white-noise current (Id) into the dendrites with one electrode and measured the membrane potential at the soma (Vs) with a second electrode. The amplitude distribution of the injected current was Gaussian with zero mean. Electrode separation ranged from 125 to 210 μm, with the dendrite electrode placed on the main proximal apical dendritic branch. Figure 1 illustrates a short segment of the white-noise stimulus and the corresponding somatic membrane potentials.

To examine how the input/output function changed with different input conditions, we alternately changed the variance of the input current between low and high values. The low-variance input produced small subthreshold fluctuations in the somatic membrane potential. In contrast, the high-variance input produced large fluctuations that caused the neurons to fire action potentials with an average rate of 0.9 spikes/s. This rate of firing was chosen because it is similar to the average firing rate of CA1 hippocampal neurons in vivo. Thus, we examined the dendrite-to-soma input/output function under physiologically reasonable subthreshold and suprathreshold operating regimes.

The LN model

We described the input/output function of the neuron using an LN model (Hunter and Korenberg, 1986). This is a functional model that provides an intuitive description of the system under study and has been particularly useful for capturing temporal processing in the retina in response to random visual inputs (for reviews, see Meister and Berry, 1999; Chichilnisky, 2001) and the processing of current injected at the soma of neurons (Bryant and Segundo, 1976; … 2005). The LN model is a cascade of two processing stages: the first stage is a filter (the "L" stage) that linearly convolves the input current Id. The output of the linear filter, F, is the input to the nonlinear second stage (the "N" stage) that converts the output of the linear filter into the predicted somatic potentials (V̂S). This second stage is static and can be viewed as capturing the gain of the system. The two stages of the LN model are represented mathematically as follows.

Fig. 1. Using a system-identification approach to characterize the dendrite-to-soma input/output function. (A) Fifty seconds of zero-mean, Gaussian-distributed random current (Id) was injected into the proximal apical dendrites of CA1 pyramidal neurons and the membrane potential (Vs) was recorded at the soma. The variance of the injected current was switched between low (bottom traces) and high (top traces) on alternate trials. Action potentials were produced with the high-variance input. (B) An LN model was fit to the somatic potential. The input to the model was the injected current and the output of the model was the predicted soma potential (V̂S). The LN model was composed of a linear filter that was convolved with the input current, followed by a static-gain function. The output of the linear filter, F (arbitrary units), was scaled by the static-gain function to produce the predicted somatic potential. The static-gain function was modeled as a quadratic function of F.

F = H * Id
V̂S = G(F)

where H is a linear filter, * the convolution operator, and G a quadratic static-gain function.

Having two stages of processing is an important aspect of the model because it allows us to separate temporal processing from gain control. The linear filter describes the temporal processing, while the nonlinear static gain captures amplitude-dependent changes in gain. Thus, this functional model permits us to describe the neuronal input/output function using quantitatively precise terms such as filtering and gain control. In contrast, highly detailed biophysical models of single neurons, with their large number of nonlinear free parameters, are less likely to provide such a functionally clear description of single-neuron computation.
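As a rough numerical sketch of this two-stage cascade (a linear convolution followed by a static quadratic gain), the filter shape and gain coefficients below are invented placeholders, not the values fitted to the recordings:

```python
import numpy as np

# Minimal LN-cascade sketch with assumed filter and gain coefficients.
def ln_model(i_d, h, g_coeffs):
    """L stage: linearly convolve the input current with filter h.
    N stage: static (memoryless) quadratic gain G applied sample-by-sample."""
    f = np.convolve(i_d, h, mode="full")[: len(i_d)]  # causal convolution
    a, b, c = g_coeffs
    return a * f**2 + b * f + c                       # predicted somatic potential

dt = 1e-3
t = np.arange(0, 0.2, dt)
# Illustrative biphasic (bandpass-like) filter: difference of two exponentials.
h = np.exp(-t / 0.01) - 0.5 * np.exp(-t / 0.03)

rng = np.random.default_rng(1)
i_d = rng.normal(0.0, 1.0, 5000)            # white-noise input current
v_hat = ln_model(i_d, h, (0.02, 1.0, 0.0))  # quadratic static gain
print(v_hat.shape == i_d.shape)
```

Because the N stage is memoryless, all temporal structure in the prediction comes from the filter H, which is what lets the fitted components be interpreted separately as "filtering" and "gain."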

It is important to note that we did not seek to describe the production of action potentials in the dendrite-to-soma input/output function. Action potentials are extremely nonlinear events and would not be captured by the LN model. We instead focused on explaining the subthreshold fluctuations of the somatic voltage potential. Thus, action potentials were removed from the somatic potential before the data were analyzed. This was accomplished by linearly interpolating the somatic potential from 1 ms before the occurrence of the action potential to either 5 or 10 ms after the action potential. Because action potentials make up a very small part of the 50 s of data (typically less than 2%), our results were not qualitatively affected when the spikes were left in place during the analysis.
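The spike-excision step described above can be sketched as follows; the sampling rate, spike index, and voltage trace are invented for illustration:

```python
import numpy as np

# Sketch: linearly interpolate the somatic potential from 1 ms before each
# spike to 5 ms after it, replacing the spike waveform with a straight line.
def remove_spikes(v, spike_idx, dt_ms, pre_ms=1.0, post_ms=5.0):
    v = v.copy()
    pre, post = int(pre_ms / dt_ms), int(post_ms / dt_ms)
    for s in spike_idx:
        a, b = max(s - pre, 0), min(s + post, len(v) - 1)
        # Straight line between the pre-spike and post-spike endpoints.
        v[a:b + 1] = np.linspace(v[a], v[b], b - a + 1)
    return v

dt_ms = 0.1                       # 10 kHz sampling (assumed)
v = np.zeros(1000)
v[500:505] = 40.0                 # a crude stand-in "action potential"
clean = remove_spikes(v, [500], dt_ms)
print(clean.max() < 1.0)          # spike excised, subthreshold trace remains
```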

The LN model accounts for the dendrite-to-soma input/output function

Using standard techniques, we fit the LN model to reproduce the recorded somatic potential in response to the injected dendritic current (Hunter and Korenberg, 1986). The low- and high-variance input conditions affected the components of the LN model; therefore, these conditions were fit separately. An example of the LN model's ability to account for the neuronal input/output function is shown in Fig. 2. For this neuron, the LN model's predicted somatic membrane voltage (V̂S; dashed line) almost perfectly overlapped the neuron's actual somatic potential (Vs, thick gray line) for both input conditions (Fig. 2A and B). The LN model was able to fully describe the somatic potentials in response to the random input current with very little error. Computing the Pearson's correlation coefficient over the entire 50 s of data, the LN model accounted for greater than 97% of the variance of this neuron's somatic potential.

Repeating this experiment in 11 CA1 neurons, the LN model accounted for practically all of the somatic membrane potential (average R² > 0.97). Both the low and high variance input conditions were captured equally well by the LN model. Thus, the LN model is a functional model that describes the neuronal input/output function over a range of input regimes, from low-variance subthreshold to high-variance suprathreshold stimulation.
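The variance-accounted-for measure used here is just the squared Pearson correlation between recorded and predicted potentials; a sketch with synthetic stand-in signals (not the paper's recordings):

```python
import numpy as np

# Fraction of variance accounted for: squared Pearson correlation between
# the recorded and model-predicted somatic potentials.
def variance_accounted(v_recorded, v_predicted):
    r = np.corrcoef(v_recorded, v_predicted)[0, 1]
    return r ** 2

rng = np.random.default_rng(2)
v_s = rng.normal(0, 1, 50000)             # stand-in "recorded" potential
v_hat = v_s + rng.normal(0, 0.1, 50000)   # a near-perfect model prediction
print(variance_accounted(v_s, v_hat) > 0.97)
```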

Gain but not filtering adapts to the input variance

The LN model's linear filters and nonlinear gain functions are shown for our example neuron in Fig. 2C and D. The linear filters (Fig. 2C) for both the low (solid line) and high (dashed line) variance inputs had pronounced negativities, corresponding to a bandpass in the 1-10 Hz frequency range (inset). Although the two input conditions were significantly different, the filters for the low- and high-variance inputs were very similar. Across our population of neurons, we found no systematic change in the linear filters as the input variance was varied between low and high levels. Therefore, the temporal processing performed by CA1 pyramidal neurons on inputs arriving at the proximal apical dendrites does not change with the input variance.

In contrast to the filtering properties of CA1 neurons, the static-gain function changed as a function of input variance. Figure 2D illustrates the static-gain function for both input conditions. In this plot, the resting membrane potential corresponds to 0 mV and the units for the output of the linear filter (F) are arbitrary. The static-gain function for the low-variance input was a straight line, indicating that the neuronal input/output function was linear. For the high-variance input, however, the static-gain function demonstrated two important nonlinearities. First, the static-gain function showed a compressive nonlinearity at depolarized potentials. Thus, at large depolarizing potentials, there was a reduction in the gain of the input/output relationship. Second, there was a general reduction in the slope of the static-gain function for the high-variance input compared with the low-variance slope, indicating an overall reduction in gain. Thus, for this neuron, increasing the variance of the input reduced the gain of the input/output function at rest, and the gain was further reduced for depolarizing potentials.

Across our population of 11 neurons, we found that increasing the variance of the input reduced the gain of CA1 neurons by an average of 16% at the resting membrane potential. This reduction in gain also increased with both hyperpolarized and depolarized potentials. Adapting to the variance of an input is an important form of gain control because it ensures that the input stays within the operating range of the neuron. Although a 16% reduction may seem small in comparison to the large change in the input variance, there are many instances where small changes in neuronal activity are related to significant changes in behavior. For visual cortical neurons, it has been shown that small changes in spike activity (<5%) are correlated with pronounced changes in perceptual abilities (Britten et al., 1996; Dodd et al., 2001; …). Thus, even small modulations of neuronal activity can have large effects on behavior.

Fig. 2. The dendrite-to-soma input/output function of a CA1 neuron is well described by the LN model. (A) Example of 500 ms of the input current and somatic potential for the low-variance input. The predicted somatic membrane potential of the LN model (V̂S; dashed line) overlaps the recorded somatic potential (Vs, thick gray line). (B) Example of the LN model's fit to the high-variance input. Action potentials were removed from the recorded somatic potential before fitting the LN model to the data. (C) The impulse-response function of the linear filters for the optimized LN model corresponding to the low (solid line) and high (dashed line) variance inputs. Inset is the frequency response of the filters. (D) Static-gain function for the optimized LN model plotted for the low (solid line) and high (dashed line) variance inputs. The axes for the high-variance input were appropriately scaled so that the slope of both static-gain functions could be compared.


Voltage- and time-dependent properties that underlie neuronal bandpass filtering

The above experimental results suggest that the dendrite-to-soma input/output relationship is well described as a linear filter followed by an adapting static-gain function. We wanted to know the biophysical components that produce the filtering and gain control. To address this, we used the computer program NEURON (Hines and Carnevale, 1997) to simulate a multi-compartment "ball & stick" model neuron (Fig. 3A).

We applied the random stimulus that we used in the experimental recordings to the dendrite of the passive model and then fit the data with the LN model to describe its input/output function. As would be expected from Rall's passive theory of dendrites, the estimated filters and gain functions were identical for the low- and high-variance input conditions (Fig. 3B). In addition, the filters from the passive model's impulse-response function had no negativity and thus were not bandpass (inset), and the static-gain function was linear (Fig. 3C). Thus, the passive properties of dendrites in the compartmental model do not produce the same characteristics as the experimentally observed dendrite-to-soma input/output function.
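The link between a negative lobe in the impulse response and bandpass behavior can be checked directly: an all-positive kernel has its maximum gain at 0 Hz, whereas adding a slower negative lobe that cancels the DC component moves the peak to a nonzero frequency. The time constants below are arbitrary illustrative values, not model parameters:

```python
import numpy as np

dt = 1e-3
t = np.arange(0.0, 0.3, dt)

# All-positive kernel: a low-pass filter.
monophasic = np.exp(-t / 0.02)
# Subtract a slower lobe scaled so the integral (DC gain) nearly
# cancels: the kernel becomes biphasic and therefore bandpass.
biphasic = np.exp(-t / 0.02) - (0.02 / 0.06) * np.exp(-t / 0.06)

mono_spec = np.abs(np.fft.rfft(monophasic))
bi_spec = np.abs(np.fft.rfft(biphasic))

print(np.argmax(mono_spec))     # 0 -> gain peaks at DC (low-pass)
print(np.argmax(bi_spec) > 0)   # True -> gain peaks at a nonzero frequency
```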

We wanted to know what type of voltage- and time-dependent channels might account for our experimental observations. Active channels come in a variety of classes. Instead of focusing on one particular class, we used the freedom of computer simulations to construct a hypothetical channel. Using a generic channel, referred to as I_x, we systematically varied channel parameters to investigate how the voltage- and time-dependent properties affected temporal filtering and gain control in the ball & stick model. Our theoretical channel was based on the classic Hodgkin and Huxley formulation (Hodgkin and Huxley, 1952) and incorporated a voltage- and time-dependent activation variable, n(v, t). This activation variable had a sigmoidal voltage-dependent steady-state activation with first-order kinetics.

Fig. 3. Dendrite-to-soma input/output function of a passive neuron model. (A) The passive model had 20 dendrite compartments with a total length of 2000 μm and a diameter that tapered distally from 3 to 1 μm. The soma was a single 20 × 20 μm compartment. The passive parameters of the model were R_m = 40,000 Ω·cm², C_m = 2 μF/cm², and R_a = 150 Ω·cm. (B) The optimized filters of the LN model were fit to the passive model. Filters for the low- and high-variance input were identical. (C) Static-gain functions of the optimized LN model were linear and had the same slope for both input conditions.


Mathematically, our hypothetical channel is

I_x = ḡ_x n(v, t) (v − E_rev),

with the activation variable obeying first-order kinetics,

τ dn/dt = n_∞(v) − n,  n_∞(v) = 1 / (1 + exp[(v_1/2 − v)/b]),

where n_∞ is the steady-state activation based on a sigmoid centered at v_1/2 with a slope of 1/b, ḡ_x the maximal conductance, τ the time constant of activation, and E_rev the reversal potential of the channel.
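This formulation can be integrated numerically. The sketch below uses forward-Euler updates driven by a prescribed voltage trace; all parameter values (v_1/2 = −82 mV, b = 8 mV, ḡ_x, E_rev) are illustrative choices for a hyperpolarization-activated configuration, not values from the chapter:

```python
import numpy as np

def simulate_ix(v_trace, v_half=-82.0, b=8.0, g_max=0.1, e_rev=-30.0,
                tau=50.0, dt=0.1):
    """Integrate the generic channel I_x along a voltage trace (mV, ms).
    n_inf(v) is a sigmoid centered at v_half; with this sign convention
    activation *decreases* with depolarization."""
    def n_inf(v):
        return 1.0 / (1.0 + np.exp((v - v_half) / b))

    n = n_inf(v_trace[0])                 # start at steady state
    current = np.empty_like(v_trace)
    for i, v in enumerate(v_trace):
        n += (dt / tau) * (n_inf(v) - n)  # first-order kinetics
        current[i] = g_max * n * (v - e_rev)
    return current

# A hyperpolarizing step slowly recruits the channel (tau = 50 ms).
v = np.concatenate([np.full(1000, -65.0), np.full(2000, -80.0)])
i_x = simulate_ix(v)
```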

We first examined the effects of varying the steady-state voltage activation curve on the input/output function of the model. Voltage-dependent channels can have either depolarizing or hyperpolarizing activation curves. We inserted a uniform density of our I_x current throughout the dendrites and left the soma compartment passive. We set the parameters of I_x to have decreasing activation with depolarizing voltage (Fig. 4A) and stimulated the model with our low- and high-variance dendritic current injection. Fitting the LN model to the results of the simulation yielded a bandpass filter and linear static gain (Fig. 4A). The LN model accounted for greater than 98% of the somatic membrane potential and thus represented an excellent description of the input/output relationship of the compartmental model. It is worth mentioning that the simulated properties of I_x resembled the prominent dendritic current I_h (Magee, 1998). Thus, a simple voltage- and time-dependent channel can account for the bandpass filtering observed in our experimental data.

Fig. 4. The direction of steady-state voltage activation has little effect on the bandpass features of the LN model. This figure shows the LN models that describe the dendrite-to-soma input/output function of the compartmental model containing the dendritic channel I_x. Two different steady-state activation curves were used for I_x. (A) Hyperpolarizing steady-state voltage activation of I_x produced bandpass features in the LN model (i.e., a biphasic impulse-response function) but did not produce a reduction in gain between the low (solid line) and high (dashed line) variance input conditions. (B) Depolarizing steady-state voltage activation of I_x also produced bandpass features with no reduction in gain. In all simulations, I_x had a τ of 50 ms. The vertical dashed line in the activation plots indicates the resting membrane potential of −65 mV.

To see how the activation properties of the channel affected the input/output function, we reversed the activation curve of our hypothetical channel to have an increasing activation with depolarized potentials (Fig. 4B). The other parameters were the same except that the reversal potential of I_x was changed and the activation curve was shifted slightly to maintain stability. Injecting the low- and high-variance input current and fitting the LN model to the somatic potential, we found that this active current also produced a bandpass input/output function. Interestingly, there was still a lack of change in gain with input variance, as can be seen in the static-gain function. Similar results were also observed when the slope of the activation curves was varied (data not shown).

From these simulations we can draw two conclusions. First, it appears that a variety of voltage dependencies can produce the bandpass filtering observed in neurons. Of course, this is only true when the membrane potential falls within the voltage activation range of the channel; in other words, a voltage-dependent channel that is always open or always closed would not produce bandpass filtering. Second, a simple voltage-dependent mechanism does not seem to account for the experimentally observed change in gain between the low- and high-variance input conditions (compare the static-gain functions in Figs. 2D and 4).

Next, we examined the effect of the time dependencies of our theoretical channel on the neuronal input/output function. In the above simulations, we held the τ of I_x fixed at 50 ms. By varying τ, we found that the time dependencies of the channel greatly affected the filtering properties. A shorter τ of 8 ms produced a model with an input/output function that exhibited less bandpass filtering and was shifted to higher frequencies (Fig. 5A). The shorter τ, however, created a slight increase in gain for the high-variance input compared with the low-variance input, which is opposite to the gain change observed experimentally. In comparison, increasing τ to 200 ms had the opposite effect of enhancing the bandpass filtering of the model (Fig. 5B). Compared with a τ of 50 ms (Fig. 4A), the slower channel also moved the bandpass region to a lower frequency range. However, increasing τ produced no change in the gain of the neuron from the low-variance to the high-variance condition.

Fig. 5. Temporal channel properties determine the bandpass features of the LN model. Shown are the LN models for both the low (solid line) and high (dashed line) variance input conditions. Except for τ, the parameters for I_x were the same as in Fig. 4A. (A) Fast activation of I_x (τ = 8 ms) moved the bandpass to higher frequencies, but did not produce a reduction in gain with increased input variance. (B) Slow activation of I_x (τ = 200 ms) increased the bandpass property of the filter and moved it toward lower frequencies with no reduction in gain.

Discussion

Determining how neurons integrate synaptic input is critical for revealing the mechanisms underlying higher brain function. A precise description of the dendrite-to-soma input/output function is an important step. We found that the dendrite-to-soma input/output function of CA1 pyramidal neurons is well described by a simple functional LN model that combines linear filtering with static nonlinear gain control. The fact that the LN model accounted for over 97% of the somatic potential variance during a relatively long random input cannot be overemphasized. Even when producing action potentials during the high-variance input, the neuronal input/output function was well described by the LN model. The combination of bandpass filtering characteristics and nonlinear gain changes suggests that the input/output function cannot be explained by passive cellular properties, but requires active membrane mechanisms.

The advantages of characterizing the neuronal input/output relationship using a functional LN model are many. This model allows us to describe neuronal processing using the well-defined signal-processing concepts of linear filtering and gain control. Although useful in understanding the biophysical aspects of neurons, a realistic compartmental model of a neuron would not allow such a clear description of the dendrite-to-soma input/output function. As demonstrated by our modeling of a hypothetical voltage-dependent conductance, I_x, different channel parameters can produce the same qualitative input/output characteristics of a compartmental neuron model.

That a simple functional model accounted so well for the dendrite-to-soma processing was initially surprising, given that dendrites contain a wide range of nonlinear voltage- and time-dependent channels (Johnston et al., 1996). However, our subsequent computer simulations using a compartmental model indicate that nonlinear channels can underlie the linear temporal dynamics observed experimentally. The bandpass filtering produced by our theoretical voltage- and time-dependent channel is a result of a complex interaction between the passive filtering properties of the membrane and the temporal dynamics of the channel (for review, see Hutcheon and Yarom, 2000). Although the steady-state activation curve also influenced the bandpass filtering, we found that channel kinetics had the greatest effect on the temporal filtering of the model.

It is significant that the dendrite-to-soma input/output relationship contains a prominent bandpass in the theta frequency range. Neuronal networks in the hippocampus have prominent theta oscillations that are correlated with specific cognitive and behavioral states. Hippocampal theta oscillations occur during active exploration of the environment and during REM sleep, and may underlie memory-related processes (for reviews, see Buzsaki, 2002; Lengyel et al., 2005). The dynamics of the dendrite-to-soma input/output function may contribute directly to network-level oscillations in the hippocampus and in other brain areas such as the neocortex (Ulrich, 2002).

Adaptation of gain is an important processing mechanism because it ensures that the amplitude of the stimulus is maintained within the dynamic range of the system (for review, see Salinas and Thier, 2000). Gain adaptation is also the basis for the popular idea that the brain adapts to the statistical properties of the encoded signals for efficient representation (e.g., Barlow, 1961; Atick, 1992). For example, the spike activity of neurons in the visual system has repeatedly been shown to adapt to the variance (or contrast) of a visual stimulus (e.g., Maffei et al., 1973; Movshon and Lennie, 1979). We found a similar change in gain to the variance


of the injected current, suggesting that the intrinsic properties of dendrites may provide part of the foundation for the gain adaptation observed at the circuit and systems level. Recent studies have reported similar changes in the gain of signals injected into the soma of cortical neurons in vitro. It has been proposed that this regulation of gain may be due to either intrinsic channel mechanisms (Sanchez-Vives et al., 2000; Higgs et al., 2006) or background synaptic activity (Chance et al., 2002; Rauch et al., 2003; Shu et al., 2003). Given the importance of maintaining the optimal level of activity in the brain, it is not surprising that there may exist multiple mechanisms for regulating gain.

With our computer simulations, however, we were not able to link the properties of our simple theoretical channel to the experimentally observed adaptation of the static-gain function. Although we observed changes in gain between the low- and high-variance input conditions, these were in the wrong direction (compare Fig. 2D with Fig. 5A), and the simulations did not reproduce the compressive reduction in gain observed at depolarized potentials with the high-variance input. This suggests that the experimentally observed change in the static-gain function may be due to other mechanisms, such as an increase in intracellular Ca2+ during the high-variance input. Another possibility is that the reduction in gain with increased input variance may arise from the interaction of many different channel types and mechanisms.

The theoretical channel model in Fig. 4A is based closely on the voltage-dependent current, I_h. This channel is expressed throughout the dendrites and has been shown to affect the temporal integration of synaptic inputs (Magee, 1999). Using a ‘‘chirp’’ sinusoidal stimulus, Ulrich showed that I_h plays a role in dendrite-to-soma bandpass filtering in neocortical neurons (Ulrich, 2002). Our preliminary experiments conducted in the presence of pharmacological blockers suggest that I_h may have a similar role in hippocampal pyramidal cells. However, dendrites contain many other voltage-dependent mechanisms, and understanding how they work together to shape the dendrite-to-soma input/output function is an important topic for future studies.

Acknowledgments

Supported by the Canada Foundation for Innovation and operating grants from the Canadian Institutes of Health Research and the Natural Sciences and Engineering Research Council of Canada (EPC).

References

Albrecht, D.G., Farrar, S.B and Hamilton, D.B (1984) Spatial contrast adaptation characteristics of neurones recorded in the cat’s visual cortex J Physiol., 347: 713–739.

Ariav, G., Polsky, A. and Schiller, J. (2003) Submillisecond precision of the input-output transformation function mediated by fast sodium dendritic spikes in basal dendrites of CA1 pyramidal neurons. J. Neurosci., 23: 7750–7758.

Atick, J.J. (1992) Could information theory provide an ecological theory of sensory processing? Network, 3: 213–251.
Baccus, S.A. and Meister, M. (2002) Fast and slow contrast adaptation in retinal circuitry. Neuron, 36: 909–919.
Barlow, H.B. (1961) In: Rosenblith, W.A. (Ed.), Sensory Communication. MIT Press, Cambridge, MA, pp. 217–234.
Bialek, W. and Rieke, F. (1992) Reliability and information transmission in spiking neurons. Trends Neurosci., 15: 428–434.
Binder, M.D., Poliakov, A.V. and Powers, R.K. (1999) Functional identification of the input-output transforms of mammalian motoneurones. J. Physiol. Paris, 93: 29–42.
Britten, K.H., Newsome, W.T., Shadlen, M.N., Celebrini, S. and Movshon, J.A. (1996) A relationship between behavioral choice and the visual responses of neurons in macaque MT. Vis. Neurosci., 13: 87–100.

Bryant, H.L and Segundo, J.P (1976) Spike initiation by transmembrane current: a white-noise analysis J Physiol., 260: 279–314.

Buzsaki, G (2002) Theta oscillations in the hippocampus Neuron, 33: 325–340.

Cash, S and Yuste, R (1999) Linear summation of excitatory inputs by CA1 pyramidal neurons Neuron, 22: 383–394 Chance, F.S., Abbott, L.F and Reyes, A.D (2002) Gain modulation from background synaptic input Neuron, 35: 773–782.

Chichilnisky, E.J (2001) A simple white noise analysis of neuronal light responses Network, 12: 199–213.

Colbert, C.M and Pan, E (2002) Ion channel properties underlying axonal action potential initiation in pyramidal neurons Nat Neurosci., 5: 533–538.

Cook, E.P and Maunsell, J.H (2002) Dynamics of neuronal responses in macaque MT and VIP during motion detection Nat Neurosci., 5: 985–994.

Destexhe, A and Pare, D (1999) Impact of network activity on the integrative properties of neocortical pyramidal neurons in vivo J Neurophysiol., 81: 1531–1547.


Destexhe, A., Rudolph, M and Pare, D (2003) The

high-conductance state of neocortical neurons in vivo Nat Rev.

Neurosci., 4: 739–751.

Dodd, J.V., Krug, K., Cumming, B.G and Parker, A.J (2001)

Perceptually bistable three-dimensional figures evoke high

choice probabilities in cortical area MT J Neurosci., 21:

4809–4821.

Fairhall, A.L., Lewen, G.D., Bialek, W and de Ruyter Van

Steveninck, R.R (2001) Efficiency and ambiguity in an

adaptive neural code Nature, 412: 787–792.

Ferster, D and Jagadeesh, B (1992) EPSP-IPSP interactions

in cat visual cortex studied with in vivo whole-cell patch

recording J Neurosci., 12: 1262–1274.

Gasparini, S and Magee, J.C (2006) State-dependent dendritic

computation in hippocampal CA1 pyramidal neurons J.

Neurosci., 26: 2088–2100.

Gasparini, S., Migliore, M and Magee, J.C (2004) On the

initiation and propagation of dendritic spikes in CA1

pyramidal neurons J Neurosci., 24: 11046–11056.

Golding, N.L and Spruston, N (1998) Dendritic sodium spikes

are variable triggers of axonal action potentials in

hippo-campal CA1 pyramidal neurons Neuron, 21: 1189–1200.

Hausser, M and Mel, B (2003) Dendrites: bug or feature?

Curr Opin Neurobiol., 13: 372–383.

Hausser, M., Spruston, N and Stuart, G.J (2000) Diversity

and dynamics of dendritic signaling Science, 290: 739–744.

Higgs, M.H., Slee, S.J and Spain, W.J (2006) Diversity of gain

modulation by noise in neocortical neurons: regulation by the

slow after-hyperpolarization conductance J Neurosci., 26:

8787–8799.

Hines, M.L and Carnevale, N.T (1997) The NEURON

simulation environment Neural Comput., 9: 1179–1209.

Hodgkin, A.L and Huxley, A.F (1952) A quantitative

description of membrane current and its application to

conduction and excitation in nerve J Physiol., 117: 500–544.

Hosoya, T., Baccus, S.A and Meister, M (2005) Dynamic

predictive coding by the retina Nature, 436: 71–77.

Hunter, I.W and Korenberg, M.J (1986) The identification of

nonlinear biological systems: Wiener and Hammerstein

cascade models Biol Cybern., 55: 135–144.

Hutcheon, B and Yarom, Y (2000) Resonance, oscillation

and the intrinsic frequency preferences of neurons Trends

Neurosci., 23: 216–222.

Johnston, D., Magee, J.C., Colbert, C.M and Cristie, B.R.

(1996) Active properties of neuronal dendrites Annu Rev.

Neurosci., 19: 165–186.

Kim, K.J and Rieke, F (2001) Temporal contrast adaptation in

the input and output signals of salamander retinal ganglion

cells J Neurosci., 21: 287–299.

Larkum, M.E and Zhu, J.J (2002) Signaling of layer 1 and

whisker-evoked Ca2+ and Na+ action potentials in distal

and terminal dendrites of rat neocortical pyramidal neurons

in vitro and in vivo J Neurosci., 22: 6991–7005.

Larkum, M.E., Zhu, J.J and Sakmann, B (2001) Dendritic

mechanisms underlying the coupling of the dendritic with the

axonal action potential initiation zone of adult rat layer 5

pyramidal neurons J Physiol., 533: 447–466.

Lengyel, M., Huhn, Z and Erdi, P (2005) Computational theories on the function of theta oscillations Biol Cybern., 92: 393–408.

London, M and Hausser, M (2005) Dendritic computation Annu Rev Neurosci., 28: 503–532.

Maffei, L., Fiorentini, A. and Bisti, S. (1973) Neural correlate of perceptual adaptation to gratings. Science, 182: 1036–1038.
Magee, J.C. (1998) Dendritic hyperpolarization-activated currents modify the integrative properties of hippocampal CA1 pyramidal neurons. J. Neurosci., 18: 7613–7624.
Magee, J.C. (1999) Dendritic I_h normalizes temporal summation in hippocampal CA1 neurons. Nat. Neurosci., 2: 508–514.
Magee, J.C. (2000) Dendritic integration of excitatory synaptic input. Nat. Rev. Neurosci., 1: 181–190.

Markus, E.J., Qin, Y.L., Leonard, B., Skaggs, W.E., McNaughton, B.L and Barnes, C.A (1995) Interactions between location and task affect the spatial and directional firing of hippocampal neurons J Neurosci., 15: 7079–7094 Marmarelis, P.Z and Marmarelis, V.Z (1978) Analysis of Physiological Systems: The White Noise Approach Plenum Press, New York, NY.

McCulloch, W.S and Pitts, W (1943) A logical calculus of ideas immanent in nervous activity Bull Math Biophys., 5: 115–133.

Meister, M and Berry II, M.J (1999) The neural code of the retina Neuron, 22: 435–450.

Mel, B.W (1993) Synaptic integration in an excitable dendritic tree J Neurophysiol., 70: 1086–1101.

Movshon, J.A and Lennie, P (1979) Pattern-selective adaptation in visual cortical neurones Nature, 278: 850–852 Nettleton, J.S and Spain, W.J (2000) Linear to supralinear summation of AMPA-mediated EPSPs in neocortical pyramidal neurons J Neurophysiol., 83: 3310–3322 Nevian, T., Larkum, M.E., Polsky, A and Schiller, J (2007) Properties of basal dendrites of layer 5 pyramidal neurons: a direct patch-clamp recording study Nat Neurosci., 10: 206–214.

Oviedo, H and Reyes, A.D (2002) Boosting of neuronal firing evoked with asynchronous and synchronous inputs to the dendrite Nat Neurosci., 5: 261–266.

Oviedo, H and Reyes, A.D (2005) Variation of input-output properties along the somatodendritic axis of pyramidal neurons J Neurosci., 25: 4985–4995.

Poliakov, A.V., Powers, R.K. and Binder, M.D. (1997) Functional identification of the input-output transforms of motoneurones in the rat and cat. J. Physiol., 504(Pt 2): 401–424.
Polsky, A., Mel, B.W. and Schiller, J. (2004) Computational subunits in thin dendrites of pyramidal cells. Nat. Neurosci., 7: 621–627.
Purushothaman, G. and Bradley, D.C. (2005) Neural population code for fine perceptual decisions in area MT. Nat. Neurosci., 8: 99–106.
Rall, W. (1959) Branching dendritic trees and motoneuron membrane resistivity. Exp. Neurol., 1: 491–527.

Rauch, A., La Camera, G., Luscher, H.R., Senn, W and Fusi, S (2003) Neocortical pyramidal cells respond as


integrate-and-fire neurons to in vivo-like input currents J.

Neurophysiol., 90: 1598–1612.

Reyes, A (2001) Influence of dendritic conductances on the

input-output properties of neurons Annu Rev Neurosci.,

24: 653–675.

Sakai, H.M (1992) White-noise analysis in neurophysiology.

Physiol Rev., 72: 491–505.

Salinas, E and Thier, P (2000) Gain modulation: a major

computational principle of the central nervous system.

Neuron, 27: 15–21.

Sanchez-Vives, M.V., Nowak, L.G and McCormick, D.A.

(2000) Membrane mechanisms underlying contrast

adapta-tion in cat area 17 in vivo J Neurosci., 20: 4267–4285.

Segev, I and London, M (2000) Untangling dendrites with

quantitative models Science, 290: 744–750.

Shu, Y., Hasenstaub, A., Badoual, M., Bal, T and McCormick,

D.A (2003) Barrages of synaptic activity control the gain and

sensitivity of cortical neurons J Neurosci., 23: 10388–10401.

Slee, S.J., Higgs, M.H., Fairhall, A.L and Spain, W.J (2005)

Two-dimensional time coding in the auditory brainstem.

J Neurosci., 25: 9978–9988.

Stuart, G., Schiller, J and Sakmann, B (1997) Action potential

initiation and propagation in rat neocortical pyramidal

neurons J Physiol., 505(Pt 3): 617–632.

Tamas, G., Szabadics, J and Somogyi, P (2002) Cell type- and

subcellular position-dependent summation of unitary

postsy-naptic potentials in neocortical neurons J Neurosci., 22:

740–747.

Uka, T and DeAngelis, G.C (2004) Contribution of area MT

to stereoscopic depth perception: choice-related response

modulations reflect task strategy Neuron, 42: 297–310.

Ulrich, D (2002) Dendritic resonance in rat neocortical pyramidal cells J Neurophysiol., 87: 2753–2759.

Urban, N.N and Barrionuevo, G (1998) Active summation of excitatory postsynaptic potentials in hippocampal CA3 pyramidal neurons Proc Natl Acad Sci U.S.A., 95: 11450–11455.

Wei, D.S., Mei, Y.A., Bagal, A., Kao, J.P., Thompson, S.M and Tang, C.M (2001) Compartmentalized and binary behavior of terminal dendrites in hippocampal pyramidal neurons Science, 293: 2272–2275.

Westwick, D.T and Kearney, R.E (2003) Identification of Nonlinear Physiological Systems IEEE Press, Piscataway, NJ.

Williams, S.R (2004) Spatial compartmentalization and functional impact of conductance in pyramidal neurons Nat Neurosci., 7: 961–967.

Williams, S.R and Stuart, G.J (2002) Dependence of EPSP efficacy on synapse location in neocortical pyramidal neurons Science, 295: 1907–1910.

Williams, S.R and Stuart, G.J (2003) Role of dendritic synapse location in the control of action potential output Trends Neurosci., 26: 147–154.

Womack, M.D. and Khodakhah, K. (2004) Dendritic control of spontaneous bursting in cerebellar Purkinje cells. J. Neurosci., 24: 3511–3521.
Wu, M.C., David, S.V. and Gallant, J.L. (2006) Complete functional characterization of sensory neurons by system identification. Annu. Rev. Neurosci., 29: 477–505.
Yoganarasimha, D., Yu, X. and Knierim, J.J. (2006) Head direction cell representations maintain internal coherence during conflicting proximal and distal cue rotations: comparison with hippocampal place cells. J. Neurosci., 26: 622–631.


P. Cisek, T. Drew & J.F. Kalaska (Eds.)
Progress in Brain Research, Vol. 165

Department of Physiology and Cellular Biophysics, Center for Neurobiology and Behavior, Columbia University College of Physicians and Surgeons, New York, NY 10032-2695, USA

Keywords: network activity; homeostasis; plasticity; network development

Introduction

Theoretical (also known as computational) neuroscience seeks to use mathematical analysis and computer simulation to link the anatomical and physiological properties of neural circuits to behavioral and cognitive functions. Often, researchers working in this field have a general principle of circuit design or a computational mechanism in mind when they start to work on a project. For the project to be described here, the general issue concerns the connectivity of neural circuits. For all but the smallest of neural circuits, we typically do not have a circuit diagram of synaptic connectivity or a list of synaptic strengths. How can we model a circuit when we are ignorant of such basic facts about its structure? One answer is to approach the problem statistically: put in as much as we know and essentially average over the rest. Another approach, and the one that inspires this work, is to hope that we can uncover properties of a neural circuit from basic principles of synapse formation and plasticity. In other words, if we knew the rules by which neural circuits develop, maintain themselves, and change in response to activity, we could work out their architecture on the basis of that knowledge. To this end, we need to uncover the basic rules and principles by which neural circuits construct themselves.

When neurons are removed from the brain and grown in culture, they change from dissociated neurons into reconnected networks or, in the case of slice cultures, from brain slices to essentially two-dimensional neural circuits. These re-development processes provide an excellent opportunity for exploring basic principles of circuit formation. Using slice cultures from rat cortex (and also acute slices),

Corresponding author. Tel.: +1 212-543-5070; Fax: +1 212-543-5797; E-mail: lfa2103@columbia.edu


Beggs and Plenz (2003, 2004) uncovered an intriguing property of networks of neurons developed in this way. By growing neural circuits on electrode arrays, they were able to record activity over long periods of time and accumulate a great deal of data on the statistical properties of the activity patterns that arise spontaneously in such networks. Of particular interest are the observations of scaling behavior and criticality. These results provide the inspiration for the model we construct and study here.

The networks recorded by Beggs and Plenz were largely quiescent, punctuated by spontaneous bursts of activity observed on variable numbers of electrodes for different periods of time. Beggs and Plenz called these bursts avalanches. To define and parameterize neural avalanches, they divided time into bins of size t_bin through a procedure that selects an optimal size. Here, we simply use t_bin = 10 ms, typical of the values they used. An avalanche is defined as an event in which activity is observed on at least one electrode for a contiguous sequence of time bins, bracketed before and after by at least one bin of silence on all electrodes. We use an identical definition here, except that electrode activity is replaced by neuronal activity, because our model has no electrodes and we can easily monitor each neuron we simulate.
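The avalanche definition just given translates directly into code. A minimal sketch (the spike-count sequence at the bottom is invented for illustration; in the model the counts would come from summing the spikes of all simulated neurons in each 10 ms bin):

```python
def avalanche_stats(spike_counts):
    """Given per-bin spike counts summed over all recorded units,
    return (sizes, durations) of avalanches: runs of nonempty bins
    bracketed by silent bins."""
    sizes, durations = [], []
    size = duration = 0
    for c in spike_counts:
        if c > 0:
            size += c
            duration += 1
        elif duration > 0:          # a silent bin ends the avalanche
            sizes.append(size)
            durations.append(duration)
            size = duration = 0
    if duration > 0:                # avalanche still running at record end
        sizes.append(size)
        durations.append(duration)
    return sizes, durations

counts = [0, 2, 1, 0, 0, 5, 0, 1, 1, 1, 0]
print(avalanche_stats(counts))      # ([3, 5, 3], [2, 1, 3])
```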

The results of Beggs and Plenz (2003, 2004) of particular importance for our study are histograms characterizing both the durations and sizes of the avalanches they recorded. Duration was determined by counting the number of consecutive bins within an avalanche. Size was measured either in terms of the number of electrodes on which activity was recorded during an avalanche, or by a measure of the total signal seen on all electrodes during the course of an avalanche. In our modeling work, we measure the size of an avalanche by counting the total number of action potentials generated during its time course.

The histograms of duration and size constructed from the data revealed a fascinating property: they have a power-law form. The number of events of a given size fell as the size to the −3/2 power, and the number of events of a given duration fell as the duration to the −2 power. Power-law distributions are interesting because they contain no natural scale. For example, in this context we might expect the typical size of a neuronal dendritic tree or axonal arbor (around 100 μm) to set the spatial scale for avalanches. Similarly, we might expect a typical membrane time constant of around 10 ms to set the scale for avalanche durations. If this were true, the distributions should be exponential rather than power-law. Power-law distributions indicate that these networks can, at least occasionally, produce activity patterns that are much larger and much longer-lasting than we would have expected. This is what makes power-law distributions so interesting. Another intriguing feature is that power-law behavior typically arises in systems

Fig. 1. Results of Beggs and Plenz on avalanche distributions. Left: probability of avalanches of different spatial sizes. The dashed line corresponds to a −3/2 power. Right: probability of avalanches of different durations. The dashed line corresponds to a −2 power. (Adapted with permission from Beggs and Plenz, 2004.)


when they are critical, meaning that they are close to a transition in behavior. Thus, power laws arise when systems are specially configured. Beggs and Plenz noted that the powers they observed, −3/2 and −2, are the same as those that arise in a very simple model in which each neuron connects to n other neurons and, if it fires an action potential, causes each of its targets to fire with probability p. If p < 1/n, activity in this model tends to die out, and if p > 1/n it tends to blow up. If p = 1/n, on the other hand, this simple model produces distributions with the same power-law dependence and the same powers as those observed in the data. The condition p = 1/n implies that every neuron that fires an action potential causes, on average, one other neuron to fire. This is critical in the sense discussed above: smaller values of p tend to produce patterns of activity that die out over time, and larger values of p tend to produce exploding bursts of activity. Thus, the results from these array recordings lead to the puzzle of how networks develop and maintain patterns of connectivity that satisfy this criticality condition. Do neurons somehow count the number of other neurons they project to and adjust the strengths of their synapses in inverse proportion to this number? If so, what would be the biophysical substrate for such a computation and adjustment?
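Why p = 1/n is special can be seen by simulating the simple branching model just described. This is a toy sketch (targets are treated as independent and avalanche size is capped for practicality), not the growth model developed below:

```python
import random

def avalanche_size(p, n, max_size=10_000):
    """Total number of spikes when one initial spike propagates through
    a branching process: each spike triggers each of its n targets
    independently with probability p."""
    active, total = 1, 1
    while active and total < max_size:
        next_active = sum(1 for _ in range(active * n) if random.random() < p)
        total += next_active
        active = next_active
    return total

random.seed(1)
n = 10
subcritical = [avalanche_size(0.5 / n, n) for _ in range(1000)]  # p < 1/n
critical = [avalanche_size(1.0 / n, n) for _ in range(1000)]     # p = 1/n

# Subcritical activity dies out quickly; at p = 1/n occasional
# avalanches are orders of magnitude larger (a power-law tail).
print(max(subcritical), max(critical))
```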

To address these questions, we made use of a model of neuronal circuit growth due to Van Ooyen and Van Pelt (2003). The model is simple, but here simplicity is exactly the point. We ask, in place of the above questions, whether a simple, biophysically plausible mechanism could account for the power-law behavior seen in the avalanche histograms without requiring any counting of synapses or criticality calculations. We are not proposing that the model we present is realistic, but rather use it to show that adjusting a network to be critical may not be as difficult as it would first appear.

The model

Following the work of Van Ooyen and Van Pelt (2003), our model consists of N neurons positioned at random locations within a square region. The length and width of this square define 1 unit of length. We can think of each location as the position of the soma of a neuron. The axonal and dendritic processes of each neuron are characterized by a circle drawn around its location. The size of this circle represents the extent of the processes projecting from the centrally located soma. Neurons interact synaptically when the circles representing their processes overlap, and the strength of the coupling is proportional to the area of overlap between these two circles. This is reasonable because synapses form in areas where neuronal processes intersect, and more intersections are likely to result in more synapses. All synaptic connections are excitatory.
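The coupling strength in the model is thus the area of intersection of two circles. That "lens" area has a standard closed form; a minimal sketch (the function name is ours):

```python
import math

def overlap_area(d, r1, r2):
    """Area of intersection of two circles of radii r1 and r2 whose
    centers are a distance d apart."""
    if d >= r1 + r2:                  # disjoint circles: no overlap
        return 0.0
    if d <= abs(r1 - r2):             # one circle contained in the other
        return math.pi * min(r1, r2) ** 2
    # General case: sum of two circular segments minus the kite area.
    a1 = r1**2 * math.acos((d**2 + r1**2 - r2**2) / (2 * d * r1))
    a2 = r2**2 * math.acos((d**2 + r2**2 - r1**2) / (2 * d * r2))
    tri = 0.5 * math.sqrt((-d + r1 + r2) * (d + r1 - r2)
                          * (d - r1 + r2) * (d + r1 + r2))
    return a1 + a2 - tri
```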

The critical component of the model is the growth rule that determines how the process-defining circles expand or contract as a function of neuronal activity. The rule is simple: high levels of activity, which signify excessively strong excitation, cause the neuronal circle to contract, and low levels of activity, signifying insufficient excitation, cause it to grow. The initial sizes of the circles are chosen randomly and uniformly over the range from 0 to 0.05, in the units defined by the size of the square "plating" region.

Each neuron in the model is characterized by a firing rate and a radius, which is the radius of the circle defining the extent of its processes. Neuronal activity is generated by a Poisson spiking model on the basis of a computed firing rate. The firing rate for neuron i, where i = 1, 2, 3, …, N, relaxes exponentially to a background rate r0 with a time constant τr according to

    τr dri/dt = r0 − ri .   (1)

We took r0 = 0.1 Hz and τr = 5 ms. The low background firing rate of 0.1 Hz is important to prevent the network from simply remaining silent. At every time step Δt, neuron i fires an action potential with probability riΔt. We took Δt = 1 ms. After a neuron fires an action potential, it is held in a refractory state in which it cannot fire for 20 ms. Whenever another neuron, neuron j, fires an action potential, the firing rate of neuron i is


incremented by

    ri → ri + g Aij ,   (2)

where Aij is the overlap area between the two circles characterizing the processes of neurons i and j. In our simulations, the constant g, which sets the scale of synaptic strength in the model, is set to g = 500 Hz. This number is large because the overlap areas between the neurons are quite small in the units we are using.

The average level of activity of neuron i is monitored by a variable Ci that represents the internal calcium concentration in that neuron. Ci decays to zero exponentially,

    τC dCi/dt = −Ci ,   (3)

and is incremented by one unit (Ci → Ci + 1) whenever neuron i fires an action potential. This step size defines the unit of calcium concentration. The value of the time constant τC is not critical in what follows, but we took it to be 100 ms.

Two features make calcium a useful indicator of neuronal activity. First, resting calcium concentrations inside neurons are very small, but calcium enters the cell whenever the neuron fires an action potential. Because of this, the calcium concentration acts as an integrator of the action potential response and, for this reason, imaging calcium concentrations is a common way to monitor neuronal activity. Second, many molecules in a neuron are sensitive to the internal calcium concentration, so this indicator can activate numerous biochemical cascades, including those responsible for growth.

The remaining equation in the model is the one that determines the contraction or growth of the radius ai characterizing neuron i. This is

    dai/dt = k (Ctarget − Ci) ,   (4)

where k determines the rate of growth. We used a variety of values for k, but growth was always slow on the time scale of neuronal activity. We often started a run with a larger value of k (k = 0.02 s⁻¹) to speed up growth, but as an equilibrium state was reached we lowered this to k = 0.002 s⁻¹. The most important parameter determining the behavior of the model is Ctarget. This sets a target level of calcium, and therefore a target level of activity, for the neurons. If activity is low, so that Ci < Ctarget, the above equation causes the processes from neuron i to grow (ai increases), leading to more excitatory connections with other neurons and hence more activity. If activity is high, so that Ci > Ctarget, the processes will retract (ai decreases), lowering the amount of excitation reaching neuron i. In this way, each neuron grows or contracts in an attempt to maintain the target level of calcium concentration (Ci = Ctarget), which implies a certain target level of activity. We discuss the value of Ctarget more fully below, but Ctarget = 0.08 was used to obtain the results in the figures we show.
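Putting Eqs. (1)–(4) together, one Euler time step of the model might look like the sketch below. This is our illustration, not the authors' code: function names are ours, the parameter values are those quoted in the text, and the pairwise overlap computation is vectorized for brevity.

```python
import numpy as np

def overlap_matrix(dist, r):
    """Pairwise circle-overlap areas A_ij (diagonal zeroed)."""
    r1, r2 = r[:, None], r[None, :]
    d = np.clip(dist, 1e-12, None)
    contained = np.pi * np.minimum(r1, r2) ** 2
    x1 = np.clip((d**2 + r1**2 - r2**2) / (2 * d * r1 + 1e-300), -1, 1)
    x2 = np.clip((d**2 + r2**2 - r1**2) / (2 * d * r2 + 1e-300), -1, 1)
    tri_sq = (-d + r1 + r2) * (d + r1 - r2) * (d - r1 + r2) * (d + r1 + r2)
    lens = (r1**2 * np.arccos(x1) + r2**2 * np.arccos(x2)
            - 0.5 * np.sqrt(np.clip(tri_sq, 0.0, None)))
    A = np.where(d >= r1 + r2, 0.0,
                 np.where(d <= np.abs(r1 - r2), contained, lens))
    np.fill_diagonal(A, 0.0)
    return A

def simulate(n_neurons=100, steps=20000, seed=0, dt=1e-3, tau_r=5e-3,
             r0=0.1, g=500.0, tau_c=0.1, c_target=0.08, k=0.02,
             refrac=0.02):
    rng = np.random.default_rng(seed)
    pos = rng.random((n_neurons, 2))            # somata in the unit square
    radius = rng.uniform(0.0, 0.05, n_neurons)  # initial process extents
    rate = np.full(n_neurons, r0)               # firing rates r_i (Hz)
    calcium = np.zeros(n_neurons)               # activity indicators C_i
    refrac_left = np.zeros(n_neurons)
    dist = np.linalg.norm(pos[:, None] - pos[None, :], axis=-1)
    for _ in range(steps):
        # Eq. (1): relaxation of the rates toward the background rate r0.
        rate += dt / tau_r * (r0 - rate)
        # Poisson spiking with a 20 ms refractory period.
        spikes = (rng.random(n_neurons) < rate * dt) & (refrac_left <= 0)
        refrac_left = np.where(spikes, refrac, refrac_left - dt)
        # Eq. (2): a spike in neuron j increments r_i by g * A_ij.
        rate += g * overlap_matrix(dist, radius) @ spikes
        # Eq. (3): calcium decays and jumps by one unit on each spike.
        calcium += -dt / tau_c * calcium + spikes
        # Eq. (4): radii grow or retract toward the target calcium level.
        radius = np.maximum(radius + dt * k * (c_target - calcium), 0.0)
    return radius, calcium
```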

Results

The left panel of Fig. 2 shows a typical configuration at the beginning of a run. In this case, 100 neurons have been located randomly with various radii, also chosen randomly. At this initial point, many of the neurons are disconnected or, at most, connected together in small clusters. Each neuron has a spontaneous firing rate of 0.1 Hz, even when isolated, so this network exhibits activity, but at a low level.

Fig. 2 (left) shows a typical initial state of the model, but the results of running a model simulation are independent of the initial state unless a highly unlikely initial configuration (such as many neurons at the same position) limits the possibilities for developing connections through growth. The target calcium level we use, Ctarget = 0.08, is larger than the average calcium level attained by the neurons in this initial configuration. Thus, when the simulation starts, the neurons (the circles in Fig. 2, left) grow larger.

As the neurons grow, they begin to form more and stronger connections, which causes the level of activity in the network to increase. Growth continues until the neurons are active enough to bring their average calcium concentrations near to the value Ctarget. At this point, the average rate of growth of the network goes to zero, but there are still small adjustments in the sizes of individual neurons. As neurons adjust their own radii, and


react to the adjustments of their neighbors, they eventually achieve a quasi-equilibrium point in which their time-averaged calcium concentrations remain close to Ctarget, with small fluctuations in their radii over time. From this point on, the network will remain in the particular configuration it has achieved indefinitely. This growth process has been described previously (Van Ooyen and Van Pelt, 1994, 1996). Our only modification of the original growth model of Van Ooyen and Van Pelt (1994, 1996) was to add Poisson spikes to their firing-rate model. The right panel of Fig. 2 shows the equilibrium configuration that arose from the initial configuration shown in the left panel.

The size of the small fluctuations in neuronal size about the equilibrium configuration is determined by the magnitude of the growth rate, k. Because growth processes are much slower than the processes generating activity in a network, we chose k to be as small as we could without requiring undue amounts of computer time to achieve equilibrium. The results we report are insensitive to the exact value of k.

Once the network has achieved an equilibrium configuration, we analyze its patterns of activity using the same approach as Beggs and Plenz (2003, 2004): we made histograms of the duration and total number of action potentials in periods of activity that were bracketed by 10 ms time bins in which no activity was observed. To assure that the resulting histograms reflect the dynamics of the network and not of the growth process, we shut off growth (set k = 0) while we accumulated data for the histograms, although for the small growth rate we use, this did not make any noticeable difference to the results.
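The avalanche definition just given (periods of activity bracketed by empty time bins) can be implemented directly on a binned spike record; a sketch with hypothetical names:

```python
def extract_avalanches(bin_counts):
    """Split a sequence of per-bin population spike counts into avalanches:
    maximal runs of nonzero bins bracketed by empty (zero-count) bins.
    Returns a list of (duration_in_bins, total_spike_count) pairs."""
    avalanches, duration, size = [], 0, 0
    for c in bin_counts:
        if c > 0:
            duration += 1
            size += c
        elif duration > 0:            # an empty bin ends the avalanche
            avalanches.append((duration, size))
            duration, size = 0, 0
    if duration > 0:                  # a run extending to the end of record
        avalanches.append((duration, size))
    return avalanches
```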

Histograms of the durations and number of action potentials for the avalanches seen in the model at equilibrium are shown in Fig. 3. These are log-log plots, and the straight lines drawn indicate −3/2 (Fig. 3, left) and −2 (Fig. 3, right) power-law dependences. Over the range shown, the histograms follow the power-law dependences of a critical cascade model. As in the data (Beggs and Plenz, 2003, 2004), there is a drop-off for large, rare events due to finite-size effects.

Changing the initial size of the circles representing the neuronal processes in these simulations has no effect, because the growth rule simply expands small circles or shrinks large circles until they are in the equilibrium range. The model is, however, sensitive to the value of the target calcium concentration. The most sensitive result is the exponent of the power function describing the distribution of spike counts, as shown in the left panels of Figs. 1 and 3. The exponent for the distribution of durations is less sensitive. Fig. 4 shows how the spike count distribution exponent depends on Ctarget over a range of values from 0.04 to 0.12, with the value used for the previous figures, 0.08, in the middle of this range.
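The exponents reported in Figs. 3 and 4 can be estimated by a least-squares line fit to the histogram in log-log coordinates. A minimal sketch (ours; maximum-likelihood estimators are preferable for careful work):

```python
import math

def loglog_slope(sizes):
    """Least-squares slope of log10(count) versus log10(size) for a
    histogram of avalanche sizes; for a power law p(s) ~ s^(-a) the
    fitted slope is approximately -a."""
    counts = {}
    for s in sizes:
        counts[s] = counts.get(s, 0) + 1
    xs = [math.log10(s) for s in sorted(counts)]
    ys = [math.log10(counts[s]) for s in sorted(counts)]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den
```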

Fig. 2. Configuration of the model network before (left) and after (right) activity-dependent growth. Each circle represents the extent of the processes for one neuron. Neurons with overlapping circles are connected. Initially (left), the neurons are either uncoupled or coupled in small clusters. At equilibrium (right), the network is highly connected.


In our network model, the spontaneous level of activity for each neuron, 0.1 Hz, is insufficient to allow the internal calcium concentration to approach the target level we set. Therefore, disconnected neurons grow, and they can only reach an equilibrium size if they "borrow" activity from other neurons. Even the activity in small clusters is insufficient to halt growth. However, the target calcium concentration was set so that all-to-all connections or excessive large-scale firing over the entire network would produce internal calcium concentrations that exceed the target level and therefore induce process withdrawal. Therefore, the network is forced to find a middle ground in which individual neurons share activity in variable-sized groups, drawing excitation from both nearby and faraway neurons. This is what provides the potential for critical, power-law behavior.

The power laws shown in Figs. 3 and 4 occur over a range of values of Ctarget, but they are not an inevitable consequence in the model. Values of Ctarget significantly higher than those we have used lead to an essentially flat distribution (over the finite range) of event sizes and durations. Smaller values lead to a shortage of large, long-lasting events.

The model we have considered warrants studying in more depth, and it can be extended in a number of ways. Obviously, inhibitory neurons should be added. In addition, it would be of interest to provide each neuron with two circles, one representing the extent of dendritic outgrowth and the other axonal. Separate growth rules would be needed for the two circles in this case. Finally, the axonal projections could be given both a local extension, represented by a circle around the somatic location, and distal projections, represented by additional circles located away from the soma.

The fact that a simple growth rule can generate circuits with critical, power-law behavior suggests


Fig. 4. Value of minus the exponent of the power function describing the spike count distribution as a function of the target calcium concentration. The value seen in the experiments, indicated by the dashed line, is 1.5, corresponding to Ctarget = 0.08. The solid line is drawn only to guide the eye.


that it could be the basis for developing interesting network models. We have only explored uncontrolled spontaneous activity, but the fact that this can occur over such a large range of sizes and durations makes the functional implications of these networks quite intriguing. If we can learn to grow circuits like this in which we can control the size and time scale of the activity, this could form a basis for building functional circuits that go beyond spontaneous activity to perform useful tasks.

Acknowledgments

Research supported by the National Science Foundation (IBN-0235463) and by an NIH Director's Pioneer Award, part of the NIH Roadmap for Medical Research, through grant number 5-DP1-OD114-02. We thank Tim Vogels and Joe Monaco for valuable input.

References

Abbott, L.F. and Jensen, O. (1997) Self-organizing circuits of model neurons. In: Bower, J. (Ed.), Computational Neuroscience, Trends in Research 1997. Plenum, NY, pp. 227–230.

Beggs, J.M. and Plenz, D. (2003) Neuronal avalanches in neocortical circuits. J. Neurosci., 23: 11167–11177.

Beggs, J.M. and Plenz, D. (2004) Neuronal avalanches are diverse and precise activity patterns that are stable for many hours in cortical slice cultures. J. Neurosci., 24: 5216–5229.

Teramae, J.-N. and Fukai, T. (2007) Local cortical circuit model inferred from power-law distributed neuronal avalanches. J. Comput. Neurosci., 22: 301–312.

Van Ooyen, A. (2001) Competition in the development of nerve connections: a review of models. Network, 12: R1–R47.

Van Ooyen, A. (Ed.) (2003) Modeling Neural Development. MIT Press, Cambridge, MA.

Van Ooyen, A. and Van Pelt, J. (1994) Activity-dependent outgrowth of neurons and overshoot phenomena in developing neural networks. J. Theor. Biol., 167: 27–43.

Van Ooyen, A. and Van Pelt, J. (1996) Complex periodic behaviour in a neural network model with activity-dependent neurite outgrowth. J. Theor. Biol., 179: 229–242.

Zapperi, S., Baekgaard, I.K. and Stanley, H.E. (1995) Self-organized branching processes: mean-field theory for avalanches. Phys. Rev. Lett., 75: 4071–4074.


Center for Neural Science, New York University, 4 Washington Place, New York, NY 10003, USA

Abstract: There is a transformation in behavior in the visual system of cats and primates, from neurons in the Lateral Geniculate Nucleus (LGN) that are not tuned for orientation to orientation-tuned cells in primary visual cortex (V1). The visual stimuli that excite V1 can be well controlled, and the thalamic inputs to V1 from the LGN have been measured precisely. Much has been learned about basic principles of cortical neurophysiology on account of the intense investigation of the transformation between LGN and V1. Here we present a discussion of different models for visual cortex and orientation selectivity, and then discuss our own experimental findings about the dynamics of orientation selectivity. We consider what these theoretical analyses and experimental results imply about cerebral cortical function. The conclusion is that there is a very important role for intracortical interactions, especially cortico-cortical inhibition, in producing neurons in the visual cortex highly selective for orientation.

Keywords: V1 cortex; orientation selectivity; computational model; untuned suppression; tuned suppression; dynamics

Introduction

Orientation tuning, as an emergent property in visual cortex, must be an important clue to how the cortex works and why it is built the way it is. There is a transformation in behavior, from neurons in the Lateral Geniculate Nucleus (LGN) that are not tuned for orientation to orientation-tuned cells in V1 cortex (for example, in cat area 17, 1982). We have learned about basic principles of cortical neurophysiology from the intense investigation and constructive disagreements about the mechanisms of the orientation transformation between LGN and V1, as discussed below. Here we will present our own findings about the dynamics of orientation selectivity, and contrast our results and conclusions with others. Our results suggest that intracortical interactions, especially cortico-cortical inhibition, play an important role in producing highly selective neurons in the cortex.

Theories of orientation selectivity

The rationale of our experiments came from considering different models or theories for visual cortical function, so it makes sense to begin with theory. There are two poles of thought about theoretical solutions for the problem of orientation selectivity: feedforward filtering on the one hand, and attractor states, where networks develop

Corresponding author. Tel.: +1 212 9987614; Fax: +1 212 9954860; E-mail: shapley@cns.nyu.edu


"bumps of activity" in the orientation domain as a response to weakly oriented input on the other. Our view is based on our experimental work, and also on recent theoretical work (Troyer et al., 1998; Chance et al., 1999). In this view, the mechanism of orientation selectivity in V1 is recurrent network filtering. We believe that feedforward excitation induces an orientation preference in V1 neurons but that cortico-cortical inhibitory interactions within the V1 network are needed to make V1 neurons highly selective for orientation.

Feedforward model of orientation selectivity

The first model offered chronologically, and first discussed here, is the feedforward model that is descended from the pioneering work of Hubel and Wiesel (1962; hereafter, the HW model). It has the virtue of being explicit and calculable. It involves the addition of signals from LGN cells that are aligned in a row along the long axis of the receptive field of the orientation-selective neuron, as in Fig. 1. Such connectivity is likely the basis of orientation preference (the preferred orientation), but whether or not feedforward connectivity can account for orientation selectivity (how much bigger the preferred response is than responses to nonpreferred orientations) is a more difficult question. There is some support for a feedforward neural architecture based on studies that have determined the pattern of LGN input to V1 cells. In the ferret visual cortex, Chapman et al. (1991) inhibited cortical activity with muscimol, a GABA agonist, and observed the spatial pattern of LGN inputs to a small zone of V1. Reid and Alonso (1995) did dual recordings in LGN and cat V1 and mapped the overlapping receptive fields of cortical cells and their LGN inputs. The experiment on cooling of cat V1 to block cortical activity by Ferster et al. (1996) used the method of intracellular recording of synaptic current in V1 cells; it was interpreted to mean that there is substantial orientation tuning of the collective thalamic input to a cortical neuron, consistent with the HW feedforward model. In spite of all this evidence, there is general agreement that the HW model predicts rather weak orientation selectivity, and therefore does not account for the visual properties of those V1 cells that are highly selective.

The reason for the shortfall of orientation selectivity in the HW model has been discussed before. LGN cells have a low spontaneous rate but are quite responsive to visual stimuli. An LGN cell's firing rate during visual stimulation by an optimal grating pattern has a sharp peak at one temporal phase and dips to zero spikes/s at the opposite temporal phase. Such nonlinear behavior depends on stimulus contrast; at very low stimulus contrast the LGN cells' minimum firing rate may not go down as low as zero spikes/s. But at most

Fig. 1. Classic feedforward model from LGN to simple cells in V1 cortex. Adapted with permission from Hubel and Wiesel (1962). Four LGN cells are drawn as converging onto a single V1 cell. The circular LGN receptive fields aligned in a row on the left side of the diagram make the receptive field of the cortical cell elongated.


stimulus contrasts used in experiments on cortex (that is, contrast > 0.1) the LGN cells' firing rate will hit zero on the downswing. This clipping of the spike rate at zero spikes/s makes the LGN cells act like nonlinear excitatory subunits as inputs to their cortical targets (Palmer and Davis, 1981). Because the HW model simply adds up the LGN sources, its summation of the clipped LGN inputs results in a nonzero response at 90° from the optimal orientation. Computational simulations of feedforward models with estimates of LGN convergent input derived from the work of Reid and Alonso (1995) support this analysis (Sompolinsky and Shapley, 1997). An example is given in Fig. 2, which shows a computation of the summed excitatory synaptic input from an HW model onto a cortical cell (cf. Sompolinsky and Shapley, 1997). There is substantial LGN input to a cortical cell at 90° from the preferred orientation, as seen in the figure. However, highly selective V1 cells respond little or not at all at 90° from the peak orientation. Therefore, feedforward convergence can be only a part of the story of cortical orientation selectivity.
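A toy calculation (ours, not from the chapter) makes the clipping argument concrete. At 90° from the preferred orientation, each LGN cell in the row is still modulated by the grating, but the response phases across the row cancel in a linear sum. With rectification at zero spikes/s, each cell instead contributes a positive phase-averaged drive (amplitude/π above background), so the summed input remains nonzero:

```python
import math

def summed_lgn_input(n_cells=12, amplitude=1.0, rectify=True, n_phase=256):
    """Phase-averaged summed input (spontaneous level subtracted) from a
    row of LGN cells whose modulation phases are staggered so that they
    would cancel in a linear sum, as at 90 deg from preferred."""
    total = 0.0
    for p in range(n_phase):
        phi = 2 * math.pi * p / n_phase      # stimulus temporal phase
        drive = 0.0
        for k in range(n_cells):
            # Each cell along the row sees a different spatial phase.
            resp = amplitude * math.sin(phi + 2 * math.pi * k / n_cells)
            drive += max(0.0, resp) if rectify else resp
        total += drive
    return total / n_phase
```

With rectify=False the phase-averaged sum is zero (the linear prediction), while with rectify=True it is about n_cells * amplitude / π, illustrating why a purely additive model receives substantial input at the orthogonal orientation.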

It might be supposed that one could rescue the feedforward model by setting the spike threshold just high enough that the off-peak LGN input would be sub-threshold (Carandini and Ferster, 2000). However, this strategy will only work for one contrast. One can infer this from Fig. 2: if one adds a threshold that makes the 10% contrast curve highly selective, the 50% contrast curve will have a very broadly tuned response. This has been pointed out often before (cf. Ben-Yishai et al., 1995). To understand orientation selectivity we must answer the theoretical question: how does V1 reduce large feedforward responses at orientations far from the preferred orientation, like those illustrated in Fig. 2? The important experimental issue therefore is, what is the global shape of the orientation tuning curve? This focuses attention on global measures of orientation selectivity like circular variance (Ringach et al., 2002) or 1 minus circular variance, sometimes called the orientation selectivity index (Dragoi et al., 2000). Global measures like circular variance or the orientation selectivity index are equivalent to informational measures of discriminability of widely separated orientations, an important function for visual perception.
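Circular variance is computed from a tuning curve of nonnegative responses Rk at orientations θk as CV = 1 − |Σk Rk exp(2iθk)| / Σk Rk: CV = 1 for a flat curve and CV = 0 for a response confined to a single orientation. A sketch (ours):

```python
import cmath
import math

def circular_variance(angles_deg, responses):
    """Circular variance of an orientation tuning curve.
    angles_deg: orientations in degrees (period 180, hence the factor 2).
    responses: nonnegative responses at those orientations."""
    num = sum(r * cmath.exp(2j * math.radians(a))
              for a, r in zip(angles_deg, responses))
    den = sum(responses)
    return 1.0 - abs(num) / den
```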

Models with cortical inhibition and excitation

There is a well-known addition to the HW model that would increase the orientation selectivity greatly. One can obtain increased orientation selectivity by adding inhibition that is more broadly tuned for orientation than excitation. The inhibition can be either spatial-phase-specific, so-called push–pull inhibition (Palmer and Davis, 1981; Troyer et al., 1998), or cross-orientation inhibition (Bonds, 1989; Ben-Yishai et al., 1995; McLaughlin et al., 2000). What matters for explaining orientation selectivity is not the phase specificity of the inhibition but the breadth of tuning. Thalamo-cortical synapses are thought to be purely excitatory, so inhibition must come through cortical interneurons

Fig. 2. Orientation tuning curve of the synaptic current evoked by the LGN input to a cortical cell, relative to spontaneous levels of LGN input, calculated from a feedforward model (Sompolinsky and Shapley, 1997). In this model, the LGN afferents formed an ON–OFF–ON receptive field. Each subregion had an aspect ratio of 2. A total of 24 OFF-center cells comprised the OFF subfield, while 12 ON cells comprised each ON subregion, in the model. The pattern of wiring was based on the experimental results of Reid and Alonso (1995).


rather than directly from the thalamic afferents. Experiments about intracortical inhibition in V1 have given mixed results. Initially, Sillito's (1975) experiments applying bicuculline, a GABA antagonist, suggested that intracortical inhibition is necessary for orientation tuning. However, the interpretation of these results is moot because of possible ceiling effects. Subsequent experiments of Nelson et al. (1994), blocking inhibition intracellularly, were interpreted to mean that inhibition onto a single neuron is not necessary for that neuron to be orientation tuned. There is some question about this interpretation because in the Nelson experiments the blocked cells were hyperpolarized, mimicking the effect of sustained inhibition. Somewhat later, an important role for intracortical inhibition was indicated by pharmacological experiments (Allison et al., 1995; Sato et al., 1996).

There are several models that explain cortical orientation selectivity in terms of broadly tuned inhibition and more narrowly tuned excitation. One such theory of orientation tuning in cat cortex, that of Troyer et al. (1998), explains selectivity in V1 in terms of "push–pull," that is, spatial-phase-specific, inhibition (Palmer and Davis, 1981). However, the phase specificity is not the main reason the Troyer et al. model generates orientation selectivity. The mechanism for sharpening of orientation tuning in the Troyer et al. (1998) model is cortico-cortical inhibition that is broadly tuned for orientation. In the Troyer et al. model there is broadly tuned LGN convergent excitation as in the HW model, and then more broadly tuned inhibition that cancels out the wide-angle responses but that leaves the tuning curve around the peak orientation relatively unchanged. In having broadly tuned inhibition and more narrowly tuned excitation, this particular model resembles many other cortico-cortical interaction models for orientation selectivity (Somers et al., 1995).

More recently, our colleagues David McLaughlin and Michael Shelley and their collaborators developed a model for macaque V1 (McLaughlin et al., 2000). They constructed a large-scale model (16,000 neurons) of four columns in layer 4Cα of macaque V1, incorporating known facts about the physiology and anatomy. This model accounts for many visual properties of V1 neurons, among them orientation selectivity. One innovation in this model is its realism: the spatial strength of connections between neurons is taken to be the spatial density of synaptic connections revealed by anatomical investigations of cortex (e.g., Lund, 1988; Callaway, 1998). This model causes significant sharpening of orientation selectivity of V1 neurons compared to their feedforward LGN input. The mechanism of sharpening of orientation tuning is, as in the Troyer et al. (1998) model, broadly tuned inhibition. The big difference between this model and that of Troyer et al. (1998) is that the inhibitory conductance input to a cell is phase-insensitive (and not push–pull). This is a consequence of the realistic simulation of cortical anatomy: inhibition onto a model cell is a sum from many inhibitory neurons, and each cortical inhibitory cell has a fixed phase preference that is different from that of other inhibitory neurons. This view of the nonselective nature of local cortico-cortical inhibitory interactions is supported by the measured phase insensitivity of synaptic inhibitory conductance in V1 neurons (Borg-Graham et al., 1998). Another feature of the large-scale model of McLaughlin et al. (2000) is that it accounts for the diversity in orientation selectivity that has been observed.

Others have suggested that cortico-cortical excitatory interactions play a crucial role in orientation selectivity. Somers et al. (1995) presented an elaborate computational model for orientation tuning that includes both recurrent cortical excitation and inhibition as crucial elements. Douglas and colleagues emphasized recurrent excitation in cortical circuits, reinforcing the message of Douglas and Martin (1991) on the "canonical microcircuit" of V1 cortex. A third paper in this genre was Ben-Yishai et al. (1995). Ben-Yishai et al. offered an analytical model from which they make several qualitative and quantitative predictions. One of their theoretical results is


that if recurrent feedback is strong enough, one will observe a "marginal phase" state in which V1 behaves like a set of attractors for orientation. The attractor states in recurrent excitatory models are discussed not only in Ben-Yishai et al. (1995), but also in Tsodyks et al. (1999). The concept is that the tuning of very weakly orientation-tuned feedforward signals can be massively sharpened by strong recurrent excitatory feedback. In such a network, the neurons will respond to any visual signal by relaxing into a state of activity governed by the pattern of cortico-cortical feedback. A similar idea was proposed in Adorjan et al. (1999). Our motivation was to try to decide between the different cortical models by performing and analyzing experiments on cortical orientation dynamics.

Cortical orientation dynamics

In an attempt to provide data to test models of orientation selectivity, we used a reverse correlation method developed originally by Dario Ringach. The idea was to measure the time evolution of orientation selectivity extracellularly in single V1 neurons, with a technique that drove most cortical neurons above threshold. The technique is illustrated in Fig. 3. The input image sequence is a stimulus "movie" that runs for 15–30 min. Grating patterns of orientations drawn randomly from a set of equally spaced orientations around the clock (usually in 10° steps) are presented for a fixed time (17 ms = 1 frame at a 60 Hz refresh rate in the early experiments reported in Ringach et al., 1997, and 20 ms = 2 frames at a 100 Hz refresh rate in the more recent experiments reported in Ringach et al., 2003). Each grating is presented at eight spatial phases and the response is phase averaged. For each fixed time interval between a spike and a preceding stimulus, the probability distribution for orientation is calculated by incrementing the orientation bin corresponding to the orientation that precedes each of the N spikes, and then dividing the bin counts by N. N is usually of the order of 5000 spikes. This is done for each value of the time interval between spike and stimulus to create a sequence of orientation tuning curves, one for each time interval — an "orientation selectivity movie."

In more recent experiments on orientation dynamics (Ringach et al., 2003; Xing et al., 2005), we used a refined technique that allowed us to uncover the mechanisms of orientation selectivity. As shown in Fig. 3, an additional pattern is added to the sequence — a blank stimulus at the mean luminance of the grating patterns. This allows us for the first time to measure untuned excitation and inhibition because, with this new technique, one can estimate whether the effect of one of the oriented patterns is greater or less than that of the blank pattern. If the probability of producing a

Fig. 3. Reverse correlation in the orientation domain. The input image sequence runs for 15–30 min. Grating patterns of orientations drawn randomly from a set of equally spaced orientations in the interval [0°, 180°] (usually in 10° angle steps) are presented for 20 ms each (2 frames at a 100 Hz frame rate). Each orientation is presented at eight spatial phases; the response is phase averaged. For each time offset, the probability distribution for orientation is calculated by incrementing the orientation bin corresponding to the orientation that precedes each of the N spikes, and then dividing the bin counts by N. N is usually of the order of 5000 spikes. This is done for each time offset t to create an "orientation selectivity movie." In these experiments an additional pattern is added — a blank stimulus at the mean luminance of the grating patterns. This allows us to create a baseline with which the responses at different angles can be compared. Adapted with permission from Shapley et al. (2003).


spike by a pattern of orientation θ is greater than that of a blank, we view this as evidence that a pattern of orientation θ produces net excitation, while if the probability of producing a spike by a pattern of orientation θ is less than that of a blank, we take this as an indication of inhibition. Specifically, we take R(θ, t) = log[p(θ, t)/p(Blank, t)]. If the probability that angle θ evokes a spike is greater than that of a blank screen, then the sign of R is +. If the probability that angle θ evokes a spike is less than that of a blank screen, then the sign of R is −. If all angles evoke a response above the response to a blank, then R(θ) will have a positive value for all θ. A visual neuron equally well excited by stimuli of all orientation angles would produce a constant, positive R(θ).
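The reverse-correlation estimate of R(θ) at one time offset can be sketched as follows (our illustration of the procedure; the data layout, with None marking blank frames, is hypothetical):

```python
import math
from collections import Counter

def orientation_movie_frame(stimulus, spike_frames, offset):
    """Estimate R(theta) = log[p(theta)/p(blank)] at one reverse-correlation
    offset: for each spike, look back `offset` frames and tally the
    stimulus shown then (an orientation in degrees, or None for the blank)."""
    counts = Counter()
    n = 0
    for f in spike_frames:
        if f - offset >= 0:
            counts[stimulus[f - offset]] += 1
            n += 1
    p = {s: c / n for s, c in counts.items()}   # probability per stimulus
    p_blank = p[None]
    return {s: math.log(q / p_blank) for s, q in p.items() if s is not None}
```

Repeating this over a range of offsets yields the sequence of tuning curves R(θ, t), the "orientation selectivity movie" described in the text.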

The shape of the orientation tuning curve R(θ, t) changes with time, t, and this dynamic behavior has a number of important properties that are revealed in Fig. 4 for a representative V1 neuron. The black curve is a graph of R(θ, t) at the time offset t_peak when the orientation modulation depth, that is the difference between Rmax and Rmin, reaches its maximum value. The red and blue curves are graphs of R(θ, t) at the two times bracketing t_peak at which the orientation modulation depth is half the maximum value: the red curve is at the development time t_dev, the earlier of the two times, when the modulation depth first rises from zero to half maximum, and the blue curve is at the declining time t_dec, when the response has declined back down to half maximum from maximum. One striking feature of these curves is that the dynamic tuning curve at the earlier time, R(θ, t_dev), has a large positive pedestal of response, a sign of untuned or very broadly tuned excitation early in the response. This is just what one might predict from the analysis of feedforward models if the earliest measurable responses were predominantly feedforward excitation. But then, as the response evolves in time, the maximum value of R(θ, t) at the preferred orientation grows only a little, while the responses at nonpreferred orientations decline substantially. Thus, Fig. 4 demonstrates that the maximum orientation modulation depth occurs at a time when inhibition has suppressed nonpreferred responses. Because such inhibition suppresses all responses far from the preferred orientation, we infer that this is untuned inhibition. It is also reasonable to infer that tuned excitation near the preferred orientation counteracts the untuned inhibition to maintain the peak value of R(θ, t).
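Given R(θ, t) sampled on a grid of time offsets, the three landmark times defined above can be read off the modulation-depth time course. The following is a schematic reading of those definitions, not the authors' analysis code:

```python
import numpy as np

def tuning_landmarks(R, offsets):
    """Return (t_dev, t_peak, t_dec) from the orientation modulation depth.

    R       : array (n_orientations, n_offsets) of responses R(theta, t)
    offsets : time offsets (ms) corresponding to the columns of R
    """
    depth = R.max(axis=0) - R.min(axis=0)      # Rmax - Rmin at each offset
    k_peak = int(np.argmax(depth))
    half = depth[k_peak] / 2.0
    # earliest time, at or before the peak, where depth first reaches half-max
    before = np.nonzero(depth[:k_peak + 1] >= half)[0]
    k_dev = int(before[0])
    # latest time, at or after the peak, where depth is still above half-max
    after = np.nonzero(depth[k_peak:] >= half)[0]
    k_dec = int(k_peak + after[-1])
    return offsets[k_dev], offsets[k_peak], offsets[k_dec]
```

Plotting the columns of R at these three offsets reproduces the black, red, and blue curves of Fig. 4.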

While bandwidth often has been the focus of interest in previous research, it is rather the global shape of the tuning curve at all orientations that differentiates between different theoretical mechanisms. One simple way to study the global shape of the tuning curve is to compare the response at the preferred orientation with the response at orthogonal-to-preferred. Therefore, we studied R(θ_pref, t) and R(θ_ortho, t) in a population of V1 neurons, because these features of the dynamical tuning curves are related to the overall shape of the tuning curve and lead to insight about the role of inhibition in the time evolution of orientation selectivity. The average behaviors of R(θ_pref, t) and R(θ_ortho, t) over a population of 101 neurons are depicted in Fig. 5. An important feature is the positive sign of R(θ_pref, t) and R(θ_ortho, t) early in the response, indicating that, on average, V1 cells tended to respond to all orientations early in the response. This is a feature that is consistent with the idea that at early times feedforward input as in Fig. 2 controls the response. Another important feature of the data is that the time course of R(θ_ortho, t) was different from that of R(θ_pref, t). Note especially the time courses before R(θ_pref, t) reached its peak value. Eventually R(θ_ortho, t) declined to negative values, meaning that later in the response orientations far from the preferred orientation were suppressive, not excitatory. If the entire response were dominated by feedforward input, one would expect that preferred and orthogonal responses would have the same time course, simply scaled by the relative sensitivity. Therefore, the results in Fig. 5 qualitatively rule out an explanation of the time evolution of orientation selectivity in terms of feedforward inputs alone.

Fig. 4. Dynamics of orientation tuning in a representative V1 neuron. The black curve is a graph of R(θ, t) at the time offset t_peak when the orientation modulation depth reaches its maximum value. The red and blue curves are graphs of R(θ, t) at the two times before and after t_peak at which orientation modulation is half maximal: the red curve is at t_dev, the earlier of the two times, and the blue curve is for t_dec, the later time. Adapted with permission from Shapley et al. (2003).

The results about the population averages in Fig. 5 indicate untuned

suppression generated in the cortex that is rapid, but still somewhat delayed with respect to the early excitatory input. The untuned suppression contributes to the amount of orientation selectivity at the time when the neuron is most selective. These results could be explained with a theory in which feedforward excitation drives the early weakly selective response. Evidence in favor of weakly selective excitation was obtained by Xing et al. (2005), who decomposed the response dynamics into a sum of excitation and untuned and tuned suppression. The orientation tuning of the excitatory term is shown in Fig. 6, where it is compared to the predicted broad tuning curve for feedforward input, from Fig. 2. Sharpening of this broadly tuned input occurs when, with a very short delay, relatively rapid intracortical inhibition reduces the response at all orientations, acting like an untuned suppression. Data from intracellular recording in V1 indicate that a wide variety of patterns of cortico-cortical inhibition may influence orientation selectivity (Monier et al., 2003).

Fig. 5. Time course of the population-averaged (101 cells) response to preferred orientation (R_pref, red curve), to orthogonal orientation (R_orth, green curve) and to the orientation where the response was minimum (R_min, blue curve) in responses to stimuli of large size. The black dash-dot curve (aR_pref) is the rescaled R_pref. The time course of each cell's responses was shifted so that its half-max rise time of R_pref is at 41 ms. Adapted with permission from Xing et al. (2005).

Fig. 6. The time course of measured excitation compared with the prediction of feedforward LGN input current. The orientation dependence of tuned excitation is plotted in the left-hand panel, redrawn from Xing et al. (2005). Note especially the broad tuning with nonzero response at orthogonal-to-preferred. The dashed curve is for responses to a stimulus of optimal size; the solid curve is for a large stimulus of 2–4× the diameter of an optimal stimulus. The right panel reproduces the theoretical prediction of Fig. 2.

Discussion: inhibition and selectivity

The data in Figs. 4 and 5 from the orientation dynamics experiments demonstrate that early excitation in V1 is very broadly tuned for orientation, just as predicted for models of feedforward convergence like the HW model (see Fig. 2). Indeed, in simulations of the dynamics experiments with a large-scale network model of V1, McLaughlin et al. demonstrated that feedforward excitation generates dynamical orientation tuning curves with very high circular variance, meaning poor selectivity, at all time offsets between stimulus and spike (see McLaughlin et al., 2000, Fig. 2). Therefore, to us, an important question about orientation selectivity in V1 is, as we have stated it above: how does the cortex suppress the feedforward excitation far from the preferred orientation? Our experimental results show that untuned inhibition in the cortex answers the question for those V1 neurons that are highly selective for orientation. The inhibitory signals must be fairly rapid, though not quite as fast in arrival at the V1 neuron as the earliest excitatory signals. Also, inhibition appears to persist longer than excitation, as illustrated in Fig. 5. Further analysis of the dynamics of orientation selectivity, and in particular of untuned suppression, can be found in Ringach et al. (2003).

impor-tant role of inhibition in orientation selectivity has

come from experiments on intracellular recording

from neurons in cat V1 (Borg-Graham et al., 1998;

pharmacological experiments in macaque V1

cor-tex by Sato et al (1996) established that when

cortical inhibition was weakened by

pharmacolog-ical competitive inhibitors, neuronal orientation

selectivity was reduced because the response tooff-peak orientations grew stronger relative to thepeak response (cf especially Fig 8 in Sato et al.,

1996) This is further support for the idea that thefeedforward excitatory input is very broadlytuned in orientation, and that cortical inhibitionsuppresses the responses far from the preferredorientation As presented earlier, the importance

of broadly tuned cortical inhibition has beensuggested also in computational models of thecortex (Troyer et al., 1998; McLaughlin et al.,

Untuned suppression and cortical inhibition

To judge whether or not cortico-cortical inhibition is the source of untuned suppression requires more detailed considerations. When we stimulated a cell with a stimulus of optimal size (0.45° radius on average in our data), we most likely activated a compact region of V1 (Van Essen et al., 1984; Tootell et al., 1988) that corresponds to the cell's local neighborhood. The presence of untuned suppression even with a stimulus of optimal size suggests that the untuned suppression mainly comes from the center mechanism and the local circuitry within a cortical hypercolumn. This is consistent with recent anatomical findings that a V1 cell gets most of its inhibitory synaptic input from a local area in the cortex of approximate diameter 100–250 μm. Untuned suppression exists in all layers, as well as in simple and complex cell groups (Xing et al., 2005). This suggests that untuned suppression is a general mechanism in primary visual cortex (Ringach et al., 2002; Xing et al., 2005). Untuned cortico-cortical inhibition that arises locally in the cortical circuitry is the likely source of the untuned suppression we have measured (Troyer et al., 1998; Tao et al., 2004). There are other candidate mechanisms for untuned suppression in V1, for instance synaptic depression at the thalamo-cortical synapses, as proposed by Carandini et al. (2002). The fact that untuned suppression is stronger in layer 4B and

layer 5 than in the main thalamo-recipient layers (layer 4C and layer 6) suggests that the untuned suppression comes mainly from cortico-cortical effects rather than from thalamo-cortical effects (Xing et al., 2005). Furthermore, the untuned suppression we measured had short persistence (Xing et al., 2005), whereas synaptic depression has been assigned a 200–600 ms recovery time (Abbott et al., 1997). So the time course of untuned suppression is unlike what has been assumed for synaptic depression (e.g., Carandini et al., 2002). A likely possibility is that fast cortical inhibition is the source of the untuned suppression.
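The timescale argument can be made concrete with a standard single-variable depression model of the kind analyzed by Abbott et al. (1997): each presynaptic spike depletes a resource D by a multiplicative factor f, and D then recovers exponentially toward 1 with a time constant of a few hundred milliseconds. The parameter values below are illustrative, not fitted to data.

```python
import numpy as np

def depression_trace(spike_times, t_grid, f=0.6, tau_rec=0.4):
    """Depression variable D(t): D -> f*D at each spike, and between
    spikes dD/dt = (1 - D)/tau_rec (all times in seconds)."""
    D = np.ones_like(t_grid, dtype=float)
    d, last = 1.0, float(t_grid[0])
    spikes = iter(np.sort(spike_times))
    nxt = next(spikes, None)
    for i, t in enumerate(t_grid):
        # process all spikes up to time t
        while nxt is not None and nxt <= t:
            d = 1.0 - (1.0 - d) * np.exp(-(nxt - last) / tau_rec)  # recover
            d *= f                                                 # deplete
            last = nxt
            nxt = next(spikes, None)
        D[i] = 1.0 - (1.0 - d) * np.exp(-(t - last) / tau_rec)
    return D
```

With tau_rec of several hundred milliseconds, suppression inherited from such a synaptic mechanism would outlast the brief untuned suppression measured in the dynamics experiments, which is the contrast drawn in the text.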

Cortico-cortical excitation and selectivity

There is a possibility that tuned cortico-cortical excitation may also contribute to the enhancement of orientation selectivity by boosting the response only around the preferred orientation. The possibility that cortico-cortical excitation could enhance orientation selectivity was suggested previously in theories of V1 (Ben-Yishai et al., 1995; Somers et al., 1995). However, we did not observe a substantial sharpening of the excitatory input during the time evolution of orientation selectivity (Xing et al., 2005). Therefore, the orientation dynamics data suggest that the role of tuned cortical excitation is less than that of untuned inhibition in generating selectivity in V1.

Comparison with other studies

In the Introduction we reviewed previous experiments that were taken to support a completely different point of view, namely that the pattern of feedforward thalamic input is enough to determine orientation selectivity. Our results as a whole are not consistent with this viewpoint. There are in the literature two studies with dynamical stimuli that have been interpreted as supporting the feedforward theory. Gillespie et al. (2001), recording intracellularly in cat V1, reported that the bandwidth of orientation tuning curves did not change with time in their dynamic experiments. As stated above, we think that examining bandwidth misses the point: the crucial question in orientation selectivity is how the orthogonal response is suppressed by the cortex. Interestingly, Gillespie et al. (2001) also reported a change in the intracellular baseline with time that reinforces our observations on the dynamic growth of inhibition. Therefore, our interpretation of the results of Gillespie et al. (2001) is that they support the concept that inhibition plays an important role in enhancing orientation selectivity, in particular suppression of nonpreferred responses by untuned inhibition.

In a study that purports to assign a dominant role to feedforward connections in orientation, Mazer et al. (2002) recorded in V1 of awake macaques, and used a reverse correlation technique very similar to the one we introduced in 1997 (Ringach et al., 1997). However, unlike the results we have presented here, Mazer et al.'s results were interpreted to indicate that the orientation tuning curves measured dynamically did not change shape with time. Because they did not have a baseline stimulus, as we did with the blank stimuli in the stimulus sequence, Mazer et al. (2002) could not measure the presence of untuned suppression, or of broadly tuned excitation either. Therefore, their conclusions about the time course of orientation dynamics were not well supported by the data they had available.

Diversity

The diversity of orientation selectivity is striking. Others have also reported data that indicate wide diversity of orientation tuning in cat V1 when the tuning curves were analyzed with global measures of selectivity like those we have employed. There is a need for understanding the functional consequences for visual perception of the wide diversity of orientation tuning that is observed. This question was considered by Kang et al. (2004) in a paper that applied a new technique for measuring information transmission by populations of neurons. Kang et al. concluded that diversity of orientation selectivity could make the cortical population better at discriminating different orientation differences. It is also plausible that the visual cortex is not only designed for tasks like orientation discrimination, and that diversity of orientation selectivity may be a result of specializations of neurons in other stimulus dimensions besides orientation.

Orientation selectivity and cortical circuits

Our view of V1 is that it is a nonlinear dynamical system, and one of its tasks is to find local stimulus features in the neural image of the visual scene relayed to V1 from the eye through the LGN. Different sources of excitation drive the activity in V1 cells: local thalamo-cortical projections, local-circuit cortico-cortical excitation, long-distance horizontal V1 axons, and also feedback. Different sources of intracortical inhibition contribute to the selectivity of V1 neurons: local-circuit inhibition, inhibition mediated by signals from long-distance intrinsic V1 horizontal connections (Gilbert and Wiesel, 1983), and feedback from extra-striate cortex (Angelucci et al., 2002) that drives inhibitory interneurons in the local circuit. While feedforward excitation must play a role in giving V1 cells preferences for particular orientations, intracortical inhibition makes some V1 cells highly selective for their preferred orientation over all others.

Acknowledgments

We thank the US National Eye Institute for support of our research through grants EY01472 and EY8300.

References

Abbott, L.F., Varela, J.A., Sen, K and Nelson, S.B (1997)

Synaptic depression and cortical gain control Science, 275:

220–224.

Adorjan, P., Levitt, J.B., Lund, J.S and Obermayer, K (1999)

A model for the intracortical origin of orientation preference

and tuning in macaque striate cortex Vis Neurosci., 16:

303–318.

Allison, J.D., Casagrande, V.A and Bonds, A.B (1995)

Dynamic differentiation of GABAA-sensitive influences on orientation selectivity of complex cells in the cat striate cortex. Exp. Brain Res., 104: 81–88.

Anderson, J.S., Carandini, M. and Ferster, D. (2000) Orientation tuning of input conductance, excitation, and inhibition in cat primary visual cortex. J. Neurophysiol., 84: 909–926.

Angelucci, A., Levitt, J.B., Walton, E.J., Hupe, J.M., Bullier, J. and Lund, J.S. (2002) Circuits for local and global signal integration in primary visual cortex. J. Neurosci., 22: 8633–8646.

Ben-Yishai, R., Bar-Or, R.L. and Sompolinsky, H. (1995) Theory of orientation tuning in visual cortex. Proc. Natl. Acad. Sci. U.S.A., 92: 3844–3848.

Bonds, A.B. (1989) Role of inhibition in the specification of orientation selectivity of cells in the cat striate cortex. Vis. Neurosci., 2: 41–55.

Borg-Graham, L.J., Monier, C and Fregnac, Y (1998) Visual input evokes transient and strong shunting inhibition in visual cortical neurons Nature, 393: 369–373.

Callaway, E.M (1998) Local circuits in primary visual cortex

of the macaque monkey Ann Rev Neurosci., 21: 47–74 Carandini, M and Ferster, D (2000) Membrane potential and firing rate in cat primary visual cortex J Neurosci., 20: 470–484.

Carandini, M., Heeger, D.J and Senn, W (2002) A synaptic explanation of suppression in visual cortex J Neurosci., 22: 10053–10065.

Chance, F.S., Nelson, S.B and Abbott, L.F (1999) Complex cells as cortically amplified simple cells Nat Neurosci., 2: 277–282.

Chapman, B. and Stryker, M.P. (1993) Development of orientation selectivity in ferret visual cortex and effects of deprivation. J. Neurosci., 13: 5251–5262.

Chapman, B., Zahs, K.R. and Stryker, M.P. (1991) Relation of cortical cell orientation selectivity to alignment of receptive fields of the geniculocortical afferents that arborize within a single orientation column in ferret visual cortex. J. Neurosci., 11: 1347–1358.

Crook, J.M., Kisvarday, Z.F. and Eysel, U.T. (1998) Evidence for a contribution of lateral inhibition to orientation tuning and direction selectivity in cat visual cortex: reversible inactivation of functionally characterized sites combined with neuroanatomical tracing techniques. Eur. J. Neurosci., 10: 2056–2075.

De Valois, R.L., Yund, E.W. and Hepler, N. (1982) The orientation and direction selectivity of cells in macaque visual cortex. Vision Res., 22: 531–544.

Douglas, R.J., Koch, C., Mahowald, M., Martin, K.A. and Suarez, H.H. (1995) Recurrent excitation in neocortical circuits. Science, 269: 981–985.

Douglas, R.J. and Martin, K.A. (1991) A functional microcircuit for cat visual cortex. J. Physiol., 440: 735–769.

Dragoi, V., Sharma, J. and Sur, M. (2000) Adaptation-induced plasticity of orientation tuning in adult visual cortex. Neuron, 28: 287–298.

Ferster, D. (1988) Spatially opponent excitation and inhibition in simple cells of the cat visual cortex. J. Neurosci., 8: 1172–1180.


Ferster, D (1992) The synaptic inputs to simple cells of the cat

visual cortex Prog Brain Res., 90: 423–441.

Ferster, D., Chung, S. and Wheat, H. (1996) Orientation selectivity of thalamic input to simple cells of cat visual cortex. Nature, 380: 249–252.

Freund, T.F., Martin, K.A., Soltesz, I., Somogyi, P and

Whitteridge, D (1989) Arborisation pattern and postsynaptic

targets of physiologically identified thalamocortical afferents

in striate cortex of the macaque monkey J Comp Neurol.,

289: 315–336.

Gilbert, C.D. and Wiesel, T.N. (1983) Clustered intrinsic connections in cat visual cortex. J. Neurosci., 3: 1116–1133.

Gillespie, D.C., Lampl, I., Anderson, J.S. and Ferster, D. (2001) Dynamics of the orientation-tuned membrane potential response in cat primary visual cortex. Nat. Neurosci., 4: 1014–1019.

Hubel, D.H and Wiesel, T.N (1962) Receptive fields, binocular

interaction and functional architecture in the cat’s visual

cortex J Physiol., 160: 106–154.

Hubel, D.H and Wiesel, T.N (1968) Receptive fields and

functional architecture of monkey striate cortex J Physiol.,

195: 215–243.

Kang, K., Shapley, R.M. and Sompolinsky, H. (2004) Information tuning of populations of neurons in primary visual cortex. J. Neurosci., 24: 3726–3735.

Lund, J.S (1988) Anatomical organization of macaque monkey

striate visual cortex Ann Rev Neurosci., 11: 253–288.

Marino, J., Schummers, J., Lyon, D.C., Schwabe, L., Beck, O.,

Wiesing, P., Obermayer, K and Sur, M (2005) Invariant

computations in local cortical networks with balanced

excitation and inhibition Nat Neurosci., 8: 194–201.

Mazer, J.A., Vinje, W.E., McDermott, J., Schiller, P.H and

Gallant, J.L (2002) Spatial frequency and orientation tuning

dynamics in area V1 Proc Natl Acad Sci U.S.A., 99:

1645–1650.

McLaughlin, D., Shapley, R., Shelley, M and Wielaard, J (2000)

A neuronal network model of sharpening and dynamics of

orientation tuning in an input layer of macaque primary visual

cortex Proc Natl Acad Sci U.S.A., 97: 8087–8092.

Monier, C., Chavane, F., Baudot, P., Graham, L.J and

Fregnac, Y (2003) Orientation and direction selectivity of

synaptic inputs in visual cortical neurons: a diversity of

combinations produces spike tuning. Neuron, 37: 663–680.

Nelson, S., Toth, L., Sheth, B. and Sur, M. (1994) Orientation selectivity of cortical neurons during intracellular blockade of inhibition. Science, 265: 774–777.

Palmer, L.A and Davis, T.L (1981) Receptive-field structure in

cat striate cortex J Neurophysiol., 46: 260–276.

Reid, R.C and Alonso, J.M (1995) Specificity of monosynaptic

connections from thalamus to visual cortex Nature, 378:

281–284.

Ringach, D., Hawken, M and Shapley, R (1997) The dynamics

of orientation tuning in the macaque monkey striate cortex.

Nature, 387: 281–284.

Ringach, D.L., Hawken, M.J and Shapley, R (2003)

Dynamics of orientation tuning in macaque V1: the role of global

and tuned suppression J Neurophysiol., 90: 342–352.

Ringach, D.L., Shapley, R.M and Hawken, M.J (2002) Orientation selectivity in macaque v1: diversity and laminar dependence J Neurosci., 22: 5639–5651.

Rockland, K.S and Lund, J.S (1983) Intrinsic laminar lattice connections in primate visual cortex J Comp Neurol., 216: 303–318.

Roerig, B and Chen, B (2002) Relationships of local inhibitory and excitatory circuits to orientation preference maps in fer- ret visual cortex Cereb Cortex, 12: 187–198.

Sato, H., Katsuyama, N., Tamura, H., Hata, Y and Tsumoto,

T (1996) Mechanisms underlying orientation selectivity of neurons in the primary visual cortex of the macaque.

J Physiol., 494: 757–771.

Schiller, P.H., Finlay, B.L. and Volman, S.F. (1976) Quantitative studies of single-cell properties in monkey striate cortex. II. Orientation specificity and ocular dominance. J. Neurophysiol., 39: 1320–1333.

Shapley, R., Hawken, M and Ringach, D.L (2003) Dynamics

of orientation selectivity in macaque V1 cortex, and the importance of cortical inhibition Neuron, 38: 689–699 Shapley, R.M (1994) Linearity and non-linearity in cortical receptive fields In: Higher Order Processing in the Visual System, Ciba Symposium 184, pp 71–87 Wiley, Chichester.

Shelley, M., McLaughlin, D., Shapley, R and Wielaard, J (2002) States of high conductance in a large-scale model of the visual cortex J Comput Neurosci., 13: 93–109 Sillito, A.M (1975) The contribution of inhibitory mechanisms

to the receptive field properties of neurones in the striate cortex of the cat J Physiol., 250: 305–329.

Sillito, A.M., Kemp, J.A., Milson, J.A and Berardi, N (1980)

A re-evaluation of the mechanisms underlying simple cell orientation selectivity Brain Res., 194: 517–520.

Somers, D.C., Nelson, S.B and Sur, M (1995) An emergent model of orientation selectivity in cat visual cortical simple cells J Neurosci., 15: 5448–5465.

Sompolinsky, H and Shapley, R (1997) New perspectives

on the mechanisms for orientation selectivity Curr Opin Neurobiol., 7: 514–522.

Tao, L., Shelley, M., McLaughlin, D and Shapley, R (2004)

An egalitarian network model for the emergence of simple and complex cells in visual cortex Proc Natl Acad Sci U.S.A., 101: 366–371.

Tolhurst, D.J and Dean, A.F (1990) The effects of contrast on the linearity of spatial summation of simple cells in the cat’s striate cortex Exp Brain Res., 79: 582–588.

Tootell, R.B., Switkes, E., Silverman, M.S and Hamilton, S.L (1988) Functional anatomy of macaque striate cortex II Retinotopic organization J Neurosci., 8: 1531–1568 Troyer, T.W., Krukowski, A.E., Priebe, N.J and Miller, K.D (1998) Contrast-invariant orientation tuning in cat visual cortex: thalamocortical input tuning and correlation-based intracortical connectivity J Neurosci., 18: 5908–5927 Tsodyks, M., Kenet, T., Grinvald, A and Arieli, A (1999) Linking spontaneous activity of single cortical neurons and the underlying functional architecture Science, 286: 1943–1946.


Van Essen, D.C., Newsome, W.T and Maunsell, J.H (1984)

The visual field representation in striate cortex of the

macaque monkey: asymmetries, anisotropies, and individual

variability Vision Res., 24: 429–448.

Wielaard, J., Shelley, M., McLaughlin, D.M and Shapley,

R.M (2001) How simple cells are made in a nonlinear

network model of the visual cortex J Neurosci., 21: 5203–5211.

Xing, D., Shapley, R.M., Hawken, M.J and Ringach, D.L (2005) The effect of stimulus size on the dynamics of orien- tation selectivity in macaque V1 J Neurophysiol., 94: 799–812.


ISSN 0079-6123

Copyright © 2007 Elsevier B.V. All rights reserved.

CHAPTER 4

A quantitative theory of immediate visual recognition

Thomas Serre, Gabriel Kreiman, Minjoon Kouh, Charles Cadieu, Ulf Knoblich and Tomaso Poggio

Center for Biological and Computational Learning, McGovern Institute for Brain Research, Computer Science and Artificial Intelligence Laboratory, Brain and Cognitive Sciences Department, Massachusetts Institute of Technology,

43 Vassar Street #46-5155B, Cambridge, MA 02139, USA

Abstract: Human and non-human primates excel at visual recognition tasks. The primate visual system exhibits a strong degree of selectivity while at the same time being robust to changes in the input image. We have developed a quantitative theory to account for the computations performed by the feedforward path in the ventral stream of the primate visual cortex. Here we review recent predictions by a model instantiating the theory about physiological observations in higher visual areas. We also show that the model can perform recognition tasks on datasets of complex natural images at a level comparable to psychophysical measurements on human observers during rapid categorization tasks. In sum, the evidence suggests that the theory may provide a framework to explain the first 100–150 ms of visual object recognition. The model also constitutes a vivid example of how computational models can interact with experimental observations in order to advance our understanding of a complex phenomenon. We conclude by suggesting a number of open questions, predictions, and specific experiments for visual physiology and psychophysics.

Keywords: visual object recognition; hierarchical models; ventral stream; feedforward

Introduction

The primate visual system rapidly and effortlessly recognizes a large number of diverse objects in cluttered, natural scenes. In particular, it can easily categorize images or parts of them, for instance as an office scene or a face within that scene, and identify a specific object. This remarkable ability is evolutionarily important since it allows us to distinguish friend from foe and identify food targets in complex, crowded scenes. Despite the ease with which we see, visual recognition — one of the key issues addressed in computer vision — is quite difficult for computers. The problem of object recognition is even more difficult from the point of view of neuroscience, since it involves several levels of understanding, from the information processing or computational level to circuits and biophysical mechanisms. After decades of work in different brain areas ranging from the retina to higher cortical areas, the emerging picture of how cortex performs object recognition is becoming too complex for any simple qualitative "mental" model.

Corresponding author. Tel.: +1 617 253 0548; Fax: +1 617 253 2964; E-mail: serre@mit.edu

A quantitative, computational theory can provide a much-needed framework for summarizing and integrating existing data and for planning, coordinating, and interpreting new experiments. Models are powerful tools in basic research, integrating knowledge across several levels of analysis, from molecular to synaptic, cellular, systems, and to complex visual behavior. In this paper, we

describe a quantitative theory of object recognition in primate visual cortex that (1) bridges several levels of understanding, from biophysics to physiology and behavior, and (2) achieves human-level performance in rapid recognition of complex natural images. The theory is restricted to the feedforward path of the ventral stream and therefore to the first 100–150 ms of visual recognition; it does not describe top-down influences, though it should be, in principle, capable of incorporating them.

In contrast to other models that address the computations in any one given brain area (such as primary visual cortex) or attempt to explain a particular phenomenon (such as contrast adaptation or a specific visual illusion), we describe here a large-scale neurobiological model that attempts to describe the basic processes across multiple brain areas. One of the initial key ideas in this and many other models of visual processing (Fukushima, 1980) comes from the pioneering physiological studies and models of Hubel and Wiesel (1962). Following their work on striate cortex, they proposed a hierarchical model of cortical organization. They described a hierarchy of cells within the primary visual cortex: at the bottom of the hierarchy, the radially symmetric cells behave similarly to cells in the thalamus and respond best to small spots of light. Second, the simple cells, which do not respond well to spots of light, require bar-like (or edge-like) stimuli at a particular orientation, position, and phase (i.e., a white bar on a black background or a dark bar on a white background). In turn, complex cells are also selective for bars at a particular orientation, but they are insensitive to both the location and the phase of the bar within their receptive fields. At the top of the hierarchy, hypercomplex cells not only respond to bars in a position- and phase-invariant way like complex cells, but also are selective for bars of a particular length (beyond a certain length their response starts to decrease). Hubel and Wiesel suggested that such increasingly complex and invariant object representations could be progressively built by integrating convergent inputs from lower levels. For instance, position invariance at the complex cell level could be obtained by pooling over simple cells at the same preferred orientation but at slightly different positions. The main contribution of this and other models of visual processing has been to extend the hierarchy beyond V1 to extrastriate areas and to show how this can explain the tuning properties of neurons in higher areas of the ventral stream of the visual cortex.
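The pooling idea can be written down directly. The toy sketch below is our illustration, not the model's actual implementation: an oriented filter stands in for a simple-cell receptive field, and the complex cell takes a max over simple cells sharing an orientation but differing in position, so that shifting the stimulus within the pooled region leaves the output unchanged.

```python
import numpy as np

def oriented_filter(theta, size=7, sigma=2.0, wavelength=4.0):
    """Gabor-like patch standing in for a simple-cell receptive field."""
    r = np.arange(size) - size // 2
    x, y = np.meshgrid(r, r)
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    g = np.exp(-(xr**2 + yr**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / wavelength)
    return g / np.linalg.norm(g)

def simple_cell(image, filt, row, col):
    """Rectified dot product of the filter with the patch at (row, col)."""
    h = filt.shape[0] // 2
    patch = image[row - h:row + h + 1, col - h:col + h + 1]
    return max(0.0, float(np.sum(patch * filt)))

def complex_cell(image, theta, centers):
    """Max pooling over same-orientation simple cells at many positions:
    the response becomes position-invariant within the pooled region."""
    filt = oriented_filter(theta)
    return max(simple_cell(image, filt, r, c) for r, c in centers)
```

Moving an oriented bar within the pooled region changes which simple cell responds most, but leaves the complex-cell output unchanged, which is the qualitative behavior Hubel and Wiesel described and one of the two operations (the max) postulated by the theory.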

A number of biologically inspired algorithms have been described (Fukushima, 1980; LeCun et al., 1998) that are qualitatively constrained by the anatomy and physiology of the visual cortex. However, there have been very few neurobiologically plausible models that attempt to explain a high-level computational function such as object recognition by summarizing and integrating a large body of data from different levels of understanding. What should a general theory of biological object recognition be able to explain? It should be constrained to match data from anatomy and physiology at different stages of the ventral stream, as well as human performance in complex visual tasks such as object recognition. The theory we propose may well be incorrect. Yet it represents a set of claims and ideas that deserve to be either falsified or further developed and refined.

The scope of the current theory is limited to "immediate recognition," i.e., to the first 100–150 ms of the flow of information in the ventral stream. This is behaviorally equivalent to considering "rapid categorization" tasks, for which presentation times are fast and back-projections are likely to be inactive (Lamme and Roelfsema, 2000). For such tasks, presentation times do not allow sufficient time for eye movements or shifts of attention (Potter, 1975). Furthermore, EEG studies (Thorpe et al., 1996) provide evidence that the human visual system is able to solve an object detection task — determining whether a natural scene contains an animal or not — within 150 ms.

Extensive evidence shows that the responses of inferior temporal (IT) cortex neurons begin 80–100 ms after onset of the visual stimulus. Responses at the IT level are tuned to the stimulus essentially from response onset (Keysers et al., 2001). Recent data (Hung et al., 2005) show that the activity of small neuronal populations in IT (100 randomly selected cells) over very short time intervals from response onset (as small as 12.5 ms) contains surprisingly accurate and robust information supporting visual object categorization and identification tasks. Finally, rapid detection tasks, e.g., animal vs. non-animal (Thorpe et al., 1996), can be performed in the near absence of attention (Li et al., 2002). We emphasize that none of these findings rules out the use of local feedback — which is in fact used by the circuits we propose for the two main operations postulated by the theory (see section on "A quantitative framework for the ventral stream") — but it does suggest a hierarchical forward architecture as the core architecture underlying "immediate recognition."
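In this family of models the two operations are usually taken to be a Gaussian-like tuning operation, which increases selectivity by comparing the input pattern to a stored template, and a max operation, which increases invariance by pooling over afferents. A minimal sketch, with purely illustrative parameters:

```python
import numpy as np

def tuning(inputs, template, sigma=1.0):
    """Gaussian-like tuning: maximal response when the input pattern
    matches the stored template; falls off with Euclidean distance."""
    d2 = np.sum((np.asarray(inputs) - np.asarray(template)) ** 2)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def max_pool(afferents):
    """Invariance operation: respond to the strongest afferent,
    discarding which afferent (e.g., which position) produced it."""
    return max(afferents)

template = [0.9, 0.1, 0.4]
exact   = tuning([0.9, 0.1, 0.4], template)   # perfect match
partial = tuning([0.5, 0.3, 0.2], template)   # weaker, graded response
print(exact, partial, max_pool([exact, partial]))
```

Alternating these two operations along a hierarchy is what lets such models trade off selectivity (tuning) against invariance (max pooling) stage by stage.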

We start by presenting the theory in the section "A quantitative framework for the ventral stream": we describe the architecture of a model implementing the theory, its two key operations, and its learning stages. We briefly review the evidence about the agreement of the model with single-cell recordings in visual cortical areas (V1, V2, V4) and describe in more detail how the final output of the model compares to the responses in IT cortex during a decoding task that attempts to identify or categorize objects (section on "Comparison with physiological observations"). In the section "Performance on natural images," we further extend the approach to natural images and show that the model performs surprisingly well in complex recognition tasks and is competitive with some of the best computer vision systems. As an ultimate and more stringent test of the theory, we show that the model predicts the level of performance of human observers on a rapid categorization task. The final section discusses the state of the theory, its limitations, a number of open questions including critical experiments, and its extension to include top-down effects and cortical back-projections.
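A decoding task of the kind mentioned above can be sketched as training a simple linear readout on a population of unit responses. The sketch below uses synthetic "population" data and a bare-bones perceptron; the unit counts, noise level, and data are illustrative, not taken from the chapter or from Hung et al. (2005).

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "population responses": 100 units, two object categories
# whose mean population activity patterns differ.
n_units, n_trials = 100, 200
mean_a = rng.normal(0.0, 1.0, n_units)
mean_b = rng.normal(0.0, 1.0, n_units)
X = np.vstack([mean_a + 0.5 * rng.normal(size=(n_trials // 2, n_units)),
               mean_b + 0.5 * rng.normal(size=(n_trials // 2, n_units))])
y = np.array([1] * (n_trials // 2) + [-1] * (n_trials // 2))

# Perceptron readout: one weight per unit in the population.
w = np.zeros(n_units)
for _ in range(20):                 # a few passes over the data
    for xi, yi in zip(X, y):
        if yi * (w @ xi) <= 0:      # misclassified trial: nudge weights
            w += yi * xi

accuracy = np.mean(np.sign(X @ w) == y)
print(f"readout accuracy: {accuracy:.2f}")
```

The point of such an analysis is that if a linear classifier can categorize objects from brief population responses, the relevant information must already be explicit in that population code.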

A quantitative framework for the ventral stream

Organization of the ventral stream of visual cortex

Object recognition in cortex is thought to be mediated by the ventral visual pathway (Ungerleider and Haxby, 1994). Visual information from the retina is conveyed to the lateral geniculate nucleus in the thalamus and then to primary visual cortex, V1. Area V1 projects to visual areas V2 and V4, and V4 in turn projects to IT, which is the last exclusively visual area along the ventral stream. Based on physiological and lesion experiments in monkeys, IT has been postulated to play a central role in object recognition (Schwartz et al., 1983). It is also a major source of input to prefrontal cortex (PFC), which is involved in linking perception to memory and action (Miller, 2000).

Neurons along the ventral stream (Perrett and Oram, 1993) show an increase in receptive field size as well as in the complexity of their preferred stimuli (Kobatake and Tanaka, 1994). Hubel and Wiesel first described cells in V1 with small receptive fields that respond preferentially to oriented bars. At the top of the ventral stream, IT cells are tuned to complex stimuli such as faces and other objects (Gross et al., 1972; Desimone et al., 1984). A hallmark of the cells in IT is the robustness of their firing over stimulus transformations such as scale and position changes (Perrett and Oram, 1993). In addition, as several studies have shown, most neurons show specificity for a certain object view or lighting condition (Hietanen et al., 1992), while some neurons are view-invariant, in agreement with earlier predictions (Poggio and Edelman, 1990). Whereas view-invariant recognition requires visual experience of the specific novel object, significant position and scale invariance seems to be immediately present in the view-tuned neurons (Logothetis et al., 1995), without the need for visual experience of views of the specific object at different positions and scales (see also Hung et al., 2005).
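The view-based account above can be given a toy sketch (the two-dimensional "view" feature vectors below are hypothetical, not from the chapter): an object unit pools with a max over Gaussian-tuned view units, so it responds invariantly across experienced views of the object but only weakly to a view it has never stored.

```python
import numpy as np

def view_tuned_unit(x, stored_view, sigma=0.5):
    """Gaussian-tuned response to one stored 'view' of an object."""
    return np.exp(-np.sum((x - stored_view) ** 2) / (2.0 * sigma ** 2))

def object_unit(x, stored_views):
    """View-tolerant object unit: max over its view-tuned afferents.
    Tolerance extends only to views the unit has stored (experienced)."""
    return max(view_tuned_unit(x, v) for v in stored_views)

# Hypothetical feature vectors for three experienced views of one object.
views = [np.array([1.0, 0.0]), np.array([0.7, 0.7]), np.array([0.0, 1.0])]

seen  = object_unit(np.array([0.7, 0.7]), views)   # an experienced view
novel = object_unit(np.array([-1.0, 0.0]), views)  # a never-stored view
print(seen, novel)   # strong response to the seen view, weak to the novel one
```

This mirrors the distinction drawn in the text: invariance across views must be learned from experience with those views, whereas in such models position and scale tolerance can be wired in by the pooling circuitry itself.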
