The Future of the Brain: Essays by the World's Leading Neuroscientists

Edited by Gary Marcus & Jeremy Freeman
Princeton University Press, 2014. 304 pp., hardcover, $24.95. ISBN: 978-0-691-16276-8

A book called The Future of the Brain: Essays by the World's Leading Neuroscientists cannot have modest ambitions. The editors and authors of this collection of essays do not disappoint on that score, taking a broad, well-informed view of recent and potential advances in neuroscience and their implications for the field. The time is right for such self-examination, as the availability of revolutionary new techniques and approaches promises radical progress in understanding the brain while also bringing some long-standing problems and controversies into sharp relief.

Much of the book focuses on the collection and analysis of big data. For now, those data come from techniques such as optical imaging in whole larval zebrafish brains with light-sheet microscopy, multi-electrode recordings, and industrial-scale molecular biology and anatomy. In the near future, the in situ DNA sequencing technique recently developed by George Church may be used for cell-type identification and perhaps connection tracing. The authors discuss examples ranging from completed projects (the Allen Brain Atlas) to those that are well under way (the Human Brain Project) to blue-sky proposals such as 'neural dust', an idea for building tiny, implanted, long-term recording devices that would wirelessly transmit signals to a receiver outside of the brain.

The technical aspects of the projects are well explained. Perhaps more importantly, the authors evenhandedly explore the rationale behind each project, what scientists can expect to learn, and the conceptual limitations of the various approaches. These chapters leave little doubt that neuroscientists will soon possess far more information and more comprehensive data sets. It is less clear that the field is well positioned to take advantage of them.

One widely shared concern is that the coming tsunami of data may overwhelm neuroscientists' ability to make sense of it all. Conceptual progress is already lagging behind data collection, placing the field in danger of allowing the sorting and labeling of neurons and their properties to take precedence over the search for overarching principles that underlie brain function (if indeed such principles exist).

To be useful, several contributors emphasize, these new data will need to be accompanied by the development of better theories, both to guide experiments and to interpret their results. In particular, the field needs ideas that help to bridge across levels of investigation. Matteo Carandini suggests that a bridge from circuit function to behavior could be constructed from basic computations that form repeated motifs across different systems. Such theories, although they may be instantiated in computational models, should not be judged on the details or scope of a particular simulation. Instead, the important outcome is their contribution to understanding how different aspects of the brain work together: in other words, their ability to simplify the brain's complexity. As Gilles Laurent suggested in 2000, in this journal's first supplement, we will know that neuroscience has made progress when the textbooks start slimming down.

Another issue for experimental design is the accuracy of categories used in neuroscience research. As Kevin Mitchell explains, psychiatric diagnoses do not correspond well to particular genetic mutations, even though these illnesses run in families. The same mutation can be linked to schizophrenia in one person and to autism in another, suggesting that current psychiatric diagnoses are not true biological categories. By averaging neural and behavioral data across patients with the same diagnosis, researchers may be obscuring important biological distinctions. Comparing phenotypes in people with the same mutation, he suggests, may be a more productive approach. Along similar lines, correctly sorting behaviors into appropriate cognitive categories is crucial to uncovering their neural basis.

For a book with such interesting content, it is unfortunate that the editors did not put more effort into making the text inviting to nonspecialists. Although the authors take care to step back and fill in background information, the writing style, laden with complicated syntax, qualifiers, and jargon, would be completely at home in an academic journal. Terms such as 'microcolumn' and 'line attractor', among others, pass by without definition, and the brief glossary covers only a small fraction of the jargon that will puzzle outsiders. This approach is likely to limit the book's audience to graduate students searching for a promising project and fellow researchers reading outside their own area, along with the occasional highly motivated undergraduate. Gary Marcus is a talented writer who knows how to communicate with nonspecialists and hold his readers' attention. I wish that he had helped all his contributors do the same.

In the last chapter, Marcus and Christof Koch attempt to forecast the next 50 years of neuroscience, under the guise of passing along a time-traveler's report from 2064. The authors pull no punches, anticipating that excessive hype about today's big-science projects will lead to public disappointment and a decrease in funding for the field in the 2020s. They predict that neuroscience will solve the retina by 2020, then dissect the principles of the cortex and finally comprehend the visual thalamus. Further into the future, they speculate that nanobots injected into the body will cure or ameliorate many brain diseases and later be applied to enhancing normal function, despite government restrictions intended to prevent such uses. Going back to the book in a decade or two to see how these ideas have held up should be, as the editors say, part of the fun.