Why I Am/Am Not a Physicist

Neil Gershenfeld

Physics Today, pp. 50-51, July 1995

There is a vigorous battle being fought between the defenders of curiosity-driven basic research and the proponents of applied development to solve practical problems. I would like to suggest that this polarization risks satisfying neither camp, because it misses the deeper and much more interesting interrelationship between research and applications. Neither curiosity nor practice arises in a vacuum.

I think I decided to become a physicist when I read David Mermin's wonderful article in Physics Today about the naming of boojums (April 1981, page 46). If it was possible to get paid to do that sort of thing, that was what I wanted to do. I was an undergraduate at Swarthmore College, studying philosophy in order to understand the deep secrets of the universe, and was surprised to find that while philosophy was teaching me how to pose such questions carefully but not necessarily how to answer them, physics was providing deep answers to what had appeared to be mundane questions. I've since learned that physicists do a bit more than think up clever names, but I continue to find that the field provides deep answers where I least expect them, and that I like the way physicists think about problems more than the approach of any other discipline.

I think I began questioning physics when I was continually asked whether developing instrumentation for Yo-Yo Ma's cello is "real" physics. "Isn't that just computer science?" I was visiting MIT's Media Lab to collaborate with a composer there, Tod Machover, and Yo-Yo. I was struck that we're approaching a remarkable time when sensing and computing will be able to match the performance of a Stradivarius, and hence it will be possible to emulate (and then generalize) the physics from first principles. The project was intended to be an amusing exercise unrelated to "serious" physics, but something odd happened: I never left the Media Lab. Because of my training that physics happens only in physics departments, it took me some time to recognize that the building was full of physics problems that people are eager to solve, and that they are having a great time doing it.

Since then I've learned a number of lessons about the practice of research. At times it's felt as if I've been deprogrammed from the culture of my physics background. I had thought that scientific progress occurs by basic research inexorably leading to results that are then handed off to applied development, with practical applications popping out at the end of this assembly line. But consider the following sequence: heat-engine efficiency --> entropy --> thermodynamics --> kinetic theory --> statistical mechanics --> Maxwell's demon --> information theory --> coding theory --> thermodynamics of computation --> reversible computation --> reversible CMOS. What is basic and what is applied? Which is driving which? Obviously, trying to draw such a boundary is not meaningful or relevant here, and it misses the essence of how innovation happens.
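
To make the chain concrete, recall that thermodynamic entropy in its statistical form and Shannon's measure of information are the same expression up to a constant,

S = -k Σ p_i ln p_i        and        H = -Σ p_i log_2 p_i,

and that Landauer's bound, a minimum dissipation of kT ln 2 for every bit irreversibly erased, is the bridge that carries Maxwell's demon into the thermodynamics of computation and, from there, into reversible logic.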

The fundamental mistake that recurs in the basic-versus-applied debate is to confuse constraints with connections. There is no such thing as "disconnected" science, but developing the connections requires great skill and insight. This step is frequently and surprisingly overlooked. It is usually unrealistic to expect a basic researcher and a product manager to recognize when they can help each other, and it trivializes the necessary skills of both to expect them to be interchangeable. Easy applications of existing ideas have long since been found; what is needed is a much more thoughtful process for posing problems that need solving and for recognizing when the results are useful. Since coming to the Media Lab I've been surprised to find that no matter what problem I work on, someone is interested in using the results. My work hasn't become more applied, but I am closer to collaborators and sponsors who can identify when something I've done is useful for them or can describe problems that I realize can be solved by something I've done. The traditional academic support model, in which most of the communication occurs before funding rather than after, cuts off this kind of interaction.

Now, this approach is not a prescription for all problems. It is unlikely that the Higgs boson will serendipitously be found while developing sensors for musical instruments--although, in fact, the noncontact sensor we developed for measuring violin-bow position was used by Joe Paradiso in the design of the GEM muon detector alignment system. But when research is isolated from the push and pull of interesting applications, this source of pleasure and vitality is lost.

Consider the history of condensed matter physics research, which has in large part been driven (in particular, funded) by its applications in electronics. There was a time when many fundamental materials questions needed to be answered before transistors could become smaller and faster, and finding answers to those questions was of great practical importance. The remarkable progress in VLSI that we've come to take for granted is a tribute to the success of that enterprise. However, not only is the end of the VLSI scaling era in sight; clever experimenters have already arrived there. The beautiful quantum corrals that Don Eigler and Mike Crommie construct are an example (although there is of course an enormous difference between a proof-of-principle of atomic assembly and a useful production technique). We're not going to undertake construction projects on much shorter length scales. Rolf Landauer, Bob Keyes, Gordon Moore, and others have clearly predicted the arrival of device scaling at fundamental limits, with the implication that further progress will have to come from radically different approaches. But something even more serious has happened on the way to kT: Many of these limits are less important than they used to be. The ability of computers to solve people's problems has not come close to matching the remarkable improvements in device speed and density. Why is this? And are physicists relevant to answering the question?
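
For scale, kT at room temperature is about 4 x 10^-21 joules (roughly 1/40 of an electron volt), and Landauer's minimum of kT ln 2 per irreversibly erased bit is on the order of 3 x 10^-21 joules; practical logic gates still dissipate many orders of magnitude more energy than that per operation. The device physics, in other words, still has headroom; the bottleneck has moved elsewhere.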

The answer from many industrial labs is that from here on out what matters is software, and so there is little need for physicists. I admit to being biased, but I think that this is a serious mistake. First, it assumes that research is a fungible quantity that can be arbitrarily scaled up and down. Ignoring the necessity of a critical mass of people and ideas for research to thrive has destroyed many institutions' most precious asset, the research environment (which is later rebuilt with much effort when needed). The push to develop software can also result in software engineers who are unable or unwilling to recognize when important problems need physical solutions.

However, physicists are just as responsible for this state of affairs. Physics consists of two things: a mode of inquiry and a domain of application. We've come to let the latter define the discipline. There are endless physical science problems that are of great practical as well as fundamental significance, but many of them are no longer where they used to be. Physicists have been at least as guilty as everyone else in maintaining a culture that can inhibit people from working in areas that are new, interesting, and also relevant to current problems. As a result, innovation can be driven outside of the academy (for example, Tom Zimmerman was developing optical flex sensors at his company VPL to create the Data Glove, which arguably changed the world, long before he became one of my grad students and learned about the modes of a dielectric waveguide). The number of students who have come to me seeking a way to practice physics without being constrained by the label of physicist is a sign of this serious problem. Another result is that many interesting physicists from my generation are not working in physics departments. The obvious explanation (that there are so few jobs) provides an easy excuse that hides a more significant issue: many of them do not fit in a physics department. They are working on problems that involve physics, but that also have some other type of content.

For many centuries (if not millennia), physics has been driven by finding the governing equations for phenomena that are ever bigger, smaller, hotter, colder, faster, or slower. This cannot go on forever: the recent problems with funding particle accelerators are early warnings that there is a financial event horizon beyond which the insight gained from each new decade of scale we explore is harder and harder to justify. This does not mean the end of physics, but it crucially does mean that the emphasis must shift from finding new fundamental governing equations to finding what emerges from familiar governing equations. The importance of emergent behavior is well known, but I'll go further to argue that physics at the boundary of meaning and representation is going to become one of the central missions of the discipline. The study of quantum computing raises deep questions about decoherence and entanglement in quantum mechanics, but these cannot be isolated from questions about algorithm design. There are as many device physics questions in user interfaces as there are in CPUs. The statistical mechanics of learning systems is as challenging as the statistical mechanics of glasses, but in addition something is being learned by the system. There is a kind of hubris that assumes that such problems at the margins of physics are marginal, not as important or challenging as "pure" physics. It is true that successful interdisciplinary research is not possible without the disciplines, and that using physics is not the same as doing physics, but this is not a zero-sum game: the presence of some other kind of content in a problem need not displace rigor. At the very least, to guide effectively, the leadership of the physics community needs to understand where and how physicists are working. I wonder how many department chairs have never been in an industrial laboratory, a computer science department, or a financial firm.

There are two very different scenarios for the future of physics. One possibility is that physics becomes like Latin, an important canon that is necessary for advanced work in many fields and is kept alive by a small group of dedicated followers, but not expected to evolve rapidly. The other is that physics grows to encompass what physicists are doing. Just as the former risks stagnation and irrelevance, the latter risks loss of focus and discipline (literally). However, I fear that unless there is a thoughtful change in the self-organization of physics, we'll find that the former path has been chosen for us.

--------------------

Neil Gershenfeld directs the Physics and Media Group at the MIT Media Lab, studying the boundary between the content of information and its physical representation. He was a technician at AT&T Bell Labs, received a PhD in Applied Physics from Cornell University, and was a Junior Fellow of the Harvard Society of Fellows.