Concept of Knowledge Revisited
Physics is always a work in progress
Column: From Newton to Einstein to Chen, science is writing new laws all the time.
Laws of physics have their limits. Newton’s law of gravity is good enough to guide astronauts to the Moon. But it took Einstein’s more sophisticated gravity law to design the GPS system that guides you through unfamiliar streets. Now a research team has measured the long-anticipated breakdown of another great physical law – Max Planck’s law of thermal radiation.
It happens at very small separations between two objects, such as the space between the recording head and the hard disk in your computer. Knowing the rate of thermal-radiation exchange between head and disk is a key element in designing hard-disk data-recording systems, in which the recording head tends to heat up.
MIT physicist Gang Chen calls this “a very important issue for magnetic storage.”
Thus the basic physics that Professor Chen, his student Sheng Shen at MIT, and their collaborator Arvind Narayanaswamy at Columbia University are pursuing has immediate relevance in our everyday world. Already, it shows that the thermal exchange at very small separations between two surfaces can be 1,000 times greater than Planck’s law predicts. That’s something hard-disk designers need to know.
This research illustrates a fundamental truth about science: Nature knows nothing of the scientific laws we cook up. Nature does its own thing. We observe what it does. When we see regularities in natural phenomena we encode our observations in what we call “natural laws.” These reflect what we know about nature at any given time. They allow us to predict cause-and-effect relationships. But they can break down when we try to use them in situations where their underlying assumptions don’t apply.
Newton’s gravity law assumes an attractive force between two or more bodies and treats time as absolute, untouched by gravity. It handles the orbits of planets and spacecraft very well. Einstein’s gravity law assumes no gravitational force at all. Instead, it assumes that the mass of a body, such as the Sun, distorts time and space. A planet moving through space that is only slightly distorted by the mass of the Sun travels a course very close to what Newton’s theory predicts. Newton’s theory breaks down in situations where space-time distortion is important. That includes GPS navigation, which depends on very precise timing of signals between satellites and ground equipment. Time runs slightly faster at satellite altitude. Thus GPS engineers need to know when to abandon Newton and follow Einstein.
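To make the GPS point concrete, here is a minimal back-of-the-envelope sketch in Python. It is my own illustration, not part of the column: it estimates how much faster a GPS satellite clock runs than a ground clock by combining the gravitational and velocity effects Einstein’s theory predicts. The orbital radius and the rounded physical constants are assumed values chosen only for the estimate.

import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2 (rounded)
M_EARTH = 5.972e24     # mass of Earth, kg
C = 2.998e8            # speed of light, m/s
R_GROUND = 6.371e6     # mean Earth radius, m
R_SAT = 2.656e7        # assumed GPS orbital radius (about 20,200 km altitude), m

# Orbital speed for a circular orbit at R_SAT.
v_sat = math.sqrt(G * M_EARTH / R_SAT)

# General-relativistic (gravitational) rate difference: a clock higher in the
# potential runs faster, by roughly (Phi_sat - Phi_ground) / c^2.
grav_rate = ((-G * M_EARTH / R_SAT) - (-G * M_EARTH / R_GROUND)) / C**2

# Special-relativistic (velocity) rate difference: the moving satellite clock
# runs slower, by roughly -v^2 / (2 c^2).
vel_rate = -v_sat**2 / (2 * C**2)

SECONDS_PER_DAY = 86400
net_us_per_day = (grav_rate + vel_rate) * SECONDS_PER_DAY * 1e6
print(f"GPS clock gains about {net_us_per_day:.1f} microseconds per day")
# Prints roughly +38 microseconds per day. Left uncorrected, a timing error of
# that size would push position fixes off by kilometers within a single day,
# which is why the system is engineered around Einstein rather than Newton.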
When Planck published his thermal-radiation theory 109 years ago, he warned that it might break down when two physical objects are very close to each other. Physicists have been looking for that breakdown as nanotechnology has developed ever-smaller physical systems. In Nano Letters this month, Chen and his colleagues describe how they measured heat transfer across gaps as small as 10 nanometers (10 billionths of a meter). That’s comparable to the 6-to-7-nanometer gap between a recording head and a hard disk. Mr. Shen notes that engineers can now know that Planck’s law heat-flow predictions are “not a fundamental limitation” as they try to achieve higher energy densities and higher efficiencies in nanoscale devices.
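For comparison, here is a short Python sketch, again my own illustration rather than the paper’s calculation, of the far-field ceiling that Planck’s law (through its Stefan-Boltzmann integral) sets for radiative exchange between two parallel blackbody plates. The temperatures are assumed values picked only to show the order of magnitude that the near-field measurement exceeds.

SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def blackbody_flux(t_hot_k: float, t_cold_k: float) -> float:
    """Net radiative heat flux (W/m^2) between two ideal blackbody plates."""
    return SIGMA * (t_hot_k**4 - t_cold_k**4)

# Assumed temperatures, for illustration only: a warm recording head at 400 K
# facing a disk surface at 300 K.
q_far_field = blackbody_flux(400.0, 300.0)
print(f"Far-field (Planck) limit: about {q_far_field:.0f} W per square meter")

# The Nano Letters measurement found that at gaps of tens of nanometers the
# exchanged flux can exceed this far-field ceiling by orders of magnitude
# (the column cites a factor of about 1,000), because evanescent near-field
# modes tunnel across the gap, a regime Planck himself flagged as uncertain.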
Physical law is always a work in progress. When what we call law today eventually breaks down, new opportunities for scientific and engineering progress usually appear.
August 29, 2009
Op-Ed Contributor
Freud’s Adirondack Vacation
By LEON HOFFMAN
SIGMUND Freud arrived in Hoboken, N.J., 100 years ago today on his first and only visit to the United States. He came to lecture on psychoanalysis and to receive an honorary degree from Clark University, in Worcester, Mass. It was, he said, “an honorable call,” a mark of his academic success. Freud was then 53 and had been practicing for 23 years.
At the time, most doctors here and in Europe still considered mental illness to be caused by “degeneration” of the brain. They assumed that there was little to be done for it beyond physical treatments like diet, exercise, drugs, rest and massage. But a growing awareness that the mind could influence bodily functions was giving rise to debates about the nature of the unconscious mind.
G. Stanley Hall, the president of Clark and the first person to earn a doctorate in psychology from Harvard, invited American scientists to hear Freud’s ideas about the unconscious roots of mental illness. William James, the philosopher and psychologist, was among those who attended, as were other prominent academics, like Adolf Meyer, who would become perhaps the most important psychiatric educator in the first half of the 20th century, and Franz Boas, the father of American anthropology. Emma Goldman, the noted radical, who was also there, remarked, “Among the array of professors, looking stiff and important in their caps and gowns, Sigmund Freud, in ordinary attire, unassuming, almost shrinking, stood out like a giant among Pygmies.”
Speaking in German and without notes, Freud delivered five lectures covering the basic principles of psychoanalysis: hysteria and the psychoanalytic method, the idea that mental illness could arise from a person’s early experience, the importance of dreams and unconscious mental activity, infantile sexuality and the nature of transference.
When Freud learned that James would attend only one day, he chose that day to speak on the interpretation of dreams and the power of the unconscious. After the lecture, the two men spent more than an hour alone together. James would later express ambivalence about Freud’s ideas. “They can’t fail to throw light on human nature,” he wrote, “but I confess that he made on me personally the impression of a man obsessed with fixed ideas.”
While accounts of Freud’s visit have inevitably focused on this conversation with James, a less-known encounter with another prominent American scientist would become far more significant — for the two men and for the future of psychoanalysis in the United States. This person was James Jackson Putnam, a professor of neurology at Harvard and a leader of a growing movement to professionalize psychotherapy in the United States. Putnam and many other scientifically minded people were trying to counteract the growing influence of spiritual healers, who had been trying to treat the mentally ill with religious and mystical approaches. He had recently attended the first medical conference on psychotherapy, in New Haven.
After listening to Freud at Clark, Putnam invited him and the other psychoanalysts who had traveled with him to the United States — Carl Jung (who also lectured and received an honorary degree at Clark) and Sandor Ferenczi — to spend a few days at the Putnam family camp in the Adirondacks, after the group visited Niagara Falls. Freud marveled at Putnam Camp, “where we had an opportunity of being acquainted with the utter wilderness of such an American landscape.” In several days of hiking and feasting, Putnam and Freud cemented a strong bond.
It was, Freud would later write, “the most important personal relationship which arose from the meeting at Worcester.” Putnam lent his stature to Freud’s ideas, promoting the psychoanalytic approach as a way to reach those patients who had been considered incurable. “There are obvious limits to its usefulness,” Putnam wrote in 1910, “but nevertheless it strikes deeper than any other method now known to psychiatry, and reaches some of these very cases to which the terms degenerative and incurable have been applied, forcing us to recast our conception of these states.”
Talk therapy offered a message of hope, in contrast to the pessimism that came with theories of hereditary illness and degeneration.
Looking back on his trip a few years later, Freud wrote that it had been encouraging: “In Europe I felt as though I were despised; but over there I found myself received by the foremost men as an equal.”
Putnam would go on to become the first president of the American Psychoanalytic Association, in 1911. And psychoanalytic ideas would fairly rapidly become part and parcel of American culture and psychiatric education. Freudian terms like transference, the unconscious and the Oedipus complex entered the lexicon. And mental-health practitioners embarked on in-depth studies of their patients’ idiosyncratic life stories from childhood on. Thanks in large measure to Putnam’s work, psychoanalysis would become — and remain for 100 years — an ingrained and respected approach to treating mental illness of all kinds.
Leon Hoffman, a psychiatrist, is a co-director of the Pacella Parent Child Center of the New York Psychoanalytic Society and Institute.
http://www.nytimes.com/2009/08/29/opini ... nted=print
A Reflection on Art and Architecture
Professor Ricardo L Castro
This is an edited version of an article that was originally published in The Ismaili Canada, Issue 1, 2008, pp. 6-7.
http://iis.ac.uk/view_article.asp?ContentID=110417
“A work of art is an expressive and communicative medium of feelings and thought.” Pierre Francastel, 1950.[1]
What is the role of art and architecture in society? How can one learn more about history through art and architecture?
These apparently simple questions encapsulate a series of complex responses that could easily fill several volumes. Since antiquity, they have been catalysts for the development of philosophical, aesthetic, societal, and architectural theories. A helpful strategy here is to touch upon the work of 20th-century thinkers who have addressed some of the issues these questions raise, and then to examine in more detail the examples of the Alhambra and the Generalife.
In his seminal work, Peinture et Société, Pierre Francastel examines early Renaissance and modern works of art and demonstrates that artists act as transmitters from one state of civilisation to another. Francastel’s pioneering work is among those which have brought attention to the societal role and importance of works of art, as well as to their relevance as tools for the writing of history.
I believe that architecture, which is primarily associated with the basic notions of shelter and functionality, possesses other characteristics and operates at many other levels; unfortunately these are often relegated to oblivion. Examples include architecture’s experiential impact on all our senses and its symbolic possibilities. These categories account for the production of extraordinary works throughout the ages. It is through the symbolic and sensorial criteria that we can discuss architecture's expressive quality, hence its relevance as art.
For instance, the Alhambra and the Generalife, which are extraordinary palatial complexes composed of buildings and gardens, are undoubtedly among the extant architectural wonders of the world from the medieval period. The Alhambra, which signifies “The Red” in Arabic (al-hamra), took most of its present form in the early 14th century during the rule of Ibn Nasrid, the founder of the Nasrid Dynasty. The castle eventually became a strategically fortified and sumptuous city palace on a hill overlooking the city of Granada. Outside the Alhambra is the Generalife, which is derived from the Arabic words Jennat al-Arif - meaning, interestingly, ‘garden of the architect’. With elegantly laid out gardens, the Generalife is another palace from the 14th century, which functioned as the summer retreat for the Nasrid court.
I will focus here on the uses of water that underscore the design of many of the indoor and outdoor spaces in the Alhambra and the Generalife, and by extension, of Islamic architecture. I will not dwell on the familiar ecological and functional aspects of water, such as its essential role in sustaining life and hence its provision through springs and cisterns, its use as a means of transport (waterways), the infrastructure built to deliver it (aqueducts), or other pragmatic uses. Rather, I will refer to the experiential and symbolic qualities created by such properties as reflection, transparency, sound, taste and tactility. The anonymous architects and artists of the Nasrid complexes used these qualities in a masterly way to underscore the expressive and symbolic spatial content of gardens and buildings, i.e., to transform them into true works of art.
Consider the famous Patio de los Arrayanes (Court of the Myrtles) in the Alhambra. This rectangular court, which is simultaneously indoor and outdoor, contains a water basin, which adds coolness to the space during the warm months. Functionally, it limits and defines circulation to the edges of the court. At the same time, the basin acts as a gigantic mirror which reflects inverted images of the facades at both ends of the court. I would suggest that the intention of creating a deliberate reflection of the buildings refers to the symbolic and pervasive notion of reversibility found in Islam. Think of the relationships between paradise and earth (Gardens), the idea of the cosmological tree of life growing upside down in paradise, and the concept of praying to and from the Ka‘ba in Mecca.
At one end of the basin, a discreet and beautifully designed fountain feeds it with a continuous yet subtle flow of water, which does not disturb its mirroring quality. Introducing the soothing sound of bubbling water, the fountain also becomes a contemplation anchor, which visually lures visitors. Vegetation, a tacit symbol of the life-giving power of water, appears as well in the court through the planting of geometrically pruned myrtle hedges, which gives the place its name and visual scale. In some of the walls, the unusual ornamental patterns of the alicatados (tiles) evoke the idea of order, flow and movement, and add colour as well to the court. The skilful combinations of all these elements ultimately produce a powerful expressive space, which will continue to inspire artists, architects and poets.
The Alhambra and the Generalife are in the company of other single works such as the Parthenon in Athens and the Pantheon in Rome, other complexes such as the Cordoba mosque and many Eastern and Western gardens, or entire cities such as Toledo, Venice and Florence. All these works - art objects - possess unusual expressive qualities, which clearly corroborate Francastel’s aphorism at the beginning of this essay.
What I have stated above may be reinforced by introducing some of the concepts that Material Culture addresses. A novel interdisciplinary domain, Material Culture may be considered yet another possible approach to the issues raised at the beginning of this essay. Material Culture constitutes an intersection of many disciplines, such as history, art history, anthropology, folklore, and the history of science and technology. As such, it is concerned with the psychological role, the meaning, and the experiential impact that physical objects have on humans in a particular culture. It also refers to the range of manufactured objects or artefacts that are typical of a culture and form an essential part of its identity.
Professor Daniel Waugh, a faculty member at the University of Washington and a well-known scholar in the field, illustrates the point: “Material objects include items with physical substance. They are primarily shaped or produced by human action, though objects created by nature can also play an important role in the history of human societies. For example, a coin is the product of human action. An animal horn is not, but it takes on meaning for humans if used as a drinking cup or a decorative or ritual object ... The physical existence of a religious image in a dark cave as a 'work of art' provides evidence of the piety of an artist or a sponsor.”[2]
Among the artefacts which Material Culture studies, architecture and art objects - whether paintings, sculptures, calligraphy, musical scores and many other similar art works - play an important role in the making and the understanding of culture. They constitute a fundamental repository that informs the writing of history and ultimately supports the idea that without their works of art, without the possibility of their expressive channels, human societies cease to exist as such.
Ricardo L. Castro is an Associate Professor of Architecture at McGill University, Montreal, Quebec
--------------------------------------------------------------------------------
[1] Pierre Francastel, Peinture et Société (Lyon: Audin ed.), p. ii (my translation). Unfortunately this important work by the renowned French sociologist, critic, and historian awaits translation into English.
[2] Daniel Waugh, Material Culture / Objects, Center for History and New Media, George Mason University, URL: http://chnm.gmu.edu/worldhistorysources ... smain.html (accessed 2 December 2007).
September 20, 2009
The Holy Grail of the Unconscious
By SARA CORBETT
This is a story about a nearly 100-year-old book, bound in red leather, which has spent the last quarter century secreted away in a bank vault in Switzerland. The book is big and heavy and its spine is etched with gold letters that say “Liber Novus,” which is Latin for “New Book.” Its pages are made from thick cream-colored parchment and filled with paintings of otherworldly creatures and handwritten dialogues with gods and devils. If you didn’t know the book’s vintage, you might confuse it for a lost medieval tome.
And yet between the book’s heavy covers, a very modern story unfolds. It goes as follows: Man skids into midlife and loses his soul. Man goes looking for soul. After a lot of instructive hardship and adventure — taking place entirely in his head — he finds it again.
Some people feel that nobody should read the book, and some feel that everybody should read it. The truth is, nobody really knows. Most of what has been said about the book — what it is, what it means — is the product of guesswork, because from the time it was begun in 1914 in a smallish town in Switzerland, it seems that only about two dozen people have managed to read or even have much of a look at it.
Of those who did see it, at least one person, an educated Englishwoman who was allowed to read some of the book in the 1920s, thought it held infinite wisdom — “There are people in my country who would read it from cover to cover without stopping to breathe scarcely,” she wrote — while another, a well-known literary type who glimpsed it shortly after, deemed it both fascinating and worrisome, concluding that it was the work of a psychotic.
So for the better part of the past century, despite the fact that it is thought to be the pivotal work of one of the era’s great thinkers, the book has existed mostly just as a rumor, cosseted behind the skeins of its own legend — revered and puzzled over only from a great distance.
Which is why one rainy November night in 2007, I boarded a flight in Boston and rode the clouds until I woke up in Zurich, pulling up to the airport gate at about the same hour that the main branch of the United Bank of Switzerland, located on the city’s swanky Bahnhofstrasse, across from Tommy Hilfiger and close to Cartier, was opening its doors for the day. A change was under way: the book, which had spent the past 23 years locked inside a safe deposit box in one of the bank’s underground vaults, was just then being wrapped in black cloth and loaded into a discreet-looking padded suitcase on wheels. It was then rolled past the guards, out into the sunlight and clear, cold air, where it was loaded into a waiting car and whisked away.
THIS COULD SOUND, I realize, like the start of a spy novel or a Hollywood bank caper, but it is rather a story about genius and madness, as well as possession and obsession, with one object — this old, unusual book — skating among those things. Also, there are a lot of Jungians involved, a species of thinkers who subscribe to the theories of Carl Jung, the Swiss psychiatrist and author of the big red leather book. And Jungians, almost by definition, tend to get enthused anytime something previously hidden reveals itself, when whatever’s been underground finally makes it to the surface.
Carl Jung founded the field of analytical psychology and, along with Sigmund Freud, was responsible for popularizing the idea that a person’s interior life merited not just attention but dedicated exploration — a notion that has since propelled tens of millions of people into psychotherapy. Freud, who started as Jung’s mentor and later became his rival, generally viewed the unconscious mind as a warehouse for repressed desires, which could then be codified and pathologized and treated. Jung, over time, came to see the psyche as an inherently more spiritual and fluid place, an ocean that could be fished for enlightenment and healing.
Whether or not he would have wanted it this way, Jung — who regarded himself as a scientist — is today remembered more as a countercultural icon, a proponent of spirituality outside religion and the ultimate champion of dreamers and seekers everywhere, which has earned him both posthumous respect and posthumous ridicule. Jung’s ideas laid the foundation for the widely used Myers-Briggs personality test and influenced the creation of Alcoholics Anonymous. His central tenets — the existence of a collective unconscious and the power of archetypes — have seeped into the larger domain of New Age thinking while remaining more at the fringes of mainstream psychology.
A big man with wire-rimmed glasses, a booming laugh and a penchant for the experimental, Jung was interested in the psychological aspects of séances, of astrology, of witchcraft. He could be jocular and also impatient. He was a dynamic speaker, an empathic listener. He had a famously magnetic appeal with women. Working at Zurich’s Burghölzli psychiatric hospital, Jung listened intently to the ravings of schizophrenics, believing they held clues to both personal and universal truths. At home, in his spare time, he pored over Dante, Goethe, Swedenborg and Nietzsche. He began to study mythology and world cultures, applying what he learned to the live feed from the unconscious — claiming that dreams offered a rich and symbolic narrative coming from the depths of the psyche. Somewhere along the way, he started to view the human soul — not just the mind and the body — as requiring specific care and development, an idea that pushed him into a province long occupied by poets and priests but not so much by medical doctors and empirical scientists.
More....
http://www.nytimes.com/2009/09/20/magaz ... &th&emc=th
October 13, 2009
Op-Ed Columnist
The Young and the Neuro
By DAVID BROOKS
When you go to an academic conference you expect to see some geeks, gravitas and graying professors giving lectures. But the people who showed up at the Social and Affective Neuroscience Society’s conference in Lower Manhattan last weekend were so damned young, hip and attractive. The leading figures at this conference were in their 30s, and most of the work was done by people in their 20s. When you spoke with them, you felt yourself near the beginning of something long and important.
In 2001, an Internet search of the phrase “social cognitive neuroscience” yielded 53 hits. Now you get more than a million on Google. Young scholars have been drawn to this field from psychology, economics, political science and beyond in the hopes that by looking into the brain they can help settle some old arguments about how people interact.
These people study the way biology, in the form of genes, influences behavior. But they’re also trying to understand the complementary process of how social behavior changes biology. Matthew Lieberman of U.C.L.A. is doing research into what happens in the brain when people are persuaded by an argument.
Keely Muscatell, one of his doctoral students, and others presented a study in which they showed people from various social strata some images of menacing faces. People whose parents had low social status exhibited more activation in the amygdala (the busy little part of the brain involved in fear and emotion) than people from high-status families.
Reem Yahya and a team from the University of Haifa studied Arabs and Jews while showing them images of hands and feet in painful situations. The two cultures perceived pain differently. The Arabs perceived higher levels of pain over all while the Jews were more sensitive to pain suffered by members of a group other than their own.
Mina Cikara of Princeton and others scanned the brains of Yankee and Red Sox fans as they watched baseball highlights. Neither reacted much to an Orioles-Blue Jays game, but when they saw their own team doing well, brain regions called the ventral striatum and nucleus accumbens were activated. This is a look at how tribal dominance struggles get processed inside.
Jonathan B. Freeman of Tufts and others peered into the reward centers of the brain such as the caudate nucleus. They found that among Americans, that region was likely to be activated by dominant behavior, whereas among Japanese, it was more likely to be activated by subordinate behavior — the same region rewarding different patterns of behavior depending on culture.
All of these studies are baby steps in a long conversation, and young academics are properly circumspect about drawing broad conclusions. But eventually their work could give us a clearer picture of what we mean by fuzzy words like ‘culture.’ It could also fill a hole in our understanding of ourselves. Economists, political scientists and policy makers treat humans as ultrarational creatures because they can’t define and systematize the emotions. This work is getting us closer to being able to do so.
The work demonstrates that we are awash in social signals, and any social science that treats individuals as discrete decision-making creatures is nonsense. But it also suggests that even though most of our reactions are fast and automatic, we still have free will and control.
Many of the studies presented here concerned the way we divide people by in-group and out-group categories in as little as 170 milliseconds. The anterior cingulate cortices in American and Chinese brains activate when people see members of their own group endure pain, but they do so at much lower levels when they see members of another group enduring it. These effects may form the basis of prejudice.
But a study by Saaid A. Mendoza and David M. Amodio of New York University showed that if you give people a strategy, such as reminding them to be racially fair, it is possible to counteract those perceptions. People feel disgust toward dehumanized groups, but a study by Claire Hoogendoorn, Elizabeth Phelps and others at N.Y.U. suggests it is possible to lower disgust and the accompanying insula activity through cognitive behavioral therapy.
In other words, consciousness is too slow to see what happens inside, but it is possible to change the lenses through which we unconsciously construe the world.
Since I’m not an academic, I’m free to speculate that this work will someday give us new categories, which will replace misleading categories like ‘emotion’ and ‘reason.’ I suspect that the work will take us beyond the obsession with I.Q. and other conscious capacities and give us a firmer understanding of motivation, equilibrium, sensitivity and other unconscious capacities.
The hard sciences are interpenetrating the social sciences. This isn’t dehumanizing. It shines attention on the things poets have traditionally cared about: the power of human attachments. It may even help policy wonks someday see people as they really are.
http://www.nytimes.com/2009/10/13/opini ... nted=print
******
October 13, 2009
Essay
The Collider, the Particle and a Theory About Fate
By DENNIS OVERBYE
More than a year after an explosion of sparks, soot and frigid helium shut it down, the world’s biggest and most expensive physics experiment, known as the Large Hadron Collider, is poised to start up again. In December, if all goes well, protons will start smashing together in an underground racetrack outside Geneva in a search for forces and particles that reigned during the first trillionth of a second of the Big Bang.
Then it will be time to test one of the most bizarre and revolutionary theories in science. I’m not talking about extra dimensions of space-time, dark matter or even black holes that eat the Earth. No, I’m talking about the notion that the troubled collider is being sabotaged by its own future. A pair of otherwise distinguished physicists have suggested that the hypothesized Higgs boson, which physicists hope to produce with the collider, might be so abhorrent to nature that its creation would ripple backward through time and stop the collider before it could make one, like a time traveler who goes back in time to kill his grandfather.
Holger Bech Nielsen, of the Niels Bohr Institute in Copenhagen, and Masao Ninomiya of the Yukawa Institute for Theoretical Physics in Kyoto, Japan, put this idea forward in a series of papers with titles like “Test of Effect From Future in Large Hadron Collider: a Proposal” and “Search for Future Influence From LHC,” posted on the physics Web site arXiv.org in the last year and a half.
According to the so-called Standard Model that rules almost all physics, the Higgs is responsible for imbuing other elementary particles with mass.
“It must be our prediction that all Higgs producing machines shall have bad luck,” Dr. Nielsen said in an e-mail message. In an unpublished essay, Dr. Nielsen said of the theory, “Well, one could even almost say that we have a model for God.” It is their guess, he went on, “that He rather hates Higgs particles, and attempts to avoid them.”
This malign influence from the future, they argue, could explain why the United States Superconducting Supercollider, also designed to find the Higgs, was canceled in 1993 after billions of dollars had already been spent, an event so unlikely that Dr. Nielsen calls it an “anti-miracle.”
You might think that the appearance of this theory is further proof that people have had ample time — perhaps too much time — to think about what will come out of the collider, which has been 15 years and $9 billion in the making.
The collider was built by CERN, the European Organization for Nuclear Research, to accelerate protons to energies of seven trillion electron volts around a 17-mile underground racetrack and then crash them together into primordial fireballs.
For the record, as of the middle of September, CERN engineers hope to begin to collide protons at the so-called injection energy of 450 billion electron volts in December and then ramp up the energy until the protons have 3.5 trillion electron volts of energy apiece and then, after a short Christmas break, real physics can begin.
Maybe.
Dr. Nielsen and Dr. Ninomiya started laying out their case for doom in the spring of 2008. It was later that fall, of course, after the CERN collider was turned on, that a connection between two magnets vaporized, shutting down the collider for more than a year.
Dr. Nielsen called that “a funny thing that could make us to believe in the theory of ours.”
He agreed that skepticism would be in order. After all, most big science projects, including the Hubble Space Telescope, have gone through a period of seeming jinxed. At CERN, the beat goes on: Last weekend the French police arrested a particle physicist who works on one of the collider experiments, on suspicion of conspiracy with a North African wing of Al Qaeda.
Dr. Nielsen and Dr. Ninomiya have proposed a kind of test: that CERN engage in a game of chance, a “card-drawing” exercise using perhaps a random-number generator, in order to discern bad luck from the future. If the outcome was sufficiently unlikely, say drawing the one spade in a deck with 100 million hearts, the machine would either not run at all, or only at low energies unlikely to find the Higgs.
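In code, the proposed test amounts to something like the following toy sketch in Python. This is my own schematic reading of the essay’s description, not the physicists’ actual protocol; the deck size and the two-way decision rule are illustrative assumptions.

import random

DECK_SIZE = 100_000_000  # one losing card, the lone "spade", among 100 million "hearts"

def draw_card(rng: random.Random) -> bool:
    """Return True if the single unlucky card is drawn."""
    return rng.randrange(DECK_SIZE) == 0

rng = random.Random()  # the proposal requires a genuinely random draw
if draw_card(rng):
    print("Unlucky draw: restrict the collider to low, Higgs-safe energies.")
else:
    print("Ordinary draw: proceed with the full physics program.")
# The reasoning behind the game: if backward-in-time influence really does
# suppress Higgs production, it could act through this one wildly improbable
# draw rather than through vaporized magnets or canceled budgets.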
Sure, it’s crazy, and CERN should not and is not about to mortgage its investment to a coin toss. The theory was greeted on some blogs with comparisons to Harry Potter. But craziness has a fine history in a physics that talks routinely about cats being dead and alive at the same time and about anti-gravity puffing out the universe.
As Niels Bohr, Dr. Nielsen’s late countryman and one of the founders of quantum theory, once told a colleague: “We are all agreed that your theory is crazy. The question that divides us is whether it is crazy enough to have a chance of being correct.”
Dr. Nielsen is well-qualified in this tradition. He is known in physics as one of the founders of string theory and a deep and original thinker, “one of those extremely smart people that is willing to chase crazy ideas pretty far,” in the words of Sean Carroll, a Caltech physicist and author of a coming book about time, “From Eternity to Here.”
Another of Dr. Nielsen’s projects is an effort to show how the universe as we know it, with all its apparent regularity, could arise from pure randomness, a subject he calls “random dynamics.”
Dr. Nielsen admits that he and Dr. Ninomiya’s new theory smacks of time travel, a longtime interest, which has become a respectable research subject in recent years. While it is a paradox to go back in time and kill your grandfather, physicists agree there is no paradox if you go back in time and save him from being hit by a bus. In the case of the Higgs and the collider, it is as if something is going back in time to keep the universe from being hit by a bus. Although just why the Higgs would be a catastrophe is not clear. If we knew, presumably, we wouldn’t be trying to make one.
We always assume that the past influences the future. But that is not necessarily true in the physics of Newton or Einstein. According to physicists, all you really need to know, mathematically, to describe what happens to an apple or the 100 billion galaxies of the universe over all time are the laws that describe how things change and a statement of where things start. The latter are the so-called boundary conditions — the apple five feet over your head, or the Big Bang.
The equations work just as well, Dr. Nielsen and others point out, if the boundary conditions specify a condition in the future (the apple on your head) instead of in the past, as long as the fundamental laws of physics are reversible, which most physicists believe they are.
“For those of us who believe in physics,” Einstein once wrote to a friend, “this separation between past, present and future is only an illusion.”
In Kurt Vonnegut’s novel “Sirens of Titan,” all of human history turns out to be reduced to delivering a piece of metal roughly the size and shape of a beer-can opener to an alien marooned on Saturn’s moon so he can repair his spaceship and go home.
Whether the collider has such a noble or humble fate — or any fate at all — remains to be seen. As a Red Sox fan my entire adult life, I feel I know something about jinxes.
http://www.nytimes.com/2009/10/13/scien ... nted=print
Op-Ed Columnist
The Young and the Neuro
By DAVID BROOKS
When you go to an academic conference you expect to see some geeks, gravitas and graying professors giving lectures. But the people who showed up at the Social and Affective Neuroscience Society’s conference in Lower Manhattan last weekend were so damned young, hip and attractive. The leading figures at this conference were in their 30s, and most of the work was done by people in their 20s. When you spoke with them, you felt yourself near the beginning of something long and important.
In 2001, an Internet search of the phrase “social cognitive neuroscience” yielded 53 hits. Now you get more than a million on Google. Young scholars have been drawn to this field from psychology, economics, political science and beyond in the hopes that by looking into the brain they can help settle some old arguments about how people interact.
These people study the way biology, in the form of genes, influences behavior. But they’re also trying to understand the complementary process of how social behavior changes biology. Matthew Lieberman of U.C.L.A. is doing research into what happens in the brain when people are persuaded by an argument.
Keely Muscatell, one of his doctoral students, and others presented a study in which they showed people from various social strata some images of menacing faces. People whose parents had low social status exhibited more activation in the amygdala (the busy little part of the brain involved in fear and emotion) than people from high-status families.
Reem Yahya and a team from the University of Haifa studied Arabs and Jews while showing them images of hands and feet in painful situations. The two cultures perceived pain differently. The Arabs perceived higher levels of pain over all while the Jews were more sensitive to pain suffered by members of a group other than their own.
Mina Cikara of Princeton and others scanned the brains of Yankee and Red Sox fans as they watched baseball highlights. Neither reacted much to an Orioles-Blue Jays game, but when they saw their own team doing well, brain regions called the ventral striatum and nucleus accumbens were activated. This is a look at how tribal dominance struggles get processed inside.
Jonathan B. Freeman of Tufts and others peered into the reward centers of the brain such as the caudate nucleus. They found that among Americans, that region was likely to be activated by dominant behavior, whereas among Japanese, it was more likely to be activated by subordinate behavior — the same region rewarding different patterns of behavior depending on culture.
All of these studies are baby steps in a long conversation, and young academics are properly circumspect about drawing broad conclusions. But eventually their work could give us a clearer picture of what we mean by fuzzy words like ‘culture.’ It could also fill a hole in our understanding of ourselves. Economists, political scientists and policy makers treat humans as ultrarational creatures because they can’t define and systematize the emotions. This work is getting us closer to being able to do so.
The work demonstrates that we are awash in social signals, and any social science that treats individuals as discrete decision-making creatures is nonsense. But it also suggests that even though most of our reactions are fast and automatic, we still have free will and control.
Many of the studies presented here concerned the way we divide people by in-group and out-group categories in as little as 170 milliseconds. The anterior cingulate cortices in American and Chinese brains activate when people see members of their own group endure pain, but they do so at much lower levels when they see members of another group enduring it. These effects may form the basis of prejudice.
But a study by Saaid A. Mendoza and David M. Amodio of New York University showed that if you give people a strategy, such as reminding them to be racially fair, it is possible to counteract those perceptions. People feel disgust toward dehumanized groups, but a study by Claire Hoogendoorn, Elizabeth Phelps and others at N.Y.U. suggests it is possible to lower disgust and the accompanying insula activity through cognitive behavioral therapy.
In other words, consciousness is too slow to see what happens inside, but it is possible to change the lenses through which we unconsciously construe the world.
Since I’m not an academic, I’m free to speculate that this work will someday give us new categories, which will replace misleading categories like ‘emotion’ and ‘reason.’ I suspect that the work will take us beyond the obsession with I.Q. and other conscious capacities and give us a firmer understanding of motivation, equilibrium, sensitivity and other unconscious capacities.
The hard sciences are interpenetrating the social sciences. This isn’t dehumanizing. It shines attention on the things poets have traditionally cared about: the power of human attachments. It may even help policy wonks someday see people as they really are.
http://www.nytimes.com/2009/10/13/opini ... nted=print
******
October 13, 2009
Essay
The Collider, the Particle and a Theory About Fate
By DENNIS OVERBYE
More than a year after an explosion of sparks, soot and frigid helium shut it down, the world’s biggest and most expensive physics experiment, known as the Large Hadron Collider, is poised to start up again. In December, if all goes well, protons will start smashing together in an underground racetrack outside Geneva in a search for forces and particles that reigned during the first trillionth of a second of the Big Bang.
Then it will be time to test one of the most bizarre and revolutionary theories in science. I’m not talking about extra dimensions of space-time, dark matter or even black holes that eat the Earth. No, I’m talking about the notion that the troubled collider is being sabotaged by its own future. A pair of otherwise distinguished physicists have suggested that the hypothesized Higgs boson, which physicists hope to produce with the collider, might be so abhorrent to nature that its creation would ripple backward through time and stop the collider before it could make one, like a time traveler who goes back in time to kill his grandfather.
Holger Bech Nielsen, of the Niels Bohr Institute in Copenhagen, and Masao Ninomiya of the Yukawa Institute for Theoretical Physics in Kyoto, Japan, put this idea forward in a series of papers with titles like “Test of Effect From Future in Large Hadron Collider: a Proposal” and “Search for Future Influence From LHC,” posted on the physics Web site arXiv.org in the last year and a half.
According to the so-called Standard Model that rules almost all physics, the Higgs is responsible for imbuing other elementary particles with mass.
“It must be our prediction that all Higgs producing machines shall have bad luck,” Dr. Nielsen said in an e-mail message. In an unpublished essay, Dr. Nielsen said of the theory, “Well, one could even almost say that we have a model for God.” It is their guess, he went on, “that He rather hates Higgs particles, and attempts to avoid them.”
This malign influence from the future, they argue, could explain why the United States Superconducting Supercollider, also designed to find the Higgs, was canceled in 1993 after billions of dollars had already been spent, an event so unlikely that Dr. Nielsen calls it an “anti-miracle.”
You might think that the appearance of this theory is further proof that people have had ample time — perhaps too much time — to think about what will come out of the collider, which has been 15 years and $9 billion in the making.
The collider was built by CERN, the European Organization for Nuclear Research, to accelerate protons to energies of seven trillion electron volts around a 17-mile underground racetrack and then crash them together into primordial fireballs.
For the record, as of the middle of September, CERN engineers hope to begin colliding protons at the so-called injection energy of 450 billion electron volts in December, then ramp up the energy until the protons carry 3.5 trillion electron volts apiece; after a short Christmas break, real physics can begin.
Maybe.
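For a rough sense of what the energies quoted above mean for a single proton, here is a back-of-the-envelope sketch in Python. It is my own illustration, not a CERN calculation, and it assumes only the standard proton rest energy of about 0.938 billion electron volts.

import math

# Rough numbers for what the quoted beam energies mean for one proton.
# Assumes the standard proton rest energy of about 0.938 GeV; the labels
# below are illustrative, matching the figures mentioned in the essay.
PROTON_REST_ENERGY_GEV = 0.938

def lorentz_factor(energy_gev):
    """Ratio of the proton's total energy to its rest energy."""
    return energy_gev / PROTON_REST_ENERGY_GEV

def speed_fraction_of_c(energy_gev):
    """Proton speed as a fraction of the speed of light."""
    gamma = lorentz_factor(energy_gev)
    return math.sqrt(1.0 - 1.0 / gamma ** 2)

for label, energy_gev in [("injection energy, 450 GeV", 450.0),
                          ("first physics target, 3.5 TeV", 3500.0),
                          ("design energy, 7 TeV", 7000.0)]:
    print(f"{label}: gamma ~ {lorentz_factor(energy_gev):.0f}, "
          f"v/c ~ {speed_fraction_of_c(energy_gev):.9f}")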
Dr. Nielsen and Dr. Ninomiya started laying out their case for doom in the spring of 2008. It was later that fall, of course, after the CERN collider was turned on, that a connection between two magnets vaporized, shutting down the collider for more than a year.
Dr. Nielsen called that “a funny thing that could make us to believe in the theory of ours.”
He agreed that skepticism would be in order. After all, most big science projects, including the Hubble Space Telescope, have gone through a period of seeming to be jinxed. At CERN, the beat goes on: Last weekend the French police arrested a particle physicist who works on one of the collider experiments, on suspicion of conspiracy with a North African wing of Al Qaeda.
Dr. Nielsen and Dr. Ninomiya have proposed a kind of test: that CERN engage in a game of chance, a “card-drawing” exercise using perhaps a random-number generator, in order to discern bad luck from the future. If the outcome was sufficiently unlikely, say drawing the one spade in a deck with 100 million hearts, the machine would either not run at all, or only at low energies unlikely to find the Higgs.
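As a concrete, if tongue-in-cheek, illustration of the kind of test being proposed, here is a minimal Python sketch. It assumes odds of roughly one in 100 million, to match the spade-among-hearts example; the function name and threshold are my own, not taken from the Nielsen-Ninomiya papers.

import random

# A rough sketch of the proposed "card-drawing" test, with odds of about
# 1 in 100 million (one spade among 100 million hearts). The name and
# threshold are illustrative assumptions, not from the published papers.
ODDS = 100_000_000

def draw_card(odds=ODDS):
    """Return True if the one 'spade' is drawn from a deck of `odds` cards."""
    return random.randrange(odds) == 0

if __name__ == "__main__":
    if draw_card():
        print("Improbable draw: run only at low energy, or not at all.")
    else:
        print("Ordinary draw: proceed with the high-energy run as planned.")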
Sure, it’s crazy, and CERN should not and is not about to mortgage its investment to a coin toss. The theory was greeted on some blogs with comparisons to Harry Potter. But craziness has a fine history in a physics that talks routinely about cats being dead and alive at the same time and about anti-gravity puffing out the universe.
As Niels Bohr, Dr. Nielsen’s late countryman and one of the founders of quantum theory, once told a colleague: “We are all agreed that your theory is crazy. The question that divides us is whether it is crazy enough to have a chance of being correct.”
Dr. Nielsen is well-qualified in this tradition. He is known in physics as one of the founders of string theory and a deep and original thinker, “one of those extremely smart people that is willing to chase crazy ideas pretty far,” in the words of Sean Carroll, a Caltech physicist and author of a coming book about time, “From Eternity to Here.”
Another of Dr. Nielsen’s projects is an effort to show how the universe as we know it, with all its apparent regularity, could arise from pure randomness, a subject he calls “random dynamics.”
Dr. Nielsen admits that his and Dr. Ninomiya’s new theory smacks of time travel, a longtime interest, which has become a respectable research subject in recent years. While it is a paradox to go back in time and kill your grandfather, physicists agree there is no paradox if you go back in time and save him from being hit by a bus. In the case of the Higgs and the collider, it is as if something is going back in time to keep the universe from being hit by a bus. Just why the Higgs would be such a catastrophe, though, is not clear. If we knew, presumably, we wouldn’t be trying to make one.
We always assume that the past influences the future. But that is not necessarily true in the physics of Newton or Einstein. According to physicists, all you really need to know, mathematically, to describe what happens to an apple or the 100 billion galaxies of the universe over all time are the laws that describe how things change and a statement of where things start. The latter are the so-called boundary conditions — the apple five feet over your head, or the Big Bang.
The equations work just as well, Dr. Nielsen and others point out, if the boundary conditions specify a condition in the future (the apple on your head) instead of in the past, as long as the fundamental laws of physics are reversible, which most physicists believe they are.
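To make the point about boundary conditions concrete, here is a minimal sketch, assuming nothing fancier than an apple falling under constant gravity with no air resistance. It checks numerically that specifying the state at a later time T pins down exactly the same trajectory as specifying it at t = 0, because the formula runs the same way in either direction.

# Minimal illustration: the trajectory x(t) = x0 + v0*t - 0.5*g*t**2 can be
# fixed either by a past boundary condition (state at t = 0) or a future one
# (state at t = T); both descriptions agree at every time.
G = 9.81  # m/s^2

def height_from_past(x0, v0, t):
    """Height at time t, given position and velocity at t = 0."""
    return x0 + v0 * t - 0.5 * G * t ** 2

def height_from_future(xT, vT, T, t):
    """Height at time t, given position and velocity at the later time T."""
    dt = t - T  # negative when t is earlier than T
    return xT + vT * dt - 0.5 * G * dt ** 2

# Same apple described both ways: it starts 1.5 m overhead, at rest.
x0, v0, T = 1.5, 0.0, 0.5
xT = height_from_past(x0, v0, T)
vT = v0 - G * T
for t in (0.0, 0.25, 0.5):
    assert abs(height_from_past(x0, v0, t) - height_from_future(xT, vT, T, t)) < 1e-9
print("Past and future boundary conditions describe the same trajectory.")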
“For those of us who believe in physics,” Einstein once wrote to a friend, “this separation between past, present and future is only an illusion.”
In Kurt Vonnegut’s novel “Sirens of Titan,” all of human history turns out to be reduced to delivering a piece of metal roughly the size and shape of a beer-can opener to an alien marooned on Saturn’s moon so he can repair his spaceship and go home.
Whether the collider has such a noble or humble fate — or any fate at all — remains to be seen. As a Red Sox fan my entire adult life, I feel I know something about jinxes.
http://www.nytimes.com/2009/10/13/scien ... nted=print
October 20, 2009
Op-Ed Columnist
Where the Wild Things Are
By DAVID BROOKS
In Homer’s poetry, every hero has a trait. Achilles is angry. Odysseus is cunning. And so was born one picture of character and conduct.
In this view, what you might call the philosopher’s view, each of us has certain ingrained character traits. An honest person will be honest most of the time. A compassionate person will be compassionate.
These traits, as they say, go all the way down. They shape who we are, what we choose to do and whom we befriend. Our job is to find out what traits of character we need to become virtuous.
But, as Kwame Anthony Appiah, a Princeton philosopher, notes in his book “Experiments in Ethics,” this philosopher’s view of morality is now being challenged by a psychologist’s view. According to the psychologist’s view, individuals don’t have one thing called character.
The psychologists say this because a century’s worth of experiments suggests that people’s actual behavior is not driven by permanent traits that apply from one context to another. Students who are routinely dishonest at home are not routinely dishonest at school. People who are courageous at work can be cowardly at church. People who behave kindly on a sunny day may behave callously the next day when it is cloudy and they are feeling glum. Behavior does not exhibit what the psychologists call “cross-situational stability.”
The psychologists thus tend to gravitate toward a different view of conduct. In this view, people don’t have one permanent thing called character. We each have a multiplicity of tendencies inside, which are activated by this or that context. As Paul Bloom of Yale put it in an essay for The Atlantic last year, we are a community of competing selves. These different selves “are continually popping in and out of existence. They have different desires, and they fight for control — bargaining with, deceiving, and plotting against one another.”
The philosopher’s view is shaped like a funnel. At the bottom, there is a narrow thing called character. And at the top, the wide ways it expresses itself. The psychologist’s view is shaped like an upside-down funnel. At the bottom, there is a wide variety of unconscious tendencies that get aroused by different situations. At the top, there is the narrow story we tell about ourselves to give coherence to life.
The difference is easy to recognize on the movie screen. Most movies embrace the character version. The hero is good and conquers evil. Spike Jonze’s new movie adaptation of “Where the Wild Things Are” illuminates the psychological version.
At the beginning of the movie, young Max is torn by warring impulses he cannot control or understand. Part of him loves and depends upon his mother. But part of him rages against her.
In the midst of turmoil, Max falls into a primitive, mythical realm with a community of Wild Things. The Wild Things contain and re-enact different pieces of his inner frenzy. One of them feels unimportant. One throws a tantrum because his love has been betrayed. They embody his different tendencies.
Many critics have noted that, in the movie version, the Wild Things are needlessly morose and whiny. But in one important way, the movie is better than the book. In the book, Max effortlessly controls the Wild Things by taming them with “the magic trick of staring into all their yellow eyes without blinking once.”
In the movie, Max wants to control the Wild Things. The Wild Things in turn want to be controlled. They want him to build a utopia for them where they won’t feel pain. But in the movie, Max fails as king. He lacks the power to control his Wild Things. The Wild Things come to recognize that he isn’t really a king, and maybe there are no such things as kings.
In the philosopher’s picture, the good life is won through direct assault. Heroes use reason to separate virtue from vice. Then they use willpower to conquer weakness, fear, selfishness and the dark passions lurking inside. Once they achieve virtue they do virtuous things.
In the psychologist’s version, the good life is won indirectly. People have only vague intuitions about the instincts and impulses that have been implanted in them by evolution, culture and upbringing. There is no easy way to command all the wild things jostling inside.
But it is possible to achieve momentary harmony through creative work. Max has all his Wild Things at peace when he is immersed in building a fort or when he is giving another his complete attention. This isn’t the good life through heroic self-analysis but through mundane, self-forgetting effort, and through everyday routines.
Appiah believes these two views of conduct are in conversation, not conflict. But it does seem we’re in one of those periods when words like character fall into dispute and change their meaning.
http://www.nytimes.com/2009/10/20/opini ... nted=print
******
October 20, 2009
Findings
For Decades, Puzzling People With Mathematics
By JOHN TIERNEY
For today’s mathematical puzzle, assume that in the year 1956 there was a children’s magazine in New York named after a giant egg, Humpty Dumpty, who purportedly served as its chief editor.
Mr. Dumpty was assisted by a human editor named Martin Gardner, who prepared “activity features” and wrote a monthly short story about the adventures of the child egg, Humpty Dumpty Jr. Another duty of Mr. Gardner’s was to write a monthly poem of moral advice from Humpty Sr. to Humpty Jr.
At that point, Mr. Gardner was 42 and had never taken a math course beyond high school. He had struggled with calculus and considered himself poor at solving basic mathematical puzzles, let alone creating them. But when the publisher of Scientific American asked him if there might be enough material for a monthly column on “recreational mathematics,” a term that sounded even more oxymoronic in 1956 than it does today, Mr. Gardner took a gamble.
He quit his job with Humpty Dumpty.
On Wednesday, Mr. Gardner will celebrate his 95th birthday with the publication of another book — his second book of essays and mathematical puzzles to be published just this year. With more than 70 books to his name, he is the world’s best-known recreational mathematician, and has probably introduced more people to the joys of math than anyone in history.
How is this possible?
Actually, there are two separate puzzles here. One is how Mr. Gardner, who still works every day at his old typewriter, has managed for so long to confound and entertain his readers. The other is why so many of us have never been able to resist this kind of puzzle. Why, when we hear about the guy trying to ferry a wolf and a goat and a head of cabbage across the river in a small boat, do we feel compelled to solve his transportation problem?
It never occurred to me that math could be fun until the day in grade school that my father gave me a book of 19th-century puzzles assembled by Mr. Gardner — the same puzzles, as it happened, that Mr. Gardner’s father had used to hook him during his school days. The algebra and geometry were sugar-coated with elaborate stories and wonderful illustrations of giraffe races, pool-hall squabbles, burglaries and scheming carnival barkers. (Go to nytimes.com/tierneylab for some examples.)
The puzzles didn’t turn Mr. Gardner into a professional mathematician — he majored in philosophy at the University of Chicago — but he remained a passionate amateur through his first jobs in public relations and journalism. After learning of mathematicians’ new fascination with folding certain pieces of paper into different shapes, he sold an article about these “flexagons” to Scientific American, and that led to his monthly “Mathematical Games” column, which he wrote for the next quarter-century.
Mr. Gardner prepared for the new monthly column by scouring Manhattan’s second-hand bookstores for math puzzles and games. In another line of work, that would constitute plagiarism, but among puzzle makers it has long been the norm: a good puzzle is forever.
For instance, that puzzle about ferrying the wolf, the goat and the cabbage was included in a puzzle collection prepared for the emperor Charlemagne 12 centuries ago — and it was presumably borrowed by Charlemagne’s puzzlist. The row-boat problem has been passed down in cultures around the world in versions featuring guards and prisoners, jealous spouses, missionaries, cannibals and assorted carnivores.
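For readers who would rather see the row-boat puzzle dispatched by machine than by aha! moment, here is a minimal breadth-first-search sketch in Python. The state encoding and names are my own illustration, not taken from any historical version of the puzzle.

from collections import deque

# Breadth-first search for the river-crossing puzzle. A state records which
# bank each of the farmer, wolf, goat and cabbage is on (0 = start bank,
# 1 = far bank). The encoding below is an illustrative assumption.
ITEMS = ("farmer", "wolf", "goat", "cabbage")
START, GOAL = (0, 0, 0, 0), (1, 1, 1, 1)

def is_safe(state):
    farmer, wolf, goat, cabbage = state
    if wolf == goat and farmer != goat:      # wolf eats goat if left alone
        return False
    if goat == cabbage and farmer != goat:   # goat eats cabbage if left alone
        return False
    return True

def moves(state):
    farmer = state[0]
    for i in range(4):
        # The farmer crosses alone (i == 0) or takes one item from his bank.
        if i == 0 or state[i] == farmer:
            new = list(state)
            new[0] = 1 - farmer
            if i != 0:
                new[i] = 1 - state[i]
            new = tuple(new)
            if is_safe(new):
                yield new

def solve():
    queue, seen = deque([(START, [START])]), {START}
    while queue:
        state, path = queue.popleft()
        if state == GOAL:
            return path
        for nxt in moves(state):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [nxt]))

for step in solve():
    print(dict(zip(ITEMS, step)))

The search turns up the familiar seven-crossing answer: ferry the goat over first, and bring it back once in the middle so the wolf and the cabbage are never left alone with it.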
“The number of puzzles I’ve invented you can count on your fingers,” Mr. Gardner says. Through his hundreds of columns and dozens of books, he always credited others for the material and insisted that he wasn’t even a good mathematician.
“I don’t think I ever wrote a column that required calculus,” he says. “The big secret of my success as a columnist was that I didn’t know much about math.
“I had to struggle to get everything clear before I wrote a column, so that meant I could write it in a way that people could understand.”
After he gave up the column in 1981, Mr. Gardner kept turning out essays and books, and his reputation among mathematicians, puzzlists and magicians just kept growing. Since 1994, they have been convening in Atlanta every two years to swap puzzles and ideas at an event called the G4G: the Gathering for Gardner.
“Many have tried to emulate him; no one has succeeded,” says Ronald Graham, a mathematician at the University of California, San Diego. “Martin has turned thousands of children into mathematicians, and thousands of mathematicians into children.”
Mr. Gardner says he has been gratified to see more and more teachers incorporating puzzles into the math curriculum. The pleasure of puzzle-solving, as he sees it, is a happy byproduct of evolution.
“Consider a cow,” he says. “A cow doesn’t have the problem-solving skill of a chimpanzee, which has discovered how to get termites out of the ground by putting a stick into a hole.
“Evolution has developed the brain’s ability to solve puzzles, and at the same time has produced in our brain a pleasure of solving problems.”
Mr. Gardner’s favorite puzzles are the ones that require a sudden insight. That aha! moment can come in any kind of puzzle, but there’s a special pleasure when the insight is mathematical — and therefore eternal, as Mr. Gardner sees it. In his new book, “When You Were a Tadpole and I Was a Fish,” he explains why he is an “unashamed Platonist” when it comes to mathematics.
“If all sentient beings in the universe disappeared,” he writes, “there would remain a sense in which mathematical objects and theorems would continue to exist even though there would be no one around to write or talk about them. Huge prime numbers would continue to be prime even if no one had proved them prime.”
I share his mathematical Platonism, and I think that is ultimately the explanation for the appeal of the puzzles. They may superficially involve row boats or pool halls or giraffes, but they’re really about transcendent numbers and theorems.
When you figure out the answer, you know you’ve found something that is indisputably true anywhere, anytime. For a brief moment, the universe makes perfect sense.
Correction: An earlier version of this column incorrectly said that Martin Gardner was 37 in 1956.
http://www.nytimes.com/2009/10/20/scien ... nted=print
******
November 10, 2009
Op-Ed Columnist
The Rush to Therapy
By DAVID BROOKS
We’re all born late. We’re born into history that is well under way. We’re born into cultures, nations and languages that we didn’t choose. On top of that, we’re born with certain brain chemicals and genetic predispositions that we can’t control. We’re thrust into social conditions that we detest. Often, we react in ways we regret even while we’re doing them.
But unlike the other animals, people do have a drive to seek coherence and meaning. We have a need to tell ourselves stories that explain it all. We use these stories to supply the metaphysics, without which life seems pointless and empty.
Among all the things we don’t control, we do have some control over our stories. We do have a conscious say in selecting the narrative we will use to make sense of the world. Individual responsibility is contained in the act of selecting and constantly revising the master narrative we tell about ourselves.
The stories we select help us, in turn, to interpret the world. They guide us to pay attention to certain things and ignore other things. They lead us to see certain things as sacred and other things as disgusting. They are the frameworks that shape our desires and goals. So while story selection may seem vague and intellectual, it’s actually very powerful. The most important power we have is the power to help select the lens through which we see reality.
Most people select stories that lead toward cooperation and goodness. But over the past few decades a malevolent narrative has emerged.
That narrative has emerged on the fringes of the Muslim world. It is a narrative that sees human history as a war between Islam on the one side and Christianity and Judaism on the other. This narrative causes its adherents to shrink their circle of concern. They don’t see others as fully human. They come to believe others can be blamelessly murdered and that, in fact, it is admirable to do so.
This narrative is embraced by a small minority. But it has caused incredible amounts of suffering within the Muslim world, in Israel, in the U.S. and elsewhere. With their suicide bombings and terrorist acts, adherents to this narrative have made themselves central to global politics. They are the ones who go into crowded rooms, shout “Allahu akbar,” or “God is great,” and then start murdering.
When Maj. Nidal Malik Hasan did that in Fort Hood, Tex., last week, many Americans had an understandable and, in some ways, admirable reaction. They didn’t want the horror to become a pretext for anti-Muslim bigotry.
So immediately the coverage took on a certain cast. The possibility of Islamic extremism was immediately played down. This was an isolated personal breakdown, not an ideological assault, many people emphasized.
Major Hasan was portrayed as a disturbed individual who was under a lot of stress. We learned about pre-traumatic stress syndrome, and secondary stress disorder, which one gets from hearing about other people’s stress. We heard the theory (unlikely in retrospect) that Hasan was so traumatized by the thought of going into a combat zone that he decided to take a gun and create one of his own.
A shroud of political correctness settled over the conversation. Hasan was portrayed as a victim of society, a poor soul who was pushed over the edge by prejudice and unhappiness.
There was a national rush to therapy. Hasan was a loner who had trouble finding a wife and socializing with his neighbors.
This response was understandable. It’s important to tamp down vengeful hatreds in moments of passion. But it was also patronizing. Public commentators assumed the air of kindergarten teachers who had to protect their children from thinking certain impermissible and intolerant thoughts. If public commentary wasn’t carefully policed, the assumption seemed to be, then the great mass of unwashed yahoos in Middle America would go off on a racist rampage.
Worse, it absolved Hasan — before the real evidence was in — of his responsibility. He didn’t have the choice to be lonely or unhappy. But he did have a choice over what story to build out of those circumstances. And evidence is now mounting to suggest he chose the extremist War on Islam narrative that so often leads to murderous results.
The conversation in the first few days after the massacre was well intentioned, but it suggested a willful flight from reality. It ignored the fact that the war narrative of the struggle against Islam is the central feature of American foreign policy. It ignored the fact that this narrative can be embraced by a self-radicalizing individual in the U.S. as much as by groups in Tehran, Gaza or Kandahar.
It denied, before the evidence was in, the possibility of evil. It sought to reduce a heinous act to social maladjustment. It wasn’t the reaction of a morally or politically serious nation.
http://www.nytimes.com/2009/11/10/opini ... nted=print
November 27, 2009
Op-Ed Columnist
The Other Education
By DAVID BROOKS
Like many of you, I went to elementary school, high school and college. I took such and such classes, earned such and such grades, and amassed such and such degrees.
But on the night of Feb. 2, 1975, I turned on WMMR in Philadelphia and became mesmerized by a concert the radio station was broadcasting. The concert was by a group I’d never heard of — Bruce Springsteen and the E Street Band. Thus began a part of my second education.
We don’t usually think of this second education. For reasons having to do with the peculiarities of our civilization, we pay a great deal of attention to our scholastic educations, which are formal and supervised, and we devote much less public thought to our emotional educations, which are unsupervised and haphazard. This is odd, since our emotional educations are much more important to our long-term happiness and the quality of our lives.
In any case, over the next few decades Springsteen would become one of the professors in my second education. In album after album he assigned a new course in my emotional curriculum.
This second education doesn’t work the way the scholastic education works. In a normal schoolroom, information walks through the front door and announces itself by light of day. It’s direct. The teacher describes the material to be covered, and then everybody works through it.
The knowledge transmitted in an emotional education, on the other hand, comes indirectly, seeping through the cracks of the windowpanes, from under the floorboards and through the vents. It’s generally a byproduct of the search for pleasure, and the learning is indirect and unconscious.
From that first night in the winter of 1975, I wanted the thrill that Springsteen was offering. His manager, Jon Landau, says that each style of music elicits its own set of responses. Rock, when done right, is jolting and exhilarating.
Once I got a taste of that emotional uplift, I was hooked. The uplifting experiences alone were bound to open the mind for learning.
I followed Springsteen into his world. Once again, it wasn’t the explicit characters that mattered most. Springsteen sings about teenage couples out on a desperate lark, workers struggling as the mills close down, and drifters on the wrong side of the law. These stories don’t directly touch my life, and as far as I know he’s never written a song about a middle-age pundit who interviews politicians by day and makes mind-numbingly repetitive school lunches at night.
What mattered most, as with any artist, were the assumptions behind the stories. His tales take place in a distinct universe, a distinct map of reality. In Springsteen’s universe, life’s “losers” always retain their dignity. Their choices have immense moral consequences, and are seen on an epic and anthemic scale.
There are certain prominent neighborhoods on his map — one called defeat, another called exaltation, another called nostalgia. Certain emotional chords — stoicism, for one — are common, while others are absent. “There is no sarcasm in his writing,” Landau says, “and not a lot of irony.”
I find I can’t really describe what this landscape feels like, especially in newspaper prose. But I do believe his narrative tone, the mental map, has worked its way into my head, influencing the way I organize the buzzing confusion of reality, shaping the unconscious categories through which I perceive events. Just as being from New York or rural Georgia gives you a perspective from which to see the world, so spending time in Springsteen’s universe inculcates its own preconscious viewpoint.
Then there is the man himself. Like other parts of the emotional education, it is hard to bring the knowledge to consciousness, but I do think important lessons are communicated by that embarrassed half-giggle he falls into when talking about himself. I do think a message is conveyed in the way he continually situates himself within a tradition — de-emphasizing his own individual contributions, stressing instead the R&B groups, the gospel and folk singers whose work comes out through him.
I’m not claiming my second education has been exemplary or advanced. I’m describing it because I have only become aware of it retrospectively, and society pays too much attention to the first education and not enough to the second.
In fact, we all gather our own emotional faculty — artists, friends, family and teams. Each refines and develops the inner instrument with a million strings.
Last week, my kids attended their first Springsteen concert in Baltimore. At one point, I looked over at my 15-year-old daughter. She had her hands clapped to her cheeks and a look of slack-jawed, joyous astonishment on her face. She couldn’t believe what she was seeing — 10,000 people in a state of utter abandon, with Springsteen surrendering himself to them in the center of the arena.
It begins again.
http://www.nytimes.com/2009/11/27/opini ... nted=print
December 29, 2009
Essay
The Joy of Physics Isn’t in the Results, but in the Search Itself
By DENNIS OVERBYE
I was asked recently what the Large Hadron Collider, the giant particle accelerator outside Geneva, is good for. After $10 billion and 15 years, the machine is ready to begin operations early next year, banging together protons in an effort to recreate the conditions of the Big Bang. Sure, there are new particles and abstract symmetries in the offing for those few who speak the language of quantum field theory. But what about the rest of us?
The classic answer was allegedly given long ago by Michael Faraday, who, when asked what good was electricity, told a government minister that he didn’t know but that “one day you will tax it.”
Not being fast enough on my feet, I rattled off the usual suspects. Among the spinoffs from particle physics, besides a robust academic research community, are the Web, which was invented as a tool for physicists to better communicate at CERN — the European Organization for Nuclear Research, builders of the new collider — and many modern medical imaging methods like M.R.I.’s and PET scans.
These tests sound innocuous and even miraculous: noninvasive and mostly painless explorations of personal inner space, but their use does involve an encounter with forces that sound like they came from the twilight zone. When my wife, Nancy, had a scan known as a Spect last fall, for what seems to have been a false alarm, she had to be injected with a radioactive tracer. That meant she had to sleep in another room for a couple of days and was forbidden to hug our daughter.
The “P” in PET scan, after all, stands for positron, as in the particles that are opposites to the friendly workhorse, the electron, which is to say antimatter, the weird stuff of science-fiction dreams.
I don’t know if anyone ever asked Paul Dirac, the British physicist who predicted the existence of antimatter, whether it would ever be good for anything. Some people are now saying that the overuse of scanning devices has helped bankrupt the health care system. Indeed, when I saw the bill for Nancy’s scan, I almost fainted, but when I saw how little of it we ourselves had to pay, I felt like ordering up Champagne.
But better medical devices are not why we build these machines that eat a small city’s worth of electricity to bang together protons and recreate the fires of the Big Bang. Better diagnoses are not why young scientists spend the best years of their lives welding and soldering and pulling cable through underground caverns inside detectors the size of New York apartment buildings to capture and record those holy fires.
They want to know where we all came from, and so do I. In a drawer at home I have a family tree my brother made as a school project long ago tracing our ancestry back several hundred years in Norway, but it’s not enough. Whatever happened in the Big Bang, whatever laws are briefly reincarnated in the unholy proton fires at CERN, not only made galaxies and planets possible, but it also made us possible. How atoms could achieve such a thing is a story mostly untold but worth revering. The Earth’s biosphere is the most complicated manifestation of the laws of nature that we know of.
Like an only child dreaming of lost siblings, we dream of finding other Earths, other creatures and civilizations out in space, or even other universes. We all want to find out that we are cosmic Anastasias and that there is a secret that connects us, that lays bare the essential unity of physical phenomena.
And so we try, sometimes against great odds. The year that is now ending began with some areas of science in ruins. One section of the Large Hadron Collider looked like a train wreck, with several-ton magnets lying about smashed after an electrical connection between them vaporized only nine days after a showy inauguration.
The Hubble Space Telescope was limping about in orbit with only one of its cameras working.
But here is the scorecard at the end of the year: in December, the newly refurbished collider produced a million proton collisions, including 50,000 at the record energy of 1.2 trillion electron volts per proton, before going silent for the holidays. CERN is on track to run it next year at three times that energy.
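For readers who want to check the arithmetic in that scorecard, here is a minimal sketch in Python; the energies are the ones quoted above, and the factor of two for a head-on collision is my own assumption rather than anything stated in the article.

# Rough arithmetic on the quoted figures; illustrative only.
e_per_proton_2009 = 1.2e12                 # electron volts per proton, the December record
e_per_proton_next = 3 * e_per_proton_2009  # "three times that energy"
collision_energy = 2 * e_per_proton_next   # two protons meeting head on (my assumption)
print(e_per_proton_next / 1e12)            # about 3.6 trillion eV per proton
print(collision_energy / 1e12)             # roughly 7 trillion eV per collision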
The Hubble telescope, after one last astronaut servicing visit, reached to within spitting distance of the Big Bang and recorded images of the most distant galaxies yet observed, which existed some 600 million or 700 million years after the putative beginning of time.
Not to mention the rapidly expanding universe of extrasolar planets. In my view from the cosmic bleachers, the pot is bubbling for discovery. We all got a hint of just how crazy that might be in the new age of the Internet on Dec. 17, when physicists around the world found themselves glued to a Webcast of the results from an experiment called the Cryogenic Dark Matter Search. Rumors had swept the blogs and other outposts of scientific commentary that the experimenters were going to announce that they had finally detected the ethereal and mysterious dark matter particles, which, astronomers say, make up a quarter of the universe.
In the end, the result was frustratingly vague and inconclusive.
“We want it to be true — we so want to have a clue about dark matter,” Maria Spiropulu, a Caltech physicist working at CERN, wrote to me the night of the Webcast.
“And it is not easy,” Dr. Spiropulu said. “The experiments are not easy and the analysis is not easy. This is a tough, tough ride over all.”
Although we might well solve part of the dark matter conundrum in the coming years, the larger mystery winds out in front of us like a train snaking into the fog.
We may never know where we came from. We will probably never find that cosmic connection to our lost royalty. Someday I will visit Norway and look up those ancestors. They died not knowing the fate of the universe, and so will I, but maybe that’s all right.
Steven Weinberg, a University of Texas physicist and Nobel Prize winner, once wrote in his 1977 book “The First Three Minutes”: “The more the universe seems comprehensible, the more it also seems pointless.” Dr. Weinberg has been explaining that statement ever since. He went on to say that it is by how we live and love and, yes, do science, that the universe warms up and acquires meaning.
As the dark matter fever was rising a few weeks ago, I called Vera Rubin, the astronomer at the department of terrestrial magnetism of the Carnegie Institution of Washington, who helped make dark matter a cosmic issue by showing that galaxies rotate too fast for the gravity of their luminous components to keep them together.
But Dr. Rubin, who likes to stick to the facts, refused to be excited. “I don’t know if we have dark matter or have to nudge Newton’s Laws or what.
“I’m sorry I know so little; I’m sorry we all know so little. But that’s kind of the fun, isn’t it?”
http://www.nytimes.com/2009/12/29/scien ... nted=print
There is a very striking related multimedia feature linked at:
http://www.nytimes.com/2010/01/05/science/05books.html
January 5, 2010
Books on Science
A Guide to the Cosmos, in Words and Images
By DENNIS OVERBYE
In the universe there is always room for another surprise. Or two. Or a trillion.
Take the Witch Head Nebula, for example — a puffy purplish trail of gas in the constellation Eridanus. When a picture of it is turned on its side, the nebula looks just like, well, a witch, complete with a pointy chin and peaked hat, ready to jump on a broomstick or offer an apple to Snow White.
In 30 years of covering astronomy, I had never heard of the Witch Head Nebula until I came across a haunting two-page spread showing it snaking across an inky black star-speckled background in “Far Out: A Space-Time Chronicle,” an exquisite picture guide to the universe by Michael Benson, a photographer, journalist and filmmaker, and obviously a longtime space buff.
Actually “exquisite” does not really do justice to the aesthetic and literary merits of the book, published in the fall. I live in New York, so most of the cosmos is invisible to me, but even when I lived under the black crystalline and — at this time of year — head-ringingly cold skies of the Catskills, I could see only so far. If you don’t have your own Hubble Space Telescope, this book is the next best thing.
Mr. Benson has scoured images from the world’s observatories, including the Hubble, to fashion a step-by-step tour of the cosmos, outward from fantastical clusters and nebulae a few hundred light-years away to soft red dots of primordial galaxies peppering the wall of the sky billions of light-years beyond the stars, almost to the Big Bang.
The result is an art book befitting its Abrams imprint. Here are stars packed like golden sand, gas combed in delicate blue threads, piled into burgundy thunderheads and carved into sinuous rilles and ribbons, and galaxies clotted with star clusters dancing like spiders on the ceiling.
Mr. Benson has reprocessed many of the images to give them colors truer to physical reality. For example, in the NASA version of the Hubble’s “Pillars of Creation,” showing fingers of gas and dust in the Eagle Nebula boiling away to reveal new stars, the “pillars” are brown and the radiation burning them away is green; Mr. Benson has turned it into a composition in shades of red, including burgundy, the actual color of the ionized hydrogen that makes the nebula.
You can sit and look through this book for hours and never be bored by the shapes, colors and textures into which cosmic creation can arrange itself, or you can actually read the accompanying learned essays. Mr. Benson’s prose is up to its visual surroundings, no mean feat.
“The enlarging mirrors of our telescopes,” he writes, “comprise material forged at the centers of the same generation of stars they now record.”
One set of essays relates what was going on in the sky to what was going on back on Earth. The Witch Head, for example, is about 700 light-years from here, which means its soft smoky light has been traveling to us since the early part of the 14th century. It is a milestone for, among other things, the bubonic plague, the first stirrings of the Renaissance in Italy and the foundation of the Ming dynasty in China.
The Heart Nebula, another new acquaintance, in Cassiopeia right next to the Soul Nebula, is 7,500 light-years away. Its image dates to the time of the first proto-writing in China and the first wine, in Persia, and when the Mediterranean burst its banks in biblical fashion and flooded the Black Sea.
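The arithmetic behind those dates is simple subtraction from the quoted distances; here is a minimal Python sketch, assuming the article’s 2010 vantage point.

# Light-travel-time arithmetic for the distances quoted above.
def emission_year(distance_light_years, observation_year=2010):
    """Approximate year the light now reaching us left the nebula (negative means B.C.)."""
    return observation_year - distance_light_years

print(emission_year(700))    # 1310, the early 14th century
print(emission_year(7500))   # -5490, i.e. around 5500 B.C.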
The journey outward ends in those distant blurry galaxies on the doorstep of the Big Bang. Or is it the beginning?
“Eternity,” Mr. Benson quotes William Blake as saying in an epigraph, “is in love with the productions of time.” Well, aren’t we all?
http://www.nytimes.com/2010/01/05/science/05books.html
January 12, 2010
Deciphering the Chatter of Monkeys and Chimps
By NICHOLAS WADE
Walking through the Tai forest of Ivory Coast, Klaus Zuberbühler could hear the calls of the Diana monkeys, but the babble held no meaning for him.
That was in 1990. Today, after nearly 20 years of studying animal communication, he can translate the forest’s sounds. This call means a Diana monkey has seen a leopard. That one means it has sighted another predator, the crowned eagle. “In our experience time and again, it’s a humbling experience to realize there is so much more information being passed in ways which hadn’t been noticed before,” said Dr. Zuberbühler, a psychologist at the University of St. Andrews in Scotland.
Do apes and monkeys have a secret language that has not yet been decrypted? And if so, will it resolve the mystery of how the human faculty for language evolved? Biologists have approached the issue in two ways, by trying to teach human language to chimpanzees and other species, and by listening to animals in the wild.
The first approach has been propelled by people’s intense desire — perhaps reinforced by childhood exposure to the loquacious animals in cartoons — to communicate with other species. Scientists have invested enormous effort in teaching chimpanzees language, whether in the form of speech or signs. A New York Times reporter who understands sign language, Boyce Rensberger, was able in 1974 to conduct what may be the first newspaper interview with another species when he conversed with Lucy, a signing chimp. She invited him up her tree, a proposal he declined, said Mr. Rensberger, who is now at M.I.T.
But with a few exceptions, teaching animals human language has proved to be a dead end. They should speak, perhaps, but they do not. They can communicate very expressively — think how definitely dogs can make their desires known — but they do not link symbolic sounds together in sentences or have anything close to language.
Better insights have come from listening to the sounds made by animals in the wild. Vervet monkeys were found in 1980 to have specific alarm calls for their most serious predators. If the calls were recorded and played back to them, the monkeys would respond appropriately. They jumped into bushes on hearing the leopard call, scanned the ground at the snake call, and looked up when played the eagle call.
It is tempting to think of the vervet calls as words for “leopard,” “snake” or “eagle,” but that is not really so. The vervets do not combine the calls with other sounds to make new meanings. They do not modulate them, so far as is known, to convey that a leopard is 10, or 100, feet away. Their alarm calls seem less like words and more like a person saying “Ouch!” — a vocal representation of an inner mental state rather than an attempt to convey exact information.
But the calls do have specific meaning, which is a start. And the biologists who analyzed the vervet calls, Robert Seyfarth and Dorothy Cheney of the University of Pennsylvania, detected another significant element in primates’ communication when they moved on to study baboons. Baboons are very sensitive to who stands where in their society’s hierarchy. If played a recording of a superior baboon threatening an inferior, and the latter screaming in terror, baboons will pay no attention — this is business as usual in baboon affairs. But when researchers concoct a recording in which an inferior’s threat grunt precedes a superior’s scream, baboons will look in amazement toward the loudspeaker broadcasting this apparent revolution in their social order.
Baboons evidently recognize the order in which two sounds are heard, and attach different meanings to each sequence. They and other species thus seem much closer to people in their understanding of sound sequences than in their production of them. “The ability to think in sentences does not lead them to speak in sentences,” Drs. Seyfarth and Cheney wrote in their book “Baboon Metaphysics.”
Some species may be able to produce sounds in ways that are a step or two closer to human language. Dr. Zuberbühler reported last month that Campbell’s monkeys, which live in the forests of the Ivory Coast, can vary individual calls by adding suffixes, just as a speaker of English changes a verb’s present tense to past by adding an “-ed.”
The Campbell’s monkeys give a “krak” alarm call when they see a leopard. But adding an “-oo” changes it to a generic warning of predators. One context for the krak-oo sound is when they hear the leopard alarm calls of another species, the Diana monkey. The Campbell’s monkeys would evidently make good reporters since they distinguish between leopards they have observed directly (krak) and those they have heard others observe (krak-oo).
Even more remarkably, the Campbell’s monkeys can combine two calls to generate a third with a different meaning. The males have a “Boom boom” call, which means “I’m here, come to me.” When booms are followed by a series of krak-oos, the meaning is quite different, Dr. Zuberbühler says. The sequence means “Timber! Falling tree!”
Dr. Zuberbühler has observed a similar achievement among putty-nosed monkeys that combine their “pyow” call (warning of a leopard) with their “hack” call (warning of a crowned eagle) into a sequence that means “Let’s get out of here in a real hurry.”
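To make the structure of these reported combinations concrete, here is a toy Python sketch; it is only an illustration built from the calls and English glosses quoted above, not a model the researchers use, and the exact rules are simplified.

# Toy lookup of the call combinations described above; purely illustrative,
# with the article's glosses standing in for the calls' "meanings".
campbell_calls = {
    ("krak",): "leopard seen directly",
    ("krak-oo",): "generic predator warning",
    ("boom", "boom"): "I'm here, come to me",
}

def interpret(sequence):
    sequence = tuple(sequence)
    # Booms followed by a run of krak-oos change the meaning entirely.
    if sequence[:2] == ("boom", "boom") and len(sequence) > 2 \
            and all(call == "krak-oo" for call in sequence[2:]):
        return "Timber! Falling tree!"
    return campbell_calls.get(sequence, "unknown call")

print(interpret(["krak"]))                                # leopard seen directly
print(interpret(["boom", "boom", "krak-oo", "krak-oo"]))  # Timber! Falling tree!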
Apes have larger brains than monkeys and might be expected to produce more calls. But if there is an elaborate code of chimpanzee communication, their human cousins have not yet cracked it. Chimps make a food call that seems to have a lot of variation, perhaps depending on the perceived quality of the food. How many different meanings can the call assume? “You would need the animals themselves to decide how many meaningful calls they can discriminate,” Dr. Zuberbühler said. Such a project, he estimates, could take a lifetime of research.
Monkeys and apes possess many of the faculties that underlie language. They hear and interpret sequences of sounds much like people do. They have good control over their vocal tract and could produce much the same range of sounds as humans. But they cannot bring it all together.
This is particularly surprising because language is so useful to a social species. Once the infrastructure of language is in place, as is almost the case with monkeys and apes, the faculty might be expected to develop very quickly by evolutionary standards. Yet monkeys have been around for 30 million years without saying a single sentence. Chimps, too, have nothing resembling language, though they shared a common ancestor with humans just five million years ago. What is it that has kept all other primates locked in the prison of their own thoughts?
Drs. Seyfarth and Cheney believe that one reason may be that they lack a “theory of mind”: the recognition that others have thoughts. Since a baboon does not know or worry about what another baboon knows, it has no urge to share its knowledge. Dr. Zuberbühler stresses an intention to communicate as the missing factor. Children from the youngest ages have a great desire to share information with others, even though they gain no immediate benefit in doing so. Not so with other primates.
“In principle, a chimp could produce all the sounds a human produces, but they don’t do so because there has been no evolutionary pressure in this direction,” Dr. Zuberbühler said. “There is nothing to talk about for a chimp because he has no interest in talking about it.” At some point in human evolution, on the other hand, people developed the desire to share thoughts, Dr. Zuberbühler notes. Luckily for them, all the underlying systems of perceiving and producing sounds were already in place as part of the primate heritage, and natural selection had only to find a way of connecting these systems with thought.
Yet it is this step that seems the most mysterious of all. Marc D. Hauser, an expert on animal communication at Harvard, sees the uninhibited interaction between different neural systems as critical to the development of language. “For whatever reason, maybe accident, our brains are promiscuous in a way that animal brains are not, and once this emerges it’s explosive,” he said.
In animal brains, by contrast, each neural system seems to be locked in place and cannot interact freely with others. “Chimps have tons to say but can’t say it,” Dr. Hauser said. Chimpanzees can read each other’s goals and intentions, and do lots of political strategizing, for which language would be very useful. But the neural systems that compute these complex social interactions have not been married to language.
Dr. Hauser is trying to find out whether animals can appreciate some of the critical aspects of language, even if they cannot produce it. He and Ansgar Endress reported last year that cotton-top tamarins can distinguish a word added in front of another word from the same word added at the end. This may seem like the syntactical ability to recognize a suffix or prefix, but Dr. Hauser thinks it is just the ability to recognize when one thing comes before another and has little to do with real syntax.
“I’m becoming pessimistic,” he said of the efforts to explore whether animals have a form of language. “I conclude that the methods we have are just impoverished and won’t get us to where we want to be as far as demonstrating anything like semantics or syntax.”
Yet, as is evident from Dr. Zuberbühler’s research, there are many seemingly meaningless sounds in the forest that convey information in ways perhaps akin to language.
http://www.nytimes.com/2010/01/12/scien ... nted=print
January 26, 2010
Physicists’ Dreams and Worries in Era of the Big Collider
By DENNIS OVERBYE
A few dozen scientists got together in Los Angeles for the weekend recently to talk about their craziest hopes and dreams for the universe.
At least that was the idea.
“I want to set out the questions for the next nine decades,” Maria Spiropulu said on the eve of the conference, called the Physics of the Universe Summit. She was hoping that the meeting, organized with the help of Joseph D. Lykken of the Fermi National Accelerator Laboratory and Gordon Kane of the University of Michigan, would replicate the success of a speech by the mathematician David Hilbert, who in 1900 laid out an agenda of 23 math questions to be solved in the 20th century.
Dr. Spiropulu is a professor at the California Institute of Technology and a senior scientist at CERN, outside Geneva. Next month, CERN’s Large Hadron Collider, the most powerful particle accelerator ever built, will begin colliding protons and generating sparks of primordial fire in an effort to recreate conditions that ruled the universe in the first trillionth of a second of time.
Physicists have been speculating for 30 years what they will see. Now it is almost Christmas morning.
Organized into “duels” of world views, round tables and “diatribes and polemics,” the conference was billed as a place where the physicists could let down their hair about what might come, avoid “groupthink” and “be daring (even at the expense of being wrong),” according to Dr. Spiropulu’s e-mailed instructions. “Tell us what is bugging you and what is inspiring you,” she added.
Adding to the air of looseness, the participants were housed in a Hollywood hotel known long ago as the “Riot Hyatt,” for the antics of rock stars who stayed there.
The eclectic cast included Larry Page, a co-founder of Google, who was handing out new Google phones to his friends; Elon Musk, the PayPal electric-car entrepreneur, who hosted the first day of the meeting at his SpaceX factory, where he is building rockets to ferry supplies and, perhaps, astronauts to the space station; and the filmmaker Jesse Dylan, who showed a new film about the collider. One afternoon, the magician David Blaine was sitting around the SpaceX cafeteria doing card tricks for the physicists.
This group proved to be at least as good at worrying as dreaming.
“We’re confused,” Dr. Lykken explained, “and we’re probably going to be confused for a long time.”
The first speaker of the day was Lisa Randall, a Harvard theorist who began her talk by quoting Galileo to the effect that physics progressed more by working on small problems than by talking about grand ones — an issue that she is taking on in a new book about science and the collider.
And so Dr. Randall emphasized the challenges ahead. Physicists have high expectations and elegant theories about what they will find, she said, but once they start looking in detail at these theories, “they’re not that pretty.”
For example, a major hope is some explanation for why gravity is so weak compared with the other forces of nature. How is it that a refrigerator magnet can hold itself up against the pull of the entire Earth? One popular solution is a hypothesized feature of nature known as supersymmetry, which would cause certain mathematical discrepancies in the calculations to cancel out, as well as produce a plethora of previously undiscovered particles — known collectively as wimps, for weakly interacting massive particles — and presumably a passel of Nobel prizes.
In what physicists call the “wimp miracle,” supersymmetry could also explain the mysterious dark matter that astronomers say makes up 25 percent of the universe. But no single supersymmetrical particle quite fits the bill all by itself, Dr. Randall reported, without some additional fiddling with its parameters.
Moreover, she added, it is worrying that supersymmetric effects have not already shown up as small deviations from the predictions of present-day physics, known as the Standard Model.
“A lot of stuff doesn’t happen,” Dr. Randall said. “We would have expected to see clues by now, but we haven’t.”
These are exciting times, she concluded, but the answers physicists seek might not come quickly or easily. They should prepare for surprises and trouble.
“I can’t help it,” Dr. Randall said. “I’m a worrier.”
Dr. Randall was followed by Dr. Kane, a self-proclaimed optimist who did try to provoke by claiming that physics was on the verge of seeing “the bottom of the iceberg.” The collider would soon discover supersymmetry, he said, allowing physicists to zero in on an explanation of almost everything about the physical world, or at least particle physics.
But he and other speakers were scolded for not being bold enough in the subsequent round-table discussion.
Where, asked Michael Turner of the University of Chicago, were the big ideas? The passion? Where, for that matter, was the universe? Dr. Kane’s hypothesized breakthrough did not include an explanation for the so-called dark energy that seems to be speeding up the expansion of the universe.
Dr. Kane grumbled that the proposed solutions to dark energy did not affect particle physics.
The worrying continued. Lawrence Krauss, a cosmologist from Arizona State, said that most theories were wrong.
“We get the notions they are right because we keep talking about them,” he said. Not only are most theories wrong, he said, but most data are also wrong — at first — subject to glaring uncertainties. The recent history of physics, he said, is full of promising discoveries that disappeared because they could not be repeated.
And so it went.
Maurizio Pierini, a young CERN physicist, pointed out that the tests for new physics were mostly designed to discover supersymmetry. “What if it’s not supersymmetry?” he asked.
Another assumption physicists have taken for granted — that dark matter is a simple particle rather than an entire spectrum of dark behaviors — might not be true, they were told. “Does nature really love simplicity?” Aaron Pierce of the University of Michigan asked.
Neal Weiner of New York University, who has suggested the existence of forces as well as particles on the dark side, said that until recently ideas about dark matter were driven by ideas about particle theory rather than data.
“Ultimately we learn that perhaps it has very little to do with us at all,” Dr. Weiner said. “Who knows what we will find in the dark sector?”
At one point, Mark Wise, a theoretical physicist at Caltech, felt compelled to remind the audience that this was not a depressing time for physics, listing the collider and other new experiments on heaven and on earth. “You cannot call this a depressing time,” he said.
Dr. Randall immediately chimed in. “I agree it’s a good time,” she said. “We’ll make progress by thinking about these little problems.”
On the second day, the discussion continued in an auditorium at Caltech and concluded with a showing of Mr. Dylan’s film and a history talk by Lyn Evans, the CERN scientist who has supervised the building of the Large Hadron Collider through its ups and downs over 15 years, including a disastrous explosion after it first started up in 2008.
Dr. Evans, looking relaxed, said: “It’s a beautiful machine. Now let the adventure of discovery begin.”
Dr. Spiropulu said it had already begun. Her detector, she said, recorded 50,000 proton collisions during the testing of the collider in December, recapitulating much of 20th-century particle physics.
Now it is the 21st century, Dr. Spiropulu said, and “all that has been discussed these last few days will be needed immediately.”
Correction: January 25, 2010
An earlier version of this article misstated the affiliation of Aaron Pierce, a physicist. He is with the University of Michigan, not the California Institute of Technology.
http://www.nytimes.com/2010/01/26/scien ... nted=print
There is a related video at:
http://www.nytimes.com/2010/02/16/scien ... &th&emc=th
February 16, 2010
In Brookhaven Collider, Scientists Briefly Break a Law of Nature
By DENNIS OVERBYE
Physicists said Monday that they had whacked a tiny region of space with enough energy to briefly distort the laws of physics, providing the first laboratory demonstration of the kind of process that scientists suspect has shaped cosmic history.
The blow was delivered in the Relativistic Heavy Ion Collider, or RHIC, at the Brookhaven National Laboratory on Long Island, where, since 2000, physicists have been accelerating gold nuclei around a 2.4-mile underground ring to 99.995 percent of the speed of light and then colliding them in an effort to melt protons and neutrons and free their constituents — quarks and gluons. The goal has been a state of matter called a quark-gluon plasma, which theorists believe existed when the universe was only a microsecond old.
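The quoted speed implies a large relativistic (Lorentz) factor; here is a minimal Python check, with the nucleon rest energy added as my own assumption rather than a figure from the article.

# Lorentz factor implied by the quoted beam speed; illustrative arithmetic only.
from math import sqrt

beta = 0.99995                      # fraction of the speed of light quoted above
gamma = 1.0 / sqrt(1.0 - beta**2)   # relativistic time-dilation factor
print(round(gamma))                 # about 100

# Assuming a nucleon rest energy of roughly 0.94 GeV (my assumption), each
# proton or neutron in a gold nucleus then carries on the order of 100 GeV.
print(round(gamma * 0.94))          # roughly 94 GeV per nucleon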
The departure from normal physics manifested itself in the apparent ability of the briefly freed quarks to tell right from left. That breaks one of the fundamental laws of nature, known as parity, which requires that the laws of physics remain unchanged if we view nature in a mirror.
This happened in bubbles smaller than the nucleus of an atom, which lasted only a billionth of a billionth of a billionth of a second. But in these bubbles were “hints of profound physics,” in the words of Steven Vigdor, associate director for nuclear and particle physics at Brookhaven. Very similar symmetry-breaking bubbles, at an earlier period in the universe, are believed to have been responsible for breaking the balance between matter and its opposite antimatter and leaving the universe with a preponderance of matter.
“We now have a hook” into how these processes occur, Dr. Vigdor said, adding in an e-mail message, “IF the interpretation of the RHIC results turns out to be correct.” Other physicists said the results were an important window into the complicated dynamics of quarks, described by the theory that goes by the somewhat whimsical name of quantum chromodynamics.
Frank Wilczek, a physicist at the Massachusetts Institute of Technology who won the Nobel Prize for work on the theory of quarks, called the new results “interesting and surprising,” and said that understanding them would help physicists understand the behavior of quarks in unusual circumstances.
“It is comparable, I suppose, to understanding better how galaxies form, or astrophysical black holes,” he said.
The Brookhaven scientists and their colleagues discussed their latest results from RHIC in talks and a news conference at a meeting of the American Physical Society Monday in Washington, and in a pair of papers submitted to Physical Review Letters. “This is a view of what the world was like at 2 microseconds,” said Jack Sandweiss of Yale, a member of the Brookhaven team, calling it, “a seething cauldron.”
Among other things, the group announced it had succeeded in measuring the temperature of the quark-gluon plasma as 4 trillion degrees Celsius, “by far the hottest matter ever made,” Dr. Vigdor said. That is 250,000 times hotter than the center of the Sun and well above the temperature at which theorists calculate that protons and neutrons should melt, but the quark-gluon plasma does not act the way theorists had predicted.
Instead of behaving like a perfect gas, in which every quark goes its own way independent of the others, the plasma seemed to act like a liquid. “It was a very big surprise,” Dr. Vigdor said, when it was discovered in 2005. Since then, however, theorists have revisited their calculations and found that the quark soup can be either a liquid or a gas, depending on the temperature, he explained. “This is not your father’s quark-gluon plasma,” said Barbara V. Jacak, of the State University at Stony Brook, speaking for the team that made the new measurements.
It is now thought that the plasma would have to be a million times more energetic to become a perfect gas. That is beyond the reach of any conceivable laboratory experiment, but the experiments colliding lead nuclei in the Large Hadron Collider outside Geneva next winter should reach energies high enough to see some evolution from a liquid to a gas.
Parity, the idea that the laws of physics are the same when left and right are switched, as in a mirror reflection, is one of the most fundamental symmetries of space-time as we know it. Physicists were surprised to discover in 1956, however, that parity is not obeyed by all the laws of nature after all. The universe is slightly lopsided in this regard. The so-called weak force, which governs some radioactive decays, seems to be left-handed, causing neutrinos, the ghostlike elementary particles that are governed by that force, to spin clockwise, when viewed oncoming, but never counterclockwise.
Under normal conditions, the laws of quark behavior observe the principle of mirror symmetry, but Dmitri Kharzeev of Brookhaven, a longtime student of symmetry changes in the universe, had suggested in 1998 that those laws might change under the very abnormal conditions in the RHIC fireball. Conditions in that fireball are such that a cube with sides about one quarter the thickness of a human hair could contain the total amount of energy consumed in the United States in a year.
All this energy, he said, could put a twist in the gluon force fields, which give quarks their marching orders. There can be left-hand twists and right-hand twists, he explained, resulting in space within each little bubble getting a local direction.
What makes the violation of mirror symmetry observable in the collider is the combination of this corkscrewed space with a magnetic field, produced by the charged gold ions blasting at one another. The quarks were then drawn one way or the other along the magnetic field, depending on their electrical charges.
The magnetic fields produced by the collisions are the most intense ever observed, roughly 100 million billion gauss, Dr. Sandweiss said.
Because the directions of the magnetic field and of the corkscrew effect can be different in every bubble, the presumed parity violations can only be studied statistically, averaged over 14 million bubble events. In each of them, the mirror symmetry could be broken in a different direction, Dr. Sandweiss explained, but the effect would always be the same, with positive quarks going one way and negative ones the other. That is what was recorded in RHIC’s STAR detector (STAR being short for Solenoidal Tracker at RHIC) by Dr. Sandweiss and his colleagues. Dr. Sandweiss cautioned that it was still possible that some other effect could be mimicking the parity violation, and he had held off publication of the results for a year, trying unsuccessfully to find one. So they decided, he said, that it was worthy of discussion.
One test of the result, he said, would be to run RHIC at a lower energy and see if the effect went away when there was not enough oomph in the beam to distort space-time. The idea of parity might seem like a very abstract and mathematical concept, but it affects our chemistry and biology. It is not only neutrinos that are skewed. So are many of the molecules of life, including proteins, which are left-handed, and sugars, which are right-handed.
The chirality, or handedness, of molecules prevents certain reactions from taking place in chemistry and biophysics, Dr. Sandweiss noted, and affects what we can digest.
Physicists suspect that the left-handedness of neutrinos might have contributed to the most lopsided feature of the universe of all, the fact that it is composed of matter and not antimatter, even though the present-day laws do not discriminate. The amount of parity violation that physicists have measured in experiments, however, is not enough to explain how the universe got so unbalanced today. We like symmetry, Dr. Kharzeev, of Brookhaven, noted, but if the symmetry between matter and antimatter had not been broken long ago, “the universe would be a very desolate place.”
The new measurement from the quark plasma does not explain the antimatter problem either, Dr. Sandweiss said, but it helps show how departures from symmetry can appear in bubbles like the ones in RHIC in the course of cosmic evolution. Scientists think that the laws of physics went through a series of changes, or “phase transitions,” like water freezing to ice, as the universe cooled during the stupendously hot early moments of the Big Bang. Symmetry-violating bubbles like those of RHIC are more likely to form during these cosmic changeovers. “If you learn more about it from this experiment, we could then illuminate the process that gives rise to these bubbles,” Dr. Sandweiss said.
Dr. Vigdor said: “A lot of physics sounds like science fiction. There is a lot of speculation on what happened in the early universe. The amazing thing is that we have this chance to test any of this.”
March 28, 2010, 5:00 pm
Power Tools
By STEVEN STROGATZ
There is a related video and illustrations at:
http://opinionator.blogs.nytimes.com/20 ... ?th&emc=th
Steven Strogatz on math, from basic to baffling.
Tags:
exponential growth, folding paper, functions, logarithms
If you were an avid television watcher in the 1980s, you may remember a clever show called “Moonlighting.” Known for its snappy dialogue and the romantic chemistry between its co-stars, it featured Cybill Shepherd and Bruce Willis as a couple of wisecracking private detectives named Maddie Hayes and David Addison. While investigating one particularly tough case, David asks a coroner’s assistant for his best guess about possible suspects. “Beats me,” says the assistant. “But you know what I don’t understand?” To which David replies, “Logarithms?” Then, reacting to Maddie’s look: “What? You understood those?”
That pretty well sums up how many people feel about logarithms. Their peculiar name is just part of their image problem. Most folks never use them again after high school, at least not consciously, and are oblivious to the logarithms hiding behind the scenes of their daily lives.
The same is true of many of the other functions discussed in algebra II and pre-calculus. Power functions, exponential functions — what was the point of all that? My goal in this week’s column is to help you appreciate the function of all those functions, even if you never have occasion to press their buttons on your calculator.
A mathematician needs functions for the same reason that a builder needs hammers and drills. Tools transform things. So do functions. In fact, mathematicians often refer to them as “transformations” because of this. But instead of wood or steel, functions pound away on numbers and shapes and, sometimes, even on other functions.
To show you what I mean, let’s plot the graph of the equation y = 4 – x².
You may remember how this sort of activity goes: you draw a picture of the xy plane with the x-axis running horizontally and the y-axis vertically. Then for each x you compute the corresponding y and plot them together as a single point in the xy plane. For example, when x is 1, the equation says y equals 4 minus 1 squared, which is 4 minus 1, or 3. So (x,y) = (1, 3) is a point on the graph. After calculating and plotting a few more points, the following picture emerges.
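Since the plotted figure doesn’t survive in this text-only copy, here is a minimal sketch that reproduces it. It assumes Python with the numpy and matplotlib libraries, which the column itself never mentions:

import numpy as np
import matplotlib.pyplot as plt

# Compute y = 4 - x^2 for a stretch of the x-axis, point by point,
# just as the column describes doing by hand.
x = np.linspace(-3, 3, 200)
y = 4 - x**2

plt.plot(x, y)                      # the downward-curving arch
plt.scatter([1], [3], color="red")  # the worked example point (1, 3)
plt.xlabel("x")
plt.ylabel("y")
plt.title("y = 4 - x²")
plt.show()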
The droopy shape of the curve is due to the action of mathematical pliers. In the equation for y, the function that transforms x into x² behaves a lot like the common tool for bending and pulling things. When it’s applied to every point on a piece of the x-axis (which you could visualize as a straight piece of wire), the pliers bend and elongate that piece into the downward-curving arch shown above.
And what role does the 4 play in the equation y = 4 – x²? It acts like a nail for hanging a picture on a wall. It lifts the bent wire arch up by 4 units. Since it raises all points by the same amount, it’s known as a “constant function.”
This example illustrates the dual nature of functions. On the one hand, they’re tools: the x² bends the piece of the x-axis and the 4 lifts it. On the other hand, they’re building blocks: the 4 and the –x² can be regarded as component parts of a more complicated function, 4 – x², just as wires, batteries and transistors are component parts of a radio.
Once you start to look at things this way, you’ll notice functions everywhere. The arching curve above — technically known as a “parabola” — is the signature of the squaring function x² operating behind the scenes. Look for it when you’re taking a sip from a water fountain or watching a basketball arc toward the hoop. And if you ever have a few minutes to spare on a layover in Detroit’s International Airport, be sure to stop by the Delta terminal to enjoy the world’s most breathtaking parabolas at play.
Parabolas and constants are associated with a wider class of functions — “power functions” of the form xⁿ, in which a variable x is raised to a fixed power n. For a parabola, n = 2; for a constant, n = 0.
Changing the value of n yields other handy tools. For example, raising x to the first power (n = 1) gives a function that works like a ramp, a steady incline of growth or decay. It’s called a “linear function” because its xy graph is a line. If you leave a bucket out in a steady rain, the water collecting at the bottom rises linearly in time.
Another useful tool is the inverse square function 1/x², corresponding to the case n = –2. It’s good for describing how waves and forces attenuate as they spread out in three dimensions — for instance, how a sound softens as it moves away from its source.
Power functions like these are the building blocks that scientists and engineers use to describe growth and decay in their mildest forms.
But when you need mathematical dynamite, it’s time to unpack the exponential functions. They describe all sorts of explosive growth, from nuclear chain reactions to the proliferation of bacteria in a Petri dish. The most familiar example is the function 10ˣ, in which 10 is raised to the power x. Make sure not to confuse this with the earlier power functions. Here the exponent (the power x) is a variable, and the base (the number 10) is a constant — whereas in a power function like x², it’s the other way around. This switch makes a huge difference. Exponential growth is almost unimaginably rapid.
That’s why it’s so hard to fold a piece of paper in half more than 7 or 8 times. Each folding approximately doubles the thickness of the wad, causing it to grow exponentially. Meanwhile, the wad’s length shrinks in half every time, and thus decreases exponentially fast. For a standard sheet of notebook paper, after 7 folds the wad becomes thicker than it is long, so it can’t be folded again. It’s not a matter of the folder’s strength; for a sheet to be considered legitimately folded n times, the resulting wad is required to have 2ⁿ layers in a straight line, and this can’t happen if the wad is thicker than it is long.
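A quick way to feel the exponential squeeze is to simulate the folding. The thickness and length below are made-up but plausible values for a sheet of notebook paper, not numbers from the column, and the model is the crude one described above (it ignores the paper consumed in the curved folds, which Gallivan’s formula accounts for):

# Each fold doubles the thickness of the wad and halves its length.
thickness = 0.05e-3   # meters (assumed: ~0.05 mm sheet)
length = 0.28         # meters (assumed: ~28 cm sheet)

folds = 0
while thickness <= length:
    thickness *= 2
    length /= 2
    folds += 1
    print(f"fold {folds}: thickness {thickness*1000:.2f} mm, length {length*100:.2f} cm")

print(f"After {folds} folds the wad is thicker than it is long, so it cannot be folded again.")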
The challenge was thought to be impossible until Britney Gallivan, then a junior in high school, solved it in 2002. She began by deriving a formula
that predicted the maximum number of times, n, that paper of a given thickness T and length L could be folded in one direction. Notice the forbidding presence of the exponential function 2ⁿ in two places — once to account for the doubling of the wad’s thickness at each fold, and another time to account for the halving of its length.
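The displayed formula itself is missing from this text-only copy. The commonly quoted form of Gallivan’s single-direction folding result is

L = (πt / 6)(2ⁿ + 4)(2ⁿ – 1),

where t is the paper’s thickness and L is the minimum length needed to fold it in half n times; the two appearances of 2ⁿ are the “two places” mentioned above.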
Using her formula, Britney concluded that she would need to use a special roll of toilet paper nearly three quarters of a mile long. In January 2002, she went to a shopping mall in her hometown of Pomona, Calif., and unrolled the paper. Seven hours later, and with the help of her parents, she smashed the world record by folding the paper in half 12 times!
More in This Series
From Fish to Infinity (Jan. 31, 2010)
Rock Groups (Feb. 7, 2010)
The Enemy of My Enemy (Feb. 14, 2010)
Division and Its Discontents (Feb. 21, 2010)
The Joy of X (Feb. 28, 2010)
Finding Your Roots (March 7, 2010)
Square Dancing (March 14, 2010)
Think Globally (March 21, 2010)
See the Entire Series »
In theory, exponential growth is also supposed to grace your bank account. If your money grows at an annual interest rate of r, after one year it will be worth (1 + r) times more; after two years, (1 + r) squared; and after x years, (1 + r)ˣ times more than your initial deposit. Thus the miracle of compounding that we so often hear about is caused by exponential growth in action.
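As a concrete check, here is a tiny sketch that grows a deposit year by year and compares it with the closed-form (1 + r)ˣ. The deposit and the interest rate are made-up illustrative numbers, not figures from the column:

# Compound interest: year-by-year growth vs. the exponential formula.
principal = 1000.0   # assumed initial deposit
r = 0.05             # assumed 5 percent annual interest
years = 10

balance = principal
for _ in range(years):
    balance *= (1 + r)   # each year multiplies the balance by (1 + r)

print(f"Balance after {years} years, compounded step by step: {balance:.2f}")
print(f"Closed form principal * (1 + r)**years:              {principal * (1 + r)**years:.2f}")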
Which brings us back to logarithms. We need them because it’s always useful to have tools that can undo one another. Just as every office worker needs both a stapler and a staple remover, every mathematician needs exponential functions and logarithms. They’re “inverses.” This means that if you type a number x into your calculator, and then punch the 10ˣ button followed by the log x button, you’ll get back to the number you started with.
Logarithms are compressors. They’re ideal for taking numbers that vary over a wide range and squeezing them together so they become more manageable. For instance, 100 and 100 million differ a million-fold, a gulf that most of us find incomprehensible. But their logarithms differ only fourfold (they are 2 and 8, because 100 = 10² and 100 million = 10⁸). In conversation, we all use a crude version of logarithmic shorthand when we refer to any salary between $100,000 and $999,999 as being “six figures.” That “six” is roughly the logarithm of these salaries, which in fact span the range from 5 to 6.
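A short sketch of both points — logarithms undoing powers of 10, and logarithms compressing a wide range into a small one — using Python’s standard math module:

import math

# The log button undoes the 10^x button.
x = 3.7
print(math.log10(10**x))    # prints 3.7 (up to rounding)

# Logarithms compress wildly different numbers onto a manageable scale.
for value in (100, 100_000, 100_000_000):
    print(value, "->", math.log10(value))   # 2.0, 5.0, 8.0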
As impressive as all these functions may be, a mathematician’s toolbox can only do so much — which is why I still haven’t assembled my Ikea bookcases.
--------------------------------------------------------------------------------
NOTES:
1. The excerpt from “Moonlighting” is from the episode “In God We Strongly Suspect.” It originally aired on Feb. 11, 1986, during the show’s second season.
2. Will Hoffman and Derek Paul Boyle have filmed an intriguing video of the parabolas all around us in the everyday world (along with their exponential cousins, curves called “catenaries,” so-named for the shape of hanging chains). Full disclosure: the filmmakers say this video was inspired by a story I told on an episode of RadioLab.
3. For simplicity, I’ve referred to expressions like x² as functions, though to be more precise I should speak of “the function that maps x into x².” I hope this sort of abbreviation won’t cause confusion, since we’ve all seen it on calculator buttons.
4. For the story of Britney Gallivan’s adventures in paper folding, see: Gallivan, B. C. “How to Fold Paper in Half Twelve Times: An ‘Impossible Challenge’ Solved and Explained.” Pomona, CA: Historical Society of Pomona Valley, 2002. For a journalist’s account, aimed at children, see Ivars Peterson, “Champion paper-folder,” Muse (July/August 2004), p. 33. The Mythbusters have also attempted to replicate Britney’s experiment on their television show.
5. For evidence that our innate number sense is logarithmic, see: Stanislas Dehaene, Véronique Izard, Elizabeth Spelke, and Pierre Pica, “Log or linear? Distinct intuitions of the number scale in Western and Amazonian indigene cultures,” Science, Vol. 320 (2008), p. 1217. Popular accounts of this study are available at ScienceDaily and in this episode of RadioLab.
Thanks to David Field, Paul Ginsparg, Jon Kleinberg, Andy Ruina and Carole Schiffman for their comments and suggestions; Diane Hopkins, Cindy Klauss and Brian Madsen for their help in finding and obtaining the “Moonlighting” clip; and Margaret Nelson, for preparing the illustration.
April 3, 2010
Op-Ed Contributor
The End of History (Books)
By MARC ARONSON
TODAY, Apple’s iPad goes on sale, and many see this as a Gutenberg moment, with digital multimedia moving one step closer toward replacing old-fashioned books.
Speaking as an author and editor of illustrated nonfiction, I agree that important change is afoot, but not in the way most people see it. In order for electronic books to live up to their billing, we have to fix a system that is broken: getting permission to use copyrighted material in new work. Either we change the way we deal with copyrights — or works of nonfiction in a multimedia world will become ever more dull and disappointing.
The hope of nonfiction is to connect readers to something outside the book: the past, a discovery, a social issue. To do this, authors need to draw on pre-existing words and images.
Unless we nonfiction writers are lucky and hit a public-domain mother lode, we have to pay for the right to use just about anything — from a single line of a song to any part of a poem; from the vast archives of the world’s art (now managed by gimlet-eyed venture capitalists) to the historical images that serve as profit centers for museums and academic libraries.
The amount we pay depends on where and how the material is used. In fact, the very first question a rights holder asks is “What are you going to do with my baby?” Which countries do you plan to sell in? What languages? Over what period of time? How large will the image be in your book?
Given that permission costs are already out of control for old-fashioned print, it’s fair to expect that they will rise even higher with e-books. After all, digital books will be in print forever (we assume); they can be downloaded, copied, shared and maybe even translated. We’ve all heard about the multimedia potential of the iPad, but how much will writers be charged for film clips and audio? Rights holders will demand a hefty premium for use in digital books — if they make their materials available in that format at all.
Seeing the clouds on the horizon, publishers painstakingly remove photos and even text extracts from print books as they are converted to e-books. So instead of providing a dazzling future, the e-world is forcing nonfiction to become drier, blander and denser.
Still, this logjam between technological potential and copyright hell could turn into a great opportunity — if it leads to a new model for how permission costs are calculated in e-books and even in print.
For e-books, the new model would look something like this: Instead of paying permission fees upfront based on estimated print runs, book creators would pay based on a periodic accounting of downloads. Right now, fees are laid out on a set schedule whose minimum rates are often higher than a modest book can support. The costs may be fine for textbooks or advertisers, but they punish individual authors. Since publishers can’t afford to fully cover permissions fees for print books, and cannot yet predict what they will earn from e-books, the writer has to choose between taking a loss on permissions fees or short-changing readers on content.
But if rights holders were compensated for actual downloads, there would be a perfect fit. The better a book did, the more the original rights holder would be paid. The challenge of this model is accurate accounting — but in the age of iTunes micropayments surely someone can figure out a way.
Before we even get to downloads, though, we need to fix the problem for print books. As a starting point, authors and publishers — perhaps through a joint committee of the Authors Guild and the Association of American Publishers — should create a grid of standard rates for images and text extracts, keyed to print runs and prices.
Since authors and publishers have stakes on both sides of this issue, they ought to be able to come up with suggested fees that would allow creators to set reasonable budgets, and compel rights holders to conform to industry norms.
A good starting point might be a suggested scale based on the total number of images used in a book; an image that was one one-hundredth of a story would cost less than an image that was a tenth of it. Such a plan would encourage authors to use more art, which is precisely what we all want.
If rights remain as tightly controlled and as expensive as they are now, nonfiction will be the province of the entirely new or the overly familiar. Dazzling books with newly created art, text and multimedia will far outnumber works filled with historical materials. Only a few well-heeled companies will have the wherewithal to create gee-whiz multimedia book-like products that require permissions, and these projects will most likely focus on highly popular subjects. History’s outsiders and untold stories will be left behind.
We treat copyrights as individual possessions, jewels that exist entirely by themselves. I’m obviously sympathetic to that point of view. But source material also takes on another life when it’s repurposed. It becomes part of the flow, the narration, the interweaving of text and art in books and e-books. It’s essential that we take this into account as we re-imagine permissions in a digital age.
When we have a new model for permissions, we will have new media. Then all of us — authors, readers, new-media innovators, rights holders — will really see the stories that words and images can tell.
Marc Aronson is the author, most recently, of “If Stones Could Speak: Unlocking the Secrets of Stonehenge.”
http://www.nytimes.com/2010/04/03/opini ... ?th&emc=th
The purpose of posting articles of this kind is to demonstrate the relationship of the abstract mathematical concepts to our daily mundane activities and to appreciate how nature or the 'signs of Allah' are expressed through mathematical order.
There are video and diagrammatic illustrations at:
http://opinionator.blogs.nytimes.com/20 ... ?th&emc=th
April 11, 2010, 5:00 pm
Change We Can Believe In
By STEVEN STROGATZ
Steven Strogatz on math, from basic to baffling.
Tags:
calculus, derivatives, Michael Jordan, Snell’s Law
Long before I knew what calculus was, I sensed there was something special about it. My dad had spoken about it in reverential tones. He hadn’t been able to go to college, being a child of the Depression, but somewhere along the line, maybe during his time in the South Pacific repairing B-24 bomber engines, he’d gotten a feel for what calculus could do. Imagine a mechanically controlled bank of anti-aircraft guns automatically firing at an incoming fighter plane. Calculus, he supposed, could be used to tell the guns where to aim.
Every year about a million American students take calculus. But far fewer really understand what the subject is about or could tell you why they were learning it. It’s not their fault. There are so many techniques to master and so many new ideas to absorb that the overall framework is easy to miss.
Calculus is the mathematics of change. It describes everything from the spread of epidemics to the zigs and zags of a well-thrown curveball. The subject is gargantuan — and so are its textbooks. Many exceed 1,000 pages and work nicely as doorstops.
But within that bulk you’ll find two ideas shining through. All the rest, as Rabbi Hillel said of the Golden Rule, is just commentary. Those two ideas are the “derivative” and the “integral.” Each dominates its own half of the subject, named in their honor as differential and integral calculus.
Roughly speaking, the derivative tells you how fast something is changing; the integral tells you how much it’s accumulating. They were born in separate times and places: integrals, in Greece around 250 B.C.; derivatives, in England and Germany in the mid-1600s. Yet in a twist straight out of a Dickens novel, they’ve turned out to be blood relatives — though it took almost two millennia to see the family resemblance.
More in This Series
From Fish to Infinity (Jan. 31, 2010)
Rock Groups (Feb. 7, 2010)
The Enemy of My Enemy (Feb. 14, 2010)
Division and Its Discontents (Feb. 21, 2010)
The Joy of X (Feb. 28, 2010)
Finding Your Roots (March 7, 2010)
Square Dancing (March 14, 2010)
Think Globally (March 21, 2010)
Power Tools (March 28, 2010)
Take It to the Limit (April 4, 2010)
See the Entire Series »
Next week’s column will explore that astonishing connection, as well as the meaning of integrals. But first, to lay the groundwork, let’s look at derivatives.
Derivatives are all around us, even if we don’t recognize them as such. For example, the slope of a ramp is a derivative. Like all derivatives, it measures a rate of change — in this case, how far you’re going up or down for every step you take. A steep ramp has a large derivative. A wheelchair-accessible ramp, with its gentle gradient, has a small derivative.
Every field has its own version of a derivative. Whether it goes by “marginal return” or “growth rate” or “velocity” or “slope,” a derivative by any other name still smells as sweet. Unfortunately, many students seem to come away from calculus with a much narrower interpretation, regarding the derivative as synonymous with the slope of a curve.
Their confusion is understandable. It’s caused by our reliance on graphs to express quantitative relationships. By plotting y versus x to visualize how one variable affects another, all scientists translate their problems into the common language of mathematics. The rate of change that really concerns them — a viral growth rate, a jet’s velocity, or whatever — then gets converted into something much more abstract but easier to picture: a slope on a graph.
Like slopes, derivatives can be positive, negative or zero, indicating whether something is rising, falling or leveling off. Watch Michael Jordan in action making his top-10 dunks.
Just after lift-off, his vertical velocity (the rate at which his elevation changes in time, and thus, another derivative) is positive, because he’s going up. His elevation is increasing. On the way down, this derivative is negative. And at the highest point of his jump, where he seems to hang in the air, his elevation is momentarily unchanging and his derivative is zero. In that sense he truly is hanging.
There’s a more general principle at work here — things always change slowest at the top or the bottom. It’s especially noticeable here in Ithaca. During the darkest depths of winter, the days are not just unmercifully short; they barely improve from one to the next. Whereas now that spring is popping, the days are lengthening rapidly. All of this makes sense. Change is most sluggish at the extremes precisely because the derivative is zero there. Things stand still, momentarily.
This zero-derivative property of peaks and troughs underlies some of the most practical applications of calculus. It allows us to use derivatives to figure out where a function reaches its maximum or minimum, an issue that arises whenever we’re looking for the best or cheapest or fastest way to do something.
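A tiny symbolic sketch of that recipe, using the sympy library (my addition, not the column’s): model the dunk’s elevation by the assumed parabola h(t) = v₀t – ½gt², differentiate, and find where the derivative vanishes.

import sympy as sp

t, v0, g = sp.symbols("t v0 g", positive=True)
h = v0*t - sp.Rational(1, 2)*g*t**2   # assumed elevation as a function of time

velocity = sp.diff(h, t)              # the derivative: vertical velocity
t_peak = sp.solve(sp.Eq(velocity, 0), t)[0]

print(t_peak)                          # v0/g -- the hang-time moment
print(sp.simplify(h.subs(t, t_peak)))  # v0**2/(2*g) -- the maximum height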
My high school calculus teacher, Mr. Joffray, had a knack for making such “max-min” questions come alive. One day he came bounding into class and began telling us about his hike through a snow-covered field. The wind had apparently blown a lot of snow across part of the field, blanketing it heavily and forcing him to walk much more slowly there, while the rest of the field was clear, allowing him to stride through it easily. In a situation like that, he wondered what path a hiker should take to get from point A to point B as quickly as possible.
One thought would be to trudge straight across the deep snow, to cut down on the slowest part of the hike. The downside, though, is the rest of the trip will take longer than it would otherwise.
Another strategy is to head straight from A to B. That’s certainly the shortest distance, but it does cost extra time in the most arduous part of the trip.
With differential calculus you can find the best path. It’s a certain specific compromise between the two paths considered above.
The analysis involves four main steps. (For those who’d like to see the details, references are given in the notes.)
First, notice that the total time of travel — which is what we’re trying to minimize — depends on just one number, the distance x where the hiker emerges from the snow.
Second, given a choice of x and the known locations of the starting point A and the destination B, we can calculate how much time the hiker spends walking through the fast and slow parts of the field. For each leg of the trip, this calculation requires the Pythagorean theorem and the old algebra mantra, “distance equals rate times time.” Adding the times for both legs together then yields a formula for the total travel time, T, as a function of x. (See the Notes for details.)
Third, we graph T versus x. The bottom of the curve is the point we’re seeking — it corresponds to the least time of travel and hence the fastest trip.
Fourth, to find this lowest point, we invoke the zero-derivative principle mentioned above. We calculate the derivative of T, set it equal to zero, and solve for x.
These four steps require a command of geometry, algebra and various derivative formulas from calculus — skills equivalent to fluency in a foreign language and, therefore, stumbling blocks for many students.
But the final answer is worth the struggle. It reveals that the fastest path obeys a relationship known as Snell’s law. What’s spooky is that nature obeys it, too.
Snell’s law describes how light rays bend when they pass from air into water, as they do when shining into a swimming pool. Light moves more slowly in water, much like the hiker in the snow, and it bends accordingly to minimize its travel time. Similarly, light also bends when it travels from air into glass or plastic as it refracts through your eyeglass lenses.
The eerie point is that light behaves as if it were considering all possible paths and automatically taking the best one. Nature — cue the theme from “The Twilight Zone” — somehow knows calculus.
--------------------------------------------------------------------------------
NOTES
In an online article for the Mathematical Association of America, David Bressoud presents data on the number of American students taking calculus each year.
For a collection of Mr. Joffray’s calculus problems, both classic and original, see: S. Strogatz, “The Calculus of Friendship: What a Teacher and a Student Learned about Life While Corresponding About Math” (Princeton University Press, 2009).
Several videos and websites present the details of Snell’s law and its derivation from Fermat’s principle (which states that light takes the path of least time). Others provide historical accounts.
Fermat’s principle was an early forerunner to the more general principle of least action. For an entertaining and deeply enlightening discussion of this principle, including its basis in quantum mechanics, see: R. P. Feynman, R. B. Leighton and M. Sands, “The principle of least action,” The Feynman Lectures on Physics, Volume 2, Chapter 19 (Addison-Wesley, 1964).
R. Feynman, “QED: The Strange Theory of Light and Matter” (Princeton University Press, 1988).
In a nutshell, Feynman’s astonishing proposition is that nature actually does try all paths. But nearly all of them cancel out with their neighboring paths, through a quantum analog of destructive interference — except for those very close to the classical path where the action is minimized (or more precisely, made stationary). There the quantum interference becomes constructive, rendering those paths exceedingly more likely to be observed. This, in Feynman’s account, is why nature obeys minimum principles. The key is that we live in the macroscopic world of everyday experience, where the actions are enormous compared to Planck’s constant. In that classical limit, quantum destructive interference becomes extremely strong and obliterates nearly everything that could otherwise happen.
Thanks to Paul Ginsparg and Carole Schiffman for their comments and suggestions, and Margaret Nelson for preparing the illustrations.
Need to print this post? Here is a print-friendly PDF version of this piece, with images.
There are video and diagramatic illustrations at:
http://opinionator.blogs.nytimes.com/20 ... ?th&emc=th
April 11, 2010, 5:00 pm
Change We Can Believe In
By STEVEN STROGATZ
Steven Strogatz on math, from basic to baffling.
Tags:
calculus, derivatives, Michael Jordan, Snell’s Law
Long before I knew what calculus was, I sensed there was something special about it. My dad had spoken about it in reverential tones. He hadn’t been able to go to college, being a child of the Depression, but somewhere along the line, maybe during his time in the South Pacific repairing B-24 bomber engines, he’d gotten a feel for what calculus could do. Imagine a mechanically controlled bank of anti-aircraft guns automatically firing at an incoming fighter plane. Calculus, he supposed, could be used to tell the guns where to aim.
Every year about a million American students take calculus. But far fewer really understand what the subject is about or could tell you why they were learning it. It’s not their fault. There are so many techniques to master and so many new ideas to absorb that the overall framework is easy to miss.
Calculus is the mathematics of change. It describes everything from the spread of epidemics to the zigs and zags of a well-thrown curveball. The subject is gargantuan — and so are its textbooks. Many exceed 1,000 pages and work nicely as doorstops.
But within that bulk you’ll find two ideas shining through. All the rest, as Rabbi Hillel said of the Golden Rule, is just commentary. Those two ideas are the “derivative” and the “integral.” Each dominates its own half of the subject, named in their honor as differential and integral calculus.
Roughly speaking, the derivative tells you how fast something is changing; the integral tells you how much it’s accumulating. They were born in separate times and places: integrals, in Greece around 250 B.C.; derivatives, in England and Germany in the mid-1600s. Yet in a twist straight out of a Dickens novel, they’ve turned out to be blood relatives — though it took almost two millennia to see the family resemblance.
Next week’s column will explore that astonishing connection, as well as the meaning of integrals. But first, to lay the groundwork, let’s look at derivatives.
Derivatives are all around us, even if we don’t recognize them as such. For example, the slope of a ramp is a derivative. Like all derivatives, it measures a rate of change — in this case, how far you’re going up or down for every step you take. A steep ramp has a large derivative. A wheelchair-accessible ramp, with its gentle gradient, has a small derivative.
Every field has its own version of a derivative. Whether it goes by “marginal return” or “growth rate” or “velocity” or “slope,” a derivative by any other name still smells as sweet. Unfortunately, many students seem to come away from calculus with a much narrower interpretation, regarding the derivative as synonymous with the slope of a curve.
Their confusion is understandable. It’s caused by our reliance on graphs to express quantitative relationships. By plotting y versus x to visualize how one variable affects another, all scientists translate their problems into the common language of mathematics. The rate of change that really concerns them — a viral growth rate, a jet’s velocity, or whatever — then gets converted into something much more abstract but easier to picture: a slope on a graph.
Like slopes, derivatives can be positive, negative or zero, indicating whether something is rising, falling or leveling off. Watch Michael Jordan in action making his top-10 dunks.
Just after lift-off, his vertical velocity (the rate at which his elevation changes in time, and thus, another derivative) is positive, because he’s going up. His elevation is increasing. On the way down, this derivative is negative. And at the highest point of his jump, where he seems to hang in the air, his elevation is momentarily unchanging and his derivative is zero. In that sense he truly is hanging.
There’s a more general principle at work here — things always change slowest at the top or the bottom. It’s especially noticeable here in Ithaca. During the darkest depths of winter, the days are not just unmercifully short; they barely improve from one to the next. Whereas now that spring is popping, the days are lengthening rapidly. All of this makes sense. Change is most sluggish at the extremes precisely because the derivative is zero there. Things stand still, momentarily.
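To see this in numbers, here is a short Python sketch (my own illustration, not part of the column) that treats the jump as simple projectile motion with an assumed take-off speed; the vertical velocity, which is the derivative of elevation, starts positive, hits zero exactly at the top, and then turns negative.

# A minimal sketch, assuming simple projectile motion (the column quotes no numbers):
# elevation h(t) = v0*t - 0.5*g*t**2, and its derivative is the vertical velocity
# v(t) = v0 - g*t, which is exactly zero at the apex of the jump.

g = 9.81   # gravitational acceleration, m/s^2
v0 = 4.0   # assumed take-off speed, m/s

def elevation(t):
    return v0 * t - 0.5 * g * t ** 2

def vertical_velocity(t):
    return v0 - g * t   # derivative of elevation with respect to time

t_apex = v0 / g   # the moment the derivative equals zero
for t in (0.0, 0.5 * t_apex, t_apex, 1.5 * t_apex):
    print(f"t = {t:.3f} s   elevation = {elevation(t):.3f} m   velocity = {vertical_velocity(t):+.3f} m/s")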
This zero-derivative property of peaks and troughs underlies some of the most practical applications of calculus. It allows us to use derivatives to figure out where a function reaches its maximum or minimum, an issue that arises whenever we’re looking for the best or cheapest or fastest way to do something.
My high school calculus teacher, Mr. Joffray, had a knack for making such “max-min” questions come alive. One day he came bounding into class and began telling us about his hike through a snow-covered field. The wind had apparently blown a lot of snow across part of the field, blanketing it heavily and forcing him to walk much more slowly there, while the rest of the field was clear, allowing him to stride through it easily. In a situation like that, he wondered what path a hiker should take to get from point A to point B as quickly as possible.
One thought would be to trudge straight across the deep snow, to cut down on the slowest part of the hike. The downside, though, is that the rest of the trip will take longer than it would otherwise.
Another strategy is to head straight from A to B. That’s certainly the shortest distance, but it does cost extra time in the most arduous part of the trip.
With differential calculus you can find the best path. It’s a certain specific compromise between the two paths considered above.
The analysis involves four main steps. (For those who’d like to see the details, references are given in the notes.)
First, notice that the total time of travel — which is what we’re trying to minimize — depends on just one number, the distance x where the hiker emerges from the snow.
Second, given a choice of x and the known locations of the starting point A and the destination B, we can calculate how much time the hiker spends walking through the fast and slow parts of the field. For each leg of the trip, this calculation requires the Pythagorean theorem and the old algebra mantra, “distance equals rate times time.” Adding the times for both legs together then yields a formula for the total travel time, T, as a function of x. (See the Notes for details.)
Third, we graph T versus x. The bottom of the curve is the point we’re seeking — it corresponds to the least time of travel and hence the fastest trip.
Fourth, to find this lowest point, we invoke the zero-derivative principle mentioned above. We calculate the derivative of T, set it equal to zero, and solve for x.
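For readers who would like to see the four steps carried out, here is a short Python sketch; the field dimensions and walking speeds are invented for illustration, since the column supplies none. It builds the travel-time function T(x) with the Pythagorean theorem, locates the point where the derivative of T vanishes, and then prints sin(angle)/speed for each leg at that point. The matching values foreshadow the result described just below.

import math

# A sketch with made-up numbers (the column gives none): A sits at the near edge of a
# snowy strip of width d_snow; B lies across a clear strip of width d_clear, offset a
# horizontal distance L from A. The hiker walks at v_snow in the snow, v_clear on clear
# ground, and leaves the snow at a distance x along the boundary between the strips.

d_snow, d_clear, L = 100.0, 100.0, 150.0   # metres (assumed)
v_snow, v_clear = 0.8, 1.6                 # metres per second (assumed)

def travel_time(x):
    # Step two: Pythagoras plus "distance equals rate times time" for each leg.
    return math.hypot(d_snow, x) / v_snow + math.hypot(d_clear, L - x) / v_clear

def dT_dx(x):
    # Step four: the derivative of the total time T with respect to x.
    return (x / (v_snow * math.hypot(d_snow, x))
            - (L - x) / (v_clear * math.hypot(d_clear, L - x)))

# Solve dT/dx = 0 by bisection; the derivative increases steadily from negative to
# positive as x runs from 0 to L, so there is a single root (the bottom of the curve).
lo, hi = 0.0, L
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if dT_dx(mid) > 0:
        hi = mid
    else:
        lo = mid
x_best = 0.5 * (lo + hi)

# At the optimum, compare sin(angle)/speed on the two legs (angles measured from the
# perpendicular to the boundary between the strips).
sin_snow = x_best / math.hypot(d_snow, x_best)
sin_clear = (L - x_best) / math.hypot(d_clear, L - x_best)
print(f"best crossing point x = {x_best:.2f} m, total time = {travel_time(x_best):.1f} s")
print(f"sin(angle)/speed in snow:  {sin_snow / v_snow:.4f}")
print(f"sin(angle)/speed in clear: {sin_clear / v_clear:.4f}")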
These four steps require a command of geometry, algebra and various derivative formulas from calculus — skills equivalent to fluency in a foreign language and, therefore, stumbling blocks for many students.
But the final answer is worth the struggle. It reveals that the fastest path obeys a relationship known as Snell’s law. What’s spooky is that nature obeys it, too.
Snell’s law describes how light rays bend when they pass from air into water, as they do when shining into a swimming pool. Light moves more slowly in water, much like the hiker in the snow, and it bends accordingly to minimize its travel time. Similarly, light also bends when it travels from air into glass or plastic as it refracts through your eyeglass lenses.
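As a small illustration (using the familiar approximate refractive indices of air and water, roughly 1.00 and 1.33, which the column does not quote), a few lines of Python show how much a ray bends on entering the water:

import math

# Snell's law, n1*sin(theta1) = n2*sin(theta2), with the standard approximate
# refractive indices for air and water (an assumption; the column gives no numbers).

n_air, n_water = 1.00, 1.33

def refraction_angle(theta_in_degrees):
    sin_out = (n_air / n_water) * math.sin(math.radians(theta_in_degrees))
    return math.degrees(math.asin(sin_out))

for theta in (10, 30, 50, 70):
    print(f"incidence {theta:2d} deg  ->  refraction {refraction_angle(theta):5.1f} deg")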
The eerie point is that light behaves as if it were considering all possible paths and automatically taking the best one. Nature — cue the theme from “The Twilight Zone” — somehow knows calculus.
--------------------------------------------------------------------------------
NOTES
In an online article for the Mathematical Association of America, David Bressoud presents data on the number of American students taking calculus each year.
For a collection of Mr. Joffray’s calculus problems, both classic and original, see: S. Strogatz, “The Calculus of Friendship: What a Teacher and a Student Learned about Life While Corresponding About Math” (Princeton University Press, 2009).
Several videos and websites present the details of Snell’s law and its derivation from Fermat’s principle (which states that light takes the path of least time). Others provide historical accounts.
Fermat’s principle was an early forerunner to the more general principle of least action. For an entertaining and deeply enlightening discussion of this principle, including its basis in quantum mechanics, see: R. P. Feynman, R. B. Leighton and M. Sands, “The principle of least action,” The Feynman Lectures on Physics, Volume 2, Chapter 19 (Addison-Wesley, 1964).
R. Feynman, “QED: The Strange Theory of Light and Matter” (Princeton University Press, 1988).
In a nutshell, Feynman’s astonishing proposition is that nature actually does try all paths. But nearly all of them cancel out with their neighboring paths, through a quantum analog of destructive interference — except for those very close to the classical path where the action is minimized (or more precisely, made stationary). There the quantum interference becomes constructive, rendering those paths exceedingly more likely to be observed. This, in Feynman’s account, is why nature obeys minimum principles. The key is that we live in the macroscopic world of everyday experience, where the actions are enormous compared to Planck’s constant. In that classical limit, quantum destructive interference becomes extremely strong and obliterates nearly everything that could otherwise happen.
Thanks to Paul Ginsparg and Carole Schiffman for their comments and suggestions, and Margaret Nelson for preparing the illustrations.
This is a continuation of the series of articles on the relevance of mathematics in daily life. The illustrations are given in the link below.
http://opinionator.blogs.nytimes.com/20 ... n&emc=tya1
It Slices, It Dices
By STEVEN STROGATZ
Steven Strogatz on math, from basic to baffling.
Tags:
algebra, calculus, geometry, integrals, signs, symbols
Mathematical signs and symbols are often cryptic, but the best of them offer visual clues to their own meaning. The symbols for zero, one and infinity aptly resemble an empty hole, a single mark and an endless loop: 0, 1, ∞. And the equals sign, =, is formed by two parallel lines because, in the words of its originator, Welsh mathematician Robert Recorde in 1557, “no two things can be more equal.”
In calculus, the most recognizable icon is the integral sign, ∫.
Its graceful lines are evocative of a musical clef or a violin’s f-hole — a fitting coincidence, given that some of the most enchanting harmonies in mathematics are expressed by integrals. But the real reason that Leibniz chose this symbol is much less poetic. It’s simply a long-necked S, for “summation.”
As for what’s being summed, that depends on context. In astronomy, the gravitational pull of the sun on the earth is described by an integral. It represents the collective effect of all the minuscule forces generated by each solar atom at their varying distances from the earth. In oncology, the growing mass of a solid tumor can be modeled by an integral. So can the cumulative amount of drug administered during the course of a chemotherapy regimen.
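To make the “long-necked S” concrete, here is a brief Python sketch; the infusion-rate function is entirely made up for illustration. It approximates the cumulative drug dose, the integral of the rate, by summing many thin rectangles, which is precisely the summation Leibniz’s symbol stands for.

# A minimal sketch of the "long-necked S" idea. The infusion-rate function below is
# invented purely for illustration; the cumulative dose over the first t_end hours is
# the integral of the rate, approximated here by adding up many thin rectangles.

def infusion_rate(t_hours):
    # toy rate in mg per hour, tapering off over time (an assumption, not real dosing)
    return 50.0 * 0.8 ** t_hours

def cumulative_dose(t_end, steps=100_000):
    dt = t_end / steps
    return sum(infusion_rate(i * dt) * dt for i in range(steps))

print(f"dose over 6 hours:  about {cumulative_dose(6.0):.1f} mg")
print(f"dose over 12 hours: about {cumulative_dose(12.0):.1f} mg")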
There is a related video and more at:
http://www.nytimes.com/2010/05/09/magaz ... ?th&emc=th
May 3, 2010
The Moral Life of Babies
By PAUL BLOOM
Not long ago, a team of researchers watched a 1-year-old boy take justice into his own hands. The boy had just seen a puppet show in which one puppet played with a ball while interacting with two other puppets. The center puppet would slide the ball to the puppet on the right, who would pass it back. And the center puppet would slide the ball to the puppet on the left . . . who would run away with it. Then the two puppets on the ends were brought down from the stage and set before the toddler. Each was placed next to a pile of treats. At this point, the toddler was asked to take a treat away from one puppet. Like most children in this situation, the boy took it from the pile of the “naughty” one. But this punishment wasn’t enough — he then leaned over and smacked the puppet in the head.
This incident occurred in one of several psychology studies that I have been involved with at the Infant Cognition Center at Yale University in collaboration with my colleague (and wife), Karen Wynn, who runs the lab, and a graduate student, Kiley Hamlin, who is the lead author of the studies. We are one of a handful of research teams around the world exploring the moral life of babies.
Like many scientists and humanists, I have long been fascinated by the capacities and inclinations of babies and children. The mental life of young humans not only is an interesting topic in its own right; it also raises — and can help answer — fundamental questions of philosophy and psychology, including how biological evolution and cultural experience conspire to shape human nature. In graduate school, I studied early language development and later moved on to fairly traditional topics in cognitive development, like how we come to understand the minds of other people — what they know, want and experience.
But the current work I’m involved in, on baby morality, might seem like a perverse and misguided next step. Why would anyone even entertain the thought of babies as moral beings? From Sigmund Freud to Jean Piaget to Lawrence Kohlberg, psychologists have long argued that we begin life as amoral animals. One important task of society, particularly of parents, is to turn babies into civilized beings — social creatures who can experience empathy, guilt and shame; who can override selfish impulses in the name of higher principles; and who will respond with outrage to unfairness and injustice. Many parents and educators would endorse a view of infants and toddlers close to that of a recent Onion headline: “New Study Reveals Most Children Unrepentant Sociopaths.” If children enter the world already equipped with moral notions, why is it that we have to work so hard to humanize them?
A growing body of evidence, though, suggests that humans do have a rudimentary moral sense from the very start of life. With the help of well-designed experiments, you can see glimmers of moral thought, moral judgment and moral feeling even in the first year of life. Some sense of good and evil seems to be bred in the bone. Which is not to say that parents are wrong to concern themselves with moral development or that their interactions with their children are a waste of time. Socialization is critically important. But this is not because babies and young children lack a sense of right and wrong; it’s because the sense of right and wrong that they naturally possess diverges in important ways from what we adults would want it to be.
Smart Babies
Babies seem spastic in their actions, undisciplined in their attention. In 1762, Jean-Jacques Rousseau called the baby “a perfect idiot,” and in 1890 William James famously described a baby’s mental life as “one great blooming, buzzing confusion.” A sympathetic parent might see the spark of consciousness in a baby’s large eyes and eagerly accept the popular claim that babies are wonderful learners, but it is hard to avoid the impression that they begin as ignorant as bread loaves. Many developmental psychologists will tell you that the ignorance of human babies extends well into childhood. For many years the conventional view was that young humans take a surprisingly long time to learn basic facts about the physical world (like that objects continue to exist once they are out of sight) and basic facts about people (like that they have beliefs and desires and goals) — let alone how long it takes them to learn about morality.
I am admittedly biased, but I think one of the great discoveries in modern psychology is that this view of babies is mistaken.
More at the above-mentioned link.
*******
Also there are a series of videos on the early childhood educational potential at:
http://www.iahp.org/Vide.376.0.html
May 27, 2010, 9:23 pm
Baby Steps to New Life-Forms
By OLIVIA JUDSON
Olivia Judson on the influence of science and biology on modern life.
Tags:
biology, DNA, genomes, Intelligent Design, synthetic biology
Intelligent design. That’s one goal of synthetic biology, a field that was catapulted into the news last week with the announcement that a group of biologists had manufactured a genome that exists nowhere in nature and inserted it into a bacterial cell. The dream is that, one day, we’ll be able to sit and think about what sort of life-form we’d like to make — and then design and build it in much the same way we make a bridge or a car.
Realizing this dream is still some way off. But before I get to that, let me briefly describe the state of play.
Synthetic biology is predicated on the fact that, to a large extent, organisms can be broken down into a set of parts. For example, the information contained in DNA comes in discrete chunks — namely, genes. Genes contain the instructions for making proteins — molecules that come in different shapes and sizes — as well as information about where and when those proteins should be used. Proteins interact with each other, driving many of the functions of the cell.
Some genes are essential: without the proteins they encode, the organism cannot exist. But many genes are “optional” — in the laboratory at least, the organism gets on fine without them.
Likewise, having an “extra” gene or two isn’t usually a problem. Already, we have exploited this fact to insert new genes into organisms as diverse as petunias and goats. Since the 1980s, human insulin has been mass-produced by bacterial cells genetically engineered to make it.
Already, we have improved on nature to create versions of genes and proteins that do not exist in the wild. Green fluorescent protein, for example, is naturally made by jellyfish; we humans have altered the gene so that the protein has a stronger fluorescence and occurs in other colors. It is now an essential tool in cell biology, as the fluorescence allows cells, genes and proteins, to be tagged and their activities monitored.
And, recently, we have begun to build genomes in the laboratory. The first to be made, eight years ago, was poliovirus. Then it became possible to make synthetic copies of existing bacterial genomes. Now, with the results published last week, we can begin to manufacture genomes for bacteria that do not exist in nature.
The difficulties, however, remain great. Last week’s announcement, while an enormous and complex technical achievement, was a baby step toward designer life, not a giant leap. The resulting bacterium is little different from a bacterium that already exists. The principal difference is that its DNA carries some “watermarks” — special sequences — that identify it as having been made, not evolved.
One problem with creating life from the drawing board is that evolved biological systems are complex, and often behave in ways we cannot (thus far) predict. Although we can specify the DNA sequence to make a particular protein, we cannot always predict what the protein will look like or how it will interact with other proteins in the cell. Also, to a large extent, biological systems are not standardized: Yes, we have become good at making DNA, but we do not yet have a “basic” cell, into which everything else can be slotted. In short, while we can copy genomes, and edit them lightly, we are a long way from writing one from scratch.
Although we cannot yet express ourselves fluently in nature’s genetic language, however, there is the tantalizing possibility that we might one day write our own. We have begun to engineer proteins that include components that do not normally occur in living beings, and we are starting to build molecules that resemble DNA in terms of their capacity to store information, but that can be read differently by the machinery of the cell. This would allow us to create a “second nature” — a set of organisms that use a different genetic language, and cannot readily interact with the life-forms that evolved in the wild.
What I love about this is that the process of inventing a new genetic language will help us to understand more about the one that actually evolved. Indeed, this has already begun. Early attempts to manufacture DNA alternatives quickly revealed that the “bannisters” of the double helix — the chains that run down the outside of the molecule — are more essential to how the molecule works than anyone had thought.
There are many ways we could use designer organisms, some good and some bad. But the most fundamental aspect of the enterprise is that by trying to build life, we gain a more profound understanding of its evolved nature.
Notes and more at:
http://opinionator.blogs.nytimes.com/20 ... n&emc=tyb1
June 7, 2010
History for Dollars
By DAVID BROOKS
When the going gets tough, the tough take accounting. When the job market worsens, many students figure they can’t indulge in an English or a history major. They have to study something that will lead directly to a job.
So it is almost inevitable that over the next few years, as labor markets struggle, the humanities will continue their long slide. There already has been a nearly 50 percent drop in the portion of liberal arts majors over the past generation, and that trend is bound to accelerate. Once the stars of university life, humanities now play bit roles when prospective students take their college tours. The labs are more glamorous than the libraries.
But allow me to pause for a moment and throw another sandbag on the levee of those trying to resist this tide. Let me stand up for the history, English and art classes, even in the face of today’s economic realities.
Studying the humanities improves your ability to read and write. No matter what you do in life, you will have a huge advantage if you can read a paragraph and discern its meaning (a rarer talent than you might suppose). You will have enormous power if you are the person in the office who can write a clear and concise memo.
Studying the humanities will give you a familiarity with the language of emotion. In an information economy, many people have the ability to produce a technical innovation: a new MP3 player. Very few people have the ability to create a great brand: the iPod. Branding involves the location and arousal of affection, and you can’t do it unless you are conversant in the language of romance.
Studying the humanities will give you a wealth of analogies. People think by comparison — Iraq is either like Vietnam or Bosnia; your boss is like Narcissus or Solon. People who have a wealth of analogies in their minds can think more precisely than those with few analogies. If you go through college without reading Thucydides, Herodotus and Gibbon, you’ll have been cheated out of a great repertoire of comparisons.
Finally, and most importantly, studying the humanities helps you befriend The Big Shaggy.
Let me try to explain. Over the past century or so, people have built various systems to help them understand human behavior: economics, political science, game theory and evolutionary psychology. These systems are useful in many circumstances. But none completely explain behavior because deep down people have passions and drives that don’t lend themselves to systemic modeling. They have yearnings and fears that reside in an inner beast you could call The Big Shaggy.
You can see The Big Shaggy at work when a governor of South Carolina suddenly chucks it all for a love voyage south of the equator, or when a smart, philosophical congressman from Indiana risks everything for an in-office affair.
You can see The Big Shaggy at work when self-destructive overconfidence overtakes oil engineers in the gulf, when go-go enthusiasm intoxicates investment bankers or when bone-chilling distrust grips politics.
Those are the destructive sides of The Big Shaggy. But this tender beast is also responsible for the mysterious but fierce determination that drives Kobe Bryant, the graceful bemusement the Detroit Tigers pitcher Armando Galarraga showed when his perfect game slipped away, the selfless courage soldiers in Afghanistan show when they risk death for buddies or a family they may never see again.
The observant person goes through life asking: Where did that come from? Why did he or she act that way? The answers are hard to come by because the behavior emanates from somewhere deep inside The Big Shaggy.
Technical knowledge stops at the outer edge. If you spend your life riding the links of the Internet, you probably won’t get too far into The Big Shaggy either, because the fast, effortless prose of blogging (and journalism) lacks the heft to get you deep below.
But over the centuries, there have been rare and strange people who possessed the skill of taking the upheavals of thought that emanate from The Big Shaggy and representing them in the form of story, music, myth, painting, liturgy, architecture, sculpture, landscape and speech. These men and women developed languages that help us understand these yearnings and also educate and mold them. They left rich veins of emotional knowledge that are the subjects of the humanities.
It’s probably dangerous to enter exclusively into this realm and risk being caught in a cloister, removed from the market and its accountability. But doesn’t it make sense to spend some time in the company of these languages — learning to feel different emotions, rehearsing different passions, experiencing different sacred rituals and learning to see in different ways?
Few of us are hewers of wood. We navigate social environments. If you’re dumb about The Big Shaggy, you’ll probably get eaten by it.
http://www.nytimes.com/2010/06/08/opini ... ?th&emc=th
July 8, 2010
The Medium Is the Medium
By DAVID BROOKS
Recently, book publishers got some good news. Researchers gave 852 disadvantaged students 12 books (of their own choosing) to take home at the end of the school year. They did this for three successive years.
Then the researchers, led by Richard Allington of the University of Tennessee, looked at those students’ test scores. They found that the students who brought the books home had significantly higher reading scores than other students. These students were less affected by the “summer slide” — the decline that especially afflicts lower-income students during the vacation months. In fact, just having those 12 books seemed to have as much positive effect as attending summer school.
This study, along with many others, illustrates the tremendous power of books. We already knew, from research in 27 countries, that kids who grow up in a home with 500 books stay in school longer and do better. This new study suggests that introducing books into homes that may not have them also produces significant educational gains.
Recently, Internet mavens got some bad news. Jacob Vigdor and Helen Ladd of Duke’s Sanford School of Public Policy examined computer use among a half-million 5th through 8th graders in North Carolina. They found that the spread of home computers and high-speed Internet access was associated with significant declines in math and reading scores.
This study, following up on others, finds that broadband access is not necessarily good for kids and may be harmful to their academic performance. And this study used data from 2000 to 2005, before Twitter and Facebook took off.
These two studies feed into the debate that is now surrounding Nicholas Carr’s book, “The Shallows.” Carr argues that the Internet is leading to a short-attention-span culture. He cites a pile of research showing that the multidistraction, hyperlink world degrades people’s abilities to engage in deep thought or serious contemplation.
Carr’s argument has been challenged. His critics point to evidence that suggests that playing computer games and performing Internet searches actually improves a person’s ability to process information and focus attention. The Internet, they say, is a boon to schooling, not a threat.
But there was one interesting observation made by a philanthropist who gives books to disadvantaged kids. It’s not the physical presence of the books that produces the biggest impact, she suggested. It’s the change in the way the students see themselves as they build a home library. They see themselves as readers, as members of a different group.
The Internet-versus-books debate is conducted on the supposition that the medium is the message. But sometimes the medium is just the medium. What matters is the way people think about themselves while engaged in the two activities. A person who becomes a citizen of the literary world enters a hierarchical universe. There are classic works of literature at the top and beach reading at the bottom.
A person enters this world as a novice, and slowly studies the works of great writers and scholars. Readers immerse themselves in deep, alternative worlds and hope to gain some lasting wisdom. Respect is paid to the writers who transmit that wisdom.
A citizen of the Internet has a very different experience. The Internet smashes hierarchy and is not marked by deference. Maybe it would be different if it had been invented in Victorian England, but Internet culture is set in contemporary America. Internet culture is egalitarian. The young are more accomplished than the old. The new media is supposedly savvier than the old media. The dominant activity is free-wheeling, disrespectful, antiauthority disputation.
These different cultures foster different types of learning. The great essayist Joseph Epstein once distinguished between being well informed, being hip and being cultivated. The Internet helps you become well informed — knowledgeable about current events, the latest controversies and important trends. The Internet also helps you become hip — to learn about what’s going on, as Epstein writes, “in those lively waters outside the boring mainstream.”
But the literary world is still better at helping you become cultivated, mastering significant things of lasting import. To learn these sorts of things, you have to defer to greater minds than your own. You have to take the time to immerse yourself in a great writer’s world. You have to respect the authority of the teacher.
Right now, the literary world is better at encouraging this kind of identity. The Internet culture may produce better conversationalists, but the literary culture still produces better students.
It’s better at distinguishing the important from the unimportant, and making the important more prestigious.
Perhaps that will change. Already, more “old-fashioned” outposts are opening up across the Web. It could be that the real debate will not be books versus the Internet but how to build an Internet counterculture that will better attract people to serious learning.
http://www.nytimes.com/2010/07/09/opinion/09brooks.html
July 22, 2010
The Moral Naturalists
By DAVID BROOKS
Washington, Conn.
Where does our sense of right and wrong come from? Most people think it is a gift from God, who revealed His laws and elevates us with His love. A smaller number think that we figure the rules out for ourselves, using our capacity to reason and choosing a philosophical system to live by.
Moral naturalists, on the other hand, believe that we have moral sentiments that have emerged from a long history of relationships. To learn about morality, you don’t rely upon revelation or metaphysics; you observe people as they live.
This week a group of moral naturalists gathered in Connecticut at a conference organized by the Edge Foundation. One of the participants, Marc Hauser of Harvard, began his career studying primates, and for moral naturalists the story of our morality begins back in the evolutionary past. It begins with the way insects, rats and monkeys learned to cooperate.
By the time humans came around, evolution had forged a pretty firm foundation for a moral sense. Jonathan Haidt of the University of Virginia argues that this moral sense is like our sense of taste. We have natural receptors that help us pick up sweetness and saltiness. In the same way, we have natural receptors that help us recognize fairness and cruelty. Just as a few universal tastes can grow into many different cuisines, a few moral senses can grow into many different moral cultures.
Paul Bloom of Yale noted that this moral sense can be observed early in life. Bloom and his colleagues conducted an experiment in which they showed babies a scene featuring one figure struggling to climb a hill, another figure trying to help it, and a third trying to hinder it.
As early as six months, the babies showed a preference for the helper over the hinderer. In some plays, there is a second act: the hindering figure is either punished or rewarded. In this case, 8-month-olds preferred a character who punished the hinderer over one who was nice to it.
This illustrates, Bloom says, that people have a rudimentary sense of justice from a very early age. This doesn’t make people naturally good. If you give a 3-year-old two pieces of candy and ask him if he wants to share one of them, he will almost certainly say no. It’s not until age 7 or 8 that even half the children are willing to share. But it does mean that social norms fall upon prepared ground. We come equipped to learn fairness and other virtues.
These moral faculties structure the way we perceive and respond to the world. If you ask for donations with the photo and name of one sick child, you are likely to get twice as much money as if you had asked for donations with a photo and the names of eight children. Our minds respond more powerfully to the plight of an individual than to the plight of a group.
These moral faculties rely upon emotional, intuitive processes, for good and ill. If you are in a bad mood you will make harsher moral judgments than if you’re in a good mood or have just seen a comedy. As Elizabeth Phelps of New York University points out, feelings of disgust will evoke a desire to expel things, even those things unrelated to your original mood. General fear makes people risk-averse. Anger makes them risk-seeking.
People who behave morally don’t generally do it because they have greater knowledge; they do it because they have a greater sensitivity to other people’s points of view. Hauser reported on research showing that bullies are surprisingly sophisticated at reading other people’s intentions, but they’re not good at anticipating and feeling other people’s pain.
The moral naturalists differ over what role reason plays in moral judgments. Some, like Haidt, believe that we make moral judgments intuitively and then construct justifications after the fact. Others, like Joshua Greene of Harvard, liken moral thinking to a camera. Most of the time we rely on the automatic point-and-shoot process, but occasionally we use deliberation to override the quick and easy method. We certainly tell stories and have conversations to spread and refine moral beliefs.
For people wary of abstract theorizing, it’s nice to see people investigating morality in ways that are concrete and empirical. But their approach does have certain implicit tendencies.
They emphasize group cohesion over individual dissent. They emphasize the cooperative virtues, like empathy, over the competitive virtues, like the thirst for recognition and superiority. At this conference, they barely mentioned the yearning for transcendence and the sacred, which plays such a major role in every human society.
Their implied description of the moral life is gentle, fair and grounded. But it is all lower case. So far, at least, it might not satisfy those who want their morality to be awesome, formidable, transcendent or great.
http://www.nytimes.com/2010/07/23/opini ... &th&emc=th
August 15, 2010, 5:30 pm
Reclaiming the Imagination
By TIMOTHY WILLIAMSON
http://opinionator.blogs.nytimes.com/20 ... ?th&emc=th
The Stone is a forum for contemporary philosophers on issues both timely and timeless.
Imagine being a slave in ancient Rome. Now remember being one. The second task, unlike the first, is crazy. If, as I’m guessing, you never were a slave in ancient Rome, it follows that you can’t remember being one — but you can still let your imagination rip. With a bit of effort one can even imagine the impossible, such as discovering that Dick Cheney and Madonna are really the same person. It sounds like a platitude that fiction is the realm of imagination, fact the realm of knowledge.
Why did humans evolve the capacity to imagine alternatives to reality? Was story-telling in prehistoric times like the peacock’s tail, of no direct practical use but a good way of attracting a mate? It kept Scheherazade alive through those one thousand and one nights — in the story.
On further reflection, imagining turns out to be much more reality-directed than the stereotype implies. If a child imagines the life of a slave in ancient Rome as mainly spent watching sports on TV, with occasional household chores, they are imagining it wrong. That is not what it was like to be a slave. The imagination is not just a random idea generator. The test is how close you can come to imagining the life of a slave as it really was, not how far you can deviate from reality.
A reality-directed faculty of imagination has clear survival value. By enabling you to imagine all sorts of scenarios, it alerts you to dangers and opportunities. You come across a cave. You imagine wintering there with a warm fire — opportunity. You imagine a bear waking up inside — danger. Having imagined possibilities, you can take account of them in contingency planning. If a bear is in the cave, how do you deal with it? If you winter there, what do you do for food and drink? Answering those questions involves more imagining, which must be reality-directed. Of course, you can imagine kissing the angry bear as it emerges from the cave so that it becomes your lifelong friend and brings you all the food and drink you need. Better not to rely on such fantasies. Instead, let your imaginings develop in ways more informed by your knowledge of how things really happen.
Constraining imagination by knowledge does not make it redundant. We rarely know an explicit formula that tells us what to do in a complex situation. We have to work out what to do by thinking through the possibilities in ways that are simultaneously imaginative and realistic, and not less imaginative when more realistic. Knowledge, far from limiting imagination, enables it to serve its central function.
To go further, we can borrow a distinction from the philosophy of science, between contexts of discovery and contexts of justification. In the context of discovery, we get ideas, no matter how — dreams or drugs will do. Then, in the context of justification, we assemble objective evidence to determine whether the ideas are correct. On this picture, standards of rationality apply only to the context of justification, not to the context of discovery. Those who downplay the cognitive role of the imagination restrict it to the context of discovery, excluding it from the context of justification. But they are wrong. Imagination plays a vital role in justifying ideas as well as generating them in the first place.
Your belief that you will not be visible from inside the cave if you crouch behind that rock may be justified because you can imagine how things would look from inside. To change the example, what would happen if all NATO forces left Afghanistan by 2011? What will happen if they don’t? Justifying answers to those questions requires imaginatively working through various scenarios in ways deeply informed by knowledge of Afghanistan and its neighbors. Without imagination, one couldn’t get from knowledge of the past and present to justified expectations about the complex future. We also need it to answer questions about the past. Were the Rosenbergs innocent? Why did Neanderthals become extinct? We must develop the consequences of competing hypotheses with disciplined imagination in order to compare them with the available evidence. In drawing out a scenario’s implications, we apply much of the same cognitive apparatus whether we are working online, with input from sense perception, or offline, with input from imagination.
Even imagining things contrary to our knowledge contributes to the growth of knowledge, for example in learning from our mistakes. Surprised at the bad outcomes of our actions, we may learn how to do better by imagining what would have happened if we had acted differently from how we know only too well we did act.
In science, the obvious role of imagination is in the context of discovery. Unimaginative scientists don’t produce radically new ideas. But even in science imagination plays a role in justification too. Experiment and calculation cannot do all its work. When mathematical models are used to test a conjecture, choosing an appropriate model may itself involve imagining how things would go if the conjecture were true. Mathematicians typically justify their fundamental axioms, in particular those of set theory, by informal appeals to the imagination.
Sometimes the only honest response to a question is “I don’t know.” In recognizing that, one may rely just as much on imagination, because one needs it to determine that several competing hypotheses are equally compatible with one’s evidence.
The lesson is not that all intellectual inquiry deals in fictions. That is just to fall back on the crude stereotype of the imagination, from which it needs reclaiming. A better lesson is that imagination is not only about fiction: it is integral to our painful progress in separating fiction from fact. Although fiction is a playful use of imagination, not all uses of imagination are playful. Like a cat’s play with a mouse, fiction may both emerge as a by-product of un-playful uses and hone one’s skills for them.
Critics of contemporary philosophy sometimes complain that in using thought experiments it loses touch with reality. They complain less about Galileo and Einstein’s thought experiments, and those of earlier philosophers. Plato explored the nature of morality by asking how you would behave if you possessed the ring of Gyges, which makes the wearer invisible. Today, if someone claims that science is by nature a human activity, we can refute them by imaginatively appreciating the possibility of extra-terrestrial scientists. Once imagining is recognized as a normal means of learning, contemporary philosophers’ use of such techniques can be seen as just extraordinarily systematic and persistent applications of our ordinary cognitive apparatus. Much remains to be understood about how imagination works as a means to knowledge — but if it didn’t work, we wouldn’t be around now to ask the question.
--------------------------------------------------------------------------------
Timothy Williamson is the Wykeham Professor of Logic at Oxford University, a Fellow of the British Academy and a Foreign Honorary Member of the American Academy of Arts and Sciences. He has been a visiting professor at M.I.T. and Princeton. His books include “Vagueness” (1994), “Knowledge and its Limits” (2000) and “The Philosophy of Philosophy” (2007).
http://www.nytimes.com/2010/08/24/opini ... &th&emc=th
August 23, 2010
A Case of Mental Courage
By DAVID BROOKS
In 1811, the popular novelist Fanny Burney learned she had breast cancer and underwent a mastectomy without anesthesia. She lay down on an old mattress, and a piece of thin linen was placed over her face, allowing her to make out the movements of the surgeons above her.
“I felt the instrument — describing a curve — cutting against the grain, if I may so say, while the flesh resisted in a manner so forcible as to oppose & tire the hand of the operator who was forced to change from the right to the left,” she wrote later.
“I began a scream that lasted intermittingly during the whole time of the incision — & I almost marvel that it rings not in my ears still.” The surgeon removed most of the breast but then had to go in a few more times to complete the work: “I then felt the Knife rackling against the breast bone — scraping it! This performed while I yet remained in utterly speechless torture.”
The operation was ghastly, but Burney’s real heroism came later. She could have simply put the horror behind her, but instead she resolved to write down everything that had happened. This proved horrifically painful. “Not for days, not for weeks, but for months I could not speak of this terrible business without nearly again going through it!” Six months after the operation she finally began to write her account.
It took her three months to put down a few thousand words. She suffered headaches as she picked up her pen and began remembering. “I dare not revise, nor read, the recollection is still so painful,” she confessed. But she did complete it. She seems to have regarded the exercise as a sort of mental boot camp — an arduous but necessary ordeal if she hoped to be a person of character and courage.
Burney’s struggle reminds one that character is not only moral, it is also mental. Heroism exists not only on the battlefield or in public but also inside the head, in the ability to face unpleasant thoughts.
She lived at a time when people were more conscious of the fallen nature of men and women. People were held to be inherently sinful, and to be a decent person one had to struggle against one’s weakness.
In the mental sphere, this meant conquering mental laziness with arduous and sometimes numbingly boring lessons. It meant conquering frivolity by sitting through earnest sermons and speeches. It meant conquering self-approval by staring straight at what was painful.
This emphasis on mental character lasted for a time, but it has abated. There’s less talk of sin and frailty these days. Capitalism has also undermined this ethos. In the media competition for eyeballs, everyone is rewarded for producing enjoyable and affirming content. Output is measured by ratings and page views, so much of the media, and even the academy, is more geared toward pleasuring consumers, not putting them on some arduous character-building regime.
In this atmosphere, we’re all less conscious of our severe mental shortcomings and less inclined to be skeptical of our own opinions. Occasionally you surf around the Web and find someone who takes mental limitations seriously. For example, Charlie Munger of Berkshire Hathaway once gave a speech called “The Psychology of Human Misjudgment.” He and others list our natural weaknesses: We have confirmation bias; we pick out evidence that supports our views. We are cognitive misers; we try to think as little as possible. We are herd thinkers and conform our perceptions to fit in with the group.
But, in general, the culture places less emphasis on the need to struggle against one’s own mental feebleness. Today’s culture is better in most ways, but in this way it is worse.
The ensuing mental flabbiness is most evident in politics. Many conservatives declare that Barack Obama is a Muslim because it feels so good to say so. Many liberals would never ask themselves why they were so wrong about the surge in Iraq while George Bush was so right. The question is too uncomfortable.
There’s a seller’s market in ideologies that gives people a chance to feel victimized. There’s a rigidity to political debate. Issues like tax cuts and the size of government, which should be shaped by circumstances (often it’s good to cut taxes; sometimes it’s necessary to raise them), are now treated as inflexible tests of tribal purity.
To use a fancy word, there’s a metacognition deficit. Very few in public life habitually step back and think about the weakness in their own thinking and what they should do to compensate. A few people I interview do this regularly (in fact, Larry Summers is one). But it is rare. The rigors of combat discourage it.
Of the problems that afflict the country, this is the underlying one.
Excerpt from MHI's speech:
"Yesterday, I visited the magnificent new Aga Khan educational institution. I was shown enough of its work to convince me that this school compared with the finest in the world. One event which I witnessed was a boxing match between two Ismaili boys -- one African, one Asian. I saw a good fight and at the end I think each of them thought he had won. Perhaps both were right!
To me, this friendly contest reflected something of tremendous importance to our community. It reflected first the qualities of determination and endurance which are demanded by our faith. These qualities are also necessary to the future leaders of the community and for the country as a whole.
At the end of this sporting event, the two boys shook hands and stood together to be photographed. To me this symbolized the partnership between different races which I am convinced is the only condition of peace and prosperity."(Takht Nashini, Kampala, 25.10.1957)
http://www.ismaili.net/speech/s571025.html
September 15, 2010, 9:00 pm
Boxing Lessons
By GORDON MARINO
The Stone is a forum for contemporary philosophers on issues both timely and timeless.
I offer training in both philosophy and boxing. Over the years, some of my colleagues have groused that my work is a contradiction, building minds and cultivating rational discourse while teaching violence and helping to remove brain cells. Truth be told, I think philosophers with this gripe should give some thought to what really counts as violence. I would rather take a punch in the nose any day than be subjected to some of the attacks that I have witnessed in philosophy colloquia. However, I have a more positive case for including boxing in my curriculum for sentimental education.
Western philosophy, even before Descartes’ influential case for a mind-body dualism, has been dismissive of the body. Plato — even though he competed as a wrestler — and most of the sages who followed him, taught us to think of our arms and legs as nothing but a poor carriage for the mind. In “Phaedo,” Plato presents his teacher Socrates on his deathbed as a sort of Mr. Spock yearning to be free from the shackles of the flesh so he can really begin thinking seriously. In this account, the body gives rise to desires that will not listen to reason and that becloud our ability to think clearly.
In much of Eastern philosophy, in contrast, the search for wisdom is more holistic. The body is considered inseparable from the mind, and is regarded as a vehicle, rather than an impediment, to enlightenment. The unmindful attitude towards the body so prevalent in the West blinkers us to profound truths that the skin, muscles and breath can deliver like a punch.
While different physical practices may open us to different truths, there is a lot of wisdom to be gained in the ring. Socrates, of course, maintained that the unexamined life was not worth living, that self-knowledge is of supreme importance. One thing is certain: boxing can compel a person to take a quick self-inventory and gut check about what he or she is willing to endure and risk. As Joyce Carol Oates observes in her minor classic, “On Boxing”:
Boxers are there to establish an absolute experience, a public accounting of the outermost limits of their beings; they will know, as few of us can know of ourselves, what physical and psychic power they possess — of how much, or how little, they are capable.
Though the German idealist philosopher G.W.F. Hegel (1770-1831) never slipped on the gloves, I think he would have at least supported the study of the sweet science. In his famous Lord and Bondsman allegory,[1] Hegel suggests that it is in mortal combat with the other, and ultimately in our willingness to give up our lives, that we rise to a higher level of freedom and consciousness. If Hegel is correct, the lofty image that the warrior holds in our society has something to do with the fact that in her willingness to sacrifice her own life, she has escaped the otherwise universal choke hold of death anxiety. Boxing can be seen as a stylized version of Hegel’s proverbial trial by battle and as such affords new possibilities of freedom and selfhood.
Viewed purely psychologically, practice in what used to be termed the “manly art” makes people feel more at home in themselves, and so less defensive and perhaps less aggressive. The way we cope with the elemental feelings of anger and fear determines to no small extent what kind of person we will become. Enlisting Aristotle, I shall have more to say about fear in a moment, but I don’t think it takes a Freud to recognize that many people are mired in their own bottled up anger. In our society, expressions of anger are more taboo than libidinal impulses. Yet, as our entertainment industry so powerfully bears out, there is plenty of fury to go around. I have trained boxers, often women, who find it extremely liberating to learn that they can strike out, throw a punch, express some rage, and that no one is going to die as a result.
And let’s be clear, life is filled with blows. It requires toughness and resiliency. There are few better places than the squared circle to receive concentrated lessons in the dire need to be able to absorb punishment and carry on, “to get off the canvas” and “roll with the punches.” It is little wonder that boxing, more than any other sport, has functioned as a metaphor for life. Aside from the possibilities for self-fulfillment, boxing can also contribute to our moral lives.
In his “Nicomachean Ethics,” Aristotle argues that the final end for human beings is eudaimonia — the good life, or as it is most often translated, happiness. In an immortal sentence Aristotle announces, “The Good of man (eudaimonia) is the active exercise of his soul’s faculties in conformity with excellence or virtue, or if there be several human excellences or virtues, in conformity with the best and most perfect among them.”[2]
A few pages later, Aristotle acknowledges that there are in fact two kinds of virtue or excellence, namely, intellectual and moral.[3] Intellectual excellence is simple book learning, or theoretical smarts. Unlike his teacher Plato and his teacher’s teacher, Socrates, Aristotle recognized that a person could know a great deal about the Good and not lead a good life. “With regard to excellence,” says Aristotle, “it is not enough to know, but we must try to have and use it.” [4]
Aristotle offers a table of the moral virtues that includes, among other qualities, temperance, justice, pride, friendliness and truthfulness. Each semester when I teach ethics, I press my students to generate their own list of the moral virtues. “What,” I ask, “are the traits that you connect with having character?” Tolerance, kindness, self-respect and creativity always make it onto the board, but it is usually only with prodding that courage gets a nod. And yet, courage seems absolutely essential to leading a moral life. After all, if you do not have mettle, you will not be able to abide by your moral judgments. Doing the right thing often demands going down the wrong side of the road of our immediate and long-range self-interests. It frequently involves sacrifices that we do not much care for, sometimes of friendships, or jobs; sometimes, as in the case of Socrates, even of our lives. Making these sacrifices is impossible without courage.
According to Aristotle, courage is a mean between rashness and cowardliness;[5] that is, between having too little trepidation and too much. Aristotle reckoned that in order to be able to hit the mean, we need practice in dealing with the emotions and choices corresponding to that virtue. So far as developing grit is concerned, it helps to get some swings at dealing with manageable doses of fear. And yet, even in our approach to education, many of us tend to think of anything that causes a shiver as traumatic. Consider, for example, the demise of dodge ball in public schools. It was banned because of the terror that the flying red balls caused in some children and of the damage to self-esteem that might come with always being the first one knocked out of the game. But how are we supposed to learn to stand up to our fears if we never have any supervised practice in dealing with the jitters? Of course, our young people are very familiar with aggressive and often gruesome video games that simulate physical harm and self-defense, but without, of course, any of the consequences and risks that might come with putting on the gloves.
Boxing provides practice with fear and, with the right, attentive supervision, in quite manageable increments. In their first sparring session, boxers usually erupt in “fight or flight” mode. When the bell rings, novices forget everything they have learned and simply flail away. If they stick with it for a few months, their fears diminish; they can begin to see things in the ring that their emotions blinded them to before. More importantly, they become more at home with feeling afraid. Fear is painful, but it can be faced, and in time a boxer learns not to panic about the blows that will be coming his way.
While Aristotle is able to define courage, the study and practice of boxing can enable us not only to comprehend courage, but “to have and use” it. By getting into the ring with our fears, we will be less likely to succumb to trepidation when doing the right thing demands taking a hit. To be sure, there is an important difference between physical and moral courage. After all, the world has seen many a brave monster. The willingness to endure physical risks is not enough to guarantee uprightness; nevertheless, it can, I think, contribute in powerful ways to the development of moral virtue.
NOTES
[1] G.W.F. Hegel, “Phenomenology of Spirit,” Chapter 4.
[2] Aristotle, “Nicomachean Ethics,” Book I, Chapter 7.
[3] ibid., Book I, Chapter 13.
[4] ibid, Book X, Chapter 9.
[5] ibid, Book III, Chapter 7.
--------------------------------------------------------------------------------
Gordon Marino is an active boxing trainer and professor of philosophy at St. Olaf College. He covers boxing for the Wall Street Journal, is the editor of “Ethics: The Essential Writings” (Modern Library Classics, 2010) and is at work on a book about boxing and philosophy.
http://opinionator.blogs.nytimes.com/20 ... ?th&emc=th
"Yesterday, I visited the magnificent new Aga Khan educational institution. I was shown enough of its work to convince me that this school compared with the finest in the world. One event which I witnessed was a boxing match between two Ismaili boys -- one African, one Asian. I saw a good fight and at the end I think each of them thought he had won. Perhaps both were right!
To me, this friendly contest reflected something of tremendous importance to our community. It reflected first the qualities of determination and endurance which are demanded by our faith. These qualities are also necessary to the future leaders of the community and for the country as a whole.
At the end of this sporting event, the two boys shook hands and stood together to be photographed. To me this symbolized the partnership between different races which I am convinced is the only condition of peace and prosperity."(Takht Nashini, Kampala, 25.10.1957)
http://www.ismaili.net/speech/s571025.html
September 15, 2010, 9:00 pm
Boxing Lessons
By GORDON MARINO
The Stone is a forum for contemporary philosophers on issues both timely and timeless.
I offer training in both philosophy and boxing. Over the years, some of my colleagues have groused that my work is a contradiction, building minds and cultivating rational discourse while teaching violence and helping to remove brain cells. Truth be told, I think philosophers with this gripe should give some thought to what really counts as violence. I would rather take a punch in the nose any day than be subjected to some of the attacks that I have witnessed in philosophy colloquia. However, I have a more positive case for including boxing in my curriculum for sentimental education.
The unmindful attitude towards the body so prevalent in the West blinkers us to profound truths that the skin, muscles and breath can deliver like a punch.
Western philosophy, even before Descartes’ influential case for a mind-body dualism, has been dismissive of the body. Plato — even though he competed as a wrestler — and most of the sages who followed him, taught us to think of our arms and legs as nothing but a poor carriage for the mind. In “Phaedo,” Plato presents his teacher Socrates on his deathbed as a sort of Mr. Spock yearning to be free from the shackles of the flesh so he can really begin thinking seriously. In this account, the body gives rise to desires that will not listen to reason and that becloud our ability to think clearly.
In much of Eastern philosophy, in contrast, the search for wisdom is more holistic. The body is considered inseparable from the mind, and is regarded as a vehicle, rather than an impediment, to enlightenment. The unmindful attitude towards the body so prevalent in the West blinkers us to profound truths that the skin, muscles and breath can deliver like a punch.
While different physical practices may open us to different truths, there is a lot of wisdom to be gained in the ring. Socrates, of course, maintained that the unexamined life was not worth living, that self-knowledge is of supreme importance. One thing is certain: boxing can compel a person to take a quick self-inventory and gut check about what he or she is willing to endure and risk. As Joyce Carol Oates observes in her minor classic, “On Boxing”:
Boxers are there to establish an absolute experience, a public accounting of the outermost limits of their beings; they will know, as few of us can know of ourselves, what physical and psychic power they possess — of how much, or how little, they are capable.
Though the German idealist philosopher G.W.F. Hegel (1770-1831) never slipped on the gloves, I think he would have at least supported the study of the sweet science. In his famous Lord and Bondsman allegory,[1] Hegel suggests that it is in mortal combat with the other, and ultimately in our willingness to give up our lives, that we rise to a higher level of freedom and consciousness. If Hegel is correct, the lofty image that the warrior holds in our society has something to do with the fact that in her willingness to sacrifice her own life, she has escaped the otherwise universal choke hold of death anxiety. Boxing can be seen as a stylized version of Hegel’s proverbial trial by battle and as such affords new possibilities of freedom and selfhood.
Viewed purely psychologically, practice in what used to be termed the “manly art” makes people feel more at home in themselves, and so less defensive and perhaps less aggressive. The way we cope with the elemental feelings of anger and fear determines to no small extent what kind of person we will become. Enlisting Aristotle, I shall have more to say about fear in a moment, but I don’t think it takes a Freud to recognize that many people are mired in their own bottled up anger. In our society, expressions of anger are more taboo than libidinal impulses. Yet, as our entertainment industry so powerfully bears out, there is plenty of fury to go around. I have trained boxers, often women, who find it extremely liberating to learn that they can strike out, throw a punch, express some rage, and that no one is going to die as a result.
And let’s be clear, life is filled with blows. It requires toughness and resiliency. There are few better places than the squared circle to receive concentrated lessons in the dire need to be able to absorb punishment and carry on, “to get off the canvas” and “roll with the punches.” It is little wonder that boxing, more than any other sport, has functioned as a metaphor for life. Aside from the possibilities for self-fulfillment, boxing can also contribute to our moral lives.
Aristotle recognized that a person could know a great deal about the Good and not lead a good life.
In his “Nicomachean Ethics,” Aristotle argues that the final end for human beings is eudaimonia ─ the good life, or as it is most often translated, happiness. In an immortal sentence Aristotle announces, “The Good of man (eudaimonia) is the active exercise of his soul’s faculties in conformity with excellence or virtue, or if there be several human excellences or virtues, in conformity with the best and most perfect among them.”[2]
A few pages later, Aristotle acknowledges that there are in fact two kinds of virtue or excellence, namely, intellectual and moral.[3] Intellectual excellence is simple book learning, or theoretical smarts. Unlike his teacher Plato and his teacher’s teacher, Socrates, Aristotle recognized that a person could know a great deal about the Good and not lead a good life. “With regard to excellence,” says Aristotle, “it is not enough to know, but we must try to have and use it.” [4]
Aristotle offers a table of the moral virtues that includes, among other qualities, temperance, justice, pride, friendliness and truthfulness. Each semester when I teach ethics, I press my students to generate their own list of the moral virtues. “What,” I ask, “are the traits that you connect with having character?” Tolerance, kindness, self-respect, creativity, always make it on to the board, but it is usually only with prodding that courage gets a nod. And yet, courage seems absolutely essential to leading a moral life. After all, if you do not have mettle, you will not be able to abide by your moral judgments. Doing the right thing often demands going down the wrong side of the road of our immediate and long-range self-interests. It frequently involves sacrifices that we do not much care for, sometimes of friendships, or jobs; sometimes, as in the case with Socrates, even of our lives. Making these sacrifices is impossible without courage.
According to Aristotle, courage is a mean between rashness and cowardliness;[5] that is, between having too little trepidation and too much. Aristotle reckoned that in order to be able to hit the mean, we need practice in dealing with the emotions and choices corresponding to that virtue. So far as developing grit is concerned, it helps to get some swings at dealing with manageable doses of fear. And yet, even in our approach to education, many of us tend to think of anything that causes a shiver as traumatic. Consider, for example, the demise of dodge ball in public schools. It was banned because of the terror that the flying red balls caused in some children and of the damage to self-esteem that might come with always being the first one knocked out of the game. But how are we supposed to learn to stand up to our fears if we never have any supervised practice in dealing with the jitters? Of course, our young people are very familiar with aggressive and often gruesome video games that simulate physical harm and self-defense, but without, of course, any of the consequences and risks that might come with putting on the gloves.
Boxing provides practice with fear and with the right, attentive supervision, in quite manageable increments. In their first sparring session, boxers usually erupt in “fight or flight” mode. When the bell rings, novices forget everything they have learned and simply flail away. If they stick with it for a few months, their fears diminish; they can begin to see things in the ring that their emotions blinded them to before. More importantly, they become more at home with feeling afraid. Fear is painful, but it can be faced, and in time a boxer learns not to panic about the blows that will be coming his way.
While Aristotle is able to define courage, the study and practice of boxing can enable us to not only comprehend courage, but “to have and use” it. By getting into the ring with our fears, we will be less likely to succumb to trepidation when doing the right thing demands taking a hit. To be sure, there is an important difference between physical and moral courage. After all, the world has seen many a brave monster. The willingness to endure physical risks is not enough to guarantee uprightness; nevertheless, it can, I think contribute in powerful ways to the development of moral virtue.
NOTES
[1] G.W.F. Hegel, “Phenomenology of Spirit,” Chapter 4.
[2] Aristotle, “Nicomachean Ethics,” Book I, Chapter 7.
[3] ibid., Book I, Chapter 13.
[4] ibid, Book X, Chapter 9.
[5] ibid, Book III, Chapter 7.
--------------------------------------------------------------------------------
Gordon Marino is an active boxing trainer and professor of philosophy at St. Olaf College. He covers boxing for the Wall Street Journal, is the editor of “Ethics:The Essential Writings” (Modern Library Classics, 2010) and is at work on a book about boxing and philosophy.
http://opinionator.blogs.nytimes.com/20 ... ?th&emc=th
Reflections on the Scope of Knowledge in Islam: The Interdependence of the Spiritual and Material Dimensions
By Ghulam Abbas Hunzai
The Holy Qur’an’s encouragement to study nature and the physical world around us gave the original impetus to scientific enquiry among Muslims. Exchanges of knowledge between institutions and nations and the widening of man’s intellectual horizons are essentially Islamic concepts. The faith urges freedom of intellectual enquiry and this freedom does not mean that knowledge will lose its spiritual dimension. That dimension is indeed itself a field for intellectual enquiry.
I cannot illustrate this interdependence of spiritual inspiration and learning better than by recounting a dialogue between Ibn Sina, the philosopher, and Abu Said Abul-Khayr, the Sufi mystic. Ibn Sina remarked, ‘Whatever I know, He sees’, to which Abu Said replied, ‘Whatever I see, He knows’.
(An excerpt from the speech delivered by His Highness the Aga Khan at the inauguration of the Aga Khan University in Karachi on November 11, 1985)
QUR’AN AS THE SOURCE OF GUIDANCE
The ‘Scope of Knowledge’ is only one aspect of the vast subject of Islamic epistemology, which includes the origins, nature, forms, means and validity of knowledge. This article seeks to focus mainly on the ‘scope of knowledge’ in Islam. The discussion of knowledge is followed by certain observations which locate the subject in its historical context. The discourse leads to certain conclusions pertaining to knowledge and education.
Islam firmly believes in the possibility of the attainment of valid knowledge; scepticism can therefore be dismissed at the very outset. Once the possibility of knowledge is established, the next logical step, perhaps, is to determine its boundaries or scope. The major source for attaining an understanding of the ‘scope of knowledge’ in Islam is the Holy Qur’an and its subsequent authentic interpretations. The following discussion is, therefore, based mostly on these sources.
The framework for the perception of knowledge is the presence of the knower, the knowable and the means (senses) through which the knower knows the knowable. We shall look into the Qur’an keeping this primary frame of knowledge-situation in mind.
A proper reflection on the Qur’an convinces the reader that it emphatically appeals to the human faculty of cognition to reflect, observe, consider, comprehend, meditate, contemplate and realise. The seat where this reflective activity takes place is called the heart (qalb) or (rational) soul (nafs). Thus the rational soul is the knower or recipient of the knowledge. The Qur’an appeals to the faculty of understanding to reflect and understand, for, if this faculty is not activated, man plunges himself into an emotionally destabilised state. And emotionally guided perceptions do not give the true picture of things as they exist.
IMPORTANCE OF EMPIRICAL KNOWLEDGE IN ISLAM
The Qur’anic emphasis on the act of reflection encourages the intellect to be inquisitive and prepares it to accumulate, assimilate and integrate all forms of knowledge. In such an inquiring mode the believer humbly but earnestly supplicates to Allah saying: “Lord, increase my knowledge.” (Holy Qur’an, 20:112).
The inquiring intellect (knower – aqil) comes into contact with the creation (knowable – ma’qul) through the senses (hawas). The Qur’an does not brand empirical knowledge as illusory, unreal or deceptive; rather it exhorts the believer to fully utilise it in comprehending Allah’s signs (ayat Allah) in the creation. In this regard the Qur’an says, “And We had endowed them with the (faculties of) hearing, seeing, heart and intellect.” (Holy Qur’an, 46:26). Allah declares about those who either misuse or do not use the senses, “They have hearts yet they cannot understand; eyes yet they do not see; ears yet they do not hear. They are like beasts (anam), indeed they are less enlightened, such are the heedless.” (Holy Qur’an, 7:179). These verses clearly establish the significant role of the senses in the Islamic notion of knowledge.
MAJOR REALMS THAT CAN BE KNOWN THROUGH THE INTELLECT: CREATION, SCRIPTURE AND THE SOUL
The third aspect of the epistemological hierarchy is the knowable or the intelligible — that which can be known. This factor seems to be crucial in determining the boundaries of understanding — ‘the scope of knowledge’.
According to the Qur’anic teaching there are three major areas which can be ‘intellected’. They are (1) the creation or nature (khalq or fitra); (2) revelation, i.e. holy scripture (wahi or kitab); and (3) the soul (nafs), which includes the intellect as well. This means that the intellect is potentially capable of comprehending everything, including itself, except its Originator – Allah. These three realms constitute existence as such and are formed of the divine signs. The signs point to their common Origin, Allah, and also to each other, confirming and substantiating each other’s existence.
First Realm: Creation
Allah says in the Qur’an, “Behold! in the creation of the heavens and the earth, and the alternation of night and day, there are indeed signs for men of understanding.” (3:190). In the following verse, reflection on creation is described as the specific characteristic of those who constantly remain in prayer. The verse states, “Those who remember Allah, standing, sitting and lying down on their sides, and contemplate the creation in the heavens and the earth, (saying): ‘Lord, You have not created these in vain.’” (Holy Qur’an, 3:191).
The knowledge of creation, taken in itself, constitutes empirical science; when it is seen in relation to the spiritual world it becomes a part of gnosis (ma’rifat) in the recipient’s mind. The Qur’anic declaration of man’s mastery over the universe clearly establishes the human capacity to discover and control various phenomena in the creation.
Second Realm: Scriptures
The second realm wherefrom man derives knowledge is the Holy Scripture. Allah states in the Qur’an that the Prophet recites Allah’s verses to the believers, purifies them and teaches them the Book and wisdom (hikmah) (Holy Qur’an, 3:169). The Prophet (SAWS) and, after him, his appointed Imams unveil the meaning of the Qur’anic verses through God-given knowledge (‘ilm-i ladunni). The human intellect can attain profound meaning and wisdom from the Qur’an when it reflects on Qur’anic verses in the light of the teachings (ta’lim) of the guide. Through this esoteric knowledge the believer finds out that the essential form (surat) of revelation and creation is the same – he discovers a transcendental unity behind their apparent diversity.
Third Realm: The Self or the Soul
The Qur’an also makes it imperative for man to reflect upon himself; in other words, man has been guided to attain knowledge about himself. This self-knowledge enables him to realise his essential nature, the purpose of his life, from where he has come and where he will return. This knowledge also enables him to understand his exact position in the scheme of existence, particularly his relation to Allah and His creation. Before the attainment of self-knowledge, the soul remains restless, but this knowledge brings with it perfect peace and satisfaction.
IMPORTANCE OF THE ALL-INCLUSIVE NATURE OF THE THREE REALMS OF KNOWLEDGE
The foregoing discussion indicates the contours of the Qur’anic concept of knowledge, which in its broadest sense is all-inclusive. In other words, it covers both spiritual and material aspects of existence. This also establishes the fact that the entire field of existence is open to the human intellect for reflection and discovery. Any conception of knowledge which excludes either of the above aspects cannot be, in the true sense, Qur’anic or Islamic. Ignoring one aspect will be tantamount to ignoring the signs of Allah, without which the picture of reality remains incomplete. The misunderstanding of the underlying Qur’anic scope of knowledge may render tawhid (the doctrine of the ‘Oneness of Allah’) superficial, for the profound knowledge of tawhid is attainable through the proper appreciation of all dimensions of Being.
If the Qur’an is not regarded as a fecund source of profound principles, there is a strong possibility of the Qur’an becoming a source of frivolous polemics, the necessary consequences of which are disputes and divisions that are diametrically opposed to the Islamic spirit of unity, which reflects the Unity of Allah.
THE DECLINE OF ISLAMIC INTELLECTUAL POWER AND VISION
When, in the light of the foregoing discourse, the history of Islamic epistemological development is analysed, it becomes fairly clear that one of the chief causes of the problems experienced by Muslims was the misrepresentation of the profound scope of knowledge given in the Qur’an. Muslims advanced rapidly in all aspects of life in the earlier centuries of Islam. This was because they understood the spirit of the Qur’anic concept of knowledge. They not only absorbed all available forms of knowledge but also contributed substantially to its collective growth.
In the wake of the thirteenth century of Hijra (i.e. from the latter part of the 18th century), a tendency became dominant in Muslim society according to which science and philosophy were not truly Islamic because they were allegedly not available in the Qur’an. This narrow vision immensely damaged the spirit of the Qur’an, which stimulates profound reflection and inquiry.
When a nation’s spirit of inquiry weakens, it undergoes a psychological change and acquires a state whereby it becomes either fatalistic or emotional. Such a nation starts looking to the wrong causes for its genuine problems. Its intellectual vision becomes myopic, and consequently it becomes intellectually crippled. Thus the Muslim downfall seems to be co-extensive with the actualisation of the dangers implied in the narrow interpretation of the Qur’anic notion of knowledge. The splits, and sometimes conflicts, between religious and secular institutions of education in modern Muslim society are another serious consequence of this narrow interpretation.
WAY FORWARD TO OVERCOME THE EPISTEMOLOGICAL DECLINE
In the light of the preceding discussion one can conclude that Muslims today should try to resolve the epistemological crisis that began centuries ago. The solution lies in recovering and applying the Qur’anic scope of knowledge in both the material and spiritual realms, as explained so well in the above excerpt from a speech delivered by Mawlana Shah Karim al-Hussaini Hazar Imam, the current Imam of the Ismailis.
In practical terms this can be done by evolving a philosophy of education that takes into account both the scientific and the spiritual aspects of knowledge. An education system based on this philosophy will create individuals and institutions capable of accepting the ever increasing new forms of knowledge without sacrificing the fundamental principles of Islam. They will have the willingness and courage to contribute towards the growth of human consciousness. Awareness of the spiritual dimension can provide a moral foundation and a sense of direction to society; otherwise the exclusive and unadulterated pursuit of material progress can lead to pride and engender a vacuum of destructive forces.
Publication date: November 7, 2010
© Simerg.com
______________
BIBLIOGRAPHY
1. The Holy Qur’an.
2. Arif Tamir, Jam’at al-Jami’at, Beirut.
3. Nasir-i Khusraw, Zad al-Musafirin, ed. Bazl al-Rahman, Berlin, 1924.
4. Nasir-i Khusraw, Jami’ul-Hikmatayn, ed. H. Corbin.
5. Hamid al-Din Kirmani, al-Aqwal al-Dhahabiyyah, ed. Salah al-Sawy, Tehran, 1973.
Article adapted from Ilm, Volume 11, Number 1 (December 1986), published by the Ismaili Tariqah and Religious Education Board for the United Kingdom.
http://simerg.com/literary-readings/ref ... -in-islam/
November 23, 2010, 8:15 pm
Experiments in Field Philosophy
By ROBERT FRODEMAN
Back in September, Joshua Knobe of Yale University, writing here at The Stone, outlined a new experimental approach to doing philosophy in his post, “Experiments in Philosophy.” Philosophers, he argued, have spent enough time cogitating in their armchairs. Knobe described how he and a group of like-minded colleagues in the discipline have undertaken a more engaged approach, working with cognitive scientists and designing experiments that will “test” people’s intuitions about traditional philosophic puzzlers such as the existence of God, the objectivity of ethics and the possibility of free will. The result: new, empirically-grounded insights available to philosophers and psychologists.
The experimental philosophy movement deserves praise. Anything that takes philosophy out of the study and into the world is good news. And philosophy will only be strengthened by becoming more empirically-oriented. But I wonder whether experimental philosophy really satisfies the Socratic imperative to philosophize out in the world. For the results gained are directed back to debates within the philosophic community rather than toward helping people with real life problems.
Another group of philosophers, myself included, is experimenting with an approach we call “field philosophy.” Field philosophy plays on the difference between lab science and field science. Field scientists, such as geologists and anthropologists, cannot control conditions as a chemist or physicist can in the lab. Each rock outcrop or social group is radically individual in nature. Instead of making law-like generalizations, field scientists draw analogies from one site to another, with the aim of telling the geological history of a particular location or the story of a particular people.
“Getting out into the field” means leaving the book-lined study to work with scientists, engineers and decision makers on specific social challenges. Rather than going into the public square in order to collect data for understanding traditional philosophic problems like the old chestnut of “free will,” as experimental philosophers do, field philosophers start out in the world. Rather than seeking to identify general philosophic principles, they begin with the problems of non-philosophers, drawing out specific, underappreciated, philosophic dimensions of societal problems.
Growing numbers of philosophers are interested in this kind of philosophic practice. Some of this field work in philosophy has been going on for years, for instance within the ethics boards of hospitals. But today this approach is increasingly visible across a number of fields like environmental science and nanotechnology. Paul Thompson of Michigan State has worked with and challenged the food industry on the application of recombinant DNA techniques to agricultural crops and food animals. Rachelle Hollander, now at the National Academy of Engineering, worked for years at the National Science Foundation to integrate ethics and values concerns with the ongoing work of scientists and engineers. And at my own institution, the University of North Texas, we have worked with the U.S. Geological Survey and the small community of Silverton, Colo., on problems of water quality, the legacy of 19th- and 20th-century gold mines; helped the Great Lakes Fisheries Commission develop a management plan for the Great Lakes; and assisted the Chilean government in creating a UNESCO biosphere reserve in Cape Horn.
Note further that “field” areas also include government offices in places such as Washington, DC and Brussels. So, for instance, my research group is in the midst of a three-year study funded by the National Science Foundation that is examining the process of peer review for grant proposals. Science agencies around the world are struggling to bring assessments of the larger societal impact of proposed research into the peer review process. In this study we meet regularly with the users of this research — the federal agencies themselves — to make sure that our research helps agencies better address societal needs. The “field” can even include the lab, as when Erik Fisher of Arizona State speaks of “embedded philosophers” who, like embedded journalists of recent wars, work daily alongside lab scientists and engineers.
Field philosophy has two roles to play in such cases. First, it can provide an account of the generally philosophical (ethical, aesthetic, epistemological, ontological, metaphysical and theological) aspects of societal problems. Second, it can offer an overall narrative of the relations between the various disciplines (e.g., chemistry, geology, anthropology, public policy, economics) that offer insight into our problems. Such narratives can provide us with something that is sorely lacking today: a sense of the whole.
Field philosophy, then, moves in a different direction than either traditional applied philosophy or the new experimental philosophy. Whereas these approaches are top-down in orientation, beginning in theory and hoping to apply a theoretical construct to a problem, field philosophy is bottom-up, beginning with the needs of stakeholders and drawing out philosophical insights after the work is completed.
Being a field philosopher does have its epistemological consequences. For instance, we take seriously the temporal and financial constraints of our users. Working with government or industry means that we must often seek to provide “good-enough” philosophizing — it often lacks some footnotes, but attempts to provide much needed insights in a timely manner.
The willingness to take these constraints seriously has meant that our work is sometimes dismissed by other philosophers. Across the 20th century, philosophy has embraced rigor as an absolute value. Other important values such as timeliness, relevance and cost have been sacrificed to disciplinary notions of expertise. In contrast, we see “rigor” as involving a delicate balance among these often competing values. To put it practically, field philosophers need to learn how to edit themselves: sometimes what is needed is not the 7000-word scholarly article but rather a three-minute brief or a one-page memo.
Make no mistake: field philosophy does not reject traditional standards of philosophic excellence. Yet in a world crying out for help on a wide range of ethical and philosophical questions, philosophers need to develop additional skills. They need to master the political arts of working on an interdisciplinary team. Graduate students need to be trained not only in the traditional skills of rigorous philosophical analysis but also in the field rigor of writing grants and framing insights for scientists, engineers and decision makers at the project level.
Finally, a field approach to philosophy may also help with the challenge facing the entire academic community today. Underlying the growing popular distrust of all societal institutions lies a social demand for greater accountability for all those who work in the industry of knowledge production. This is most obvious among scientists, who face increasing demands for scientific research to be socially relevant. But with budgets tightening, similar demands will soon be made on philosophy and on all the humanities — to justify our existence in terms of its positive and direct impacts on society. Field philosophy, then, serves as an example of how academics can better serve the community — which, after all is said and done, pays the bills.
Robert Frodeman is professor of philosophy and founding director of the Center for the Study of Interdisciplinarity at the University of North Texas. He is author of “Geo-Logic: Breaking Ground between Philosophy and the Earth Sciences” (2003), co-editor of the “Encyclopedia of Environmental Ethics and Philosophy” (2008), and editor of the Oxford “Handbook of Interdisciplinarity” (2010).
http://opinionator.blogs.nytimes.com/20 ... n&emc=tya3
The speech below highlights the evolution of the concept of knowledge from the traditional one, as understood and interpreted during the time of the Prophet, to a more complex one in view of the altered conditions created by the explosion of information on the one hand and by technological developments, especially in information processing and transfer, on the other. It goes on to discuss the implications for how we should interpret and approach knowledge and education today.
Speech by Oleg Grabar, Recipient of the 2010 Chairman’s Award, at the Aga Khan Award for Architecture 2010 Award Presentation Ceremony (Doha, Qatar)
24 November 2010
It is with much pride and gratitude that I have accepted the Chairman’s Award from the Aga Khan Award for Architecture, following Hassan Fathy, Rifat Chaderji, and Sir Geoffrey Bawa. It is an honor to join such a distinguished company and, even more so, to be the first one who is neither an architect nor a planner, not even a decision-maker at the level at which I remember the term being used in the deliberations of the Steering Committee many years ago (and a term to which I shall return), but an academic scholar and teacher who has spent his life in universities and research institutes learning and then transmitting to others, in lectures, seminars, and writing, whatever I had learned and understood. And it is with these two themes of knowledge and of education that my remarks will deal.
But, first, let me add that my acceptance contains also a sprinkling of somewhat sentimental memories and I want to begin with a few of these, because they have a bearing on the achievements of the past 35 years, on the subject of my talk, and on the expectations we can have for the future.
Some thirty-five years ago, when I was a member of the first Steering Committee gathered to help His Highness the Aga Khan design what was then simply the Aga Khan Award for Architecture and which is now an enormous enterprise operating on five continents, his dream and vision for the growth and development of the environment of Muslims, wherever they live and work, were already fully present in his spirit, but neither, with all due respect, His Highness nor any of the six or seven people gathered to help him out had any clear idea of what to do and how to translate his vision into a reality. The story of how it eventually happened will never be told, because a dozen separate memories are involved, some of the key participants are no longer alive, and, to my recollection, no coherent record was kept of the inventive and creative discussions among a dozen or so imaginative, hard-working, and witty men and one woman, who bear the responsibility of what happened.
Two questions dominated our discussions then. One was: is there an abstract cultural phenomenon to be called Islamic architecture that is not simply whatever architecture is or was used by Muslims, and that could be defined as different from whatever was done elsewhere or for other human groups? And how do we find out what it is? The other question was: once we find what it is, how do we let the world in general and Muslim communities in particular know what it is or was? The aim, or one of the aims, of the Award was to help maintain the quality and presumed uniqueness of this architecture, while bringing it up to the most effective economic, technical, and cultural practices of our own day. There was something simple-minded in our feelings then that the local past was almost always genuine and good and that contemporary universal ways were usually meaningless. We were clearly wrong then, mostly because of ignorance of what was really going on and because we were ourselves the victims of very narrow prejudices. We all felt that weakness. And one of the main objectives of our meetings was to acquire a knowledge of planning and constructional practice and to provide a program of creative education. We were not to be restricted by arbitrary opinions nor by presumably established doctrines.
In a sense, our task of many years back was justified by an often quoted Tradition (hadith) attributed to the Prophet Muhammad that knowledge must be sought wherever it is found, even in China. China in the seventh century of the common era and the first century of the hijrah was a way to identify a remote world known to exist and to be important, but hardly an accessible one. The point of the Tradition is that there is knowledge everywhere, none of which should be rejected without being tested. Both of these implications are still pertinent today. Knowledge is indeed created everywhere and China has become a central actor in the cultural as well as economic realms of today’s world. What has changed dramatically since the time of the Prophet and what keeps changing in ways which are almost impossible to predict are the nature of knowledge and the means in our possession to deal with it.
Such contemporary comments on the hadith as are known to me do not talk about education. At the time of the Prophet, transmission of practical or philosophical knowledge was relatively simple, through writing, copying and reading books, through oral arguments kept in the memories of participants, and through the continuing practice of artisanal procedures carried from father to son, from master to apprentice, and from region to region. Any capable and intelligent person was then able to master most of what was known. The breadth of knowledge within the minds of many talented individuals before the seventeenth century can at times be truly stunning, even if such individuals were rare. Education was one with information and knowledge and it took place wherever there was a library and a few literate and concerned individuals, or within ateliers of artisans.
Today’s scene is dramatically different. There are as many centers producing information as there are countries with universities, technical schools, archaeological institutes, hospitals, architectural firms, or museums. Much of this information is available in what I once counted as twenty-six different languages (I am sure it is many more now). It exists in millions of books, hundreds of journals, thousands of reports, and now, thanks to the Internet and to Google, this knowledge is accessible, in theory at least, almost everywhere in the world. Museum collections have been photographed and recorded, exhibitions kept forever on DVD’s. And I suppose that architectural firms and excavators preserve whatever they find and do on masses of disks. In short, the quantity of available information is enormous, so large that it cannot be mastered and it is easy to forget whatever one has just found out. No one can say anymore that he knows all about Islamic art, about the architectural projects of today, about excavations or about objects from any one period of history, or about anything but a narrow strip of constantly changing information.
And I can even go a step further. As I discovered recently while listening to a lecture on Physics by a Nobel Prize winner of recent vintage, what we see and can describe as a variety of people in this original building is only one reality, one truth. Both people and architecture or any space can also be defined as an infinite number of quarks and electrons in constant motion. This particular reality is invisible and can only be measured in mathematical terms. In fact, it is rather curious that this fact of an invisible reality (to contemporary physicists, I gather, there are several such parallel realities) of everything coincides in many ways with the theory of atomism developed in Ancient Greece and then transformed in ninth century Baghdad. This theory acknowledged the existence of an invisible reality of all things, a reality which was or could be modified, but God alone was empowered to modify these constructions – that is, us as people or everything around us, including what we think we have created– or to keep them as they were. Within this traditional scheme, truth was always one and invisible; our own science today assumes the existence of several parallel truths, in addition to the one we see with our eyes. It is often the case that transcendental and laboratory-defined explanations come close to each other, but they can never meet for one is ultimately based on beliefs and the other one on experiments. This is altogether an area of concern I shall not touch on today, except perhaps a little bit in conclusion, but it is an important one. How do we separate what we know from what we believe? Or do we? Should we?
This contemporary explosion of data has by necessity created two paths in dealing with it. One way is to narrow one’s competence and to claim total or near total knowledge only in narrowly defined spheres, the Ottoman world of the eighteenth century, the ceramics of Iran, the construction of minarets, or the contribution of Hassan Fathy to contemporary architecture in the Islamic world. Specialization becomes the order given to knowledge and it tends to be determined by the narrow restrictions provided by limited linguistic competence or limited area awareness, or even limited mental capacities. Specialization tends to become national and linguistically limited, but it presumes thoroughness and completeness in dealing with its subjects. It also requires large numbers of equally competent specialists, properly distributed everywhere and well versed in other languages and in other histories, who may or may not find ways of communicating with each other. Ultimately, however, this successful specialization is impossible to achieve and this way of proceeding compels one to lie about what one knows or to project arbitrarily a limited experience to other areas of knowledge.
I will give you just one example taken from history, although contemporary politics provides many such examples. Traditional architectural decoration of Andalus in Spain, or Egypt, Iraq, Central Asia, is dominated after the tenth century by a complex geometry. It is easy to argue that this is the result of Islamic religious and philosophical thought that rejected or avoided resemblances to natural creations and found in geometry an abstract truth which could be given esthetic values. But it is also possible to show that rather different mathematical theories had developed in Central Asia and in Andalus and that the practice of artisanship was quite different in the two regions. The existing designs did not reflect the same principles of thought and different social and cultural interpretations must be provided for motifs that are strikingly similar and especially for a similar transfer of abstract thought to architectural forms. It is easy to provide a universal or pan-Islamic interpretation because external, political or cultural, forces today require such an interpretation, while in fact, for a scholar or a student aware of the details of several separate histories, very different social, ethnic, pious, and intellectual factors were involved. But what is more important, the historian’s search for and eventual knowledge of the truth or the contemporary political or social leader’s need to satisfy contemporary emotional needs? Any answer bears heavy political and ideological implications. Here is, however, an instance without deep political implications. Many years ago, as I shared a panel in Indonesia with Hassan Fathy, the latter made a very eloquent speech on the necessity to save water in an Islamic society and gave appropriate Arabian or Egyptian examples. But his Indonesian audience replied that in Java, water is a danger against which you need protection and that the principles of the faith have nothing to do with it. Which position is more “Islamic,” the one that preaches saving water or its opposite, getting rid of it as efficiently as possible?
The other direction proposed by the explosion of information was outlined to me some years ago by an early Internet activist with much experience in the physical and natural sciences, who was installing a new computer in my office. He wanted to let me know what wonderful progress was being made available for my research. Just as happened, apparently, with chemistry, he argued that every week I could receive automatically, as in a newsletter or, today, by email, a summary in English and with illustrations of every publication about the history of Islamic art or the practice of contemporary architects, or both. This survey would include a judgement as to the significance and value of that information, wherever it appeared and in whatever language it had been originally written. Even if one grants that his hyperbolic enthusiasm for weekly accounts gives more credit than deserved to the activities of the few who deal with the arts of the Muslim world, his basic point was simple. The explosion of information is, first of all, a vehicle in which it is all gathered together and it requires the formation of a class of intermediary handlers – I suppose we could call them consultants today or executive assistants – who channel information and evaluate it for the use of others. They would be collectively competent in all appropriate languages, they would have a literate command of English, and they would undergo a type of training that would guarantee the accuracy of what they relate and its appropriateness to whatever we need and already know.
In chemistry, as in the political thought of our own times, this accuracy is a variable, and many of the failures of contemporary political leadership stem from the fact that the experts cannot manage to keep up with change. Things are probably a bit simpler in architecture-related matters. To some degree, for our broad area of the man-made environment of the Islamic world, this consulting function is partly fulfilled by ArchNet, the creation of the Aga Khan Program at Harvard and MIT. This is more or less true with respect to information. But I am not sure that ArchNet possesses well-developed critical abilities and that it is capable of reacting rapidly and intelligently to new knowledge and of distributing its awareness to all of its constituents, whether they asked for it or not. Part of my uncertainty derives not so much from failures in the operation of ArchNet as from the absence of broad categories for the understanding of architecture that would automatically be known to all and consistently included in all new information. We cannot expect something as direct and universal as mathematical formulas are, but we should be able to develop standard categories of description and interpretation which could be expressed in any language.
A simple example of such a category is the material of construction: stone, wood, brick, concrete. We think we know what these terms mean, but it is enough to pick up any book in Russian or Uzbek on architecture in Khorasan and Transoxiana to become totally confused about the terms used for the different types of mud brick that seem to have been employed. But there are much more complex categories of understanding which are either, like style, impossible to define, or, like design, too difficult to explain in theory, if not in practice. Finally, while the means exist to make knowledge of architecture available, this is not true of the other arts, where utter disorder of knowledge is still the rule and where very few categories of identification and description exist. And here is another example. All museums exhibit their treasures as closed collections tied to a donor or to a space, but historians always jump from one collection to another in search of comparative material or in order to explain complete series of artefacts, the Fatimid ceramics of Egypt or Mughal miniatures. Very different basic information is required by each of these procedures.
I may add, without suggesting it as an immediate possibility, that it may be possible to express these categories of understanding as drawings or charts and models, as visual symbols, usually easy to store and to understand and available in the one language every practitioner, the scholar or the urbanist, would have to learn. But, then, as I reflect on the inane conclusions our governments draw so often from statistics and models made by economists, I am afraid of even suggesting this approach. Someone else will eventually do it better.
Let me sum up, then, this first part of my talk. The explosion of knowledge of architecture, the built environment in general, and all other arts from the areas in which, now or in the past, Muslims are or were present and active, this explosion consists of two components. There is information, the immense body of documents, from individual buildings to aggregates of buildings creating spaces to a written documentation that ranges from descriptive accounts of the built environment to the legal restrictions attached to them (I am thinking of the thousands of remaining waqfiyahs dealing with the urban environment of most Muslim cities), to the multiple ways in which they were or are used, to the critical record of how they were received, even to philosophical or literary considerations on architecture, although we have few of these, at least to my knowledge. But to know how to find our way in this mass of information, we need knowledge, codes and protocols, ways of access to information that has already been processed for easy use. These codes still have to be generated if we hope to make the mass of existing information usable in an intellectually or morally acceptable way. This is where my way of presenting the evidence leads me to those who rule the architectural and artistic universe, the patrons of art and of museums and the users of works of art, the decision makers to whom we owe so much of what is around us. It is, I believe, their responsibility, I was going to say obligation, to sponsor the creation of such a system for access to information and to support for several years the teams of young men and women who could develop and handle it. It may initially be an expensive task, because errors will be made, and I could tell you more than one pathetic or comic story of our own expensive mistakes of over thirty years ago while building up the Aga Khan Award and then ArchNet. But ultimately information can be tamed and made accessible to all who need it in an acceptable form.
Yet it is not simply a matter of establishing categories of description and of understanding. It is also a matter of making these categories enter the intelligence of people and groups. To do so is how I understand the purposes and requirements of education and I would like to turn now to some remarks on what it is and how it operates.
Education can and should be understood at three different levels.
The first level is the scholarly one, the level of the learned practitioner. It is the highest one because its aim extends beyond existing knowledge to the creation of further knowledge and because it is –or should be-- equipped to communicate with all scholarship in all fields of the humanities and the social sciences. I insist on this point, as I feel very strongly that comparative understanding is a key feature of learned scholarship. Among other things, it permits to avoid the dominance by western paradigms. Over the years I profited a great deal from whatever I learned from contemporary theories of structuralism and linguistics and I owe a great deal of my understanding of Islamic art to the more developed methods of dealing with western art. But this use did not mean that works of Islamic art were like works of western art; it did, however imply the existence of broad, universal, principles behind our understanding of the arts and a good specialist in Islamic art should easily be ready to handle Christian or classical art.
This learned level is also the easiest one to handle and to understand. Naturally and professionally it is centered on maximum information and on the development of ideas. It is only restricted by the linguistic and intellectual limitations of its practitioners and by the time available to deal with it. The development of consultants or assistants as outlined earlier and the improvements in the operation of the Internet should lead to a scholarship that would improve individual learning and that would be made available through the usual mechanisms of higher education like seminars with students, colloquia with colleagues, publication in books with a, by necessity, limited public or through often obscure periodicals. But this level will always remain a relatively restricted one, because it requires not only many technical, especially linguistic, competencies, but because it also demands a passion for learning which is time and mind consuming and which exists only in a few people.
The second level can be called the level of social leadership. It involves those people and institutions that are running governments and financial or industrial enterprises and defining the cultural context of their actions. They make decisions about school curricula and university programs, they sponsor films and television programs, they publish newspapers and magazines they patronize new projects. The form of the governments in which they operate varies a great deal and in their hands lies something even more important than the sponsorship of buildings or the interplay of social and political activities. They provide rewards and awards, they accept or reject the implications of new investments, whether an airport, a university, or the restoration of a historic building. They decide whether something is going to be called Islamic, Arab, or Egyptian, and they define the features to be used in preparing urban developments. They accept or change symbols –flags, occasionally clothes or simply colors– credibly associated with a land or a culture. The power of this level is enormous and so are its responsibilities, but it is far less clear to whom they are responsible. It is easy enough to identify the aims and ambitions of this level of education, but it is more difficult to describe the ways in which it can be influenced and improved. One should avoid policing of thought or the proclamation of compulsory national, ethnic, or religious sets of forms and doctrines. But how does one maintain a climate of openness to many available forms of knowledge which would insure that whatever one sponsors reflects traditions when needed, without becoming absurdly self-centric or entirely transformed by foreign imports? When can one abandon one’s obligation to the past and endorse a present in conflict with that past? These are not easy questions to answer, but they must be dealt with by those who advise leaders, if not by the leaders themselves. And they require a very broad education as well as some passion.
The third level of education lies with the general public. There are many myths and falsehoods in the collective memory of large and small groups of people. Such misconceptions can be dangerous, especially when picked up by ignorant news media; they can lead to the destruction of monuments or of sites, to the assassination of opponents, or, on a less upsetting level, to the broadcasting of false slogans or the sponsorship of dubious causes and unacceptable opinions. The recent often tragic, at times merely ludicrous, events surrounding the portraits of the Prophet or the wearing of the burqa have shown how easy it is to inspire actions through ignorance and to whip up destruction instead of discussion. Here I am thinking about a topic removed from my area of knowledge and competence, but I would argue that public education must concentrate on the media shared by all, such as radio, film, and television, and on the primary and secondary schools attended by all boys and girls. The enlightenment and training of primary and secondary school teachers seems to me essential, because they ultimately fashion the beliefs, attitudes, and eventually passions of all men and women. In the case of teachers, it should be relatively easy to develop adequate programs, because most teachers are dedicated to the task of educating the young, even when they are not properly informed of what to teach them. Matters are much less clear when one deals with the media, so often responsible for the dramas of today. But this is yet another area where I have little factual knowledge and whose discussion must be left to others.
On my first level, that of scholarly knowledge and education, the needs are fairly clear and require only important technical components to be successful. The profession of university level academics, teachers and thinkers in professoinal schools, and well-established practitioners can be reached with a minimum of effort, once certain mechanisms of information and judgement are developed and the gap lessened between wealthy and poor countries. Matters are more complicated when we deal with education for leaders and for the general public. The Aga Khan Award for Architecture and its several side developments are, to my knowledge, the only organizations which have tried to reach publics that are so different from the professionals of architecture. But I am not aware of any formal depiction of its efforts or of any professional evaluation of its impact beyond the moment of pride and of joy that accompanies any award.
I would like to conclude my remarks by repeating, first of all, that much has been accomplished in the practice of architecture in Islamic societies and in the awareness by the rest of the world of the quality of that architecture and of the talents of those who execute it. Thirty five years ago our experience had been that students in professional schools sought their models exclusively in Europe and America, not in their own backyards; and Hassan Fathy was more honored as a person than as a model. This is no longer true today, when a knowledge of the vernacular and sometimes even pride in it are present to architects in the Muslim world and Muslim architects are practicing successfully all over the world. Furthermore, information is relatively easily available everywhere, although not systematically enough and with often dubious interpretations. In general, the levels of knowledge and of practice have changed considerably and can meet the lofty ideals sketched out thirty-five years ago. Other things have changed as well. The collapse of the Soviet Union brought seven new countries within our collective awareness and, more significantly, liberated a high level of technical knowhow, still poorly exploited because of linguistic barriers. Muslim Africa and Malaysia are no longer the unknown visual experiences they were then. And the complexities of the Muslim presence in non-Muslim lands have affected the minds of all men and women, unfortunately not always for the good.
These are all developments that could have made all of us richer and more sophisticated in our understanding of the vast Muslim world. But have they done so? Not always. In a paradoxical way, they have strengthened parochialism, because few thinkers, practitioners or generalists are really involved in imagining what Morocco, Uzbekistan, or Malaysia have in common. It has become easier to firm up one’s own rather than to drown it with too much parallel information. This development is clearly tied to the rise of local nationalisms of all sorts and may not subside, but national passions are so tragically inbred in men and women that I can only hope that the global humanism I wish to preach is not a dream. Then, the rise of violent extremism among Muslims and a destructive and senseless response among non-Muslims have led, for our academic and practical purposes, to an almost paranoiac concern for security and to restrictions on travel or on exchanges of all sorts which are harmful to the growth, even the maintenance, of learned connections and of a fruitful knowledge. Behind both extremism and the response to it lies a profound ignorance of everything, from the interpretation of religious texts and awareness of history to the beliefs and motivations of others than ourselves.
These are all rather frightening prospects, especially when it is so easy to draw a vision of a rich and productive future, in which local creativity can enhance the lives of all men and women and become known to all of them, from financial or political leaders to schoolchildren. People of my age will not know whether this vision will ever become real, but we all know that those here and elsewhere who are under fifty have an exciting challenge ahead of them, and, thanks to the Aga Khan Award and to a number of parallel institutions, one or more vehicle to meet that challenge.
http://www.akdn.org/Content/1037/Speech ... mans-Award
Speech by Oleg Grabar, Recipient of the 2010 Chairman’s Award, at the Aga Khan Award for Architecture 2010 Award Presentation Ceremony (Doha, Qatar)
24 November 2010
It is with much pride and gratitude that I have accepted the Chairman’s Award from the Aga Khan Award for Architecture, following Hassan Fathy, Rifat Chaderji, and Sir Geoffrey Bawa. It is an honor to join such a distinguished company and, even more so, to be the first one who is neither an architect nor a planner, not even a decision-maker at the level at which I remember the term being used in the deliberations of the Steering Committee many years ago (and a term to which I shall return), but an academic scholar and teacher who has spent his life in universities and research institutes learning and then transmitting to others, in lectures, seminars, and writing, whatever I had learned and understood. And it is with these two themes of knowledge and of education that my remarks will deal.
But, first, let me add that my acceptance also contains a sprinkling of somewhat sentimental memories, and I want to begin with a few of these, because they have a bearing on the achievements of the past 35 years, on the subject of my talk, and on the expectations we can have for the future.
Some thirty-five years ago, when I was a member of the first Steering Committee gathered to help His Highness the Aga Khan design what was then simply the Aga Khan Award for Architecture and which is now an enormous enterprise operating on five continents, his dream and vision for the growth and development of the environment of Muslims, wherever they live and work, were already fully present in his spirit, but neither, with all due respect, His Highness nor any of the six or seven people gathered to help him out had any clear idea of what to do and how to translate his vision into a reality. The story of how it eventually happened will never be told, because a dozen separate memories are involved, some of the key participants are no longer alive, and, to my recollection, no coherent record was kept of the inventive and creative discussions among a dozen or so imaginative, hard-working, and witty men and one woman, who bear the responsibility of what happened.
Two questions dominated our discussions then. One was: is there an abstract cultural phenomenon to be called Islamic architecture that is not simply whatever architecture is or was used by Muslims, and that could be defined as different from whatever was done elsewhere or for other human groups? And how do we find out what it is? The other question was: once we find what it is, how do we let the world in general and Muslim communities in particular know what it is or was? The aim, or one of the aims, of the Award was to help maintain the quality and presumed uniqueness of this architecture, while bringing it up to the most effective economic, technical, and cultural practices of our own day. There was something simple-minded in our feelings then that the local past was almost always genuine and good and that contemporary universal ways were usually meaningless. We were clearly wrong then, mostly because of ignorance of what was really going on and because we were ourselves the victims of very narrow prejudices. We all felt that weakness. And one of the main objectives of our meetings was to acquire a knowledge of planning and constructional practice and to provide a program of creative education. We were not to be restricted by arbitrary opinions or by presumably established doctrines.
In a sense, our task of many years back was justified by an often quoted Tradition (hadith) attributed to the Prophet Muhammad that knowledge must be sought wherever it is found, even in China. China in the seventh century of the common era and the first century of the hijrah was a way to identify a remote world known to exist and to be important, but hardly an accessible one. The point of the Tradition is that there is knowledge everywhere, none of which should be rejected without being tested. Both of these implications are still pertinent today. Knowledge is indeed created everywhere and China has become a central actor in the cultural as well as economic realms of today’s world. What has changed dramatically since the time of the Prophet and what keeps changing in ways which are almost impossible to predict are the nature of knowledge and the means in our possession to deal with it.
Such contemporary comments on the hadith as are known to me do not talk about education. At the time of the Prophet, transmission of practical or philosophical knowledge was relatively simple, through writing, copying and reading books, through oral arguments kept in the memories of participants, and through the continuing practice of artisanal procedures carried from father to son, from master to apprentice, and from region to region. Any capable and intelligent person was then able to master most of what was known. The breadth of knowledge within the minds of many talented individuals before the seventeenth century can at times be truly stunning, even if such individuals were rare. Education was one with information and knowledge and it took place wherever there was a library and a few literate and concerned individuals, or within ateliers of artisans.
Today’s scene is dramatically different. There are as many centers producing information as there are countries with universities, technical schools, archaeological institutes, hospitals, architectural firms, or museums. Much of this information is available in what I once counted as twenty-six different languages (I am sure it is many more now). It exists in millions of books, hundreds of journals, thousands of reports, and now, thanks to the Internet and to Google, this knowledge is accessible, in theory at least, almost everywhere in the world. Museum collections have been photographed and recorded, exhibitions kept forever on DVD’s. And I suppose that architectural firms and excavators preserve whatever they find and do on masses of disks. In short, the quantity of available information is enormous, so large that it cannot be mastered and it is easy to forget whatever one has just found out. No one can say anymore that he knows all about Islamic art, about the architectural projects of today, about excavations or about objects from any one period of history, or about anything but a narrow strip of constantly changing information.
And I can even go a step further. As I discovered recently while listening to a lecture on Physics by a Nobel Prize winner of recent vintage, what we see and can describe as a variety of people in this original building is only one reality, one truth. Both people and architecture or any space can also be defined as an infinite number of quarks and electrons in constant motion. This particular reality is invisible and can only be measured in mathematical terms. In fact, it is rather curious that this fact of an invisible reality (to contemporary physicists, I gather, there are several such parallel realities) of everything coincides in many ways with the theory of atomism developed in Ancient Greece and then transformed in ninth century Baghdad. This theory acknowledged the existence of an invisible reality of all things, a reality which was or could be modified, but God alone was empowered to modify these constructions – that is, us as people or everything around us, including what we think we have created– or to keep them as they were. Within this traditional scheme, truth was always one and invisible; our own science today assumes the existence of several parallel truths, in addition to the one we see with our eyes. It is often the case that transcendental and laboratory-defined explanations come close to each other, but they can never meet for one is ultimately based on beliefs and the other one on experiments. This is altogether an area of concern I shall not touch on today, except perhaps a little bit in conclusion, but it is an important one. How do we separate what we know from what we believe? Or do we? Should we?
This contemporary explosion of data has by necessity created two paths in dealing with it. One way is to narrow one’s competence and to claim total or near total knowledge only in narrowly defined spheres, the Ottoman world of the eighteenth century, the ceramics of Iran, the construction of minarets, or the contribution of Hassan Fathy to contemporary architecture in the Islamic world. Specialization becomes the order given to knowledge and it tends to be determined by the narrow restrictions provided by limited linguistic competence or limited area awareness, or even limited mental capacities. Specialization tends to become national and linguistically limited, but it presumes thoroughness and completeness in dealing with its subjects. It also requires large numbers of equally competent specialists, properly distributed everywhere and well versed in other languages and in other histories, who may or may not find ways of communicating with each other. Ultimately, however, this successful specialization is impossible to achieve and this way of proceeding compels one to lie about what one knows or to project arbitrarily a limited experience to other areas of knowledge.
I will give you just one example taken from history, although contemporary politics provides many such examples. Traditional architectural decoration of Andalus in Spain, or Egypt, Iraq, Central Asia, is dominated after the tenth century by a complex geometry. It is easy to argue that this is the result of Islamic religious and philosophical thought that rejected or avoided resemblances to natural creations and found in geometry an abstract truth which could be given esthetic values. But it is also possible to show that rather different mathematical theories had developed in Central Asia and in Andalus and that the practice of artisanship was quite different in the two regions. The existing designs did not reflect the same principles of thought and different social and cultural interpretations must be provided for motifs that are strikingly similar and especially for a similar transfer of abstract thought to architectural forms. It is easy to provide a universal or pan-Islamic interpretation because external, political or cultural, forces today require such an interpretation, while in fact, for a scholar or a student aware of the details of several separate histories, very different social, ethnic, pious, and intellectual factors were involved. But what is more important, the historian’s search for and eventual knowledge of the truth or the contemporary political or social leader’s need to satisfy contemporary emotional needs? Any answer bears heavy political and ideological implications. Here is, however, an instance without deep political implications. Many years ago, as I shared a panel in Indonesia with Hassan Fathy, the latter made a very eloquent speech on the necessity to save water in an Islamic society and gave appropriate Arabian or Egyptian examples. But his Indonesian audience replied that in Java, water is a danger against which you need protection and that the principles of the faith have nothing to do with it. Which position is more “Islamic,” the one that preaches saving water or its opposite, getting rid of it as efficiently as possible?
The other direction proposed by the explosion of information was outlined to me some years ago by an early Internet activist with much experience in the physical and natural sciences, who was installing a new computer in my office. He wanted to let me know what wonderful progress was being made available for my research. Just as happened, apparently, with chemistry, he argued that every week I could receive automatically, as in a newsletter or, today, by email, a summary in English and with illustrations of every publication about the history of Islamic art or the practice of contemporary architects, or both. This survey would include a judgement as to the significance and value of that information, wherever it appeared and in whatever language it had been originally written. Even if one grants that his hyperbolic enthusiasm for weekly accounts gives more credit than deserved to the activities of the few who deal with the arts of the Muslim world, his basic point was simple. The explosion of information requires, first of all, a vehicle in which it is all gathered together, and then the formation of a class of intermediary handlers (I suppose we could call them consultants today or executive assistants) who channel information and evaluate it for the use of others. They would be collectively competent in all appropriate languages, they would have a literate command of English, and they would undergo a type of training that would guarantee the accuracy of what they relate and its appropriateness to whatever we need and already know.
In chemistry, as in the political thought of our own times, this accuracy is a variable, and one of the reasons for the failures of contemporary political leadership is that the experts cannot manage to keep up with change. Things are probably a bit simpler in architecture-related matters. To some degree, for our broad area of the man-made environment of the Islamic world, this consulting function is partly fulfilled by ArchNet, the creation of the Aga Khan Program at Harvard and MIT. This is more or less true with respect to information. But I am not sure that ArchNet possesses well-developed critical abilities, or that it is capable of reacting rapidly and intelligently to new knowledge and of distributing its awareness to all of its constituents, whether they asked for it or not. Part of my uncertainty derives not so much from failures in the operation of ArchNet as from the absence of broad categories for the understanding of architecture that would automatically be known to all and consistently included in all new information. We cannot expect something as direct and universal as mathematical formulas are, but we should be able to develop standard categories of description and interpretation which would be expressed in any language.
A simple example of such a category is the material of construction: stone, wood, brick, concrete. We think we know what these terms mean, but it is enough to pick up any book in Russian or Uzbek on architecture in Khorasan and Transoxiana to become totally confused by the terms applied to the different types of mud brick that seem to have been used. But there are much more complex categories of understanding which are either, like style, impossible to define, or, like design, too difficult to explain in theory, if not in practice. Finally, while the means exist to make knowledge of architecture available, this is not true of the other arts, where utter disorder of knowledge is still the rule and where very few categories of identification and description exist. And here is another example. All museums exhibit their treasures as closed collections tied to a donor or to a space, but historians always jump from one collection to the other in search of comparative material or in order to explain complete series of artefacts, such as the Fatimid ceramics of Egypt or Mughal miniatures. Very different basic information is required by each of these procedures.
I may add, without suggesting it as an immediate possibility, that it may be possible to express these categories of understanding as drawings or charts and models, as visual symbols, usually easy to store and to understand and available in the one language every practitioner, whether scholar or urbanist, would have to learn. But, then, as I reflect on the inane conclusions our governments draw so often from statistics and models made by economists, I am afraid of even suggesting this approach. Someone else will eventually do it better.
Let me then sum up this first part of my talk. The explosion of knowledge of architecture, the built environment in general, and all other arts from the areas in which, now or in the past, Muslims are or were present and active, this explosion consists of two components. There is information, the immense body of documents, from individual buildings to aggregates of buildings creating spaces to a written documentation that ranges from descriptive accounts of the built environment to the legal restrictions attached to them (I am thinking of the thousands of remaining waqfiyahs dealing with the urban environment of most Muslim cities), to the multiple ways in which they were or are used, to the critical record of how they were received, even to philosophical or literary considerations on architecture, although we have few of these, at least to my knowledge. But to know how to find our way in this mass of information, we need knowledge, codes and protocols, ways of access to information that has already been processed for easy use. These codes still have to be generated if we hope to make the mass of existing information usable in an intellectually or morally acceptable way. This is where my way of presenting the evidence leads me to those who rule the architectural and artistic universe, the patrons of art and of museums and the users of works of art, the decision makers to whom we owe so much of what is around us. It is, I believe, their responsibility, I was going to say obligation, to sponsor the creation of such a system for access to information and to support for several years the teams of young men and women who could develop and handle it. It may initially be an expensive task, because errors will be made, and I could tell you more than one pathetic or comic story of our own expensive mistakes of over thirty years ago while building up the Aga Khan Award and then ArchNet. But ultimately information can be tamed and made accessible to all who need it in an acceptable form.
Yet it is not simply a matter of establishing categories of description and of understanding. It is also a matter of making these categories enter the intelligence of people and groups. Making this happen is how I understand the purposes and requirements of education, and I would like to turn now to some remarks on what education is and how it operates.
Education can and should be understood at three different levels.
The first level is the scholarly one, the level of the learned practitioner. It is the highest one because its aim extends beyond existing knowledge to the creation of further knowledge and because it is, or should be, equipped to communicate with all scholarship in all fields of the humanities and the social sciences. I insist on this point, as I feel very strongly that comparative understanding is a key feature of learned scholarship. Among other things, it helps one avoid the dominance of western paradigms. Over the years I profited a great deal from whatever I learned from contemporary theories of structuralism and linguistics, and I owe a great deal of my understanding of Islamic art to the more developed methods of dealing with western art. But this use did not mean that works of Islamic art were like works of western art; it did, however, imply the existence of broad, universal principles behind our understanding of the arts, and a good specialist in Islamic art should easily be ready to handle Christian or classical art.
This learned level is also the easiest one to handle and to understand. Naturally and professionally it is centered on maximum information and on the development of ideas. It is restricted only by the linguistic and intellectual limitations of its practitioners and by the time available to deal with it. The development of consultants or assistants as outlined earlier and the improvements in the operation of the Internet should lead to a scholarship that would improve individual learning and that would be made available through the usual mechanisms of higher education: seminars with students, colloquia with colleagues, and publication in books with a necessarily limited public or in often obscure periodicals. But this level will always remain a relatively restricted one, not only because it requires many technical, especially linguistic, competencies, but also because it demands a passion for learning which is time- and mind-consuming and which exists only in a few people.
The second level can be called the level of social leadership. It involves those people and institutions that are running governments and financial or industrial enterprises and defining the cultural context of their actions. They make decisions about school curricula and university programs, they sponsor films and television programs, they publish newspapers and magazines, and they patronize new projects. The form of the governments in which they operate varies a great deal and in their hands lies something even more important than the sponsorship of buildings or the interplay of social and political activities. They provide rewards and awards, they accept or reject the implications of new investments, whether an airport, a university, or the restoration of a historic building. They decide whether something is going to be called Islamic, Arab, or Egyptian, and they define the features to be used in preparing urban developments. They accept or change symbols (flags, occasionally clothes, or simply colors) credibly associated with a land or a culture. The power of this level is enormous and so are its responsibilities, but it is far less clear to whom its holders are responsible. It is easy enough to identify the aims and ambitions of this level of education, but it is more difficult to describe the ways in which it can be influenced and improved. One should avoid policing of thought or the proclamation of compulsory national, ethnic, or religious sets of forms and doctrines. But how does one maintain a climate of openness to many available forms of knowledge which would ensure that whatever one sponsors reflects traditions when needed, without becoming absurdly self-centric or entirely transformed by foreign imports? When can one abandon one’s obligation to the past and endorse a present in conflict with that past? These are not easy questions to answer, but they must be dealt with by those who advise leaders, if not by the leaders themselves. And they require a very broad education as well as some passion.
The third level of education lies with the general public. There are many myths and falsehoods in the collective memory of large and small groups of people. Such misconceptions can be dangerous, especially when picked up by ignorant news media; they can lead to the destruction of monuments or of sites, to the assassination of opponents, or, on a less upsetting level, to the broadcasting of false slogans or the sponsorship of dubious causes and unacceptable opinions. The recent often tragic, at times merely ludicrous, events surrounding the portraits of the Prophet or the wearing of the burqa have shown how easy it is to inspire actions through ignorance and to whip up destruction instead of discussion. Here I am thinking about a topic removed from my area of knowledge and competence, but I would argue that public education must concentrate on the media shared by all, such as radio, film, and television, and on the primary and secondary schools attended by all boys and girls. The enlightenment and training of primary and secondary school teachers seems to me essential, because they ultimately fashion the beliefs, attitudes, and eventually passions of all men and women. In the case of teachers, it should be relatively easy to develop adequate programs, because most teachers are dedicated to the task of educating the young, even when they are not properly informed of what to teach them. Matters are much less clear when one deals with the media, so often responsible for the dramas of today. But this is yet another area where I have little factual knowledge and whose discussion must be left to others.
On my first level, that of scholarly knowledge and education, the needs are fairly clear and require only important technical components to be successful. The profession of university-level academics, teachers and thinkers in professional schools, and well-established practitioners can be reached with a minimum of effort, once certain mechanisms of information and judgement are developed and the gap lessened between wealthy and poor countries. Matters are more complicated when we deal with education for leaders and for the general public. The Aga Khan Award for Architecture and its several side developments are, to my knowledge, the only organizations which have tried to reach publics that are so different from the professionals of architecture. But I am not aware of any formal depiction of their efforts or of any professional evaluation of their impact beyond the moment of pride and of joy that accompanies any award.
I would like to conclude my remarks by repeating, first of all, that much has been accomplished in the practice of architecture in Islamic societies and in the awareness by the rest of the world of the quality of that architecture and of the talents of those who execute it. Thirty-five years ago our experience had been that students in professional schools sought their models exclusively in Europe and America, not in their own backyards; and Hassan Fathy was more honored as a person than as a model. This is no longer true today, when a knowledge of the vernacular and sometimes even pride in it are present among architects in the Muslim world, and Muslim architects are practicing successfully all over the world. Furthermore, information is relatively easily available everywhere, although not systematically enough and with often dubious interpretations. In general, the levels of knowledge and of practice have changed considerably and can meet the lofty ideals sketched out thirty-five years ago. Other things have changed as well. The collapse of the Soviet Union brought seven new countries within our collective awareness and, more significantly, liberated a high level of technical know-how, still poorly exploited because of linguistic barriers. Muslim Africa and Malaysia are no longer the unknown visual experiences they were then. And the complexities of the Muslim presence in non-Muslim lands have affected the minds of all men and women, unfortunately not always for the good.
These are all developments that could have made all of us richer and more sophisticated in our understanding of the vast Muslim world. But have they done so? Not always. In a paradoxical way, they have strengthened parochialism, because few thinkers, practitioners, or generalists are really involved in imagining what Morocco, Uzbekistan, or Malaysia have in common. It has become easier to firm up one’s own identity than to drown it in too much parallel information. This development is clearly tied to the rise of local nationalisms of all sorts and may not subside, but national passions are so tragically inbred in men and women that I can only hope that the global humanism I wish to preach is not a dream. Then, the rise of violent extremism among Muslims and a destructive and senseless response among non-Muslims have led, for our academic and practical purposes, to an almost paranoiac concern for security and to restrictions on travel or on exchanges of all sorts which are harmful to the growth, even the maintenance, of learned connections and of fruitful knowledge. Behind both extremism and the response to it lies a profound ignorance of everything, from the interpretation of religious texts and awareness of history to the beliefs and motivations of people other than ourselves.
These are all rather frightening prospects, especially when it is so easy to draw a vision of a rich and productive future, in which local creativity can enhance the lives of all men and women and become known to all of them, from financial or political leaders to schoolchildren. People of my age will not know whether this vision will ever become real, but we all know that those here and elsewhere who are under fifty have an exciting challenge ahead of them, and, thanks to the Aga Khan Award and to a number of parallel institutions, one or more vehicles to meet that challenge.
http://www.akdn.org/Content/1037/Speech ... mans-Award
December 31, 2010
This Year, Change Your Mind
By OLIVER SACKS
NEW Year’s resolutions often have to do with eating more healthfully, going to the gym more, giving up sweets, losing weight — all admirable goals aimed at improving one’s physical health. Most people, though, do not realize that they can strengthen their brains in a similar way.
While some areas of the brain are hard-wired from birth or early childhood, other areas — especially in the cerebral cortex, which is central to higher cognitive powers like language and thought, as well as sensory and motor functions — can be, to a remarkable extent, rewired as we grow older. In fact, the brain has an astonishing ability to rebound from damage — even from something as devastating as the loss of sight or hearing. As a physician who treats patients with neurological conditions, I see this happen all the time.
For example, one patient of mine, who had been deafened by scarlet fever at the age of 9, was so adept at lip-reading that it was easy to forget she was deaf. Once, without thinking, I turned away from her as I was speaking. “I can no longer hear you,” she said sharply.
“You mean you can no longer see me,” I said.
“You may call it seeing,” she answered, “but I experience it as hearing.”
Lip-reading, seeing mouth movements, was immediately transformed for this patient into “hearing” the sounds of speech in her mind. Her brain was converting one mode of sensation into another.
In a similar way, blind people often find ways of “seeing.” Some areas of the brain, if not stimulated, will atrophy and die. (“Use it or lose it,” neurologists often say.) But the visual areas of the brain, even in someone born blind, do not entirely disappear; instead, they are redeployed for other senses. We have all heard of blind people with unusually acute hearing, but other senses may be heightened, too.
For example, Geerat Vermeij, a biologist at the University of California-Davis who has been blind since the age of 3, has identified many new species of mollusks based on tiny variations in the contours of their shells. He uses a sort of spatial or tactile giftedness that is beyond what any sighted person is likely to have.
The writer Ved Mehta, also blind since early childhood, navigates in large part by using “facial vision” — the ability to sense objects by the way they reflect sounds, or subtly shift the air currents that reach his face. Ben Underwood, a remarkable boy who lost his sight at 3 and died at 16 in 2009, developed an effective, dolphin-like strategy of emitting regular clicks with his mouth and reading the resulting echoes from nearby objects. He was so skilled at this that he could ride a bike and play sports and even video games.
People like Ben Underwood and Ved Mehta, who had some early visual experience but then lost their sight, seem to instantly convert the information they receive from touch or sound into a visual image — “seeing” the dots, for instance, as they read Braille with a finger. Researchers using functional brain imagery have confirmed that in such situations the blind person activates not only the parts of the cortex devoted to touch, but parts of the visual cortex as well.
One does not have to be blind or deaf to tap into the brain’s mysterious and extraordinary power to learn, adapt and grow. I have seen hundreds of patients with various deficits — strokes, Parkinson’s and even dementia — learn to do things in new ways, whether consciously or unconsciously, to work around those deficits.
That the brain is capable of such radical adaptation raises deep questions. To what extent are we shaped by, and to what degree do we shape, our own brains? And can the brain’s ability to change be harnessed to give us greater cognitive powers? The experiences of many people suggest that it can.
One patient I knew became totally paralyzed overnight from a spinal cord infection. At first she fell into deep despair, because she couldn’t enjoy even little pleasures, like the daily crossword she had loved.
After a few weeks, though, she asked for the newspaper, so that at least she could look at the puzzle, get its configuration, run her eyes along the clues. When she did this, something extraordinary happened. As she looked at the clues, the answers seemed to write themselves in their spaces. Her visual memory strengthened over the next few weeks, until she found that she was able to hold the entire crossword and its clues in her mind after a single, intense inspection — and then solve it mentally. She had had no idea, she later told me, that such powers were available to her.
This growth can even happen within a matter of days. Researchers at Harvard found, for example, that blindfolding sighted adults for as few as five days could produce a shift in the way their brains functioned: their subjects became markedly better at complex tactile tasks like learning Braille.
Neuroplasticity — the brain’s capacity to create new pathways — is a crucial part of recovery for anyone who loses a sense or a cognitive or motor ability. But it can also be part of everyday life for all of us. While it is often true that learning is easier in childhood, neuroscientists now know that the brain does not stop growing, even in our later years. Every time we practice an old skill or learn a new one, existing neural connections are strengthened and, over time, neurons create more connections to other neurons. Even new nerve cells can be generated.
I have had many reports from ordinary people who take up a new sport or a musical instrument in their 50s or 60s, and not only become quite proficient, but derive great joy from doing so. Eliza Bussey, a journalist in her mid-50s who now studies harp at the Peabody conservatory in Baltimore, could not read a note of music a few years ago. In a letter to me, she wrote about what it was like learning to play Handel’s “Passacaille”: “I have felt, for example, my brain and fingers trying to connect, to form new synapses. ... I know that my brain has dramatically changed.” Ms. Bussey is no doubt right: her brain has changed.
Music is an especially powerful shaping force, for listening to and especially playing it engages many different areas of the brain, all of which must work in tandem: from reading musical notation and coordinating fine muscle movements in the hands, to evaluating and expressing rhythm and pitch, to associating music with memories and emotion.
Whether it is by learning a new language, traveling to a new place, developing a passion for beekeeping or simply thinking about an old problem in a new way, all of us can find ways to stimulate our brains to grow, in the coming year and those to follow. Just as physical activity is essential to maintaining a healthy body, challenging one’s brain, keeping it active, engaged, flexible and playful, is not only fun. It is essential to cognitive fitness.
Oliver Sacks is the author of “The Mind’s Eye.”
http://www.nytimes.com/2011/01/01/opini ... &emc=thab1
February 3, 2011
Even More Things in Heaven and Earth
By MICHAEL BYERS
Ann Arbor, Mich.
ASTRONOMERS announced last month that, contrary to previous assumptions, the orbiting body Eris might be smaller than Pluto after all. Since it was the discovery in 2005 of Eris, an object seemingly larger than what had been considered our smallest planet, that precipitated the downgrading of Pluto from full planet to “dwarf,” some think it may be time to revisit Pluto’s status.
Most of us can’t help rooting for Pluto. We liked the idea of a ninth planet, hanging out there like a period at the end of the gorgeous sentence of the solar system. It gave us a sense of completeness. And besides, we were used to it. Pluto’s demotion caused such an outcry because it altered something we thought we knew to be true about our world.
Even More Things in Heaven and Earth
By MICHAEL BYERS
Ann Arbor, Mich.
ASTRONOMERS announced last month that, contrary to previous assumptions, the orbiting body Eris might be smaller than Pluto after all. Since it was the discovery in 2005 of Eris, an object seemingly larger than what had been considered our smallest planet, that precipitated the downgrading of Pluto from full planet to “dwarf,” some think it may be time to revisit Pluto’s status.
Most of us can’t help rooting for Pluto. We liked the idea of a ninth planet, hanging out there like a period at the end of the gorgeous sentence of the solar system. It gave us a sense of completeness. And besides, we were used to it. Pluto’s demotion caused such an outcry because it altered something we thought we knew to be true about our world.
Of course, science doesn’t, and shouldn’t, care what we learned in first grade. If Pluto’s odyssey teaches us anything, it’s that whenever we think we’ve discovered a measure of certainty about the universe, it’s often fleeting, and more often pure dumb luck. The 1930 discovery of Pluto — by Clyde Tombaugh, who coincidentally was born 105 years ago today — is a prime example, a testament not only to Tombaugh’s remarkable perseverance, but also to how a stupendously unlikely run of circumstances can lead to scientific glory.
The search for a ninth planet was led by the Harvard-trained Percival Lowell, a Boston Brahmin who was widely known for announcing the existence of a Martian civilization. Lowell’s hypotheses about a “Planet X” were based on optimistic interpretations of inconclusive data. Many had observed that the orbit of Uranus seemed to be perturbed by a gravitational influence beyond the orbit of Neptune. If the source of that pull could be determined, he speculated, a fellow could point a telescope at that source and find an undiscovered world.
So Lowell, in the Arizona observatory he had built, set out to do just that. His method was not without precedent. But in 1916, after more than a decade of exquisitely delicate mathematics and erratic searching, Lowell died, his reputation as a gifted crackpot confirmed.
Thirteen years later, Clyde Tombaugh was hired by V. M. Slipher, the director of the Lowell Observatory, to resume the search.
At 22, Tombaugh had been making his own telescopes for years in a root cellar on his father’s Kansas farm (where the air was cool and still enough to allow for the correction of microscopic flaws in the mirrors he polished by hand). The resulting telescopes were of such high quality that Tombaugh could draw the bands of weather on Jupiter, 400 million miles away. Ambitious but too poor to afford college, Tombaugh had written at random to Slipher, seeking career advice. Slipher took a look at the drawings that Tombaugh had included, and invited him to Arizona.
For months, Tombaugh used a device called a blink comparator to pore over scores of photographic plates, hunting for one moving pinprick amid millions of stars. When he finally found the moving speck in February 1930, it was very nearly where Lowell’s mathematics had predicted it would be. Headlines proclaimed Lowell’s predictions confirmed.
But there was something strange about the object. It soon became apparent that it was much too small to have exerted any effect on Uranus’s orbit. Astronomers then assumed it had to be a moon, with a larger planet nearby. But despite more searching, no larger object came to light.
They eventually had to face the fact that the discovery of a new object so near Lowell’s predicted location was nothing more than a confounding coincidence. Tombaugh’s object, soon christened Pluto, wasn’t Planet X. Instead of an example of good old American vision and know-how, the discovery was the incredibly fluky result of a baseless dream.
Decades later, this was proved doubly true. Data from the 1989 Voyager 2 flyby showed that the mass of Neptune had been inaccurately measured by about 0.5 percent all along, and that, in fact, the orbit of Uranus had never been inexplicably disturbed to begin with. Percival Lowell had been hunting a ghost. And Clyde Tombaugh, against all odds, had found one.
All of which is to say, science is imperfect. It is a human enterprise, subject to passions and whims, accidents and luck. Astronomers have since discovered dozens of other objects in our solar system approaching Pluto’s size, amounting to a whole separate class of orbiting bodies. And just this week, researchers announced that they had identified 1,235 possible planets in other star systems.
We can mourn the demotion of our favorite planet. But the best way to honor Lowell and Tombaugh is to celebrate the fact that Pluto — while never quite the world it was predicted to be — is part of a universe more complex, varied and surprising than even its discoverers could have imagined.
Of course, for those of us who grew up chanting “My Very Eager Mother Just Served Us New Pizza,” nine planets will always seem more fitting than eight. New facts are unsettling. But with the right mindset, the new glories can more than make up for the loss of the old.
Michael Byers is the author of the novel “Percival’s Planet.”
http://www.nytimes.com/2011/02/04/opini ... emc=tha212