Concept of Knowledge Revisited
The Questions that Really Matter
The simplest questions are the most profound.
Where were you born?
Where is your home?
Where are you going?
What are you doing?
Think about these once in a while
and watch your answers change.
- Richard Bach
A prudent question is one-half of wisdom.
- Francis Bacon
The only questions that really matter are the ones you ask yourself.
- Ursula K. Le Guin
The wise man doesn't give the right answers,
he poses the right questions.
- Claude Levi-Strauss
Successful people ask better questions,
and as a result, they get better answers.
- Tony Robbins
Questions are a microscope peering into the depths of the soul.
- Jonathan Lockwood Huie
******
How Should We Respond to ‘Evil’?
Extract:
"Ricoeur agrees with many other thinkers that evil is not a thing per se, but rather exists in a sort of black hole of thought, an aporia. This fact alone complicates arguments for the destruction of evil: how do you obliterate something that has no substance? For Ricoeur, we conceive of evil through the realm of myth, or grand narratives that express common human experience. Myth is not false; rather, it encapsulates truth about subjects like evil that cannot be perceived fully through reason alone. In this sense, “the axis of evil” is, arguably, a kind of myth, an explanation that makes sense of calamity in a world we think of as otherwise good and in which we can all participate.
Because evil exists beyond the limits of reason, what matters for Ricoeur is not that we identify evil, but that we respond to it appropriately. He rightly observes that the tragedy of evil is not the act committed, but the experience of the victim. Separating evil perpetrated from evil suffered shifts the concern from what or who is evil to the best possible action in the face of it, which according to him is “not a solution, but a response.”
In the common conception, solutions to evil require retribution, and the most obvious way to achieve retribution is through violence. Responses, on the other hand, engender what Ricoeur calls “wisdom,” an unwavering commitment to relieve and prevent suffering. Any violence used in a response to evil would, therefore, be focused on the alleviation of suffering rather than the attempt to stamp out evil where we think we see it."
More...
http://www.nytimes.com/2016/06/27/opini ... collection
There Is No Scientific Method
In 1970, I had the chance to attend a lecture by Stephen Spender. He described in some detail the stages through which he would pass in crafting a poem. He jotted on a blackboard some lines of verse from successive drafts of one of his poems, asking whether these lines (a) expressed what he wanted to express and (b) did so in the desired form. He then amended the lines to bring them closer either to the meaning he wanted to communicate or to the poetic form of that communication.
I was immediately struck by the similarities between his editing process and those associated with scientific investigation and began to wonder whether there was such a thing as a scientific method. Maybe the method on which science relies exists wherever we find systematic investigation. In saying there is no scientific method, what I mean, more precisely, is that there is no distinctly scientific method.
There is meaning, which we can grasp and anchor in a short phrase, and then there is the expression of that meaning that accounts for it, whether in a literal explanation or in poetry or in some other way. Our knowledge separates into layers: Experience provides a base for a higher layer of more conceptual understanding. This is as true for poetry as for science.
More...
http://www.nytimes.com/2016/07/04/opini ... .html?_r=0
Becoming Wiser
The mind is not a vessel to be filled,
but a fire to be kindled.
- Plutarch
We are not afraid to follow truth
wherever it may lead,
nor to tolerate any error
so long as reason is left free to combat it.
- Thomas Jefferson
Education is not the filling of a pail,
but the lighting of a fire.
- William Butler Yeats
Intellectual growth should commence at birth
and cease only at death.
- Albert Einstein
Half of everything you were ever taught is wrong;
the question is which half.
- Jonathan Lockwood Huie
Consciousness in the Aesthetic Imagination
By J.F. Martel | Posted on July 11, 2016
“The real voyage of discovery does not consist in looking for new landscapes, but in seeing with new eyes.”
—Marcel Proust
Sunflower Events
What can art tell us about the nature of consciousness? The question is meaningless if it refers to the personal convictions of this or that poet or musician, this or that genre, school, or movement. The goal of this essay is to explore what the things artists make—the works of art themselves—tell us about the nature of mind and matter, self and world, over and above their creators’ personal beliefs. Is there a metaphysics that art as such implies? Or maybe the question is better framed in McLuhanian terms: What is the message of the medium of art with regard to the nature of consciousness?
The first thing that strikes me is that art is not discursive. It doesn’t constitute an attempt to represent things—to talk or think about them. Samuel Barber’s Adagio for Strings isn’t a piece of music about sadness. It is a sadness in itself. It is a sadness that has acquired a form outside the private experience of a subjective mind. If after the death of the last living thing on earth, there remained a radio playing the Adagio over and over again among the ashes of civilization, there would still be sadness in the world.
Works of art like Barber’s famous composition do not represent but enact the movements of experience. By doing this, they preserve these movements in material form. Only poor art tries to represent or reproduce, and that is why it generates only clichés, stereotypes, and opinions. Genuine art isn’t representational but demonstrational and imitative. It is deeply implicated in the experiential dimension, all that we associate with consciousness. But whereas most of the time we approach consciousness discursively from an assumed outside perspective, works of art catch it from within, in media res. What art gives us is consciousness in action—not consciousness of the world, but consciousness in the world.
More...
https://www.metapsychosis.com/conscious ... agination/
The Journey Within
The only journey is the one within.
- Rainer Maria Rilke
The longest journey is the journey inwards.
Of him who has chosen his destiny, Who has started
upon his quest for the source of his being.
- Dag Hammarskjold
Never make your home in a place.
Make a home for yourself inside your own head.
You'll find what you need to furnish it -
memory, friends you can trust, love of learning,
and other such things.
That way it will go with you wherever you journey.
- Tad Williams
The key to growth is the introduction of
higher dimensions of consciousness into our awareness.
- Lao Tzu
We shall not cease from exploration
And the end of all our exploring
Will be to arrive where we started
And know the place for the first time
- T. S. Eliot
How Artists Change the World
Extract:
Most of all, he was using art to reteach people how to see.
We are often under the illusion that seeing is a very simple thing. You see something, which is taking information in, and then you evaluate, which is the hard part.
But in fact perception and evaluation are the same thing. We carry around unconscious mental maps, built by nature and experience, that organize how we scan the world and how we instantly interpret and order what we see.
With these portraits, Douglass was redrawing people’s unconscious mental maps. He was erasing old associations about blackness and replacing them with new ones. As Gates writes, he was taking an institution like slavery, which had seemed to many so inevitable, and leading people to perceive it as arbitrary. He was creating a new ideal of a just society and a fully alive black citizen, and therefore making current reality look different in the light of that ideal.
“Poets, prophets and reformers are all picture makers — and this ability is the secret of their power and of their achievements,” Douglass wrote. This is where artists make their mark, by implanting pictures in the underwater processing that is upstream from conscious cognition. Those pictures assign weights and values to what the eyes take in.
I never understand why artists want to get involved in partisanship and legislation. The real power lies in the ability to recode the mental maps people project into the world.
A photograph is powerful, even in the age of video, because of its ability to ingrain a single truth. The special “Vision and Justice” issue of Aperture shows that the process of retraining the imagination is ongoing. There are so many images that startlingly put African-American models in places where our culture assumes whiteness — in the Garden of Eden, in Vermeer’s “Girl With a Pearl Earring.”
These images don’t change your mind; they smash through some of the warped lenses through which we’ve been taught to see.
More.....
http://www.nytimes.com/2016/08/02/opini ... ef=opinion
Seek Your Own Wisdom
Do not seek to follow in the footsteps of the wise;
rather, seek what they sought.
- Matsuo Basho
No one saves us but ourselves.
No one can and no one may.
We ourselves must walk the path.
- The Buddha
Travelers, there is no path.
Paths are made by walking.
- Antonio Machado
There is no subject so old that something new cannot be said about it.
- Fyodor Dostoyevsky
Quotations are dead unless a spur for inquiry.
- Jonathan Lockwood Huie
Beware of False Knowledge
To know yet to think that one does not know is best;
Not to know yet to think that one knows
will lead to difficulty.
- Lao Tzu
The greatest obstacle to discovery is not ignorance -
it is the illusion of knowledge.
- Daniel J. Boorstin
Beware of false knowledge;
it is more dangerous than ignorance.
- George Bernard Shaw
We want the facts to fit the preconceptions.
When they don't, it is easier to ignore the facts
than to change the preconceptions.
- Jessamyn West
Truth is not defined by
how many people believe something.
Ask. Question. Think.
Decide - for yourself.
- Jonathan Lockwood Huie
A Life of Meaning (Reason Not Required)
Few would disagree with two age-old truisms: We should strive to shape our lives with reason, and a central prerequisite for the good life is a personal sense of meaning. Ideally, the two should go hand in hand. We study the lessons of history, read philosophy, and seek out wise men with the hope of learning what matters. But this acquired knowledge is not the same as the felt sense that one’s life is meaningful.
Though impossible to accurately describe, we readily recognize meaning by its absence. Anyone who has experienced a bout of spontaneous depression knows the despair of feeling that nothing in life is worth pursuing and that no argument, no matter how inspired, can fill the void. Similarly, we are all familiar with the countless narratives of religious figures “losing their way” despite retaining their formal beliefs.
Any philosophical approach to values and purpose must acknowledge this fundamental neurological reality: a visceral sense of meaning in one’s life is an involuntary mental state that, like joy or disgust, is independent from and resistant to the best of arguments. If philosophy is to guide us to a better life, it must somehow bridge this gap between feeling and thought.
As neuroscience attempts to pound away at the idea of pure rationality and underscore the primacy of subliminal mental activity, I am increasingly drawn to the metaphor of idiosyncratic mental taste buds. From genetic factors (a single gene determines whether we find brussels sprouts bitter or sweet), to the cultural — considering fried grasshoppers and grilled monkey brains as delicacies — taste isn’t a matter of the best set of arguments. Anyone who’s tried to get his child to eat something she doesn’t like understands the limits of the most cunning of inducements. If thoughts, like foods, come in a dazzling variety of flavors, and personal taste trumps reason, philosophy — which relies most heavily on reason, and aims to foster the acquisition of objective knowledge — is in a bind.
Though we don’t know how thoughts are produced by the brain, it is hard to imagine having a thought unaccompanied by some associated mental state. We experience a thought as pleasing, revolting, correct, incorrect, obvious, stupid, brilliant, etc. Though integral to our thoughts, these qualifiers arise out of different brain mechanisms from those that produce the raw thought. As examples, feelings of disgust, empathy and knowing arise from different areas of the brain and can be provoked de novo in volunteer subjects via electrical stimulation even when the subjects are unaware of having any concomitant thought at all. This chicken-and-egg relationship between feelings and thought can readily be seen in how we make moral judgments.
The psychologist Jonathan Haidt and others have shown that our moral stances strongly correlate with the degree of activation of those brain areas that generate a sense of disgust and revulsion. According to Haidt, reason provides an after-the-fact explanation for moral decisions that are preceded by inherently reflexive positive or negative feelings. Think about your stance on pedophilia or denying a kidney transplant to a serial killer. Long before you have a moral position in place, each scenario will have already generated some degree of disgust or empathy.
Nowhere is this overpowering effect of biology on how we think more evident than in the paradox-plagued field of philosophy of mind. Even those cognitive scientists who have been most instrumental in uncovering our myriad innate biases continue to believe in the primacy of reason. Consider the argument by the Yale psychology professor Paul Bloom that we do not have free will but, since we are capable of conscious rational deliberation, we are responsible for our actions.
Though deeply sympathetic to his conclusion, I am puzzled by his argument. The evidence most supportive of Bloom’s contention that we do not have free will also is compelling evidence against the notion of conscious rational deliberation. In the 1980s the neurophysiologist Ben Libet of the University of California, San Francisco, showed that the brain generates action-specific electrical activity nearly half a second before the subject consciously “decides” to initiate the action. Though interpretations of the results are the subject of considerable controversy, a number of subsequent studies have confirmed that the conscious sense of willing an action is preceded by subliminal brain activity likely to indicate that the brain is preparing to initiate the action.
An everyday example of this temporal illusion is seen in high-speed sports such as baseball and tennis. Though batters sense that they withhold deciding whether to swing until they see the ball near the plate, their swing actually begins shortly after the ball leaves the pitcher’s hand. The same applies to tennis players returning a serve coming at them at 140 miles an hour. Initiation of the action precedes full conscious perception of seeing the approaching ball.
It is unlikely that there is any fundamental difference in how the brain initiates thought and action. We learn the process of thinking incrementally, acquiring knowledge of language, logic, the external world and cultural norms and expectations just as we learn physical actions like talking, walking or playing the piano. If we conceptualize thought as a mental motor skill subject to the same temporal reorganization as high-speed sports, it’s hard to avoid the conclusion that the experience of free will (agency) and conscious rational deliberation are both biologically generated illusions.
What then are we to do with the concept of rationality? It would be a shame to get rid of a term useful in characterizing the clarity of a line of reasoning. Everyone understands that “being rational” implies trying to strip away biases and innate subjectivity in order to make the best possible decision. But what if the word rational leads us to scientifically unsound conclusions?
We describe the decision to jam on the brakes at the sight of a child running into the road as being rational, even when we understand that it is reflexive. However, few of us would say that a self-driving car performing the same maneuver was acting rationally. It’s pretty obvious that the difference in how we assign rationality isn’t dependent upon how decisions are made, but how we wish to see ourselves in relationship to the rest of the animal kingdom, and indeed even to plants and intelligent machines.
It is hard to imagine what would happen to modern thought if we abandoned the notion of rationality. Scientific method might partly fill the void. With quantum physics, scientists have been able to validate counterintuitive theories. But empirical methods can’t help us with abstract, non-measurable, linguistically ambiguous concepts such as purpose and meaning. It’s no wonder that pre-eminent scientists like Stephen Hawking have gleefully declared, “Philosophy is dead.”
Going forward, the greatest challenge for philosophy will be to remain relevant while conceding that, like the rest of the animal kingdom, we are decision-making organisms rather than rational agents, and that our most logical conclusions about moral and ethical values can’t be scientifically verified nor guaranteed to pass the test of time. (The history of science should serve as a cautionary tale for anyone tempted to believe in the persistent truth of untestable ideas).
Even so, I would hate to discard such truisms as “know thyself” or “the unexamined life isn’t worth living.” Reason allows us new ways of seeing, just as close listening to a piece of music can reveal previously unheard melodies and rhythms or observing an ant hill can give us an unexpected appreciation of nature’s harmonies. These various forms of inquiry aren’t dependent upon logic and verification; they are modes of perception.
Robert A. Burton, a former chief of neurology at the University of California, San Francisco, Medical Center at Mount Zion, is the author of “On Being Certain: Believing You Are Right Even When You’re Not,” and “A Skeptic’s Guide to the Mind: What Neuroscience Can and Cannot Tell Us About Ourselves.”
http://www.nytimes.com/2016/09/05/opini ... d=71987722
Teaching Calvin in California
BERKELEY, Calif. — We spend a great deal of time worrying about theology these days. From extremist violence to the American culture wars, the theological imagination can feel like an existential threat to liberal democracy. Or more simply, just to common decency. No surprise that many believe that theology has no place in the secular college classroom.
Over the years, I have decided that this is wrong. I learned to think otherwise teaching Calvin in California.
Given my profession, I am naturally curious about theology. But it takes collaborative work in the classroom to persuade students that they should be, too. To persuade them that theology is more than its bad press; that it is a rich subject as likely to provoke disbelief as belief; that it is more likely to open than to close interesting conversations about religion and public life. To persuade them, in short, that theology matters to a liberal education.
In my history of Christianity course, we read a number of challenging writers. Each one I ask students to read with as much sympathy, charity and critical perspective as they can muster. But nothing outrages them — not the writings of Augustine or Erasmus or Luther — more than two or three pages of John Calvin.
Calvin was the most influential religious reformer of the 16th century. His theological imagination and organizational genius prepared the way for almost all forms of American Protestantism, from the Presbyterians to the Methodists to the Baptists. He was also a severe and uncompromising thinker. The Ayatollah of Geneva, some have called him.
More...
http://www.nytimes.com/2016/09/12/opini ... d=45305309
The Difference Between Rationality and Intelligence
ARE you intelligent — or rational? The question may sound redundant, but in recent years researchers have demonstrated just how distinct those two cognitive attributes actually are.
It all started in the early 1970s, when the psychologists Daniel Kahneman and Amos Tversky conducted an influential series of experiments showing that all of us, even highly intelligent people, are prone to irrationality. Across a wide range of scenarios, the experiments revealed, people tend to make decisions based on intuition rather than reason.
More...
http://www.nytimes.com/2016/09/18/opini ... inion&_r=0
My Syllabus, My Self
Something unusual has happened in the American university system in recent months: The syllabus, long seen as little more than an obligatory academic to-do list, has become a highly charged politicized space.
As the trigger warning debate rages on, I am tempted to step back and rethink this strange genre of document. As a site where politics, law, logistics and intellect meet, and where the soul of a teacher is most visible to his or her students, the syllabus is not a mere reading list. It is the interface between the institution, the instructor and the student. To me, the new urgency surrounding the syllabus makes sense.
If you are a syllabus fetishist like me, you have a great nostalgia for those days you received your syllabus on the first day of class. The object was endowed with an almost numinous power. From the first moment you held it in your hands (before the digital era when printing out a syllabus didn’t seem to be an environmentally relevant question), your future for the semester crystallized. To nearly all course-related questions posed by the students the answer was, “It’s in the syllabus,” as though it were a holy book.
I’ve been a syllabus-maker since the early 2000s and still get all kinds of thrills from devising these humble documents I hope others will find just as thrilling.
As a thought experiment, try putting together your own syllabus. What would your course be called, and what would be on it?
Syllabus-making is a kind of composting process, a vegetal reworking of the old into the new. Dead authors are resurrected through this process, perhaps put alongside living ones. Indeed, the syllabus operates according to a logic of combinatorics and reprocessing: How can all the knowledge about a particular topic be reconfigured to the benefit of each new generation of students?
A syllabus should have an ageless quality to it. Its themes should be renewably relevant and timeless even when addressing some contemporary problem that has just crept up. The good syllabus does not expire.
It is also a historical document. Imagine if we had access to the syllabuses of all of the great thinkers who ever made one (for the record, I prefer syllabi, but this paper’s style rules dictate otherwise). We would be able to take a course with them privately, separated by time but nonetheless plugged into their brains and hearts. (The syllabus is an affair of both.) We could see whom they prioritized intellectually and through which texts or other cultural artifacts they believed a conceptual problem could best be addressed. This unattainable dream haunts me at times.
What is a syllabus supposed to do? What are its parts? What is political about it?
More...
http://www.nytimes.com/2016/10/17/opini ... 87722&_r=0
The Hidden Music of Words
Academics shouldn’t scoff at literary prose—they have much to learn from it
By Aaron Sachs
October 11, 2016
What is the difference between academic and literary writing? If this sounds like a joke, the punch lines are many. Style. Voice. Jargon. Signposts (“The aim of this chapter is …”). In fact, jargon so litters academic writing that signposts are indispensable to reader comprehension.
Surely my biases are already clear, which perhaps makes me come across as self-loathing, since I’m an academic myself. Guilty. But I also appreciate much of what we academics contribute to this debate. We excel at argumentation. We’re thorough. Our extensive citations allow readers to check our work and involve themselves deeply in the conversations that we’re constantly trying to spark. Some professional scholars even embrace writing as an art and a craft in itself rather than as merely a means of presenting research findings.
I’m particularly grateful to my fellow travelers who have seen fit to teach writing seminars designed for Ph.D. students. Maybe the next generation of scholars can learn how to write more artfully. Nevertheless, stylish prose strikes many academics as dilettantish, causing younger scholars to adhere to the established templates for fear of whiffing on the academic job market.
But shouldn’t there be ways of improving academic writing without sacrificing scholarly credibility? Journalists, essayists, and even memoirists make use of academic research all the time to bolster their prose. Why couldn’t scholars steal some literary techniques from them?
More....
https://theamericanscholar.org/the-hidd ... urce=email
What Counts as Science?
The arXiv preprint service is trying to answer an age-old question.
xxx.lanl.gov. The address was cryptic, with a tantalizing whiff of government secrets, or worse.
The server itself was exactly the opposite. Government, yes—it was hosted by Los Alamos National Laboratory—but openly accessible in a way that, in those early Internet days of the 1990s, was totally new, and is still game changing today.
The site, known as arXiv (pronounced “archive,” and long since decamped to the more wholesome address “arXiv.org” and to the stewardship of the Cornell University Library), is a vast repository of scientific preprints, articles that haven’t yet gone through the peer-review process or aren’t intended for publication in refereed journals. (Papers can also appear, often in revised form, after they have been published elsewhere.) As of July 2016, there were more than a million papers on arXiv, leaning heavily toward the hardest of the hard sciences: math, computer science, quantitative biology, quantitative finance, statistics, and above all, physics.
THE ARXIVIST: Physicist Paul Ginsparg started arXiv in 1991, expecting to catalog about 100 research papers. As papers deluged him, he sought help in a computer program, which he learned to write “by sitting in over a decade of machine-learning seminars.” Ginsparg is now a professor in physics and information science at Cornell University. Photo: Robert Lieberman (Cornell University)
ArXiv is the kind of library that, 30 years ago, scientists could only dream of: totally searchable, accessible from anywhere, free to publish to and read from, and containing basically everything in the field that’s worth reading. At this golden moment in technological history, when you can look up the history of atomic theory on Wikipedia while waiting in line at Starbucks, this might seem trivial. But in fact it was revolutionary.
Practically, arXiv has leveraged new technologies to create a boon for its community. What is less visible, though, is that it has had to answer a difficult philosophical question, one which resonates through the rest of the scientific community: What, exactly, is worth reading? What counts as science?
Before arXiv, preprint papers were available only within small scientific circles, distributed by hand and by mail, and the journals in which they were ultimately published months later (if they were published at all) were holed up in university libraries. But arXiv has democratized the playing field, giving scientists instant access to ideas from all kinds of colleagues, all over the world, from prestigious chairs at elite universities to post-docs drudging away at off-brand institutions and scientists in developing countries with meager research support.
Paul Ginsparg set up arXiv in 1991, when he was a 35-year-old physicist at Los Alamos. He expected only about 100 papers to go out to a few hundred email subscribers in the first year. But by the summer of 1992, more than 1,200 papers had been submitted. It was a good problem to have, but still a problem. While Ginsparg had no intention of giving incoming papers the top-to-tail scrutiny of peer review, he did want to be sure that readers could find the ones they were most interested in. So he started binning the incoming papers into new categories and sub-categories and bringing on more and more moderators, who took on the work as volunteers, as a service to their scientific community.
The arXiv credo is that papers should be “of interest, relevance, and value” to the scientific disciplines that arXiv serves. But as the site and its public profile grew, it began attracting papers from outside the usual research circles, and many of those papers didn’t pass the test. They weren’t necessarily bad science, says Ginsparg. Bad science can be examined, tested, and refuted. They were “non-science”—sweeping theories grandly claiming to overturn Einstein, Newton, and Hawking; to reveal hidden connections between physics and ESP and UFOs; and to do it all almost entirely without math and experiment.
The arXiv’s default stance is acceptance—papers are innocent “until proven guilty,” Ginsparg says—but the non-science papers were a waste of scholarly readers’ time. And if they were allowed to share the same virtual shelf space with legitimate science, they could create confusion among arXiv’s growing audience of journalists and policymakers. So, paper by paper, moderators had to make the call: What is and isn’t science?
Most arXiv users were satisfied with the moderators’ decisions. But some people felt papers were getting tossed aside that should have made it into arXiv, and some scientists—especially those on the academic fringes—accused arXiv moderators of censoring against-the-grain ideas.
The problem that arXiv’s moderators were tackling wasn’t a new one. In 1959, philosopher of science Thomas Kuhn called it “the essential tension”: the conflict between traditional scholarly limits, which place scientific questions and practices in- or out-of-bounds, and the kind of freewheeling inquiry that embraces maverick ideas and methods. To move forward, science needs both, the thinking goes. If innovative ideas often spring up in the spaces between well-established disciplines, will unclassifiable but credible papers get lost in the muck of the truly incoherent?
ArXiv moderators, though, don’t have much time for Kuhnian rumination. Many users probe the site daily and arXiv wants to stay as fresh and up-to-date as possible. So, from the beginning, arXiv imposed a “very unforgiving 24-hour turnaround,” says Ginsparg. Any paper that comes in before 4 p.m. Eastern on a weekday is cued to go online at 8 p.m. that night. Moderators have less than a day to flag a paper for rejection or further review—and sometimes as little as four hours. Mindful of the daily dash, Ginsparg had an idea for how to give his volunteer moderators a helping hand: a computer program that would do some of the thinking for them.
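The article does not describe Ginsparg's program in any detail. As a rough illustration of the general approach such a moderation helper might take, here is a minimal sketch, in Python, of a text classifier that flags submissions for human review before the nightly cutoff. The toy abstracts, labels, threshold, and the choice of TF-IDF features with logistic regression are assumptions made for illustration, not a description of Ginsparg's actual program or of arXiv's moderation pipeline.

# Illustrative sketch only: flag submissions that look unlike previously accepted
# papers so a human moderator can review them before the nightly cutoff.
# The toy data and model choice are assumptions, not arXiv's real system.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training examples: 1 = accepted in the past, 0 = rejected as non-science.
abstracts = [
    "We compute two-loop corrections to the Higgs production cross section.",
    "A new bound on the mixing time of Markov chains via spectral gap estimates.",
    "Einstein was wrong: a simple argument shows relativity contradicts common sense.",
    "The hidden link between quantum physics, ESP, and universal consciousness revealed.",
]
labels = [1, 1, 0, 0]

# TF-IDF features over word unigrams and bigrams, fed to a logistic regression classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(abstracts, labels)

def needs_human_review(abstract, threshold=0.5):
    """Flag the paper if the model's estimated probability of 'accept' falls below the threshold."""
    p_accept = model.predict_proba([abstract])[0][1]  # classes_ are ordered [0, 1]
    return p_accept < threshold

print(needs_human_review("A theory of everything overturning Newton, with no equations."))

In practice a filter like this would need far more training data and careful calibration, and it would only route doubtful papers to human moderators rather than make the call on its own.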
More....
http://nautil.us/issue/41/selection/wha ... as-science
The arXiv preprint service is trying to answer an age-old question.
xxx.lanl.gov. The address was cryptic, with a tantalizing whiff of government secrets, or worse.
The server itself was exactly the opposite. Government, yes—it was hosted by Los Alamos National Laboratory—but openly accessible in a way that, in those early Internet days of the 1990s, was totally new, and is still game changing today.
The site, known as arXiv (pronounced “archive,” and long since decamped to the more wholesome address “arXiv.org” and to the stewardship of the Cornell University Library), is a vast repository of scientific preprints, articles that haven’t yet gone through the peer-review process or aren’t intended for publication in refereed journals. (Papers can also appear, often in revised form, after they have been published elsewhere.) As of July 2016, there were more than a million papers on arXiv, leaning heavily toward the hardest of the hard sciences: math, computer science, quantitative biology, quantitative finance, statistics, and above all, physics.
THE ARXIVIST: Physicist Paul Ginsparg started arXiv in 1991, expecting to catalog about 100 research papers. As papers deluged him, he sought help in a computer program, which he learned to write “by sitting in over a decade of machine-learning seminars.” Ginsparg is now a professor in physics and information science at Cornell University. Photo: Robert Lieberman (Cornell University)
ArXiv is the kind of library that, 30 years ago, scientists could only dream of: totally searchable, accessible from anywhere, free to publish to and read from, and containing basically everything in the field that’s worth reading. At this golden moment in technological history, when you can look up the history of atomic theory on Wikipedia while waiting in line at Starbucks, this might seem trivial. But in fact it was revolutionary.
Practically, arXiv has leveraged new technologies to create a boon for its community. What is less visible, though, is that it has had to answer a difficult philosophical question, one which resonates through the rest of the scientific community: What, exactly, is worth reading? What counts as science?
Before arXiv, preprint papers were available only within small scientific circles, distributed by hand and by mail, and the journals in which they were ultimately published months later (if they were published at all) were holed up in university libraries. But arXiv has democratized the playing field, giving scientists instant access to ideas from all kinds of colleagues, all over the world, from prestigious chairs at elite universities to post-docs drudging away at off-brand institutions and scientists in developing countries with meager research support.
Paul Ginsparg set up arXiv in 1991, when he was a 35-year-old physicist at Los Alamos. He expected only about 100 papers to go out to a few hundred email subscribers in the first year. But by the summer of 1992, more than 1,200 papers had been submitted. It was a good problem to have, but still a problem. While Ginsparg had no intention of giving incoming papers the top-to-tail scrutiny of peer review, he did want to be sure that readers could find the ones they were most interested in. So he started binning the incoming papers into new categories and sub-categories and bringing on more and more moderators, who took on the work as volunteers, as a service to their scientific community.
The arXiv credo is that papers should be “of interest, relevance, and value” to the scientific disciplines that arXiv serves. But as the site and its public profile grew, it began attracting papers from outside the usual research circles, and many of those papers didn’t pass the test. They weren’t necessarily bad science, says Ginsparg. Bad science can be examined, tested, and refuted. They were “non-science”—sweeping theories grandly claiming to overturn Einstein, Newton, and Hawking; to reveal hidden connections between physics and ESP and UFOs; and to do it all almost entirely without math and experiment.
The arXiv’s default stance is acceptance—papers are innocent “until proven guilty,” Ginsparg says—but the non-science papers were a waste of scholarly readers’ time. And if they were allowed to share the same virtual shelf space with legitimate science, they could create confusion among arXiv’s growing audience of journalists and policymakers. So, paper by paper, moderators had to make the call: What is and isn’t science?
Most arXiv users were satisfied with the moderators’ decisions. But some people felt papers were getting tossed aside that should have made it into arXiv, and some scientists—especially those on the academic fringes—accused arXiv moderators of censoring against-the-grain ideas.
The problem that arXiv’s moderators were tackling wasn’t a new one. In 1959, philosopher of science Thomas Kuhn called it “the essential tension”: the conflict between traditional scholarly limits, which place scientific questions and practices in- or out-of-bounds, and the kind of freewheeling inquiry that embraces maverick ideas and methods. To move forward, science needs both, the thinking goes. If innovative ideas often spring up in the spaces between well-established disciplines, will unclassifiable but credible papers get lost in the muck of the truly incoherent?
ArXiv moderators, though, don’t have much time for Kuhnian rumination. Many users probe the site daily, and arXiv wants to stay as fresh and up-to-date as possible. So, from the beginning, arXiv imposed a “very unforgiving 24-hour turnaround,” says Ginsparg. Any paper that comes in before 4 p.m. Eastern on a weekday is queued to go online at 8 p.m. that night. Moderators have less than a day to flag a paper for rejection or further review—and sometimes as little as four hours. Mindful of the daily dash, Ginsparg had an idea for how to give his volunteer moderators a helping hand: a computer program that would do some of the thinking for them.
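The extract doesn’t say how that program works, but a moderator-assist tool of this general kind can be sketched as a simple text classifier over titles and abstracts. Below is a minimal, hypothetical sketch using scikit-learn; the toy abstracts, the labels, and the choice of TF-IDF plus logistic regression are illustrative assumptions on my part, not arXiv’s actual pipeline.

```python
# A minimal, hypothetical sketch of a moderator-assist text classifier.
# This is NOT arXiv's actual system: the toy abstracts, the labels, and the
# choice of TF-IDF plus logistic regression are illustrative assumptions only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training set: abstracts previously accepted (1) or flagged for review (0).
abstracts = [
    "We compute two-loop corrections to the Higgs self-coupling.",
    "A new bound on the entanglement entropy of conformal field theories.",
    "Einstein and Newton were wrong: gravity is caused by universal consciousness.",
    "UFO propulsion explained without mathematics via hidden aether vortices.",
]
labels = [1, 1, 0, 0]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(abstracts, labels)

# Score an incoming abstract; low scores would be routed to a human moderator.
new_abstract = "We prove a duality between topological strings and matrix models."
print(model.predict_proba([new_abstract])[0, 1])   # estimated probability of "fine to post"
```

In practice a score like this would only triage submissions for the human moderators racing the 24-hour clock, not replace their judgment about what counts as science.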
More...
http://nautil.us/issue/41/selection/wha ... as-science
American Universities Must Take a Stand
Not since the era of witch hunts and “red baiting” has the American university faced so great a threat from government. How is the university to function when a president’s administration blurs the distinction between fact and fiction by asserting the existence of “alternative facts”? How can the university turn a blind eye to what every historian knows to be a key instrument of modern authoritarian regimes: the capacity to dress falsehood up as truth and reject the fruits of reasoned argument, evidence and rigorous verification?
The atmosphere of suspicion and insecurity created by the undermining of truth provides the perfect environment for President Trump’s recent actions on immigration. The American university’s future, indeed its most fundamental reason for being, is imperiled by a government that constructs walls on the Mexican border, restricts Muslim immigrants and denigrates the idea of America as a destination for refugees.
Although American universities did not always welcome the huge influx of refugees after the Nazi seizure of power in 1933, that intellectual migration transformed a provincial and second-rate higher education system into the finest in the world. Manufacturing may have fled our borders, but American higher education remains a powerful and competitive force, a destination for students and scholars everywhere and a vital engine of employment and economic health. An astonishingly large percentage of graduate students and professors in science today are foreigners and immigrants.
I am a Jewish immigrant who came here as part of a family that was stateless, and my deep patriotism is rooted in that experience. I benefited from American humanitarianism, and I have worked my entire life to give back to this country. An America inhospitable to immigrants and foreigners, a place of fear and danger instead of refuge, is unthinkable in the context of the nation’s history and founding principles. If a more practical argument is required, think of the consequences for the quality and future of our colleges and universities, and their highly prized superiority in science and engineering.
More..
https://www.nytimes.com/2017/02/08/opin ... stand.html
Why We Believe Obvious Untruths
How can so many people believe things that are demonstrably false? The question has taken on new urgency as the Trump administration propagates falsehoods about voter fraud, climate change and crime statistics that large swaths of the population have bought into. But collective delusion is not new, nor is it the sole province of the political right. Plenty of liberals believe, counter to scientific consensus, that G.M.O.s are poisonous, and that vaccines cause autism.
The situation is vexing because it seems so easy to solve. The truth is obvious if you bother to look for it, right? This line of thinking leads to explanations of the hoodwinked masses that amount to little more than name calling: “Those people are foolish” or “Those people are monsters.”
Such accounts may make us feel good about ourselves, but they are misguided and simplistic: They reflect a misunderstanding of knowledge that focuses too narrowly on what goes on between our ears. Here is the humbler truth: On their own, individuals are not well equipped to separate fact from fiction, and they never will be. Ignorance is our natural state; it is a product of the way the mind works.
What really sets human beings apart is not our individual mental capacity. The secret to our success is our ability to jointly pursue complex goals by dividing cognitive labor. Hunting, trade, agriculture, manufacturing — all of our world-altering innovations — were made possible by this ability. Chimpanzees can surpass young children on numerical and spatial reasoning tasks, but they cannot come close on tasks that require collaborating with another individual to achieve a goal. Each of us knows only a little bit, but together we can achieve remarkable feats.
Knowledge isn’t in my head or in your head. It’s shared.
Consider some simple examples. You know that the earth revolves around the sun. But can you rehearse the astronomical observations and calculations that led to that conclusion? You know that smoking causes cancer. But can you articulate what smoke does to our cells, how cancers form and why some kinds of smoke are more dangerous than others? We’re guessing no. Most of what you “know” — most of what anyone knows — about any topic is a placeholder for information stored elsewhere, in a long-forgotten textbook or in some expert’s head.
One consequence of the fact that knowledge is distributed this way is that being part of a community of knowledge can make people feel as if they understand things they don’t. Recently, one of us ran a series of studies in which we told people about some new scientific discoveries that we fabricated, like rocks that glow. When we said that scientists had not yet explained the glowing rocks and then asked our respondents how well they understood how such rocks glow, they reported not understanding at all — a very natural response given that they knew nothing about the rocks. But when we told another group about the same discovery, only this time claiming that scientists had explained how the rocks glowed, our respondents reported a little bit more understanding. It was as if the scientists’ knowledge (which we never described) had been directly transmitted to them.
The sense of understanding is contagious. The understanding that others have, or claim to have, makes us feel smarter. This happens only when people believe they have access to the relevant information: When our experimental story indicated that the scientists worked for the Army and were keeping the explanation secret, people no longer felt that they had any understanding of why the rocks glowed.
The key point here is not that people are irrational; it’s that this irrationality comes from a very rational place. People fail to distinguish what they know from what others know because it is often impossible to draw sharp boundaries between what knowledge resides in our heads and what resides elsewhere.
This is especially true of divisive political issues. Your mind cannot master and retain sufficiently detailed knowledge about many of them. You must rely on your community. But if you are not aware that you are piggybacking on the knowledge of others, it can lead to hubris.
Recently, for example, there was a vociferous outcry when President Trump and Congress rolled back regulations on the dumping of mining waste in waterways. This may be bad policy, but most people don’t have sufficient expertise to draw that conclusion because evaluating the policy is complicated. Environmental policy is about balancing costs and benefits. In this case, you need to know something about what mining waste does to waterways and in what quantities these effects occur, how much economic activity depends on being able to dump freely, how a decrease in mining activity would be made up for from other energy sources and how environmentally damaging those are, and on and on.
We suspect that most of those people expressing outrage lacked the detailed knowledge necessary to assess the policy. We also suspect that many in Congress who voted for the rollback were equally in the dark. But people seemed pretty confident.
Such collective delusions illustrate both the power and the deep flaw of human thinking. It is remarkable that large groups of people can coalesce around a common belief when few of them individually possess the requisite knowledge to support it. This is how we discovered the Higgs boson and increased the human life span by 30 years in the last century. But the same underlying forces explain why we can come to believe outrageous things, which can lead to equally consequential but disastrous outcomes.
That individual ignorance is our natural state is a bitter pill to swallow. But if we take this medicine, it can be empowering. It can help us differentiate the questions that merit real investigation from those that invite a reactive and superficial analysis. It also can prompt us to demand expertise and nuanced analysis from our leaders, which is the only tried and true way to make effective policy. A better understanding of how little is actually inside our own heads would serve us well.
https://www.nytimes.com/2017/03/03/opin ... ef=opinion
Why You Should Read Books You Hate
Here’s a reading challenge: Pick up a book you’re pretty sure you won’t like — the style is wrong, the taste not your own, the author bio unappealing. You might even take it one step further. Pick up a book you think you will hate, of a genre you’ve dismissed since high school, written by an author you’re inclined to avoid. Now read it to the last bitter page.
Sound like hell? You’re off to a good start.
This is not about reading a book you know is bad, a pleasure in its own right, like an exceptionally dashing villain. It’s about finding a book that affronts you, and staring it down to the last word.
At a time when people are siloed into narrow sources of information according to their particular tinted worldview — those they follow on Twitter, the evening shoutfest they choose, AM talk radio or NPR — it’s no surprise most of us also read books we’re inclined to favor. Reading is a pleasure and a time-consuming one. Why bother reading something you dislike?
But reading what you hate helps you refine what it is you value, whether it’s a style, a story line or an argument. Because books are long-form, they require more of the writer and the reader than a talk show or Facebook link. You can finish watching a movie in two hours and forget about it; not so a novel. Sticking it out for 300 pages means immersing yourself in another person’s world and discovering how it feels. That’s part of what makes books you despise so hard to dismiss. Rather than toss the book aside, turn to the next page and wrestle with its ideas. What about them makes you so uncomfortable?
More...
https://www.nytimes.com/2017/04/15/opin ... 87722&_r=0
Is Quantum Theory About Reality or What We Know?
Physicists know how to use quantum theory—your phone and computer give plenty of evidence of that. But knowing how to use it is a far cry from fully understanding the world the theory describes—or even what the various mathematical devices scientists use in the theory are supposed to mean. One such mathematical object, whose status physicists have long debated, is known as the quantum state.
One of the most striking features of quantum theory is that its predictions are, under virtually all circumstances, probabilistic. If you set up an experiment in a laboratory, and then you use quantum theory to predict the outcomes of various measurements you might perform, the best the theory can offer is probabilities—say, a 50 percent chance that you’ll get one outcome, and a 50 percent chance that you’ll get a different one. The role the quantum state plays in the theory is to determine, or at least encode, these probabilities. If you know the quantum state, then you can compute the probability of getting any possible outcome to any possible experiment.
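To make the quantum state’s role concrete, here is a minimal numerical sketch of the Born rule for a single qubit; the particular state and measurement bases are arbitrary choices for illustration, not anything specified in the article.

```python
# A minimal sketch of the Born rule: the probability of a measurement outcome is
# the squared magnitude of the amplitude the quantum state assigns to it.
# The qubit state and measurement bases below are arbitrary illustrative choices.
import numpy as np

psi = np.array([1.0, 1.0]) / np.sqrt(2)        # the state (|0> + |1>) / sqrt(2)

# Experiment 1: measure in the {|0>, |1>} basis -> a 50/50 split, as in the article.
print(np.abs(psi) ** 2)                        # [0.5, 0.5]

# Experiment 2: measure in the {|+>, |->} basis -> the same state predicts certainty.
plus = np.array([1.0, 1.0]) / np.sqrt(2)
minus = np.array([1.0, -1.0]) / np.sqrt(2)
print(abs(np.vdot(plus, psi)) ** 2, abs(np.vdot(minus, psi)) ** 2)   # 1.0 0.0
```

One state vector yields a probability assignment for every experiment you might perform, which is exactly the encoding role described above; whether that vector describes reality or our knowledge of it is the open question.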
But does the quantum state ultimately represent some objective aspect of reality, or is it a way of characterizing something about us, namely, something about what some person knows about reality? This question stretches back to the earliest history of quantum theory, but has recently become an active topic again, inspiring a slew of new theoretical results and even some experimental tests.
More..
http://nautil.us//blog/is-quantum-theor ... 7-60760513
We Aren’t Built to Live in the Moment
We are misnamed. We call ourselves Homo sapiens, the “wise man,” but that’s more of a boast than a description. What makes us wise? What sets us apart from other animals? Various answers have been proposed — language, tools, cooperation, culture, tasting bad to predators — but none is unique to humans.
What best distinguishes our species is an ability that scientists are just beginning to appreciate: We contemplate the future. Our singular foresight created civilization and sustains society. It usually lifts our spirits, but it’s also the source of most depression and anxiety, whether we’re evaluating our own lives or worrying about the nation. Other animals have springtime rituals for educating the young, but only we subject them to “commencement” speeches grandly informing them that today is the first day of the rest of their lives.
A more apt name for our species would be Homo prospectus, because we thrive by considering our prospects. The power of prospection is what makes us wise. Looking into the future, consciously and unconsciously, is a central function of our large brain, as psychologists and neuroscientists have discovered — rather belatedly, because for the past century most researchers have assumed that we’re prisoners of the past and the present.
More...
https://www.nytimes.com/2017/05/19/opin ... inion&_r=0
******
You Still Need Your Brain
Most adults recall memorizing the names of rivers or the Pythagorean theorem in school and wondering, “When am I ever gonna use this stuff?” Kids today have a high-profile spokesman. Jonathan Rochelle, the director of Google’s education apps group, said last year at an industry conference that he “cannot answer” why his children should learn the quadratic equation. He wonders why they cannot “ask Google.” If Mr. Rochelle cannot answer his children, I can.
Google is good at finding information, but the brain beats it in two essential ways. Champions of Google underestimate how much the meaning of words and sentences changes with context. Consider vocabulary. Every teacher knows that a sixth grader, armed with a thesaurus, will often submit a paper studded with words used in not-quite-correct ways, like the student who looked up “meticulous,” saw it meant “very careful,” and wrote “I was meticulous when I fell off the cliff.”
With the right knowledge in memory, your brain deftly puts words in context. Consider “Trisha spilled her coffee.” When followed by the sentence “Dan jumped up to get a rag,” the brain instantly highlights one aspect of the meaning of “spill” — spills make a mess. Had the second sentence been “Dan jumped up to get her more,” you would have thought instead of the fact that “spill” means Trisha had less of something. Still another aspect of meaning would come to mind had you read, “Dan jumped up, howling in pain.”
The meaning of “spill” depends on context, but dictionaries, including internet dictionaries, necessarily offer context-free meanings. That’s why kids fall off cliffs meticulously.
Perhaps internet searches will become more sensitive to context, but until our brains communicate directly with silicon chips, there’s another problem — speed.
More...
https://www.nytimes.com/2017/05/19/opin ... ef=opinion
15 signs that you are emotionally intelligent
Slide show:
http://www.msn.com/en-ca/lifestyle/smar ... ut#image=1
In our society, a quick mind and the ability to reason are glorified, but emotional intelligence, which is reflected in the ability to manage your emotions and those of others, is just as important. Here are 15 signs that you are emotionally intelligent.
Artificial Intelligence Is Stuck. Here’s How to Move It Forward.
Artificial Intelligence is colossally hyped these days, but the dirty little secret is that it still has a long, long way to go. Sure, A.I. systems have mastered an array of games, from chess and Go to “Jeopardy” and poker, but the technology continues to struggle in the real world. Robots fall over while opening doors, prototype driverless cars frequently need human intervention, and nobody has yet designed a machine that can read reliably at the level of a sixth grader, let alone a college student. Computers that can educate themselves — a mark of true intelligence — remain a dream.
Even the trendy technique of “deep learning,” which uses artificial neural networks to discern complex statistical correlations in huge amounts of data, often comes up short. Some of the best image-recognition systems, for example, can successfully distinguish dog breeds, yet remain capable of major blunders, like mistaking a simple pattern of yellow and black stripes for a school bus. Such systems can neither comprehend what is going on in complex visual scenes (“Who is chasing whom and why?”) nor follow simple instructions (“Read this story and summarize what it means”).
Although the field of A.I. is exploding with microdiscoveries, progress toward the robustness and flexibility of human cognition remains elusive. Not long ago, for example, while sitting with me in a cafe, my 3-year-old daughter spontaneously realized that she could climb out of her chair in a new way: backward, by sliding through the gap between the back and the seat of the chair. My daughter had never seen anyone else disembark in quite this way; she invented it on her own — and without the benefit of trial and error, or the need for terabytes of labeled data.
More...
https://www.nytimes.com/2017/07/29/opin ... ef=opinion
Emotional Intelligence Needs a Rewrite
Think you can read people’s emotions? Think again.
You’ve probably met people who are experts at mastering their emotions and understanding the emotions of others. When all hell breaks loose, somehow these individuals remain calm. They know what to say and do when their boss is moody or their lover is upset. It’s no wonder that emotional intelligence was heralded as the next big thing in business success, potentially more important than IQ, when Daniel Goleman’s bestselling book, Emotional Intelligence, arrived in 1995. After all, whom would you rather work with—someone who can identify and respond to your feelings, or someone who has no clue? Whom would you rather date?
The traditional foundation of emotional intelligence rests on two common-sense assumptions. The first is that it’s possible to detect the emotions of other people accurately. That is, the human face and body are said to broadcast happiness, sadness, anger, fear, and other emotions, and if you observe closely enough, you can read these emotions like words on a page. The second assumption is that emotions are automatically triggered by events in the world, and you can learn to control them through rationality. This idea is one of the most cherished beliefs in Western civilization. For example, in many legal systems, there’s a distinction between a crime of passion, where your emotions allegedly hijacked your good sense, and a premeditated crime that involved rational planning. In economics, nearly every popular model of investor behavior separates emotion and cognition.
These two core assumptions are strongly appealing and match our daily experiences. Nevertheless, neither one stands up to scientific scrutiny in the age of neuroscience. Copious research, from my lab and others, shows that faces and bodies alone do not communicate any specific emotion in any consistent manner. In addition, we now know that the brain doesn’t have separate processes for emotion and cognition, and therefore one cannot control the other. If these statements defy your common sense, I’m right there with you. But our experiences of emotion, no matter how compelling, don’t reflect the biology of what’s happening inside us. Our traditional understanding and practice of emotional intelligence badly needs a tuneup.
More...
http://nautil.us/issue/51/limits/emotio ... 4-60760513
The Fundamental Limits of Machine Learning
POSTED BY JESSE DUNIETZ ON AUG 14, 2017
Excerpt:
My aunt and her colleagues had stumbled across a fundamental problem in machine learning, the study of computers that learn. Almost all of the learning we expect our computers to do—and much of the learning we ourselves do—is about reducing information to underlying patterns, which can then be used to infer the unknown. Her puzzle was no different.
As a human, the challenge is to find any pattern at all. Of course, we have intuitions that limit our guesses. But computers have no such intuitions. From a computer’s standpoint, the difficulty in pattern recognition is one of surplus: with an endless variety of patterns, all technically valid, what makes one “right” and another “wrong?”
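A toy illustration of that surplus, sketched under my own assumptions (the sequence and the two candidate rules below are not from the article): two patterns can agree on every observation yet disagree about what comes next.

```python
# Toy illustration of the "surplus of patterns": two rules that agree on every
# observation yet disagree about what comes next. The sequence and the two
# candidate rules below are my own illustrative choices, not from the article.
import numpy as np

x = np.array([1, 2, 3, 4, 5])
y = np.array([1, 2, 4, 8, 16])          # looks like simple doubling

def rule_a(n):
    """Pattern 1: powers of two."""
    return 2 ** (n - 1)

# Pattern 2: the unique degree-4 polynomial through the same five points.
coeffs = np.polyfit(x, y, deg=4)

def rule_b(n):
    return np.polyval(coeffs, n)

print([rule_a(n) for n in x], [round(float(rule_b(n))) for n in x])  # both reproduce 1, 2, 4, 8, 16
print(rule_a(6), round(float(rule_b(6))))                            # 32 vs. 31: same data, different "next"
```

Nothing in the five observations themselves says which rule is “right”; that underdetermination is the intuition the excerpt is pointing at.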
The problem only recently became of practical concern. Before the 1990s, AI systems rarely did much learning at all. For example, the chess-playing Deep Thought, predecessor to Deep Blue, didn’t get good at chess by learning from successes and failures. Instead, chess grandmasters and programming wizards carefully crafted rules to teach it which board positions were good or bad. Such extensive hand-tuning was typical of that era’s “expert systems” approach.
More...
http://nautil.us//blog/-the-fundamental ... 8-60760513
How Much More Can We Learn About the Universe?
These are the few limits on our ability to know.
As a cosmologist, I find that the questions I hear most frequently after a lecture include: What lies beyond our universe? What is our universe expanding into? Will our universe expand forever? These are natural questions to ask. But there is an even deeper question at play here. Fundamentally, what we really want to know is: Is there a boundary to our knowledge? Are there fundamental limits to science?
The answer, of course, is that we don’t know in advance. We won’t know if there is a limit to knowledge unless we try to get past it. At the moment, we have no sign of one. We may be facing roadblocks, but those give every indication of being temporary. Some people say to me: “We will never know how the universe began.” “We can never know what happened before the Big Bang.” These statements demonstrate a remarkable conceit, by suggesting we can know in advance the locus of all those things that we cannot know. This is not only unsubstantiated, but the history of science so far has demonstrated no such limits. And in my own field, cosmology, our knowledge has increased in ways that no one foresaw even 50 years ago.
More...
http://nautil.us/issue/51/limits/how-mu ... 8-60760513
Fire of the Mind
The mind is not a vessel to be filled,
but a fire to be kindled.
- Plutarch
Education is not the filling of a pail,
but the lighting of a fire.
- William Butler Yeats
We are not afraid to follow truth
wherever it may lead,
nor to tolerate any error
so long as reason is left free to combat it.
- Thomas Jefferson
Intellectual growth should commence at birth
and cease only at death.
- Albert Einstein
Understanding is Beyond Knowing Facts
There is a great difference between knowing and understanding:
you can know a lot about something and not really understand it.
- Charles Kettering
The eye sees only what the mind is prepared to comprehend.
- Robertson Davies
If you understand, things are just as they are;
if you do not understand, things are just as they are.
- Zen Proverb
The more you explain it, the more I don't understand it.
- Mark Twain
I hear and I forget.
I see and I remember.
I do and I understand.
- Chinese proverb
Not until we are lost
do we begin to understand ourselves.
- Henry David Thoreau
How to Get Your Mind to Read
Americans are not good readers. Many blame the ubiquity of digital media. We’re too busy on Snapchat to read, or perhaps internet skimming has made us incapable of reading serious prose. But Americans’ trouble with reading predates digital technologies. The problem is not bad reading habits engendered by smartphones, but bad education habits engendered by a misunderstanding of how the mind reads.
Just how bad is our reading problem? The last National Assessment of Adult Literacy from 2003 is a bit dated, but it offers a picture of Americans’ ability to read in everyday situations: using an almanac to find a particular fact, for example, or explaining the meaning of a metaphor used in a story. Of those who finished high school but did not continue their education, 13 percent could not perform simple tasks like these. When things got more complex — in comparing two newspaper editorials with different interpretations of scientific evidence or examining a table to evaluate credit card offers — 95 percent failed.
There’s no reason to think things have gotten better. Scores for high school seniors on the National Assessment of Education Progress reading test haven’t improved in 30 years.
Many of these poor readers can sound out words from print, so in that sense, they can read. Yet they are functionally illiterate — they comprehend very little of what they can sound out. So what does comprehension require? Broad vocabulary, obviously. Equally important, but more subtle, is the role played by factual knowledge.
More...
https://www.nytimes.com/2017/11/25/opin ... dline&te=1
From Wonder into Wonder
The most beautiful thing we can experience is the mysterious.
It is the source of all true art and all science.
He to whom this emotion is a stranger,
who can no longer pause to wonder and stand rapt in awe,
is as good as dead: his eyes are closed.
- Albert Einstein
Wisdom begins in wonder.
- Socrates
Never say there is nothing beautiful in the world anymore.
There is always something to make you wonder
in the shape of a tree, the trembling of a leaf.
- Albert Schweitzer
From wonder into wonder existence opens.
- Lao Tzu
True wisdom comes to each of us when we realize
how little we understand about life,
ourselves, and the world around us.
- Socrates
******
Imagining Leads to Accomplishing
If you can imagine it, you can achieve it.
If you can dream it, you can become it.
- William Arthur Ward
Live out of your imagination, not your history.
- Stephen Covey
Imagination is more important than knowledge.
For knowledge is limited to all we now know and understand,
while imagination embraces the entire world,
and all there ever will be to know and understand.
- Albert Einstein
Imagination is everything.
It is the preview of life's coming attractions.
- Albert Einstein
The man who has no imagination has no wings.
- Muhammad Ali
Imagining is the first step to accomplishing.
- Jonathan Lockwood Huie
The power of imagination makes us infinite.
- John Muir
Teach in Order to Understand
We Learn...
10% of what we read,
20% of what we hear,
30% of what we see,
50% of what we see and hear,
70% of what we discuss,
80% of what we experience,
95% of what we teach others.
- William Glasser
Education is not the answer to the question.
Education is the means to the answer to all questions.
- William Allin
Learning is the beginning of wealth.
Learning is the beginning of health.
Learning is the beginning of spirituality.
Searching and learning is where the miracle process all begins.
- Jim Rohn
True learning is not about facts,
but about conscious appreciation of the experience of living.
- Jonathan Lockwood Huie
Why Did a Billionaire Give $75 Million to a Philosophy Department?
Last week, for the first time in recent memory, a news story in this troubling period had me, a bachelor of arts in philosophy, sitting up straight in stunned delight. Johns Hopkins University was gifted $75 million to expand its philosophy department to nearly twice its size—more professors (from 13 to 22 over a decade) and postdoctoral fellows and graduate students, as well as more undergraduate courses. It’s apparently the largest donation any philosophy department has ever received, and for Johns Hopkins, it’s the largest gift the university has ever received for one of its humanities departments.
The donor isn’t remaining anonymous; the philosophy department, which gave birth to American pragmatism in the late 19th century, will be named after the star investor, and former Johns Hopkins philosophy graduate student, William H. “Bill” Miller III, whom you may remember making a bullish-on-banks blunder as “Bruce” Miller in The Big Short. Miller attributes “much” of his success—beating the Standard & Poor’s 500 for 15 consecutive years, from 1991 to 2005—to the “analytical training and habits of mind” his philosophical study at Johns Hopkins inculcated. The way he sees it, more students should have the chance for that intellectual stimulation. Miller agrees with the president of Johns Hopkins University, Ronald J. Daniels, when he says, “Philosophy matters…The contemporary challenges of the genomics revolution, the rise of artificial intelligence, the growth in income inequality, social and political fragmentation, and our capacity for devastating war all invite philosophical perspective.”
Challenges in the sciences also invite philosophical perspective, though for Lawrence Krauss, a theoretical cosmologist, philosophizing in science is most usefully done by scientists themselves. “Of course physics needs philosophy,” he says. “But does it need philosophers? That’s the question. And the answer is not so much anymore. The earlier physicists were philosophers. When the questions weren’t well defined, that’s when philosophy becomes critically important.”
http://nautil.us//blog/why-did-a-billio ... 0-60760513