Articles of Interest in Science

Current issues, news and ethics

Articles of Interest in Science

Post by kmaherali »

July 4, 2012
Physicists Find Elusive Particle Seen as Key to Universe

By DENNIS OVERBYE

ASPEN, Colo. — Signaling a likely end to one of the longest, most expensive searches in the history of science, physicists said Wednesday that they had discovered a new subatomic particle that looks for all the world like the Higgs boson, a key to understanding why there is diversity and life in the universe.

Like Omar Sharif materializing out of the shimmering desert as a man on a camel in “Lawrence of Arabia,” the elusive boson has been coming slowly into view since last winter, as the first signals of its existence grew until they practically jumped off the chart.

“I think we have it,” said Rolf-Dieter Heuer, the director general of CERN, the multinational research center headquartered in Geneva. The agency is home to the Large Hadron Collider, the immense particle accelerator that produced the new data by colliding protons. The findings were announced by two separate teams. Dr. Heuer called the discovery “a historic milestone.”

He and others said that it was too soon to know for sure, however, whether the new particle is the one predicted by the Standard Model, the theory that has ruled physics for the last half-century. The particle is predicted to imbue elementary particles with mass. It may be an impostor as yet unknown to physics, perhaps the first of many particles yet to be discovered.

That possibility is particularly exciting to physicists, as it could point the way to new, deeper ideas, beyond the Standard Model, about the nature of reality.

For now, some physicists are simply calling it a “Higgslike” particle.

“It’s something that may, in the end, be one of the biggest observations of any new phenomena in our field in the last 30 or 40 years,” said Joe Incandela, a physicist at the University of California, Santa Barbara, and a spokesman for one of the two groups reporting new data on Wednesday.

Here at the Aspen Center for Physics, a retreat for scientists, bleary-eyed physicists drank Champagne in the wee hours as word arrived via Webcast from CERN. It was a scene duplicated in Melbourne, Australia, where physicists had gathered for a major conference, as well as in Los Angeles, Chicago, Princeton, New York, London and beyond — everywhere that members of a curious species have dedicated their lives and fortunes to the search for their origins in a dark universe.

In Geneva, 1,000 people stood in line all night to get into an auditorium at CERN, where some attendees noted a rock-concert ambience. Peter Higgs, the University of Edinburgh theorist for whom the boson is named, entered the meeting to a sustained ovation.

Confirmation of the Higgs boson or something very much like it would constitute a rendezvous with destiny for a generation of physicists who have believed in the boson for half a century without ever seeing it. The finding affirms a grand view of a universe described by simple and elegant and symmetrical laws — but one in which everything interesting, like ourselves, results from flaws or breaks in that symmetry.

According to the Standard Model, the Higgs boson is the only manifestation of an invisible force field, a cosmic molasses that permeates space and imbues elementary particles with mass. Particles wading through the field gain heft the way a bill going through Congress attracts riders and amendments, becoming ever more ponderous.

Without the Higgs field, as it is known, or something like it, all elementary forms of matter would zoom around at the speed of light, flowing through our hands like moonlight. There would be neither atoms nor life.

Physicists said that they would probably be studying the new particle for years. Any deviations from the simplest version predicted by current theory — and there are hints of some already — could begin to answer questions left hanging by the Standard Model. For example, what is the dark matter that provides the gravitational scaffolding of galaxies?

And why is the universe made of matter instead of antimatter?

“If the boson really is not acting standard, then that will imply that there is more to the story — more particles, maybe more forces around the corner,” Neal Weiner, a theorist at New York University, wrote in an e-mail. “What that would be is anyone’s guess at the moment.”

Wednesday’s announcement was also an impressive opening act for the Large Hadron Collider, the world’s biggest physics machine, which cost $10 billion to build and began operating only two years ago. It is still running at only half-power.

Physicists had been icing the Champagne ever since last December. Two teams of about 3,000 physicists each — one named Atlas, led by Fabiola Gianotti, and the other CMS, led by Dr. Incandela — operate giant detectors in the collider, sorting the debris from the primordial fireballs left after proton collisions.

Last winter, they both reported hints of the same particle. They were not able, however, to rule out the possibility that it was a statistical fluke. Since then, the collider has more than doubled the number of collisions it has recorded.

The results announced Wednesday capped two weeks of feverish speculation and Internet buzz as the physicists, who had been sworn to secrecy, did a breakneck analysis of about 800 trillion proton-proton collisions over the last two years.

Up until last weekend, physicists at the agency were saying that they themselves did not know what the outcome would be. Expectations soared when it was learned that the five surviving originators of the Higgs boson theory had been invited to the CERN news conference.

The December signal was no fluke, the scientists said Wednesday. The new particle has a mass of about 125.3 billion electron volts, as measured by the CMS group, and 126 billion according to Atlas. Both groups said that the likelihood that their signal was a result of a chance fluctuation was less than one chance in 3.5 million, “five sigma,” which is the gold standard in physics for a discovery.
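
For readers wondering where “one chance in 3.5 million” comes from: it is the one-sided tail probability of a five-standard-deviation (five sigma) fluctuation of a normal distribution. A minimal check, using only the Python standard library (this illustrates the arithmetic, not the experiments’ actual statistical machinery):

```python
import math

# One-sided tail probability of a 5-sigma upward fluctuation of a
# standard normal variable: P(Z > 5) = 0.5 * erfc(5 / sqrt(2))
p = 0.5 * math.erfc(5 / math.sqrt(2))
print(f"P(Z > 5) = {p:.2e}, i.e. about 1 in {1 / p:,.0f}")
# -> about 2.9e-07, roughly 1 in 3.5 million
```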

On that basis, Dr. Heuer said that he had decided only on Tuesday afternoon to call the Higgs result a “discovery.”

He said, “I know the science, and as director general I can stick out my neck.”

Dr. Incandela’s and Dr. Gianotti’s presentations were repeatedly interrupted by applause as they showed slide after slide of data presented in graphs with bumps rising like mountains from the sea.

Dr. Gianotti noted that the mass of the putative Higgs, apparently one of the heaviest subatomic particles, made it easy to study its many behaviors. “Thanks, nature,” she said.

Gerald Guralnik, one of the founders of the Higgs theory, said he was glad to be at a physics meeting “where there is applause, like a football game.”

Asked to comment after the announcements, Dr. Higgs seemed overwhelmed. “For me, it’s really an incredible thing that’s happened in my lifetime,” he said.

Dr. Higgs was one of six physicists, working in three independent groups, who in 1964 invented what came to be known as the Higgs field. The others were Tom Kibble of Imperial College, London; Carl Hagen of the University of Rochester; Dr. Guralnik of Brown University; and François Englert and Robert Brout, both of Université Libre de Bruxelles.

One implication of their theory was that this cosmic molasses, normally invisible, would produce its own quantum particle if hit hard enough with the right amount of energy. The particle would be fragile and fall apart within a millionth of a second in a dozen possible ways, depending upon its own mass.

Unfortunately, the theory did not describe how much this particle should weigh, which is what made it so hard to find, eluding researchers at a succession of particle accelerators, including the Large Electron Positron Collider at CERN, which closed down in 2000, and the Tevatron at the Fermi National Accelerator Laboratory, or Fermilab, in Batavia, Ill., which shut down last year.

Along the way the Higgs boson achieved a notoriety rare in abstract physics. To the eternal dismay of his colleagues, Leon Lederman, the former director of Fermilab, called it the “God particle,” in his book of the same name, written with Dick Teresi. (He later said that he had wanted to call it the “goddamn particle.”)

Finding the missing boson was one of the main goals of the Large Hadron Collider. Both Dr. Heuer and Dr. Gianotti said they had not expected the search to succeed so quickly.

So far, the physicists admit, they know little about their new boson. The CERN results are mostly based on measurements of two or three of the dozen different ways, or “channels,” by which a Higgs boson could be produced and then decay.

There are hints, but only hints so far, that some of the channels are overproducing the boson while others might be underproducing it, clues that maybe there is more at work here than the Standard Model would predict.

“This could be the first in a ring of discoveries,” said Guido Tonelli of CERN.

In an e-mail, Maria Spiropulu, a professor at the California Institute of Technology who works with the CMS team of physicists, said: “I personally do not want it to be standard model anything — I don’t want it to be simple or symmetric or as predicted. I want us all to have been dealt a complex hand that will send me (and all of us) in a (good) loop for a long time.”

Nima Arkani-Hamed, a physicist at the Institute for Advanced Study in Princeton, said: “It’s a triumphant day for fundamental physics. Now some fun begins.”

http://www.nytimes.com/2012/07/05/scien ... h_20120705

Post by kmaherali »

July 9, 2012


A Blip That Speaks of Our Place in the Universe

By LAWRENCE M. KRAUSS


ASPEN, Colo. — Last week, physicists around the world were glued to computers at very odd hours (I was at a 1 a.m. physics “party” here with a large projection screen and dozens of colleagues) to watch live as scientists at the Large Hadron Collider, outside Geneva, announced that they had apparently found one of the most important missing pieces of the jigsaw puzzle that is nature.

The “Higgs particle,” proposed almost 50 years ago to allow for consistency between theoretical predictions and experimental observations in elementary particle physics, appears to have been discovered — even as the detailed nature of the discovery allows room for even more exotic revelations that may be just around the corner.

It is natural for those not deeply involved in the half-century quest for the Higgs to ask why they should care about this seemingly esoteric discovery. There are three reasons.

First, it caps one of the most remarkable intellectual adventures in human history — one that anyone interested in the progress of knowledge should at least be aware of.

Second, it makes even more remarkable the precarious accident that allowed our existence to form from nothing — further proof that the universe of our senses is just the tip of a vast, largely hidden cosmic iceberg.

And finally, the effort to uncover this tiny particle represents the very best of what the process of science can offer to modern civilization.

If one is a theoretical physicist working on some idea late at night or at a blackboard with colleagues over coffee one afternoon, it is almost terrifying to imagine that something that you cook up in your mind might actually be real. It’s like staring at a large jar and being asked to guess the number of jelly beans inside; if you guess right, it seems too good to be true.

The prediction of the Higgs particle accompanied a remarkable revolution that completely changed our understanding of particle physics in the latter part of the 20th century.

Just 50 years ago, in spite of the great advances of physics in the previous half century, we understood only one of the four fundamental forces of nature — electromagnetism — as a fully consistent quantum theory. In just one subsequent decade, however, not only had three of the four known forces succumbed to our investigations, but a new elegant unity of nature had been uncovered.

It was found that these three quantum forces could be described using a single mathematical framework — and that two of them, electromagnetism and the weak force (which governs the nuclear reactions that power the sun), were actually different manifestations of a single underlying theory.

How could two such different forces be related? After all, the photon, the particle that conveys electromagnetism, has no mass, while the particles that convey the weak force are very massive — almost 100 times as heavy as the particles that make up atomic nuclei, a fact that explains why the weak force is weak.
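
A quick back-of-envelope check of that “almost 100 times” figure, using standard textbook masses (these numbers are mine, not the article’s):

```python
# Mass ratios of the weak-force carriers to the proton, in GeV/c^2.
# Values are standard textbook figures, quoted here purely as an illustration.
m_W, m_Z = 80.4, 91.2     # W and Z bosons, carriers of the weak force
m_proton = 0.938          # proton, one of the particles making up nuclei

print(f"W/proton ~ {m_W / m_proton:.0f}, Z/proton ~ {m_Z / m_proton:.0f}")
# -> roughly 86 and 97: "almost 100 times as heavy"
```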

What the British physicist Peter Higgs and several others showed is that if there exists an otherwise invisible background field permeating all of space, then the particles that convey some force like electromagnetism can interact with this field and effectively encounter resistance to their motion and slow down, like a swimmer moving through molasses.

As a result, these particles can behave as if they are heavy, as if they have a mass. The physicist Steven Weinberg later applied this idea to a model of the weak and electromagnetic forces previously proposed by Sheldon L. Glashow, and everything fit together.

This idea can be extended to the rest of the particles in nature, including the protons and neutrons and electrons that make up the atoms in our bodies. If some particle interacts more strongly with this background field, it ends up acting heavier. If it interacts more weakly, it acts lighter. If it doesn’t interact at all, like the photon, it remains massless.

If anything sounds too good to be true, this is it. The miracle of mass — indeed of our very existence, because if not for the Higgs, there would be no stars, no planets and no people — is possible because of some otherwise hidden background field whose only purpose seems to be to allow the world to look the way it does.

Dr. Glashow, who along with Dr. Weinberg won a Nobel Prize in Physics, once referred to this “Higgs field” as the “toilet” of modern physics because that’s where all the ugly details that allow the marvelous beauty of the physical world are hidden.

But relying on invisible miracles is the stuff of religion, not science. To ascertain whether this remarkable accident was real, physicists relied on another facet of the quantum world.

Associated with every background field is a particle, and if you pick a point in space and hit it hard enough, you may whack out real particles. The trick is hitting it hard enough over a small enough volume.

And that’s the rub. After 50 years of trying, including a failed attempt in this country to build an accelerator to test these ideas, no sign of the Higgs had appeared. In fact, I was betting against it, since a career in theoretical physics has taught me that nature usually has a far richer imagination than we do.

Until last week.

Every second at the Large Hadron Collider, enough data is generated to fill more than 1,000 one-terabyte hard drives — more than the information in all the world’s libraries. The logistics of filtering and analyzing the data to find the Higgs particle peeking out under a mountain of noise, not to mention running the most complex machine humans have ever built, is itself a triumph of technology and computational wizardry of unprecedented magnitude.
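
To get a feel for the arithmetic implied by that figure, here is a hedged back-of-envelope sketch. The 40 MHz bunch-crossing rate is the collider’s nominal design value; the bytes-per-crossing number is simply inferred from the article’s own “1,000 one-terabyte drives per second” and is not a published detector specification:

```python
# Back-of-envelope arithmetic implied by the figure above (illustrative only).
drives_per_second = 1000               # "more than 1,000 one-terabyte hard drives"
terabyte = 1e12                        # bytes
raw_rate = drives_per_second * terabyte            # ~1 petabyte per second

bunch_crossing_rate = 40e6             # nominal 40 MHz LHC bunch-crossing rate
bytes_per_crossing = raw_rate / bunch_crossing_rate
print(f"~{raw_rate / 1e15:.0f} PB/s, or ~{bytes_per_crossing / 1e6:.0f} MB of raw data per crossing")
# Trigger systems throw away all but a tiny fraction of this before anything is stored.
```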

The physicist Victor F. Weisskopf, the colorful director general of CERN (the European Organization for Nuclear Research, which operates the collider) during the 1960s, once described large particle accelerators as the gothic cathedrals of our time. Like those beautiful medieval monuments, accelerators require the cutting edge of technology, they take decades or more to build, and they require the concerted efforts of thousands of craftsmen and women. At CERN, each of the mammoth detectors used to study collisions requires the work of thousands of physicists, from scores of countries, speaking several dozen languages.

Most significantly perhaps, cathedrals and colliders are both works of incomparable grandeur that celebrate the beauty of being alive.

The apparent discovery of the Higgs may not result in a better toaster or a faster car. But it provides a remarkable celebration of the human mind’s capacity to uncover nature’s secrets, and of the technology we have built to control them. Hidden in what seems like empty space — indeed, like nothing, which is getting more interesting all the time — are the very elements that allow for our existence.

By demonstrating that, last week’s discovery will change our view of ourselves and our place in the universe. Surely that is the hallmark of great music, great literature, great art ...and great science.

Lawrence M. Krauss, the director of the Origins Project at Arizona State University, is the author, most recently, of “A Universe From Nothing.”

http://www.nytimes.com/2012/07/10/scien ... h_20120710

Post by kmaherali »

Pakistan shuns physicist linked to ‘God particle’ because of religious beliefs

Published on Monday July 09, 2012

http://www.thestar.com/news/world/artic ... us-beliefs

ISLAMABAD—The pioneering work of Abdus Salam, Pakistan’s only Nobel laureate, helped lead to the apparent discovery of the subatomic “God particle” last week. But the late physicist is no hero at home, where his name has been stricken from school textbooks.

Related: What is the Higgs-boson and why the hunt for the god particle matters

Praise within Pakistan for Salam, who also guided the early stages of the country’s nuclear program, faded decades ago as Muslim fundamentalists gained power. He belonged to the Ahmadi sect, which has been persecuted by the government and targeted by Taliban militants who view its members as heretics.

Their plight — along with that of Pakistan’s other religious minorities, such as Shiite Muslims, Christians and Hindus — has deepened in recent years as hardline interpretations of Islam have gained ground and militants have stepped up attacks against groups they oppose. Most Pakistanis are Sunni Muslims.

Salam, a child prodigy born in 1926 in what was to become Pakistan after the partition of British-controlled India, won more than a dozen international prizes and honours. In 1979, he was co-winner of the Nobel Prize for his work on the so-called Standard Model of particle physics, which theorizes how fundamental forces govern the overall dynamics of the universe. He died in 1996.

Salam and Steven Weinberg, with whom he shared the Nobel Prize, independently predicted the existence of a subatomic particle now called the Higgs boson, named after a British physicist who theorized that it endowed other particles with mass, said Pervez Hoodbhoy, a Pakistani physicist who once worked with Salam. It is also known as the “God particle” because its existence is vitally important toward understanding the early evolution of the universe.

Physicists in Switzerland stoked worldwide excitement Wednesday when they announced they had all but proven the particle’s existence. This was done using the world’s largest atom smasher at the European Organization for Nuclear Research, or CERN, near Geneva.

“This would be a great vindication of Salam’s work and the Standard Model as a whole,” said Khurshid Hasanain, chairman of the physics department at Quaid-i-Azam University in Islamabad.

In the 1960s and early 1970s, Salam wielded significant influence in Pakistan as the chief scientific adviser to the president, helping to set up the country’s space agency and institute for nuclear science and technology. Salam also assisted in the early stages of Pakistan’s effort to build a nuclear bomb, which it eventually tested in 1998.

Salam’s life, along with the fate of the three million other Ahmadis in Pakistan, drastically changed in 1974 when parliament amended the constitution to declare that members of the sect were not considered Muslims under Pakistani law.

Ahmadis believe their spiritual leader, Hadhrat Mirza Ghulam Ahmad, who died in 1908, was the Promised Messiah — a position rejected by the government in response to a mass movement led by Pakistan’s major Islamic parties. Most Muslims consider Muhammad the last prophet and those who subsequently declared themselves prophets as heretics.

All Pakistani passport applicants must sign a section saying the Ahmadi faith’s founder was an “impostor” and his followers are “non-Muslims.” Ahmadis are prevented by law in Pakistan from “posing as Muslims,” declaring their faith publicly, calling their places of worship mosques or performing the Muslim call to prayer. They can be punished with prison and even death.

Salam resigned from his government post in protest following the 1974 constitutional amendment and eventually moved to Europe to pursue his work. In Italy, he created a centre for theoretical physics to help physicists from the developing world.

Although Pakistan’s then-president, Gen. Zia ul-Haq, presented Salam with Pakistan’s highest civilian honour after he won the Nobel Prize, the general response in the country was muted. The physicist was celebrated more enthusiastically by other countries, including India.

Despite his achievements, Salam’s name appears in few textbooks and is rarely mentioned by Pakistani leaders or the media. By contrast, fellow Pakistani physicist A.Q. Khan, who played a key role in developing the country’s nuclear bomb and later confessed to spreading nuclear technology to Iran, North Korea and Libya, is considered a national hero.

Officials at Quaid-i-Azam University had to cancel plans for Salam to lecture about his Nobel-winning theory when Islamist student activists threatened to break the physicist’s legs, said his colleague Hoodbhoy.

“The way he has been treated is such a tragedy,” said Hoodbhoy. “He went from someone who was revered in Pakistan, a national celebrity, to someone who could not set foot in Pakistan. If he came, he would be insulted and could be hurt or even killed.”

The president who honoured Salam would later go on to intensify persecution of Ahmadis, for whom life in Pakistan has grown even more precarious. Taliban militants attacked two mosques packed with Ahmadis in Lahore in 2010, killing at least 80 people.

“Many Ahmadis have received letters from fundamentalists since the 2010 attacks threatening to target them again, and the government isn’t doing anything,” said Qamar Suleiman, a spokesman for the Ahmadi community.

For Salam, not even death saved him from being targeted.

Hoodbhoy said his body was returned to Pakistan in 1996 after he died in Oxford, England, and was buried under a gravestone that read “First Muslim Nobel Laureate.” A local magistrate ordered that the word “Muslim” be erased.

Post by kmaherali »

Large Hadron Collider restarts after two-year shutdown

The world's largest particle collider has restarted after a two-year upgrade. Scientists hope the higher collision energies will aid the search for so-called "dark matter."

Scientists at the European Organization for Nuclear Research, or CERN, on Sunday shot the first particle beams through the restarted Large Hadron Collider (LHC), after the particle accelerator underwent two years of work to increase its collision capacity.

The LHC, also known as the "Big Bang" collider, consists of a 27-kilometer-long (16.8-mile-long) ring of tunnel beneath the Swiss-French border. Researchers are using it to probe the "dark universe": the dark matter and dark energy thought to make up some 96 percent of the known universe, along with the forces that hold it together.

The collider hit the headlines in 2012 with the discovery of the Higgs boson, a subatomic particle associated with the field that confers mass on other particles, whose existence had been theorized since 1964 but not confirmed.

The discovery earned the Nobel prize for two of the scientists who had proposed the existence of the particle.

The LHC uses powerful magnets to bend beams of protons coming from opposite directions, thus creating collisions that are monitored by sensors.

The subatomic debris is scanned for unknown kinds of particles and also provides information on the forces that hold matter together.

Scientists say the collider has nearly twice its previous energy following the upgrade, which will enable it to produce even more powerful collisions.
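
As a rough illustration of how the magnets, the beam energy, and the size of the ring fit together, here is a sketch using the standard bending-radius relation; the 7 TeV design beam energy and roughly 8.3-tesla dipole field are commonly quoted LHC values used here as assumptions, not figures from the article:

```python
import math

# Bending radius of a singly charged particle in a magnetic field:
#   r [m] ~ p [GeV/c] / (0.3 * B [T])
p_gev = 7000.0     # design beam momentum per proton, GeV/c (assumed)
B_tesla = 8.3      # superconducting dipole field, tesla (assumed)

r = p_gev / (0.3 * B_tesla)
print(f"bending radius ~ {r:.0f} m, so the arcs alone span ~ {2 * math.pi * r / 1000:.1f} km")
# ~2.8 km radius -> ~17.7 km of bending arc; with straight sections and
# magnet spacing, the actual tunnel comes to about 27 km around.
```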

The restart was delayed last Saturday following a short-circuit in one of the LHC's magnet circuits.

http://www.msn.com/en-ca/news/other/lar ... ar-AAasoYm

Post by kmaherali »

A Crisis at the Edge of Physics

DO physicists need empirical evidence to confirm their theories?

You may think that the answer is an obvious yes, experimental confirmation being the very heart of science. But a growing controversy at the frontiers of physics and cosmology suggests that the situation is not so simple.

A few months ago in the journal Nature, two leading researchers, George Ellis and Joseph Silk, published a controversial piece called “Scientific Method: Defend the Integrity of Physics.” They criticized a newfound willingness among some scientists to explicitly set aside the need for experimental confirmation of today’s most ambitious cosmic theories — so long as those theories are “sufficiently elegant and explanatory.” Despite working at the cutting edge of knowledge, such scientists are, for Professors Ellis and Silk, “breaking with centuries of philosophical tradition of defining scientific knowledge as empirical.”

Whether or not you agree with them, the professors have identified a mounting concern in fundamental physics: Today, our most ambitious science can seem at odds with the empirical methodology that has historically given the field its credibility.

How did we get to this impasse? In a way, the landmark detection three years ago of the elusive Higgs boson particle by researchers at the Large Hadron Collider marked the end of an era. Predicted about 50 years ago, the Higgs particle is the linchpin of what physicists call the “standard model” of particle physics, a powerful mathematical theory that accounts for all the fundamental entities in the quantum world (quarks and leptons) and all the known forces acting between them apart from gravity (electromagnetism and the strong and weak nuclear forces).

But the standard model, despite the glory of its vindication, is also a dead end. It offers no path forward to unite its vision of nature’s tiny building blocks with the other great edifice of 20th-century physics: Einstein’s cosmic-scale description of gravity. Without a unification of these two theories — a so-called theory of quantum gravity — we have no idea why our universe is made up of just these particles, forces and properties. (We also can’t know how to truly understand the Big Bang, the cosmic event that marked the beginning of time.)

This is where the specter of an evidence-independent science arises. For most of the last half-century, physicists have struggled to move beyond the standard model to reach the ultimate goal of uniting gravity and the quantum world. Many tantalizing possibilities (like the often-discussed string theory) have been explored, but so far with no concrete success in terms of experimental validation.

Today, the favored theory for the next step beyond the standard model is called supersymmetry (which is also the basis for string theory). Supersymmetry predicts the existence of a “partner” particle for every particle that we currently know. It doubles the number of elementary particles of matter in nature. The theory is elegant mathematically, and the particles whose existence it predicts might also explain the universe’s unaccounted-for “dark matter.” As a result, many researchers were confident that supersymmetry would be experimentally validated soon after the Large Hadron Collider became operational.

That’s not how things worked out, however. To date, no supersymmetric particles have been found. If the Large Hadron Collider cannot detect these particles, many physicists will declare supersymmetry — and, by extension, string theory — just another beautiful idea in physics that didn’t pan out.

But many won’t. Some may choose instead to simply retune their models to predict supersymmetric particles at masses beyond the reach of the Large Hadron Collider’s power of detection — and that of any foreseeable substitute.

Implicit in such a maneuver is a philosophical question: How are we to determine whether a theory is true if it cannot be validated experimentally? Should we abandon it just because, at a given level of technological capacity, empirical support might be impossible? If not, how long should we wait for such experimental machinery before moving on: Ten years? Fifty years? Centuries?

Consider, likewise, the cutting-edge theory in physics that suggests that our universe is just one universe in a profusion of separate universes that make up the so-called multiverse. This theory could help solve some deep scientific conundrums about our own universe (such as the so-called fine-tuning problem), but at considerable cost: Namely, the additional universes of the multiverse would lie beyond our powers of observation and could never be directly investigated. Multiverse advocates argue nonetheless that we should keep exploring the idea — and search for indirect evidence of other universes.

The opposing camp, in response, has its own questions. If a theory successfully explains what we can detect but does so by positing entities that we can’t detect (like other universes or the hyperdimensional superstrings of string theory) then what is the status of these posited entities? Should we consider them as real as the verified particles of the standard model? How are scientific claims about them any different from any other untestable — but useful — explanations of reality?

Recall the epicycles, the imaginary circles that Ptolemy used and formalized around A.D. 150 to describe the motions of planets. Although Ptolemy had no evidence for their existence, epicycles successfully explained what the ancients could see in the night sky, so they were accepted as real. But they were eventually shown to be a fiction, more than 1,500 years later. Are superstrings and the multiverse, painstakingly theorized by hundreds of brilliant scientists, anything more than modern-day epicycles?
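
For readers who want to see what an epicycle actually is, mathematically: the planet’s position is the sum of two rotating vectors, one tracing the large “deferent” circle around the Earth and one tracing the small epicycle around a point on the deferent. The sketch below uses made-up radii and speeds, not Ptolemy’s parameters, but it reproduces the retrograde loops the construction was invented to explain:

```python
import numpy as np

# Deferent-plus-epicycle position as seen from a central Earth.
# All radii and angular speeds are arbitrary illustrative values.
def ptolemaic_position(t, R_def=10.0, r_epi=3.0, w_def=1.0, w_epi=7.0):
    x = R_def * np.cos(w_def * t) + r_epi * np.cos(w_epi * t)
    y = R_def * np.sin(w_def * t) + r_epi * np.sin(w_epi * t)
    return x, y

t = np.linspace(0, 2 * np.pi, 2000)
x, y = ptolemaic_position(t)
angle = np.unwrap(np.arctan2(y, x))        # apparent position against the stars
retrograde = np.mean(np.diff(angle) < 0)   # fraction of time it appears to move backward
print(f"apparent retrograde motion during {retrograde:.0%} of the cycle")
```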

Just a few days ago, scientists restarted investigations with the Large Hadron Collider, after a two-year hiatus. Upgrades have made it even more powerful, and physicists are eager to explore the properties of the Higgs particle in greater detail. If the upgraded collider does discover supersymmetric particles, it will be an astonishing triumph of modern physics. But if nothing is found, our next steps may prove to be difficult and controversial, challenging not just how we do science but what it means to do science at all.

http://www.nytimes.com/2015/06/07/opini ... inion&_r=0

Post by kmaherali »

Experiment Shows Future Events Affect The Past

http://www.scienceandnonduality.com/exp ... -the-past/

One of the main tenets of quantum physics is that particles, such as electrons and photons, can act like both a particle and a wave. In addition, a particle’s choice for which way it behaves depends upon how it is measured at the end of its journey.

Australian researchers recently confirmed this wave-particle duality by carrying out John Wheeler’s classic delayed-choice thought experiment. Their results confirm that, for quantum particles at least, reality doesn’t exist until it is measured, and that future events can affect the past.

“It proves that measurement is everything. At the quantum level, reality does not exist if you are not looking at it,” said study author Andrew Truscott, a professor of physics at Australian National University in a press release.

When Wheeler first proposed this thought experiment in 1978, he suggested using light beams, or photons, as the particles. The Australian team, though, used helium atoms instead. This added another layer of complexity to the experiment because, unlike photons, atoms have mass and can interact with electrical fields.

In this experiment, which was published May 25 in Nature Physics, the researchers sent a single atom down a path through a grating pattern formed by laser beams. This is similar to the solid grating used to scatter light. If the atom acted like a particle it would travel in a straight line. As a wave, though, the atom would produce the interference bands seen with light passing through double slits.

In addition, the researchers randomly added a second laser grating. They found that when this was present, the atoms created the wavelike interference pattern. When the second grating was not there, the atoms behaved like particles and traveled along a single path.

However, whether or not the second grating was added was determined only after the atom had made it through the first crossroads. The wave-like or particle-like behavior of the atom only came into existence when the researchers measured it after it had completed its journey.
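
The logic of that protocol can be made concrete with a toy two-path model. This is a hedged sketch only: an idealized Mach-Zehnder-style stand-in for the laser gratings, not a simulation of the ANU apparatus. When the recombining “grating” is applied after the atom has already passed the first one, the detector statistics show interference fringes as the path phase is varied; when it is not applied, each detector simply fires half the time:

```python
import numpy as np

rng = np.random.default_rng(0)

# 50/50 beam splitter, standing in for one laser grating.
BS = np.array([[1, 1j],
               [1j, 1]]) / np.sqrt(2)

def run_trial(phase, insert_second_grating):
    """Send one atom through; the choice to insert the second grating is
    made only after the atom has passed the first one."""
    state = BS @ np.array([1.0, 0.0])                   # split onto two paths
    state = np.diag([1.0, np.exp(1j * phase)]) @ state  # relative path phase
    if insert_second_grating:                           # the delayed choice
        state = BS @ state                              # recombine the paths
    probs = np.abs(state) ** 2
    return rng.choice([0, 1], p=probs / probs.sum())    # which detector fires

for phase in np.linspace(0, np.pi, 5):
    with_g = np.mean([run_trial(phase, True) == 0 for _ in range(4000)])
    without = np.mean([run_trial(phase, False) == 0 for _ in range(4000)])
    print(f"phase={phase:4.2f}  P(detector 0 | grating)={with_g:.2f}  "
          f"P(detector 0 | no grating)={without:.2f}")
# With the second grating, P(detector 0) sweeps from 0 to 1 as the phase varies
# (interference); without it, the probability stays near 0.5 (which-path behavior).
```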

This, says Truscott, shows that “a future event causes the photon to decide its past.”


Post by kmaherali »

Rendezvous With Pluto

There was something wonderfully childlike in the delight of scientists and the public at the rendezvous of the New Horizons spacecraft with that most distant and mysterious of the planets, Pluto, even if it was reclassified a few years ago as merely a “dwarf” planet. But there was nothing childish in the extraordinary science and engineering required to send half a ton of highly sophisticated instruments hurtling through space at speeds of up to 47,000 miles an hour for three billion miles.

The whoops at the operations center at the Johns Hopkins Applied Physics Laboratory on Tuesday at the moment when the probe shot past Pluto, and the cheers when the spacecraft broke silence many hours later to confirm that it had survived the passage, spoke to an elemental curiosity in every human being, one awakened when a child first gazes out into a star-filled sky. That is ultimately what the mission was all about, as the celebrated British cosmologist Stephen Hawking declared in a message broadcast on NASA TV: “We explore because we are human and we want to know.”

For the scientists who launched New Horizons nine and a half years ago, there was a distinctly parental anxiety in the hours after it went silent on Monday so it could dedicate all its energies — a mere 200 watts — to taking pictures and measurements of the icy body. “We always talk about the spacecraft as being a child, maybe a teenager,” explained Alice Bowman, the operations manager.

There was a storybook quality to the entire mission. Perched at the edge of the solar system in a thicket of small frozen objects known as the Kuiper belt, Pluto was only discovered in 1930 by Clyde Tombaugh, a self-taught astronomer, a pinch of whose ashes were on board New Horizons, and named by an 11-year-old English girl, Venetia Burney, after the Roman god of the underworld. Once the spacecraft began sending clear pictures, scientists started naming various features on Pluto after fictional underworld characters ranging from the Balrog, a creature in J. R. R. Tolkien’s “Lord of the Rings” to Meng-Po, the goddess of forgetfulness in Chinese mythology.

The flyby is justifiably a source of great pride for the United States, which has now sent probes to all eight planets and the reclassified Pluto, fulfilling the ambitious challenge proclaimed by President John F. Kennedy on May 25, 1961, when he called on Congress to approve funding for a mission to the Moon and “perhaps to the very end of the solar system.”

The major motive then was to demonstrate the supremacy of Western freedoms over Soviet tyranny. The Cold War is over, and the end of the solar system has been reached. But that cannot be the end of space exploration.

What next? Scientists, of course, are full of ideas, from further probes into our solar system to a search for planets beyond (readers can vote their preferences here). Whatever the choice, there need not be any commercial or ideological justification. The human impulse to know is more than enough.

http://www.nytimes.com/2015/07/16/opini ... d=45305309

*****
Pluto’s Portrait From New Horizons: Ice Mountains and No Craters

With fanfare, NASA released the first batch of mesmerizing close-up images of the dwarf planet and its moons.

http://www.nytimes.com/2015/07/16/scien ... d=45305309

Post by kmaherali »

What Emotions Are (and Aren’t)

OUR senses appear to show us the world the way it truly is, but they are easily deceived. For example, if you listen to a recorded symphony through stereo speakers that are placed exactly right, the orchestra will sound like it’s inside your head. Obviously that isn’t the case.

But suppose you completely trusted your senses. You might find yourself asking well-meaning but preposterous scientific questions like “Where in the brain is the woodwinds section located?” A more reasonable approach is not to ask a where question but a how question: How does the brain construct this experience of hearing the orchestra in your head?

I have just set the stage to dispel a major misconception about emotions. Most people, including many scientists, believe that emotions are distinct, locatable entities inside us — but they’re not. Searching for emotions in this form is as misguided as looking for cerebral clarinets and oboes.

Of course, we experience anger, happiness, surprise and other emotions as clear and identifiable states of being. This seems to imply that each emotion has an underlying property or “essence” in the brain or body. Perhaps an annoying co-worker triggers your “anger neurons,” so your blood pressure rises; you scowl, yell and feel the heat of fury. Or the loss of a loved one triggers your “sadness neurons,” so your stomach aches; you pout, feel despair and cry. Or an alarming news story triggers your “fear neurons,” so your heart races; you freeze and feel a flash of dread.

Such characteristics are thought to be the unique biological “fingerprints” of each emotion. Scientists and technology companies spend enormous amounts of time and money trying to locate these fingerprints. They hope someday to identify your emotions from your facial muscle movements, your body changes and your brain’s electrical signals.

Some scientific studies seem to support that such fingerprints exist. But many of those studies disagree on what the fingerprints are, and a multitude of other studies indicate there are no such fingerprints.

Let’s start with neuroscience. The Interdisciplinary Affective Science Laboratory (which I direct) collectively analyzed brain-imaging studies published from 1990 to 2011 that examined fear, sadness, anger, disgust and happiness. We divided the human brain virtually into tiny cubes, like 3-D pixels, and computed the probability that studies of each emotion found an increase in activation in each cube.

Overall, we found that no brain region was dedicated to any single emotion. We also found that every alleged “emotion” region of the brain increased its activity during nonemotional thoughts and perceptions as well.
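
A toy sketch of the kind of voxel-wise tally described above, with entirely made-up data and thresholds, shown only to make the method concrete (it is not the lab’s actual analysis pipeline):

```python
import numpy as np

rng = np.random.default_rng(1)
emotions = ["fear", "sadness", "anger", "disgust", "happiness"]
n_studies, n_voxels = 100, 5000   # made-up counts

# activation[e][s, v] is True if study s of emotion e reported increased
# activation in voxel (tiny cube) v. Here the data are random placeholders.
activation = {e: rng.random((n_studies, n_voxels)) < 0.25 for e in emotions}

# Per-voxel probability that a study of a given emotion reported activation there.
prob = np.vstack([activation[e].mean(axis=0) for e in emotions])  # shape (5, n_voxels)

# A voxel would look "dedicated" to one emotion if it were frequently reported
# for that emotion and rarely for the others (thresholds are arbitrary).
best = prob.max(axis=0)
runner_up = np.sort(prob, axis=0)[-2]
dedicated = (best > 0.6) & (runner_up < 0.2)
print("voxels that look emotion-specific in this toy data:", int(dedicated.sum()))
```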

The most well-known “emotion” region of the brain is the amygdala, a group of nuclei found deep within the temporal lobes. Since 2009, at least 30 articles in the popular press have claimed that fear is caused by neurons firing in the amygdala. Yet only a quarter of the experiments that we analyzed showed an increase in activity in the amygdala during the experience of fear. Indeed, it has long been known that certain “fear” behaviors, such as fleeing, don’t require the amygdala.

Other evidence against the amygdala-fear relationship comes from a pair of identical twins, known in the scientific literature as “BG” and “AM,” who both have a genetic disease that obliterates the amygdala. BG has difficulty feeling fear in all but the most extreme situations, but AM leads a normal emotional life.

Brain regions like the amygdala are certainly important to emotion, but they are neither necessary nor sufficient for it. In general, the workings of the brain are not one-to-one, whereby a given region has a distinct psychological purpose. Instead, a single brain area like the amygdala participates in many different mental events, and many different brain areas are capable of producing the same outcome. Emotions like fear and anger, my lab has found, are constructed by multipurpose brain networks that work together.

If emotions are not distinct neural entities, perhaps they have a distinct bodily pattern — heart rate, respiration, perspiration, temperature and so on?

Again, the answer is no. My lab analyzed over 200 published studies, covering nearly 22,000 test subjects, and found no consistent and specific fingerprints in the body for any emotion. Instead, the body acts in diverse ways that are tied to the situation. Even a rat facing a threat (say, the odor of a cat) will flee, freeze or fight depending on its surrounding context.

The same goes for the human face. Many scientists assume that the face clearly and reliably broadcasts emotion (scowling in anger, pouting in sadness, widening the eyes in fear, wrinkling the nose in disgust). But a growing body of evidence suggests that this is not the case. When we place electrodes on a human face and actually measure muscle movements during anger, for example, we find that people make a wide variety of movements, not just the stereotypical scowl.

CHARLES DARWIN famously vanquished the notion of essences in biology. He observed that a species is not a single type of being with a fixed set of attributes, but rather a population of richly varied individuals, each of which is better or worse suited to its environment.

Analogously, emotion words like “anger,” “happiness” and “fear” each name a population of diverse biological states that vary depending on the context. When you’re angry with your co-worker, sometimes your heart rate will increase, other times it will decrease and still other times it will stay the same. You might scowl, or you might smile as you plot your revenge. You might shout or be silent. Variation is the norm.

This insight is not just academic. When medical researchers ask, “What is the link between anger and cancer?” as if there is a single thing called “anger” in the body, they are in the grip of this error. When airport security officers are trained on the assumption that facial and body movements are reliable indicators of innermost feelings, taxpayers’ money is wasted.

The ease with which we experience emotions, and the effortlessness with which we see emotions in others, doesn’t mean that each emotion has a distinct pattern in the face, body or brain. Instead of asking where emotions are or what bodily patterns define them, we would do better to abandon such essentialism and ask the more revealing question, “How does the brain construct these incredible experiences?”

Lisa Feldman Barrett is a professor of psychology at Northeastern University and the author of the forthcoming book “How Emotions Are Made: The New Science of the Mind and Brain.”

http://www.nytimes.com/2015/08/02/opini ... 05309&_r=0

Post by kmaherali »

Whole Nervous System Filmed In Action

http://www.msn.com/en-ca/video/news/who ... lsignoutmd

For the first time, a team of researchers has filmed a complete central nervous system firing as a relatively complex animal, in this case a fruit fly larva, moved back and forth.

******
How Walking in Nature Changes the Brain

A walk in the park may soothe the mind and, in the process, change the workings of our brains in ways that improve our mental health, according to an interesting new study of the physical effects on the brain of visiting nature.

Most of us today live in cities and spend far less time outside in green, natural spaces than people did several generations ago.

City dwellers also have a higher risk for anxiety, depression and other mental illnesses than people living outside urban centers, studies show.

These developments seem to be linked to some extent, according to a growing body of research. Various studies have found that urban dwellers with little access to green spaces have a higher incidence of psychological problems than people living near parks and that city dwellers who visit natural environments have lower levels of stress hormones immediately afterward than people who have not recently been outside.

But just how a visit to a park or other green space might alter mood has been unclear. Does experiencing nature actually change our brains in some way that affects our emotional health?

That possibility intrigued Gregory Bratman, a graduate student at the Emmett Interdisciplinary Program in Environment and Resources at Stanford University, who has been studying the psychological effects of urban living. In an earlier study published last month, he and his colleagues found that volunteers who walked briefly through a lush, green portion of the Stanford campus were more attentive and happier afterward than volunteers who strolled for the same amount of time near heavy traffic.

But that study did not examine the neurological mechanisms that might underlie the effects of being outside in nature.

So for the new study, which was published last week in Proceedings of the National Academy of Sciences, Mr. Bratman and his collaborators decided to closely scrutinize what effect a walk might have on a person’s tendency to brood.

Brooding, which is known among cognitive scientists as morbid rumination, is a mental state familiar to most of us, in which we can’t seem to stop chewing over the ways in which things are wrong with ourselves and our lives. This broken-record fretting is not healthy or helpful. It can be a precursor to depression and is disproportionately common among city dwellers compared with people living outside urban areas, studies show.

Perhaps most interesting for the purposes of Mr. Bratman and his colleagues, however, such rumination also is strongly associated with increased activity in a portion of the brain known as the subgenual prefrontal cortex.

If the researchers could track activity in that part of the brain before and after people visited nature, Mr. Bratman realized, they would have a better idea about whether and to what extent nature changes people’s minds.

Mr. Bratman and his colleagues first gathered 38 healthy, adult city dwellers and asked them to complete a questionnaire to determine their normal level of morbid rumination.

The researchers also checked for brain activity in each volunteer’s subgenual prefrontal cortex, using scans that track blood flow through the brain. Greater blood flow to parts of the brain usually signals more activity in those areas.

Then the scientists randomly assigned half of the volunteers to walk for 90 minutes through a leafy, quiet, parklike portion of the Stanford campus or next to a loud, hectic, multi-lane highway in Palo Alto. The volunteers were not allowed to have companions or listen to music. They were allowed to walk at their own pace.

Immediately after completing their walks, the volunteers returned to the lab and repeated both the questionnaire and the brain scan.

As might have been expected, walking along the highway had not soothed people’s minds. Blood flow to their subgenual prefrontal cortex was still high and their broodiness scores were unchanged.

But the volunteers who had strolled along the quiet, tree-lined paths showed slight but meaningful improvements in their mental health, according to their scores on the questionnaire. They were not dwelling on the negative aspects of their lives as much as they had been before the walk.

They also had less blood flow to the subgenual prefrontal cortex. That portion of their brains was quieter.

These results “strongly suggest that getting out into natural environments” could be an easy and almost immediate way to improve moods for city dwellers, Mr. Bratman said.

But of course many questions remain, he said, including how much time in nature is sufficient or ideal for our mental health, as well as what aspects of the natural world are most soothing. Is it the greenery, quiet, sunniness, loamy smells, all of those, or something else that lifts our moods? Do we need to be walking or otherwise physically active outside to gain the fullest psychological benefits? Should we be alone or could companionship amplify mood enhancements?

“There’s a tremendous amount of study that still needs to be done,” Mr. Bratman said.

But in the meantime, he pointed out, there is little downside to strolling through the nearest park, and some chance that you might beneficially muffle, at least for a while, your subgenual prefrontal cortex.

http://well.blogs.nytimes.com/2015/07/2 ... k&WT.mc_c=

Post by kmaherali »

Read the Lost Dream Journal of the Man Who Discovered Neurons

An exclusive look at the dreams Santiago Ramon y Cajal recorded to prove Freud was wrong.

Santiago Ramón y Cajal, a Spanish histologist and anatomist known today as the father of modern neuroscience, was also a committed psychologist who believed psychoanalysis and Freudian dream theory were “collective lies.” When Freud published The Interpretation of Dreams in 1900, the science world swooned over his theory of the unconscious. Dreams quickly became synonymous with repressed desire. Puzzling dream images could unlock buried conflicts, the psychoanalyst said, given the correct interpretation.

Cajal, who won the 1906 Nobel Prize for discovering neurons and, more remarkably, intuiting the form and function of synapses, set out to prove Freud wrong. To disprove the theory that every dream is the result of a repressed desire, Cajal began keeping a dream journal and collecting the dreams of others, analyzing them with logic and rigor.


Cajal eventually deemed the project unpublishable. But before his death in 1934, he gave his research, scribbled on stained loose papers and in the margins of books and newspapers, to his good friend and former student, the psychiatrist José Germain Cebrián. Germain typed the diary into a book, which was thought lost during the Spanish Civil War (1936-1939). In fact, Germain carried the manuscript with him as he traveled through Europe. Before his death, he gave it to José Rallo, a Spanish psychiatrist and dream researcher. To the delight of scholars and enthusiasts, Los sueños de Santiago Ramón y Cajal was published in Spanish in 2014, containing 103 of Cajal’s dreams, recorded between 1918 and his death in 1934.[1] Translated here into English for the first time, these dreams, and Cajal’s notes on them, offer insight into the mind of a great scientist—insight that perhaps he himself did not always have.

Cajal exalted rational thinking and the conscious will. In his autobiography, the scientist described neurons as “mysterious butterflies of the soul, the beating of whose wings might one day, who knows, reveal the secrets of mental life.” He had a lifelong fascination with dreaming and dreams, despite, or perhaps because of, their tendency to resist all rational explanation. Early in his career, Cajal studied hypnosis and the power of suggestion, turning his home into a clinic for hysterics, neurasthenics, and spiritual mediums, and he planned to publish three psychological books before judging their content to be too speculative: Essays on Hypnotism, Spiritualism, and Metaphysics; Dreams: Critics of Their Explanatory Doctrines; and Dreams. He did, however, publish a scientific paper in 1908 on dreaming and visual hallucinations, which begins, “Dreaming is one of the most interesting and most wondrous phenomena of brain physiology.”[2] In it, he investigates visual hallucinations in blind adults, concluding that the retina is not active during dreaming and looking instead to the associative cortex, thalamus, and glial cells for evidence of activation.[3]

In 1902, in the preface to a contemporary poetry book, the normally reserved Cajal allowed himself to theorize a bit more freely about dreams. “The majority of dreams,” he writes, “consists of scraps of ideas, unconnected or weirdly assembled, somewhat like an absurd monster without proportions, harmony or reason.” He theorized that dreaming happens in unused areas of the cerebral cortex: “The fallow lands of the brain, that is, the cells in which unconscious images are recorded, stay awake and become excited, rejuvenating themselves with the exercise they did behind the back of the conscious mind.” At the end of the waking day, according to Cajal, certain groups of cells are tired out, leaving others to work during sleep. More than any theory, this persistent cellular focus is Cajal’s legacy to psychology, which indeed now favors a neurobiological approach.[4] Some contemporary theories about the neuroscience of dreaming, namely the activation-synthesis hypothesis, would seem to support Cajal’s belief that dreams are a sequence of random images, unfiltered by the prefrontal cortex, which the brain then tries to interpret.

Cajal’s anatomical views on dreaming and his reluctance to speculate without physiological evidence stand in stark contrast to the dream theory made famous by Freud. In a letter to Juan Paulis, published in 1935, Cajal wrote, “Except in extremely rare cases it is impossible to verify the doctrine of the surly and somewhat egotistical Viennese author, who has always seemed more preoccupied with founding a sensational theory than with the desire to austerely serve the cause of scientific theory.”[5]

Devoted, like many mythologized geniuses, more to his work than to his family, Cajal remained entranced at his microscope, failing to respond to his wife, Silveria Fañanás García, as she screamed through the night that their 6-year-old daughter was dying. In mourning, he found his only refuge in the light of the microscope. Thirty years after the death of his daughter, the father of contemporary neuroscience dreams that he is drowning off the coast of Spain, holding his little girl in his arms. This dream needs no further analysis.

More....

http://nautil.us/issue/27/dark-matter/r ... ed-neurons

Post by kmaherali »

How Changeable Is Gender?

THANKS to Caitlyn Jenner, and the military’s changing policies, transgender people are gaining acceptance — and living in a bigger, more understanding spotlight than at any previous time.

We’re learning to be more accepting of transgender individuals. And we’re learning more about gender identity, too.

The prevailing narrative seems to be that gender is a social construct and that people can move between genders to arrive at their true identity.

But if gender were nothing more than a social convention, why was it necessary for Caitlyn Jenner to undergo facial surgeries, take hormones and remove her body hair? The fact that some transgender individuals use hormone treatment and surgery to switch gender speaks to the inescapable biology at the heart of gender identity.

This is not to suggest that gender identity is simply binary — male or female — or that gender identity is inflexible for everyone. Nor does it mean that conventional gender roles always feel right; the sheer number of people who experience varying degrees of mismatch between their preferred gender and their body makes this very clear.

In fact, recent neuroscience research suggests that gender identity may exist on a spectrum and that gender dysphoria fits well within the range of human biological variation. For example, Georg S. Kranz at the Medical University of Vienna and colleagues elsewhere reported in a 2014 study in The Journal of Neuroscience that individuals who identified as transsexuals — those who wanted sex reassignment — had brain white-matter microstructure that was intermediate between what is typical of their desired gender and what is typical of their genetic sex.

Dr. Kranz studied four different groups: female-to-male transsexuals; male-to-female transsexuals; and controls who were born female or male and identify as such. Since hormones can have a direct effect on the brain, both transsexual groups were studied before they took any sex hormones, so observed differences in brain function and structure would not be affected by the treatment. He used a high-resolution technique called diffusion tensor imaging, a special type of M.R.I., to examine the white matter microstructure of subjects’ brains.

What Dr. Kranz found was intriguing: In several brain regions, people born female with a female gender identity had the highest level of something called mean diffusivity, followed by female-to-male transsexuals. Next came male-to-female transsexuals, and then the males with a male gender identity, who had the lowest levels.

In other words, it seems that Dr. Kranz may have found a neural signature of the transgender experience: a mismatch between one’s gender identity and physical sex. Transgender people have a brain that is structurally different than the brain of a nontransgender male or female — someplace in between men and women.

This gradient of structural brain differences, from females to males, with transgender people in between, suggests that gender identity has a neural basis and that it exists on a spectrum, like so much of human behavior.

Some theorize that the transgender experience might arise, in part, from a quirk of brain development. It turns out that the sexual differentiation of the brain happens during the second half of pregnancy, later than sexual differentiation of the genitals and body, which begins during the first two months of pregnancy. And since these two processes can be influenced independently of each other, it may be possible to have a mismatch between gender-specific brain development and that of the body.

Is it really so surprising that gender identity might, like sexual orientation, be on a spectrum? After all, one can be exclusively straight or exclusively gay — or anything in between. But variability in a behavior shouldn’t be confused with its malleability. There is little evidence, for example, that you really can change your sexual orientation. Sure, you can change your sexual behavior, but your inner sexual fantasies endure.

In fact, attempts to change a person’s sexual orientation, through so-called reparative therapy, have been debunked as quackery and rightly condemned by the psychiatric profession as potentially harmful.

Of course, people should have the freedom to assume whatever gender role makes them comfortable and refer to themselves with whatever pronoun they choose; we should encourage people to be who they really feel they are, not who or what society would like them to be. I wonder, if we were a more tolerant society that welcomed all types of gender identity, what the impact might be on gender dysphoria. How many transgender individuals would feel the need to physically change gender, if they truly felt accepted with whatever gender role they choose?

At the same time, we have to acknowledge that gender identity is a complex phenomenon, involving a mix of genes, hormones and social influence. And there is no getting around the fact that biology places constraints on our capacity to reimagine ourselves and to change, and it’s important to understand those limitations.

The critical question is not whether there is a range of gender identity — it seems clear that there is. Rather, it is to what extent and in which populations gender identity is malleable, and to what extent various strategies to change one’s body and behavior to match a preferred gender will give people the psychological satisfaction they seek.

Although transsexualism (defined as those who want to change or do change their body) is very rare — a recent meta-analysis estimated the prevalence at about 5 per 100,000 — it garners much media attention. What do we really know about how these individuals feel and function in their new role?

The data are all over the map. One meta-analysis published in 2010 of follow-up studies suggested that about 80 percent of transgender individuals reported subjective improvement in terms of gender dysphoria and quality of life. But the review emphasized that many of the studies were suboptimal: All of them were observational and most lacked controls.

Dr. Cecilia Dhejne and colleagues at the Karolinska Institute in Sweden have done one of the largest follow-up studies of transsexuals, published in PLOS One in 2011. They compared a group of 324 Swedish transsexuals for an average of more than 10 years after gender reassignment with controls and found that transsexuals had 19 times the rate of suicide and about three times the mortality rate compared with controls. When the researchers controlled for baseline rates of depression and suicide, which are known to be higher in transsexuals, they still found elevated rates of depression and suicide after sex reassignment.

This study doesn’t prove that gender reassignment per se was the cause of the excess morbidity and mortality in transsexual people; to answer that, you would have to compare transgender people who were randomly assigned to reassignment to those who were not. Still, even if hormone replacement and surgery relieve gender dysphoria, the overall outcome with gender reassignment doesn’t look so good — a fact that only underscores the need for better medical treatments in general for transgender individuals and better psychiatric care after reassignment.

Alarmingly, 41 percent of transgender and gender nonconforming individuals attempt suicide at some point in their lifetime compared with 4.6 percent of the general public, according to a joint study by the American Foundation for Suicide Prevention and the Williams Institute. The disturbingly high rate of suicide attempts among transgender people likely reflects a complex interaction of mental health factors and experiences of harassment, discrimination and violence. The study analyzed data from the National Transgender Discrimination Survey, which documents the bullying, harassment, rejection by family and other assorted horrors.

On a broader level, the outcome studies suggest that gender reassignment doesn’t necessarily give everyone what they really want or make them happier.

Nowhere is this issue more contentious than in children and adolescents who experience gender dysphoria or the sense that their desired gender mismatches their body. In fact, there are few areas of medicine or psychiatry where the debate has become so heated. I was surprised to discover how many professional colleagues in this area either warned me to be careful about what I wrote or were reluctant to talk with me on the record for fear of reprisal from the transgender community.

If gender identity were a fixed and stable phenomenon in all young people, there would be little to argue about. But we have learned over the past two decades that, like so much else in child and adolescent behavior, the experience of gender dysphoria is itself often characterized by flux.

Several studies have tracked the persistence of gender dysphoria in children as they grow. For example, Dr. Richard Green’s study of young boys with gender dysphoria in the 1980s found that only one of the 44 boys was gender dysphoric by adolescence or adulthood. And a 2008 study by Madeleine S. C. Wallein, at the VU University Medical Center in the Netherlands, reported that in a group of 77 young people, ages 5 to 12, who all had gender dysphoria at the start of the study, 70 percent of the boys and 36 percent of the girls were no longer gender dysphoric after an average of 10 years’ follow-up.

THIS strongly suggests that gender dysphoria in young children is highly unstable and likely to change. Whether the loss of gender dysphoria is spontaneous or the result of parental or social influence is anyone’s guess. Moreover, we can’t predict reliably which gender dysphoric children will be “persisters” and which will be “desisters.”

So if you were a parent of, say, an 8-year-old boy who said he really wanted to be a girl, you might not immediately accede to your child’s wish, knowing that there is a high probability — 80 percent, in some studies — that that desire will disappear with time.

The counterargument is that to delay treatment is to consign this child to psychological suffering of potentially unknown duration. This is a disturbing possibility, though much can be done to help alleviate depression or anxiety without necessarily embarking on gender change. But rather than managing these psychological symptoms and watchfully waiting, some groups recommend pharmacologically delaying the onset of puberty in gender dysphoric children until age 16, before proceeding to reassignment. Puberty suppression is presumed reversible, and can be stopped if the adolescent’s gender dysphoria desists. But the risks of this treatment are not fully understood. Even more troubling, some doctors appear to be starting reassignment earlier. Some argue that the medical and psychiatric professions have a responsibility to respond to the child as he or she really is.

But if anything marks what a child really is, it is experimentation and flux. Why, then, would one subject a child to hormones and gender reassignment if there is a high likelihood that the gender dysphoria will resolve?

With adolescents, the story is very different: About three quarters of gender dysphoric teens may be “persisters,” which makes decisions about gender reassignment at this age more secure.

Clinicians who take an agnostic watch-and-wait approach in children with gender dysphoria have been accused by some in the transgender community of imposing societal values — that boys should remain boys and girls remain girls — on their patients, and have been compared to clinicians who practice reparative therapy for gays.

I think that criticism is misguided. First, there is abundant evidence that reparative therapy is both ineffective and often harmful, while there is no comparable data in the area of gender dysphoria. Second, unlike sexual orientation, which tends to be stable, gender dysphoria in many young people clearly isn't. Finally, when it comes to gender dysphoria, the evidence for therapeutics is simply poor to start with: There are no randomized clinical trials and very few comparative studies examining different approaches for this population.

Given the absence of good treatment-outcome data, how can anyone — whether transgender activist, parent or clinician — be sure of the best course of action?

There is obviously a huge gap between rapidly shifting cultural attitudes about gender identity and our scientific understanding of them. Until we have better data, what’s wrong with a little skepticism? After all, medical and psychological treatments should be driven by the best available scientific evidence — not political pressure or cherished beliefs.

Richard A. Friedman is a professor of clinical psychiatry at Weill Cornell Medical College and a contributing opinion writer.

A version of this op-ed appears in print on August 23, 2015, on page SR1 of the New York edition with the headline: How Changeable Is Gender?

http://www.nytimes.com/2015/08/23/opini ... type=Blogs
kmaherali
Posts: 25705
Joined: Thu Mar 27, 2003 3:01 pm

Post by kmaherali »

How a Volcanic Eruption in 1815 Darkened the World but Colored the Arts

In April 1815, the most powerful volcanic blast in recorded history shook the planet in a catastrophe so vast that 200 years later, investigators are still struggling to grasp its repercussions. It played a role, they now understand, in icy weather, agricultural collapse and global pandemics — and even gave rise to celebrated monsters.

Around the lush isles of the Dutch East Indies — modern-day Indonesia — the eruption of Mount Tambora killed tens of thousands of people. They were burned alive or killed by flying rocks, or they died later of starvation because the heavy ash smothered crops.

More surprising, investigators have found that the giant cloud of minuscule particles spread around the globe, blocked sunlight and produced three years of planetary cooling. In June 1816, a blizzard pummeled upstate New York. That July and August, killer frosts in New England ravaged farms. Hailstones pounded London all summer.

A recent history of the disaster, “Tambora: The Eruption that Changed the World,” by Gillen D’Arcy Wood, shows planetary effects so extreme that many nations and communities sustained waves of famine, disease, civil unrest and economic decline. Crops failed globally.

More...
http://www.nytimes.com/2015/08/25/scien ... d=45305309

******
Walter Munk, the ‘Einstein of the Oceans’

From forecasting ocean waves in World War II to using underwater sound to measure climate change, Walter Munk has spent nearly eight decades taking on timely problems.

More...
http://www.nytimes.com/2015/08/25/scien ... d=45305309
kmaherali
Posts: 25705
Joined: Thu Mar 27, 2003 3:01 pm

Post by kmaherali »

NASA’s Next Horizon in Space

Most of us have come down from the highs of seeing Pluto up close for the first time. Ever since New Horizons beamed back those photos, the question has loomed: What's next?

We asked a few experts and Times readers what NASA’s exploratory priority should be in the years ahead. More than 1,600 readers shared their imaginative ideas. Some responses were serious and technical. Others were more whimsical, like that of Carter Read of Brooklyn, who proposed that we “send a record player bumping the sounds of Chuck Berry’s ‘golden decade’ into deep space,” because “he’s the best communicator the human race has.” (Mr. Berry already has one song in space, aboard the Voyager spacecraft.)

Below are some of the best responses, starting with the most popular. Perhaps NASA — and the members of Congress who appropriate its budget — will listen up.

http://www.nytimes.com/interactive/2015 ... d=45305309

*******
Jacob Bekenstein, Physicist Who Revolutionized Theory of Black Holes, Dies at 68

Jacob Bekenstein, a physicist who prevailed in an argument with Stephen Hawking that revolutionized the study of black holes, and indeed the nature of space-time itself, died on Sunday in Helsinki, Finland, where he was to give a physics lecture. He was 68.
....

In the Haaretz.com interview, Dr. Bekenstein put it more modestly. “I look at the world as a product of God,” he said. His job as a scientist, he added, was to figure out how it works.

“I feel much more comfortable in the world because I understand how simple things work,” he said. “I get a sense of security that not everything is random, and that I can actually understand and not be surprised by things.”

More...

http://www.nytimes.com/2015/08/22/scien ... Multimedia
kmaherali
Posts: 25705
Joined: Thu Mar 27, 2003 3:01 pm

Post by kmaherali »

Psychology Is Not in Crisis

Boston — IS psychology in the midst of a research crisis?

An initiative called the Reproducibility Project at the University of Virginia recently reran 100 psychology experiments and found that over 60 percent of them failed to replicate — that is, their findings did not hold up the second time around. The results, published last week in Science, have generated alarm (and in some cases, confirmed suspicions) that the field of psychology is in poor shape.

But the failure to replicate is not a cause for alarm; in fact, it is a normal part of how science works.

Suppose you have two well-designed, carefully run studies, A and B, that investigate the same phenomenon. They perform what appear to be identical experiments, and yet they reach opposite conclusions. Study A produces the predicted phenomenon, whereas Study B does not. We have a failure to replicate.

Does this mean that the phenomenon in question is necessarily illusory? Absolutely not. If the studies were well designed and executed, it is more likely that the phenomenon from Study A is true only under certain conditions. The scientist’s job now is to figure out what those conditions are, in order to form new and better hypotheses to test.

A number of years ago, for example, scientists conducted an experiment on fruit flies that appeared to identify the gene responsible for curly wings. The results looked solid in the tidy confines of the lab, but out in the messy reality of nature, where temperatures and humidity varied widely, the gene turned out not to reliably have this effect. In a simplistic sense, the experiment “failed to replicate.” But in a grander sense, as the evolutionary biologist Richard Lewontin has noted, “failures” like this helped teach biologists that a single gene produces different characteristics and behaviors, depending on the context.

Similarly, when physicists discovered that subatomic particles didn’t obey Newton’s laws of motion, they didn’t cry out that Newton’s laws had “failed to replicate.” Instead, they realized that Newton’s laws were valid only in certain contexts, rather than being universal, and thus the science of quantum mechanics was born.

In psychology, we find many phenomena that fail to replicate if we change the context. One of the most famous is called “fear learning,” which has been used to explain anxiety disorders like post-traumatic stress. Scientists place a rat into a small box with an electrical grid on the floor. They play a loud tone and then, a moment later, give the rat an electrical shock. The shock causes the rat to freeze and its heart rate and blood pressure to rise. The scientists repeat this process many times, pairing the tone and the shock, with the same results. Eventually, they play the tone without the shock, and the rat responds in the same way, as if expecting the shock.

Originally this “fear learning” was assumed to be a universal law, but then other scientists slightly varied the context and the rats stopped freezing. For example, if you restrain the rat during the tone (which shouldn’t matter if the rat is going to freeze anyway), its heart rate goes down instead of up. And if the cage design permits, the rat will run away rather than freeze.

These failures to replicate did not mean that the original experiments were worthless. Indeed, they led scientists to the crucial understanding that a freezing rat was actually responding to the uncertainty of threat, which happened to be engendered by particular combinations of tone, cage and shock.

Much of science still assumes that phenomena can be explained with universal laws and therefore context should not matter. But this is not how the world works. Even a simple statement like “the sky is blue” is true only at particular times of day, depending on the mix of molecules in the air as they reflect and scatter light, and on the viewer’s experience of color.

Psychologists are usually well attuned to the importance of context. In our experiments, we take great pains to avoid any irregularities or distractions that might affect the results. But when it comes to replication, psychologists and their critics often seem to forget the powerful and subtle effects of context. They ask simply, “Did the experiment work or not?” rather than considering a failure to replicate as a valuable scientific clue.

As with any scientific field, psychology has some published studies that were conducted sloppily, and a few bad eggs who have falsified their data. But contrary to the implication of the Reproducibility Project, there is no replication crisis in psychology. The “crisis” may simply be the result of a misunderstanding of what science is.

Science is not a body of facts that emerge, like an orderly string of light bulbs, to illuminate a linear path to universal truth. Rather, science (to paraphrase Henry Gee, an editor at Nature) is a method to quantify doubt about a hypothesis, and to find the contexts in which a phenomenon is likely. Failure to replicate is not a bug; it is a feature. It is what leads us along the path — the wonderfully twisty path — of scientific discovery.

Lisa Feldman Barrett, a professor of psychology at Northeastern University, is the author of the forthcoming book “How Emotions Are Made: The New Science of the Mind and Brain.”

http://www.nytimes.com/2015/09/01/opini ... d=45305309
kmaherali
Posts: 25705
Joined: Thu Mar 27, 2003 3:01 pm

Post by kmaherali »

Mendeleev’s Garden
In honor of the late neurologist who charmed us with over a dozen books, a beloved essay from the archives.
By Oliver Sacks
Excerpt:
"Even as a student in St. Petersburg, Mendeleev showed not only an insatiable curiosity but a hunger for organizing principles of all kinds. Linnaeus, in the eighteenth century, had classified animals and plants, and (much less successfully) minerals, too. Dana, in the 1830s, had replaced the old physical classification of minerals with a chemical classification of a dozen or so main categories (native elements, oxides, sulfides, and so on). But there was no such classification for the elements themselves, and there were now some sixty elements known. Some elements, indeed, seemed almost impossible to categorize. Where did uranium go, or that puzzling, ultralight metal, beryllium? Some of the most recently discovered elements were particularly difficult—thallium, for example, discovered in 1862, was in some ways similar to lead, in others to silver, in others to aluminum, and in yet others to potassium.

It was nearly twenty years from Mendeleev’s first interest in classification to the emergence of his periodic table in 1869. This long pondering and incubation (so similar, in a way, to Darwin’s before he published On the Origin of Species) was perhaps the reason why, when Mendeleev finally published his Principles, he could bring a vastness of knowledge and insight far beyond any of his contemporaries—some of them also had a clear vision of periodicity, but none of them could marshal the overwhelming detail he could."

https://theamericanscholar.org/mendelee ... urce=email
kmaherali
Posts: 25705
Joined: Thu Mar 27, 2003 3:01 pm

Post by kmaherali »

Will You Ever Be Able to Upload Your Brain?

SOME hominid along the evolutionary path to humans was probably the first animal with the cognitive ability to understand that it would someday die. To be human is to cope with this knowledge. Many have been consoled by the religious promise of life beyond this world, but some have been seduced by the hope that they can escape death in this world. Such hopes, from Ponce de León’s quest to find a fountain of youth to the present vogue for cryogenic preservation, inevitably prove false.

In recent times it has become appealing to believe that your dead brain might be preserved sufficiently by freezing so that some future civilization could bring your mind back to life. Assuming that no future scientists will reverse death, the hope is that they could analyze your brain’s structure and use this to recreate a functioning mind, whether in engineered living tissue or in a computer with a robotic body. By functioning, I mean thinking, feeling, talking, seeing, hearing, learning, remembering, acting. Your mind would wake up, much as it wakes up after a night’s sleep, with your own memories, feelings and patterns of thought, and continue on into the world.

I am a theoretical neuroscientist. I study models of brain circuits, precisely the sort of models that would be needed to try to reconstruct or emulate a functioning brain from a detailed knowledge of its structure. I don’t in principle see any reason that what I’ve described could not someday, in the very far future, be achieved (though it’s an active field of philosophical debate). But to accomplish this, these future scientists would need to know details of staggering complexity about the brain’s structure, details quite likely far beyond what any method today could preserve in a dead brain.

How much would we need to know to reconstruct a functioning brain? Let’s begin by defining some terms. Neurons are the cells in the brain that electrically carry information: Their electrical activity somehow amounts to your seeing, hearing, thinking, acting and all the rest. Each neuron sends a highly branched wire, or axon, out to connect or electrically “talk” to other neurons. The specialized connecting points between neurons are called synapses. Memories are commonly thought to be largely stored in the patterns of synaptic connections between neurons, which in turn shape the electrical activities of the neurons.

Much of the current hope of reconstructing a functioning brain rests on connectomics: the ambition to construct a complete wiring diagram, or “connectome,” of all the synaptic connections between neurons in the mammalian brain. Unfortunately connectomics, while an important part of basic research, falls far short of the goal of reconstructing a mind, in two ways. First, we are far from constructing a connectome. The current best achievement was determining the connections in a tiny piece of brain tissue containing 1,700 synapses; the human brain has more than a hundred billion times that number of synapses. While progress is swift, no one has any realistic estimate of how long it will take to arrive at brain-size connectomes. (My wild guess: centuries.)

Second, even if this goal were achieved, it would be only a first step toward the goal of describing the brain sufficiently to capture a mind, which would mean understanding the brain’s detailed electrical activity. If neuron A makes a synaptic connection onto neuron B, we would need to know the strength of the electrical signal in neuron B that would be caused by each electrical event from neuron A. The connectome might give an average strength for each connection, but the actual strength varies over time. Over short times (thousandths of a second to tens of seconds), the strength is changed, often sharply, by each signal that A sends. Over longer times (minutes to years), both the overall strength and the patterns of short-term changes can alter more permanently as part of learning. The details of these variations differ from synapse to synapse. To describe this complex transmission of information by a single fixed strength would be like describing air traffic using only the average number of flights between each pair of airports.
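
To make that point about ever-shifting strengths concrete, here is a minimal sketch — my own illustration, not the author's model — using a textbook-style short-term-depression rule: each signal from neuron A temporarily uses up transmitter resources that then recover, so the effective strength delivered to neuron B drifts from one signal to the next instead of sitting at a single fixed value. All the parameter values are illustrative assumptions.

```python
import math

U, TAU_REC, BASE = 0.5, 0.8, 1.0   # release fraction, recovery time constant (s), baseline weight

def responses(spike_times):
    """Effective strength delivered by each signal at a depressing synapse."""
    x = 1.0            # fraction of transmitter resources currently available
    last = None
    out = []
    for t in spike_times:
        if last is not None:
            # resources recover toward 1 between signals
            x = 1.0 - (1.0 - x) * math.exp(-(t - last) / TAU_REC)
        out.append(BASE * U * x)   # strength this particular signal actually has
        x -= U * x                 # this signal consumes part of the resources
        last = t
    return out

# Ten signals at 20 Hz: the delivered strength sags well below the nominal fixed value.
print([round(s, 3) for s in responses([i * 0.05 for i in range(10)])])
```

A connectome entry giving one number for this synapse would capture none of that trajectory.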

Underlying this complex behavior is a complex structure: Each synapse is an enormously complicated molecular machine, one of the most complicated known in biology, made up of over 1,000 different proteins with multiple copies of each. Why does a synapse need to be so complex? We don’t know all of the things that synapses do, but beyond dynamically changing their signal strengths, synapses may also need to control how changeable they are: Our best current theories of how we store new memories without overwriting old ones suggest that each synapse needs to continually reintegrate its past experience (the patterns of activity in neuron A and neuron B) to determine how fixed or changeable it will be in response to the next new experience. Take away this synapse-by-synapse malleability, current theory suggests, and either our memories would quickly disappear or we would have great difficulty forming new ones. Without being able to characterize how each synapse would respond in real time to new inputs and modify itself in response to them, we cannot reconstruct the dynamic, learning, changing entity that is the mind.

But that’s not all. Neurons themselves are complex and variable. Axons vary in their speed and reliability of transmission. Each neuron makes a treelike branching structure that reaches out to receive synaptic input from other neurons, as a tree’s branches reach out to sunlight. The branches, called dendrites, differ in their sensitivity to synaptic input, with the molecular composition as well as shape of a dendrite determining how it would respond to the electrical input it receives from synapses.

Nor are any of these parts of a living brain fixed entities. The brain’s components, including the neurons, axons, dendrites and synapses (and more), are constantly adapting to their electrical and chemical “experience,” as part of learning, to maintain the ability to give appropriately different responses to different inputs, and to keep the brain stable and prevent seizures. These adaptations depend on the dynamic molecular machinery in each neural structure. The states of all of these components are constantly being modulated by a wash of chemicals from brainstem neurons that determine such things as when we are awake or attentive and when we are asleep, and by hormones from the body that help drive our motivations. Each element differs in its susceptibility to these influences.

To reconstruct a mind, perhaps one would not need to replicate every molecular detail; given enough structure, the rest might be self-correcting. But an extraordinarily deep level of detail would be required, not only to characterize the connectome but also to understand how the neurons, dendrites, axons and synapses would dynamically operate, change and adapt themselves.

I don’t wish to suggest that only hopelessly complicated models of the brain are useful. Quite the contrary. Our most powerful theoretical research tools for understanding brain function are often enormously simplified models of small pieces of the brain — for example, characterizing synapses by a single overall strength and ignoring dendritic structure. I make my living studying such models. These simple models, developed in close interaction with experimental findings, can reveal basic mechanisms operating in brain circuits. Adding complexity to our models does not necessarily give us a more realistic picture of brain circuits because we do not know enough about the details of this complexity to model it accurately, and the complexity can obscure the relationships we are trying to grasp. But far more information would be needed before we could characterize the dynamic operation of even a generic whole brain. Capturing all of the structure that makes it one person’s individual mind would be fantastically more complicated still.

Neuroscience is progressing rapidly, but the distance to go in understanding brain function is enormous. It will almost certainly be a very long time before we can hope to preserve a brain in sufficient detail and for sufficient time that some civilization much farther in the future, perhaps thousands or even millions of years from now, might have the technological capacity to “upload” and recreate that individual’s mind.

I certainly have my own fears of annihilation. But I also know that I had no existence for the 13.8 billion years that the universe existed before my birth, and I expect the same will be true after my death. The universe is not about me or any other individual; we come and we go as part of a much larger process. More and more I am content with this awareness. We all find our own solutions to the problem death poses. For the foreseeable future, bringing your mind back to life will not be one of them.

Kenneth D. Miller is a professor of neuroscience at Columbia and a co-director of the Center for Theoretical Neuroscience.

http://www.nytimes.com/2015/10/11/opini ... ef=opinion
kmaherali
Posts: 25705
Joined: Thu Mar 27, 2003 3:01 pm

Post by kmaherali »

Gamblers, Scientists and the Mysterious Hot Hand

IN the opening act of Tom Stoppard’s play “Rosencrantz and Guildenstern Are Dead,” the two characters are passing the time by betting on the outcome of a coin toss. Guildenstern retrieves a gold piece from his bag and flips it in the air. “Heads,” Rosencrantz announces as he adds the coin to his growing collection.

Guil, as he’s called for short, flips another coin. Heads. And another. Heads again. Seventy-seven heads later, as his satchel becomes emptier and emptier, he wonders: Has there been a breakdown in the laws of probability? Are supernatural forces intervening? Have he and his friend become stuck in time, reliving the same random coin flip again and again?

Eighty-five heads, 89… Surely his losing streak is about to end.

Psychologists who study how the human mind responds to randomness call this the gambler’s fallacy — the belief that on some cosmic plane a run of bad luck creates an imbalance that must ultimately be corrected, a pressure that must be relieved. After several bad rolls, surely the dice are primed to land in a more advantageous way.

The opposite of that is the hot-hand fallacy — the belief that winning streaks, whether in basketball or coin tossing, have a tendency to continue, as if propelled by their own momentum. Both misconceptions are reflections of the brain’s wired-in rejection of the power that randomness holds over our lives. Look deep enough, we instinctively believe, and we may uncover a hidden order.

Recent studies show how anyone, including scientists, can be fooled by these cognitive biases. A working paper published this summer has caused a stir by proposing that a classic body of research disproving the existence of the hot hand in basketball is flawed by a subtle misperception about randomness. If the analysis is correct, the possibility remains that the hot hand is real.

I was thinking about Guil and the psychologists last week as I walked into the Camel Rock Casino, operated by the pueblo of Tesuque, a few miles north of Santa Fe. With five full-scale gambling operations in a stretch of 30 miles, the highway there has become a kind of elongated Las Vegas Strip.

Gamblers, with their systems and superstitions, sat nearly immobile at video slots, trying to outguess the algorithmic heart beating inside. They were immersed in what the anthropologist Natasha Dow Schüll calls “the machine zone.”

In her book “Addiction by Design,” she describes how modern slot machines are engineered to maximize “gaming productivity” — the velocity with which dollars fly from the players’ pockets. Mechanical levers have been replaced by faster, more efficient electronic buttons, while the simulated reels of cherries, bars and other symbols are programmed to give the illusion that you missed a jackpot by just a hair — fuel for the gambler’s fallacy.

I’d first come to Camel Rock more than 20 years ago while I was writing a book about the human drive to find order in the world — and impose it when it is not really there. In those days there was only a makeshift bingo hall, and all eyes were on a large machine in which the lettered and numbered balls jumped around like popcorn — an analog equivalent of the random-number-generating chips driving today’s slots. I thought of how an omniscient intelligence, like the one imagined by the philosopher Pierre-Simon Laplace, could precisely track the trajectories of the balls, the elasticity of their impacts, the buoyancy of the air — a vast amount of data — and predict the outcome of the game.

We mortals can benefit, at least in theory, from islands of predictability — a barely perceptible tilt of a roulette table that makes the ball slightly more likely to land on one side of the wheel than the other. The same is true for the random walk of the stock market. Becoming aware of information before it has propagated worldwide can give a speculator a tiny, temporary edge. Some traders pay a premium to locate their computer servers as close as possible to Lower Manhattan, gaining advantages measured in microseconds.

But often the patterns we see are illusions. Some research has suggested that more excitable people are likelier to embrace the magic of the hot hand (go, go, go!) while those with “higher cognitive skills,” as the studies put it, are prone to the gambler’s fallacy — the belief that a run of heads will probably be followed by tails. Their swaggering brains think they have psyched out the system, discovering an underlying regularity.

Or maybe they are misapplying a real phenomenon called regression toward the mean. In the long run the number of heads and tails will even out, but that says nothing about how the next flip will fall. A paper this summer in a German economics journal found that in clearly random situations, the tables are turned: People with lower cognitive abilities are likelier than more rational types to be led astray by the gambler’s fallacy.

In a study that appeared this summer, Joshua B. Miller and Adam Sanjurjo suggest why the gambler’s fallacy remains so deeply ingrained. Take a fair coin — one as likely to land on heads as tails — and flip it four times. How often was heads followed by another head? In the sequence HHHT, for example, that happened two out of three times — a score of about 67 percent. For HHTH or HHTT, the score is 50 percent.

Altogether there are 16 different ways the coins can fall. I know it sounds crazy but when you average the scores together the answer is not 50-50, as most people would expect, but about 40-60 in favor of tails.

There is not, as Guildenstern might imagine, a tear in the fabric of space-time. It remains as true as ever that each flip is independent, with even odds that the coin will land one way or the other. But by concentrating on only some of the data — the flips that follow heads — a gambler falls prey to a selection bias.
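
For readers who want to check the arithmetic, here is a minimal sketch — not from the article — that enumerates all 16 four-flip sequences, scores each one by the fraction of its heads (among the first three flips) that are followed by another head, and averages the scores:

```python
# Enumerate every four-flip sequence, score the ones with at least one head
# in the first three flips, and average the scores.
from itertools import product

scores = []
for seq in product("HT", repeat=4):                 # all 16 possible sequences
    followers = [seq[i + 1] for i in range(3) if seq[i] == "H"]
    if followers:                                   # skip TTTT and TTTH (nothing to score)
        scores.append(followers.count("H") / len(followers))

print(sum(scores) / len(scores))                    # about 0.405, not 0.5
```

Fourteen of the 16 sequences get a score (TTTT and TTTH have no head to follow), and the average works out to 17/42, or roughly 40.5 percent heads.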

In an interesting twist, Dr. Miller and Dr. Sanjurjo propose that research claiming to debunk the hot hand in basketball is flawed by the same kind of misperception. Studies by the psychologist Thomas Gilovich and others conclude that basketball is no streakier than a coin toss. For a 50 percent shooter, for example, the odds of making a basket are supposed to be no better after a hit — still 50-50. But in a purely random situation, according to the new analysis, a hit would be expected to be followed by another hit less than half the time. Finding 50 percent would actually be evidence in favor of the hot hand. If so, the next step would be to establish the physiological or psychological reasons that make players different from tossed coins.

Dr. Gilovich is withholding judgment. “The larger the sample of data for a given player, the less of an issue this is,” he wrote in an email. “Because our samples were fairly large, I don’t believe this changes the original conclusions about the hot hand. ”

Flaws in perceptions about randomness affect more than gambling and basketball. When multiple cases of cancer occur in a community, especially among children, it is only human to fear a common cause. Most often these cancer clusters turn out to be statistical illusions, the result of what epidemiologists call the Texas sharpshooter fallacy. (Blast the side of a barn with a random spray of buckshot and then draw a circle around one of the clusters: It’s a bull's-eye.)
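
As a rough illustration of that fallacy — a sketch of my own with made-up numbers, not anything drawn from the epidemiological studies — scatter cases uniformly at random across a map and then hunt, after the fact, for the densest small neighborhood:

```python
import random

random.seed(1)
CASES, GRID, CELL = 200, 20, 2        # 200 random "cases" on a 20 x 20 map, 2 x 2 cells

points = [(random.uniform(0, GRID), random.uniform(0, GRID)) for _ in range(CASES)]

def count_in_cell(cx, cy):
    """Number of cases inside the 2 x 2 cell whose corner is (cx, cy)."""
    return sum(1 for x, y in points if cx <= x < cx + CELL and cy <= y < cy + CELL)

# Any single cell chosen in advance holds 200 * 4/400 = 2 cases on average,
# but the densest cell found by searching afterward looks like a "cluster."
densest = max(count_in_cell(cx, cy) for cx in range(0, GRID, CELL) for cy in range(0, GRID, CELL))
print("expected in a pre-chosen cell:", CASES * CELL * CELL / (GRID * GRID))
print("densest cell found after the fact:", densest)
```

Drawing the circle after looking at the data is what turns ordinary randomness into an apparent bull's-eye.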

Taken to extremes, seeing connections that don’t exist can be a symptom of a psychiatric condition called apophenia. In less pathological forms, the brain’s hunger for pattern gives rise to superstitions (astrology, numerology) and is a driving factor in what has been called a replication crisis in science — a growing number of papers that cannot be confirmed by other laboratories.

For all their care to be objective, scientists are as prone as anyone to valuing data that support their hypothesis over those that contradict it. Sometimes this results in experiments that succeed only under very refined conditions, in certain labs with special reagents and performed by a scientist with a hot hand.

We’re all in the same boat. We evolved with this uncanny ability to find patterns. The difficulty lies in separating what really exists from what is only in our minds.

George Johnson is the author of the “Raw Data” column for Science Times. His book “Fire in the Mind: Science, Faith and the Search for Order” is being published this month in a 20th-anniversary edition.

http://www.nytimes.com/2015/10/18/sunda ... inion&_r=0
kmaherali
Posts: 25705
Joined: Thu Mar 27, 2003 3:01 pm

Post by kmaherali »

Sorry, Einstein. Quantum Study Suggests ‘Spooky Action’ Is Real.

In a landmark study, scientists at Delft University of Technology in the Netherlands reported that they had conducted an experiment that they say proved one of the most fundamental claims of quantum theory — that objects separated by great distance can instantaneously affect each other’s behavior.

The finding is another blow to one of the bedrock principles of standard physics known as “locality,” which states that an object is directly influenced only by its immediate surroundings. The Delft study, published Wednesday in the journal Nature, lends further credence to an idea that Einstein famously rejected. He said quantum theory necessitated “spooky action at a distance,” and he refused to accept the notion that the universe could behave in such a strange and apparently random fashion.

In particular, Einstein derided the idea that separate particles could be “entangled” so completely that measuring one particle would instantaneously influence the other, regardless of the distance separating them.

More....
http://www.nytimes.com/2015/10/22/scien ... 87722&_r=0
kmaherali
Posts: 25705
Joined: Thu Mar 27, 2003 3:01 pm

Post by kmaherali »

Cassini Seeks Insights to Life in Plumes of Enceladus, Saturn’s Icy Moon

Where there is water, is there life?

That’s the $64 billion question now facing NASA and the rest of lonely humanity. When the New Horizons spacecraft, cameras clicking, sped past Pluto in July, it represented an inflection point in the conquest of the solar system. Half a century after the first planetary probe sailed past Venus, all the planets and would-be planets we have known and loved, and all the marvelous rocks and snowballs circling them, have been detected and inspected, reconnoitered.

That part of human history, the astrophysical exploration of the solar system, is over. The next part, the biological exploration of space, is just beginning. We have finished counting the rocks in the neighborhood. It is time to find out if anything is living on them, a job that could easily take another half century.

NASA’s mantra for finding alien life has long been to “follow the water,” the one ingredient essential to our own biochemistry. On Wednesday, NASA sampled the most available water out there, as the Cassini spacecraft plunged through an icy spray erupting from the little Saturnian moon Enceladus.

Enceladus is only 300 miles across and whiter than a Bing Crosby Christmas, reflecting virtually all the sunlight that hits it, which should make it colder and deader than Scrooge’s heart.

But in 2005, shortly after starting an 11-year sojourn at Saturn, Cassini recorded jets of water squirting from cracks known as tiger stripes near the south pole of Enceladus — evidence, scientists say, of an underground ocean kept warm and liquid by tidal flexing of the little moon as it is stretched and squeezed by Saturn.

And with that, Enceladus leapfrogged to the top of astrobiologists’ list of promising places to look for life. If there is life in its ocean, alien microbes could be riding those geysers out into space where a passing spacecraft could grab them. No need to drill through miles of ice or dig up rocks.

As Chris McKay, an astrobiologist at NASA’s Ames Research Center, said, it’s as if nature had hung up a sign at Enceladus saying “Free Samples.”

Discovering life was not on the agenda when Cassini was designed and launched two decades ago. Its instruments can’t capture microbes or detect life, but in a couple of dozen passes through the plumes of Enceladus, it has detected various molecules associated with life: water vapor, carbon dioxide, methane, molecular nitrogen, propane, acetylene, formaldehyde and traces of ammonia.

Wednesday’s dive was the deepest Cassini will make through the plumes, only 30 miles above the icy surface. Scientists are especially interested in measuring the amount of hydrogen gas in the plume, which would tell them how much energy and heat are being generated by chemical reactions in hydrothermal vents at the bottom of the moon’s ocean.

It is in such ocean vents that some of the most primordial-looking life-forms have been found on our own planet. What the Cassini scientists find out could help set the stage for a return mission with a spacecraft designed to detect or even bring back samples of life.

These are optimistic, almost sci-fi times. The fact that life was present on Earth as early as 4.1 billion years ago — pretty much as soon as asteroids and leftover planet junk stopped bombarding the new Earth and let it cool down — has led astrobiologists to conclude that, given the right conditions, life will take hold quickly. Not just in our solar system, but in some of the thousands of planetary systems that Kepler and other missions squinting at distant stars have uncovered.

And if water is indeed the key, the solar system has had several chances to get lucky. Besides Enceladus, there is an ocean underneath the ice of Jupiter’s moon Europa, and the Hubble Space Telescope has hinted that it too is venting into space. NASA has begun planning for a mission next decade to fly by it.

And of course there’s Mars, with its dead oceans and intriguing streaks of damp sand, springboard of a thousand sci-fi invasions of Earth, but in recent decades the target of robot invasions going the other direction.

Some scientists even make the case that genesis happened not on Earth but on Mars. Our biochemical ancestors would then have made the passage on an asteroid, making us all Martians and perhaps explaining our curious attraction to the Red Planet.

And then there is Titan, Saturn’s largest moon, the only moon in the solar system with a thick atmosphere and lakes on its surface, except that in this case the liquid in them is methane and the beaches and valleys are made of hydrocarbon slush.

NASA’s working definition of life, coined by a group of biologists in 1992, is “a self-sustaining chemical system capable of Darwinian evolution.”

Any liquid could serve as the medium of this thing, process, whatever it is. Life on Titan would expand our notions of what is biochemically possible out there in the rest of the universe.

Our history of exploration suggests that surprise is the nature of the game. That was the lesson of the Voyager missions: Every world or moon encountered on that twin-spacecraft odyssey was different, an example of the laws of physics sculpted by time and circumstance into unique and weird forms.

And so far that is the lesson of the new astronomy of exoplanets — thousands of planetary systems, but not a single one that looks like our own.

The detection of a single piece of pond slime, one alien microbe, on some other world would rank as one of the greatest discoveries in the history of science. Why should we expect it to look anything like what we already know?

That microbe won’t come any cheaper than the Higgs boson, the keystone of modern particle physics, which cost more than $10 billion to hunt down over half a century.

Finding that microbe will involve launching big, complicated chunks of hardware to various corners of the solar system, and that means work for engineers, scientists, accountants, welders, machinists, electricians, programmers and practitioners of other crafts yet to be invented — astro-robot-paleontologists, say.

However many billions of dollars it takes to knock on doors and find out if anybody is at home, it will all be spent here on Earth, on people and things we all say we want: innovation, education, science, technology.

We’ve seen this have a happy ending before. It was the kids of the aerospace industry and the military-industrial complex, especially in California, who gave us Silicon Valley and general relativity in our pockets.

In this era, a happy ending could include the news that we are not alone, that the cosmos is more diverse, again, than we had imagined.

Or not.

In another 50 years the silence from out there could be deafening.

http://www.nytimes.com/2015/10/29/scien ... d=71987722
kmaherali
Posts: 25705
Joined: Thu Mar 27, 2003 3:01 pm

Post by kmaherali »

The Light-Beam Rider

THIS month marks the 100th anniversary of the General Theory of Relativity, the most beautiful theory in the history of science, and in its honor we should take a moment to celebrate the visualized “thought experiments” that were the navigation lights guiding Albert Einstein to his brilliant creation. Einstein relished what he called Gedankenexperimente, ideas that he twirled around in his head rather than in a lab. That’s what teachers call daydreaming, but if you’re Einstein you get to call them Gedankenexperimente.

As these thought experiments remind us, creativity is based on imagination. If we hope to inspire kids to love science, we need to do more than drill them in math and memorized formulas. We should stimulate their minds’ eyes as well. Even let them daydream.

More....
http://www.nytimes.com/2015/11/01/opini ... d=45305309
kmaherali
Posts: 25705
Joined: Thu Mar 27, 2003 3:01 pm

Post by kmaherali »

A Learning Advance in Artificial Intelligence Rivals Human Abilities

Computer researchers reported artificial-intelligence advances on Thursday that surpassed human capabilities for a narrow set of vision-related tasks.

The improvements are noteworthy because so-called machine-vision systems are becoming commonplace in many aspects of life, including car-safety systems that detect pedestrians and bicyclists, as well as in video game controls, Internet search and factory robots.

Researchers at the Massachusetts Institute of Technology, New York University and the University of Toronto reported a new type of “one shot” machine learning on Thursday in the journal Science, in which a computer vision program outperformed a group of humans in identifying handwritten characters based on a single example.

The program is capable of quickly learning the characters in a range of languages and generalizing from what it has learned. The authors suggest this capability is similar to the way humans learn and understand concepts.

The new approach, known as Bayesian Program Learning, or B.P.L., is different from current machine learning technologies known as deep neural networks.

Neural networks can be trained to recognize human speech, detect objects in images or identify kinds of behavior by being exposed to large sets of examples.

Although such networks are modeled after the behavior of biological neurons, they do not yet learn the way humans do — acquiring new concepts quickly. By contrast, the new software program described in the Science article is able to learn to recognize handwritten characters after “seeing” only a few or even a single example.

The researchers compared the capabilities of their Bayesian approach and other programming models using five separate learning tasks that involved a set of characters from a research data set known as Omniglot, which includes 1,623 handwritten character sets from 50 languages. Both images and pen strokes needed to create characters were captured.
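
To make the one-shot setup concrete, here is a toy sketch of the task itself — not the paper's Bayesian Program Learning method, just a crude nearest-neighbor baseline on made-up 28-by-28 "images," with every name and number my own illustration:

```python
import numpy as np

def one_shot_classify(query, support):
    """Label a query image by the closest of the single stored examples."""
    distances = {label: np.sum((query - example) ** 2) for label, example in support.items()}
    return min(distances, key=distances.get)

rng = np.random.default_rng(0)
# One example image per character class -- the entire "training set."
support = {name: rng.random((28, 28)) for name in ["alpha", "beh", "ka"]}
query = support["beh"] + 0.05 * rng.standard_normal((28, 28))   # a noisy new drawing of "beh"
print(one_shot_classify(query, support))                        # prints "beh"
```

The interesting question the paper addresses is how to do far better than such a pixel-matching baseline from the same single example.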

“With all the progress in machine learning, it’s amazing what you can do with lots of data and faster computers,” said Joshua B. Tenenbaum, a professor of cognitive science and computation at M.I.T. and one of the authors of the Science paper. “But when you look at children, it’s amazing what they can learn from very little data. Some comes from prior knowledge and some is built into our brain.”

Also on Thursday, organizers of an annual academic machine vision competition reported gains in lowering the error rate in software for finding and classifying objects in digital images.
Photo: Three researchers who have created a computer model that captures humans’ unique ability to learn new concepts from a single example: from left, Ruslan Salakhutdinov, Brenden M. Lake and Joshua B. Tenenbaum. Credit: Alain Decarie for The New York Times

“I’m constantly amazed by the rate of progress in the field,” said Alexander Berg, an assistant professor of computer science at the University of North Carolina, Chapel Hill.

The competition, known as the Imagenet Large Scale Visual Recognition Challenge, pits teams of researchers at academic, government and corporate laboratories against one another to design programs to both classify and detect objects. It was won this year by a group of researchers at the Microsoft Research laboratory in Beijing.

The Microsoft team was able to cut the number of errors in half in a task that required their program to classify objects from a set of 1,000 categories. The team also won a second competition by accurately detecting all instances of objects in 200 categories.

The contest requires the programs to examine a large number of digital images, and either label or find objects in the images. For example, they may need to distinguish between objects such as bicycles and cars, both of which might appear to have two wheels from a certain perspective.

In both the handwriting recognition task described in Science and in the visual classification and detection competition, researchers made efforts to compare their progress to human abilities. In both cases, the software advances now appear to surpass human abilities.

However, computer scientists cautioned against drawing conclusions about “thinking” machines or making direct comparisons to human intelligence.

“I would be very careful with terms like ‘superhuman performance,’ ” said Oren Etzioni, chief executive of the Allen Institute for Artificial Intelligence in Seattle. “Of course the calculator exhibits superhuman performance, with the possible exception of Dustin Hoffman,” he added, in reference to the actor’s portrayal of an autistic savant with extraordinary math skills in the movie “Rain Man.”

The advances reflect the intensifying focus in Silicon Valley and elsewhere on artificial intelligence.

Last month, the Toyota Motor Corporation announced a five-year, billion-dollar investment to create a research center based next to Stanford University to focus on artificial intelligence and robotics.

Also, a formerly obscure academic conference, Neural Information Processing Systems, underway this week in Montreal, has doubled in size since the previous year and has attracted a growing list of brand-name corporate sponsors, including Apple for the first time.

“There is a sellers’ market right now — not enough talent to fill the demand from companies who need them,” said Terrence Sejnowski, the director of the Computational Neurobiology Laboratory at the Salk Institute for Biological Studies in San Diego. “Ph.D. students are getting hired out of graduate schools for salaries that are higher than faculty members who are teaching them.”

http://www.nytimes.com/2015/12/11/scien ... d=71987722

******
First I.V.F. Puppies Are Born in Breakthrough at Cornell

Scientists at Cornell University announced Wednesday that for the first time they had been successful in delivering a litter of puppies conceived through in vitro fertilization.

In what was described as a “breakthrough,” the results of the research, which took decades to accomplish, were announced in the scientific journal Public Library of Science ONE and online by the university.

The successful multiple, live births open the door for the future conservation of endangered species and for the use of gene-editing technologies that could help scientists get rid of inherited diseases in dogs.

The research could also help in the study of genetic diseases, because dogs share more than 350 disease traits with humans, almost twice as many as any other species.

“Since the mid-1970s, people have been trying to do this in a dog and have been unsuccessful,” said Alexander J. Travis, associate professor of reproductive biology in the Baker Institute for Animal Health in Cornell’s College of Veterinary Medicine.

The scientists described two main challenges: finding the optimal stage for fertilization of the female dog’s eggs, and simulating in the lab the conditions needed to prepare the sperm. Ultimately, the team was able to achieve fertilization rates of 80 percent to 90 percent, Dr. Travis said.

Eventually, 19 embryos were transferred to a host female dog, which gave birth July 10 to seven puppies: five conceived from beagles and two that were a beagle and cocker spaniel mix.

Dr. Travis’s team reported in 2013 the successful birth of Klondike, a beagle-Labrador mix puppy born from a frozen embryo in a procedure that used artificial insemination.

But the announcement this week is different because it involves a litter and because the embryos were cultivated in a dish and then implanted in the recipient female, said Skylar Sylvester, one of the researchers, in a telephone interview.

http://www.nytimes.com/2015/12/11/scien ... pe=article

******
SpaceX Successfully Lands Rocket After Launch of Satellites Into Orbit

People living along the central Atlantic coast of Florida have for decades enjoyed the spectacle of rockets headed for space. On Monday night, they were treated to a new sight that may become common: a rocket coming back down to a gentle landing.

More...
http://www.nytimes.com/2015/12/22/scien ... 87722&_r=0

******
Gene Drives Offer New Hope Against Diseases and Crop Pests

Biologists in the United States and Europe are developing a revolutionary genetic technique that promises to provide an unprecedented degree of control over insect-borne diseases and crop pests.

The technique involves a mechanism called a gene drive system, which propels a gene of choice throughout a population. No gene drives have yet been tested in the wild, but in laboratory organisms like the fruit fly, they have converted almost the entire population to carry the favored version of a gene.
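
A rough sense of why a drive spreads so quickly can be had from a toy simulation. The sketch below is not a model from the research described above; it simply contrasts ordinary 50-50 inheritance with a "homing" drive that converts carriers, using made-up numbers.

import random

def next_generation(pop, homing=True, n_offspring=1000):
    """pop is a list of genotypes; each genotype is a pair of alleles:
    'D' for the drive allele, 'w' for the ordinary (wild-type) allele."""
    offspring = []
    for _ in range(n_offspring):
        p1, p2 = random.choice(pop), random.choice(pop)
        child = [random.choice(p1), random.choice(p2)]
        # Homing: the drive copies itself onto the partner chromosome,
        # so a carrier passes the drive to nearly all of its offspring.
        if homing and 'D' in child:
            child = ['D', 'D']
        offspring.append(tuple(child))
    return offspring

# Start with 5 percent carriers and watch the allele take over.
population = [('D', 'w')] * 50 + [('w', 'w')] * 950
for generation in range(6):
    freq = sum(g.count('D') for g in population) / (2 * len(population))
    print(f"generation {generation}: drive allele frequency = {freq:.2f}")
    population = next_generation(population)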

More...
http://www.nytimes.com/2015/12/22/scien ... d=71987722

******
In Developing World, Cancer Is a Very Different Disease

In the United States the median age at which colon cancer strikes is 69 for men and 73 for women. In Chad the average life expectancy at birth is about 50. Children who survive childbirth — and then malnutrition and diarrhea — are likely to die of pneumonia, tuberculosis, influenza, malaria, AIDS or even traffic accidents long before their cells accumulate the mutations that cause colon cancer.

In fact, cancers of any kind don’t make the top 15 causes of death in Chad — or in Somalia, the Central African Republic and other places where the average life span peaks in the low to mid-50s. Many people do die from cancer, and their numbers are multiplied by rapidly growing populations and a lack of medical care. But first come all those other threats.

More...
http://www.nytimes.com/2015/12/22/scien ... ctionfront

******
Gene Editing Offers Hope for Treating Duchenne Muscular Dystrophy, Studies Find

After decades of disappointingly slow progress, researchers have taken a substantial step toward a possible treatment for Duchenne muscular dystrophy with the help of a powerful new gene-editing technique.

Duchenne muscular dystrophy is a progressive muscle-wasting disease that affects boys, putting them in wheelchairs by age 10, followed by an early death from heart failure or breathing difficulties. The disease is caused by defects in a gene that encodes a protein called dystrophin, which is essential for proper muscle function.

Because the disease is devastating and incurable, and common for a hereditary illness, it has long been a target for gene therapy, though without success. An alternative treatment, drugs based on chemicals known as antisense oligonucleotides, is in clinical trials.

But gene therapy — the idea of curing a genetic disease by inserting the correct gene into damaged cells — is making a comeback. A new technique, known as Crispr-Cas9, lets researchers cut the DNA of chromosomes at selected sites to remove or insert segments.
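
As a loose analogy only, the "cut at a selected site" idea can be pictured as guided string editing: find a short target sequence and remove a segment next to it. The sketch below is purely illustrative Python; the sequence and guide are invented, and real Crispr-Cas9 editing is a biochemical process inside cells, not text manipulation.

def cut_and_remove(dna: str, guide: str, remove_len: int) -> str:
    """Cut immediately after the first match of the guide sequence
    and delete remove_len bases, mimicking a targeted deletion."""
    site = dna.find(guide)
    if site == -1:
        return dna  # no target site found; leave the sequence unchanged
    cut = site + len(guide)
    return dna[:cut] + dna[cut + remove_len:]

sequence = "ATGGCCATTGTAATGGGCCGCTGAAAGGGTGCCCGATAG"  # invented example
print(cut_and_remove(sequence, guide="GGGCCGC", remove_len=4))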

More...
http://www.nytimes.com/2016/01/01/scien ... ctionfront

******
Looking Beyond the Internet of Things

Excerpt:

Imagine if almost everything — streets, car bumpers, doors, hydroelectric dams — had a tiny sensor. That is already happening through so-called Internet-of-Things projects run by big companies like General Electric and IBM.

All those devices and sensors would also wirelessly connect to far-off data centers, where millions of computer servers manage and learn from all that information.

Those servers would then send back commands to help whatever the sensors are connected to operate more effectively: a home automatically turns up the heat ahead of cold weather moving in, or streetlights behave differently when traffic gets bad. Or imagine an insurance company resolving who has to pay for what moments after a fender-bender, because it has been automatically fed information about the accident.
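
The loop described here — sensors report readings, remote servers apply rules, commands come back — can be sketched in a few lines. The device names, metrics, and thresholds below are hypothetical, standing in for whatever analytics a real platform would run.

from dataclasses import dataclass
from typing import Optional

@dataclass
class SensorReading:
    device_id: str
    metric: str
    value: float

def data_center_rule(reading: SensorReading) -> Optional[str]:
    """Stand-in for the remote analytics that turn readings into commands."""
    if reading.metric == "outdoor_temp_c" and reading.value < 0:
        return "thermostat:raise_setpoint"
    if reading.metric == "traffic_density" and reading.value > 0.8:
        return "streetlights:adjust_timing"
    return None

readings = [
    SensorReading("home-42", "outdoor_temp_c", -3.5),
    SensorReading("intersection-7", "traffic_density", 0.91),
]
for r in readings:
    command = data_center_rule(r)
    if command:
        print(f"{r.device_id} -> {command}")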

More...
http://www.nytimes.com/2016/01/02/techn ... d=71987722

******
Parents hope 'frozen' child will live again

Thai parents, both doctors, put their trust in the future of medical research when they asked the Arizona-based cryonics company Alcor to preserve their daughter’s brain at the point of death. Two-year-old Matheryn Naovaratpong became the youngest person to be cryogenically frozen after dying from a rare form of cancer in January 2015.

The toddler's brain and body were frozen separately at -196 C, to be revived at a date in the distant future.

Alcor’s website describes cryonics as “an experimental procedure that preserves a human being using the best available technology for the purpose of saving his/her life.”

The company calls itself a ‘Life Extension Foundation,’ and backers say they “believe medical technology will advance further in coming decades than it has in the past several centuries, enabling it to heal damage at the cellular and molecular levels and to restore full physical and mental health.”

In June scientists who work for Alcor published studies showing for the first time that memories formed before an animal has been frozen can survive after it has been thawed.

However, their experiments were carried out on nematode worms, whose brains are much simpler than those of humans.

Alcor researcher Natasha Vita-More said that “further research on larger organisms with more complex nervous systems could prove to be beneficial to the issue of cryopreservation, including, specifically, memory retention after reviving.”

http://www.msn.com/en-ca/news/weekendre ... tmd#page=4

******
In a First, Element Will Be Named by Researchers in Japan

Since the 19th century, European and American discoveries have monopolized the naming of elements on the periodic table. It is evident in entries like francium, germanium, scandium, polonium, europium, californium, berkelium and americium.

But now, for the first time, researchers in Asia will make an addition to chemistry’s most fundamental catalog.

Scientists from the Riken institute in Japan will bestow an official name on Element 113, currently known by the placeholder name ununtrium, the International Union of Pure and Applied Chemistry announced last week.

The organization said that studies published by the Japanese scientists from 2004 to 2012 give the team the strongest claim to having discovered the element. The declaration comes more than 12 years after the Japanese team first attempted to synthesize the superheavy element, by firing beams of zinc at a thin bismuth film.

Led by Kosuke Morita, the group began in 2003 to bombard bismuth atoms in a particle accelerator with zinc nuclei traveling at 10 percent of the speed of light. A year later, they successfully fused two atomic nuclei from these elements, creating their first nucleus of Element 113, but it decayed in less than a thousandth of a second. In 2005, the team produced Element 113 in a second event, but the chemistry union did not consider the demonstration strong enough to denote a discovery.

Published guidelines for new elements say that, for linguistic consistency, their names should end in “-ium,” and that, in keeping with tradition, elements are named after:

A mythological concept or character (including an astronomical object);
A mineral, or similar substance;
A place or geographical region;
A property of the element; or
A scientist.

“For over seven years, we continued to search for data conclusively identifying Element 113, but we just never saw another event,” Dr. Morita said in a statement. “I was not prepared to give up, however, as I believed that one day, if we persevered, luck would fall upon us again.”

In 2012, the team finally produced strong evidence that they had synthesized Element 113. Over the course of those nine years, the beam was active for 553 days and launched more than 130 quintillion zinc atoms, according to Nature.
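
Those two figures imply a striking average rate; a quick back-of-the-envelope calculation, using only the numbers quoted above, gives roughly 2.7 trillion zinc atoms per second over the whole run.

# Back-of-the-envelope check of the beam statistics quoted above.
SECONDS_PER_DAY = 86_400
zinc_atoms = 130e18                   # "more than 130 quintillion"
beam_seconds = 553 * SECONDS_PER_DAY  # 553 days of active beam time

print(f"beam time: {beam_seconds:.2e} seconds")
print(f"average rate: {zinc_atoms / beam_seconds:.2e} zinc atoms per second")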

The chemistry union, along with the International Union of Pure and Applied Physics, granted the Riken researchers naming rights to Element 113 over a joint Russia-United States team that had also claimed to discover the element.

The chemistry union’s decisions are detailed in two reports to appear in the journal Pure and Applied Chemistry. In addition to Element 113, Elements 115, 117 and 118 will also receive official names. Teams from Russia and the United States discovered those elements.

With these discoveries, the bottom row of the periodic table will be complete. Elements are numbered by the number of protons in their nucleus; Elements 114 (flerovium) and 116 (livermorium) had previously been confirmed and named.

Dr. Morita has not yet announced what he intends to name Element 113, but according to a 2004 article in The Japan Times when the team first published its results, one likely contender may be “japonium.”

http://www.nytimes.com/2016/01/05/scien ... d=71987722

******
A Placebo Treatment for Pain

THE crisis of painkiller addiction is becoming increasingly personal: Sixteen percent of Americans know someone who has died from a prescription painkiller overdose, according to a recent Kaiser Family Foundation survey; 9 percent have seen a family member or close friend die.

Addictive opioid painkillers were once reserved for extreme situations like terminal cancer. But opioids like Vicodin and OxyContin are now widely prescribed for common conditions like arthritis and lower back pain. The consequences have been catastrophic: In 2013, prescription painkillers caused nearly 7,000 emergency room visits and 44 deaths every day.

How do we tackle this crisis? We often hear about efforts to clamp down on abuse, for example by regulating pain clinics and monitoring prescription patterns. But these won’t dent the demand for opioids unless we can find better ways to treat the hundred million Americans said to suffer from chronic pain. Simply switching to other drugs isn’t the answer. Few new painkillers are being approved, and existing ones, like Motrin and Tylenol, come with their own risks when used long-term, and some appear to be less effective than we once thought.

Help might instead come from an unexpected corner: the placebo effect.

This phenomenon — in which someone feels better after receiving fake treatment — was once dismissed as an illusion. People who are ill often improve regardless of the treatment they receive. But neuroscientists are discovering that in some conditions, including pain, placebos create biological effects similar to those caused by drugs.

Taking a placebo painkiller dampens activity in pain-related areas of the brain and spinal cord, and triggers the release of endorphins, the natural pain-relieving chemicals that opioid drugs are designed to mimic. Even when we take a real painkiller, a big chunk of its effect is delivered not by any direct chemical action, but by our expectation that the drug will work. Studies show that widely used painkillers like morphine, buprenorphine and tramadol are markedly less effective if we don’t know we’re taking them.

Placebo effects in pain are so large, in fact, that drug manufacturers are finding it hard to beat them. Finding ways to minimize placebo effects in trials, for example by screening out those who are most susceptible, is now a big focus for research. But what if instead we seek to harness these effects? Placebos might ruin drug trials, but they also show us a new approach to treating pain.

It is unethical to deceive patients by prescribing fake treatments, of course. But there is evidence that people with some conditions benefit even if they know they are taking placebos. In a 2014 study that followed 459 migraine attacks in 66 patients, honestly labeled placebos provided significantly more pain relief than no treatment, and were nearly half as effective as the painkiller Maxalt. (The study also found that a placebo labeled “placebo” was about 60 percent as effective as Maxalt labeled “placebo,” and that a placebo labeled “Maxalt” was again about 60 percent as effective as the real drug under its real label.)

With placebo responses in pain so high — and the risks of drugs so severe — why not prescribe a course of “honest” placebos for those who wish to try it, before proceeding, if necessary, to an active drug?

Another option is to employ alternative therapies, which through placebo responses can benefit patients even when there is no physical mode of action. A series of large trials in Germany published between 2005 and 2009 compared real and sham acupuncture (in which needles are placed at nonacupuncture points) with either no treatment or routine clinical care, for chronic pain conditions including migraine, tension headaches, lower back pain and osteoarthritis. Patients who received the acupuncture, real or sham, reported a similar amount of pain relief — and more than those who received no treatment or routine care that included pain medication.

Rather than relying on dummy pills and treatments, however, a broader hope is that teasing out why and when placebos work — and for whom — will help to maximize the effectiveness of drugs, and in some cases allow us to do without them.

The available funding for such research is minuscule compared with the efforts poured into developing new drugs. But a key ingredient is expectation: The greater our belief that a treatment will work, the better we’ll respond.

Individual attitudes and experiences are important, as are cultural factors. Placebo effects are getting stronger in the United States, for example, though not elsewhere. Researchers reported last year that in trials published in 1996, drugs for chronic pain produced on average 27 percent more pain relief than placebos. By 2013, that advantage had slipped to just 9 percent. Likely explanations include a growing cultural belief in the effectiveness of painkillers — a result of direct-to-consumer advertising (illegal in most other countries) and perhaps the fact that so many Americans have taken these drugs in the past.

These findings have implications for deciding which patients are likely to benefit from drugs — someone who has strong faith in painkillers’ effectiveness is more likely to benefit than someone who is suspicious of conventional medicine — as well as how physicians explain the benefits and side effects of treatments they prescribe. Trials show, for example, that strengthening patients’ positive expectations and reducing their anxiety during a variety of procedures, including minimally invasive surgery, while still being honest, can reduce the dose of painkillers required and cut complications.

Placebo studies also reveal the value of social interaction as a treatment for pain. Harvard researchers studied patients in pain from irritable bowel syndrome and found that 44 percent of those given sham acupuncture had adequate relief from their symptoms. If the person who performed the acupuncture was extra supportive and empathetic, however, that figure jumped to 62 percent.

Placebos tell us that pain is a complex mix of biological, psychological and social factors. We need to develop better drugs to treat it, but let’s also take more seriously the idea of relieving pain without them. With dozens of Americans dying every day from prescription painkillers, we need all the help we can get.

Jo Marchant is the author of the forthcoming book “Cure: A Journey into the Science of Mind Over Body.”
http://www.nytimes.com/2016/01/10/opini ... ef=opinion

******
http://www.ozy.com/rising-stars/this-mi ... tein/65094

She's Being Called the Next Albert Einstein

A 22-Year-Old Harvard Ph.D. candidate could change our understanding of the universe.


This Millennial Might Be the New Einstein

Rising Stars, by Farah Halime, Jan. 12, 2016

One of the things the brilliant minds at MIT do — besides ponder the nature of the universe and build sci-fi gizmos, of course — is notarize aircraft airworthiness for the federal government. So when Sabrina Pasterski walked into the campus offices one cold January morning seeking the OK for a single-engine plane she had built, it might have been business as usual. Except that the shaggy-haired, wide-eyed plane builder before them was just 14 and had already flown solo. “I couldn’t believe it,” recalls Peggy Udden, an executive secretary at MIT, “not only because she was so young, but a girl.”

OK, it’s 2016, and gifted females are not exactly rare at MIT; nearly half the undergrads are women. But something about Pasterski led Udden not just to help get her plane approved, but to get the attention of the university’s top professors. Now, eight years later, the lanky, 22-year-old Pasterski is already an MIT graduate and Harvard Ph.D. candidate who has the world of physics abuzz. She’s exploring some of the most challenging and complex issues in physics, much as Stephen Hawking and Albert Einstein (whose theory of relativity just turned 100 years old) did early in their careers. Her research delves into black holes, the nature of gravity and spacetime. A particular focus is trying to better understand “quantum gravity,” which seeks to explain the phenomenon of gravity within the context of quantum mechanics. Discoveries in that area could dramatically change our understanding of the workings of the universe.

She’s also caught the attention of some of America’s brightest working at NASA. Also? Jeff Bezos, founder of Amazon.com and aerospace developer and manufacturer Blue Origin, who’s promised her a job whenever she’s ready. Asked by e-mail recently whether his offer still stands, Bezos told OZY: “God, yes!”

But unless you’re the kind of rabid physics fan who’s seen her papers on semiclassical Virasoro symmetry of the quantum gravity S-matrix and Low’s subleading soft theorem as a symmetry of QED (both on approaches to understanding the shape of space and gravity and the first two papers she ever authored), you may not have heard of Pasterski. A first-generation Cuban-American born and bred in the suburbs of Chicago, she’s not on Facebook, LinkedIn or Instagram and doesn’t own a smartphone. She does, however, regularly update a no-frills website called PhysicsGirl, which features a long catalog of achievements and proficiencies. Among them: “spotting elegance within the chaos.”

Pasterski stands out among a growing number of newly minted physics grads in the U.S. There were 7,329 in 2013, double the four-decade low of 3,178 in 1999, according to the American Institute of Physics. Nima Arkani-Hamed, a Princeton professor and winner of the inaugural $3 million Fundamental Physics Prize, told OZY he’s heard “terrific things” about Pasterski from her adviser, Harvard professor Andrew Strominger, who is about to publish a paper with physics rock star Hawking. She’s also received hundreds of thousands of dollars in grants from the Hertz Foundation, the Smith Foundation and the National Science Foundation.

Pasterski, who speaks in frenetic bursts, says she has always been drawn to challenging what’s possible. “Years of pushing the bounds of what I could achieve led me to physics,” she says from her dorm room at Harvard. Yet she doesn’t make it sound like work at all: She calls physics “elegant” but also full of “utility.”

Despite her impressive résumé, MIT wait-listed Pasterski when she first applied. Professors Allen Haggerty and Earll Murman were aghast. Thanks to Udden, the pair had seen a video of Pasterski building her airplane. “Our mouths were hanging open after we looked at it,” Haggerty said. “Her potential is off the charts.” The two went to bat for her, and she was ultimately accepted, later graduating with a grade average of 5.00, the school’s highest score possible.

An only child, Pasterski speaks with some awkwardness and punctuates her e-mails with smiley faces and exclamation marks. She says she has a handful of close friends but has never had a boyfriend, an alcoholic drink or a cigarette. Pasterski says: “I’d rather stay alert, and hopefully I’m known for what I do and not what I don’t do.”

While mentors offer predictions of physics fame, Pasterski appears well grounded. “A theorist saying he will figure out something in particular over a long time frame almost guarantees that he will not do it,” she says. And Bezos’s pledge notwithstanding, the big picture for science grads in the U.S. is challenging: The U.S. Census Bureau’s most recent American Community Survey shows that only about 26 percent of science grads in the U.S. had jobs in their chosen fields, while nearly 30 percent of physics and chemistry post-docs are unemployed. Pasterski seems unperturbed. “Physics itself is exciting enough,” she says. “It’s not like a 9-to-5 thing. When you’re tired you sleep, and when you’re not, you do physics.”

Farah Halime, OZY Author

Farah is a British-Palestinian transplant to Brooklyn who is still trying to figure out the strange habits of New Yorkers. Her work has been published in The New York Times, Financial Times and The Wall Street Journal, and she’s the founder of a blog called Rebel Economy.

******
Unraveling the Ties of Altitude, Oxygen and Lung Cancer

Epidemiologists have long been puzzled by a strange pattern in their data: People living at higher altitudes appear less likely to get lung cancer.

Associations like these can be notoriously misleading. Slice and dice the profusion of data, and there is no end to the coincidences that can arise.

There is, for instance, a strong correlation between per-capita cheese consumption and the number of people strangled accidentally by their bedsheets. Year by year, the number of letters making up the winning word for the Scripps National Spelling Bee closely tracks the number of people killed by venomous spiders.

These are probably not important clues about the nature of reality. But the evidence for an inverse relationship between lung cancer and elevation has been much harder to dismiss.

A paper published last year in the journal PeerJ plumbed the question to new depths and arrived at an intriguing explanation. The higher you live, the thinner the air, so maybe oxygen is a cause of lung cancer.

Oxygen cannot compete with cigarettes, of course, but the study suggests that if everyone in the United States moved to the alpine heights of San Juan County, Colo. (population: 700), there would be 65,496 fewer cases of lung cancer each year.

This idea didn’t appear out of the blue. A connection between lung cancer and altitude was proposed as early as 1982. Five years later, other researchers suggested that oxygen might be the reason.

But the authors of the PeerJ paper — two doctoral students at the University of Pennsylvania and the University of California, San Francisco — have made the strongest case yet. At the University of Pennsylvania Medical School, the paper won last year’s Abramson Cancer Center prize for basic research. And in July it was chosen as one of PeerJ’s best papers on cancer biology.

Skeptics were quick to strike back, though not very effectively. A would-be debunking on the Cancer Research UK website was quickly followed by a debunking of the debunking.

All of the usual caveats apply. Studies like this, which compare whole populations, can be used only to suggest possibilities to be explored in future research. But the hypothesis is not as crazy as it may sound. Oxygen is what energizes the cells of our bodies. Like any fuel, it inevitably spews out waste — a corrosive exhaust of substances called “free radicals,” or “reactive oxygen species,” that can mutate DNA and nudge a cell closer to malignancy.

That is not a good reason to consume antioxidant pills. While the logic may seem sound, there is no convincing evidence that these supplements add to nature’s already formidable means of repairing oxidative damage — and they may even disrupt some delicate biological balance, increasing cancer risk and speeding tumor growth.

But there is no question that oxidation, so crucial to life, rusts our cells and can edge them closer to becoming cancerous.

In examining the possibility that breathing itself significantly increases the risk of lung cancer, the authors of the paper, Kamen P. Simeonov and Daniel S. Himmelstein, began by eliminating confounding variables. Maybe younger, healthier people tend to live at higher altitudes, with older and weaker ones, including smokers, retreating to lower lands. That could create the illusion of a protective altitude effect, but one that has nothing to do with oxygen.

The authors also took into account factors like income, education and race, which affect access to medical care. To reduce distortions caused by noisy data, the researchers excluded counties with large numbers of recent immigrants, who might have acquired cancer-causing mutations elsewhere. Also ruled out were places with a large number of Native Americans, whose cancer rates often go underreported.

Beyond the human variables were geophysical ones. Air at higher altitudes may be less polluted by carcinogens. And since sunlight exposure is more intense, maybe the increase in vitamin D helps stave off lung cancer — an idea previously suggested. Differences in precipitation and temperature might also have some effect.

These data, too, were added to the scales, along with the influence of radon gas and ultraviolet rays, which is greater at higher elevations. The frequency of obesity and diabetes, which are risks for many cancers, was adjusted for, along with alcohol use, meat consumption and other factors.

After an examination of all these numbers for the residents of 260 counties in the Western United States, situated from sea level to nearly 11,400 feet, one pattern stood out: a correlation between the concentration of oxygen in the air and the incidence of lung cancer. For each 1,000-meter rise in elevation, there were 7.23 fewer lung cancer cases per 100,000 people. (The study found no similar correlations for breast, colon and prostate cancer.)
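
For readers who want to see the shape of such an analysis, here is a small sketch of a county-level regression of the kind described: incidence modeled against elevation while adjusting for a confounder. The data are synthetic, generated so the "true" effect echoes the reported figure of about 7.23 cases per 100,000 per 1,000 meters; it illustrates the method, it is not a re-analysis of the study.

import numpy as np

rng = np.random.default_rng(0)
n_counties = 260
elevation_km = rng.uniform(0.0, 3.5, n_counties)    # sea level to ~11,400 feet
smoking_rate = rng.uniform(0.10, 0.30, n_counties)  # adult smoking fraction

# Synthetic incidence per 100,000: baseline + smoking effect - elevation effect + noise.
incidence = (20 + 250 * smoking_rate
             - 7.23 * elevation_km
             + rng.normal(0, 3, n_counties))

# Ordinary least squares with an intercept, elevation, and smoking as predictors.
X = np.column_stack([np.ones(n_counties), elevation_km, smoking_rate])
coefficients, *_ = np.linalg.lstsq(X, incidence, rcond=None)
print(f"estimated change per 1,000 m of elevation: "
      f"{coefficients[1]:.2f} cases per 100,000")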

That is not a good reason to inhale less deeply at sea level or to flee to the mountains. Wherever you live, smoking accounts for as much as 90 percent of lung cancer. Radon is considered a distant second cause. But the PeerJ study complicates things.

For various reasons, radon levels are generally higher at higher altitudes, while lung cancer rates are lower. Does that mean radon is not so dangerous after all? Or are its bad effects offset by the healthy deficit of carcinogenic oxygen?

Or maybe radon, like thinner air, protects against lung cancer. According to a long-debated hypothesis called hormesis, the earth’s low levels of natural radiation actually might reduce cancer risk.

However this all shakes out, the study is a reminder that not all carcinogens are manufactured by chemical plants. And not all of them can be avoided. You can quit smoking and mitigate the radon in your basement. But you can’t mitigate oxygen.

http://www.nytimes.com/2016/01/26/scien ... d=71987722

******
British Researcher Gets Permission to Edit Genes of Human Embryos

A British researcher has received permission to use a powerful new genome editing technique on human embryos, even though researchers throughout the world are observing a voluntary moratorium on making changes to DNA that could be passed down to subsequent generations.

The British experiment would not contravene the moratorium because there is no intention to implant the altered embryos in a womb. But it brings one step closer the fateful decision of whether to alter the human germ line for medical or other purposes.

The new genetic editing technique, known as Crispr or Crispr-Cas9, lets researchers perform cut-and-paste operations on DNA, the hereditary material, with unprecedented ease and precision. Unlike most types of gene therapy, a longstanding approach that aims to alter only adult human tissues that die with the patient, the Crispr technique could be used to change human eggs, sperm and early embryos, and such alterations would be inherited by the patient’s children. Because changing the human germ line is perceived to hold far-reaching consequences, the leading scientific academies of the United States, Britain and China issued a joint statement in December asking researchers around the world to hold off on altering human inheritance.

More...
http://www.nytimes.com/2016/02/02/healt ... ctionfront