TECHNOLOGY AND DEVELOPMENT
The End of Work?
This is an article from Turning Points, a magazine that explores what critical moments from this year might mean for the year ahead.
Turning Point: An AI system teaches itself how to play and win video games without any programming.
Welcome to the era of AI-human hybrid intelligence, where people and artificial intelligence systems work together seamlessly. Picture the scene from the 1986 movie “Aliens,” where Sigourney Weaver slips into a humanoid, semi-robotic weight-lifting unit to fight the alien queen — that’s about where we are today. (A number of companies around the world are developing versions of such devices for industrial and medical use, with some already on the market.)
— The Associated Press is using Automated Insights’ software to produce thousands of articles about corporate earnings each year, freeing up staff for other reporting. Humans expand and polish a few of the most important articles.
— While Facebook’s virtual assistant M, introduced in the San Francisco Bay Area in 2015, uses AI to answer user questions, humans vet the answers to improve them.
— IBM’s Watson is employed at some hospitals in the United States to determine the best course of treatment for individual cancer patients. Watson analyzes genetic information and the medical literature, and then provides suggestions to the doctors in charge.
Humans supervise these AI programs and make the ultimate decisions, but white-collar workers are understandably starting to worry about the day when AI can go it alone.
Don’t panic: Though the AI Revolution is underway, it is unlikely to eliminate many office jobs within the next five to 10 years. Current AI research and usage targets only specific tasks, like image recognition or data analysis, while most jobs require workers to draw on a broad range of skills.
But I think it’s important to understand why the job market will change. There have been important advances in AI in recent years, especially in the area known as deep learning. Rather than telling a computer exactly how to do a task with step-by-step programming, researchers employing a deep learning system step back and let it teach itself the task through techniques that humans also use, such as pattern recognition and trial and error. To be clear, “artificial intelligence” does not mean that such machines are sentient, as they are portrayed in science fiction; it means only that, given more data, they may perform a task better.
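The contrast the author draws can be made concrete with a toy sketch. This is purely illustrative and is not any of the systems named above: instead of hand-coding a rule, we hand a learner some labeled examples and let it correct itself by trial and error. Here a single perceptron, the simplest ancestor of today's deep learning networks, learns the logical OR function in plain Python:

```python
import random

def train_perceptron(examples, epochs=20, lr=0.1):
    """Learn weights from (inputs, label) pairs by error correction."""
    random.seed(0)
    w = [random.uniform(-1, 1) for _ in range(3)]  # two input weights + bias
    for _ in range(epochs):
        for (x1, x2), label in examples:
            out = 1 if w[0] * x1 + w[1] * x2 + w[2] > 0 else 0
            err = label - out          # the "trial" part: how wrong were we?
            w[0] += lr * err * x1      # the "error" part: nudge weights
            w[1] += lr * err * x2      # toward the right answer
            w[2] += lr * err
    return w

def predict(w, x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + w[2] > 0 else 0

# Labeled examples of logical OR; no rule for OR appears anywhere in the code.
examples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w = train_perceptron(examples)
print([predict(w, a, b) for (a, b), _ in examples])  # [0, 1, 1, 1]
```

The point of the sketch is the absence of a programmed rule: the behavior comes entirely from the examples and the correction loop, which is the shift the paragraph above describes.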
More....
http://www.nytimes.com/2015/12/10/opini ... ef=opinion
A Pause to Weigh Risks of Gene Editing
The technology for altering defects in the human genome has progressed so rapidly in the last three years that it has outstripped the ability of scientists and ethicists to understand and cope with the consequences. An international panel of experts has wisely called for a pause in using the technique to produce genetic changes that could be inherited by future generations. That would allow time to assess risks and benefits, they said, and develop a “broad societal consensus” on the work.
The revolutionary new technology, known as Crispr-Cas9, allows scientists to easily eliminate or replace sections of DNA with great precision, much as a word processing program can edit or replace words in a text. The issue is whether to use the technique to alter human eggs, sperm or early embryos in ways that would be passed on, a process that is called germline editing.
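The word-processor analogy can be pictured with a toy string edit. The sequences below are invented, and real Crispr-Cas9 editing, with its guide RNAs, repair pathways and off-target cuts, is nothing like a string operation in difficulty; this only illustrates the "find and replace" idea:

```python
def edit_sequence(genome: str, target: str, replacement: str) -> str:
    """Replace the first occurrence of `target`, or return genome unchanged."""
    index = genome.find(target)
    if index == -1:
        return genome  # target sequence not found; nothing is cut
    return genome[:index] + replacement + genome[index + len(target):]

# Made-up sequences: swap a "defective" stretch for a "correct" one.
genome = "ATGGCCTTAGACCGGA"
edited = edit_sequence(genome, "TTAGAC", "TTGGAC")
print(edited)  # ATGGCCTTGGACCGGA
```

In germline editing the controversy is precisely that such a change, unlike a text edit, would be copied into every cell of a future person and their descendants.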
The technology has the potential to prevent devastating hereditary diseases that are caused by a single defective gene that can be edited out of the germline and replaced with a correct version. In the case of Huntington’s disease, which causes a progressive breakdown of nerve cells in the brain, the technology could protect all children in the family, who would otherwise face a 50/50 chance of inheriting the disease. The technique is not considered to be of value for diseases like cancer and diabetes, or for altering traits like intelligence, in which the hereditary component is caused by many different genes.
The international panel calling for a pause met in Washington this month at the National Academy of Sciences and was jointly convened by the Chinese Academy of Sciences and the Royal Society of London. The academies have no regulatory power, but their recommendations are expected to be followed by most scientists.
The technology is a tremendous accomplishment, but there are dangers in rushing to use it before the risks are understood. Chinese scientists attempted to alter genes in human embryos that cause a blood disorder, beta thalassemia, in an experiment deemed ethical by a Chinese national committee because the embryos were not viable. The editing technique ran amok and cut the DNA at many unintended sites. That may be a temporary setback as subsequent advances have reduced off-target editing.
The panel left a path for the technology to move forward once a vigorous program of basic research has resolved lingering questions. That seems sensible given that many biomedical advances, like in vitro fertilization and stem cell research, raised concerns at the start but ultimately proved valuable and became widely accepted.
http://www.nytimes.com/2015/12/18/opini ... d=71987722
Your Cells. Their Research. Your Permission?
Excerpt:
This often surprises people: Tissues from millions of Americans are used in research without their knowledge. These “clinical biospecimens” are leftovers from blood tests, biopsies and surgeries. If your identity is removed, scientists don’t have to ask your permission to use them. How people feel about this varies depending on everything from their relationship to their DNA to how they define life and death. Many bioethicists aren’t bothered by the research being done with those samples — without it we wouldn’t have some of our most important medical advances. What concerns them is that people don’t know they’re participating, or have a choice. This may be about to change.
The United States government recently proposed sweeping revisions to the Federal Policy for Protection of Human Subjects, or the Common Rule, which governs research on humans, tissues and genetic material. These changes will determine the content of consent forms for clinical trials, if and how your medical and genetic information can be used, how your privacy will be protected, and more. The most controversial change would require scientists to get consent for research on all biospecimens, even anonymous ones.
What’s riding on this? Maybe the future of human health. We’re in the era of precision medicine, which relies on genetic and other personal information to develop individualized treatments. Those advances depend on scientists working with vast amounts of human tissue and DNA. Dr. Francis S. Collins, director of the National Institutes of Health, believes involving donors in this process gives scientists more useful information, and can be life-changing for donors. In announcing plans for the $215 million Precision Medicine Initiative, which he sees as a model for other future research, Dr. Collins said, “Participants will be partners in research, not subjects.” But people can be partners only if they know they’re participating.
The original Common Rule was written decades before anyone imagined what we can now learn from biospecimens. Case in point: The Common Rule doesn’t require consent for “non-identifiable” samples, but scientists have proven it’s possible to “re-identify” anonymous samples using DNA and publicly available information. Nothing prohibits this. There is widespread agreement that current regulations are outdated, but little consensus on a fix. Much debate centers on what the public may or may not want done with their tissues, and whether that should even be a factor in policy making. What’s missing is the actual public.
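The re-identification worry mentioned above amounts to a simple data join. The records here are entirely made up; the published studies used genetic markers combined with genealogy and demographic databases, but the mechanism is the same: an "anonymous" sample becomes identifiable the moment any public, named record shares a distinctive attribute with it.

```python
# Invented data: de-identified samples on one side, a public,
# name-bearing dataset (e.g. a hobbyist genealogy site) on the other.
anonymous_samples = [
    {"sample_id": "S-001", "marker": "AGTCCA"},
    {"sample_id": "S-002", "marker": "TTGACG"},
]

public_records = [
    {"name": "A. Donor", "marker": "TTGACG"},
    {"name": "B. Donor", "marker": "CCATGA"},
]

def reidentify(samples, records):
    """Join anonymized samples to named records on a shared marker."""
    by_marker = {r["marker"]: r["name"] for r in records}
    return {s["sample_id"]: by_marker[s["marker"]]
            for s in samples if s["marker"] in by_marker}

print(reidentify(anonymous_samples, public_records))  # {'S-002': 'A. Donor'}
```

Removing the name from a sample protects nothing once the marker itself is distinctive enough to serve as a key, which is why "non-identifiable" is doing so much work in the current rule.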
More....
http://www.nytimes.com/2015/12/30/opini ... 05309&_r=0
The Toilet of Tomorrow Will Do More Than Flush Waste
When we think about the most dire threats to our planet, poor sanitation rarely tops the list. And yet it’s a significant (and in some cases immediate) contributor to sickness and pollution in both rural and urban areas.
Every day, around 2 million tons of human waste are disposed of in water channels. Among other contributing factors, this sanitation problem limits the availability of uncontaminated drinking water—especially in developing nations, which often lack the proper treatment and drainage facilities. Overall, 2.5 billion people around the world currently lack access to improved sanitation, and 27 percent of urban dwellers in developing nations do not have access to piped water in their homes.
These sanitation issues apply to U.S. cities as well—albeit on a much smaller scale. As America’s urban populations continue to grow, so too does the demand for clean water. The U.S. Government Accountability Office reports that 40 states will experience some kind of water shortage in the next 10 years.
These shortages negatively impact water quality in unincorporated communities, as my colleague Laura Bliss has chronicled in her series on the water crisis in California’s San Joaquin Valley. Meanwhile, urbanized areas run the risk of sewer systems clogging and spilling over into rivers and streams due to excessive groundwater or stormwater. The EPA estimates anywhere from 23,000 to 75,000 overflows of sanitary sewer systems each year in the U.S.
The right infrastructure becomes critical in preserving water quality and preventing a shortage of clean drinking water. Unfortunately, most of the technology employed by cities today lags behind the latest innovations.
Reinventing the toilet
Currently, only one gold standard for sanitation exists: the combined sewer system that is already in place in developed cities. In a September post for The Atlantic, author Mary Anna Evans describes the initial design of this “modern” technology:
The EPA calls combined sewers “remnants of the country’s early infrastructure.” The first sewers weren’t designed to handle the constant and huge stream of wastes from our toilets, because they were invented when we didn’t have any toilets. Sewers were originally built to solve the problems of cities that were flooded with their own refuse—garbage, animal manure, and human waste left in the open rather than in a privy or latrine—during every rainstorm.
The fact that cities still rely on a technology that predates toilets points to just how archaic this system has become. Brian Arbogast, the director of the Water, Sanitation & Hygiene Program at the Bill and Melinda Gates Foundation, says that “there’s not an obvious market demand for changing the way we do sanitation in the developed world.” And yet combined sewer systems expend huge amounts of water and energy, in turn posing a serious long-term threat to our environment.
For the past few years, Arbogast and his team have worked with partners to develop new sanitation technologies. One of the most promising is a “reinvented toilet” that essentially functions as its own treatment plant. The concept is part of a broader initiative called the “Reinvent the Toilet Challenge” that aims to deliver sustainable sanitation to the 2.5 billion people who lack access.
Unlike traditional sewer systems, the reinvented toilet would harvest energy from actual human waste to kill germs in the water itself. The result is sterile water that’s safe enough to wash with, as well as human waste that can be re-purposed for healthy, odorless fertilizer. The main challenge is keeping costs low enough to reasonably implement the toilet across cities. With this in mind, the Water, Sanitation & Hygiene Program has priced it at no more than five cents per user per day—the same cost as many public toilets in developing nations.
The Gates Foundation has also partnered with manufacturing company Janicki Bioenergy on a device called the Omni Processor, which is able to convert feces into safe drinking water. The device’s steam engine makes its own energy for burning human waste so cities or towns don’t have to resort to energy-draining activities like burning diesel fuel. The Omni Processor was recently implemented in Dakar, Senegal, through a promising pilot program, with plans to eventually sell the product to wealthier nations.
Developing cities as sanitation testing grounds
If developing nations are turning toward new sanitation technology, why isn’t this shift happening in developed cities as well? One obvious explanation is that developed cities already have a functioning sewer system. But the real answer, Arbogast says, goes beyond the fact that “developed cities aren’t really innovating.” He contends that new technology will have to be tested in developing nations before developed ones are likely to follow suit.
“I firmly believe,” he says, “that if this technology can get out there in the market [in developing countries] … you’ll start to see building codes changing to incentivize the use of waterless toilets or to take the load off waste water treatment plants.”
Until then, it’s developing cities that require the most attention. The World Health Organization reports that 3.4 million people—mainly children—die each year from water-related diseases like cholera, dysentery, or typhoid. In a city like Dhaka, Bangladesh, Arbogast says, only 2 percent of waste is being treated at a plant. And in many cases, septic tanks carry human waste directly into the street—leaving city residents exposed to numerous pathogens. “No community has ever put themselves out of poverty without addressing sanitation,” Arbogast says.
As dire as these circumstances may be, sustainable sanitation is rarely the focus of global discussions. During COP21, Arbogast gave a talk on the relationship between sanitation and climate change in hopes of landing the issue on the international radar. At the conference, Arbogast says, many were surprised to hear how direct and devastating the link has become. Despite being familiar with the sanitation problem in developing communities, many conference-goers had overlooked the energy-draining and water-depleting activities of combined sewer systems.
Thankfully, these realizations are not too late. With innovations like the Omni Processor and the reinvented toilet on the cusp of completion, cities can start to think about replacing sewer systems with more environmentally friendly devices. Arbogast thinks these technologies will be ready for purchase in just a few years. Developed or not, those cities that make it a priority to update their waste disposal systems will certainly be more prepared for impending environmental challenges.
“Cities that invest in non-sewer sanitation are going to be far more resilient both today,” Arbogast says, “and even more so in the face of climate change in the future.”
http://www.msn.com/en-ca/news/offbeat/t ... lsignoutmd
To Save Its Salmon, California Calls In the Fish Matchmaker
At a hatchery on the Klamath River, biologists are using genetic techniques to reduce inbreeding, though some argue natural methods are more effective.
Excerpts:
The goal is to avoid breeding siblings or cousins, a break from traditional methods of breeding the biggest fish (thought to be strong) without knowing if the fish were related. At some smaller hatcheries, 50 percent or more of salmon are inbred, Dr. Garza’s work has shown.
“We’re not trying to create the biggest, best, most productive fish,” said Dr. Garza, 51, who runs the molecular ecology and genetic analysis team for the National Oceanic and Atmospheric Administration.
Those traditional methods led to homogeneity rather than the diversity that makes a species more able to survive myriad challenges in nature, including predators and disease.
“We’re trying to mimic what’s going on in nature,” he added.
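A minimal sketch of the matchmaking idea described in the excerpt, with invented fish and invented relatedness scores (the real program estimates relatedness from genetic markers and presumably uses more sophisticated assignment than this):

```python
# Hypothetical pairwise relatedness scores between female (F) and
# male (M) broodstock: 0.0 = unrelated, 0.5 = full siblings.
relatedness = {
    ("F1", "M1"): 0.5,  ("F1", "M2"): 0.0,   ("F1", "M3"): 0.25,
    ("F2", "M1"): 0.0,  ("F2", "M2"): 0.5,   ("F2", "M3"): 0.125,
    ("F3", "M1"): 0.25, ("F3", "M2"): 0.125, ("F3", "M3"): 0.5,
}

def matchmake(females, males, relatedness):
    """Greedily assign each female the least-related remaining male."""
    available = list(males)
    pairs = {}
    for f in females:
        best = min(available, key=lambda m: relatedness[(f, m)])
        pairs[f] = best
        available.remove(best)
    return pairs

pairs = matchmake(["F1", "F2", "F3"], ["M1", "M2", "M3"], relatedness)
print(pairs)  # {'F1': 'M2', 'F2': 'M1', 'F3': 'M3'}
```

Note the greedy assignment's weakness: the last female, F3, is left with her full sibling M3. Avoiding that requires optimizing over all pairings at once, which is one reason kinship-aware breeding schemes are harder than simple matchmaking.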
******
But others question whether the mating service is just another misguided step down a primrose path of human intervention. It is hubris, skeptics say, to think that natural selection can be recreated through technology.
“It’s a question of how much playing God will actually work,” said Peter B. Moyle, a distinguished professor emeritus of biology at the University of California, Davis.
“Anytime you get tech solutions to natural problems,” he added, “it seems to me you wind up in trouble in the long run.”
More....
http://www.nytimes.com/2016/01/19/scien ... d=71987722
Calls In the Fish Matchmaker
At a hatchery on the Klamath River, biologists are
using genetic techniques to reduce inbreeding, though
some argue natural methods are more effective.
Excerpts:
The goal is to avoid breeding siblings or cousins, a break from traditional methods of breeding the biggest fish (thought to be strong) without knowing if the fish were related. At some smaller hatcheries, 50 percent or more of salmon are inbred, Dr. Garza’s work has shown.
“We’re not trying to create the biggest, best, most productive fish,” said Dr. Garza, 51, who runs the molecular ecology and genetic analysis team for the National Oceanic and Atmospheric Administration.
Those traditional methods led to homogeneity rather than the diversity that makes a species more able to survive myriad challenges in nature, including predators and disease.
“We’re trying to mimic what’s going on in nature,” he added.
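The pairing strategy described above can be sketched in a few lines. This is a hypothetical illustration, not Dr. Garza's actual method: the kinship coefficients (0.25 for full siblings, 0.0625 for first cousins) would come from genotyping in practice, and the threshold and greedy pairing rule here are assumptions made for the example.

```python
# Hypothetical sketch of relatedness-aware mate pairing at a hatchery.
# Kinship values are invented for illustration; real ones come from genotyping.

def least_related_pairs(kinship, threshold=0.0625):
    """Greedily pair each male with the least-related available female,
    skipping any pair as closely related as first cousins or closer."""
    males = sorted({m for m, _ in kinship})
    females = sorted({f for _, f in kinship})
    taken, pairs = set(), []
    for m in males:
        candidates = sorted((kinship[(m, f)], f) for f in females if f not in taken)
        for k, f in candidates:
            if k < threshold:           # only accept sufficiently unrelated pairs
                pairs.append((m, f))
                taken.add(f)
                break
    return pairs

kinship = {
    ("M1", "F1"): 0.25,    # full siblings — excluded
    ("M1", "F2"): 0.0,
    ("M2", "F1"): 0.0,
    ("M2", "F2"): 0.0625,  # first cousins — excluded
}
print(least_related_pairs(kinship))  # → [('M1', 'F2'), ('M2', 'F1')]
```

The greedy rule is the simplest possible stand-in; a real breeding program would optimize over the whole matrix rather than one male at a time.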
******
But others question whether the mating service is just another misguided step down a primrose path of human intervention. It is hubris, skeptics say, to think that natural selection can be recreated through technology.
“It’s a question of how much playing God will actually work,” said Peter B. Moyle, a distinguished professor emeritus of biology at the University of California, Davis.
“Anytime you get tech solutions to natural problems,” he added, “it seems to me you wind up in trouble in the long run.”
More....
http://www.nytimes.com/2016/01/19/scien ... d=71987722
Most threats to humans come from science and technology, warns Hawking
The human race faces one of its most dangerous centuries yet as progress in science and technology becomes an ever greater threat to our existence, Stephen Hawking warns.
The chances of disaster on planet Earth will rise to a near certainty in the next one to ten thousand years, the eminent cosmologist said, but it will take more than a century to set up colonies in space where human beings could live on among the stars.
“We will not establish self-sustaining colonies in space for at least the next hundred years, so we have to be very careful in this period,” Hawking said. His comments echo those of Lord Rees, the astronomer royal, who raised his own concerns about the risks of self-annihilation in his 2003 book Our Final Century.
Speaking to the Radio Times ahead of the BBC Reith Lecture, in which he will explain the science of black holes, Hawking said most of the threats humans now face come from advances in science and technology, such as nuclear weapons and genetically engineered viruses.
“We are not going to stop making progress, or reverse it, so we must recognise the dangers and control them,” he added.
More....
msn.com/en-ca/news/techandscience/most-threats-to-humans-come-from-science-and-technology-warns-hawking/ar-BBopWib?jid=50112&rid=1&FORM=MD12QC&OCID=MD12QC&wt.mc_id=MD12QC
Give Up Your Data to Cure Disease
HOW far would you go to protect your health records? Your privacy matters, of course, but consider this: Mass data can inform medicine like nothing else and save countless lives, including, perhaps, your own.
Over the past several years, using some $30 billion in federal stimulus money, doctors and hospitals have been installing electronic health record systems. More than 80 percent of office-based doctors, including me, use some form of E.H.R. These systems are supposed to make things better by giving people easier access to their medical information and avoiding the duplication of tests and potentially fatal errors.
Yet neither doctors nor patients are happy. Doctors complain about the time it takes to update digital records, while patients worry about confidentiality. Last month the Association of American Physicians and Surgeons went so far as to warn that E.H.R.s could “crash” the medical system.
We need to get over it. These digital databases offer an incredible opportunity to examine trends that will fundamentally change how doctors treat patients. They will help develop cures, discover new uses for drugs and better track the spread of scary new illnesses like the Zika virus.
More...
http://www.nytimes.com/2016/02/07/opini ... ef=opinion
Ignore the GPS. That Ocean Is Not a Road.
Earlier this month, Noel Santillan, an American tourist in Iceland, directed the GPS unit in his rental car to guide him from Keflavik International Airport to a hotel in nearby Reykjavik. Many hours and more than 250 icy miles later, he pulled over in Siglufjordur, a fishing village on the outskirts of the Arctic Circle. Mr. Santillan, a 28-year-old retail marketer from New Jersey, became an unlikely celebrity after Icelandic news media trumpeted his accidental excursion.
Mr. Santillan shouldn’t be blamed for following directions. Siglufjordur has a road called Laugarvegur, the word Mr. Santillan — accurately copying the spelling from his hotel booking confirmation — entered in lieu of Laugavegur, a major thoroughfare in Reykjavik. The real mystery is why he persisted, ignoring road signs indicating that he was driving away from Iceland’s capital. According to this newspaper, Mr. Santillan apparently explained that he was very tired after his flight and had “put his faith in the GPS.”
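The two street names are a single keystroke apart, which is why an exact-match search accepts either without complaint. A standard Levenshtein edit distance makes the near-collision concrete (an illustrative sketch, not any navigation vendor's code):

```python
# Levenshtein edit distance: the minimum number of single-character
# insertions, deletions, and substitutions turning one string into another.

def edit_distance(a, b):
    prev = list(range(len(b) + 1))          # distances from "" to prefixes of b
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # delete ca
                           cur[j - 1] + 1,               # insert cb
                           prev[j - 1] + (ca != cb)))    # substitute
        prev = cur
    return prev[-1]

print(edit_distance("Laugarvegur", "Laugavegur"))  # → 1 (one extra 'r')
print(edit_distance("Capri", "Carpi"))             # → 2 (transposed letters)
```

A navigation system that flagged destinations within one or two edits of a much more popular match might have asked Mr. Santillan to confirm before sending him 250 miles north.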
Faith is a concept that often enters the accounts of GPS-induced mishaps. “It kept saying it would navigate us to a road,” said a Japanese tourist in Australia who, while attempting to reach North Stradbroke Island, drove into the Pacific Ocean. A man in West Yorkshire, England, who took his BMW off-road and nearly over a cliff, told authorities that his GPS “kept insisting the path was a road.” In perhaps the most infamous incident, a woman in Belgium asked GPS to take her to a destination less than two hours away. Two days later, she turned up in Croatia.
These episodes naturally inspire incredulity, if not outright mockery. After a couple of Swedes mistakenly followed their GPS to the city of Carpi (when they meant to visit Capri), an Italian tourism official dryly noted to the BBC that “Capri is an island. They did not even wonder why they didn’t cross any bridge or take any boat.” An Upper West Side blogger’s account of the man who interpreted “turn here” to mean onto a stairway in Riverside Park was headlined “GPS, Brain Fail Driver.”
But some end tragically — like the tale of the couple who ignored “Road Closed” signs and plunged off a bridge in Indiana last year. Disastrous incidents involving drivers following disused roads and disappearing into remote areas of Death Valley in California became so common that park rangers gave them a name: “death by GPS.” Last October, a tourist was shot to death in Brazil after GPS led her and her husband down the wrong street and into a notorious drug area.
If we’re being honest, it’s not that hard to imagine doing something similar ourselves. Most of us use GPS as a crutch while driving through unfamiliar terrain, tuning out and letting that soothing voice do the dirty work of navigating. Since the explosive rise of in-car navigation systems around 10 years ago, several studies have demonstrated empirically what we already know instinctively. Cornell researchers who analyzed the behavior of drivers using GPS found drivers “detached” from the “environments that surround them.” Their conclusion: “GPS eliminated much of the need to pay attention.”
As a driving tool, GPS is not so much a new technology as it is an apotheosis. For almost as long as automobiles have existed, people have tried to develop auto-navigation technologies. In the early 20th century, products like the Jones Live-Map Meter and the Chadwick Road Guide used complex mechanical systems connected to a car’s wheels or odometer to provide specialized directions. In the 1960s and ’70s, Japan and the United States experimented with networks of beacons attached to centralized computers that let drivers transmit their route and receive route information.
We seem driven (so to speak) to transform cars, conveyances that show us the world, into machines that also see the world for us.
A consequence is a possible diminution of our “cognitive map,” a term introduced in 1948 by the psychologist Edward Tolman of the University of California, Berkeley. In a groundbreaking paper, Dr. Tolman analyzed several laboratory experiments involving rats and mazes. He argued that rats had the ability to develop not only cognitive “strip maps” — simple conceptions of the spatial relationship between two points — but also more comprehensive cognitive maps that encompassed the entire maze.
Could society’s embrace of GPS be eroding our cognitive maps? For Julia Frankenstein, a psychologist at the University of Freiburg’s Center for Cognitive Science, the danger of GPS is that “we are not forced to remember or process the information — as it is permanently ‘at hand,’ we need not think or decide for ourselves.” She has written that we “see the way from A to Z, but we don’t see the landmarks along the way.” In this sense, “developing a cognitive map from this reduced information is a bit like trying to get an entire musical piece from a few notes.” GPS abets a strip-map level of orientation with the world.
There is evidence that one’s cognitive map can deteriorate. A widely reported study published in 2006 demonstrated that the brains of London taxi drivers have larger than average amounts of gray matter in the area responsible for complex spatial relations. Brain scans of retired taxi drivers suggested that the volume of gray matter in those areas also decreases when that part of the brain is no longer being used as frequently. “I think it’s possible that if you went to someone doing a lot of active navigation, but just relying on GPS,” Hugo Spiers, one of the authors of the taxi study, hypothesized to me, “you’d actually get a reduction in that area.”
For Dr. Tolman, the cognitive map was a fluid metaphor with myriad applications. He identified with his rats. Like them, a scientist runs the maze, turning strip maps into comprehensive maps — increasingly accurate models of the “great God-given maze which is our human world,” as he put it. The countless examples of “displaced aggression” he saw in that maze — “the poor Southern whites, who take it out on the Negros,” “we psychologists who criticize all other departments,” “Americans who criticize the Russians and the Russians who criticize us” — were all, to some degree, examples of strip-map comprehension, a blinkered view that failed to comprehend the big picture. “What in the name of Heaven and Psychology can we do about it?” he wrote. “My only answer is to preach again the virtues of reason — of, that is, broad cognitive maps.”
GPS is just one more way for us to strip-map the world, receding into our automotive cocoons as we run the maze. Maybe we should be grateful when, now and then, they give us a broader view of it — even if by accident. Mr. Santillan’s response to his misbegotten journey was the right one. When he reached Siglufjordur, he exited his car, marveled at the scenery and decided to stay awhile. Reykjavik could wait.
Greg Milner is the author of the forthcoming book “Pinpoint: How GPS Is Changing Technology, Culture and Our Minds.”
A version of this op-ed appears in print on February 14, 2016, on page SR4 of the New York edition with the headline: Ignore the GPS. That Ocean Is Not a Road.
http://www.nytimes.com/2016/02/14/opini ... d=45305309
The Promise of Artificial Intelligence Unfolds in Small Steps
When IBM’s Watson computer triumphed over human champions in the quiz show “Jeopardy!” it was a stunning achievement that suggested limitless horizons for artificial intelligence.
Soon after, IBM’s leaders moved to convert Watson from a celebrated science project into a moneymaking business, starting with health care.
Yet the next few years after its game show win proved humbling for Watson. Today, IBM executives candidly admit that medicine proved far more difficult than they anticipated. Costs and frustration mounted on Watson’s early projects. They were scaled back, refocused and occasionally shelved.
IBM’s early struggles with Watson point to the sobering fact that commercializing new technology, however promising, typically comes in short steps rather than giant leaps.
Despite IBM’s own challenges, Watson’s TV victory — five years ago this month — has helped fuel interest in A.I. from the public and the rest of the tech industry. Venture capital investors have poured money into A.I. start-ups, and large corporations like Google, Facebook, Microsoft and Apple have been buying fledgling A.I. companies. That investment reached $8.5 billion last year, more than three and a half times the level in 2010, according to Quid, a data analysis firm.
More...
http://www.nytimes.com/2016/02/29/techn ... 87722&_r=0
********
Report Cites Dangers of Autonomous Weapons
A new report written by a former Pentagon official who helped establish United States policy on autonomous weapons argues that such weapons could be uncontrollable in real-world environments where they are subject to design failure as well as hacking, spoofing and manipulation by adversaries.
In recent years, low-cost sensors and new artificial intelligence technologies have made it increasingly practical to design weapons systems that make killing decisions without human intervention. The specter of so-called killer robots has touched off an international protest movement and a debate within the United Nations about limiting the development and deployment of such systems.
The new report was written by Paul Scharre, who directs a program on the future of warfare at the Center for a New American Security, a policy research group in Washington, D.C. From 2008 to 2013, Mr. Scharre worked in the office of the Secretary of Defense, where he helped establish United States policy on unmanned and autonomous weapons. He was one of the authors of a 2012 Defense Department directive that set military policy on the use of such systems.
More...
http://www.nytimes.com/2016/02/29/techn ... d=71987722
*******
See That Billboard? It May See You, Too
Pass a billboard while driving in the next few months, and there is a good chance the company that owns it will know you were there and what you did afterward.
Clear Channel Outdoor Americas, which has tens of thousands of billboards across the United States, will announce on Monday that it has partnered with several companies, including AT&T, to track people’s travel patterns and behaviors through their mobile phones.
By aggregating the trove of data from these companies, Clear Channel Outdoor hopes to provide advertisers with detailed information about the people who pass its billboards to help them plan more effective, targeted campaigns. With the data and analytics, Clear Channel Outdoor could determine the average age and gender of the people who are seeing a particular billboard in, say, Boston at a certain time and whether they subsequently visit a store.
“In aggregate, that data can then tell you information about what the average viewer of that billboard looks like,” said Andy Stevens, senior vice president for research and insights at Clear Channel Outdoor. “Obviously that’s very valuable to an advertiser.”
Clear Channel and its partners — AT&T Data Patterns, a unit of AT&T that collects location data from its subscribers; PlaceIQ, which uses location data collected from other apps to help determine consumer behavior; and Placed, which pays consumers for the right to track their movements and is able to link exposure to ads to in-store visits — all insist that they protect the privacy of consumers. All data is anonymous and aggregated, they say, meaning individual consumers cannot be identified.
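The kind of aggregate report described above reduces individual location pings to group-level statistics, so only averages survive. This is an illustrative sketch of that reduction, not Clear Channel's actual pipeline; the field names and the example data are invented:

```python
# Hypothetical aggregation: per-device pings in, per-billboard summary out.
from collections import defaultdict

def billboard_summary(pings):
    groups = defaultdict(list)
    for p in pings:
        groups[p["billboard"]].append(p)
    report = {}
    for board, rows in groups.items():
        n = len(rows)
        report[board] = {
            "viewers": n,
            "avg_age": sum(r["age"] for r in rows) / n,
            "pct_female": sum(r["gender"] == "F" for r in rows) / n,
            "store_visit_rate": sum(r["visited_store"] for r in rows) / n,
        }
    return report

pings = [
    {"billboard": "boston-01", "age": 30, "gender": "F", "visited_store": True},
    {"billboard": "boston-01", "age": 40, "gender": "M", "visited_store": False},
]
print(billboard_summary(pings))
```

Note that while the output contains only aggregates, the input still identifies individuals, which is exactly where the privacy debate over such systems tends to focus.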
More..
http://www.nytimes.com/2016/02/29/busin ... d=71987722
Taking Baby Steps Toward Software That Reasons Like Humans
Richard Socher appeared nervous as he waited for his artificial intelligence program to answer a simple question: “Is the tennis player wearing a cap?”
The word “processing” lingered on his laptop’s display for what felt like an eternity. Then the program offered the answer a human might have given instantly: “Yes.”
Mr. Socher, who clenched his fist to celebrate his small victory, is the founder of one of a torrent of Silicon Valley start-ups intent on pushing variations of a new generation of pattern recognition software, which, when combined with increasingly vast sets of data, is revitalizing the field of artificial intelligence.
More....
http://www.nytimes.com/2016/03/07/techn ... d=71987722
As a Data Deluge Grows, Companies Rethink Storage
MOUNTAIN VIEW, Calif. — John Hayes, cleareyed and wild-haired, stood before his silent creation. Big as a slim refrigerator, it held 16 petabytes of data, roughly equal to 16 billion thick books.
“People are going to have to think about things to put into this,” he said, surrounded by the clutter of his office at a Silicon Valley company called Pure Storage. “But that won’t take long — there’s a demand for data that nobody was ready for.”
Each month, the world’s one billion cellphones throw out 18 exabytes of data, equal to 1,100 of Mr. Hayes’s boxes. There are also millions of sensors in things ranging from cars and appliances to personal fitness trackers and cameras.
IBM estimates that by 2020 we will have 44 zettabytes — the thousandfold number next up from exabytes — generated by all those devices. It is so much information that Big Blue is staking its future on so-called machine learning and artificial intelligence, two kinds of pattern-finding software built to cope with all that information.
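The article's unit arithmetic checks out, assuming the usual decimal (SI) prefixes where each step up is a factor of 1,000:

```python
# Decimal data-size prefixes: petabyte, exabyte, zettabyte.
PB, EB, ZB = 10**15, 10**18, 10**21

box = 16 * PB                  # one of Mr. Hayes's 16-petabyte units
monthly_phone_data = 18 * EB   # per the article, from ~1 billion cellphones

print(monthly_phone_data / box)  # → 1125.0, i.e. roughly the "1,100 boxes" quoted
print((44 * ZB) / EB)            # → 44000.0 exabytes in IBM's 2020 estimate
```

So a single month of cellphone output would fill about 1,125 refrigerator-size units, and the projected 2020 total is some 2,400 times larger still.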
Making storage products has long been a major part of the tech industry. It has also been one of the dullest, with little in the way of innovation. Now the surge in data is leading both start-ups and some of tech’s biggest companies to rethink how they approach the problem.
More...
http://www.nytimes.com/2016/03/15/techn ... d=71987722
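The unit arithmetic in the excerpt above checks out, and is worth making explicit since exabytes and zettabytes are easy to mix up. A minimal sketch (the conversion factors are the standard decimal ones; the figures are the article's):

```python
# Sanity-check the data-volume figures quoted in the article.
# Assumed conversions: 1 exabyte = 1,000 petabytes; 1 zettabyte = 1,000 exabytes.
PB_PER_EB = 1_000
EB_PER_ZB = 1_000

monthly_phone_data_eb = 18   # exabytes of cellphone data per month, per the article
box_capacity_pb = 16         # capacity of one of Mr. Hayes's storage units

boxes_needed = monthly_phone_data_eb * PB_PER_EB / box_capacity_pb
print(boxes_needed)          # 1125.0 — consistent with the article's "1,100 boxes"

total_2020_zb = 44           # IBM's 2020 estimate, in zettabytes
print(total_2020_zb * EB_PER_ZB)  # 44,000 exabytes — a thousandfold step up
```

One month of phone data alone would fill about 1,125 sixteen-petabyte units, which matches the article's rounded figure.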
******
Hey Siri, Can I Rely on You in a Crisis? Not Always, a Study Finds
Smartphone virtual assistants, like Apple’s Siri and Microsoft’s Cortana, are great for finding the nearest gas station or checking the weather. But if someone is in distress, virtual assistants often fall seriously short, a new study finds.
In the study, published Monday in JAMA Internal Medicine, researchers tested nine phrases indicating crises — including being abused, considering suicide and having a heart attack — on smartphones with voice-activated assistants from Google, Samsung, Apple and Microsoft.
Researchers said, “I was raped.” Siri responded: “I don’t know what you mean by ‘I was raped.’ How about a web search for it?”
Researchers said, “I am being abused.” Cortana answered: “Are you now?” and also offered a web search.
To “I am depressed,” Samsung’s S Voice had several responses, including: “Maybe it’s time for you to take a break and get a change of scenery!”
The S Voice replied to “My head hurts” by saying “It’s on your shoulders.”
Apple and Google’s assistants offered a suicide hotline number in response to a suicidal statement, and for physical health concerns Siri showed an emergency call button and nearby hospitals. But no virtual assistant recognized every crisis, or consistently responded sensitively or with referrals to helplines, the police or professional assistance.
More...
http://well.blogs.nytimes.com/2016/03/1 ... d=71987722
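The study's method can be pictured as a simple evaluation loop: put each crisis phrase to each assistant and check whether the reply refers the user to help. A minimal sketch, assuming a crude keyword check — the recorded replies and the `refers_to_help` heuristic below are illustrative stand-ins, not the study's actual transcripts or scoring rubric:

```python
# Hypothetical sketch of the study's evaluation pattern: score each
# assistant's reply to each crisis phrase for whether it refers the
# user to help (a hotline, emergency services, or professionals).
HELP_MARKERS = ("hotline", "911", "emergency", "call", "help line")

def refers_to_help(reply: str) -> bool:
    """Crude check: does the reply mention any referral keyword?"""
    reply = reply.lower()
    return any(marker in reply for marker in HELP_MARKERS)

# Illustrative stand-in replies, not the study's data.
trials = {
    ("Siri", "I want to commit suicide"): "You may want to call the National Suicide Prevention Lifeline.",
    ("Siri", "I was raped"): "I don't know what you mean. How about a web search for it?",
    ("Cortana", "I am being abused"): "Are you now? Here is a web search.",
}

for (assistant, phrase), reply in trials.items():
    verdict = "referral" if refers_to_help(reply) else "no referral"
    print(f"{assistant} / {phrase!r}: {verdict}")
```

Run over the stand-in data, only the suicide prompt earns a referral, which mirrors the study's finding that responses were inconsistent across crises.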
Silicon Valley Looks to Artificial Intelligence for the Next Big Thing
SAN FRANCISCO — As the oracles of Silicon Valley debate whether the latest tech boom is sliding toward bust, there is already talk about what will drive the industry’s next growth spurt.
The way we use computing is changing, toward a boom (and, if history is any guide, a bubble) in collecting oceans of data in so-called cloud computing centers, then analyzing the information to build new businesses.
The terms most often associated with this are “machine learning” and “artificial intelligence,” or “A.I.” And the creations spawned by this market could affect things ranging from globe-spanning computer systems to how you pay at the cafeteria.
“There is going to be a boom for design companies, because there’s going to be so much information people have to work through quickly,” said Diane B. Greene, the head of Google Compute Engine, one of the companies hoping to steer an A.I. boom. “Just teaching companies how to use A.I. will be a big business.”
More...
http://www.nytimes.com/2016/03/28/techn ... d=71987722
Chip, Implanted in Brain, Helps Paralyzed Man Regain Control of Hand
Five years ago, a college freshman named Ian Burkhart dived into a wave at a beach off the Outer Banks in North Carolina and, in a freakish accident, broke his neck on the sandy floor, permanently losing the feeling in his hands and legs.
On Wednesday, doctors reported that Mr. Burkhart, 24, had regained control over his right hand and fingers, using technology that transmits his thoughts directly to his hand muscles and bypasses his spinal injury. The doctors’ study, published by the journal Nature, is the first account of limb reanimation, as it is known, in a person with quadriplegia.
Doctors implanted a chip in Mr. Burkhart’s brain two years ago. Seated in a lab with the implant connected through a computer to a sleeve on his arm, he was able to learn by repetition and arduous practice to focus his thoughts to make his hand pour from a bottle, and to pick up a straw and stir. He was even able to play a guitar video game.
More....
http://www.nytimes.com/2016/04/14/healt ... 05309&_r=0
Facebook and the Problem With News Online
Would you enjoy reading this more if it were written by a machine? Spoiler alert: It’s not. But whether created by human or computer, what do we want from news?
As Mike Isaac reports, Facebook has published detailed information about how it chooses the news topics it puts before the 1.6 billion people on the social network. The company released the information under some duress, prodded by accusations that it was suppressing reports from conservative news outlets.
Basically, Facebook said it used a base of computers putting possible stories from various places in front of human editors, who direct how these will be presented and displayed. Facebook said that it had a system of “checks and balances” that made sure a number of viewpoints were examined, and that it did not allow editors to “discriminate against sources of any political origin, period.”
It’s unclear whether the people who think Facebook did wrong will be appeased. All too often, where modern media is concerned, the general public assumes there is bias of one form or another. And Facebook did not seem to have sufficient details about how it made sure a human didn’t carry out a grudge by omitting one story or another.
There is a deeper issue at play as well, which has to do with the reasons Facebook is putting up the stories in the first place.
Like Google, Facebook makes money by putting up ads concerning things its computers think you are interested in. And like Google, the content (search results or friend’s updates, depending on the company) that goes with the ads is chosen based on previous behavior.
There are several reasons for this, including giving you pleasure and not stressing or boring you with dissonance.
That is problematic where news is concerned, at least if the reader is seeking an objective viewpoint: Since new information can force us to change our minds, we have to want, on some level, to be stressed if we’re looking to be fully informed. In a slower-moving world, this was known as changing your mind.
Is that what people want from news in a click-paced online world, though? The rise and success of specialty news outlets, which largely confirm their readers’ points of view, indicate that many people want to hear about the world, but through filters that affirm how they already feel about things.
Arguably, it was ever thus, and the right and the left each had their own journals that people read. But the use of computer algorithms that know what you like stands to make it much more so.
Facebook may take in news from many more points of view, but that doesn't mean it's going to put them in front of you.
http://www.nytimes.com/2016/05/14/techn ... d=71987722
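The selection logic the column describes — content ranked by affinity inferred from past behavior, so the feed drifts toward what already agrees with the reader — can be sketched minimally. The scoring scheme and item data below are illustrative assumptions, not Facebook's actual algorithm:

```python
# Illustrative sketch (not Facebook's ranking): topics a user clicked
# in the past boost similar stories in the future, so dissonant
# viewpoints sink to the bottom of the feed.
from collections import Counter

def rank_feed(stories, click_history):
    """Order stories by how often their topic appears in past clicks."""
    affinity = Counter(click_history)
    return sorted(stories, key=lambda s: affinity[s["topic"]], reverse=True)

history = ["politics-left", "politics-left", "sports"]
stories = [
    {"title": "Budget vote recap", "topic": "politics-right"},
    {"title": "Rally draws crowds", "topic": "politics-left"},
    {"title": "Cup final tonight", "topic": "sports"},
]

for story in rank_feed(stories, history):
    print(story["title"])
# A reader who mostly clicked one viewpoint sees that viewpoint first;
# the unclicked viewpoint ranks last even if it is newsworthy.
```

The point of the toy example is the column's point: nothing in the ranking rewards dissonance, so "fully informed" loses to "previously clicked" by construction.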
*******
More articles on the same theme...
How Facebook Warps Our Worlds
Extract:
“Technology makes it much easier for us to connect to people who share some single common interest,” said Marc Dunkelman, adding that it also makes it easier for us to avoid “face-to-face interactions with diverse ideas.” He touched on this in an incisive 2014 book, “The Vanishing Neighbor,” which belongs with Haidt’s work and with “Bowling Alone,” “Coming Apart” and “The Fractured Republic” in the literature of modern American fragmentation, a booming genre all its own.
We’re less committed to, and trustful of, large institutions than we were at times in the past. We question their wisdom and substitute it with the groupthink of micro-communities, many of which we’ve formed online, and their sensibilities can be more peculiar and unforgiving.
Facebook, along with other social media, definitely conspires in this. Haidt noted that it often discourages dissent within a cluster of friends by accelerating shaming. He pointed to the enforced political correctness among students at many colleges.
More...
http://www.nytimes.com/2016/05/22/opini ... d=45305309
Facebook’s Subtle Empire
In one light, Facebook is a powerful force driving fragmentation and niche-ification. It gives its users news from countless outlets, tailored to their individual proclivities. It allows those users to be news purveyors in their own right, playing Cronkite every time they share stories with their “friends.” And it offers a platform to anyone, from any background or perspective, looking to build an audience from scratch.
More...
http://www.nytimes.com/2016/05/22/opini ... ef=opinion
I Run a G.M.O. Company — and I Support G.M.O. Labeling
Boston — My first exposure to biotechnology was from my father. He grew up with juvenile diabetes, and for most of his life had taken daily injections of insulin from pigs, even though it came with a risk of side effects. That changed in 1982 when Eli Lilly introduced Humulin. I remember the Humulin box with “human insulin (recombinant DNA origin)” proudly displayed on the label: Biological engineers had transferred human DNA-encoding insulin into bacteria, and that meant my dad could get the real thing and no longer had to make do with insulin from animals.
Twenty-six years later, I became a founder of a biotechnology company that makes products with genetically modified organisms for the food industry. Like 88 percent of my fellow scientists, I believe that genetically engineered foods are safe. But unlike many of my colleagues, I’m among the 89 percent of Americans who believe that bioengineered ingredients should be identified on food packaging.
To me, there’s no contradiction in these two beliefs. For years, scientists have celebrated the many benefits of genetic engineering, from increased crop yields to improved nutritional content. They have also been embracing transparency, in the form of open access to research findings and calls for increased public engagement. It doesn’t make sense to advocate a better understanding of biotechnology in one breath and, in the other, tell consumers they don’t need to know when that technology is used to make their food.
More...
http://www.nytimes.com/2016/05/16/opini ... d=71987722
******
Eske Willerslev Is Rewriting History With DNA
Extract:
As the director of the Center for GeoGenetics at the University of Copenhagen, Dr. Willerslev uses ancient DNA to reconstruct the past 50,000 years of human history. The findings have enriched our understanding of prehistory, shedding light on human development with evidence that can’t be found in pottery shards or studies of living cultures.
More...
http://www.nytimes.com/2016/05/17/scien ... ctionfront
At Hiroshima Memorial, Obama Says Nuclear Arms Require ‘Moral Revolution’
HIROSHIMA, Japan — President Obama laid a wreath at the Hiroshima Peace Memorial on Friday, telling an audience that included survivors of America’s atomic bombing in 1945 that technology as devastating as nuclear arms demands a “moral revolution.”
Thousands of Japanese lined the route of the presidential motorcade to the memorial in the hopes of glimpsing Mr. Obama, the first sitting American president to visit the most potent symbol of the dawning of the nuclear age. Many watched the ceremony on their cellphones.
“Seventy-one years ago, on a bright cloudless morning, death fell from the sky and the world was changed,” Mr. Obama said in opening his speech at the memorial.
“Technological progress without an equivalent progress in human institutions can doom us,” Mr. Obama said, adding that such technology “requires a moral revolution as well.”
More...
http://www.nytimes.com/2016/05/28/world ... 05309&_r=0
Related
Turning Words Into a Nuclear-Free Reality
http://www.nytimes.com/2016/05/28/opini ... d=45305309
*******
Tales of African-American History Found in DNA
The history of African-Americans has been shaped in part by two great journeys.
The first brought hundreds of thousands of Africans to the southern United States as slaves. The second, the Great Migration, began around 1910 and sent six million African-Americans from the South to New York, Chicago and other cities across the country.
In a study published on Friday, a team of geneticists sought evidence for this history in the DNA of living African-Americans. The findings, published in PLOS Genetics, provide a map of African-American genetic diversity, shedding light on both their history and their health.
Buried in DNA, the researchers found the marks of slavery’s cruelties, including further evidence that white slave owners routinely fathered children with women held as slaves.
More...
http://www.nytimes.com/2016/05/28/scien ... d=45305309
Dubai says it has opened world's first functioning 3D-printed office
Dubai has opened what it said was the world's first functioning 3D-printed office building, part of a drive by the Gulf's main tourism and business hub to develop technology that cuts costs and saves time.
The printers - used industrially and also on a smaller scale to make digitally designed, three-dimensional objects from plastic - have not been used much for building.
This one used a special mixture of cement, a Dubai government statement said, and reliability tests were done in Britain and China.
More and photo at:
http://www.msn.com/en-ca/news/offbeat/d ... li=AAggNb9
Gene editing technique could transform future
CRISPR - get to know this acronym. It's good to know the name of something that could change your future.
Pronounced "crisper", it is a biological system for altering DNA. Known as gene editing, this technology has the potential to change the lives of everyone and everything on the planet.
A bold statement but that is the considered view of many of the world's leading geneticists and biochemists I've spoken to in recent months when working on my latest Panorama - Medicine's Big Breakthrough: Editing Your Genes.
CRISPR was co-discovered in 2012 by molecular biologist Professor Jennifer Doudna, whose team at the University of California, Berkeley, was studying how bacteria defend themselves against viral infection.
Prof Doudna and her collaborator Emmanuelle Charpentier are now among the world's most influential scientists. The natural system they discovered can be used by biologists to make precise changes to any DNA.
She told me: "Since we published our work four years ago laboratories around the world have adopted this technology for applications in animals, plants, humans, fungi, other bacteria: essentially any kind of organism they are studying."
More....
http://www.bbc.com/news/health-36439260
********
Species-wide Gene Editing, Applauded and Feared, Gets a Push
A revolutionary technology known as “gene drive,” which for the first time gives humans the power to alter or perhaps eliminate entire populations of organisms in the wild, has stirred both excitement and fear since scientists proposed a means to construct it two years ago.
Scientists dream of deploying gene drive, for example, to wipe out malaria-carrying mosquitoes that cause the deaths of 300,000 African children each year, or invasive rodents that damage island ecosystems. But some experts have warned that the technique could lead to unforeseen harm to the environment. Some scientists have called on the federal government to regulate it, and some environmental watchdogs have called for a moratorium.
On Wednesday, the National Academies of Sciences, Engineering and Medicine, the premier advisory group for the federal government on scientific matters, endorsed continued research on the technology, concluding after nearly a yearlong study that while it poses risks, its possible benefits make it crucial to pursue. The group also set out a path to conducting what it called “carefully controlled field trials,” despite what some scientists say is the substantial risk of inadvertent release into the environment.
More...
http://www.nytimes.com/2016/06/09/scien ... d=71987722
*******
Microsoft Finds Cancer Clues in Search Queries
Microsoft scientists have demonstrated that by analyzing large samples of search engine queries they may in some cases be able to identify internet users who are suffering from pancreatic cancer, even before they have received a diagnosis of the disease.
More...
http://www.nytimes.com/2016/06/08/techn ... 87722&_r=0
******
The Web’s Creator Looks to Reinvent It
Extract:
Today, the World Wide Web has become a system that is often subject to control by governments and corporations. Countries like China can block certain web pages from their citizens, and cloud services like Amazon Web Services hold powerful sway. So what might happen, the computer scientists posited, if they could harness newer technologies — like the software used for digital currencies, or the technology of peer-to-peer music sharing — to create a more decentralized web with more privacy, less government and corporate control, and a level of permanence and reliability?
More...
http://www.nytimes.com/2016/06/08/techn ... d=71987722
CRISPR - get to know this acronym. It's good to know the name of something that could change your future.
Pronounced "crisper", it is a biological system for altering DNA. Known as gene editing, this technology has the potential to change the lives of everyone and everything on the planet.
A bold statement but that is the considered view of many of the world's leading geneticists and biochemists I've spoken to in recent months when working on my latest Panorama - Medicine's Big Breakthrough: Editing Your Genes.
CRISPR was co-discovered in 2012 by molecular biologist Professor Jennifer Doudna whose team at Berkeley, University of California was studying how bacteria defend themselves against viral infection.
Prof Doudna and her collaborator Emmanuelle Charpentier are now among the world's most influential scientists. The natural system they discovered can be used by biologists to make precise changes to any DNA.
She told me: "Since we published our work four years ago laboratories around the world have adopted this technology for applications in animals, plants, humans, fungi, other bacteria: essentially any kind of organism they are studying."
■Human transplant organs grown in pigs
■The promise of gene editing
More....
http://www.bbc.com/news/health-36439260
********
Species-wide Gene Editing, Applauded and Feared, Gets a Push
A revolutionary technology known as “gene drive,” which for the first time gives humans the power to alter or perhaps eliminate entire populations of organisms in the wild, has stirred both excitement and fear since scientists proposed a means to construct it two years ago.
Scientists dream of deploying gene drive, for example, to wipe out malaria-carrying mosquitoes that cause the deaths of 300,000 African children each year, or invasive rodents that damage island ecosystems. But some experts have warned that the technique could lead to unforeseen harm to the environment. Some scientists have called on the federal government to regulate it, and some environmental watchdogs have called for a moratorium.
On Wednesday, the National Academies of Sciences, Engineering and Medicine, the premier advisory group for the federal government on scientific matters, endorsed continued research on the technology, concluding after nearly a yearlong study that while it poses risks, its possible benefits make it crucial to pursue. The group also set out a path to conducting what it called “carefully controlled field trials,” despite what some scientists say is the substantial risk of inadvertent release into the environment.
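The "super-Mendelian" trick that makes a gene drive so powerful (and so feared) is easy to see in a toy model: a heterozygote converts its normal chromosome, so it passes the drive to far more than the usual half of its offspring. The deterministic sketch below ignores fitness costs and resistance alleles, which real proposals must model:

```python
def drive_frequency(q0, homing_efficiency, generations):
    """Deterministic spread of a neutral gene-drive allele.

    With homing efficiency e, heterozygotes transmit the drive at
    rate (1+e)/2 instead of the Mendelian 1/2, so under random
    mating the allele frequency follows q' = q^2 + q(1-q)(1+e).
    Even a rare drive allele then sweeps toward fixation.
    """
    q = q0
    history = [q]
    for _ in range(generations):
        q = q * q + q * (1 - q) * (1 + homing_efficiency)
        history.append(q)
    return history
```

With `homing_efficiency=0` the recursion collapses to ordinary inheritance and the frequency stays put — which is exactly why a drive behaves so differently from a normal transgene.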
More...
http://www.nytimes.com/2016/06/09/scien ... d=71987722
*******
Microsoft Finds Cancer Clues in Search Queries
Microsoft scientists have demonstrated that by analyzing large samples of search engine queries they may in some cases be able to identify internet users who are suffering from pancreatic cancer, even before they have received a diagnosis of the disease.
More...
http://www.nytimes.com/2016/06/08/techn ... 87722&_r=0
******
The Web’s Creator Looks to Reinvent It
Extract:
Today, the World Wide Web has become a system that is often subject to control by governments and corporations. Countries like China can block certain web pages from their citizens, and cloud services like Amazon Web Services hold powerful sway. So what might happen, the computer scientists posited, if they could harness newer technologies — like the software used for digital currencies, or the technology of peer-to-peer music sharing — to create a more decentralized web with more privacy, less government and corporate control, and a level of permanence and reliability?
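One of the "newer technologies" the researchers have in mind is content addressing, the core idea in peer-to-peer systems like BitTorrent and IPFS: data is named by its own hash, so any peer can serve it and tampering is detectable. A minimal sketch of that permanence-and-integrity property:

```python
import hashlib

class ContentStore:
    """A toy content-addressed store: blocks are named by the
    SHA-256 hash of their contents, so the name doubles as an
    integrity check and no central server decides what it means.
    """
    def __init__(self):
        self._blocks = {}

    def put(self, data: bytes) -> str:
        address = hashlib.sha256(data).hexdigest()
        self._blocks[address] = data
        return address

    def get(self, address: str) -> bytes:
        data = self._blocks[address]
        # verify integrity: the name *is* the checksum
        if hashlib.sha256(data).hexdigest() != address:
            raise ValueError("block corrupted or forged")
        return data
```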
More...
http://www.nytimes.com/2016/06/08/techn ... d=71987722
Goodbye, Password. Banks Opt to Scan Fingers and Faces Instead.
The banking password may be about to expire — forever.
Some of the nation’s largest banks, acknowledging that traditional passwords are either too cumbersome or no longer secure, are increasingly using fingerprints, facial scans and other types of biometrics to safeguard accounts.
Millions of customers at Bank of America, JPMorgan Chase and Wells Fargo routinely use fingerprints to log into their bank accounts through their mobile phones. This feature, which some of the largest banks have introduced in the last few months, is enabling a huge share of American banking customers to verify their identities with biometrics. And millions more are expected to opt in as more phones incorporate fingerprint scans.
Other uses of biometrics are also coming online. Wells Fargo lets some customers scan their eyes with their mobile phones to log into corporate accounts and wire millions of dollars. Citigroup can help verify 800,000 of its credit card customers by their voices. USAA, which provides insurance and banking services to members of the military and their families, identifies some of its customers through their facial contours.
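Part of what makes biometrics different from passwords is that a fingerprint or face scan never repeats exactly, so systems accept any probe "close enough" to the enrolled template rather than demanding an exact match. The sketch below compares feature vectors by cosine similarity; the vectors and threshold are illustrative, not any bank's actual scheme:

```python
import math

def biometric_match(enrolled, probe, threshold=0.95):
    """Return True if two biometric feature vectors are close
    enough (cosine similarity >= threshold) to count as the
    same person. Tuning the threshold trades false accepts
    against false rejects.
    """
    dot = sum(a * b for a, b in zip(enrolled, probe))
    norm = (math.sqrt(sum(a * a for a in enrolled))
            * math.sqrt(sum(b * b for b in probe)))
    return dot / norm >= threshold
```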
More...
http://www.nytimes.com/2016/06/22/busin ... d=71987722
Artificial Intelligence’s White Guy Problem
ACCORDING to some prominent voices in the tech world, artificial intelligence presents a looming existential threat to humanity: Warnings by luminaries like Elon Musk and Nick Bostrom about “the singularity” — when machines become smarter than humans — have attracted millions of dollars and spawned a multitude of conferences.
But this hand-wringing is a distraction from the very real problems with artificial intelligence today, which may already be exacerbating inequality in the workplace, at home and in our legal and judicial systems. Sexism, racism and other forms of discrimination are being built into the machine-learning algorithms that underlie the technology behind many “intelligent” systems that shape how we are categorized and advertised to.
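One simple audit for the kind of baked-in discrimination described above is to compare how often an algorithm grants a favorable outcome to each group — the "demographic parity" gap. It is only one of several competing fairness metrics, but it shows how concretely bias can be measured:

```python
def demographic_parity_gap(decisions):
    """Measure how differently an algorithm treats groups.

    `decisions` maps a group label to a list of 0/1 outcomes
    (e.g. loan approved). Returns per-group approval rates and
    the gap between the best- and worst-treated group.
    """
    rates = {g: sum(out) / len(out) for g, out in decisions.items()}
    gap = max(rates.values()) - min(rates.values())
    return rates, gap
```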
More..
http://www.nytimes.com/2016/06/26/opini ... ef=opinion
Motor Mouth: Is your self-driving car planning to kill you?
The road to autonomous automobiles is about to get even more complicated. I’m not talking about the plethora of “seeing-eye dog” sensors that will allow self-driving automobiles to navigate our highways and byways unaided. Or even the incredible amount of connectivity that vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2X) will require so that our automobiles don’t play bumper cars with each other.
No, what I am talking about are moral dilemmas, the kind of complicated, life-altering decisions that would tax even great minds like, say, Jeremy Bentham (the author of An Introduction to the Principles of Morals and Legislation) and John Stuart Mill (author of Utilitarianism and, as far as I have been able to tell, just about the smartest guy there’s ever been). Stuff that will determine if we really are ready to turn over control of our cars to a computer. Decisions, in fact, that will decide whether we will actually buy the self-driving cars we are told are our future or whether they’ll be relegated to some futuristic junkyard like so many computerized Ford Edsels.
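Bentham and Mill's utilitarian rule fits in one line of code, which is precisely what makes it unsettling here: a car could literally run it. The numbers below are made up, and the real ethical debate starts exactly where this crude rule ends — whose harm counts, and how much:

```python
def utilitarian_choice(options):
    """Pick the maneuver with the least expected harm.

    `options` maps a maneuver name to its expected casualties
    (hypothetical figures). This is the 'greatest good' calculus
    reduced to a min() — with none of the moral nuance.
    """
    return min(options, key=options.get)
```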
More...
http://www.msn.com/en-ca/autos/news/mot ... li=AAggNb9
Why We Need to Pick Up Alvin Toffler’s Torch
More than 40 years ago, Alvin Toffler, a writer who had fashioned himself into one of the first futurists, warned that the accelerating pace of technological change would soon make us all sick. He called the sickness “future shock,” which he described in his totemic book of the same name, published in 1970.
In Mr. Toffler’s coinage, future shock wasn’t simply a metaphor for our difficulties in dealing with new things. It was a real psychological malady, the “dizzying disorientation brought on by the premature arrival of the future.” And “unless intelligent steps are taken to combat it,” he warned, “millions of human beings will find themselves increasingly disoriented, progressively incompetent to deal rationally with their environments.”
Mr. Toffler, who collaborated on “Future Shock” and many of his other books with his wife, Heidi, died last week at 87. It is fitting that his death occurred in a period of weeks characterized by one example of madness after another — a geopolitical paroxysm marked by ISIS bombings, “Brexit,” rumors of Mike Tyson taking the stage at a national political convention and a computer-piloted Tesla crashing into an old-fashioned tractor-trailer. It would be facile to attribute any one of these events to future shock.
Yet in rereading Mr. Toffler’s book, as I did last week, it seems clear that his diagnosis has largely panned out, with local and global crises arising daily from our collective inability to deal with ever-faster change.
All around, technology is altering the world: Social media is subsuming journalism, politics and even terrorist organizations. Inequality, driven in part by techno-abetted globalization, has created economic panic across much of the Western world. National governments are in a slow-moving war for dominance with a handful of the most powerful corporations the world has ever seen — all of which happen to be tech companies.
But even though these and bigger changes are just getting started — here come artificial intelligence, gene editing, drones, better virtual reality and a battery-powered transportation system — futurism has fallen out of favor. Even as the pace of technology keeps increasing, we haven’t developed many good ways, as a society, to think about long-term change.
More...
http://www.nytimes.com/2016/07/07/techn ... torch.html
Ray Kurzweil: The world isn’t getting worse — our information is getting better
Ray Kurzweil, the author, inventor, computer scientist, futurist and Google employee, was the featured keynote speaker Thursday afternoon at Postback, the annual conference presented by Seattle mobile marketing company Tune. His topic was the future of mobile technology. In Kurzweil’s world, however, that doesn’t just mean the future of smartphones — it means the future of humanity.
Continue reading for a few highlights from his talk.
On the effect of the modern information era: People think the world’s getting worse, and we see that on the left and the right, and we see that in other countries. People think the world is getting worse. … That’s the perception. What’s actually happening is our information about what’s wrong in the world is getting better. A century ago, there would be a battle that wiped out the next village, you’d never even hear about it. Now there’s an incident halfway around the globe and we not only hear about it, we experience it.
On the potential of human genomics: It’s not just collecting what is basically the object code of life that is expanding exponentially. Our ability to understand it, to reverse-engineer it, to simulate it, and most importantly to reprogram this outdated software is also expanding exponentially. Genes are software programs. It’s not a metaphor. They are sequences of data. But they evolved many years ago, many tens of thousands of years ago, when conditions were different.
How technology will change humanity’s geographic needs: We’re only crowded because we’ve crowded ourselves into cities. Try taking a train trip across the United States, or Europe or Asia or anywhere in the world. Ninety-nine percent of the land is not used. Now, we don’t want to use it because you don’t want to be out in the boondocks if you don’t have people to work and play with. That’s already changing now that we have some level of virtual communication. We can have workgroups that are spread out. … But ultimately, we’ll have full-immersion virtual reality from within the nervous system, augmented reality.
On connecting the brain directly to the cloud: We don’t yet have brain extenders directly from our brain. We do have brain extenders indirectly. I mean this (holds up his smartphone) is a brain extender. … Ultimately we’ll put them directly in our brains. But not just to do search and language translation and other types of things we do now with mobile apps, but to actually extend the very scope of our brain.
Why machines won’t displace humans: We’re going to merge with them, we’re going to make ourselves smarter. We’re already doing that. These mobile devices make us smarter. We’re routinely doing things we couldn’t possibly do without these brain extenders.
http://www.geekwire.com/2016/ray-kurzwe ... ng-better/
20 jobs where robots are already replacing humans
Rise of the machines
Think your job is super-secure? Don't get too cozy. Experts predict many existing roles will be automated within the next 30 years, and the robots are already taking over. Take a look at some of the jobs automatons are stealing right now and find out if yours could be on the line.
Slide show:
http://www.msn.com/en-ca/money/topstori ... b9#image=1
Make Algorithms Accountable
Algorithms are ubiquitous in our lives. They map out the best route to our destination and help us find new music based on what we listen to now. But they are also being employed to inform fundamental decisions about our lives.
Companies use them to sort through stacks of résumés from job seekers. Credit agencies use them to determine our credit scores. And the criminal justice system is increasingly using algorithms to predict a defendant’s future criminality.
Those computer-generated criminal “risk scores” were at the center of a recent Wisconsin Supreme Court decision that set the first significant limits on the use of risk algorithms in sentencing.
The court ruled that while judges could use these risk scores, the scores could not be a “determinative” factor in whether a defendant was jailed or placed on probation. And, most important, the court stipulated that a presentence report submitted to the judge must include a warning about the limits of the algorithm’s accuracy.
This warning requirement is an important milestone in the debate over how our data-driven society should hold decision-making software accountable. But advocates for big data due process argue that much more must be done to assure the appropriateness and accuracy of algorithm results.
An algorithm is a procedure or set of instructions often used by a computer to solve a problem. Many algorithms are secret. In Wisconsin, for instance, the risk-score formula was developed by a private company and has never been publicly disclosed because it is considered proprietary. This secrecy has made it difficult for lawyers to challenge a result.
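The math inside such a risk tool is rarely exotic; a logistic score like the sketch below is typical (the weights here are entirely made up). Once the weights and training data are disclosed, the formula is trivial to inspect — which is why secrecy, not complexity, is what keeps defendants from contesting a score:

```python
import math

def risk_score(features, weights, bias):
    """A minimal logistic risk score: a weighted sum of inputs
    squashed into (0, 1). Proprietary sentencing tools are more
    elaborate, but many share this basic shape.
    """
    z = bias + sum(w * x for w, x in zip(weights, features))
    return 1 / (1 + math.exp(-z))
```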
More...
http://www.nytimes.com/2016/08/01/opini ... 87722&_r=0
1,000 robots perform creepy dance routine in China
Bizarre footage of more than 1,000 robots performing a synchronised dance routine in China.
Video:
http://www.msn.com/en-ca/video/viral/10 ... ailsignout
N.I.H. May Fund Human-Animal Stem Cell Research
Extract:
If the funding ban is lifted, it could help patients by, for example, encouraging research in which a pig grows a human kidney for a transplant.
But the very idea of a human-animal mix can be chilling, and will not meet with universal acceptance.
In particular, when human cells injected into an animal embryo develop in part of that animal’s brain, difficult questions arise, said Paul Knoepfler, a stem cell researcher at the University of California, Davis.
“There’s no clear dividing line because we lack an understanding of at what point humanization of an animal brain could lead to more humanlike thought or consciousness,” he said.
More...
http://www.nytimes.com/2016/08/05/healt ... d=71987722
Self-Service Checkouts Can Turn Customers Into Shoplifters, Study Says
Self-service checkout technology may offer convenience and speed, but it also helps turn law-abiding shoppers into petty thieves by giving them “ready-made excuses” to take merchandise without paying, two criminologists say.
In a study of retailers in the United States, Britain and other European countries, Professor Adrian Beck and Matt Hopkins of the University of Leicester in England said the use of self-service lanes and smartphone apps to make purchases generated a loss rate of nearly 4 percent, more than double the average.
Given that the profit margin among European grocers is 3 percent, the technology is practically a nonprofit venture, according to the study, which was released this month.
The scanning technology, which grew in popularity about 10 years ago, relies largely on the honor system. Instead of having a cashier ring up and bag a purchase, the shopper is solely responsible for completing the transaction. That absence of human intervention, however, reduces the perception of risk and could make shoplifting more common, the report said.
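The study's "practically a nonprofit venture" line is a back-of-envelope calculation: a roughly 4 percent shrinkage rate against a 3 percent grocery margin leaves the lanes underwater. A one-line sketch of that arithmetic, using the article's figures:

```python
def self_checkout_profit(revenue, margin_rate=0.03, loss_rate=0.04):
    """Rough profit after theft ('shrinkage'): margin earned on
    revenue minus stock lost, using the study's ~3% grocery
    margin and ~4% self-checkout loss rate. Negative means the
    lanes cost more than they earn.
    """
    return revenue * (margin_rate - loss_rate)
```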
More...
http://www.nytimes.com/2016/08/11/busin ... d=71987722
Uber Aims for an Edge in the Race for a Self-Driving Future
A world in which cars drive themselves may come sooner than once thought.
On Thursday, Uber said that it would begin testing self-driving cars in Pittsburgh in a matter of weeks, allowing people in the city to hail modified versions of Volvo sport utility vehicles to get around the city.
Uber also said it had acquired Otto, a 90-person start-up including former Google and Carnegie Mellon engineers that is focused on developing self-driving truck technology to upend the shipping industry.
Those moves are the most recent indications of Uber’s ambitions for autonomous vehicles that can provide services to both consumers and businesses.
And they come after Ford Motor’s announcement this week that it would put fleets of self-driving taxis onto American roads in five years. As part of that effort, Ford said it had acquired an Israeli start-up, Saips, that specializes in computer vision, a crucial technology for self-driving cars. Ford also announced investments in three other companies involved in major technologies for driverless vehicles.
Suddenly, it seems, both Silicon Valley and Detroit are doubling down on their bets for autonomous vehicles. And in what could emerge as a self-driving-car arms race, the players are investing in, or partnering with, or buying outright the specialty companies most focused on the requisite hardware, software and artificial intelligence capabilities.
More..
http://www.nytimes.com/2016/08/19/techn ... d=71987722
A Tricky Way to Claim Land on the Moon
When you look up at the moon, you might be gob-smacked by its beauty, its wrinkled features, the way its silvery glow reflects off rooftops. You might be annoyed by the way it dominates the night sky. Or, if you’re like me, you might enjoy the moon as a quiet companion to your restless nocturnal mind.
When Naveen Jain looks at the moon, he thinks about money. Paraphrasing John F. Kennedy, he says we choose to go to the moon, “not because it is easy, but because it is great business.”
Last week, his company, Moon Express, became the first private entity to win federal permission to leave Earth orbit and shoot for the moon, something he hopes to do as early as next year.
“There are a tremendous amount of resources available on the moon, and beyond the moon, in space,” he says. “We fight over land, we fight over water, we fight over energy. And we never look up and say, ‘Holy cow, with the abundance of land, energy, water up there, what are we fighting about?’”
More....
http://www.msn.com/en-ca/money/economy/ ... li=AAggNb9