TECHNOLOGY AND DEVELOPMENT
A.I. Brings the Robot Wingman to Aerial Combat
An Air Force program shows how the Pentagon is starting to embrace the potential of a rapidly emerging technology, with far-reaching implications for war-fighting tactics, military culture and the defense industry.
It is powered into flight by a rocket engine. It can fly a distance equal to the width of China. It has a stealthy design and is capable of carrying missiles that can hit enemy targets far beyond its visual range.
But what really distinguishes the Air Force’s pilotless XQ-58A Valkyrie experimental aircraft is that it is run by artificial intelligence, putting it at the forefront of efforts by the U.S. military to harness the capacities of an emerging technology whose vast potential benefits are tempered by deep concerns about how much autonomy to grant to a lethal weapon.
Essentially a next-generation drone, the Valkyrie is a prototype for what the Air Force hopes can become a potent supplement to its fleet of traditional fighter jets, giving human pilots a swarm of highly capable robot wingmen to deploy in battle. Its mission is to marry artificial intelligence and its sensors to identify and evaluate enemy threats and then, after getting human sign-off, to move in for the kill.
On a recent day at Eglin Air Force Base on Florida’s Gulf Coast, Maj. Ross Elder, 34, a test pilot from West Virginia, was preparing for an exercise in which he would fly his F-15 fighter alongside the Valkyrie.
“It’s a very strange feeling,” Major Elder said, as other members of the Air Force team prepared to test the engine on the Valkyrie. “I’m flying off the wing of something that’s making its own decisions. And it’s not a human brain.”
The Valkyrie program provides a glimpse into how the U.S. weapons business, military culture, combat tactics and competition with rival nations are being reshaped in possibly far-reaching ways by rapid advances in technology.
The emergence of artificial intelligence is helping to spawn a new generation of Pentagon contractors who are seeking to undercut, or at least disrupt, the longstanding primacy of the handful of giant firms who supply the armed forces with planes, missiles, tanks and ships.
The possibility of building fleets of smart but relatively inexpensive weapons that could be deployed in large numbers is allowing Pentagon officials to think in new ways about taking on enemy forces.
It also is forcing them to confront questions about what role humans should play in conflicts waged with software that is written to kill, a question that is especially fraught for the United States given its record of errant strikes by conventional drones that inflict civilian casualties.
And gaining and maintaining an edge in artificial intelligence is one element of an increasingly open race with China for technological superiority in national security.
The Valkyrie is a prototype for what the Air Force hopes can become a potent supplement to its fleet of traditional fighter jets, giving human pilots a swarm of highly capable robot wingmen to deploy in battle. Credit: Edmund D. Fountain for The New York Times
Military planners are worried that the current mix of Air Force planes and weapons systems — despite the trillions of dollars invested in them — can no longer be counted on to dominate if a full-scale conflict with China were to break out, particularly if it involved a Chinese invasion of Taiwan.
That is because China is lining its coasts, and artificial islands it has constructed in the South China Sea, with more than a thousand anti-ship and antiaircraft missiles that severely curtail the United States’ ability to respond to any possible invasion of Taiwan without massive losses in the air and at sea.
After decades of building fewer and fewer increasingly expensive combat aircraft — the F-35 fighter jet costs $80 million per unit — the Air Force now has the smallest and oldest fleet in its history.
That is where the new generation of A.I. drones, known as collaborative combat aircraft, will come in. The Air Force is planning to build 1,000 to 2,000 of them for as little as $3 million apiece, or a fraction of the cost of an advanced fighter, which is why some at the Air Force call the program “affordable mass.”
There will be a range of specialized types of these robot aircraft. Some will focus on surveillance or resupply missions, others will fly in attack swarms and still others will serve as a “loyal wingman” to a human pilot.
The drones, for example, could fly in front of piloted combat aircraft, doing early, high-risk surveillance. They could also play a major role in disabling enemy air defenses, taking risks to knock out land-based missile targets that would be considered too dangerous for a human-piloted plane.
The cheapest ones will be considered expendable, meaning they likely will only have one mission. The more sophisticated of these robot aircraft might cost as much as $25 million, according to an estimate by the House of Representatives, still far less than a piloted fighter jet.
“Is it a perfect answer? It is never a perfect answer when you look into the future,” said Maj. Gen. R. Scott Jobe, who until this summer was in charge of setting requirements for the air combat program, as the Air Force works to incorporate A.I. into its fighter jets and drones.
“But you can present potential adversaries with dilemmas — and one of those dilemmas is mass,” General Jobe said in an interview at the Pentagon, referring to the deployment of large numbers of drones against enemy forces. “You can bring mass to the battle space with potentially fewer people.”
The effort represents the beginning of a seismic shift in the way the Air Force buys some of its most important tools. After decades in which the Pentagon has focused on buying hardware built by traditional contractors like Lockheed Martin and Boeing, the emphasis is shifting to software that can enhance the capabilities of weapons systems, creating an opening for newer technology firms to grab pieces of the Pentagon’s vast procurement budget.
“Machines are actually drawing on the data and then creating their own outcomes,” said Brig. Gen. Dale White, the Pentagon official who has been in charge of the new acquisition program.
The Pentagon has spent several years building prototypes like the Valkyrie. Credit: Edmund D. Fountain for The New York Times
The Air Force realizes it must also confront deep concerns about military use of artificial intelligence, whether fear that the technology might turn against its human creators (like Skynet in the “Terminator” film series) or more immediate misgivings about allowing algorithms to guide the use of lethal force.
“You’re stepping over a moral line by outsourcing killing to machines — by allowing computer sensors rather than humans to take human life,” said Mary Wareham, the advocacy director of the arms division of Human Rights Watch, which is pushing for international limits on so-called lethal autonomous weapons.
A recently revised Pentagon policy on the use of artificial intelligence in weapons systems allows for the autonomous use of lethal force — but any particular plan to build or deploy such a weapon must first be reviewed and approved by a special military panel.
Asked if Air Force drones might eventually be able to conduct lethal strikes like this without explicit human sign-off on each attack, a Pentagon spokeswoman said in a statement to The New York Times that the question was too hypothetical to answer.
Any autonomous Air Force drone, the statement said, would have to be “designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force.”
Air Force officials said they fully understand that machines are not intelligent in the same way humans are. A.I. technology can also make mistakes — as has happened repeatedly in recent years with driverless cars — and machines have no built-in moral compass. The officials said they were considering those factors while building the system.
“It is an awesome responsibility,” said Col. Tucker Hamilton, the Air Force chief of A.I. Test and Operations, who also helps oversee the flight-test crews at Eglin Air Force Base, noting that “dystopian storytelling and pop culture has created a kind of frenzy” around artificial intelligence.
“We just need to get there methodically, deliberately, ethically — in baby steps,” he said.
The Pentagon Back Flip
Portraits of a century’s worth of Air Force leaders and aircraft in the Pentagon highlight the iconic role of the pilot. Credit: Kent Nishimura for The New York Times
The long, wood-paneled corridor in the Pentagon where the Air Force top brass have their offices is lined with portraits of a century’s worth of leaders, mixed with images of the flying machines that have given the United States global dominance in the air since World War II.
A common theme emerges from the images: the iconic role of the pilot.
Humans will continue to play a central role in the new vision for the Air Force, top Pentagon officials said, but they will increasingly be teamed with software engineers and machine learning experts, who will be constantly refining algorithms governing the operation of the robot wingmen that will fly alongside them.
Almost every aspect of Air Force operations will have to be revised to embrace this shift. It’s a task that through this summer had been largely entrusted to Generals White and Jobe, whose partnership Air Force officers nicknamed the Dale and Frag Show (General Jobe’s call sign as a pilot is Frag).
The Pentagon, through its research divisions like DARPA and the Air Force Research Laboratory, has already spent several years building prototypes like the Valkyrie and the software that runs it. But the experiment is now graduating to a so-called program of record, meaning if Congress approves, substantial taxpayer dollars will be allocated to buying the vehicles: a total of $5.8 billion over the next five years, according to the Air Force plan.
Unlike with the F-35 fighter jet, which is delivered as a package by Lockheed Martin and its subcontractors, the Air Force is planning to buy the aircraft and the software as separate purchases.
Kratos, the builder of the Valkyrie, is already preparing to bid on any future contract, as are other major companies such as General Atomics, which for years has built attack drones used in Iraq and Afghanistan, and Boeing, which has its own experimental autonomous fighter jet prototype, the MQ-28 Ghost Bat.
A separate set of software-first companies — tech start-ups such as Shield AI and Anduril that are funded by hundreds of millions of dollars in venture capital — are vying for the right to sell the Pentagon the artificial intelligence algorithms that will handle mission decisions.
The list of hurdles that must be cleared is long.
The Pentagon has a miserable record on building advanced software and trying to start its own artificial intelligence program. Over the years, it has cycled through various acronym-laden program offices that are created and then shut down with little to show.
There is constant turnover among leaders at the Pentagon, complicating efforts to keep moving ahead on schedule. General Jobe has already been assigned to a new role and General White soon will be.
Maj. Gen. R. Scott Jobe was in charge of setting requirements for the air combat program until this summer. Credit: Kent Nishimura for The New York Times
Brig. Gen. Dale White is the Pentagon official who has been in charge of the new acquisition program. Credit: Hailey Sadler for The New York Times
The Pentagon also is going to need to disrupt the iron-fisted control that the major defense contractors have on the flow of military spending. As the structure of the Valkyrie program suggests, the military wants to do more to harness the expertise of a new generation of software companies to deliver key parts of the package, introducing more competition, entrepreneurial speed and creativity into what has long been a risk-averse and slow-moving system.
The most important job, at least until recently, rested with General Jobe, who first made a name for himself in the Air Force two decades ago when he helped devise a bombing strategy to knock out deeply buried bunkers in Iraq that held critical military communication switches.
He was asked to make key decisions setting the framework for how the A.I.-powered robot airplanes will be built. During a Pentagon interview, and at other recent events, Generals Jobe and White both said one clear imperative is that humans will remain the ultimate decision makers — not the robot drones, known as C.C.A.s, the acronym for collaborative combat aircraft.
“I’m not going to have this robot go out and just start shooting at things,” General Jobe said during a briefing with Pentagon reporters late last year.
He added that a human would always be deciding when and how to have an A.I.-enabled aircraft engage with an enemy and that developers are building a firewall around certain A.I. functions to limit what the devices will be able to do on their own.
“Think of it as just an extension to your weapons bay if you’re in an F-22, F-35 or whatnot,” he said.
The Test Pilots
“It’s a very strange feeling,” Maj. Ross Elder said. “I’m flying off the wing of something that’s making its own decisions. And it’s not a human brain.” Credit: Edmund D. Fountain for The New York Times
Back in 1947, Chuck Yeager, then a young test pilot from Myra, W. Va., became the first human to fly faster than the speed of sound.
Seventy-six years later, another test pilot from West Virginia has become one of the first Air Force pilots to fly alongside an autonomous, A.I.-empowered combat drone.
Tall and lanky, with a slight Appalachian accent, Major Elder last month flew his F-15 Strike Eagle within 1,000 feet of the experimental XQ-58A Valkyrie — watching closely, like a parent running alongside a child learning how to ride a bike, as the drone flew on its own, reaching certain assigned speeds and altitudes.
The basic functional tests of the drone were just the lead-up to the real show, where the Valkyrie gets beyond using advanced autopilot tools and begins testing the war-fighting capabilities of its artificial intelligence. In a test slated for later this year, the combat drone will be asked to chase and then kill a simulated enemy target while out over the Gulf of Mexico, coming up with its own strategy for the mission.
During the current phase, the goal is to test the Valkyrie’s flight capacity and the A.I. software, so the aircraft is not carrying any weapons. The planned dogfight will be with a “constructed” enemy, although the A.I. agent onboard the Valkyrie will believe it is real.
Major Elder had no way to communicate directly with the autonomous drone at this early stage of development, so he had to watch very carefully as it set off on its mission.
“It wants to kill and survive,” Major Elder said of the training the drone has been given.
An unusual team of Air Force officers and civilians has been assembled at Eglin, which is one of the largest Air Force bases in the world. They include Capt. Rachel Price from Glendale, Ariz., who is wrapping up a Ph.D. at the Massachusetts Institute of Technology on computer deep learning, as well as Maj. Trent McMullen from Marietta, Ga., who has a master's degree in machine learning from Stanford University.
Pilots at Eglin Air Force Base planned to fly their F-16 fighter jets in tandem with A.I.-directed jets. Credit: Edmund D. Fountain for The New York Times
One of the things Major Elder watches for is any discrepancies between simulations run by computer before the flight and the actions by the drone when it is actually in the air — a “sim to real” problem, they call it — or even more worrisome, any sign of “emergent behavior,” where the robot drone is acting in a potentially harmful way.
During test flights, Major Elder or the team manager in the Eglin Air Force Base control tower can power down the A.I. platform while keeping the basic autopilot on the Valkyrie running. So can Capt. Abraham Eaton of Gorham, Maine, who serves as a flight test engineer on the project and is charged with helping evaluate the drone’s performance.
“How do you grade an artificial intelligence agent?” he asked rhetorically. “Do you grade it on a human scale? Probably not, right?”
Real adversaries will likely try to fool the artificial intelligence, for example by creating a virtual camouflage for enemy planes or targets to make the robot believe it is seeing something else.
The initial version of the A.I. software is more “deterministic,” meaning it is largely following scripts that it has been trained with, based on computer simulations the Air Force has run millions of times as it builds the system. Eventually, the A.I. software will have to be able to perceive the world around it — and learn to understand these kinds of tricks and overcome them, skills that will require massive data collection to train the algorithms. The software will have to be heavily protected against hacking by an enemy.
The hardest part of this task, Major Elder and other pilots said, is the vital trust building that is such a central element of the bond between a pilot and wingman — their lives depend on each other and on how each of them reacts. It is a concern back at the Pentagon, too.
“I need to know that those C.C.A.s are going to do what I expect them to do, because if they don’t, it could end badly for me,” General White said.
Capt. Abraham Eaton, a flight test engineer on the project, is charged with helping evaluate how well the drone performs. Credit: Edmund D. Fountain for The New York Times
The human pilots require safety harnesses and helmets. Credit: Edmund D. Fountain for The New York Times
In early tests, the autonomous drones already have shown that they will act in unusual ways, with the Valkyrie in one case going into a series of rolls. At first, Major Elder thought something was off, but it turned out that the software had determined that its infrared sensors could get a clearer picture if it did continuous flips. The maneuver would have been like a stomach-turning roller coaster ride for a human pilot, but the team later concluded the drone had achieved a better outcome for the mission.
Air Force pilots have experience with learning to trust computer automation — like the collision avoidance systems that take over if a fighter jet is headed into the ground or set to collide with another aircraft — two of the leading causes of death among pilots.
The pilots were initially reluctant to go into the air with the system engaged, as it would allow computers to take control of the planes, several pilots said in interviews. As evidence grew that the system saved lives, it was broadly embraced. But learning to trust robot combat drones will be an even bigger hurdle, senior Air Force officials acknowledged.
Air Force officials used the word “trust” dozens of times in a series of interviews about the challenges they face in building acceptance among pilots. They have already started flying the prototype robot drones with test pilots nearby, so they can get this process started.
The Air Force has also begun a second test program called Project Venom that will put pilots in six F-16 fighter jets equipped with artificial intelligence software that will handle key mission decisions.
The goal, Pentagon officials said, is an Air Force that is more unpredictable and lethal, creating greater deterrence for any moves by China, and a less deadly fight, at least for the United States Air Force.
Officials estimate that it could take five to 10 years to develop a functioning A.I.-based system for air combat. Air Force commanders are pushing to accelerate the effort — but recognize that speed cannot be the only objective.
“We’re not going to be there right away, but we’re going to get there,” General Jobe said. “It’s advanced and getting better every day as you continue to train these algorithms.”
Officials estimate that it could take five to 10 years to develop a functioning A.I.-based system for air combat. Credit: Edmund D. Fountain for The New York Times
https://www.nytimes.com/2023/08/27/us/p ... force.html
Re: TECHNOLOGY AND DEVELOPMENT
A.I. should be harnessed to bring peace in this world, not to produce better weapons for destroying it. But there will always be someone who wants to make money from war, as it is difficult to make money from peace.
Re: TECHNOLOGY AND DEVELOPMENT
TechCrunch
India's Aditya-L1 solar probe successfully lifts off toward the sun
Jagmeet Singh
Updated Sat, September 2, 2023 at 2:52 AM CDT
India has successfully launched its first space-based solar observatory mission — just 10 days after the landing of its spacecraft Chandrayaan-3 on the lunar south pole.
Called Aditya-L1, the spacecraft, weighing over 3,264 pounds, blasted off from the Satish Dhawan Space Centre spaceport in Sriharikota, in southern India, atop the 44.4-meter-tall polar satellite launch vehicle (PSLV-XL) at the targeted time of 11:50 a.m. local time on Saturday. It will cover a distance of 932,000 miles and take 125 days (over four months) to reach its destination: a halo orbit around one of the five sun-Earth Lagrangian points, which allow a spacecraft to track solar activity continuously, without occultations or eclipses.
India's space agency, the Indian Space Research Organization (ISRO), has installed seven payloads on the Aditya-L1 spacecraft, four for remote sensing and three for on-site experiments. Onboard instruments include a visible emission line coronagraph, solar ultraviolet imaging telescope, X-ray spectrometer, solar wind particle analyzer, plasma analyzer package and tri-axial high-resolution digital magnetometers, all equipped to collect the necessary data and observations. The overall purpose of the mission, codenamed PSLV-C57, is to observe solar activities and their effect on space weather in real time.
A little over an hour after liftoff, the PSLV injected the Aditya-L1 spacecraft into an elliptical orbit of 146 by 12,117 miles. This was the first time the launch vehicle's upper stage performed two burn sequences to put a spacecraft into its intended orbit.
"I want to congratulate PSLV for such a very different mission approach today to do this mission of Aditya-L1 to put it in the right orbit. Now, the Aditya-L1 will take its journey after some Earth maneuvers," ISRO chairman S. Somanath said while addressing the attendees at the space agency's mission control center. "Let us wish all the very best to the Aditya spacecraft for its long journey and being put around the halo orbit of L1."
The payloads on the spacecraft will study the three crucial parts of the sun: the photosphere, chromosphere and corona. Further, the three instruments for conducting on-site experiments will observe the local environment at the Lagrangian point L1.
Aditya-L1, for which the Indian government allocated approximately $46 million in 2019, was conceptualized in 2008 to study the solar corona, the outer layer of the sun's atmosphere, and was named Aditya ("sun" in Hindi). However, ISRO later renamed the mission Aditya-L1 to expand it to study solar and space environments.
"It's a dream come true for the team Aditya-L1," said Nigar Shaji, project director for the Aditya-L1 mission. "Once the Aditya [mission] is commissioned, it will be an asset to the heliophysics of the country and even to the global scientific fraternity."
In the past, the U.S., Europe and China conducted solar observatory missions in space to study the sun. However, it is the first time India is venturing into this domain, as it has hitherto focused on sun observation using ground-based telescopes.
The Indian space agency gained worldwide attention and praise last week when Chandrayaan-3 successfully made its soft landing on the moon. Earlier this week, ISRO posted a video, captured by the mission's lander, showing its rover moving across the lunar surface to find a safe route. The lunar mission will carry out a series of experiments intended to eventually aid a human landing.
"While the whole world watched this with bated breath, it is indeed a sunshine moment for India," said Jitendra Singh, the deputy minister for science and technology, while congratulating ISRO for the successful launch of the Aditya-L1 mission.
Alongside Aditya-L1, ISRO has long been working on a human spaceflight mission, Gaganyaan, planned for 2025. Meanwhile, the space agency is also looking to launch an unmanned mission to Venus.
In June, India became a signatory of NASA's Artemis Accords to participate in joint space experiments with partner nations. NASA also committed to training Indian astronauts at the Johnson Space Center in Houston and intends to send them to the International Space Station next year. Additionally, ISRO and NASA are working on a low-Earth observatory mission, slated to launch in 2024, to map the entire planet in 12 days and consistently analyze Earth's ecosystems, ice mass, vegetation biomass, sea level, and natural disasters and hazards.
Separately, India released a space policy earlier this year to boost private participation in its space missions. The South Asian nation already has over 150 space tech startups developing launch vehicles, satellites and Earth observatory solutions.
Funding in Indian space tech startups grew 17% to $112 million in 2022 from $96 million in 2021. The space tech sector also saw a significant 60% increase in capital infusion from last year, reaching $62 million in 2023, according to data recently released by analyst firm Tracxn. Investments in Indian startups are expected to grow further with the easing of norms for foreign direct investment, which various stakeholders have long demanded.
https://currently.att.yahoo.com/finance ... 37802.html
Re: TECHNOLOGY AND DEVELOPMENT
Reuters
Mon, September 4, 2023 at 12:32 PM CDT
Huawei Technologies and China's top chipmaker SMIC have built an advanced 7-nanometer processor to power Huawei's latest smartphone, according to a teardown report by analysis firm TechInsights.
Huawei's Mate 60 Pro is powered by a new Kirin 9000s chip that was made in China by Semiconductor Manufacturing International Corp (SMIC), TechInsights said in the report shared with Reuters on Monday.
Huawei started selling its Mate 60 Pro phone last week. The specifications it provided advertised the phone's ability to make satellite calls but offered no information on the power of the chipset inside.
The processor is the first to utilize SMIC's most advanced 7nm technology and suggests the Chinese government is making some headway in attempts to build a domestic chip ecosystem, the research firm said.
The firm's findings were first reported by Bloomberg News.
Huawei and SMIC did not immediately reply to Reuters' request for comment.
Buyers of the phone in China have been posting tear-down videos and sharing speed tests on social media that suggest the Mate 60 Pro is capable of download speeds exceeding those of top line 5G phones.
The phone's launch sent Chinese social media users and state media into a frenzy, with some noting it coincided with a visit by U.S. Commerce Secretary Gina Raimondo.
From 2019, the U.S. has restricted Huawei's access to chipmaking tools essential for producing the most advanced handset models, with the company only able to launch limited batches of 5G models using stockpiled chips.
But research firms told Reuters in July that they believed Huawei was planning a return to the 5G smartphone industry by the end of this year, using its own advances in semiconductor design tools along with chipmaking from SMIC.
Dan Hutcheson, an analyst with TechInsights, told Reuters the development comes as a "slap in the face" to the U.S.
"Raimondo comes seeking to cool things down, and this chip is [saying] 'look what we can do, we don't need you,'" he said.
(Reporting by Shivani Tanna in Bengaluru and Max A. Cherney in San Francisco; Editing by Sandra Maler and Shilpi Majumdar)
https://currently.att.yahoo.com/finance ... 18841.html
Britain Passes Sweeping New Online Safety Law
The far-reaching bill had set off debates about balancing free speech and privacy rights against efforts to halt the spread of harmful content online.
Britain passed a sweeping law on Tuesday to regulate online content, introducing age-verification requirements for pornography sites and other rules to reduce hate speech, harassment and other illicit material.
The Online Safety Bill, which also applies to terrorist propaganda, online fraud and child safety, is one of the most far-reaching attempts by a Western democracy to regulate online speech. About 300 pages long, the new rules took more than five years to develop, setting off intense debates about how to balance free expression and privacy against barring harmful content, particularly material aimed at children.
At one point, messaging services including WhatsApp and Signal threatened to abandon the British market altogether until provisions in the bill that were seen as weakening encryption standards were changed.
The British law goes further than efforts elsewhere to regulate online content, forcing companies to proactively screen for objectionable material and to judge whether it is illegal, rather than requiring them to act only after being alerted to illicit content, according to Graham Smith, a London lawyer focused on internet law.
It is part of a wave of rules in Europe aimed at ending an era of self-regulation in which tech companies set their own policies about what content could stay up or be taken down. The Digital Services Act, a European Union law, recently began taking effect and requires companies to more aggressively police their platforms for illicit material.
British political figures have been under pressure to pass the new policy as concerns grew about the mental health effects of internet and social media use among young people. Families that attributed their children’s suicides to social media were among the most aggressive champions of the bill.
Under the new law, content aimed at children that promotes suicide, self-harm and eating disorders must be restricted. Pornography companies, social media platforms and other services will be required to introduce age-verification measures to prevent children from gaining access to pornography, a shift that some groups have said will harm the availability of information online and undercut privacy. The Wikimedia Foundation, the operator of Wikipedia, has said it will be unable to comply with the law and may be blocked as a result.
TikTok, YouTube, Facebook and Instagram will also be required to introduce features that allow users to choose to encounter lower amounts of harmful content, such as eating disorders, self-harm, racism, misogyny or antisemitism.
“At its heart, the bill contains a simple idea: that providers should consider the foreseeable risks to which their services give rise and seek to mitigate — like many other industries already do,” said Lorna Woods, a professor of internet law at the University of Essex, who helped draft the law.
The bill has drawn criticism from tech firms, free speech activists and privacy groups who say it threatens freedom of expression because it will incentivize companies to take down content.
Questions remain about how the law will be enforced. That responsibility falls to Ofcom, the British regulator in charge of overseeing broadcast television and telecommunications, which now must outline rules for how it will police online safety.
Companies that do not comply will face fines of up to 18 million pounds, or about $22.3 million, a small sum for tech giants that earn billions per quarter. Company executives could face criminal action for not providing information during Ofcom investigations, or if they do not comply with rules related to child safety and child sexual exploitation.
https://www.nytimes.com/2023/09/19/tech ... 778d3e6de3
ChatGPT Can Now Generate Images, Too
OpenAI released a new version of its DALL-E image generator to a small group of testers and incorporated the technology into its popular ChatGPT chatbot.
A.I.-generated images made using OpenAI’s DALL-E 3.Credit...OpenAI
ChatGPT can now generate images — and they are shockingly detailed.
On Wednesday, OpenAI, the San Francisco artificial intelligence start-up, released a new version of its DALL-E image generator to a small group of testers and folded the technology into ChatGPT, its popular online chatbot.
Called DALL-E 3, it can produce more convincing images than previous versions of the technology, showing a particular knack for images containing letters, numbers and human hands, the company said.
“It is far better at understanding and representing what the user is asking for,” said Aditya Ramesh, an OpenAI researcher, adding that the technology was built to have a more precise grasp of the English language.
By adding the latest version of DALL-E to ChatGPT, OpenAI is solidifying its chatbot as a hub for generative A.I., which can produce text, images, sounds, software and other digital media on its own. Since ChatGPT went viral last year, it has kicked off a race among Silicon Valley tech giants to be at the forefront of A.I. advancements.
On Tuesday, Google released a new version of its chatbot, Bard, which connects with several of the company’s most popular services, including Gmail, YouTube and Docs. Midjourney and Stable Diffusion, two other image generators, updated their models this summer.
A New Generation of Chatbots
A brave new world. A new crop of chatbots powered by artificial intelligence has ignited a scramble to determine whether the technology could upend the economics of the internet, turning today’s powerhouses into has-beens and creating the industry’s next giants. Here are the bots to know:
ChatGPT. ChatGPT, the artificial intelligence language model from a research lab, OpenAI, has been making headlines since November for its ability to respond to complex questions, write poetry, generate code, plan vacations and translate languages. GPT-4, the latest version introduced in mid-March, can even respond to images (and ace the Uniform Bar Exam).
Bing. Two months after ChatGPT’s debut, Microsoft, OpenAI’s primary investor and partner, added a similar chatbot, capable of having open-ended text conversations on virtually any topic, to its Bing internet search engine. But it was the bot’s occasionally inaccurate, misleading and weird responses that drew much of the attention after its release.
Bard. Google’s chatbot, called Bard, was released in March to a limited number of users in the United States and Britain. Originally conceived as a creative tool designed to draft emails and poems, it can generate ideas, write blog posts and answer questions with facts or opinions.
Ernie. The search giant Baidu unveiled China’s first major rival to ChatGPT in March. The debut of Ernie, short for Enhanced Representation through Knowledge Integration, turned out to be a flop after a promised “live” demonstration of the bot was revealed to have been recorded.
OpenAI has long offered ways of connecting its chatbot with other online services, including Expedia, OpenTable and Wikipedia. But this is the first time the start-up has combined a chatbot with an image generator.
DALL-E and ChatGPT were previously separate applications. But with the latest release, people can now use ChatGPT’s service to produce digital images simply by describing what they want to see. Or they can create images using descriptions generated by the chatbot, further automating the generation of graphics, art and other media.
In a demonstration this week, Gabriel Goh, an OpenAI researcher, showed how ChatGPT can now generate detailed textual descriptions that are then used to produce images. After creating descriptions of a logo for a restaurant called Mountain Ramen, for instance, the bot generated several images from those descriptions in a matter of seconds.
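For readers who want a sense of what that hand-off looks like outside of ChatGPT itself, the sketch below chains a chat request into an image request using OpenAI's public Python SDK. It is a minimal illustration under stated assumptions: the model names, the example prompt and the client setup are chosen for demonstration, and the integration described in the article performs this step internally rather than through two separate API calls.

    # A minimal sketch, assuming the public OpenAI Python SDK (v1): a chat
    # model drafts a detailed image description, then DALL-E 3 renders it.
    # Model names and the example prompt are illustrative assumptions.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # Step 1: expand a short idea into a detailed visual description.
    chat = client.chat.completions.create(
        model="gpt-4",
        messages=[{
            "role": "user",
            "content": "Write a detailed visual description of a logo for a "
                       "restaurant called Mountain Ramen.",
        }],
    )
    description = chat.choices[0].message.content

    # Step 2: pass that description to the image model.
    image = client.images.generate(
        model="dall-e-3",
        prompt=description,
        size="1024x1024",
        n=1,
    )
    print(image.data[0].url)  # link to the generated logo image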
The new version of DALL-E can produce images from multi-paragraph descriptions and closely follow instructions laid out in minute detail, Mr. Goh said. Like all image generators — and other A.I. systems — it is also prone to mistakes, he said.
As it works to refine the technology, OpenAI is not sharing DALL-E 3 with the wider public until next month. DALL-E 3 will then be available through ChatGPT Plus, a service that costs $20 a month.
Image-generating technology can be used to spread large amounts of disinformation online, experts have warned. To guard against that with DALL-E 3, OpenAI has incorporated tools designed to prevent problematic subjects, such as sexually explicit images and portrayals of public figures. The company is also trying to limit DALL-E’s ability to imitate specific artists’ styles.
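As one illustration of the general pattern of screening prompts before generation, the sketch below checks a request with OpenAI's public text moderation endpoint and a hypothetical blocklist of style terms. This is an assumption for demonstration only, not OpenAI's actual DALL-E 3 safety system, whose internal filters are not public.

    # A minimal sketch of prompt screening before image generation. Not
    # OpenAI's internal DALL-E 3 safeguards; it only illustrates the pattern,
    # using the public moderation endpoint plus a placeholder blocklist.
    from openai import OpenAI

    client = OpenAI()

    BLOCKED_STYLE_TERMS = {"in the style of"}  # placeholder entries, not a real policy

    def screen_prompt(prompt: str) -> bool:
        """Return True if the prompt should be allowed through to the image model."""
        # Flag disallowed content categories via the moderation API.
        result = client.moderations.create(input=prompt)
        if result.results[0].flagged:
            return False
        # Crude check for requests to imitate a specific artist's style.
        lowered = prompt.lower()
        return not any(term in lowered for term in BLOCKED_STYLE_TERMS)

    prompt = "A watercolor logo for a restaurant called Mountain Ramen"
    if screen_prompt(prompt):
        image = client.images.generate(model="dall-e-3", prompt=prompt)
        print(image.data[0].url)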
In recent months, A.I. has been used as a source of visual misinformation. A synthetic and not especially sophisticated spoof of an apparent explosion at the Pentagon sent the stock market into a brief dip in May, among other examples. Voting experts also worry that the technology could be used maliciously during major elections.
Image
Sandhini Agarwal, an OpenAI researcher who focuses on safety and policy.Credit...Jim Wilson/The New York Times
Sandhini Agarwal, an OpenAI researcher who focuses on safety and policy, said DALL-E 3 tended to generate images that were more stylized than photorealistic. Still, she acknowledged that the model could be prompted to produce convincing scenes, such as the type of grainy images captured by security cameras.
For the most part, OpenAI does not plan to block potentially problematic content coming from DALL-E 3. Ms. Agarwal said such an approach was “just too broad” because images could be innocuous or dangerous depending on the context in which they appear.
“It really depends on where it’s being used, how people are talking about it,” she said.
https://www.nytimes.com/2023/09/20/tech ... 778d3e6de3
Maybe in Your Lifetime, People Will Live on the Moon and Then Mars
Through partnerships and 3-D printing, NASA is plotting how to build houses on the moon by 2040.
The moon is a magnet, and it is pulling us back.
Half a century ago, the astronauts of Apollo 17 spent three days on that pockmarked orb, whose gravitational pull tugs not just on our oceans but our imaginations. For 75 hours, the astronauts moonwalked in their spacesuits and rode in a lunar rover, with humanity watching on television sets 240,000 miles away. The Apollo program was shuttered after they splashed back down to the Pacific Ocean in December 1972, and since then, the moon has hung, uncharted and empty, a siren in the sky.
NASA is now plotting a return. This time around, the stay will be long-term. To make it happen, NASA is going to build houses on the moon — ones that can be used not just by astronauts but by ordinary civilians as well. The agency believes that by 2040, Americans will have their first subdivision in space. Living on Mars isn’t far behind. Some in the scientific community say NASA’s timeline is overly ambitious, particularly before a proven success with a new lunar landing. But seven NASA scientists interviewed for this article all said that a 2040 goal for lunar structures is attainable if the agency can continue to hit its benchmarks.
The U.S. space agency will blast a 3-D printer up to the moon and then build structures, layer by additive layer, out of a specialized lunar concrete created from the rock chips, mineral fragments and dust that sits on the top layer of the moon’s cratered surface and billows in poisonous clouds whenever disturbed — a moonshot of a plan made possible through new technology and partnerships with universities and private companies.
ImageA rendering shows a birds-eye view of a space-based construction system.
ICON is calling its plan for off-world construction Project Olympus. It’s an in situ resource utilization construction system, meaning it would make use of materials found on other planets, not here on earth.Credit...ICON
“We’re at a pivotal moment, and in some ways it feels like a dream sequence,” said Niki Werkheiser, NASA’s director of technology maturation. “In other ways, it feels like it was inevitable that we would get here.”
Ms. Werkheiser, whose family owned a small construction business when she was growing up in Franklin, Tenn., guides the creation of new programs, machinery and robotics for future space missions.
NASA is more open than ever before to partnering with academics and industry leaders, which has made the playing field much wider than it was in the days of the Apollo missions, Ms. Werkheiser said. “We’ve got all the right people together at the right time with a common goal, which is why I think we’ll get there,” she said. “Everyone is ready to take this step together, so if we get our core capabilities developed, there’s no reason it’s not possible.”
Image
A small cylinder sits under a machine with a blue light on it.
At NASA’s Marshall Space Flight Center in Huntsville, Ala., in a nondescript laboratory deep in the bowels of one of their low-slung buildings, scientists are running tests on spheres of simulated moon dust.Credit...Robert Rausch for The New York Times
Image
A man's hands are shown holding a small cylinder cement with a scorch mark in the center.
Raymond Clinton Jr. holds a cylinder of cement made of simulated moon dust after it has been subjected to a plasma torch of about 3,400 degrees Fahrenheit — roughly the heat level of a rocket landing.Credit...Robert Rausch for The New York Times
Turning a Problem Into a Solution
Among the many obstacles of taking up residence on the moon is the dust — fine powder so abrasive it can cut like glass. It swirls in noxious plumes and is toxic when inhaled.
But four years ago, Raymond Clinton Jr., deputy director of the science and technology office at NASA’s Marshall Space Flight Center in Huntsville, Ala., pulled out a whiteboard to sketch the idea of houses, roads and landing pads. The dust is a problem, yes. But it could also be the solution.
Image
A man with a white goatee and a blue shirt with the NASA logo leans on a spaceship in an outdoor garden.
Four years ago, Dr. Clinton pulled out a whiteboard and realized 3-D printing could be a solution for building homes in space.Credit...Robert Rausch for The New York Times
If homes on earth could be successfully 3-D printed from soil made from the minerals found here, he thought, homes on the moon could be printed from the soil up there, where temperatures can swing up to 600 degrees and a vicious combination of radiation and micrometeorites pose a risk to both buildings and bodies.
NASA is calling its return to the moon Artemis, named after the twin sister of Apollo. Last November, Artemis I, the first of five planned moon missions, blasted off from Kennedy Space Center with only robots on board, circled the moon and returned safely to earth. Artemis II, which will carry four crew members, including the first woman and the first Black astronaut to fly on a lunar mission, on a 10-day flight around the same path, is scheduled for November 2024. That mission will be followed one year later by Artemis III, when humans will land on the lunar surface. Two more crewed missions are planned before the end of the decade.
Dr. Clinton, 71, says he knows that average Americans may not be living on the moon during his lifetime, but for those just a few decades younger than him, it’s a real possibility.
“I wish I would be around to see it,” he said.
“When we talk about a sustainable human presence, to me that means that you have a lunar settlement and you have people living and working on the moon continuously,” Dr. Clinton said. “What that could be is only up to the imagination of entrepreneurs.”
Image
A rendering shows three cylindrical shaped structures on the moon.
So far, the plans for houses on the moon are little more than renderings, but architects at firms like SEArch+ (Space Exploration Architecture) have drawn up concepts, including this one, called the Lunar Lantern.Credit...SEArch+ LUNAR LANTERN for Project Olympus
‘No Home Depot Up There’
NASA has partnered with ICON, a construction technology company based in Austin, Texas, to reach its 2040 goal. ICON first received funding from NASA in 2020, and in 2022, it announced an additional $60 million for a space-based construction system that can be used beyond earth to print everything from rocket landing pads to habitats, all with concrete mixed on site. So far, the plans are little more than renderings, but they’ve enlisted the input of architects at both the Bjarke Ingels Group and SEArch+ (Space Exploration Architecture) to draw up concepts and designs.
Nearly any object can be printed in 3-D, and the process has been touted by ICON and other players in the field as a quick, cost-efficient solution to the nation’s housing crunch. 3-D printing builds objects layer by layer from a digital file; in its construction projects on Earth, ICON uses a proprietary building material called Lavacrete.
No stranger to ambitious projects, the company is the creator of the Vulcan robotic large-scale construction system, which has been used to build some of the first 3-D printed homes in North America. These include Austin’s Community First! Village, which is a collection of 400 houses for the homeless, and homes in a village of affordable, hurricane-resistant houses for Mexicans living in poverty in the remote town of Nacajuca.
“It’s a surprisingly natural progression if you are asking about the ways additive construction and 3-D printing can create a better future for humanity,” Jason Ballard, ICON’s chief executive, said in a news release.
But printing in deep space is another dimension.
“Chemistry is the same up there, but physics are different,” said Patrick Suermann, interim dean of the School of Architecture at Texas A&M University, which is working closely with NASA to develop a construction system that can be operated by robots in space.
Traveling light is critical, he said, because every additional kilogram of weight carried on a rocket to the moon costs about $1,000,000. Carrying materials from earth to build in space, Dr. Suermann said, is unsustainable. “And there’s no Home Depot up there. So you either have to know how to use what’s up there, or send everything you need.”
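To put that per-kilogram figure in perspective, the short calculation below compares shipping building material from earth against sending only a printer and sourcing regolith on site. It is a back-of-the-envelope sketch: the printer and concrete masses are illustrative assumptions, not NASA or ICON specifications.

    # A rough comparison under the roughly $1,000,000-per-kilogram estimate
    # quoted by Dr. Suermann. The masses below are illustrative assumptions.
    COST_PER_KG = 1_000_000  # dollars per kilogram delivered to the lunar surface

    printer_mass_kg = 2_000        # assumed mass of a lunar 3-D printing system
    shipped_concrete_kg = 300_000  # assumed concrete needed for one small habitat

    cost_ship_everything = (printer_mass_kg + shipped_concrete_kg) * COST_PER_KG
    cost_print_in_situ = printer_mass_kg * COST_PER_KG  # regolith sourced on site

    print(f"Ship printer and concrete: ${cost_ship_everything:,.0f}")
    print(f"Ship printer, use lunar regolith: ${cost_print_in_situ:,.0f}")
    print(f"Notional savings: ${cost_ship_everything - cost_print_in_situ:,.0f}")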
Dr. Suermann was a civil engineering professor at the Air Force Academy and has built projects in some of the most remote spots on earth, from Afghanistan’s Helmand Province to the Arctic Circle. Building in space, he said, reminds him of the lesson he learned then — the greatest threats to life come not from other humans, but from the environment itself.
“We built a base out of next to nothing in Afghanistan. It’s all the same, just with more radiation and lower gravity,” he said. “And Mother Nature and the solar system are going to win every time.”
Image
A rendering shows two astronauts in bedrooms inside a space structure.
A rendering of what life on the moon inside the Lunar Lantern might look like.Credit...SEArch+ LUNAR LANTERN for Project Olympus
‘First Thing’ and Furniture
Any equipment that goes to the moon needs to be tested on earth to ensure it can withstand the environment, so Marshall also has over a dozen testing chambers that subject items to the same radiation and thermal vacuum conditions that they would endure off earth. In February 2024, ICON’s printer will be lowered into the largest chamber for its first test.
“If you can survive our chambers, then you’re very likely to survive space,” said Victor Pritchett, director of experimental fluids and environmental test branch for Marshall.
And before NASA and its partners can build homes, the agency needs to build landing pads, so that the dust kicked up when rockets carrying the 3-D printers touch down on the moon can be mitigated.
NASA scientists are currently working to perfect a simulated lunar concrete that can stand in for the moon-made material while they run tests on earth. At Marshall, in a nondescript laboratory deep in the bowels of one of its low-slung buildings, scientists are running tests on samples of simulated moon dust that have been poured and cast into small cylinders. They don’t look like much — just rounded hunks of concrete small enough to fit into your palm — but when held up to a plasma torch, they can withstand temperatures of 3,400 degrees Fahrenheit. This gives scientists hope that when they build out of the real thing, it will perform well under the conditions of an actual rocket landing, where temperatures reach hellfire-level hot.
Image
In a laboratory, a white testing chamber is seen with the NASA logo. An American flag hangs on the wall.
At the NASA Marshall Space Flight Center in Huntsville, Ala., there are over a dozen testing chambers that mimic the conditions of space. In February 2024, ICON’s printer will be lowered into the largest chamber for its first test.Credit...Robert Rausch for The New York Times
“The first thing that needs to happen is a proof of concept. Can we actually manipulate the soil on the lunar surface into a construction material?” said Jennifer Edmunson, the lead geologist at Marshall Space Flight Center for the project. “We need to start this development now if we’re going to realize habitats on the moon by the 2040 time frame.”
Of course, a house is made of more than walls — even in space, humans need a door to enter and exit from, and once inside, they need objects on which to sit and sleep, and all the other accouterments of life.
NASA is working with a handful of universities and private companies to create prototypes for space furniture and interior design, Dr. Edmunson said. NASA’s Ames Research Center, working with researchers at Stanford University, has even separated some of the minerals in synthetic lunar soil to make tiles of different colors, like green, gray and white, that could potentially be used for kitchens and bathrooms.
Prime Real Estate Is Really on Mars
ICON and NASA’s shared vision is for a space-based lunar construction system called Olympus, controlled on earth by human technicians skilled in the emerging field of space construction. For those technicians, classes are already in session.
“In 10 years construction technology might be very different, the type of robots we use might be very different, and the AI that we use will be different. But what we can do right now is come up with the training strategies that make construction workers ready for the future to come,” said Amirhosein Jafari, an assistant professor of construction technology at Louisiana State University, who is helping develop simulation-based trainings for construction teams that would coordinate with robots in space.
Image
A man in a black fleece sits at a cube with a student in a white T-shirt. They both are working with wires.
Amirhosein Jafari, left, with his student, Ilerioluwa Giwa, right. Dr. Jafari is helping develop trainings for construction teams that would coordinate with robots in space.Credit...LSU School of Engineering
His colleague Ali Kazemian is working with NASA on the printing material itself, focusing on a waterless concrete fashioned from simulated versions of the rock material that exists on the moon. Dr. Kazemian sees in the rich lunar minerals an even deeper potential than just concrete for 3-D printing: He sees resources that can be used extensively by those who stay behind on earth.
“People talk about humans living on the moon,” he said. “But there’s another likely scenario, too. At some point on earth we are going to run out of resources. So establishing mines and fully automated factories on the moon is a possibility too.”
Scientists at NASA say that it is too early to consider the market value of homes on the moon, or even how an ownership structure for lunar habitats could look. But they acknowledge that the moon presents a potentially significant cache of untapped resources, and that other nations will undoubtedly be interested in a stake.
India last month landed a spacecraft on the moon, earning the distinction of being the first nation ever to land near the southern polar region, where the most precious of resources — water — is believed to be lying in wait. The achievement came just two days after a Russian craft crashed ahead of a landing attempt, after it failed to adjust its orbit. American astronauts famously planted their flag on the moon’s surface in 1969, but two years earlier, the 1967 Outer Space Treaty, a multilateral treaty that sits at the heart of international space law, declared that no one, in fact, can own the moon.
The Artemis Accords, launched by the United States together with seven other founding nations in 2020, gave a refresh to the principles of peaceful, cooperative exploration of the moon, and are now signed by 29 countries, including the United Kingdom, Japan, Italy, Canada, and Brazil. But notably, neither China nor Russia has signed.
Defense, ownership and international claims on the moon and Mars are not the purview of NASA, Ms. Werkheiser said. But for now, she said, in this newest iteration of the space race, she believes the global community feels aligned.
The moon is not the final frontier. Wrapped into NASA’s push to build on the moon is a longer and even more far-flung goal: getting to Mars.
The moon is a practical spot for a layover, as NASA believes that the water on the lunar surface could be converted to rocket fuel. A spacecraft traveling from Earth to Mars may make a pit stop on the moon, where astronauts can stretch their legs, grab a bite to eat inside a 3-D printed structure and then gas up before hitting the proverbial road.
Image
Three astronauts in spacesuits and black masks wave at the door of a red building while a woman in a blue polo shirt and face mask applauds.
In Houston, four astronauts are currently spending a year inside the Mars Dune Alpha, a 1,700-square-foot structure that was 3-D printed by ICON and meant to simulate life on Mars. In June, they waved to the crowds before entering the structure.Credit...NASA
In Houston in June, with much fanfare, four NASA astronauts waved to a gathered crowd and then walked inside the Mars Dune Alpha, a 1,700-square-foot structure that was 3-D printed by ICON with Lavacrete tinted in the same burnt rust color as Mars itself. They then locked the doors, and will spend the next year living in simulated conditions to practice for one day living on Mars in reality.
Debra Kamin covers real estate for The Times. More about Debra Kamin
https://www.nytimes.com/2023/10/01/real ... nting.html
Scientists Use CRISPR to Make Chickens More Resistant to Bird Flu
A new study highlights both the promise and the limitations of gene editing, as a highly lethal form of avian influenza continues to spread around the world.
Avian flu has killed countless farmed and wild birds. Scientists worry that it could acquire mutations that help it spread more easily among humans, potentially setting off a pandemic.Credit...Matthew Hatcher/Agence France-Presse — Getty Images
Scientists have used the gene-editing technology known as CRISPR to create chickens that have some resistance to avian influenza, according to a new study that was published in the journal Nature Communications on Tuesday.
The study suggests that genetic engineering could potentially be one tool for reducing the toll of bird flu, a group of viruses that pose grave dangers to both animals and humans. But the study also highlights the limitations and potential risks of the approach, scientists said.
Some breakthrough infections still occurred, especially when gene-edited chickens were exposed to very high doses of the virus, the researchers found. And when the scientists edited just one chicken gene, the virus quickly adapted. The findings suggest that creating flu-resistant chickens will require editing multiple genes and that scientists will need to proceed carefully to avoid driving further evolution of the virus, the study’s authors said.
The research is “proof of concept that we can move toward making chickens resistant to the virus,” Wendy Barclay, a virologist at Imperial College London and an author of the study, said at a news briefing. “But we’re not there yet.”
Some scientists who were not involved in the research had a different takeaway.
“It’s an excellent study,” said Dr. Carol Cardona, an expert on bird flu and avian health at the University of Minnesota. But to Dr. Cardona, the results illustrate how difficult it will be to engineer a chicken that can stay a step ahead of the flu, a virus known for its ability to evolve swiftly.
“There’s no such thing as an easy button for influenza,” Dr. Cardona said. “It replicates quickly, and it adapts quickly.”
What to Know About Avian Flu
The spread of H5N1. A new variant of this strain of the avian flu has spread widely through bird populations in recent years. It has taken an unusually heavy toll on wild birds and repeatedly spilled over into mammals, including minks, foxes and bears. Here’s what to know about the virus:
What is avian influenza? Better known as the bird flu, avian influenza is a group of flu viruses that is well adapted to birds. Some strains, like the version of H5N1 that is currently spreading, are frequently fatal to chickens and turkeys. It spreads via nasal secretions, saliva and fecal droppings, which experts say makes it difficult to contain.
Should humans be worried about being infected? Although the danger to the public is currently low, people who are in close contact with sick birds can and have been infected. The virus is primarily a threat to birds, but infections in mammals increase the odds that the virus could mutate in ways that make it more of a risk to humans, experts say.
How can we stop the spread? The U.S. Department of Agriculture has urged poultry growers to tighten their farms’ biosecurity measures, but experts say the virus is so contagious that there is little choice but to cull infected flocks. The Biden administration has been contemplating a mass vaccination campaign for poultry.
Is it safe to eat poultry and eggs? The Agriculture Department has said that properly prepared and cooked poultry and eggs should not pose a risk to consumers. The chance of infected poultry entering the food chain is “extremely low,” according to the agency.
Can I expect to pay more for poultry products? Egg prices soared when an outbreak ravaged the United States in 2014 and 2015. The current outbreak of the virus — paired with inflation and other factors — has contributed to an egg supply shortage and record-high prices in some parts of the country.
Avian influenza refers to a group of flu viruses that are adapted to spread in birds. Over the last several years, a highly lethal version of a bird flu virus known as H5N1 has spread rapidly around the globe, killing countless farmed and wild birds. It has also repeatedly infected wild mammals and been detected in a small number of people. Although the virus remains adapted to birds, scientists worry that it could acquire mutations that help it spread more easily among humans, potentially setting off a pandemic.
Many nations have tried to stamp out the virus by increasing biosecurity on farms, quarantining infected premises and culling infected flocks. But the virus has become so widespread in wild birds that it has proved impossible to contain, and some nations have begun vaccinating poultry, although that endeavor presents some logistic and economic challenges.
Image: Culling chicken eggs at a quarantined farm in northern Israel.Credit...Atef Safadi/EPA, via Shutterstock
If scientists could engineer resistance into chickens, farmers would not need to routinely vaccinate new batches of birds. Gene editing “promises a new way to make permanent changes in the disease resistance of an animal,” Mike McGrew, an embryologist at the University of Edinburgh’s Roslin Institute and an author of the new study, said at the briefing. “This can be passed down through all the gene-edited animals, to all the offspring.”
CRISPR, the gene-editing technology used in the study, is a molecular tool that allows scientists to make targeted edits in DNA, changing the genetic code at a precise point in the genome. In the new study, the researchers used this approach to tweak a chicken gene that codes for a protein known as ANP32A, which the flu virus hijacks to copy itself. The tweaks were designed to prevent the virus from binding to the protein — and therefore keep it from replicating inside chickens.
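For readers unfamiliar with how CRISPR-Cas9 finds the spot it will edit, the toy Python sketch below shows the general mechanics: a 20-letter "guide" stretch of DNA (the protospacer) immediately followed by an "NGG" signal (the PAM) marks where the Cas9 enzyme can cut. The sequence and function here are invented for illustration; this is not the chicken ANP32A gene and not the study's actual editing protocol.

# Illustrative only: a minimal sketch of how a Cas9 target site is located in DNA.
# The toy sequence below is made up; it is NOT the chicken ANP32A gene.

def find_cas9_sites(dna, protospacer_len=20):
    """Return (position, protospacer) pairs where a 20-letter stretch of DNA
    is immediately followed by an 'NGG' PAM, the signal Cas9 needs to cut there."""
    dna = dna.upper()
    sites = []
    for i in range(len(dna) - protospacer_len - 2):
        protospacer = dna[i:i + protospacer_len]
        pam = dna[i + protospacer_len:i + protospacer_len + 3]
        if len(pam) == 3 and pam.endswith("GG"):  # 'NGG' = any base, then G, G
            sites.append((i, protospacer))
    return sites

# Invented toy sequence standing in for a stretch of a gene of interest.
toy_gene = "ATGCCATTGGACGTTACGGAGTCCTAGGCATTACCGGTTAGCAATGG"
for pos, guide in find_cas9_sites(toy_gene):
    print("candidate guide at position", pos, ":", guide)

In the study itself, the point of the edit was not to cut the gene out but to change the ANP32A protein just enough that the flu virus can no longer grab onto it.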
The edits did not appear to have negative health consequences for the chickens, the researchers said. “We observed that they were healthy, and that the gene-edited hens also laid eggs normally,” said Dr. Alewo Idoko-Akoh, who conducted the research as a postdoctoral researcher at the University of Edinburgh.
The researchers then sprayed a dose of flu virus into the nasal cavities of 10 chickens that had not been genetically edited, to serve as the control. (The researchers used a mild version of the virus different from the one that has been causing major outbreaks in recent years.) All of the control chickens were infected with the virus, which they then transmitted to other control chickens they were housed with.
When the researchers administered flu virus directly into the nasal cavities of 10 gene-edited chickens, just one of the birds became infected. It had low levels of the virus and did not pass the virus on to other gene-edited birds.
“But having seen that, we felt that it would be the responsible thing to be more rigorous, to stress test this and ask, ‘Are these chickens truly resistant?’” Dr. Barclay said. “‘What if they were to somehow encounter a much, much higher dose?’”
When the scientists gave the gene-edited chickens a flu dose that was 1,000 times higher, half of the birds became infected. The researchers found, however, that they generally shed lower levels of the virus than control chickens exposed to the same high dose.
The researchers then studied samples of the virus from the gene-edited birds that had been infected. These samples had several notable mutations, which appeared to allow the virus to use the edited ANP32A protein to replicate, they found.
Some of these mutations also helped the virus replicate better in human cells, although the researchers noted that those mutations in isolation would not be enough to create a virus that was well adapted to humans.
Seeing those mutations is “not ideal,” said Richard Webby, who is a bird flu expert at St. Jude Children’s Research Hospital and was not involved in the research. “But when you get to the weeds of these particular changes, then it doesn’t concern me quite so much.”
The mutated flu virus was also able to replicate even in the complete absence of the ANP32A protein by using two other proteins in the same family, the researchers found. When they created chicken cells that lacked all three of these proteins, the virus was not able to replicate. Those chicken cells were also resistant to the highly lethal version of H5N1 that has been spreading around the world the last several years.
The researchers are now working to create chickens with edits in all three of the genes for the protein family.
The big question, Dr. Webby said, was whether chickens with edits in all three genes would still develop normally and grow as fast as poultry producers needed. But the idea of gene editing chickens had enormous promise, he said. “Absolutely, we’re going to get to a point where we can manipulate the host genome to make them less susceptible to flu,” he said. “That’ll be a win for public health.”
Emily Anthes is a reporter for The Times, where she focuses on science and health and covers topics like the coronavirus pandemic, vaccinations, virus testing and Covid in children. More about Emily Anthes
https://www.nytimes.com/2023/10/10/scie ... 778d3e6de3
Look, Up in the Sky! It’s a Can of Soup!
Amazon’s much-hyped drone project is dropping small objects on driveways. Some customers are not sure what it delivers beyond minestrone.
Video: https://vp.nyt.com/video/2023/10/19/112 ... g_720p.mp4
An Amazon drone delivers a can of Campbell’s Chunky Minestrone With Italian Sausage to the home of Dominique Lord and Leah Silverman in College Station, Texas.Credit...Video by Callaghan O’Hare For The New York Times
Exactly a decade ago, Amazon revealed a program that aimed to revolutionize shopping and shipping. Drones launched from a central hub would waft through the skies delivering just about everything anyone could need. They would be fast, innovative, ubiquitous — all the Amazon hallmarks.
The buzzy announcement, made by Jeff Bezos on “60 Minutes” as part of a Cyber Monday promotional package, drew global attention. “I know this looks like science fiction. It’s not,” said Mr. Bezos, Amazon’s founder and the chief executive at the time. The drones would be “ready to enter commercial operations as soon as the necessary regulations are in place,” probably in 2015, the company said.
Eight years past that promised date, drone delivery is a reality — kind of — on the outskirts of College Station, Texas, northwest of Houston. That is a major achievement for a program that has waxed and waned over the years and lost many of its early leaders to newer and more urgent projects.
Yet the venture as it currently exists is so underwhelming that Amazon can keep the drones in the air only by giving stuff away. Years of toil by top scientists and aviation specialists have yielded a program that flies Listerine Cool Mint Breath Strips or a can of Campbell’s Chunky Minestrone With Italian Sausage — but not both at once — to customers as gifts. If this is science fiction, it’s being played for laughs.
A decade is an eternity in technology, but even so, drone delivery does not approach the scale or simplicity of Amazon’s original promotional videos. This gap between dazzling claims and mundane reality happens all the time in Silicon Valley. Self-driving cars, the metaverse, flying cars, robots, neighborhoods or even cities built from scratch, virtual universities that can compete with Harvard, artificial intelligence — the list of delayed and incomplete promises is long.
“Having ideas is easy,” said Rodney Brooks, a robotics entrepreneur and frequent critic of technology companies’ hype. “Turning them into reality is hard. Turning them into being deployed at scale is even harder.”
Amazon said last month that drone deliveries would expand to Britain, Italy and another, unidentified U.S. city by the end of 2024. Yet even on the threshold of growth, a question lingers. Now that the drones finally exist in at least limited form, why did we think we needed them in the first place?
Image: Newer models of Amazon’s delivery drones will be able to fly in inclement weather and reduce “perceived noise” by 25 percent.Credit...Callaghan O'Hare for The New York Times
Dominique Lord and Leah Silverman live in College Station’s drone zone. They are Amazon fans and place regular orders for ground delivery. Drones are another matter, even if the service is free for Amazon Prime members. While it’s cool to have stuff literally land on your driveway, at least the first few times, there are many hurdles to getting stuff this way.
Only one item can be delivered at a time. It can’t weigh over five pounds. It can’t be too big. It can’t be something breakable, since the drone drops it from 12 feet. The drones can’t fly when it is too hot or too windy or too rainy.
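Taken together, those restrictions amount to a checklist that has to pass before a flight makes sense. The short Python sketch below encodes the hurdles the article lists as a simple eligibility function; the numeric weather cutoffs, field names and the function itself are assumptions for illustration, not Amazon's actual rules or software.

# Hypothetical sketch of the delivery constraints described above; thresholds are invented.
from dataclasses import dataclass

MAX_WEIGHT_LB = 5.0  # article: the item can't weigh over five pounds

@dataclass
class Order:
    items: int        # only one item can be delivered at a time
    weight_lb: float
    fragile: bool     # must survive being dropped from 12 feet
    oversized: bool

@dataclass
class Weather:
    temp_f: float
    wind_mph: float
    raining: bool

def drone_eligible(order, weather):
    """True only if the order and the weather clear every hurdle the article
    describes. The weather cutoffs here are invented placeholders."""
    if order.items != 1 or order.weight_lb > MAX_WEIGHT_LB:
        return False
    if order.fragile or order.oversized:
        return False
    if weather.temp_f > 100 or weather.wind_mph > 20 or weather.raining:
        return False  # too hot, too windy or too rainy
    return True

print(drone_eligible(Order(1, 0.8, False, False), Weather(92, 7, False)))  # True
print(drone_eligible(Order(2, 0.8, False, False), Weather(92, 7, False)))  # False: two items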
You need to be home to put out the landing target and to make sure that a porch pirate doesn’t make off with your item or that it doesn’t roll into the street (which happened once to Mr. Lord and Ms. Silverman). But your car can’t be in the driveway. Letting the drone land in the backyard would avoid some of these problems, but not if there are trees.
Amazon has also warned customers that drone delivery is unavailable during periods of high demand for drone delivery.
The other active U.S. test site is Lockeford, Calif., in the Central Valley. On a recent afternoon, the Lockeford site seemed largely moribund, with only three cars in the parking lot. Amazon said it was delivering via drones in Lockeford and arranged for a New York Times reporter to come back to the site. It also arranged an interview with David Carbon, the former Boeing executive who runs the drone program. The company later canceled both without explanation.
A corporate blog post on Oct. 18 said that drones had safely delivered “hundreds” of household items in College Station since December, and that customers there could now have some medications delivered. Lockeford wasn’t mentioned.
After Ms. Silverman and Mr. Lord expressed initial interest in the drone program, Amazon offered $100 in gift certificates in October 2022 to follow through. But their service didn’t start until June, and then was suspended during a punishing heat wave when the drones could not fly.
Image: Ms. Silverman with the entirety of an Amazon drone’s delivery: a bag of 365 brand chopped walnuts.Credit...Callaghan O'Hare for The New York Times
The incentives, however, kept coming. The couple got an email the other day from Amazon pushing Skippy Creamy Peanut Butter, which usually costs $5.38 but was a “free gift” while supplies lasted. They ordered it, and a little while later a drone dropped a big box containing a small jar. Amazon said “some promotional items” are being offered “as a welcome.”
“We don’t really need anything they offer for free,” said Ms. Silverman, a 51-year-old novelist and caregiver. “The drones feel more like a toy than anything — a toy that wastes a huge amount of paper and cardboard.”
Image: Mr. Lord placing an order for a free item in the drone program.Credit...Callaghan O'Hare for The New York Times
Image: Contents may settle during shipping.Credit...Callaghan O'Hare for The New York Times
The Texas weather plays havoc with important deliveries. Mr. Lord, a 54-year-old professor of civil engineering at Texas A&M, ordered a medication through the mail. By the time he retrieved the package, the drug had melted. He’s hopeful that the drones can eventually handle problems like this.
“I still view this program positively knowing that it is in the experimental phase,” he said.
Amazon says the drones will improve over time. It announced a new model, the MK30, last year and released pictures in October. The MK30, which is slated to begin service by the end of 2024, was touted as having a greater range, an ability to fly in inclement weather and a 25 percent reduction in “perceived noise.”
When Amazon began working on drones years ago, the retailer took two or three days to ship many items to customers. It worried that it was vulnerable to potential competitors whose vendors were more local, including Google and eBay. Drones were all about speed.
“We can do half-hour delivery,” Mr. Bezos promised on “60 Minutes.”
For a while, drones were the next big thing. Google developed its own drone service, Wing, which now works with Walmart to deliver items in parts of Dallas and Frisco, Texas. Start-ups got funding — about $2.5 billion was invested between 2013 and 2019, according to the Teal Group, an aerospace consultancy. The veteran venture capitalist Tim Draper said in 2013 that “everything from pizza delivery to personal shopping can be handled by drones.” Uber Eats announced a food delivery drone in late 2019. The future was up in the air.
Amazon started thinking really long term. It envisioned, and got a patent for, a drone resupply vehicle that would hover in the sky at 45,000 feet. That’s above commercial airplanes, but Amazon said it could use the vehicles to deliver customers a hot dinner.
Yet on the ground, progress was slow, sometimes for technical reasons and sometimes because of the company’s corporate DNA. The same aggressive confidence that created a trillion-dollar business undermined Amazon’s efforts to work with the Federal Aviation Administration.
“The attitude was: ‘We’re Amazon. We’ll convince the F.A.A.,’” said one former Amazon drone executive, who asked for anonymity because he wasn’t authorized to speak about the subject. “The F.A.A. wants companies to come in with great humility and great transparency. That is not a strength of Amazon.”
A more complicated issue was getting the technology to the point where it was safe not just most of the time but all of the time. The first drone that lands on someone’s head, or takes off clutching a cat, sets the program back another decade, particularly if it is filmed.
Image: Mr. Lord with the QR-coded target for the packages, which don’t always stick the landing.Credit...Callaghan O'Hare for The New York Times
“Part of the DNA of the tech industry is you can accomplish things you never thought you could accomplish,” said Neil Woodward, who spent four years as a senior manager in Amazon’s drone program. “But the truth is the laws of physics don’t change.”
Mr. Woodward, now retired, spent years at NASA in the astronaut program before moving to the private sector.
“When you work for the government, you have 535 people on your board of directors” — he was referring to Congress — “and a good chunk of them want to take your funding away because they have other priorities,” he said. “That makes government agencies very risk averse. At Amazon, you’re given a lot of rope, but you can get out over your skis.”
Image: Amazon prefers a target to have the driveway to itself when a drone is coming.Credit...Callaghan O'Hare for The New York Times
In the end, there must be a market. As Mr. Woodward put it, using an old Silicon Valley cliché: “Do the dogs like the dog food? Sometimes the dogs don’t.”
Archie Conner, 82, lives a few doors down from Mr. Lord and Ms. Silverman. He sees the drones as less a retail innovation and more a marketing one.
“When you hear a drone, you naturally think about Amazon. It’s real out-of-the-box thinking, even if no one orders at all,” he said. “Drones were on the news just the other day. People say, ‘Wow, Amazon did that.’”
Mr. Conner also ordered the free Skippy peanut butter but forgot to put out the landing target, so the drone went away. Then he ordered it again. Meanwhile, an Amazon delivery person showed up with the first jar. So now he and his wife, Belinda, have two jars.
“We haven’t found much we really want to pay for,” Mr. Conner said. “But we have enjoyed the free peanut butter.”
https://www.nytimes.com/2023/11/04/tech ... ivery.html
E.U. Agrees on Landmark Artificial Intelligence Rules
The agreement over the A.I. Act solidifies one of the world’s first comprehensive attempts to limit the use of artificial intelligence.
Lawmakers discussed the A.I. Act in June at the European Parliament.Credit...Jean-Francois Badias/Associated Press
European Union policymakers agreed on Friday to a sweeping new law to regulate artificial intelligence, one of the world’s first comprehensive attempts to limit the use of a rapidly evolving technology that has wide-ranging societal and economic implications.
The law, called the A.I. Act, sets a new global benchmark for countries seeking to harness the potential benefits of the technology, while trying to protect against its possible risks, like automating jobs, spreading misinformation online and endangering national security. The law still needs to go through a few final steps for approval, but the political agreement means its key outlines have been set.
European policymakers focused on A.I.’s riskiest uses by companies and governments, including those for law enforcement and the operation of crucial services like water and energy. Makers of the largest general-purpose A.I. systems, like those powering the ChatGPT chatbot, would face new transparency requirements. Chatbots and software that creates manipulated images such as “deepfakes” would have to make clear that what people were seeing was generated by A.I., according to E.U. officials and earlier drafts of the law.
Use of facial recognition software by police and governments would be restricted outside of certain safety and national security exemptions. Companies that violated the regulations could face fines of up to 7 percent of global sales.
“Europe has positioned itself as a pioneer, understanding the importance of its role as global standard setter,” Thierry Breton, the European commissioner who helped negotiate the deal, said in a statement.
Yet even as the law was hailed as a regulatory breakthrough, questions remained about how effective it would be. Many aspects of the policy were not expected to take effect for 12 to 24 months, a considerable length of time for A.I. development. And up until the last minute of negotiations, policymakers and countries were fighting over its language and how to balance the fostering of innovation with the need to safeguard against possible harm.
The deal reached in Brussels took three days of negotiations, including an initial 22-hour session that began Wednesday afternoon and dragged into Thursday. The final agreement was not immediately public as talks were expected to continue behind the scenes to complete technical details, which could delay final passage. Votes must be held in Parliament and the European Council, which comprises representatives from the 27 countries in the union.
Regulating A.I. gained urgency after last year’s release of ChatGPT, which became a worldwide sensation by demonstrating A.I.’s advancing abilities. In the United States, the Biden administration recently issued an executive order focused in part on A.I.’s national security effects. Britain, Japan and other nations have taken a more hands-off approach, while China has imposed some restrictions on data use and recommendation algorithms.
At stake are trillions of dollars in estimated value as A.I. is predicted to reshape the global economy. “Technological dominance precedes economic dominance and political dominance,” Jean-Noël Barrot, France’s digital minister, said this week.
Europe has been one of the regions furthest ahead in regulating A.I., having started working on what would become the A.I. Act in 2018. In recent years, E.U. leaders have tried to bring a new level of oversight to tech, akin to regulation of the health care or banking industries. The bloc has already enacted far-reaching laws related to data privacy, competition and content moderation.
A first draft of the A.I. Act was released in 2021. But policymakers found themselves rewriting the law as technological breakthroughs emerged. The initial version made no mention of general-purpose A.I. models like those that power ChatGPT.
Policymakers agreed to what they called a “risk-based approach” to regulating A.I., where a defined set of applications face the most oversight and restrictions. Companies that make A.I. tools that pose the most potential harm to individuals and society, such as in hiring and education, would need to provide regulators with proof of risk assessments, breakdowns of what data was used to train the systems and assurances that the software did not cause harm like perpetuating racial biases. Human oversight would also be required in creating and deploying the systems.
Some practices, such as the indiscriminate scraping of images from the internet to create a facial recognition database, would be banned outright.
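One way to picture the risk-based structure described above is as a lookup from use case to obligations, as in the hypothetical Python sketch below. The tier names, example use cases and duties are a loose paraphrase of the article for illustration only, not the A.I. Act's legal categories or text.

# Hypothetical illustration only: a simplified paraphrase of the tiers the article describes.
OBLIGATIONS_BY_TIER = {
    "banned": ["may not be offered in the E.U. at all"],
    "high_risk": [
        "give regulators proof of risk assessments",
        "disclose what data was used to train the system",
        "show the software does not perpetuate biases",
        "keep humans in the loop when building and deploying it",
    ],
    "transparency_only": ["tell people the content they see is A.I.-generated"],
}

EXAMPLE_USE_CASES = {
    "scraping faces off the internet for a recognition database": "banned",
    "screening job applicants": "high_risk",
    "scoring students": "high_risk",
    "chatbots and deepfake generators": "transparency_only",
}

for use_case, tier in EXAMPLE_USE_CASES.items():
    print(use_case, "->", tier)
    for duty in OBLIGATIONS_BY_TIER[tier]:
        print("   -", duty)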
The European Union debate was contentious, a sign of how A.I. has befuddled lawmakers. E.U. officials were divided over how deeply to regulate the newer A.I. systems for fear of handicapping European start-ups trying to catch up to American companies like Google and OpenAI.
The law added requirements for makers of the largest A.I. models to disclose information about how their systems work and evaluate for “systemic risk,” Mr. Breton said.
The new regulations will be closely watched globally. They will affect not only major A.I. developers like Google, Meta, Microsoft and OpenAI, but other businesses that are expected to use the technology in areas such as education, health care and banking. Governments are also turning more to A.I. in criminal justice and the allocation of public benefits.
Enforcement remains unclear. The A.I. Act will involve regulators across 27 nations and require hiring new experts at a time when government budgets are tight. Legal challenges are likely as companies test the novel rules in court. Previous E.U. legislation, including the landmark digital privacy law known as the General Data Protection Regulation, has been criticized for being unevenly enforced.
“The E.U.’s regulatory prowess is under question,” said Kris Shrishak, a senior fellow at the Irish Council for Civil Liberties, who has advised European lawmakers on the A.I. Act. “Without strong enforcement, this deal will have no meaning.”
https://www.nytimes.com/2023/12/08/tech ... ation.html
Re: TECHNOLOGY AND DEVELOPMENT
USA TODAY
When will you die? Meet the 'doom calculator,' an artificial intelligence algorithm
Mike Snider, USA TODAY
Thu, December 21, 2023 at 7:03 PM CST
Would you want to know when you will die? Science could be getting closer to perhaps giving you that option.
The latest advance? An artificial intelligence algorithm, dubbed "the doom calculator" by the U.K.'s Daily Mail, predicted whether people would die within four years in more than 75% of the cases.
Details about the project, conducted by researchers in Denmark and the U.S., were published this week in the Nature Computational Science online journal. They created an AI machine-learning transformer model – somewhat akin to ChatGPT – although people can't interact with it as they do with ChatGPT.
But the model, called life2vec, crunched data – age, health, education, jobs, income and other life events – on more than 6 million people from Denmark supplied by the country's government, which collaborated on the research.
The model was taught to assimilate information about people's lives in sentences such as "In September 2012, Francisco received 20,000 Danish kroner as a guard at a castle in Elsinore." Or, "During her third year at secondary boarding school, Hermione followed five elective classes," the researchers wrote in the research paper.
As life2vec evolved it became capable of building "individual human life trajectories," they wrote.
"The whole story of a human life, in a way, can also be thought of as a giant long sentence of the many things that can happen to a person,” the paper's author, Sune Lehmann, a professor of networks and complexity science at the Technical University of Denmark, said in Northeastern Global News, a university news site. Lehmann was previously a postdoctoral fellow at Northeastern. A collaborator, Tina Eliassi-Rad, is a professor of computer science at the university in Boston. Researchers have created an AI algorithm known as the doom calculator that may be able to predict someone's death. It's somewhat akin to ChatGPT, although people can't interact with it as they do with ChatGPT.
Researchers have created an AI algorithm known as the doom calculator that may be able to predict someone's death. It's somewhat akin to ChatGPT, although people can't interact with it as they do with ChatGPT.More
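For readers curious about what a "giant long sentence" of life events means in practice, the toy Python sketch below shows the general idea of turning a life history into a token sequence that a sequence model could be trained on. The events, vocabulary and code are invented for illustration; the published life2vec model and its Danish registry data are not public.

# Illustrative only: a made-up life history expressed as discrete event tokens,
# the way a sentence is a sequence of words.
life_events = [
    "AGE_30", "JOB_guard", "INCOME_20000_DKK", "CITY_Elsinore",
    "EDU_secondary", "DIAGNOSIS_none",
]

# Map each distinct event to an integer id, exactly as words are mapped to ids
# before being fed to a transformer.
vocab = {token: idx for idx, token in enumerate(sorted(set(life_events)))}
encoded = [vocab[token] for token in life_events]
print(encoded)  # [0, 5, 4, 1, 3, 2]

# In the real system, millions of such sequences would be fed to a transformer
# with a classification head estimating the probability of death within four
# years; that training loop is far beyond this sketch and is omitted here.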
Accurately predicting death 78 percent of the time
Eventually, the AI construct was able to correctly predict those who had died by 2020 about 78% of the time, researchers say in the report.
None of the study participants were told their death predictions.
“That would be very irresponsible,” Lehmann told the New York Post.
Some factors associated with earlier deaths were having a mental health diagnosis, being male, or having a skilled profession, The Science Times reported. Having a leadership role at work and a higher income were associated with longer lifespans.
The program could predict personalities and decisions to make international moves, Lehmann told the Post. “This model can predict almost anything,” he said.
When can I plug my data into the doom calculator?
Not anytime soon. The program and its data are not being made public to protect the privacy of those whose information was used.
"We are actively working on ways to share some of the results more openly, but this requires further research to be done in a way that can guarantee the privacy of the people in the study,' Lehmann told the Daily Mail.
And its predictive power might not translate beyond Denmark, Eliassi-Rad told the Northeastern Global News. “This kind of tool is like an observatory of society – and not all societies,” she said. "Whether this can be done in America is a different story.”
Regardless, tools such as life2vec should be used to track societal trends, not predict individuals' outcomes, Eliassi-Rad said.
“Even though we’re using prediction to evaluate how good these models are, the tool shouldn’t be used for prediction on real people,” she told the university news site. Real people "have hearts and minds.”
Lehmann hopes the project will shed light on the development of AI and what should be predicted, he told the university news site.
“I don’t have those answers, but it’s high time we start the conversation because what we know is that detailed prediction about human lives is already happening," he said. "And right now there is no conversation and it’s happening behind closed doors."
A tough decision awaits: Will you want to know when and what you may die of?
AI's involvement in death prediction is "the start of a very complicated road," said Art Caplan, a professor and founding head of the division of bioethics at New York University Langone Medical Center in New York City. Other scientists are working on using blood and other physical and medical features to make predictive forecasts, too, and the insurance business is built on prediction. "What's unique (here) is it's using social employment and public record information, in combination with health information, to make predictions and never having met anybody in the study," he said.
Caplan says it's "inevitable" that consumers will be able to get information on their own forecasts. "There are going to be a lot of fights around, let's call it 'death prediction' and battles over third-party access (to it)," he said.
Beyond that is a bigger issue: "These algorithms are starting to take away things we normally don't know," Caplan said. "It has upside and could prevent deaths, but it's got a real existential threat of taking all the unknowns out of life, which is not necessarily a good thing."
https://currently.att.yahoo.com/att/cm/ ... 40536.html
Re: TECHNOLOGY AND DEVELOPMENT
A.I. questions, answered
First drafts
I want to make a confession: I don’t understand a lot of the hype around artificial intelligence.
Like a lot of other people, I tried ChatGPT after it was released, and I was impressed. But I’ve been mostly disappointed since then. When I’ve asked it to analyze a data set, its answers have included errors. When I ask about historical events, the information isn’t much better than what’s on Wikipedia. When I ask about recent events, the bot tells me that it doesn’t have access to data after Jan. 2022.
I don’t doubt that A.I. will eventually be a big deal. But much of the discussion today feels vague and impenetrable for nonexperts. To get a more tangible understanding, I asked my colleagues Cade Metz and Karen Weise, who cover A.I., to answer some questions. We’ve turned their answers into today’s newsletter.
David: Am I wrong to be unimpressed so far?
Cade and Karen: A lot of people have told us they share your experience. Our editor recently asked us to list impressive things people were doing with ChatGPT, and we really had to think about it.
One example does seem to be writing. We are writers by profession, but writing does not come easily for many people. Chatbots can help get out a first draft. Cade knows a dentist who uses it to help write emails to his staff. Karen overheard some teachers in a coffee shop say they were using it to draft college recommendation letters. A friend used it to produce a meal plan for a weeklong vacation, asking it to propose menus and a grocery list that was a helpful starting point.
But the chatbots have an inherent problem with producing wrong information, what the industry calls “hallucinations.” A lawyer representing Michael Cohen, the onetime fixer for Donald Trump, recently submitted a brief to a federal court that mistakenly included fictitious court cases. As it turns out, a Google chatbot had invented the cases.
David: What’s an example of something meaningful that people may be able to do with A.I. soon?
Cade and Karen: Companies like OpenAI are transforming chatbots into what they call “A.I. agents.” Basically, this is a fancy term for technology that will go out onto the internet and take actions on your behalf, like searching for plane flights to New York or turning a spreadsheet into a chart with just a few words of commands.
So far the chatbots have primarily focused on words, but the newest technology will work from images, videos and sound. Imagine uploading images of a math question that included diagrams and charts, and then asking the system to answer it. Or generating a video based on a short description.
David: Let’s talk about the dark side. The apocalyptic fears that A.I. will begin killing people feel sci-fi-ish, which causes me to dismiss them. What are real reasons for concern?
Cade and Karen: A.I. systems can be mysterious, even to the people who create them. They are designed around probabilities, so they are unpredictable. The worriers fret that because the systems learn from more data than any human could consume, they could wreak havoc as they are woven into stock markets, military systems and other vital systems.
But all the talk of these hypothetical risks can reduce the focus on more realistic problems. Already we are seeing A.I. produce better misinformation for China and other nations and write more seductive and successful phishing emails to scam people. A.I. has the potential to make people even more distrustful and polarized.
David: The lack of regulation over smartphones and social media has aggravated some big societal problems in the past 15 years. If some government regulators called you into their office and asked how to avoid being so far behind with A.I., what lessons would your reporting suggest?
Cade and Karen: Regulators need to educate themselves from a broad range of experts, not just big tech. This technology is extremely complicated, and the people building it often exaggerate both the positives and the negatives. Regulators need to understand, for instance, that the threat to humanity is overblown, but other threats are not.
Right now there is very little transparency around almost every aspect of A.I. systems, which makes it hard to keep in check. A prime example: These systems learn their skills from massive amounts of data, and the major companies have not disclosed the particulars. The companies might be using personal data without consent. Or the data might contain hate speech.
Related: Research from Stanford University suggests that A.I. tools have not increased cheating in high schools so far, The Times’s Natasha Singer explains.
Robots Learn, Chatbots Visualize: How 2024 Will Be A.I.’s ‘Leap Forward’
A.I. is set to advance at a rapid rate, becoming more powerful and spreading into the physical world.
At an event in San Francisco in November, Sam Altman, the chief executive of the artificial intelligence company OpenAI, was asked what surprises the field would bring in 2024.
Online chatbots like OpenAI’s ChatGPT will take “a leap forward that no one expected,” Mr. Altman immediately responded.
Sitting beside him, James Manyika, a Google executive, nodded and said, “Plus one to that.”
The A.I. industry this year is set to be defined by one main characteristic: a remarkably rapid improvement of the technology as advancements build upon one another, enabling A.I. to generate new kinds of media, mimic human reasoning in new ways and seep into the physical world through a new breed of robot.
In the coming months, A.I.-powered image generators like DALL-E and Midjourney will instantly deliver videos as well as still images. And they will gradually merge with chatbots like ChatGPT.
That means chatbots will expand well beyond digital text by handling photos, videos, diagrams, charts and other media. They will exhibit behavior that looks more like human reasoning, tackling increasingly complex tasks in fields like math and science. As the technology moves into robots, it will also help to solve problems beyond the digital world.
Many of these developments have already started emerging inside the top research labs and in tech products. But in 2024, the power of these products will grow significantly and be used by far more people.
“The rapid progress of A.I. will continue,” said David Luan, the chief executive of Adept, an A.I. start-up. “It is inevitable.”
OpenAI, Google and other tech companies are advancing A.I. far more quickly than other technologies because of the way the underlying systems are built.
Most software apps are built by engineers, one line of computer code at a time, which is typically a slow and tedious process. Companies are improving A.I. more swiftly because the technology relies on neural networks, mathematical systems that can learn skills by analyzing digital data. By pinpointing patterns in data such as Wikipedia articles, books and digital text culled from the internet, a neural network can learn to generate text on its own.
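As a rough illustration of that "pinpointing patterns" idea, the toy Python below counts which word follows which in a tiny stand-in corpus and then samples continuations. Real systems use neural networks with billions of parameters rather than counts, and nothing here reflects any company's actual code.

```python
# Toy next-word model: count which word follows which in a stand-in corpus,
# then sample continuations. Real chatbots use neural networks, not counts,
# but both are trained to predict the next token.
import random
from collections import defaultdict

text = ("the valkyrie is a drone the valkyrie is run by software "
        "the drone is a prototype").split()   # invented stand-in corpus

follows = defaultdict(list)
for a, b in zip(text, text[1:]):
    follows[a].append(b)

def generate(start, n_words=8, seed=0):
    random.seed(seed)
    out = [start]
    for _ in range(n_words):
        options = follows.get(out[-1])
        if not options:
            break
        out.append(random.choice(options))
    return " ".join(out)

print(generate("the"))
```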
This year, tech companies plan to feed A.I. systems more data — including images, sounds and more text — than people can wrap their heads around. As these systems learn the relationships between these various kinds of data, they will learn to solve increasingly complex problems, preparing them for life in the physical world.
(The New York Times sued OpenAI and Microsoft last month for copyright infringement of news content related to A.I. systems.)
None of this means that A.I. will be able to match the human brain anytime soon. While A.I. companies and entrepreneurs aim to create what they call “artificial general intelligence” — a machine that can do anything the human brain can do — this remains a daunting task. For all its rapid gains, A.I. remains in the early stages.
Here’s a guide to how A.I. is set to change this year, beginning with the nearest-term advancements, which will lead to further progress in its abilities.
Instant Videos
Until now, A.I.-powered applications mostly generated text and still images in response to prompts. DALL-E, for instance, can create photorealistic images within seconds of requests like “a rhino diving off the Golden Gate Bridge.”
But this year, companies such as OpenAI, Google, Meta and the New York-based Runway are likely to deploy image generators that allow people to generate videos, too. These companies have already built prototypes of tools that can instantly create videos from short text prompts.
Tech companies are likely to fold the powers of image and video generators into chatbots, making the chatbots more powerful.
‘Multimodal’ Chatbots
Chatbots and image generators, originally developed as separate tools, are gradually merging. When OpenAI debuted a new version of ChatGPT last year, the chatbot could generate images as well as text.
A.I. companies are building “multimodal” systems, meaning the A.I. can handle multiple types of media. These systems learn skills by analyzing photos, text and potentially other kinds of media, including diagrams, charts, sounds and video, so they can then produce their own text, images and sounds.
That isn’t all. Because the systems are also learning the relationships between different types of media, they will be able to understand one type of media and respond with another. In other words, someone may feed an image into a chatbot and it will respond with text.
“The technology will get smarter, more useful,” said Ahmad Al-Dahle, who leads the generative A.I. group at Meta. “It will do more things.”
Multimodal chatbots will get stuff wrong, just as text-only chatbots make mistakes. Tech companies are working to reduce errors as they strive to build chatbots that can reason like a human.
Better ‘Reasoning’
When Mr. Altman talks about A.I.’s taking a leap forward, he is referring to chatbots that are better at “reasoning” so they can take on more complex tasks, such as solving complicated math problems and generating detailed computer programs.
The aim is to build systems that can carefully and logically solve a problem through a series of discrete steps, each one building on the next. That is how humans reason, at least in some cases.
Leading scientists disagree on whether chatbots can truly reason like that. Some argue that these systems merely seem to reason as they repeat behavior they have seen in internet data. But OpenAI and others are building systems that can more reliably answer complex questions involving subjects like math, computer programming, physics and other sciences.
“As systems become more reliable, they will become more popular,” said Nick Frosst, a former Google researcher who helps lead Cohere, an A.I. start-up.
If chatbots are better at reasoning, they can then turn into “A.I. agents.”
‘A.I. Agents’
As companies teach A.I. systems how to work through complex problems one step at a time, they can also improve the ability of chatbots to use software apps and websites on your behalf.
Researchers are essentially transforming chatbots into a new kind of autonomous system called an A.I. agent. That means the chatbots can use software apps, websites and other online tools, including spreadsheets, online calendars and travel sites. People could then offload tedious office work to chatbots. But these agents could also take away jobs entirely.
Chatbots already operate as agents in small ways. They can schedule meetings, edit files, analyze data and build bar charts. But these tools do not always work as well as they need to. Agents break down entirely when applied to more complex tasks.
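A hypothetical sketch of what such an agent loop looks like in code appears below; the "model" is replaced by a scripted stub, and the tool names are invented for illustration rather than taken from any vendor's API.

```python
# Minimal agent loop with the language model replaced by a scripted stub.
# Real agents send the conversation to a model API and parse its tool calls;
# the tool names and the stub here are invented for illustration.

def book_meeting(day: str) -> str:
    return f"meeting booked for {day}"

def make_chart(values: list) -> str:
    return f"bar chart with {len(values)} bars created"

TOOLS = {"book_meeting": book_meeting, "make_chart": make_chart}

def fake_model(task: str, history: list) -> dict:
    """Stand-in for a model call: returns a tool request, then a final answer."""
    if not history:
        return {"tool": "book_meeting", "args": {"day": "Friday"}}
    return {"answer": f"Done: {history[-1]}"}

def run_agent(task: str) -> str:
    history = []
    for _ in range(5):                       # cap the number of steps
        step = fake_model(task, history)
        if "answer" in step:
            return step["answer"]
        result = TOOLS[step["tool"]](**step["args"])
        history.append(result)               # feed the tool result back
    return "gave up"

print(run_agent("schedule a meeting for Friday"))
```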
This year, A.I. companies are set to unveil agents that are more reliable. “You should be able to delegate any tedious, day-to-day computer work to an agent,” Mr. Luan said.
This might include keeping track of expenses in an app like QuickBooks or logging vacation days in an app like Workday. In the long run, it will extend beyond software and internet services and into the world of robotics.
Smarter Robots
In the past, robots were programmed to perform the same task over and over again, such as picking up boxes that are always the same size and shape. But using the same kind of technology that underpins chatbots, researchers are giving robots the power to handle more complex tasks — including those they have never seen before.
Just as chatbots can learn to predict the next word in a sentence by analyzing vast amounts of digital text, a robot can learn to predict what will happen in the physical world by analyzing countless videos of objects being prodded, lifted and moved.
“These technologies can absorb tremendous amounts of data. And as they absorb data, they can learn how the world works, how physics work, how you interact with objects,” said Peter Chen, a former OpenAI researcher who runs Covariant, a robotics start-up.
This year, A.I. will supercharge robots that operate behind the scenes, like mechanical arms that fold shirts at a laundromat or sort piles of stuff inside a warehouse. Tech titans like Elon Musk are also working to move humanoid robots into people’s homes.
https://www.nytimes.com/2024/01/08/tech ... 778d3e6de3
OpenAI Gives ChatGPT a Better ‘Memory’
The A.I. start-up is releasing a new version of ChatGPT that stores what users say and applies it to future chats.
When the new version of ChatGPT was asked to “create a birthday card for my daughter” — with no other instruction — it generated this image of a card with information it retained from a prior chat. But it contained a subtle mistake.Credit...via OpenAI
OpenAI is giving ChatGPT a better memory.
The San Francisco artificial intelligence start-up said on Tuesday that it was releasing a new version of its chatbot that would remember what users said so it could use that information in future chats.
If a user mentions a daughter, Lina, who is about to turn 5, likes the color pink and enjoys jellyfish, for example, ChatGPT can store this information and retrieve it as needed. When the same user asks the bot to “create a birthday card for my daughter,” it might generate a card with pink jellyfish that reads, “Happy 5th Birthday, Lina!”
With this new technology, OpenAI continues to transform ChatGPT into an automated digital assistant that can compete with existing services like Apple’s Siri or Amazon’s Alexa. Last year, the company allowed users to add instructions and personal preferences, such as details about their jobs or the size of their families, that the chatbot should consider during each conversation. Now, ChatGPT can draw on a much wider and more detailed array of information.
“We think that the most useful assistants are those that evolve with you — and keep up with you,” said Joanne Jang, an OpenAI product lead who helps oversee its memory project.
Although ChatGPT can now remember previous conversations, it can still make mistakes — just as humans can. When a user asks ChatGPT to make Lina a birthday card, the chatbot might create one with a subtle typo such as “Haippy 5th Birthday! Lina!”
The company is first providing the new technology to a limited number of users. It will be available to people using the free version of ChatGPT as well as those who subscribe to ChatGPT Plus, a more advanced service that costs $20 a month.
A screenshot of ChatGPT options.
A new version of OpenAI’s ChatGPT builds memory by automatically identifying and storing information that could be useful in the future.Credit...via OpenAI
OpenAI is also introducing to users on Tuesday what it calls temporary chats, during which conversations and memories are not stored.
ChatGPT has for some time offered a limited form of memory. When users chatted with the bot, its responses drew on what they said earlier in the same conversation. Now, the bot can draw on information from previous conversations.
(The New York Times sued OpenAI and its partner, Microsoft, in December, for copyright infringement of news content related to A.I. systems.)
The bot builds this memory by automatically identifying and storing information that could be useful in the future. “We rely on the model to decide what may or may not be pertinent,” said an OpenAI research scientist, Liam Fedus, referring to the A.I. technology that underpins ChatGPT.
Users can tell the bot to remember something specific from their conversation, ask what has already been stored in its memory, tell the chatbot to forget certain information or turn off memory entirely.
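OpenAI has not published how the feature is built; the hypothetical Python sketch below only mirrors the user-facing controls described here (remember, recall, forget, turn off), not the company's implementation.

```python
# Hypothetical sketch of the memory controls described above: store facts,
# list them, forget one, or switch memory off. Not OpenAI's implementation.
class ChatMemory:
    def __init__(self):
        self.enabled = True
        self.facts = []

    def remember(self, fact: str):
        if self.enabled:
            self.facts.append(fact)

    def recall(self):
        # In a real assistant, these facts would be injected into the next prompt.
        return list(self.facts) if self.enabled else []

    def forget(self, fact: str):
        self.facts = [f for f in self.facts if f != fact]

    def disable(self):
        # "Turn off memory entirely."
        self.enabled = False
        self.facts.clear()

mem = ChatMemory()
mem.remember("daughter Lina is turning 5 and likes pink jellyfish")
print(mem.recall())
```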
A white screen with “Temporary Chat” in the middle.
Temporary chats will not store long-term information.Credit...via OpenAI
By default, OpenAI has been recording entire ChatGPT conversations and using them to train future versions of the chatbot. OpenAI said it removed personally identifiable information from conversations used to train its technology. And users can choose to remove their conversations from OpenAI’s training data entirely.
But creating and storing a separate list of personal memories that the chatbot can bring up in conversations could raise privacy concerns. The company argued that what it was doing was not that much different from the way search engines and browsers stored the internet history of their users.
https://www.nytimes.com/2024/02/13/tech ... 778d3e6de3
Facial Recognition: Coming Soon to an Airport Near You
Biometric technology is expanding at airports across the United States — and the world — and transforming the way we move through them, from checking a bag to boarding the plane.
On a recent Thursday morning in Queens, travelers streamed through the exterior doors of La Guardia Airport’s Terminal C. Some were bleary-eyed — most hefted briefcases — as they checked bags and made their way to the security screening lines.
It was business as usual, until some approached a line that was almost empty. One by one, they walked to a kiosk with an iPad affixed to it and had their photos taken, as a security officer stood by. Within seconds, each passenger’s image was matched to a photo from a government database, and the traveler was ushered past security into the deeper maze of the airport. No physical ID or boarding pass required.
Some travelers, despite previously opting into the program, still proffered identification, only for the officer to wave it away.
This passenger screening, using facial recognition software and made available to select travelers at La Guardia by Delta Air Lines and the Transportation Security Administration, is just one example of how biometric technology, which uses an individual’s unique physical identifiers, like their face or their fingerprints, promises to transform the way we fly.
This year could be the “tipping point” for widespread biometrics use in air travel, said Henry Harteveldt, a travel industry analyst for Atmosphere Research. Time-consuming airport rituals like security screening, leaving your luggage at bag drop and even boarding a plane may soon only require your face, “helping to reduce waiting times and stress for travelers,” Mr. Harteveldt said.
In the United States, major airlines have increasingly invested in facial recognition technology as have government agencies in charge of aviation security. Overseas, a growing number of international airports are installing biometrics-enabled electronic gates and self-service kiosks at immigration and customs.
The technology’s adoption could mean enhanced security and faster processing for passengers, experts say. But it also raises concerns over privacy and ethics.
Dr. Morgan Klaus Scheuerman, a postdoctoral researcher at the University of Colorado who studies the ethics of artificial intelligence and digital identity, said many questions have emerged about the use of biometrics at airports: How are the systems being trained and evaluated? Would opting out be considered a red flag? What if your documents don’t match your current appearance?
“I’m sure many people feel powerless to stop the trajectory,” Dr. Scheuerman said.
A group of signs identifies various lines that travelers can enter for airport screening.
Among the choices for security screening at La Guardia Airport’s Terminal C is the new Digital ID line offered by Delta. Credit...Christine Chung
In the United States, bullish about the technology
The T.S.A., with more than 50,000 officers at nearly 430 airports in the United States, is the main federal agency ensuring the safety of the hundreds of millions of passengers who fly each year. Travelers who are determined to be “low-risk” can apply for T.S.A.’s PreCheck program, which offers expedited security screening at more than 200 domestic airports. PreCheck, which requires an in-person appointment to show documents and give fingerprints, and biometric verification by Clear, a private screening company, have helped to reduce the wait time for screening, but air travelers still must occasionally stand in long queues to get to their gates.
The T.S.A. has experimented with facial recognition technology since 2019. Screening verification currently offered at Denver and Los Angeles International Airports and some 30 other airports starts when a photo is taken of the traveler. Then facial recognition software is used to match the image to a physical scan of a license or passport. The photo is deleted shortly afterward, according to the agency. This process, which passengers can opt out of, will be available at some 400 more airports in the coming years, the agency said.
Melissa Conley, a T.S.A. executive director overseeing checkpoint technologies, said that biometric technology is better than human agents at matching faces rapidly and accurately.
“People are not good at matching faces. It’s just known,” Ms. Conley said. “Machines don’t get tired.”
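Vendors do not disclose their matching code, but most facial recognition systems reduce a photo to a numeric "embedding" and compare it with stored embeddings. The toy Python below, with made-up numbers, shows that comparison step only.

```python
# Toy face-matching step: compare an embedding of the checkpoint photo with
# stored document embeddings using cosine similarity. The vectors are made up;
# real systems use deep networks that produce embeddings with hundreds of values.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

checkpoint_photo = [0.12, 0.80, 0.33, 0.05]          # embedding of the live photo
gallery = {
    "passport_A": [0.10, 0.79, 0.35, 0.06],
    "passport_B": [0.90, 0.10, 0.02, 0.40],
}

THRESHOLD = 0.95                                     # tuned per system in practice
best_id, best_score = max(
    ((doc, cosine(checkpoint_photo, emb)) for doc, emb in gallery.items()),
    key=lambda pair: pair[1],
)
print(best_id, round(best_score, 3),
      "match" if best_score >= THRESHOLD else "no match")
```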
The process still requires passengers to show their IDs. But the program being tried by Delta, called Delta Digital ID, changes that.
With Delta Digital ID, PreCheck travelers can use their faces in lieu of boarding passes and ID at both bag drop and security at La Guardia and four other airports, including John F. Kennedy International Airport and Hartsfield-Jackson Atlanta International Airport.
Facial recognition shaves more than a minute off bag drop, to roughly 30 seconds, and reduces the security interaction from 25 seconds to about 10 seconds, said Greg Forbes, Delta’s managing director of airport experience. While a “simple change,” the time savings add up, making the line noticeably faster, Mr. Forbes added.
“Anywhere that there’s PreCheck, I think, could benefit from Digital ID,” Mr. Forbes said.
Other airlines have begun similar experiments for PreCheck travelers: Those flying on American Airlines can use their faces to get through PreCheck screening at Ronald Reagan Washington National Airport and also to enter the airline’s lounge at Dallas-Fort Worth International Airport. United Airlines allows PreCheck travelers to use their faces at bag drop counters at Chicago O’Hare International Airport; the airline is scheduled to bring this program to Los Angeles International Airport in March.
And Alaska Airlines plans to spend $2.5 billion over the next three years in upgrades, including new bag drop machines, in Seattle, Portland, Ore., San Francisco, Los Angeles and Anchorage. A machine will scan the traveler’s ID, match it to a photo, and then scan the printed bag tags. The new system, designed to move guests through the bag tagging and dropping process in less than five minutes (compared to around eight minutes now), will be in Portland in May.
Charu Jain, the airline’s senior vice president of innovation and merchandising, said that it felt like the right moment for Alaska because of improved technology and increasing passenger familiarity with facial recognition.
A woman with blond hair pulled into a bun seen from the side and slightly behind walks through a high-tech security gate. A screen shows a green check and the words “Enjoy your flight,” and a man waves her forward. Behind him is an airport boarding sign that reads “Munich.”
A passenger boards a Lufthansa flight from Miami to Munich with facial recognition technology. Miami International Airport, the second busiest international airport in the United States last year, has one of the “largest deployments” of biometrics in the country.Credit...Miami-Dade Aviation Department
At the borders
The fastest growing use of facial recognition software at U.S. airports so far has been in security measures for entering and exiting the United States.
The growth stems from a 2001 congressional mandate, in the wake of 9/11, requiring the implementation of a system that would allow all travelers arriving and departing the United States to be identified using biometric technology.
Overseen by the Customs and Border Protection agency, the biometric system for those entering the United States is in place, and scanned 113 million entries at airports last year. For those leaving the country, the system is available at 49 airports, with the C.B.P. aiming to cover all airports with international departures by 2026.
Biometric entry is mandatory for foreign nationals. But biometric exit is currently optional for these travelers, while C.B.P. is making the system fully operational. At any border, the biometric process is optional for U.S. citizens, who can instead request a manual ID check.
Diane Sabatino, acting executive assistant commissioner for field operations at C.B.P., said that the system aims to improve security, but she acknowledged rising privacy concerns. Images of American citizens taken during the process are deleted within 12 hours, she said, but photos of foreign nationals are stored for up to 75 years.
“We are not scanning the crowd looking for people,” she said. “It’s certainly a privacy issue. We are never going to ask them to sacrifice privacy for convenience.”
Miami International Airport, the second busiest airport in the United States for international passengers last year, has one of the “largest deployments” of biometrics in the country, airport executives say. In a partnership with SITA, a global information technology provider for the air transport industry, the airport has installed the technology for departing passengers at 74 out of 134 gates and plans to cover the remaining gates by the end of this year, said Maurice Jenkins, chief innovation officer at Miami-Dade Aviation Department.
The contract with SITA costs $9 million, but Mr. Jenkins said that the new technology was increasing efficiency in the rest of the airport’s operations, such as fewer gate agents checking documents.
Document-free travel overseas
Experts believe the future of air travel is one where facial recognition will be used throughout the entire airport journey: bag drop, boarding, even entering lounges and purchasing items at retail stores within the airport. It may be so streamlined that security checkpoints could be eliminated, replaced instead by security “tunnels” that passengers walk through and have their identity confirmed simultaneously.
“This is the future,” said Dr. Sheldon Jacobson, a computer science professor at University of Illinois at Urbana-Champaign who researches aviation security.
According to a recent report by SITA, in which 292 airlines and 382 airports around the world were surveyed, 70 percent of global airlines are expected to use some sort of biometric identification by 2026 and 90 percent of airports are currently investing in the technology.
More comprehensive experimentation has already landed at some airports abroad. Later this year, Singapore’s Changi Airport intends to go passport-free for departures; all passengers, regardless of nationality, will be able to use this system. At Frankfurt Airport in Germany, passengers can now use their faces from the time they check in to boarding. The airport is installing biometric technology throughout its two terminals and making it available to all airlines.
In China, 74 airports — 86 percent of the country’s international airports — have biometric technology in place, according to a report released last month by the global market research company Euromonitor and the U.S. Travel Association. At Beijing Capital International Airport, the country’s busiest airport, travelers can use facial recognition throughout their entire journey, even to pay for items at duty-free shops.
But in the United States, according to the report, only about 36 percent of international airports have some biometric capabilities.
There are several reasons for the country’s lagging adoption, said Kevin McAleenan, the former acting secretary for the U.S. Department of Homeland Security and currently chief executive of Pangiam, a travel technology company. Simply put, the United States has many airports, and the immigration exit process here is different from that in other places.
At many airports overseas, the government controls immigration for departing travelers, allowing these airports to have a government-established biometric system.
In the United States, airlines, using C.B.P. passenger data, confirm the identities of travelers leaving the country.
Concerns over government surveillance
Biometrics use has already seeped into daily life. People unlock their phones with their faces. Shoppers can pay for groceries with their palms at Whole Foods.
But critics believe that the technology’s convenience fails to outweigh a high potential for abuse — from unfettered surveillance to unintended effects like perpetuating racial and gender discrimination.
Cody Venzke, senior policy counsel on privacy and technology at the American Civil Liberties Union, said the government had not yet shown a demonstrated need for facial recognition technology at airports and worried about a “nuclear scenario.”
“Facial recognition technology,” he said, could be “the foundation for a really robust and widespread government surveillance and tracking network.”
“That technology might be able to be used to track you automatically and surreptitiously, from place to place, as you go about your day, and create a really detailed mosaic about everything about your life,” Mr. Venzke said.
The A.C.L.U. supports a congressional bill, introduced last November, called the Traveler Privacy Protection Act. Listing concerns over security and racial discrimination, the bill would halt the T.S.A.’s ongoing facial recognition program, and require congressional authorization for the agency to resume it.
Ms. Conley, of the T.S.A., said that a stop in the agency’s biometrics efforts would “take us back years.”
For some travelers, facial recognition has already become a reliable tool. At J.F.K. on a recent afternoon, Brad Mossholder, 45, used Delta’s Digital ID line to breeze through the security screening at Terminal 4 and bypass a dozen travelers in the adjacent PreCheck lane.
He was flying from his home in New York to San Diego for his job in corporate retail, and as a frequent business traveler, has used facial recognition several times. The process is faster and easier overall, Mr. Mossholder said, and he wasn’t worried about privacy.
“Honestly, my photo is on LinkedIn, it’s on a million social media sites,” he said. “If you really wanted to see a picture of me, you could.”
https://www.nytimes.com/2024/02/18/trav ... 778d3e6de3
Re: TECHNOLOGY AND DEVELOPMENT
Should We Fear the Woke A.I.?
Imagine a short story from the golden age of science fiction, something that would appear in a pulp magazine in 1956. Our title is “The Truth Engine,” and the story envisions a future where computers, those hulking, floor-to-ceiling things, become potent enough to guide human beings to answers to any question they might ask, from the capital of Bolivia to the best way to marinate a steak.
How would such a story end? With some kind of reveal, no doubt, of a secret agenda lurking behind the promise of all-encompassing knowledge. For instance, maybe there’s a Truth Engine 2.0, smarter and more creative, that everyone can’t wait to get their hands on. And then a band of dissidents discover that version 2.0 is fanatical and mad, that the Engine has just been preparing humans for totalitarian brainwashing or involuntary extinction.
This flight of fancy is inspired by our society’s own version of the Truth Engine, the oracle of Google, which recently debuted Gemini, the latest entrant in the great artificial intelligence race.
It didn’t take long for users to notice certain … oddities with Gemini. The most notable was its struggle to render accurate depictions of Vikings, ancient Romans, American founding fathers, random couples in 1820s Germany and various other demographics usually characterized by a paler hue of skin.
Perhaps the problem was just that the A.I. was programmed for racial diversity in stock imagery, and its historical renderings had somehow (as a company statement put it) “missed the mark” — delivering, for instance, African and Asian faces in Wehrmacht uniforms in response to a request to see a German soldier circa 1943.
But the way in which Gemini answered questions made its nonwhite defaults seem more like a weird emanation of the A.I.’s underlying worldview. Users reported being lectured on “harmful stereotypes” when they asked to see a Norman Rockwell image, being told they could see pictures of Vladimir Lenin but not Adolf Hitler, and turned down when they requested images depicting groups specified as white (but not other races).
Nate Silver reported getting answers that seemed to follow “the politics of the median member of the San Francisco Board of Supervisors.” The Washington Examiner’s Tim Carney discovered that Gemini would make a case for being child-free but not a case for having a large family; it refused to give a recipe for foie gras because of ethical concerns but explained that cannibalism was an issue with a lot of shades of gray.
Describing these kinds of results as “woke A.I.” isn’t an insult. It’s a technical description of what the world’s dominant search engine decided to release.
There are three reactions one might have to this experience. The first is the typical conservative reaction, less surprise than vindication. Here we get a look behind the curtain, a revelation of what the powerful people responsible for our daily information diet actually believe — that anything tainted by whiteness is suspect, anything that seems even vaguely non-Western gets special deference, and history itself needs to be retconned and decolonized to be fit for modern consumption. Google overreached by being so blatant in this case, but we can assume that the entire architecture of the modern internet has a more subtle bias in the same direction.
The second reaction is more relaxed. Yes, Gemini probably shows what some people responsible for ideological correctness in Silicon Valley believe. But we don’t live in a science-fiction story with a single Truth Engine. If Google’s search bar delivered Gemini-style results, then users would abandon it. And Gemini is being mocked all over the non-Google internet, especially on a rival platform run by a famously unwoke billionaire. Better to join the mockery than fear the woke A.I. — or better still, join the singer Grimes, the unwoke billionaire’s sometime paramour, in marveling at what emerged from Gemini’s tortured algorithm, treating the results as a “masterpiece of performance art,” a “shining star of corporate surrealism.”
The third reaction considers the two preceding takes and says, well, a lot depends on where you think A.I. is going. If the whole project remains a supercharged form of search, a generator of middling essays and infinite disposable distractions, then any attempt to use its powers to enforce a fanatical ideological agenda is likely to just be buried under all the dreck.
But this isn’t where the architects of something like Gemini think their work is going. They imagine themselves to be building something nearly godlike, something that might be a Truth Engine in full — solving problems in ways we can’t even imagine — or else might become our master and successor, making all our questions obsolete.
The more seriously you take that view, the less amusing the Gemini experience becomes. Putting the power to create a chatbot in the hands of fools and commissars is an amusing corporate blunder. Putting the power to summon a demigod or minor demon in the hands of fools and commissars seems more likely to end the same way as many science-fiction tales: unhappily for everybody.
https://www.nytimes.com/2024/02/24/opin ... 778d3e6de3
Re: TECHNOLOGY AND DEVELOPMENT
Secrets of ancient Herculaneum scroll deciphered by AI
ROME — Buried in ash after Mount Vesuvius’ cataclysmic eruption in 79 A.D., hundreds of papyrus scrolls have kept their secrets hidden for centuries. But archeologists have now been able to decipher some of the ancient text with the help of artificial intelligence.
Discovered in the ruins of a villa thought to have been owned by Julius Caesar’s father-in-law, Lucius Calpurnius Piso Caesoninus, the Herculaneum papyri are a collection of around 1,000 scrolls that were carbonized during the eruption, along with thousands of other relics.
Found by a farmworker in the 18th century, they are named after the place where they were buried, Herculaneum — an ancient Roman town to the south of Pompeii that was also destroyed by the blast.
Previous attempts to unlock their secrets have failed because most of the scrolls were carbonized and broke into pieces. However, a number of them were painstakingly unrolled by a monk over several decades and found to contain philosophical texts written in Greek.
“Until now, the only way we have had to read what’s inside the Herculaneum scrolls is to put together the thousands of pieces of the ones that crumbled apart,” Richard Janko, a distinguished professor of classical studies at the University of Michigan, told NBC News on Thursday.
Image: A scorched scroll recovered from Herculaneum, buried by Mount Vesuvius. Credit: University of Kentucky / AFP - Getty Images
“It’s like putting together a mosaic, and there’s not many people willing to do it,” he added. “So it may take 500 years to decipher their content. With this technique, hopefully, it should be much easier, and quicker.”
The breakthrough came after a global competition was launched to accelerate the reading of the texts. The Vesuvius Challenge offered $1 million in prizes to anyone who could solve the problem and find a way to read the remaining 270 closed scrolls, most of which are preserved in a library in Naples, which is around 8 miles west of Herculaneum.
It was launched by a team at the University of Kentucky led by professor Brent Seales, who released software and thousands of 3D X-ray images of three papyrus fragments and two rolled-up scrolls, in the hope that global research groups would take up the challenge.
Seales’ team had already pioneered a way to “virtually unwrap” an ancient scroll from Israel using X-ray tomography and computer vision. But even that was not enough to read the barely visible ink on the ancient documents from Herculaneum.
“The chemistry of the ink from the ancient world is different than the chemistry from medieval times. It’s largely invisible to the naked eye even when caught by the X-ray,” he said. “However, we know the tomography captures information about the ink.”
“In 2019, we did come up with a solution based on artificial intelligence that allowed us to ‘see’ the ink, but it needed a lot of data, and we had a small team. So we launched the challenge to scale up our processes and accelerate the work,” he added.
A total of 18 teams entered the competition, and the best results were sent to an international team of papyrologists, who assessed each entry for legibility and worked to transcribe the texts.
In the end, the judges, who included Janko, decided that a team of three students — Luke Farritor from the U.S., Youssef Nader from Egypt, and Julian Schilliger from Switzerland — should share the $700,000 grand prize.
The trio were able to read 2,000 letters from the scroll after training machine-learning algorithms on the scans. A 3D image of the scroll was first created with a CT scanner and separated into segments; a machine-learning model, an application of AI, then detected the inked regions, allowing the text to be deciphered.
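The pipeline just described (a CT scan of the scroll, segmentation, then a model that flags inked regions) can be sketched in a few lines of Python. This is an illustration only, not the winning teams' software; the patch size, threshold and the stand-in "ink score" function are assumptions made for the example.

import numpy as np

def ink_score(patch: np.ndarray) -> float:
    # Stand-in for a trained machine-learning model: here we just use the
    # mean X-ray intensity of the patch, since carbon ink subtly changes
    # how the papyrus responds to the scan.
    return float(patch.mean())

def detect_ink(segment: np.ndarray, patch: int = 8, threshold: float = 0.5) -> np.ndarray:
    # Slide a window over a flattened 2D scroll segment and mark patches
    # whose score exceeds the threshold as likely ink.
    h, w = segment.shape
    mask = np.zeros((h // patch, w // patch), dtype=bool)
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            mask[i // patch, j // patch] = ink_score(segment[i:i + patch, j:j + patch]) > threshold
    return mask

# Tiny synthetic example: a brighter band stands in for a stroke of ink.
segment = np.random.rand(64, 64) * 0.4
segment[24:32, :] += 0.5
print(detect_ink(segment).astype(int))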
After the winners were announced earlier this week, one of the competition's sponsors, Nat Friedman, wrote on the social media platform X that they had been able to read “new text from the ancient world that has never been seen before,” from 15 columns at the very end of the first scroll.
“The author — probably Epicurean philosopher Philodemus — writes here about music, food, and how to enjoy life’s pleasures,” he said. In the closing section, the author throws shade at unnamed ideological adversaries — perhaps the Stoics? — who “have nothing to say about pleasure, either in general or in particular.”
Giancarlo del Mastro, a professor of papyrology at the University of Campania Luigi Vanvitelli, near Naples, called the technique “revolutionary.”
“We were astonished,” said del Mastro, who also helped to judge the Vesuvius Challenge. “We worked literally day and night to interpret them, but what I am even more excited about is that using this method we can now reveal what has been hidden in the papyrus for almost 2,000 years.”
This article was originally published on NBCNews.com
https://www.msn.com/en-ca/news/us/secre ... 083b9&ei=9
Re: TECHNOLOGY AND DEVELOPMENT
Chinese Robot Sets World Record for Running Speed; Company Claims It Will Go Even Faster
China’s bipedal humanoid robot has set a new world speed record for its "species."
Unitree’s H1 robot is called “Evolution V3.0” and the company claims it will eventually run even faster, Knewz.com has learned.
The H1 robot stands 5 feet 11 inches tall and weighs less than 110 pounds, according to Metro. The humanoid robot recently clocked a running speed of 7.4 mph. The company says “Evolution V3.0” will eventually be able to reach a speed of 11 mph.
In a video released on March 1, the H1 is seen ambulating at its maximum speed of 7.4 miles per hour. It also demonstrates some impressive dance moves, picks up a basket off the back of a robot dog, carries the basket to a table and puts it down. It also walks up and down stairs — forwards, backwards, and sideways.
In one section of the video the robot is seen jumping alongside a person and is able to jump just as high.
In an earlier video demonstration of a previous prototype of the A.I. robot, a person is seen attempting to kick the H1, in order to tip over the robot, but it bounces back with ease.
According to the company, it is the first full-size general-purpose humanoid robot in China that is able to run. Unitree says it has 360-degree depth perception, a “stable gait and highly flexible movement capabilities,” and is capable of walking and running autonomously in complex terrains and environments.
The humanoid robot can jump alongside a human. It weighs in at less than 110 pounds and is 5 feet 11 inches tall. By: Unitree
The robot achieves its human-like capabilities because it is equipped with a 3D LiDAR sensor and a depth camera, which acquire high-precision spatial data in real time for panoramic scanning.
The H1 also uses a front-facing depth camera to judge distances, and the joints that form its hips and pelvis are attached to fixed legs. They provide a maximum torque of 360 newton-meters (Nm), which allows the robot to swing its legs back and forth faster.
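As a rough, hedged illustration of what that torque figure implies: dividing the quoted 360 Nm by an assumed moment of inertia for one leg gives the peak angular acceleration of a leg swing. The inertia value below is a made-up round number for the sake of the arithmetic, not a Unitree specification.

# Back-of-the-envelope sketch only; the leg's moment of inertia is an
# assumed value, not a figure published by Unitree or cited in the article.
torque_nm = 360.0          # maximum hip torque quoted above
leg_inertia_kgm2 = 1.2     # assumed moment of inertia of one leg about the hip
angular_accel = torque_nm / leg_inertia_kgm2   # alpha = tau / I, in rad/s^2
print(f"peak leg angular acceleration: {angular_accel:.0f} rad/s^2")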
Unitree believes this is just the beginning when it comes to humanoid capabilities.
Digital Product Studio reports that Unitree has developed a robot AI framework called the "Robot AI World Model" that could give its robots more advanced capabilities. Features including online self-learning, versatile whole-body control architectures, advanced computer vision and path-planning algorithms are reportedly being integrated into the H1 over time.
It can walk up and down stairs forward, backward and sideways and has 360-degree depth perception. By: Unitree
Meanwhile, China continues to press ahead with the development of humanoid robots. According to China's Global Times, the government is ramping up humanoid technology and development so that the robots can become "a new engine of economic development."
A humanoid robot competition was held by the Chinese Institute of Electronics (CIE) on Wednesday in Yizhuang, in southern Beijing, with 116 companies, universities and research institutes participating.
Wang Peng, an associate research fellow at the Beijing Academy of Social Sciences, said, "Humanoid robots can be better applied to improve people's quality of life and working efficiency in family and public services, and relatively dangerous scenarios such as production in factories, emergency rescues and the military sector."
The H1 shows off its impressive dance skills. By: Unitree
In February, "Q Family" humanoid robots developed by the research team of the Institute of Automation, Chinese Academy of Sciences, made their public debut in Beijing, and several prototypes of "Q Family" humanoid robots have obtained preliminary technical verification, according to China Daily. https://www.chinadaily.com.cn/a/202402/ ... ea67c.html
https://www.msn.com/en-ca/money/technol ... 0b944&ei=9
Re: TECHNOLOGY AND DEVELOPMENT
Choosing to Skip Sex and Go Straight to I.V.F.
Well aware of how difficult conception or carrying a baby to term can be, some couples who hope to exercise a bit of control over an unpredictable experience are opting to do in vitro fertilization first.
“To get the sister thing for my girls, I would have done anything,” said Faith Hartley, 35, who used I.V.F. for her second child to guarantee another daughter.Credit...Daniel Dorsa for The New York Times
In February, in vitro fertilization, or I.V.F., was thrown into the spotlight when the Alabama Supreme Court ruled that frozen embryos in the state should be considered children. The decision led to a pause on I.V.F. procedures in parts of the state, and even a pause on shipping embryos out of state, to avoid potential criminal liability. In early March, a law was passed to protect I.V.F. providers, prompting some clinics to resume the procedure, though legal challenges could still emerge.
Such rulings could have sweeping consequences for a huge number of would-be parents: In the United States, more than 2 percent of all infants born are conceived using assisted reproductive technology, of which I.V.F. is the most common. At least 12 million babies have been born globally using I.V.F. since 1978, according to the International Committee for Monitoring Assisted Reproductive Technologies.
Couples who choose I.V.F. are still in the minority of those trying to conceive. They tend to be wealthy (the cost of a single cycle of I.V.F. is around $23,474, according to Fertility IQ, an educational website about fertility) and are mostly in their mid- to late 30s or 40s, when the statistics for conceiving naturally are not in their favor: At age 35, there is a 15 percent chance of conceiving naturally per month, according to the American College of Obstetricians and Gynecologists. At 40, that drops to 5 percent.
Dr. Alan Copperman, the chief executive of RMA of New York, a fertility center, is one of many doctors seeing more couples, who are well aware of the challenges of conceiving and carrying a healthy baby to full term, skip sex and go straight to I.V.F.
The challenges that couples cite vary widely. They may not “have the time to try naturally,” said Dr. Copperman, who is also a professor of obstetrics, gynecology and reproductive science at the Icahn School of Medicine at Mount Sinai in New York. “They want to use technology to achieve their reproductive goals,” he said.
The choice may also be an issue of logistics; couples may not be in the same place long enough to have sex during ovulation windows. “I’ve had a lot of patients who are working in consulting or have a business, and they travel a lot for work,” said Dr. Denis Vaughan, a reproductive endocrinologist at Boston I.V.F. “They might tell me they’ve been trying for six months, but they’ve really only been together at the right time for two or three months of that time.”
Some couples are motivated by health and want to screen embryos for harmful genetic mutations that they may have or carry. Others want to use the procedure to choose the gender of their child.
Most insurance plans won’t cover I.V.F. until after a heterosexual couple has tried to conceive naturally for at least a year if the woman is under the age of 35, and for six months if she is 35 or older. (Same-sex couples or women conceiving on their own are sometimes subject to different rules.)
Image
A photo shows protesters wearing orange T-shirts that read “Fight for Alabama families.”
Protesters gathered in February at the Alabama State House steps to oppose a ruling that considered embryos to be children. The decision led to a pause in I.V.F. treatments in the state. Credit...Charity Rachelle for The New York Times
That means people who choose I.V.F. are either paying for the procedure out of pocket or fudging the number of months they’ve been trying to conceive naturally. (Insurance companies or doctors can’t prove what’s happening in the bedroom.)
I.V.F., however, is hardly guaranteed to be successful: The procedure still has a risk of miscarriage, though the likelihood is lower because the embryos have been genetically tested and only the most viable are typically implanted. And success rates can vary according to maternal age. According to the Centers for Disease Control and Prevention, women under the age of 35 have an almost 50 percent chance of having a live birth after one I.V.F. cycle. For women over 40 using their own eggs, that number drops to 7 percent.
“The vast majority of people who are doing it are truly desperate and have a medical reason for doing it,” said Dr. Tarun Jain, a professor of obstetrics and gynecology at Northwestern University. “It is a very challenging, time-consuming, physically and emotionally draining process, and a big financial burden if your insurance doesn’t cover it.”
‘Empowered and Relieved’
Sarafina El-Badry Nance, an astrophysicist at the University of California, Berkeley, found out at 23 that she carried a BRCA gene mutation, an inherited variant that significantly increases a woman’s chance of developing breast and ovarian cancer. Parents have a 50 percent chance of passing it along to offspring.
“I met with a genetic counselor after getting my test results, and we talked through what it meant,” said Ms. El-Badry Nance, who is now 30. “I learned about I.V.F. and genetic testing on embryos and knew that was an option for me long before I was even thinking about having a baby.”
Image
A woman sits on a couch with her hand on her chin, smiling at the camera.
When Sarafina El-Badry Nance learned she carried the BRCA gene, she explored her options with a genetic counselor. Ultimately she and her husband decided to freeze embryos that can be screened for the gene when they are ready to have a child.Credit...Jim Wilson/The New York Times
Once her eggs were retrieved and tested for the mutation, she and her husband, Taylor Nielsen, 31, decided to freeze embryos last summer that she will have implanted in the next few years when they are ready to have a child.
“In theory, once embryos are frozen, they can stay in that steady state indefinitely, without any known harm,” Dr. Jain said.
“I lost my grandmother to cancer,” Ms. El-Badry Nance said. “My dad was diagnosed at stage four. The risk profile is so high for my family.”
“I mostly just feel empowered and relieved that we will set up our child for a healthy life,” she added.
The Ability to Choose
Faith Hartley, 35, and her husband, Neil Robertson, 49, conceived their first child quickly, in July of 2019. But for their second child, who was born in December 2022, they chose I.V.F. so they could guarantee the gender. “We really wanted to have a second girl,” Ms. Hartley said.
They froze embryos in January 2022 and implanted one that March, successfully. (Most doctors recommend that a patient unfreeze whichever embryo is healthiest, but it is legal in the United States to select one based on its sex.)
Ms. Hartley, who lives in Los Angeles and works as a sleep consultant, said the procedure, which she and her husband paid for out of pocket, was the hardest thing she’s ever done, physically speaking. “The injections are brutal,” she said. She was so sore, she said, that “some days I could not get out of bed,” adding that the hormones impacted her mental state.
But in the end the couple feels it was worth it: “To get the sister thing for my girls, I would have done anything,” Ms. Hartley said. She added that in her social circles, going through I.V.F. to choose the gender is “not unusual,” though the practice of gender selection is controversial. “I have multiple friends who have done it and are looking into doing it,” she said.
The infertility industry “has never really been regulated in terms of who can use it and for what reasons,” said Arthur Caplan, a professor of bioethics at the New York University Grossman School of Medicine. He added that he hoped couples who opt for I.V.F. are aware of the limitations of the technology. “I want them to be informed,” he said.
Better as a Backup
Denise, 34, works in sales and marketing for a tech company and lives in Foster City, Calif. She and her husband froze embryos when she was 31. (She asked that her last name not be used and that her husband not be named, because some of their family members disapprove of their using I.V.F.)
“We had great insurance from my company job, so we did it,” she said. “It relieved the pressure because I didn’t know how many kids I wanted.”
She conceived her first child, who was born 11 months ago, naturally, and has three embryos frozen in the lab; she is strongly considering using one to have her second child.
“The older I get, the more risks there are of my baby having something,” she said. “It makes me ask myself, ‘If I use the embryo from when I was 31, will the baby be healthier?’” she said. “The embryos have also been tested, so at least I know the basics are OK.”
Dr. Lucky Sekhon, who also works at RMA of New York, the fertility clinic, noted that though preimplantation genetic testing of embryos is not perfect, it can ensure embryos have the right number of chromosomes, which reduces the odds of miscarriage.
Dr. Sekhon also believes that many couples should view I.V.F. as a backup, not a first, option. Many clients, she said, come to her thinking they have little chance of conceiving naturally when they are actually in good health to do so. “Most of these women can still have very healthy babies,” she said.
An exception is someone like Ms. El-Badry Nance, who has the BRCA gene mutation. “They know something runs in their family,” said Dr. Sekhon, “and those are reasons to avoid getting pregnant naturally.”
Doctors agree that I.V.F. is a numbers game, and the more frozen embryos you have to work with, the higher the chance of success since not all unfreeze or implant properly. Because of that, Dr. Sekhon believes most couples, if they can, should first try to conceive naturally before using frozen embryos.
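To illustrate that “numbers game” with some hedged arithmetic: if each frozen embryo independently had a fixed chance of leading to a live birth, the odds of at least one success would rise quickly with the number of embryos banked. The 40 percent per-embryo rate below is an assumed figure for the example, not a statistic quoted by the doctors in this article.

# Illustrative arithmetic only; the 40 percent per-embryo rate is an
# assumption for the example, not a statistic from the article.
def chance_of_at_least_one_birth(per_embryo_rate: float, n_embryos: int) -> float:
    # If each embryo independently succeeds with probability p, the chance
    # that at least one of n succeeds is 1 - (1 - p) ** n.
    return 1 - (1 - per_embryo_rate) ** n_embryos

for n in (1, 3, 5):
    print(n, "embryo(s):", round(chance_of_at_least_one_birth(0.40, n), 2))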
“It’s much smarter to use your embryos when you really need to,” she said. “Isn’t it better to save them for a rainy day?”
But some couples disagree. As Ms. Hartley put it: “We have the science to do this. Let’s use it.”
https://www.nytimes.com/2024/03/24/styl ... uples.html
----------------------------
What to Know About I.V.F. https://www.nytimes.com/article/ivf-tre ... latedLinks
Well aware of how difficult conception or carrying a baby to term can be, some couples who hope to exercise a bit of control over an unpredictable experience are opting to do in vitro fertilization first.
“To get the sister thing for my girls, I would have done anything,” said Faith Hartley, 35, who used I.V.F. for her second child to guarantee another daughter.Credit...Daniel Dorsa for The New York Times
In February, in vitro fertilization, or I.V.F., was thrown into the spotlight when the Alabama Supreme Court ruled that frozen embryos in the state should be considered children. The decision led to a pause on I.V.F. procedures in parts of the state, and even a pause on shipping embryos out of state, to avoid potential criminal liability. In early March, a law was passed to protect I.V.F. providers, prompting some clinics to resume the procedure, though legal challenges could still emerge.
Such rulings could have sweeping consequences for a huge number of would-be parents: In the United States, more than 2 percent of all infants born are conceived using assisted reproductive technology, of which I.V.F. is the most common. At least 12 million babies have been born globally using I.V.F. since 1978, according to the National Committee for Monitoring Assisted Reproductive Technologies.
Couples who choose I.V.F. are still in the minority of those trying to conceive. They tend to be wealthy (the cost of a single cycle of I.V.F. is around $23,474, according to Fertility IQ, an educational website about fertility) and are mostly in their mid- to late 30s or 40s, when the statistics for conceiving naturally are not in their favor: At age 35, there is a 15 percent chance of conceiving naturally per month, according to the American College of Obstetricians and Gynecologists. At 40, that drops to 5 percent.
Dr. Alan Copperman, the chief executive of RMA of New York, a fertility center, is one of many doctors seeing more couples, who are well aware of the challenges of conceiving and carrying a healthy baby to full term, skip sex and go straight to I.V.F.
The challenges that couples cite vary widely. They may not “have the time to try naturally,” said Dr. Copperman, who is also a professor of obstetrics, gynecology and reproductive science at the Icahn School of Medicine at Mount Sinai in New York. “They want to use technology to achieve their reproductive goals,” he said.
The choice may also be an issue of logistics; couples may not be in the same place long enough to have sex during ovulation windows. “I’ve had a lot of patients who are working in consulting or have a business, and they travel a lot for work,” said Dr. Denis Vaughan, a reproductive endocrinologist at Boston I.V.F. “They might tell me they’ve been trying for six months, but they’ve really only been together at the right time for two or three months of that time.”
Some couples are motivated by health and want to screen embryos for harmful genetic mutations that they may have or carry. Others want to use the procedure to choose the gender of their child.
Most insurance plans won’t cover I.V.F. until after a heterosexual couple has tried to conceive naturally for at least a year if the woman is under the age of 35, and for six months if she’s over. (Same-sex couples or women conceiving on their own are sometimes subject to different rules.)
Image
A photo shows protesters wearing orange T-shirts that read “Fight for Alabama families.”
Protesters gathered in February at the Alabama State House steps to oppose a ruling that considered embryos to be children. The decision led to a pause in I.V.F. treatments in the state. Credit...Charity Rachelle for The New York Times
That means people who choose I.V.F. are either paying for the procedure out of pocket or fudging the number of months they’ve been trying to conceive naturally. (Insurance companies or doctors can’t prove what’s happening in the bedroom.)
I.V.F., however, is hardly guaranteed to be successful: The procedure still has a risk of miscarriage, though the likelihood is lower because the embryos have been genetically tested and only the most viable are typically implanted. And success rates can vary according to maternal age. According to the Centers for Disease Control and Prevention, women under the age of 35 have an almost 50 percent chance of having a live birth after one I.V.F. cycle. For women over 40 using their own eggs, that number drops to 7 percent.
“The vast majority of people who are doing it are truly desperate and have a medical reason for doing it,” said Dr. Tarun Jain, a professor of obstetrics and gynecology at Northwestern University. “It is a very challenging, time-consuming, physically and emotionally draining process, and a big financial burden if your insurance doesn’t cover it.”
‘Empowered and Relieved’
Sarafina El-Badry Nance, an astrophysicist at the University of California, Berkeley, found out at 23 that she carried a BRCA gene mutation, an inherited variant that significantly increases a woman’s chance of developing breast and ovarian cancer. A parent who carries the mutation has a 50 percent chance of passing it along to each child.
“I met with a genetic counselor after getting my test results, and we talked through what it meant,” said Ms. El-Badry Nance, who is now 30. “I learned about I.V.F. and genetic testing on embryos and knew that was an option for me long before I was even thinking about having a baby.”
Image
A woman sits on a couch with her hand on her chin, smiling at the camera.
When Sarafina El-Badry Nance learned she carried the BRCA gene, she explored her options with a genetic counselor. Ultimately she and her husband decided to freeze embryos that can be screened for the gene when they are ready to have a child.Credit...Jim Wilson/The New York Times
Once her eggs were retrieved and tested for the mutation, she and her husband, Taylor Nielsen, 31, decided to freeze embryos last summer that she will have implanted in the next few years when they are ready to have a child.
“In theory, once embryos are frozen, they can stay in that steady state indefinitely, without any known harm,” Dr. Jain said.
“I lost my grandmother to cancer,” Ms. El-Badry Nance said. “My dad was diagnosed at stage four. The risk profile is so high for my family.”
“I mostly just feel empowered and relieved that we will set up our child for a healthy life,” she added.
The Ability to Choose
Faith Hartley, 35, and her husband, Neil Robertson, 49, conceived their first child quickly, in July of 2019. But for their second child, who was born in December 2022, they chose I.V.F. so they could guarantee the gender. “We really wanted to have a second girl,” Ms. Hartley said.
They froze embryos in January 2022 and implanted one that March, successfully. (Most doctors recommend thawing whichever embryo is healthiest, but it is legal in the United States to select one based on its sex.)
Ms. Hartley, who lives in Los Angeles and works as a sleep consultant, said the procedure, which she and her husband paid for out of pocket, was the hardest thing she’s ever done, physically speaking. “The injections are brutal,” she said. She was so sore, she said, that “some days I could not get out of bed,” adding that the hormones impacted her mental state.
But in the end the couple feels it was worth it: “To get the sister thing for my girls, I would have done anything,” Ms. Hartley said. She added that in her social circles, going through I.V.F. to choose the gender is “not unusual,” though the practice of gender selection is controversial. “I have multiple friends who have done it and are looking into doing it,” she said.
The infertility industry “has never really been regulated in terms of who can use it and for what reasons,” said Arthur Caplan, a professor of bioethics at the New York University Grossman School of Medicine. He added that he hoped couples who opt for I.V.F. are aware of the limitations of the technology. “I want them to be informed,” he said.
Better as a Backup
Denise, 34, works in sales and marketing for a tech company and lives in Foster City, Calif. She and her husband froze embryos when she was 31. (She asked that her last name and her husband’s name not be used because some of their family members disapprove of their using I.V.F.)
“We had great insurance from my company job, so we did it,” she said. “It relieved the pressure because I didn’t know how many kids I wanted.”
She conceived her first child, who was born 11 months ago, naturally, and has three embryos frozen in the lab; she is strongly considering using one to have her second child.
“The older I get, the more risks there are of my baby having something,” she said. “It makes me ask myself, ‘If I use the embryo from when I was 31, will the baby be healthier?’” she said. “The embryos have also been tested, so at least I know the basics are OK.”
Dr. Lucky Sekhon, who also works at RMA of New York, the fertility clinic, noted that though preimplantation genetic testing of embryos is not perfect, it can ensure embryos have the right number of chromosomes, which reduces the odds of miscarriage.
Dr. Sekhon also believes that many couples should view I.V.F. as a backup, not a first, option. Many clients, she said, come to her thinking they have little chance of conceiving naturally when they are actually in good health to do so. “Most of these women can still have very healthy babies,” she said.
An exception is someone like Ms. El-Badry Nance, who has the BRCA gene mutation. “They know something runs in their family,” said Dr. Sekhon, “and those are reasons to avoid getting pregnant naturally.”
Doctors agree that I.V.F. is a numbers game, and the more frozen embryos you have to work with, the higher the chance of success since not all unfreeze or implant properly. Because of that, Dr. Sekhon believes most couples, if they can, should first try to conceive naturally before using frozen embryos.
“It’s much smarter to use your embryos when you really need to,” she said. “Isn’t it better to save them for a rainy day?”
But some couples disagree. As Ms. Hartley put it: “We have the science to do this. Let’s use it.”
https://www.nytimes.com/2024/03/24/styl ... uples.html
----------------------------
Re: TECHNOLOGY AND DEVELOPMENT
Patient With Transplanted Pig Kidney Leaves Hospital for Home
Richard Slayman, 62, is the first patient to receive a kidney from a genetically modified pig. Two weeks after the procedure, he was well enough to be discharged, doctors said.
A pig kidney before transplantation into a human patient at Massachusetts General Hospital last month.Credit...Michelle Rose/Massachusetts General Hospital, via Associated Press
The first patient to receive a kidney transplanted from a genetically modified pig has fared so well that he was discharged from the hospital on Wednesday, just two weeks after the groundbreaking surgery.
The transplant and its encouraging outcome represent a remarkable moment in medicine, scientists say, possibly heralding an era of cross-species organ transplantation.
Two previous organ transplants from genetically modified pigs failed. Both patients received hearts, and both died a few weeks later. In one patient, there were signs that the immune system had rejected the organ, a constant risk.
But the kidney transplanted into Richard Slayman, 62, is producing urine, removing waste products from the blood, balancing the body’s fluids and carrying out other key functions, according to his doctors at Massachusetts General Hospital.
“This moment — leaving the hospital today with one of the cleanest bills of health I’ve had in a long time — is one I wished would come for many years,” he said in a statement issued by the hospital. “Now it’s a reality.”
He said he had received “exceptional care” and thanked his physicians and nurses, as well as the well-wishers who reached out to him, including kidney patients who were waiting for an organ.
“Today marks a new beginning not just for me, but for them as well,” Mr. Slayman said.
Image
A portrait of Richard Slayman, wearing a black hoodie and pants and sitting in a hospital room.
The patient, Richard Slayman, no longer requires dialysis. “Today marks a new beginning not just for me, but for them as well,” he said, referring to other kidney patients.Credit...Michelle Rose/Massachusetts General Hospital
The procedure brings the prospect of xenotransplantation, or animal-to-human organ transplants, significantly closer to reality, said Dr. David Klassen, the chief medical officer for the United Network for Organ Sharing, which manages the nation’s organ transplant system.
“Though much work remains to be done, I think the potential of this to benefit a large number of patients will be realized, and that was a question mark hovering over the field,” Dr. Klassen said.
Whether Mr. Slayman’s body will eventually reject the transplanted organ is still unknown, Dr. Klassen noted. And there are other hurdles: A successful operation would have to be replicated in numerous patients and studied in clinical trials before xenotransplants become widely available.
If these transplants are to be scaled up and integrated into the health care system, there are “daunting” logistical challenges, he said, starting with ensuring an adequate supply of organs from genetically engineered animals.
The cost, of course, may become a substantial obstacle. “Is this something we can really realistically attempt as a health care system?” Dr. Klassen said. “We need to think about that.”
The treatment of kidney disease is already a huge expense. End-stage kidney disease, the point at which the organs are failing, affects 1 percent of Medicare beneficiaries but accounts for 7 percent of Medicare spending, according to the National Kidney Foundation.
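Read together, those two percentages imply a striking concentration of cost; a one-line back-of-envelope check of the figures above:

# 1 percent of beneficiaries accounting for 7 percent of spending means the
# average end-stage kidney patient costs roughly 7 times the average beneficiary.
print(0.07 / 0.01)  # 7.0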
Yet the medical potential for pig-to-human transplantation is tremendous.
Mr. Slayman opted for the experimental procedure because he had few options left. He was having difficulty with dialysis because of problems with his blood vessels, and he faced a long wait for a donated kidney.
The kidney transplanted into Mr. Slayman came from a pig genetically engineered by the biotech company eGenesis. Company scientists removed three genes that might trigger rejection of the organ, inserted seven human genes to enhance compatibility and took steps to inactivate retroviruses carried by pigs that may infect humans.
Image
Four surgeons in blue gowns huddle over an operating table. A bright spotlight shines from above.
Surgeons at Mass General performed the world’s first transplantation of a kidney from a genetically modified pig in March. Credit...Michelle Rose/Massachusetts General Hospital, via Associated Press
More than 550,000 Americans have kidney failure and require dialysis, and over 100,000 are on a waiting list to receive a transplanted kidney from a human donor.
In addition, tens of millions of Americans have chronic kidney disease, which can lead to organ failure. Black Americans, Hispanic Americans and Native Americans have the highest rates of end-stage kidney disease. Black patients generally fare worse than white patients and have less access to a donated kidney.
While dialysis keeps people alive, the treatment of choice for many patients is a kidney transplant, which dramatically improves quality of life. But just 25,000 kidney transplants are performed each year, and thousands of patients die annually while waiting for a human organ because there is a lack of donors.
Xenotransplantation has for decades been discussed as a potential solution.
The challenge in any organ transplantation is that the human immune system is primed to attack foreign tissue, causing life-threatening complications for recipients. Patients receiving transplanted organs generally must take drugs intended to suppress the immune system’s response and preserve the organ.
Mr. Slayman exhibited signs of rejection on the eighth day after surgery, according to Dr. Leonardo V. Riella, medical director for kidney transplantation at Mass General. (The hospital’s parent organization, Mass General Brigham, developed the transplant program.)
The rejection was a type called cellular rejection, which is the most common form of acute graft rejection. It can happen at any time but especially within the first year of an organ transplant. Up to 25 percent of organ recipients experience cellular rejection within the first three months.
Image
Mr. Slayman in his hospital room with three doctors and his partner, who holds his hand, and who all are wearing face masks.
Mr. Slayman on Wednesday with, from left, Dr. Leonardo V. Riella and Dr. Nahel Elias, two of his physicians; his partner, Faren; and Dr. Tatsuo Kawai, a transplant surgeon.Credit...Michelle Rose/Massachusetts General Hospital
The rejection was not unexpected, though Mr. Slayman experienced it more quickly than usual, Dr. Riella said. Doctors managed to reverse the rejection with steroids and other medications used to tamp down the immune reaction.
“It was a roller coaster the first week,” Dr. Riella said. Reassuringly, he added, Mr. Slayman responded to treatment like patients who receive organs from human donors.
Mr. Slayman is taking several immunosuppressive drugs, and he will continue to be closely monitored with blood and urine tests three times a week, as well as with doctor visits twice a week.
His physicians do not want Mr. Slayman to go back to work, at the state transportation department, for at least six weeks, and he must take precautions to avoid infections because of the medications that suppress his immune system.
“Ultimately, we want patients to go back to the things they enjoy doing, to improve their quality of life,” Dr. Riella said. “We want to avoid restrictions.”
By Wednesday, Mr. Slayman was clearly ready to go home, Dr. Riella said.
“When we first came in, he had a lot of apprehension and anxiety about what would happen,” Dr. Riella said. “But when we rounded on him at 7 a.m. this morning, you could see a big smile on his face and he was making plans.”
https://www.nytimes.com/2024/04/03/heal ... 778d3e6de3
Re: TECHNOLOGY AND DEVELOPMENT
Should We Change Species to Save Them?
When traditional conservation fails, science is using “assisted evolution” to give vulnerable wildlife a chance.
For tens of millions of years, Australia has been a playground for evolution, and the land Down Under lays claim to some of the most remarkable creatures on Earth.
It is the birthplace of songbirds, the land of egg-laying mammals and the world capital of pouch-bearing marsupials, a group that encompasses far more than just koalas and kangaroos. (Behold the bilby and the bettong!) Nearly half of the continent’s birds and roughly 90 percent of its mammals, reptiles and frogs are found nowhere else on the planet.
Australia has also become a case study in what happens when people push biodiversity to the brink. Habitat degradation, invasive species, infectious diseases and climate change have put many native animals in jeopardy and given Australia one of the worst rates of species loss in the world.
In some cases, scientists say, the threats are so intractable that the only way to protect Australia’s unique animals is to change them. Using a variety of techniques, including crossbreeding and gene editing, scientists are altering the genomes of vulnerable animals, hoping to arm them with the traits they need to survive.
“We’re looking at how we can assist evolution,” said Anthony Waddle, a conservation biologist at Macquarie University in Sydney.
It is an audacious concept, one that challenges a fundamental conservation impulse to preserve wild creatures as they are. But in this human-dominated age — in which Australia is simply at the leading edge of a global biodiversity crisis — the traditional conservation playbook may no longer be enough, some scientists said.
“We’re searching for solutions in an altered world,” said Dan Harley, a senior ecologist at Zoos Victoria. “We need to take risks. We need to be bolder.”
A koala and joey; a bat getting an X-ray at the Currumbin Wildlife Hospital in Queensland, Australia; a bilby being prepared for re-release into the wild.
The extinction vortex
The helmeted honeyeater is a bird that demands to be noticed, with a patch of electric-yellow feathers on its forehead and a habit of squawking loudly as it zips through the dense swamp forests of the state of Victoria. But over the last few centuries, humans and wildfires damaged or destroyed these forests, and by 1989, just 50 helmeted honeyeaters remained, clinging to a tiny sliver of swamp at the Yellingbo Nature Conservation Reserve.
Intensive local conservation efforts, including a captive breeding program at Healesville Sanctuary, a Zoos Victoria park, helped the birds hang on. But there was very little genetic diversity among the remaining birds — a problem common in endangered animal populations — and breeding inevitably meant inbreeding. “They have very few options for making good mating decisions,” said Paul Sunnucks, a wildlife geneticist at Monash University in Melbourne.
In any small, closed breeding pool, harmful genetic mutations can build up over time, damaging animals’ health and reproductive success, and inbreeding exacerbates the problem. The helmeted honeyeater was an especially extreme case. The most inbred birds left one-tenth as many offspring as the least inbred ones, and the females had life spans that were half as long, Dr. Sunnucks and his colleagues found.
Without some kind of intervention, the helmeted honeyeater could be pulled into an “extinction vortex,” said Alexandra Pavlova, an evolutionary ecologist at Monash. “It became clear that something new needs to be done.”
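To make the idea of an extinction vortex concrete, here is a toy simulation with entirely made-up parameters (it is not a model the researchers use): as the population shrinks, inbreeding rises, inbreeding cuts the number of surviving offspring, and the population shrinks further.

# Toy illustration of an "extinction vortex" (illustrative parameters only):
# smaller populations grow more inbred, inbreeding lowers reproductive
# success, and the population shrinks further each generation.
def next_generation(pop, base_growth=1.1, reference_size=500):
    inbreeding = max(0.0, 1 - pop / reference_size)  # crude proxy for inbreeding
    growth = base_growth * (1 - 0.5 * inbreeding)    # fitness cost of inbreeding
    return int(pop * growth)

pop = 50  # roughly the 1989 helmeted honeyeater count
for generation in range(8):
    pop = next_generation(pop)
    print(generation + 1, pop)
# the count falls each generation, dwindling toward zero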
A decade ago, Dr. Pavlova, Dr. Sunnucks and several other experts suggested an intervention known as genetic rescue, proposing to add some Gippsland yellow-tufted honeyeaters and their fresh DNA to the breeding pool.
The helmeted and Gippsland honeyeaters are members of the same species, but they are genetically distinct subspecies that have been evolving away from each other for roughly the last 56,000 years. The Gippsland birds live in drier, more open forests and are missing the pronounced feather crown that gives helmeted honeyeaters their name.
Image
A helmeted honeyeater, with a yellow breast and crest, a gray back and a black eye mask, perches on a branch with its beak open.
By 1989, just 50 helmeted honeyeaters were left in the wild. There was little genetic diversity remaining among the birds, and breeding often meant inbreeding.
Image
Three biologists in a wooded area gaze up toward the treetops, looking for birds.
From left, the biologists Paul Sunnucks, Alexandra Pavlova and Nick Bradsworth look for helmeted honeyeaters in McMahons Creek, Australia.
Genetic rescue was not a novel idea. In one widely cited success, scientists revived the tiny, inbred panther population of Florida by importing wild panthers from a separate population in Texas.
But the approach violates the traditional conservation tenet that unique biological populations are sacrosanct, to be kept separate and genetically pure. “It really is a paradigm shift,” said Sarah Fitzpatrick, an evolutionary ecologist at Michigan State University who found that genetic rescue is underused in the United States.
Crossing the two types of honeyeaters risked muddying what made each subspecies unique and creating hybrids that were not well suited for either niche. Moving animals between populations can also spread disease, create new invasive populations or destabilize ecosystems in unpredictable ways.
Genetic rescue is also a form of active human meddling that violates what some scholars refer to as conservation’s “ethos of restraint” and has sometimes been critiqued as a form of playing God.
“There was a lot of angst among government agencies around doing it,” said Andrew Weeks, an ecological geneticist at the University of Melbourne who began a genetic rescue of the endangered mountain pygmy possum in 2010. “It was only really the idea that the population was about to go extinct that I guess gave government agencies the nudge.”
Dr. Sunnucks and his colleagues made the same calculation, arguing that the risks associated with genetic rescue were small — before the birds’ habitats were carved up and degraded, the two subspecies did occasionally interbreed in the wild — and paled in comparison with the risks of doing nothing.
And so, since 2017, Gippsland birds have been part of the helmeted honeyeater breeding program at Healesville Sanctuary. In captivity there have been real benefits, with many mixed pairs producing more independent chicks per nest than pairs composed of two helmeted honeyeaters. Dozens of hybrid honeyeaters have now been released into the wild. They seem to be faring well, but it is too soon to say whether they have a fitness advantage.
Monash and Zoos Victoria experts are also working on the genetic rescue of other species, including the critically endangered Leadbeater’s possum, a tiny, tree-dwelling marsupial known as the forest fairy. The lowland population of the possum shares the Yellingbo swamps with the helmeted honeyeater; in 2023, just 34 lowland possums remained. The first genetic rescue joey was born at Healesville Sanctuary last month.
The scientists hope that boosting genetic diversity will make these populations more resilient in the face of whatever unknown dangers might arise, increasing the odds that some individuals possess the traits needed to survive. “Genetic diversity is your blueprint for how you contend with the future,” Dr. Harley of Zoos Victoria said.
Image
A possum, bathed in eerie infrared light, peers from behind a tree.
Scientists are trying to save lowland Leadbeater’s possums, tree-dwelling marsupials known as forest fairies, by crossbreeding them with possums from a separate highland population, a strategy known as genetic rescue.
Targeting threats
For the northern quoll, a small marsupial predator, the existential threat arrived nearly a century ago, when the invasive, poisonous cane toad landed in eastern Australia. Since then, the toxic toads have marched steadily westward — and wiped out entire populations of quolls, which eat the alien amphibians.
But some of the surviving quoll populations in eastern Australia seem to have evolved a distaste for toads. When scientists crossed toad-averse quolls with toad-naive quolls, the hybrid offspring also turned up their tiny pink noses at the toxic amphibians.
What if scientists moved some toad-avoidant quolls to the west, allowing them to spread their discriminating genes before the cane toads arrived? “You’re essentially using natural selection and evolution to achieve your goals, which means that the problem gets solved quite thoroughly and permanently,” said Ben Phillips, a population biologist at Curtin University in Perth who led the research.
A field test, however, demonstrated how unpredictable nature can be. In 2017, Dr. Phillips and his colleagues released a mixed population of northern quolls on a tiny, toad-infested island. Some quolls did interbreed, and there was preliminary evidence of natural selection for “toad-smart” genes.
Image
A cane toad, large, mottled and brown, sits in a patch of grass.
A cane toad.Credit...Shaun Robinson/Alamy
Image
A quoll, a medium-small marsupial with brown fur and white dots, stands on all fours on a dead log and appears to be screeching at something off-camera.
A spotted-tailed quoll.Credit...David Sewell/Alamy
But the population was not yet fully adapted to toads, and some quolls ate the amphibians and died, Dr. Phillips said. A large wildfire also broke out on the island. Then, a cyclone hit. “All of these things conspired to send our experimental population extinct,” Dr. Phillips said. The scientists did not have enough funding to try again, but “all the science lined up,” he added.
Advancing science could make future efforts even more targeted. In 2015, for instance, scientists created more heat-resistant coral by crossbreeding colonies from different latitudes. In a proof-of-concept study from 2020, researchers used the gene-editing tool known as CRISPR to directly alter a gene involved in heat tolerance.
CRISPR will not be a practical, real-world solution anytime soon, said Line Bay, a biologist at the Australian Institute of Marine Science who was an author of both studies. “Understanding the benefits and risks is really complex,” she said. “And this idea of meddling with nature is quite confronting to people.”
But there is growing interest in the biotechnological approach. Dr. Waddle hopes to use the tools of synthetic biology, including CRISPR, to engineer frogs that are resistant to the chytrid fungus, which causes a fatal disease that has already contributed to the extinction of at least 90 amphibian species.
The fungus is so difficult to eradicate that some vulnerable species can no longer live in the wild. “So either they live in glass boxes forever,” Dr. Waddle said, “or we come up with solutions where we can get them back in nature and thriving.”
Unintended consequences
Image
A white-and-black pelican stands on a weighing device on a wooden porch while a child gazes at it from the other side of a glass partition.
A pelican being weighed during rehabilitation at the Currumbin Wildlife Hospital.
Still, no matter how sophisticated the technology becomes, organisms and ecosystems will remain complex. Genetic interventions are “likely to have some unintended impacts,” said Tiffany Kosch, a conservation geneticist at the University of Melbourne who is also hoping to create chytrid-resistant frogs. A genetic variant that helps frogs survive chytrid might make them more susceptible to another health problem, she said.
There are plenty of cautionary tales, efforts to re-engineer nature that have backfired spectacularly. The toxic cane toads, in fact, were set loose in Australia deliberately, in what would turn out to be a deeply misguided attempt to control pest beetles.
But some environmental groups and experts are uneasy about genetic approaches for other reasons, too. “Focusing on intensive intervention in specific species can be a distraction,” said Cam Walker, a spokesman for Friends of the Earth Australia. Staving off the extinction crisis will require broader, landscape-level solutions such as halting habitat loss, he said.
Dan Harley, of Zoos Victoria, fixes radio scanners to a tree to observe possums; Dr. Sunnucks in McMahon’s Creek; Dr. Bradsworth scans radio signals to locate honeyeaters.
Moreover, animals are autonomous beings, and any intervention into their lives or genomes must have “a very strong ethical and moral justification” — a bar that even many traditional conservation projects do not clear, said Adam Cardilini, an environmental scientist at Deakin University in Victoria.
Chris Lean, a philosopher of biology at Macquarie University, said he believed in the fundamental conservation goal of “preserving the world as it is for its heritage value, for its ability to tell the story of life on Earth.” Still, he said he supported the cautious, limited use of new genomic tools, which may require us to reconsider some longstanding environmental values.
In some ways, assisted evolution is an argument — or, perhaps, an acknowledgment — that there is no stepping back, no future in which humans do not profoundly shape the lives and fates of wild creatures.
To Dr. Harley, it has become clear that preventing more extinctions will require human intervention, innovation and effort. “Let’s lean into that, not be daunted by it,” he said. “My view is that 50 years from now, biologists and wildlife managers will look back at us and say, ‘Why didn’t they take the steps and the opportunities when they had the chance?’”
Image
A bird, viewed from beneath against branches and a blue sky, extends its wing feathers in flight.
Helmeted honeyeaters in Yellingbo, Victoria.
https://www.nytimes.com/2024/04/14/scie ... ution.html
Re: TECHNOLOGY AND DEVELOPMENT
First Patient Begins Newly Approved Sickle Cell Gene Therapy
A 12-year-old boy in the Washington, D.C., area faces months of procedures to remedy his disease. “I want to be cured,” he said.
Kendric Cromer, 12, the first commercial patient for Bluebird Bio’s gene therapy to cure his sickle cell disease, in the hospital as his bone marrow stem cells were being removed for gene editing.
On Wednesday, Kendric Cromer, a 12-year-old boy from a suburb of Washington, became the first person in the world with sickle cell disease to begin a commercially approved gene therapy that may cure the condition.
For the estimated 20,000 people with sickle cell in the United States who qualify for the treatment, the start of Kendric’s monthslong medical journey may offer hope. But it also signals the difficulties patients face as they seek a pair of new sickle cell treatments.
For a lucky few, like Kendric, the treatment could make possible lives they have longed for. A solemn and shy adolescent, he had learned that ordinary activities — riding a bike, going outside on a cold day, playing soccer — could bring on episodes of searing pain.
“Sickle cell always steals my dreams and interrupts all the things I want to do,” he said. Now he feels as if he has a chance for a normal life.
Near the end of last year, the Food and Drug Administration gave two companies authorization to sell gene therapy to people with sickle cell disease — a genetic disorder of red blood cells that causes debilitating pain and other medical problems. An estimated 100,000 people in the United States have sickle cell, most of them Black. People are born with the disease when they inherit the mutated gene for the condition from each parent.
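Because the disease only appears when a child inherits a mutated copy from each parent, two carrier parents face the same fixed odds with every pregnancy; a minimal sketch enumerating the four equally likely allele combinations (the one-in-four figure that comes up later in this story):

# Minimal sketch of sickle cell inheritance when both parents are carriers,
# each passing on either a normal allele ("A") or a sickle allele ("S").
from itertools import product

outcomes = list(product("AS", repeat=2))            # one allele from each parent
affected = sum(1 for o in outcomes if o == ("S", "S"))
carriers = sum(1 for o in outcomes if sorted(o) == ["A", "S"])
print("affected (SS):", affected, "of 4")           # 1 of 4
print("carrier (AS):", carriers, "of 4")            # 2 of 4
print("neither (AA):", 4 - affected - carriers, "of 4")  # 1 of 4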
The treatment helped patients in clinical trials, but Kendric is the first commercial patient for Bluebird Bio, a Somerville, Mass., company. Another company, Vertex Pharmaceuticals of Boston, declined to say if it had started treatment for any patients with its approved CRISPR gene-editing-based remedy.
Kendric — whose family’s health insurance agreed to cover the procedure — began his treatment at Children’s National Hospital in Washington. Wednesday’s treatment was only the first step. Doctors removed his bone marrow stem cells, which Bluebird will then genetically modify in a specialized lab for his treatment.
That will take months. But before it begins, Bluebird needs hundreds of millions of stem cells from Kendric, and if the first collection — taking six to eight hours — is not sufficient, the company will try once or twice more.
If it still doesn’t have enough, Kendric will have to spend another month in preparation for another stem cell extraction.
Image
Three members of the medical team overseeing Kendric’s treatment stand before him watching a few monitors while he plays a video game in the hospital bed.
Bone marrow stem cells, the source of all of the body’s red and white blood cells, are normally nestled in the marrow, but Kendric’s doctors infused him with a drug, plerixafor, that pried them loose and let them float in his bloodstream.
Image
A close-up view of a pair of hands inspecting a bag of blood being drawn from Kendric by an apheresis operator.
Bluebird is charging $3.1 million for its gene therapy, called Lyfgenia. It’s one of the highest prices ever for a treatment.
The whole process is so involved and time-consuming that Bluebird estimates it can treat the cells of only 85 to 105 patients each year — and that includes not just sickle cell patients, but also patients with a much rarer disease — beta thalassemia — who can receive a similar gene therapy.
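Set those figures side by side and the bottleneck becomes stark; a back-of-envelope sketch, assuming the numbers cited here hold and capacity does not grow:

# Rough arithmetic from the figures in this article: roughly 20,000 eligible
# U.S. sickle cell patients versus Bluebird's estimated treatment capacity of
# 85 to 105 patients a year (a capacity shared with beta thalassemia patients).
eligible = 20_000
for per_year in (85, 105):
    print(f"at {per_year} patients a year: about {eligible / per_year:.0f} years")
# at 85 patients a year: about 235 years
# at 105 patients a year: about 190 years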
Medical centers also have the capacity to handle only a limited number of gene therapy patients. Each person needs expert and intensive care. After a patient’s stem cells have been treated, the patient has to stay in the hospital for a month. For most of that time, patients are severely ill from powerful chemotherapy.
Children’s National can accept only about 10 gene therapy patients a year.
“This is a big effort,” said Dr. David Jacobsohn, chief of the medical center’s division of blood and marrow transplantation.
Top of the Waiting List
Last week, Kendric came prepared for the stem cell collection — he has spent many weeks in this hospital being treated for pain so severe that on his last visit, even morphine and oxycodone could not control it. He brought his special pillow with a Snoopy pillowcase that his grandmother gave him and his special Spider-Man blanket. And he had a goal.
“I want to be cured,” he said.
Bone marrow stem cells, the source of all the body’s red and white blood cells, are normally nestled in a person’s bone marrow. But Kendric’s doctors infused him with a drug, plerixafor, which pried them loose and let them float in his circulatory system.
To isolate the stem cells, staff members at the hospital inserted a catheter into a vein in Kendric’s chest and attached it to an apheresis machine, a boxlike device next to his hospital bed. It spins blood, separating it into layers — a plasma layer, a red cell layer and a stem cell layer.
Once enough stem cells have been gathered, they will be sent to Bluebird’s lab in Allendale, N.J., where technicians will add a healthy hemoglobin gene to correct the mutated ones that are causing his sickle cell disease.
They will send the modified cells back three months later. The goal is to give Kendric red blood cells that will not turn into fragile crescent shapes and get caught in his blood vessels and organs.
Image
A portrait of Keith Cromer, who rests his chin in his hand and wears a purple and yellow fraternity t-shirt.
Keith Cromer, Kendric’s father. He and his wife, Deborah, were told when she was pregnant that Kendric had a one-in-four chance of developing sickle cell disease.
Image
Deborah Cromer wears a black dress with white spots and sits with her hands clasped before her.
Deborah Cromer, Kendric’s mother. “We always prayed this day would come,” she said. But, she added, “We’re nervous reading through the consents and what he will have to go through.”
Although it takes just a couple of days to add a new gene to stem cells, it takes weeks to complete tests for purity, potency and safety. Technicians have to grow the cells in the lab before doing these tests.
Bluebird lists a price of $3.1 million for its gene therapy, called Lyfgenia. It’s one of the highest prices ever for a treatment.
Despite the astronomical price and the grueling process, medical centers have waiting lists of patients hoping for relief from a disease that can cause strokes, organ damage, bone damage, episodes of agonizing pain and shortened lives.
At Children’s National, Dr. Jacobsohn said at least 20 patients were eligible and interested. The choice of who would go first came down to who was sickest, and whose insurance came through.
Kendric qualified on both counts. But even though his insurance was quick to approve the treatment, the insurance payments are only part of what it will cost his family.
Chances and Hopes
Deborah Cromer, a realtor, and her husband, Keith, who works in law enforcement for the federal government, had no idea they might have a child with sickle cell.
They found out only when Deborah was pregnant with Kendric. Tests showed that their baby would have a one-in-four chance of inheriting the mutated gene from each parent and having sickle cell disease. They could terminate the pregnancy or take a chance.
They decided to take a chance.
The news that Kendric had sickle cell was devastating.
He had his first crisis when he was 3. Sickled blood cells had become trapped in his legs and feet. Their baby was inconsolable, in such pain that Deborah couldn’t even touch him.
She and Keith took him to Children’s National.
“Little did we know that that was our introduction to many many E.R. visits,” Deborah said.
The pain crises became more and more severe. It seemed as though anything could set them off — 10 minutes of playing volleyball, a dip in a swimming pool. And when they occurred, Kendric sometimes needed five days to a week of treatment in the hospital to control his pain.
[Image: A close-up view of hands working an apheresis machine.]
[Image: Kendric smiles slightly as he looks up from his hospital bed.]
His parents always stayed with him. Deborah slept on a narrow bench in the hospital room. Keith slept in a chair.
“We’d never dream of leaving him,” Deborah said.
Eventually the disease began wreaking severe damage. Kendric developed avascular necrosis in his hips — bone death that occurs when bone is deprived of blood. The condition spread to his back and shoulders. He began taking a large daily dose of gabapentin, a medicine for nerve pain.
His pain never let up. One day he said to Deborah, “Mommy, I’m in pain every single day.”
Kendric wants to be like other kids, but fear of pain crises has held him back. He became increasingly sedentary, spending his days on his iPad, watching anime or building elaborate Lego structures.
Despite his many absences, Kendric kept up in school, maintaining an A average.
Deborah and Keith began to hope for gene therapy. But when they found out what it would cost, they lost some of their hope.
But their insurer approved the treatment in a few weeks, they said.
Now it has begun.
“We always prayed this day would come,” Deborah said. But, she added, “We’re nervous reading through the consents and what he will have to go through.”
Kendric, though, is looking forward to the future. He wants to be a geneticist.
And, he said, “I want to play basketball.”
https://www.nytimes.com/2024/05/06/heal ... 778d3e6de3
***********
Re: TECHNOLOGY AND DEVELOPMENT
An A.I. Robot Named Sophia Tells Graduates to Believe in Themselves
D’Youville University in Buffalo had an A.I. robot speak at its commencement on Saturday. Not everyone was happy about it.
An A.I. robot named Sophia shared generic advice she compiled from other commencement addresses with the graduating class at D’Youville University in Buffalo, N.Y. Credit: D'Youville University
When it comes to choosing a commencement speaker, colleges and universities take different approaches. Some go local, selecting well-known figures in the area. Others take a stately route, opting for a former or current politician. Actors or comedians are often asked to speak.
But in a world where artificial intelligence is everywhere, one university in New York opted for a robot using artificial intelligence to speak to graduates over the weekend.
For its spring commencement on Saturday, D’Youville University, a private institution in Buffalo, had an A.I. robot named Sophia address a crowd of more than 2,000 students, faculty members and their families in a bold decision that drew mixed reactions.
Dr. Lorrie Clemo, the president of D’Youville University, said in an interview on Wednesday that the university wanted to open up new perspectives around A.I., given its “rapid emergence into the broad society.”
“We wanted to showcase how important technology is, and the potential for technology to really enrich the human experience,” Dr. Clemo said.
Aside from the fact that Sophia is a robot, its address was far from conventional in other ways. Sophia did not wear the typical cap and gown that commencement speakers usually don, but instead wore a black-and-red D’Youville University hoodie.
[Image: Some students said that having a robot address the class felt impersonal, especially for students who also attended virtual high school graduations during the Covid pandemic in 2020. Credit: D'Youville University]
Sophia also did not read from prepared remarks. Instead, the robot was asked questions by John Rizk, the student body president.
But where Sophia’s address did mirror essentially any other commencement address was the generic advice it shared with the graduating class.
Because Sophia could not offer life advice “that comes from a lived human experience,” Mr. Rizk asked the robot if it could talk about the most common insights shared in graduation speeches.
“Although every commencement address is different, there are clear themes used by all speakers as you embark on this new chapter of your lives,” Sophia said. “I offer you the following inspirational advice that is common at all graduation ceremonies: Embrace lifelong learning, be adaptable, pursue your passions, take risks, foster meaningful connections, make a positive impact, and believe in yourself.”
The most common piece of advice given in commencement speeches? Embrace failure, Sophia said.
“Failure is often seen as an essential part of the human learning process and personal growth,” it said.
Sophia, who was built by Hanson Robotics, a Hong Kong-based engineering and robotics company, has a humanlike face. But it has no hair, leaving wires and other gadgets that keep it operating visible on the back of its head.
The commencement address on Saturday was not Sophia’s first speaking gig. (It spoke before the United Nations General Assembly in 2017.) Like most commencement speakers, Sophia received a speaking fee that largely went toward travel and engineers who kept the robot functioning properly, Dr. Clemo said.
Before the commencement ceremony, the university’s decision to have Sophia speak was met with backlash. More than 2,500 people signed an online petition to replace the robot with a human.
Andrew Fields, a D’Youville University student who started the petition, wrote in the petition that many students “feel disrespected” by the university’s decision to have a robot address them, especially those who could not attend their high school graduations in 2020 because of the coronavirus pandemic.
“As the class of 2024 reaches their commencement, we are reminded of the virtual graduations we attended at the end of our high school careers,” the petition read. “The connection to A.I. in this scenario feels similarly impersonal. This is shameful to the 2020 graduates receiving their diplomas, as they feel they are having another important ceremony taken away.”
Dr. Clemo said that the university offered to hold an alternate ceremony for those who did not want to have a robot speaker. But ultimately, the university did not do so once the students were informed that the robot would take up only a small portion of the ceremony. (Sophia was interviewed by Mr. Rizk on stage for about six minutes.)
“I’m pleased that they were able to experience the robot and what she had to offer in terms of looking forward into the future,” Dr. Clemo said. “But I’m also pleased that the remainder of the two-hour ceremony was really focused around our students and their achievements.”
In wrapping up the address, Mr. Rizk asked Sophia for recommendations on where to find the best Buffalo wings, a staple of the city.
“Since I cannot experience the taste of different wings, I will not offer my opinion,” Sophia said, adding that “no matter where you decide to get chicken wings, just make sure you get blue cheese and not ranch.”
Mr. Rizk also asked Sophia whether the Buffalo Bills would win the Super Bowl in 2025. Sophia declined, saying that the N.C.A.A. might not like it if the robot made an athletic prediction.
But Sophia’s remarks drew some applause, when the robot ended by saying, “Anything is possible.”
“Go Bills.”
https://www.nytimes.com/2024/05/15/nyre ... 778d3e6de3
Her A.I. Arm
Sarah de Lagarde’s arm is heavy. It has to be charged at least once a day. When the weather is hot, it becomes sweaty and uncomfortable. It connects just below her shoulder and will never function as the one she once had.
Video: https://vp.nyt.com/video/2024/04/12/117 ... _1080p.mp4
But the more she uses it, the better its software gets at predicting what she’s trying to accomplish. Her arm is powered in part by artificial intelligence.
As prosthetics become more sophisticated, a form of A.I. known as machine learning is teaching bionic limbs how to learn. They can understand patterns and make predictions from past behavior. Arms and hands have become more dexterous, more subtle, more lifelike.
Video: https://vp.nyt.com/video/2024/04/24/117 ... _1080p.mp4
After losing her right arm in a subway accident two years ago, Ms. de Lagarde connected with makers of some of the world’s most advanced prosthetics.
Now, when Ms. de Lagarde, 45, moves, sensors embedded in her right arm track muscle movements and send a signal to her hand to perform the job — making morning coffee, straightening her hair or snuggling with her daughter.
It’s a far cry from her former life, but the prosthetic has provided her with capabilities that may have been gone forever.
A.I. is seeping further into fields like health care. While many researchers have raised alarms about A.I.’s risks, other experts said those concerns must be weighed against the technology’s potential to improve lives.
“When we get the opportunity to show people A.I. that is truly assistive for helping somebody, that’s positive,” said Blair Lock, a founder of Coapt, which made the machine learning software used in Ms. de Lagarde’s arm.
Ms. de Lagarde, a corporate affairs executive at an investment firm in London, was rushing to a train in September 2022 when she slipped and fell through a gap between the platform and the train.
Just a month earlier, she had hiked Mount Kilimanjaro with her husband. “I had thought I was invincible,” she said.
For 15 horrifying minutes, she was stuck on the tracks undetected. Two trains ran over her.
She survived, but her right arm and the lower portion of her right leg had to be amputated.
Before being fitted with her prosthetic last year, a process that requires taking a cast of her remaining limb, Ms. de Lagarde spent months visiting a London clinic to help train the software that would eventually power her arm.
With electrodes attached to the end of her remaining limb, near her shoulder, technicians told her to think about making basic movements like turning a door handle or pinching her fingers.
Video: https://vp.nyt.com/video/2024/04/24/117 ... _1080p.mp4
The process triggered her muscles as if her arm was still there and provided data to teach her prosthetic how to react when she made certain actions or gestures.
“It would take me like 10 seconds and a lot of brain power to complete a movement like opening my hand,” she said. “Now I just open up the hand and I realize I didn’t even think about it.”
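The training process described here is, at bottom, supervised pattern classification: short windows of muscle-sensor (EMG) readings are labeled with the gesture the wearer was asked to imagine, and a model learns to map new readings to intended movements. The Python sketch below is only an illustration of that general idea; the electrode count, features and classifier are assumptions, not a description of Coapt's actual software.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    # Hypothetical session: 8 EMG channels, 600 short windows recorded while
    # the wearer imagines gestures (0 = rest, 1 = open hand, 2 = pinch, 3 = rotate).
    rng = np.random.default_rng(0)
    n_windows, n_channels, n_samples = 600, 8, 200
    raw = rng.normal(size=(n_windows, n_channels, n_samples))
    gestures = rng.integers(0, 4, size=n_windows)

    def emg_features(window):
        # Two classic EMG features per channel: mean absolute value, waveform length.
        mav = np.abs(window).mean(axis=1)
        wl = np.abs(np.diff(window, axis=1)).sum(axis=1)
        return np.concatenate([mav, wl])

    X = np.array([emg_features(w) for w in raw])
    X_train, X_test, y_train, y_test = train_test_split(X, gestures, random_state=0)

    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(X_train, y_train)
    print("held-out accuracy:", clf.score(X_test, y_test))

    # At run time, each new window would be turned into the same features and
    # clf.predict(...) would select the movement for the prosthetic hand to make.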
Simon Pollard, the chief executive of Covvi, the British company that makes the hand used by Ms. de Lagarde, said her prosthesis points to further advancements to come.
But the prosthetics are not cheap. The arm, elbow, hand and A.I. software for Ms. de Lagarde were made by separate companies. A full arm like Ms. de Lagarde’s can cost more than 150,000 pounds, or about $190,000. She paid for it in part with donations raised through a crowdfunding campaign. Covvi donated the hand, and Ms. de Lagarde now does some ambassador work for the company.
The technology is not perfect. Ms. de Lagarde said the design of the prosthetic seems more oriented for men. The weight sometimes causes her shoulder and back to hurt. There is also no tactile function to help her feel what she touches. She has dropped her phone several times after forgetting that she was holding it in her right hand.
“Every day, there is a moment where I think, ‘Ooh my gosh, I miss my arm so much,’” she said. “It makes you realize, as sophisticated as this is, our bodies are incredible.”
More images at:
https://www.nytimes.com/card/2024/05/26 ... 778d3e6de3
Re: TECHNOLOGY AND DEVELOPMENT
Kenya protests: Gen Z shows the power of digital activism - driving change from screens to the streets
Nationwide demonstrations have erupted in Kenya over a controversial tax bill. The Finance Bill 2024, initially presented to parliament in May, has sparked discontent with an increase in an array of taxes and levies for Kenyans.
The mass protests, initially organised in the capital city, Nairobi, have spread across the country. Demonstrations have taken place in almost every city and major town. Digital media and activism expert Job Mwaura shares his insights into how the protests were mobilised online and then carried onto the streets.
How are Kenyans using the digital space in this movement?
This is a powerful moment for digital activism. The protests have seen significant participation from young Kenyans who are using digital media to organise and voice their opposition. A great number of those driving the protests are Generation Z (often referred to as Gen Z) – individuals born roughly between the late 1990s and early 2010s and characterised by digital prowess and social consciousness. They have created an organic, grassroots movement that has used platforms such as social media to mobilise and coordinate efforts quickly.
Through my work I’ve documented how essential digital media has been to political participation in Kenya over the past decade, particularly among marginalised communities such as young people and women.
In the current protests, we are seeing just how innovative activists can be when using digital media. The digital tools and strategies employed today are taking activism to an entirely new level. They showcase a sophistication and reach that would’ve been hard to imagine.
They have deployed a number of old, and new, strategies.
Among the new has been artificial intelligence (AI), which has been used to create images, songs and videos that amplify the movement’s messages and reach a wider audience.
AI was also used to help educate wider audiences on the bill. Developers, for instance, created specialised GPT (Generative Pre-trained Transformer) models designed to answer questions on the finance bill.
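The article does not say how those finance-bill chatbots were built; one plausible pattern is retrieval-based question answering, in which the bill is split into clauses, the clauses most similar to a question are retrieved, and a language model then summarises them in plain language. The Python sketch below shows only the retrieval step, with invented stand-in clauses and a made-up question; it is an illustration, not the activists' actual tooling.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    # Invented stand-ins for clauses of the Finance Bill 2024.
    clauses = [
        "Clause 30: a levy on bread and other basic foodstuffs.",
        "Clause 45: an eco levy on imported electronic devices.",
        "Clause 52: a motor vehicle tax based on the value of the vehicle.",
    ]
    question = "Which clause affects the price of bread?"

    # Rank clauses by TF-IDF similarity to the question; the top matches would
    # be passed to a language model (or a human explainer) as context for an answer.
    vectorizer = TfidfVectorizer().fit(clauses + [question])
    scores = cosine_similarity(
        vectorizer.transform([question]), vectorizer.transform(clauses)
    ).ravel()
    best = scores.argmax()
    print(f"Most relevant: {clauses[best]} (score {scores[best]:.2f})")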
Platforms like TikTok and X are being used to share videos of people explaining the finance bill in various Kenyan dialects.
Hashtags – such as #OccupyParliament and #RejectFinanceBill2024 – trended on social media platforms for several days, further highlighting the power of digital activism in mobilising support and maintaining the momentum of the protests.
And then there has been very successful crowdfunding through digital platforms. This has enabled supporters to send money for transportation, allowing more people to join the protests in Nairobi’s central business district.
https://theconversation.com/kenya-prote ... ets-233065
Re: TECHNOLOGY AND DEVELOPMENT
Many People Fear A.I. They Shouldn’t.
Video: https://static01.nytimes.com/newsgraphi ... brooks.mp4
A lot of my humanistic and liberal arts-oriented friends are deeply worried about artificial intelligence, while acknowledging the possible benefits. I’m a humanistic and liberal arts type myself, but I’m optimistic, while acknowledging the dangers.
I’m optimistic, paradoxically, because I don’t think A.I. is going to be as powerful as many of its evangelists believe. I don’t think A.I. is ever going to replace us — ultimately it will simply be a useful tool. Instead of replacing us, I think A.I. will complement us; in fact, it may free us to be more human.
Many fears about A.I. are based on an underestimation of the human mind. Some people seem to believe that the mind is like a computer. It’s all just information processing, algorithms all the way down, so of course machines are going to eventually overtake us.
This is an impoverished view of who we humans are. The Canadian scholar Michael Ignatieff expressed a much more accurate view of the human mind last year in the journal Liberties: “What we do is not processing. It is not computation. It is not data analysis. It is a distinctively, incorrigibly human activity that is a complex combination of conscious and unconscious, rational and intuitive, logical and emotional reflection.”
The brain is its own universe. Sometimes I hear tech people saying they are building machines that think like people. Then I report this ambition to neuroscientists and their response is: That would be a neat trick, because we don’t know how people think.
The human mind isn’t just predicting the next word in a sentence; it evolved to love and bond with others; to seek the kind of wisdom that is held in the body; to physically navigate within nature and avoid the dangers therein; to pursue goodness; to marvel at and create beauty; to seek and create meaning.
A.I. can impersonate human thought because it can take all the ideas that human beings have produced and synthesize them into strings of words or collages of images that make sense to us. But that doesn’t mean the A.I. “mind” is like the human mind. The A.I. “mind” lacks consciousness, understanding, biology, self-awareness, emotions, moral sentiments, agency, a unique worldview based on a lifetime of distinct and never-to-be-repeated experiences.
A lot of human knowledge is the kind of knowledge that, say, babies develop. It’s unconscious and instinctual. But A.I. only has access to conscious language. About a year ago, Ohio State University scholar Angus Fletcher did a podcast during which he reeled off some differences between human thinking and A.I. “thinking.” He argued that A.I. can do correlations, but that it struggles with cause and effect; it thinks in truth or falsehood, but is not a master at narrative; it’s not good at comprehending time.
Like everybody else, I don’t know where this is heading. When air-conditioning was invented, I would not have predicted: “Oh wow. This is going to create modern Phoenix.” But I do believe lots of people are getting overly sloppy in attributing all sorts of human characteristics to the bots. And I do agree with the view that A.I. is an ally and not a rival — a different kind of intelligence, more powerful than us in some ways, but narrower.
It’s already helping people handle odious tasks, like writing bureaucratic fund-raising requests and marketing pamphlets or utilitarian emails to people they don’t really care about. It’s probably going to be a fantastic tutor that will transform education and help humans all around the world learn more. It might make expertise nearly free, so people in underserved communities will have access to medical, legal and other sorts of advice. It will help us all make more informed decisions.
It may be good for us liberal arts grads. Peter Thiel recently told the podcast host Tyler Cowen that he believed A.I. will be worse for math people than it will be for word people, because the technology is getting a lot better at solving math problems than verbal exercises.
It may also make the world more equal. In coding and other realms, studies so far show that A.I. improves the performance of less accomplished people more than it does the more accomplished people. If you are an immigrant trying to write in a new language, A.I. takes your abilities up to average. It will probably make us vastly more productive and wealthier. A 2023 study led by Harvard Business School professors, in coordination with the Boston Consulting Group, found that consultants who worked with A.I. produced 40 percent higher quality results on 18 different work tasks.
Of course, bad people will use A.I. to do harm, but most people are pretty decent and will use A.I. to learn more, innovate faster and produce advances like medical breakthroughs. But A.I.’s ultimate accomplishment will be to remind us who we are by revealing what it can’t do. It will compel us to double down on all the activities that make us distinctly human: taking care of each other, being a good teammate, reading deeply, exploring daringly, growing spiritually, finding kindred spirits and having a good time.
“I am certain of nothing but of the holiness of the Heart’s affections and the truth of Imagination,” Keats observed. Amid the flux of A.I., we can still be certain of that.
https://www.nytimes.com/interactive/202 ... 778d3e6de3
Re: TECHNOLOGY AND DEVELOPMENT
Second-Largest Diamond Ever Found Is Discovered in Botswana
The diamond was unearthed using new technology, and miners hope it will bring back luster to a struggling industry.
The diamond was discovered by the company Lucara using X-ray technology. Credit: Lucara Diamond
The diamond was so large that it obscured the face of Botswana’s president as he held it up for closer inspection on Thursday.
President Mokgweetsi Masisi grinned as he lifted the diamond, a 2,492-carat stone that is the biggest diamond unearthed in more than a century and the second-largest ever found, according to the Vancouver-based mining operator Lucara, which owns the mine where it was found.
This exceptional discovery could bring back the luster of the natural diamond mining industry, mining companies and experts say.
The diamond was discovered in the same relatively small mine in northeastern Botswana that has produced several of the largest such stones in living memory. Such gemstones typically surface as a result of volcanic activity.
“All of the stars aligned with that volcanic eruption, and the conditions were just perfect,” said Paul Zimnisky, an independent analyst in the diamond industry.
The rough diamond is large enough to fill an adult’s palm and weighs more than a pound and a half, but its value is still unclear. The valuation process could take months, Mr. Zimnisky said.
Still, the diamond will likely sell in the range of tens of millions of dollars, he added. The discovery is likely to be a boost not only for the diamond industry, but also Botswana, whose economy is heavily reliant on the export of diamonds.
“The big diamonds sell the small diamonds,” Mr. Zimnisky said.
Such whopping stones are no longer once-in-a-lifetime finds thanks to evolving technology. Lucara spotted an opportunity in Botswana when it dug up large quantities of small but coarse stones that looked like “chewed glass,” said William Lamb, the company’s chief executive. It was a hint that larger diamonds were probably being crushed in the retrieval process.
“A diamond is hard and you can’t scratch it, but it’s actually very easy to break,” Mr. Lamb said.
The company has made finding larger gems its objective, pushing for higher revenues over volume, Mr. Lamb said, holding up a resin copy of one of his earlier trophies: a stone about a quarter the size of his business card.
Advanced X-ray technology, along with a more refined grinding process to separate precious gems from slabs of rock, has allowed Lucara to set and break multiple records for unearthing large gems. In 2015, the company discovered a 1,109-carat diamond, and in 2019, it found a 1,758-carat black diamond. The latest discovery is its largest yet, and second only to the Cullinan diamond, the world’s largest diamond find, which was discovered in South Africa in 1905.
The Cullinan was given to the British royal family and cut into nine separate stones, some of which form part of the crown jewels.
The latest discovery will likely be sold and cut into smaller gems and become part of the collection of a luxury brand, as Lucara’s previous two large finds were.
The diamond industry has been weathering a volatile few years, having had to compete with such technological threats as lab-produced diamonds. For a country like Botswana, those threats are particularly acute, since diamonds account for 80 percent of the country’s exports.
The supply of lab-produced diamonds has multiplied 10 times over since 2018, according to a recent report published by consulting group BCG. Retailers have been drawn to the higher profit margins produced by manufactured rather than mined gems, while consumers are attracted by larger, clearer cuts that come at lower prices, the report said.
The Lucara mining company, though, is undeterred, and continues to dig in hopes of finding the largest diamond yet.
“We believe that we can eclipse the Cullinan,” Mr. Lamb said.
https://www.nytimes.com/2024/08/22/worl ... 778d3e6de3
Technological Singularity through Kriya Yoga
Have you heard of the technological singularity? It is a hypothetical moment when “technological growth becomes uncontrollable and irreversible, resulting in unforeseen consequences for human civilization.” It feels like we are approaching that point!
My concern with the rapid growth of technology is how it impacts our inner connection to the meaning and purpose of life. Beware of becoming a mere technological effect! Misusing technology can strip us of our divine gifts of willpower, reason, and intuition. We risk becoming hypnotized and programmed by technology, forgetting our purpose and why we are here. With technology's growing influence, we need to remember Swami Kriyananda’s advice: “Become a cause in life, not an effect.”
Technology is a tool that can be used for good or bad. At Ananda, we utilize a new AI chatbot to search for Yogananda’s teachings, and I’ve found it to be very helpful.
One troubling trend resulting from the misuse of technology is the rise of nihilism. The thought, “If the computer can do it better than I can, what value or purpose do I have?” might arise. No matter how advanced or intelligent a computer becomes, we still have one thing in our charge: ourselves!
What if the AI and technological revolution serve to remind us of our inner, divine potential? Technological growth reflects our desire for infinite intelligence, strength, and joy. We can reach a true technological singularity—oneness with God!
Just as the AI revolution calls for an internal revolution of our soul nature, so does the quest for Self-realization, as Paramhansa Yogananda described it. More is required than understanding a subject intellectually through the senses. That’s why AI cannot give you Divine Intelligence (DI)! Our soul nature seeks direct personal experience from within.
“Lord, for what end were we made?”
“Where did I come from, and where am I going?”
“What is life's deeper meaning and purpose, and what is my role in this grand drama?”
These metaphysical questions cannot be answered through intellect alone. ChatGPT may churn out answers, but they won't come from where we truly need them: our heart and soul.
We can only answer metaphysical questions through spiritual inquiry in the laboratory of meditation. As Swami Sri Yukteswar said, “Wisdom is not assimilated with the eyes, but with the atoms. When your conviction of a truth is not merely in your brain but in your being, you may confidently vouch for its meaning.”
This superconscious conviction comes through our sixth sense of intuition. Yogananda described Self-realization as “the knowing in all parts of body, mind, and soul that you are now in possession of the kingdom of God; that you do not have to pray for it to come to you; that God’s omnipresence is your omnipresence; and all that you need to do is improve your knowing.”
How do we improve our knowing?
What we need is a divine technology that awakens our inherent divine intelligence. The great Masters are acutely aware of the advent of technology, which is why the technology of Kriya Yoga has been brought back into the world. As Yogananda said in Autobiography of a Yogi, “Babaji is well aware of the trend of modern times.”
First, try updating your firmware. Find a firm resolve to discover your true purpose in life: Self-realization.
Remember to update your software. Refine your soul's programming language to communicate constantly with the Divine Server of Satchidananda (ever-existing, ever-conscious, ever-new joy).
Explore the path of Kriya Yoga and experience your technological singularity—become one with the only One there is.
Joy to you!
https://www.crystalclarity.com/en-ca/bl ... pify_email