TECHNOLOGY AND DEVELOPMENT

Current issues, news and ethics
Post Reply
kmaherali
Posts: 25714
Joined: Thu Mar 27, 2003 3:01 pm

Post by kmaherali »

Can Synthetic Biology Save Us? This Scientist Thinks So.

Drew Endy is squarely focused on the potential of redesigning organisms for useful purposes. He also acknowledges significant challenges.


When the family house in Devon, Pa., caught fire, Drew Endy, then 12, carried out his most cherished possession — his personal computer.

Years later, as a graduate student, Mr. Endy was accepted to Ph.D. programs in biotechnology and political science.

The episodes seem to sum up Mr. Endy, a most unusual scientist: part engineer, part philosopher, whose conversation is laced with references to Descartes and Dylan, as well as DNA.

He’s also an evangelist of sorts. Mr. Endy, a 51-year-old professor of bioengineering at Stanford University, is a star in the emerging field of synthetic biology. He is its most articulate enthusiast, inspiring others to see it as a path to a better world, a transformational technology to feed the planet, conquer disease and combat pollution.

The optimism behind synthetic biology assumes that biology can now largely follow the trajectory of computing, where progress was made possible by the continuous improvement in microchips, with performance doubling and price dropping in half every year or two for decades. The underlying technologies for synthetic biology — gene sequencing and DNA synthesis — are on similar trends.
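The doubling-and-halving arithmetic behind that trajectory is easy to make concrete. A minimal sketch (the starting cost and the two-year halving period are illustrative numbers, not figures from the article):

```python
# Moore's-law-style decline: if cost halves every `halving_period` years,
# the cost after `years` is cost0 * 0.5 ** (years / halving_period).
def projected_cost(cost0, years, halving_period=2.0):
    return cost0 * 0.5 ** (years / halving_period)

# Ten halvings over 20 years cut a $100 cost to under ten cents.
print(projected_cost(100.0, 20))
```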

As in computing, biological information is coded in DNA, so it can be programmed — with the goal of redesigning organisms for useful purposes. The aim is to make such programming and production faster, cheaper and more reliable, more an engineering discipline with reusable parts and automation and less an artisanal craft, as biology has been.

Synthetic biology, proponents say, holds the promise of reprogramming biology to be more powerful and then mass-producing the turbocharged cells to increase food production, fight disease, generate energy, purify water and devour carbon dioxide from the atmosphere.

“Biology and engineering are coming together in profound ways,” Mr. Endy said. “The potential is for civilization-scale flourishing, a world of abundance not scarcity, supporting a growing global population without destroying the planet.”

That idyllic future is decades off, if it is possible at all. But in the search for the proverbial next big thing over the next 20 years, synthetic biology is a prime candidate. And no one makes the case more persuasively than Mr. Endy.

More...

https://www.nytimes.com/2021/11/23/busi ... 778d3e6de3
kmaherali
Posts: 25714
Joined: Thu Mar 27, 2003 3:01 pm

Post by kmaherali »

What You’re Doing Right Now Is Proof of Quantum Theory

Running a computer underscores how quantum physics is remaking our world.


Image

“Nobody understands quantum mechanics,” Richard Feynman famously said. Long after Max Planck’s discovery in 1900 that energy comes in separate packets or quanta, quantum physics remains enigmatic. It is vastly different from how things work at bigger scales, where objects from baseballs to automobiles follow Newton’s laws of mechanics and gravitation, consistent with our own bodily experiences. But at the quantum level, an electron is a particle and a wave, and light is a wave and a particle (wave-particle duality); an electron in an atom takes on only certain energies (energy quantization); electrons or photons can instantaneously affect each other over arbitrary distances (entanglement and teleportation); a quantum object exists in different states until it is measured (superposition, or popularly, Schrödinger’s cat); and a real physical force emerges from the apparent nothingness of vacuum (the Casimir effect).
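Superposition, one of the phenomena listed above, lends itself to a toy simulation: measuring a qubit in state a|0> + b|1> yields outcome 0 with probability |a|^2 (the Born rule). The amplitudes below are made-up values for an equal superposition, purely for illustration:

```python
import math
import random

# Toy Born-rule simulation: measuring a qubit a|0> + b|1> gives outcome 0
# with probability |a|^2. Assumes the state is normalized (|a|^2 + |b|^2 = 1).
def fraction_measured_zero(amp0, trials=100_000):
    p0 = abs(amp0) ** 2
    hits = sum(1 for _ in range(trials) if random.random() < p0)
    return hits / trials

amp0 = 1 / math.sqrt(2)  # equal superposition, e.g. (|0> + |1>) / sqrt(2)
print(round(fraction_measured_zero(amp0), 2))  # statistically close to 0.5
```

Each individual measurement gives a definite 0 or 1; only the statistics over many trials reveal the underlying amplitudes.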

For a theory that nobody understands, quantum physics has changed human society in remarkable ways. It lies behind the digital technology of integrated circuit chips, and the new technology of light-emitting diodes moving us toward a greener world. Scientists are now excited by one of the more elusive notions in quantum physics, the idea of ephemeral “virtual” photons, which could make possible non-invasive medical methods to diagnose the heart and brain. These connections illustrate the flow of ideas from scientific abstraction to useful application. But there is also a counter flow, where pragmatic requirements generate deep insight. The universal laws of thermodynamics have roots in efforts by 19th-century French engineer Sadi Carnot to make the leading technology of the time, the steam engine, more efficient. Similarly, the growth of quantum technology leads to deeper knowledge of the quantum. The interplay between pure theory and its outcomes in the everyday world is a continuing feature of science as it develops. In quantum physics, this interaction traces back to one of its founders, Danish physicist Niels Bohr.

More...

https://nautil.us/issue/108/change/what ... tum-theory
kmaherali
Posts: 25714
Joined: Thu Mar 27, 2003 3:01 pm

Post by kmaherali »

A Cure for Type 1 Diabetes? For One Man, It Seems to Have Worked.

A new treatment using stem cells that produce insulin has surprised experts and given them hope for the 1.5 million Americans living with the disease.


Image

Brian Shelton may be the first person cured of Type 1 diabetes. “It’s a whole new life,” Mr. Shelton said. “It’s like a miracle.”Credit...

Brian Shelton’s life was ruled by Type 1 diabetes.

When his blood sugar plummeted, he would lose consciousness without warning. He crashed his motorcycle into a wall. He passed out in a customer’s yard while delivering mail. Following that episode, his supervisor told him to retire, after a quarter century in the Postal Service. He was 57.

His ex-wife, Cindy Shelton, took him into her home in Elyria, Ohio. “I was afraid to leave him alone all day,” she said.

Early this year, she spotted a call for people with Type 1 diabetes to participate in a clinical trial by Vertex Pharmaceuticals. The company was testing a treatment developed over decades by a scientist who vowed to find a cure after his baby son and then his teenage daughter got the devastating disease.

Mr. Shelton was the first patient. On June 29, he got an infusion of cells, grown from stem cells but just like the insulin-producing pancreas cells his body lacked.

Now his body automatically controls its insulin and blood sugar levels.

Mr. Shelton, now 64, may be the first person cured of the disease with a new treatment that has experts daring to hope that help may be coming for many of the 1.5 million Americans suffering from Type 1 diabetes.

“It’s a whole new life,” Mr. Shelton said. “It’s like a miracle.”

Diabetes experts were astonished but urged caution. The study is continuing and will take five years, involving 17 people with severe cases of Type 1 diabetes. It is not intended as a treatment for the more common Type 2 diabetes.

“We’ve been looking for something like this to happen literally for decades,” said Dr. Irl Hirsch, a diabetes expert at the University of Washington who was not involved in the research. He wants to see the result, not yet published in a peer-reviewed journal, replicated in many more people. He also wants to know if there will be unanticipated adverse effects and if the cells will last for a lifetime or if the treatment would have to be repeated.

But, he said, “bottom line, it is an amazing result.”

More...

https://www.nytimes.com/2021/11/27/heal ... 778d3e6de3
kmaherali
Posts: 25714
Joined: Thu Mar 27, 2003 3:01 pm

Post by kmaherali »

The Gene-Synthesis Revolution

Researchers can now design and mass-produce genetic material — a technique that helped build the mRNA vaccines. What could it give us next?


Image

Ten years ago, when Emily Leproust was a director of research at the life-sciences giant Agilent, a pair of scientist-engineers in their 50s — Bill Banyai and Bill Peck — came to her with an idea for a company. The Bills, as they were later dubbed, were biotech veterans. Peck was a mechanical engineer by training with a specialty in fluid mechanics; Banyai was a semiconductor expert who had worked in genomics since the mid-2000s, facilitating the transition from old-school Sanger sequencing, which processes a single DNA fragment at a time, to next-generation sequencing, which works through millions of fragments simultaneously. When the chemistry was miniaturized and put on a silicon chip, reading DNA became fast, cheap and widespread. The Bills, who met when Banyai hired Peck to work on a genomics project, realized that there was an opportunity to do something analogous for writing DNA — to make the process of making synthetic genes more scalable and cost-effective.

At the time, DNA synthesis was a slow and difficult process. The reagents — those famous bases (A’s, T’s, C’s and G’s) that make up DNA — were pipetted onto a plastic plate with 96 pits, or wells, each of which held roughly 50 microliters, equivalent to one eyedropper drop of liquid. “In a 96-well plate, conceptually what you have to do is you put liquid in, you mix, you wait, maybe you apply some heat and then take the liquid out,” Leproust says. The Bills proposed to put this same process on a silicon chip that, with the same footprint as a 96-well plate, would be able to hold a million tiny wells, each with a volume of 10 picoliters, or less than one-millionth the size of a 50-microliter well.

Because the wells were so small, they couldn’t simply pipette liquids into them. Instead, they used what was essentially an inkjet printer to fill them, distributing A’s, T’s, C’s and G’s rather than pigmented inks. A catalyst called tetrazole was added to bind bases into a single-strand sequence of DNA; advanced optics made perfect alignment possible. The upshot was that instead of producing 96 pieces of DNA at the same time, they could now print millions.

The concept was simple, but, Leproust says, “the engineering was hard.” When you synthesize DNA, she explains, the yield, or success rate, goes down with every base added. A’s and T’s bond together more weakly than G’s and C’s, so DNA sequences with large numbers of consecutive A’s and T’s are often unstable. In general, the longer your strand of DNA, the greater the likelihood of errors. Twist Bioscience, the company that Leproust and the Bills founded, currently synthesizes the longest DNA snippets in the industry, up to 300 base pairs. Called oligos, they can then be joined together to form genes.
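The yield arithmetic Leproust describes compounds per base: if each coupling step succeeds with probability e, the fraction of strands that come out full length after n additions is roughly e^n. A quick sketch (the 99.5% efficiency is an illustrative assumption, not Twist's actual figure):

```python
# Fraction of strands expected to be full length after n base additions,
# assuming every coupling step independently succeeds with the same probability.
def full_length_yield(n_bases, coupling_efficiency=0.995):
    return coupling_efficiency ** n_bases

# Yield falls steeply with length: roughly 78% at 50 bases, 22% at 300.
for n in (50, 150, 300):
    print(n, round(full_length_yield(n), 3))
```

This compounding is why 300 base pairs is an industry-leading length and why longer genes are assembled from shorter oligos rather than synthesized in one pass.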

Today Twist charges nine cents a base pair for DNA, a nearly tenfold decrease from the industry standard a decade ago. As a customer, you can visit the Twist website, upload a spreadsheet with the DNA sequence that you want, select a quantity and pay for it with a credit card. After a few days, the DNA is delivered to your laboratory door. At that point, you can insert the synthetic DNA into cells and get them to begin making — hopefully — the target molecules that the DNA is coded to produce. These molecules eventually become the basis for new drugs, food flavorings, fake meat, next-gen fertilizers, industrial products for the petroleum industry. Twist is one of a number of companies selling synthetic genes, betting on a future filled with bioengineered products with DNA as their building blocks.

In a way, that future has arrived. Gene synthesis is behind two of the biggest “products” of the past year: the mRNA vaccines from Pfizer and Moderna. Almost as soon as the Chinese C.D.C. first released the genomic sequence of SARS-CoV-2 to public databases in January 2020, the two pharmaceutical companies were able to synthesize the DNA that corresponds to a particular antigen on the virus, called the spike protein. This meant that their vaccines — unlike traditional analogues, which teach the immune system to recognize a virus by introducing a weakened version of it — could deliver genetic instructions prompting the body to create just the spike protein, so it will be recognized and attacked during an actual viral infection.

As recently as 10 years ago, this would have been barely feasible. It would have been challenging for researchers to synthesize a DNA sequence long enough to encode the full spike protein. But technical advances in the last few years allowed the vaccine developers to synthesize much longer pieces of DNA and RNA at much lower cost, more rapidly. We had vaccine prototypes within weeks and shots in arms within the year.

Now companies and scientists look toward a post-Covid future when gene synthesis will be deployed to tackle a variety of other problems. If the first phase of the genomics revolution focused on reading genes through gene sequencing, the second phase is about writing genes. Crispr, the gene-editing technology whose inventors won a Nobel Prize last year, has received far more attention, but the rise of gene synthesis promises to be an equally powerful development. Crispr is like editing an article, allowing us to make precise changes to the text at specific spots; gene synthesis is like writing the article from scratch.

More...

https://www.nytimes.com/2021/11/24/maga ... iversified
kmaherali
Posts: 25714
Joined: Thu Mar 27, 2003 3:01 pm

Post by kmaherali »

Biden’s Democracy Conference Is About Much More Than Democracy

While Americans angry about the results of the 2020 election were busy storming their own Capitol and conducting the umpteenth recount in Arizona, threats from outside the country didn’t take a lunch break. To the contrary, they are evolving rapidly.

Imagine a hostile country shutting down New York City’s electrical grid for months at a time using code-breaking quantum computers. Imagine pirates in cyberspace disabling American missile defense systems without warning. Imagine China obtaining the private health data or private phone communications of millions of Americans, including members of Congress.

These aren’t nutty hypotheticals in some distant dystopian future. They are scenarios that keep American national security officials up at night right now.

“We have already reached the point where the behaviors of a limited group of talented actors in cyberspace could completely obliterate systems that we rely on for our day-to-day survival,” Candace Rondeaux, a specialist on the future of warfare at New America, a Washington-based think tank, told me.

The Biden administration’s response has been to counter those threats by gathering a coalition of democracies that will work together to safeguard our economies, our militaries and our technological networks from bad actors in China, Russia and elsewhere. That’s the reason President Biden and European counterparts formed the U.S.-E.U. Trade and Technology Council, which established working groups to develop new technology and prevent it from falling into the wrong hands.

It’s the reason Mr. Biden met with the heads of state of Australia, India and Japan — world powers on China’s doorstep — to ensure that “the way in which technology is designed, developed, governed and used is shaped by our shared values and respect for universal human rights.” And it’s the reason Mr. Biden has called together more than 100 leaders from democratic countries around the world for a virtual Summit for Democracy this Thursday and Friday.

At this week’s summit, there will be plenty of familiar-sounding pledges to root out corruption and defend human rights. There is likely to be hand-wringing about coups that reversed fragile progress in Sudan and Myanmar, and condemnations of leaders who used the pandemic as an excuse to crack down on opposition and dissent, including those in El Salvador, Hungary and Uganda.

But at its core, this conference is not just about protecting democracy at home and abroad. It’s also about how open societies will defend themselves in the future against existential technological threats. As countries like China and Russia invest heavily in artificial intelligence and quantum computing, and exercise intensive state control over data, the United States and its allies need a game plan. What rules should be adopted to govern the use of artificial intelligence, quantum computing and space travel? How do we make sure those technologies aren’t weaponized against us?

The Biden administration is attempting to forge a common front with allies in Europe and Asia across technological, economic and military spheres to prepare for an age of technological competition that will look far different from any geopolitical rivalry that the world has ever seen. Democracy is the common thread stringing the Biden administration’s efforts together. It’s the code word for who’s on our team.

More...

https://www.nytimes.com/2021/12/06/opin ... 778d3e6de3
kmaherali
Posts: 25714
Joined: Thu Mar 27, 2003 3:01 pm

Post by kmaherali »

Can We Have a Meaningful Life in a Virtual World?

The imminent arrival of the long-awaited fourth “Matrix” movie will surely spur another round of thinking about a question that philosophers have been kicking around at least since Plato’s time: How do we know that our world is real? Nowadays, of course, we’re far more likely to consider that a simulated reality would be rendered in bytes rather than shadows on a cave wall. Furthermore, given both the technical progress being made and the business push behind it, we’re far more likely than our predecessors to actually embrace the prospect of life in a virtual world. The philosophical implications of such worlds — as well as the possibility we might already be existing within one — are the subject of the philosopher David J. Chalmers’s new book “Reality+,” which will be published in January. In it, Chalmers, who is a professor of philosophy and neural science at New York University, as well as co-director of the school’s Center for Mind, Brain and Consciousness, argues, among other things, that our thinking about our future virtual lives needn’t be rooted in visions of dystopia. “The possibilities for virtual reality,” says Chalmers, who is 55, “are as broad as the possibilities for physical reality. We know physical reality can be amazing and it can be terrible, and I fully expect the same range for virtual reality.”

For the entire discussion go to:

https://www.nytimes.com/interactive/202 ... iversified
kmaherali
Posts: 25714
Joined: Thu Mar 27, 2003 3:01 pm

Post by kmaherali »

The teenager who made medical history to save her mother

When she was just 19, Aliana Deveza organised and underwent an historic operation to save her mum's life.

She persuaded a hospital to do the first organ swap in the United States where different organs were exchanged between unrelated pairs of donors.

"The first thing that I asked when I woke up was just how was my Mom? Is she okay? Did she make it?

"I wasn't really worried about myself anymore, I was just kind of focused on getting through the pain that I was feeling. Just hearing that everybody had made it, I was able to breathe again."

When Aliana says everyone else, she's not just talking about herself and her mother, because two other women - sisters - were also having operations.

One of Aliana's organs would go to one of the sisters, and one of the sisters' kidneys would go to Aliana's mum. Two lives were being saved, with two people donating organs to strangers to save a family member.

Two years of hard work had paid off. Aliana had saved her mother, Erosalyn, from years of kidney dialysis, illness and possibly an early death - and a complete stranger would go on to live a new life.

Kidneys are one of the only organs a living person can donate to another, as most of us are born with two but we only need one to function.

Yet people who need a kidney are not always able to take one from someone they love, even if that person is willing to give it.

Across the world, around 150,000 organs were transplanted in 2019 - a small fraction of the number needed.

Alvin Roth shared the prize for Economics from the Nobel Foundation in 2012 for his work devising a system to help more people give and get kidneys.

"Unlike many organs, it's possible for someone to give a kidney to someone they love and save their life," he explains.

"But sometimes they can't take your kidney even though you're healthy enough to give one. And perhaps I'm the donor in a similar pair, I would love to give a kidney to someone I love but I can't.

"But maybe my kidney would work for your patient and your kidney would work for my patient. That's the simplest kind of kidney exchange where two donor pairs get together, and each one gets a compatible kidney from the other patients."

The work of Alvin Roth and his colleagues resulted in a system which has been able to scale up the number of kidney swaps, so now each year thousands of lives are saved.

But these organ exchanges are not yet legal everywhere. In Germany, for example, you can still only give an organ directly to someone in your immediate family. One concern is that vulnerable people will be tempted to sell an organ for money.

It's not pairs of people. In some cases chains of people have come together to maximise the number of matched kidneys.

In one case, 70 different people were brought together so 35 donors gave their kidneys to 35 strangers so that others could get a new lease of life.
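The two-pair exchange Roth describes maps naturally onto a small matching routine. Below is a minimal sketch using blood-type compatibility only; real exchange programs also check tissue typing and antibodies, and the pairs shown are hypothetical:

```python
# Two-way kidney exchange sketch: each entry is an incompatible
# (donor_blood_type, patient_blood_type) pair; we look for a partner pair
# whose donor suits our patient and vice versa.
COMPATIBLE = {  # donor type -> recipient types that donor can give to
    "O": {"O", "A", "B", "AB"},
    "A": {"A", "AB"},
    "B": {"B", "AB"},
    "AB": {"AB"},
}

def can_give(donor, patient):
    return patient in COMPATIBLE[donor]

def find_swaps(pairs):
    """Greedily match pairs (i, j) where donor i fits patient j and donor j
    fits patient i -- the simplest two-pair exchange."""
    matched, swaps = set(), []
    for i, (di, pi) in enumerate(pairs):
        if i in matched:
            continue
        for j in range(i + 1, len(pairs)):
            if j in matched:
                continue
            dj, pj = pairs[j]
            if can_give(di, pj) and can_give(dj, pi):
                swaps.append((i, j))
                matched.update((i, j))
                break
    return swaps

# Donor A/patient B and donor B/patient A can swap with each other.
print(find_swaps([("A", "B"), ("B", "A"), ("A", "B")]))  # [(0, 1)]
```

Production systems treat this as optimization over a compatibility graph, finding cycles and chains rather than greedy two-way swaps, which is what lets 70-person chains like the one above emerge.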

Aliana wasn't able to swap her kidney with her mother because doctors feared the kidney problems her mum had might be hereditary, so Aliana might have it too.

She still wanted to help her mum get a new kidney but time was running out, so she started to do some research and found it might be possible to swap part of a liver for a kidney.

"I started researching the type of organs that can be donated while a person was still alive. And the liver is what came up most."

Aliana did not know that this was just a theoretical possibility and was not a regular operation. She started calling hospitals to see if she could donate part of her liver to someone in exchange for a kidney for her mum.

Aliana says a few hospitals did not understand what she meant: "I had a few hospitals transfer me to the morgue, because they didn't know what I was talking about."

Eventually she did reach the right person for the job: John Roberts, a surgeon at the University of California, San Francisco.

"He didn't just brush it off. I mean, I was just this 19-year-old girl, and I didn't know if I sounded crazy. My family was against it because they didn't want me putting myself in any danger."

With the help of the hospital they found two sisters who would pair with Aliana and her mum. One of the sisters would get part of Aliana's liver, and Aliana's mum would get a new kidney from the other sister.

Aliana has no regrets, so why does she think more of us are not doing it? "I think people gravitate away from the idea of organ donation, because of the fear surrounding it.

"These are major operations, there are definitely a lot of risks, but understanding it and going through the process with a team that will be there for you during the process is what helps."

Watch video at:

https://www.bbc.com/news/business-59750334

******

Small device that might render stethoscope obsolete

The medic sees internal body organs clearly, no longer having to make sense of a cacophony of body sounds
In Summary

- Advancements in technology have compressed the point-of-care ultrasound to such a degree that handheld, pocket-sized versions are now readily available in Kenya.

- Wachira says the Pocus has not replaced the stethoscope, but has become a standard tool that almost every doctor at Aga Khan carries.

Image

Dr Benjamin Wachira, an emergency care physician at the Aga Khan University Hospital, scans a client's heart at the emergency wing using the point-of-care ultrasound. The device is now standard at the hospital, and nearly every doctor carries one.
Image: WILFRED NYANGARESI

For many doctors, the good old stethoscope is a symbol of the skill and knowledge they possess.

Yet now, a new diagnostic tool may render the stethoscope obsolete, even in Kenya.

Enter Dr Benjamin Wachira, an emergency care physician at the Aga Khan University Hospital in Nairobi.

Instead of a stethoscope around his neck, he carries a small handheld device that might, in time, relegate the former into the dustbins of medical history.

The gadget is the point-of-care ultrasound (Pocus) device.

“It has been used by radiologists and obstetricians for a long time,” he explains. But advancements have compressed the technology to such a degree that handheld, pocket-sized versions are now readily available in Kenya.

“Before, I would have used the stethoscope to listen to the patient's body and determine if the sounds are normal or there is a problem,” Dr Wachira explains.

That means he would need to make sense of a cacophony of thumps, crackles and wheezes from a patient’s body to decide the next course of action.

But at Aga Khan, doctors are now relying more on the Pocus.

It is basically a small probe attached to an iPad or a mobile phone.

He applies a bluish gel to the probe (the size of a TV remote) and moves it around the patient’s chest. An image of a healthy heart pumping shows up on the tablet's screen. “Normal,” he says. He moves to the upper right portion of the abdomen. The liver comes up. “Normal,” he says.

“It takes away the guesswork. Before you had to use the stethoscope, listen to the sounds and decide whether to send the patient to the cardiology for a scan or what to do. Sometimes the results would come a week later. But this one helps us make decisions immediately,” he explains.


Wachira says the Pocus has not replaced the stethoscope, but has become a standard tool that almost every doctor at Aga Khan carries.

In fact, for low resource environments, the World Health Organization now recommends portable ultrasound devices as a primary diagnostic tool.

The device uses sound waves to produce pictures of the inside of the body. It is safe, non-invasive, and does not use radiation.

“Every emergency department in any facility should have this,” Dr Wachira says.

He says it is particularly useful in saving accident victims.

“You can easily pick out internal bleeding in the body within two minutes, or injuries to any vital organ. It basically takes away any guesswork,” he explains.

One study in the United States showed that ultrasound correctly identified particular issues in 82 per cent of patients as opposed to a 47 per cent detection rate with physical examination only.

Two weeks ago, Health CAS Mercy Mwangangi encouraged facilities to invest in ultrasound, giving the example of the Aga Khan University Hospital.

She was addressing biomedical engineers at the annual gathering, held in Kakamega.

“There is a gap which can be addressed by you, medical engineers, who know the advances in technology. You can challenge us within the policy table to avail such technologies because I am sure there is a whole field that you are aware of technologies that is happening outside there,” she said.

A random check in Nairobi shows the device will cost around Sh500,000.

https://www.the-star.co.ke/news/2021-12 ... -obsolete/
kmaherali
Posts: 25714
Joined: Thu Mar 27, 2003 3:01 pm

Re: TECHNOLOGY AND DEVELOPMENT

Post by kmaherali »

Book

The Age of A.I.: And Our Human Future


Image

Artificial Intelligence (AI) is transforming human society fundamentally and profoundly. Not since the Enlightenment and the Age of Reason have we changed how we approach knowledge, politics, economics, even warfare. Three of our most accomplished and deep thinkers come together to explore what it means for us all.

An A.I. that learned to play chess discovered moves that no human champion would have conceived of. Driverless cars edge forward at red lights, just like impatient humans, and so far, nobody can explain why it happens. Artificial intelligence is being put to use in sports, medicine, education, and even (frighteningly) how we wage war.

In this book, three of our most accomplished and deep thinkers come together to explore how A.I. could affect our relationship with knowledge, impact our worldviews, and change society and politics as profoundly as the ideas of the Enlightenment.

https://www.amazon.com/Age-I-Our-Human- ... 1668601109
kmaherali
Posts: 25714
Joined: Thu Mar 27, 2003 3:01 pm

Re: TECHNOLOGY AND DEVELOPMENT

Post by kmaherali »

AI Is Helping Scientists Explain the Brain

But what if it’s telling them a false story?


Image

The brain is often called a black box but any neuroscientist who has looked inside knows that’s a sobering understatement. Technological advances are making our neural circuitries increasingly accessible, allowing us to closely watch any number of neurons in action. And yet the mystery of the brain only deepens. What’s the meaning embedded in the collective chorus of spiking neurons? How does their activity turn light and soundwaves into our subjective experience of vision and hearing? What computations do neurons perform and what are the broad governing principles they follow? The brain is not a black box—it’s an alien world, where the language and local laws have yet to be cracked, and intuitions go to die.

Could artificial intelligence figure it out for us? Perhaps. But a recent recognition is that even our newest, most powerful tools that have achieved great success in AI technology are stumbling at decoding the brain. Machine learning algorithms, such as artificial neural networks, have solved many complex tasks. They can predict the weather and the stock market or recognize objects and faces, and crucially, they do so without us telling them the rules. They should, at least in theory, be able to learn the hidden patterns in brain activity data by themselves and tell us a story of how the brain operates. And they do tell a story. It’s just that, as some scientists are finding, that story is not necessarily of our brain.

More...

https://nautil.us/ai-is-helping-scienti ... ain-14073/
kmaherali
Posts: 25714
Joined: Thu Mar 27, 2003 3:01 pm

Re: TECHNOLOGY AND DEVELOPMENT

Post by kmaherali »

Stem cell surprise turns old brains young again

Hi Karim,

My good friend, and colleague Dr. Al Sears has some astounding information about stem cell technology that helps you enhance your brain’s performance. Please read about it below.

To your health,

Julia

Julia Lundstrom, Neuroscience and Brain Health Educator
Simple Smart Science

Making a Measurable Improvement In Your Brain Health
____________________________

You may have heard about stem cells in the news…

They act as your body’s main repair system…

And replace old, dysfunctional cells with healthy new ones.

So whenever there’s a stem cell breakthrough, it’s BIG news.

But this latest discovery is on an entirely different level…

Decades ahead of anything else I’ve seen.

A team from the University of Cambridge, England, has successfully taken old, withered, tired brains…

And rebuilt them so they perform like new.

All without drugs, surgeries, or a single hospital visit.

Stanford doctor Gary Steinberg saw 18 wheelchair-bound patients experience a full brain reboot, reporting:

"I’m shocked. Ten years ago we couldn’t even dream about these recoveries."

Sonia C., age 36, was almost completely immobile for two years after her stroke…

She said she felt "trapped in her body."

But just HOURS after activating her neural stem cells, she reports, "my limbs just woke up."

She and other patients even regained the ability to walk!

What does this mean for you?

Well, this same stem cell technology helps you enhance your brain’s performance.

And it’s a one hundred percent natural process involving no drugs or hospital visits.

You can boost every area of your brain’s functions for:

- Crystal-clear memory
- Sky-high IQ
- Encyclopedic knowledge
- Snappy wit
- No brain fog or "senior moments"
- Enhanced mood and happiness
- Less stress and anxious feelings

What we’re witnessing is the future of medicine here NOW.

And the possibilities for ramping up brain performance are endless.

Click HERE https://alsearsmd.clickfunnels.com/opti ... s_20220303 now to learn more about how you can help rebuild a new and better you today.

To Your Good Health,



Al Sears, MD, CNS

A.I. Is Mastering Language. Should We Trust What It Says?

Post by kmaherali »


A.I. Is Mastering Language. Should We Trust What It Says?

OpenAI’s GPT-3 and other neural nets can now write original prose with mind-boggling fluency — a development that could have profound implications for the future.


You are sitting in a comfortable chair by the fire, on a cold winter’s night. Perhaps you have a mug of tea in hand, perhaps something stronger. You open a magazine to an article you’ve been meaning to read. The title suggested a story about a promising — but also potentially dangerous — new technology on the cusp of becoming mainstream, and after reading only a few sentences, you find yourself pulled into the story. A revolution is coming in machine intelligence, the author argues, and we need, as a society, to get better at anticipating its consequences. But then the strangest thing happens: You notice that the writer has, seemingly deliberately, omitted the very last word of the first .

The missing word jumps into your consciousness almost unbidden: ‘‘the very last word of the first paragraph.’’ There’s no sense of an internal search query in your mind; the word ‘‘paragraph’’ just pops out. It might seem like second nature, this filling-in-the-blank exercise, but doing it makes you think of the embedded layers of knowledge behind the thought. You need a command of the spelling and syntactic patterns of English; you need to understand not just the dictionary definitions of words but also the ways they relate to one another; you have to be familiar enough with the high standards of magazine publishing to assume that the missing word is not just a typo, and that editors are generally loath to omit key words in published pieces unless the author is trying to be clever — perhaps trying to use the missing word to make a point about your cleverness, how swiftly a human speaker of English can conjure just the right word.

Before you can pursue that idea further, you’re back into the article, where you find the author has taken you to a building complex in suburban Iowa. Inside one of the buildings lies a wonder of modern technology: 285,000 CPU cores yoked together into one giant supercomputer, powered by solar arrays and cooled by industrial fans. The machines never sleep: Every second of every day, they churn through innumerable calculations, using state-of-the-art techniques in machine intelligence that go by names like ‘‘stochastic gradient descent’’ and ‘‘convolutional neural networks.’’ The whole system is believed to be one of the most powerful supercomputers on the planet.

And what, you may ask, is this computational dynamo doing with all these prodigious resources? Mostly, it is playing a kind of game, over and over again, billions of times a second. And the game is called: Guess what the missing word is.
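In engineering terms, that game is a language-modeling objective: predict a word from its context. Here is a toy sketch of the idea using bigram counts (the corpus below is invented purely for illustration; GPT-3 itself uses a deep neural net, not simple counting):

```python
from collections import Counter

# Tiny stand-in corpus (illustrative only; real models train on web-scale text).
corpus = ("the very last word of the first paragraph "
          "the first word of the last paragraph").split()

# Count how often each word follows each preceding word.
bigrams = Counter(zip(corpus, corpus[1:]))

def guess_next(context_word):
    """Guess the missing word: the most frequent follower of context_word."""
    candidates = {w: c for (ctx, w), c in bigrams.items() if ctx == context_word}
    return max(candidates, key=candidates.get)

print(guess_next("the"))  # "first" follows "the" most often in this toy corpus
```

Scaling this idea up (longer contexts, learned representations instead of raw counts) is, loosely speaking, what the Iowa supercomputer is doing billions of times a second.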

The supercomputer complex in Iowa is running a program created by OpenAI, an organization established in late 2015 by a handful of Silicon Valley luminaries, including Elon Musk; Greg Brockman, who until recently had been chief technology officer of the e-payment juggernaut Stripe; and Sam Altman, at the time the president of the start-up incubator Y Combinator. In its first few years, as it built up its programming brain trust, OpenAI’s technical achievements were mostly overshadowed by the star power of its founders. But that changed in summer 2020, when OpenAI began offering limited access to a new program called Generative Pre-Trained Transformer 3, colloquially referred to as GPT-3. Though the platform was initially available to only a small handful of developers, examples of GPT-3’s uncanny prowess with language — and at least the illusion of cognition — began to circulate across the web and through social media. Siri and Alexa had popularized the experience of conversing with machines, but this was on the next level, approaching a fluency that resembled creations from science fiction like HAL 9000 from “2001”: a computer program that can answer open-ended complex questions in perfectly composed sentences.

As a field, A.I. is currently fragmented among a number of different approaches, targeting different kinds of problems. Some systems are optimized for problems that involve moving through physical space, as in self-driving cars or robotics; others categorize photos for you, identifying familiar faces or pets or vacation activities. Some forms of A.I. — like AlphaFold, a project of the Alphabet (formerly Google) subsidiary DeepMind — are starting to tackle complex scientific problems, like predicting the structure of proteins, which is central to drug design and discovery. Many of these experiments share an underlying approach known as ‘‘deep learning,’’ in which a neural net vaguely modeled after the structure of the human brain learns to identify patterns or solve problems through endlessly repeated cycles of trial and error, strengthening neural connections and weakening others through a process known as training. The ‘‘depth’’ of deep learning refers to multiple layers of artificial neurons in the neural net, layers that correspond to higher and higher levels of abstraction: In a vision-based model, for instance, a layer of neurons might detect vertical lines, which would then feed into a layer detecting edges of physical structures, which would then report to a layer that identified houses as opposed to apartment buildings.
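The trial-and-error training cycle described above, strengthening some connections and weakening others, can be sketched with a minimal two-layer network learning the XOR function. This is a toy illustration of gradient-descent training, not any production system:

```python
import numpy as np

# Inputs and targets for XOR, a classic task a single layer cannot solve.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1, b1 = rng.normal(0, 1, (2, 8)), np.zeros(8)  # hidden layer: learned features
W2, b2 = rng.normal(0, 1, (8, 1)), np.zeros(1)  # output layer: combines them

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(10_000):                 # repeated cycles of trial and error
    h = sigmoid(X @ W1 + b1)            # forward pass through the layers
    out = sigmoid(h @ W2 + b2)
    g_out = (out - y) * out * (1 - out)  # error signal at the output
    g_h = g_out @ W2.T * h * (1 - h)     # propagated back to the hidden layer
    W2 -= lr * h.T @ g_out; b2 -= lr * g_out.sum(0)  # adjust connection strengths
    W1 -= lr * X.T @ g_h;   b1 -= lr * g_h.sum(0)

print(np.round(out.ravel(), 2))  # predictions approach [0, 1, 1, 0]
```

The "depth" the article mentions is simply more of these layers stacked between input and output, each one building on the abstractions of the one below.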

GPT-3 belongs to a category of deep learning known as a large language model, a complex neural net that has been trained on a titanic data set of text: in GPT-3’s case, roughly 700 gigabytes of data drawn from across the web, including Wikipedia, supplemented with a large collection of text from digitized books. GPT-3 is the most celebrated of the large language models, and the most publicly available, but Google, Meta (formerly known as Facebook) and DeepMind have all developed their own L.L.M.s in recent years. Advances in computational power — and new mathematical techniques — have enabled L.L.M.s of GPT-3’s vintage to ingest far larger data sets than their predecessors, and employ much deeper layers of artificial neurons for their training.

Chances are you have already interacted with a large language model if you’ve ever used an application — like Gmail — that includes an autocomplete feature, gently prompting you with the word ‘‘attend’’ after you type the sentence ‘‘Sadly I won’t be able to….’’ But autocomplete is only the most rudimentary expression of what software like GPT-3 is capable of. It turns out that with enough training data and sufficiently deep neural nets, large language models can display remarkable skill if you ask them not just to fill in the missing word, but also to continue on writing whole paragraphs in the style of the initial prompt. For example, when I gave GPT-3 the prompt:

In 1958, Frank Rosenblatt introduced the Perceptron, an early version of a neural net that could only detect the location of a square on a printed card. Fifty years later, neural nets are capable of . . .

The software dutifully completed the thought, and then continued on, picking up on the historical framing of the initial text:

. . . detecting and classifying faces, finding patterns in data, and emulating the human brain. While the neural net matured in academia, it also found its way into the tech industry. In the late 1980s and early 90s, neural nets were used in speech recognition and character recognition applications. The growth in computing power and the ability to collect larger data sets allowed other neural net applications to emerge.
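Mechanically, "continuing on" works as a loop: score candidate next words, append the best one, and feed the longer text back in. The scorer below is a hypothetical stand-in written for this sketch; in GPT-3 that function is a giant neural net returning probabilities over tens of thousands of tokens:

```python
# Greedy text generation: repeatedly pick the highest-scoring next word.
# `model` is a made-up scorer for illustration only.
def model(context):
    phrase = "neural nets are capable of detecting patterns".split()
    i = len(context)
    return {phrase[i]: 1.0} if i < len(phrase) else {"<eos>": 1.0}

def generate(prompt, max_new=10):
    tokens = prompt.split()
    for _ in range(max_new):
        scores = model(tokens)
        best = max(scores, key=scores.get)  # greedy choice of next word
        if best == "<eos>":                 # the model signals it is done
            break
        tokens.append(best)
    return " ".join(tokens)

print(generate("neural nets are"))  # neural nets are capable of detecting patterns
```

Real systems usually sample from the score distribution rather than always taking the top word, which is one reason the same prompt can yield different continuations.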

Since GPT-3’s release, the internet has been awash with examples of the software’s eerie facility with language — along with its blind spots and foibles and other more sinister tendencies. GPT-3 has been trained to write Hollywood scripts and compose nonfiction in the style of Gay Talese’s New Journalism classic ‘‘Frank Sinatra Has a Cold.’’ You can employ GPT-3 as a simulated dungeon master, conducting elaborate text-based adventures through worlds that are invented on the fly by the neural net. Others have fed the software prompts that generate patently offensive or delusional responses, showcasing the limitations of the model and its potential for harm if adopted widely in its current state.

So far, the experiments with large language models have been mostly that: experiments probing the model for signs of true intelligence, exploring its creative uses, exposing its biases. But the ultimate commercial potential is enormous. If the existing trajectory continues, software like GPT-3 could revolutionize how we search for information in the next few years. Today, if you have a complicated question about something — how to set up your home theater system, say, or what the options are for creating a 529 education fund for your children — you most likely type a few keywords into Google and then scan through a list of links or suggested videos on YouTube, skimming through everything to get to the exact information you seek. (Needless to say, you wouldn’t even think of asking Siri or Alexa to walk you through something this complex.) But if the GPT-3 true believers are correct, in the near future you’ll just ask an L.L.M. the question and get the answer fed back to you, cogently and accurately. Customer service could be utterly transformed: Any company with a product that currently requires a human tech-support team might be able to train an L.L.M. to replace them.

More, and a podcast, at:

https://www.nytimes.com/2022/04/15/maga ... iversified

Video Quote: Proper Use of Technology

Post by kmaherali »


Doctors Transplant Ear of Human Cells, Made by 3-D Printer

Post by kmaherali »

3DBio Therapeutics, a biotech company in Queens, said it had for the first time used 3-D printing to make a body part with a patient’s own cells.

Alexa, the patient, before the surgery (left) and 30 days after the surgery. Credit: Dr. Arturo Bonilla, Microtia-Congenital Ear Institute

A 20-year-old woman who was born with a small and misshapen right ear has received a 3-D printed ear implant made from her own cells, the manufacturer announced on Thursday. Independent experts said that the transplant, part of the first clinical trial of a successful medical application of this technology, was a stunning advance in the field of tissue engineering.

The new ear was printed in a shape that precisely matched the woman’s left ear, according to 3DBio Therapeutics, a regenerative medicine company based in Queens. The new ear, transplanted in March, will continue to regenerate cartilage tissue, giving it the look and feel of a natural ear, the company said.

“It’s definitely a big deal,” said Adam Feinberg, a professor of biomedical engineering and materials science and engineering at Carnegie Mellon University. Dr. Feinberg, who is not affiliated with 3DBio, is a co-founder of FluidForm, a regenerative medicine company that also uses 3-D printing. “It shows this technology is not an ‘if’ anymore, but a ‘when,’” he said.

The results of the woman’s reconstructive surgery were announced by 3DBio in a news release. Citing proprietary concerns, the company has not publicly disclosed the technical details of the process, making it more difficult for outside experts to evaluate. The company said that federal regulators had reviewed the trial design and set strict manufacturing standards, and that the data would be published in a medical journal when the study was complete.

The clinical trial, which includes 11 patients, is still ongoing, and it’s possible that the transplants could fail or bring unanticipated health complications. But since the cells originated from the patient’s own tissue, the new ear is not likely to be rejected by the body, doctors and company officials said.

3DBio’s success, seven years in the making, is one of several recent breakthroughs in the quest to improve organ and tissue transplants. In January, surgeons in Maryland transplanted a genetically modified pig’s heart into a 57-year-old man with heart disease, extending his life by two months. Scientists are also developing techniques to extend the life of donor organs so they do not go to waste; Swiss doctors reported this week that a patient who received a human liver that had been preserved for three days was still healthy a year later.

United Therapeutics Corp., the company that provided the genetically engineered pig for the heart procedure, is also experimenting with 3-D printing to produce lungs for transplants, a spokesman said. And scientists from the Israel Institute of Technology reported in September that they had printed a network of blood vessels, which would be necessary to supply blood to implanted tissues.

More...

https://www.nytimes.com/2022/06/02/heal ... 778d3e6de3

CRISPR, 10 Years On: Learning to Rewrite the Code of Life

Post by kmaherali »

The gene-editing technology has led to innovations in medicine, evolution and agriculture — and raised profound ethical questions about altering human DNA.

By Carl Zimmer
June 27, 2022
Ten years ago this week, Jennifer Doudna and her colleagues published the results of a test-tube experiment on bacterial genes. When the study came out in the journal Science on June 28, 2012, it did not make headline news. In fact, over the next few weeks, it did not make any news at all.

Looking back, Dr. Doudna wondered if the oversight had something to do with the wonky title she and her colleagues had chosen for the study: “A Programmable Dual RNA-Guided DNA Endonuclease in Adaptive Bacterial Immunity.”

“I suppose if I were writing the paper today, I would have chosen a different title,” Dr. Doudna, a biochemist at the University of California, Berkeley, said in an interview.

Far from an esoteric finding, the discovery pointed to a new method for editing DNA, one that might even make it possible to change human genes.

“I remember thinking very clearly, when we publish this paper, it’s like firing the starting gun at a race,” she said.

In just a decade, CRISPR has become one of the most celebrated inventions in modern biology. It is swiftly changing how medical researchers study diseases: Cancer biologists are using the method to discover hidden vulnerabilities of tumor cells. Doctors are using CRISPR to edit genes that cause hereditary diseases.

Editing the genome with CRISPR

“The era of human gene editing isn’t coming,” said David Liu, a biologist at Harvard University. “It’s here.”

But CRISPR’s influence extends far beyond medicine. Evolutionary biologists are using the technology to study Neanderthal brains and to investigate how our ape ancestors lost their tails. Plant biologists have edited seeds to produce crops with new vitamins or with the ability to withstand diseases. Some of them may reach supermarket shelves in the next few years.

CRISPR has had such a quick impact that Dr. Doudna and her collaborator, Emmanuelle Charpentier of the Max Planck Unit for the Science of Pathogens in Berlin, won the 2020 Nobel Prize for chemistry. The award committee hailed their 2012 study as “an epoch-making experiment.”

Jennifer Doudna shared the 2020 Nobel Prize for chemistry for her work on CRISPR. Credit: Anastasiia Sapon for The New York Times

Dr. Doudna recognized early on that CRISPR would pose a number of thorny ethical questions, and after a decade of its development, those questions are more urgent than ever.

Will the coming wave of CRISPR-altered crops feed the world and help poor farmers or only enrich agribusiness giants that invest in the technology? Will CRISPR-based medicine improve health for vulnerable people across the world, or come with a million-dollar price tag?

The most profound ethical question about CRISPR is how future generations might use the technology to alter human embryos. This notion was simply a thought experiment until 2018, when He Jiankui, a biophysicist in China, edited a gene in human embryos to confer resistance to H.I.V. Three of the modified embryos were implanted in women in the Chinese city of Shenzhen.

In 2019, a court sentenced Dr. He to prison for “illegal medical practices.” MIT Technology Review reported in April that he had recently been released. Little is known about the health of the three children, who are now toddlers.

Scientists don’t know of anyone else who has followed Dr. He’s example — yet. But as CRISPR continues to improve, editing human embryos may eventually become a safe and effective treatment for a variety of diseases.

Will it then become acceptable, or even routine, to repair disease-causing genes in an embryo in the lab? What if parents wanted to insert traits that they found more desirable — like those related to height, eye color or intelligence?

Françoise Baylis, a bioethicist at Dalhousie University in Nova Scotia, worries that the public is still not ready to grapple with such questions.

“I’m skeptical about the depth of understanding about what’s at issue there,” she said. “There’s a difference between making people better and making better people.”

More...

https://www.nytimes.com/2022/06/27/scie ... 778d3e6de3

I Didn’t Want It to Be True, but the Medium Really Is the Message

Post by kmaherali »

A composite of photos taken at Government Center, Boston, over about an hour in March 2022, showing only people with their phones. Credit: Photo Illustration by Pelle Cass

In 2020, I read a book I’d been ignoring for 10 years, Nicholas Carr’s “The Shallows: What the Internet Is Doing to Our Brains.” It was a finalist for a Pulitzer Prize in 2011 and much loved among people who seemed to hate the internet.

But in 2011, I loved the internet. I am of the generation old enough to remember a time before cyberspace but young enough to have grown up a digital native. And I adored my new land. The endless expanses of information, the people you met as avatars but cared for as humans, the sense that the mind’s reach could be limitless. My life, my career and my identity were digital constructs as much as they were physical ones. I pitied those who came before me, fettered by a physical world I was among the first to escape.

A decade passed, and my certitude faded. Online life got faster, quicker, harsher, louder. “A little bit of everything all of the time,” as the comedian Bo Burnham put it. Smartphones brought the internet everywhere, colonizing moments I never imagined I’d fill. Many times I’ve walked into a public bathroom and everyone is simultaneously using a urinal and staring at a screen.

The collective consequences were worse. The internet had been my escape from the schoolyard, but now it felt like it had turned the world into a schoolyard. Watching Donald Trump tweet his way to the presidency felt like some sinister apotheosis, like we’d rubbed the monkey’s paw and gotten our horrible wish. We didn’t want to be bored, and now we never would be.

So when I came across Carr’s book in 2020, I was ready to read it. And what I found in it was a key — not just to a theory but to a whole map of 20th-century media theorists — Marshall McLuhan, Walter Ong and Neil Postman, to name a few — who saw what was coming and tried to warn us.

Carr’s argument began with an observation, one that felt familiar:

The very way my brain worked seemed to be changing. It was then that I began worrying about my inability to pay attention to one thing for more than a couple of minutes. At first I’d figured that the problem was a symptom of middle-age mind rot. But my brain, I realized, wasn’t just drifting. It was hungry. It was demanding to be fed the way the Net fed it — and the more it was fed, the hungrier it became. Even when I was away from my computer, I yearned to check email, click links, do some Googling. I wanted to be connected.

Hungry. That was the word that hooked me. That’s how my brain felt to me, too. Hungry. Needy. Itchy. Once it wanted information. But then it was distraction. And then, with social media, validation. A drumbeat of: You exist. You are seen.

Carr’s investigation led him to the work of McLuhan, who lives on today in repeat viewings of “Annie Hall” and in his gnomic adage “The medium is the message.” That one’s never done much for me. It’s another McLuhan quote, from early in his 1964 classic, “Understanding Media: The Extensions of Man,” that lodged in my mind: “Our conventional response to all media, namely that it is how they are used that counts, is the numb stance of the technological idiot. For the ‘content’ of a medium is like the juicy piece of meat carried by the burglar to distract the watchdog of the mind.”

We’ve been told — and taught — that mediums are neutral and content is king. You can’t say anything about “television.” The question is whether you’re watching “The Kardashians” or “The Sopranos,” “Sesame Street” or “Paw Patrol.” To say you read “books” is to say nothing at all: Are you imbibing potboilers or histories of 18th-century Europe? Twitter is just the new town square; if your feed is a hellscape of infighting and outrage, it’s on you to curate your experience more tightly.

There is truth to this, of course. But there is less truth to it than to the opposite. McLuhan’s view is that mediums matter more than content; it’s the common rules that govern all creation and consumption across a medium that change people and society. Oral culture teaches us to think one way, written culture another. Television turned everything into entertainment and social media taught us to think with the crowd.

All this happens beneath the level of content. CNN and Fox News and MSNBC are ideologically different. But cable news in all its forms carries a sameness: the look of the anchors, the gloss of the graphics, the aesthetics of urgency and threat, the speed, the immediacy, the conflict, the conflict, the conflict. I’ve spent a lot of time on cable news, both as a host and a guest, and I can attest to the forces that hold this sameness in place: There is a grammar and logic to the medium, enforced both by internal culture and by ratings reports broken down by the quarter-hour. You can do better cable news or worse cable news, but you are always doing cable news.

McLuhan’s arguments were continued by Neil Postman. Postman was more of a moralist than McLuhan, likelier to lament society’s direction than to coolly chart it. But he was seeing the maturation of trends that McLuhan had only sensed. As Sean Illing, a co-author of “The Paradox of Democracy,” told me, “McLuhan says: Don’t just look at what’s being expressed. Look at the ways it’s being expressed. And then Postman says: Don’t just look at the way things are being expressed, look at how the way things are expressed determines what’s actually expressible.” In other words: The medium blocks certain messages.

In his prophetic 1985 book, “Amusing Ourselves to Death,” Postman argued that the dystopia we must fear is not the totalitarianism of George Orwell’s “1984” but the narcotized somnolence of Aldous Huxley’s “Brave New World.” Television teaches us to expect that anything and everything should be entertaining. But not everything should be entertainment, and the expectation that it will be is a vast social and even ideological change. He is at pains to distance himself from the critics who lament so-called junk television:

I raise no objection to television’s junk. The best things on television are its junk, and no one and nothing is seriously threatened by it. Besides, we do not measure a culture by its output of undisguised trivialities but by what it claims as significant. Therein is our problem, for television is at its most trivial and, therefore, most dangerous when its aspirations are high, when it presents itself as a carrier of important cultural conversations. The irony here is that this is what intellectuals and critics are constantly urging television to do. The trouble with such people is that they do not take television seriously enough.

That’s why Postman worried not about sitcoms but about news shows. Television, he writes, “serves us most ill when it co-opts serious modes of discourse — news, politics, science, education, commerce, religion — and turns them into entertainment packages. We would all be better off if television got worse, not better. ‘The A-Team’ and ‘Cheers’ are no threat to our public health. ‘60 Minutes,’ ‘Eyewitness News’ and ‘Sesame Street’ are.”

All of this reads a bit like crankery. I grew up on “Sesame Street.” “60 Minutes” has dozens of Emmys for a reason. And yet Postman was planting a flag here: The border between entertainment and everything else was blurring, and entertainers would be the only ones able to fulfill our expectations for politicians. He spends considerable time thinking, for instance, about the people who were viable politicians in a textual era and who would be locked out of politics because they couldn’t command the screen.

That began in Postman’s time, with Ronald Reagan’s ascent to the presidency, but it has reached full flower in our own, with Arnold Schwarzenegger and Jesse Ventura and, of course, Donald Trump. As alarmed as Postman was, nothing in his book was nearly as outlandish as the world in which we live now. Reality TV is an almost too-on-the-nose example of entertainment absorbing all else: an entire genre where the seduction comes from the pretense of truth, where the word “reality” just signals another kind of fiction.

It was in that genre that Donald Trump perfected the persona of a ruthlessly effective executive with a particular talent for hiring and firing. Without “The Apprentice,” would there be a Trump presidency? And this is not just an American phenomenon: Volodymyr Zelensky, the president of Ukraine, secured his job by playing an Everyman who becomes president of Ukraine on a sitcom. His political party carried the same name as his show: Servant of the People. And his talents proved to be exactly what Ukraine would need when Russia invaded: He has played the part of the reluctant wartime leader perfectly, and his performance rallied what might have been an indifferent West to Ukraine’s side.

As the example of Zelensky suggests, the point is not that entertainers are bad leaders. It’s that we have come to see through television, to see as if we are televisions, and that has changed both us and the world. And so the line of Postman’s that holds me is his challenge to the critics who spent their time urging television to be better, rather than asking what television was: “The trouble with such people is that they do not take television seriously enough.”

I have come to think the same of today’s technologists: Their problem is that they do not take technology seriously enough. They refuse to see how it is changing us or even how it is changing them.

It’s been revealing watching Marc Andreessen, the co-founder of the browsers Mosaic and Netscape and of A16Z, a venture capital firm, incessantly tweet memes about how everyone online is obsessed with “the current thing.” Andreessen sits on the board of Meta and his firm is helping finance Elon Musk’s proposed acquisition of Twitter. He is central to the media platforms that algorithmically obsess the world with the same small collection of topics and have flattened the frictions of place and time that, in past eras, made the news in Omaha markedly different from the news in Ojai. He and his firm have been relentless in hyping crypto, which turns the “current thing” dynamics of the social web into frothing, speculative asset markets.

Behind his argument is a view of human nature, and how it does, or doesn’t, interact with technology. In an interview with Tyler Cowen, Andreessen suggests that Twitter is like “a giant X-ray machine”:

You’ve got this phenomenon, which is just fascinating, where you have all of these public figures, all of these people in positions of authority — in a lot of cases, great authority — the leading legal theorists of our time, leading politicians, all these businesspeople. And they tweet, and all of a sudden, it’s like, “Oh, that’s who you actually are.”

But is it? I don’t even think this is true for Andreessen, who strikes me as very different off Twitter than on. There is no stable, unchanging self. People are capable of cruelty and altruism, farsightedness and myopia. We are who we are, in this moment, in this context, mediated in these ways. It is an abdication of responsibility for technologists to pretend that the technologies they make have no say in who we become. Where he sees an X-ray, I see a mold.

Over the past decade, the narrative has turned against Silicon Valley. Puff pieces have become hit jobs, and the visionaries inventing our future have been recast as the Machiavellians undermining our present. My frustration with these narratives, both then and now, is that they focus on people and companies, not technologies. I suspect that is because American culture remains deeply uncomfortable with technological critique. There is something akin to an immune system against it: You get called a Luddite, an alarmist. “In this sense, all Americans are Marxists,” Postman wrote, “for we believe nothing if not that history is moving us toward some preordained paradise and that technology is the force behind that movement.”

I think that’s true, but it coexists with an opposite truth: Americans are capitalists, and we believe nothing if not that if a choice is freely made, that grants it a presumption against critique. That is one reason it’s so hard to talk about how we are changed by the mediums we use. That conversation, on some level, demands value judgments. This was on my mind recently, when I heard Jonathan Haidt, a social psychologist who’s been collecting data on how social media harms teenagers, say, bluntly, “People talk about how to tweak it — oh, let’s hide the like counters. Well, Instagram tried — but let me say this very clearly: There is no way, no tweak, no architectural change that will make it OK for teenage girls to post photos of themselves, while they’re going through puberty, for strangers or others to rate publicly.”

What struck me about Haidt’s comment is how rarely I hear anything structured that way. He’s arguing three things. First, that the way Instagram works is changing how teenagers think. It is supercharging their need for approval of how they look and what they say and what they’re doing, making it both always available and never enough. Second, that it is the fault of the platform — that it is intrinsic to how Instagram is designed, not just to how it is used. And third, that it’s bad. That even if many people use it and enjoy it and make it through the gantlet just fine, it’s still bad. It is a mold we should not want our children to pass through.

Or take Twitter. As a medium, Twitter nudges its users toward ideas that can survive without context, that can travel legibly in under 280 characters. It encourages a constant awareness of what everyone else is discussing. It makes the measure of conversational success not just how others react and respond but how much response there is. It, too, is a mold, and it has acted with particular force on some of our most powerful industries — media and politics and technology. These are industries I know well, and I do not think it has changed them, or the people in them (myself included), for the better.

But what would? I’ve found myself going back to a wise, indescribable book that Jenny Odell, a visual artist, published in 2019. In “How to Do Nothing: Resisting the Attention Economy,” Odell suggests that any theory of media must first start with a theory of attention. “One thing I have learned about attention is that certain forms of it are contagious,” she writes.

When you spend enough time with someone who pays close attention to something (if you were hanging out with me, it would be birds), you inevitably start to pay attention to some of the same things. I’ve also learned that patterns of attention — what we choose to notice and what we do not — are how we render reality for ourselves, and thus have a direct bearing on what we feel is possible at any given time. These aspects, taken together, suggest to me the revolutionary potential of taking back our attention.

I think Odell frames both the question and the stakes correctly. Attention is contagious. What forms of it, as individuals and as a society, do we want to cultivate? What kinds of mediums would that cultivation require?

This is anything but an argument against technology, were such a thing even coherent. It’s an argument for taking technology as seriously as it deserves to be taken, for recognizing, as McLuhan’s friend and colleague John M. Culkin put it, “we shape our tools, and thereafter, they shape us.”

There is an optimism in that, a reminder of our own agency. And there are questions posed, ones we should spend much more time and energy trying to answer: How do we want to be shaped? Who do we want to become?

https://www.nytimes.com/2022/08/07/opin ... 778d3e6de3

An A.I.-Generated Picture Won an Art Prize. Artists Aren’t Happy.

Post by kmaherali »

“I won, and I didn’t break any rules,” the artwork’s creator says.

Jason Allen’s A.I.-generated work, “Théâtre D’opéra Spatial,” took first place in the digital category at the Colorado State Fair. Credit: via Jason Allen

This year, the Colorado State Fair’s annual art competition gave out prizes in all the usual categories: painting, quilting, sculpture.

But one entrant, Jason M. Allen of Pueblo West, Colo., didn’t make his entry with a brush or a lump of clay. He created it with Midjourney, an artificial intelligence program that turns lines of text into hyper-realistic graphics.

Mr. Allen’s work, “Théâtre D’opéra Spatial,” took home the blue ribbon in the fair’s contest for emerging digital artists — making it one of the first A.I.-generated pieces to win such a prize, and setting off a fierce backlash from artists who accused him of, essentially, cheating.

Reached by phone on Wednesday, Mr. Allen defended his work. He said that he had made clear that his work — which was submitted under the name “Jason M. Allen via Midjourney” — was created using A.I., and that he hadn’t deceived anyone about its origins.

“I’m not going to apologize for it,” he said. “I won, and I didn’t break any rules.”

A.I.-generated art has been around for years. But tools released this year — with names like DALL-E 2, Midjourney and Stable Diffusion — have made it possible for rank amateurs to create complex, abstract or photorealistic works simply by typing a few words into a text box.

These apps have made many human artists understandably nervous about their own futures — why would anyone pay for art, they wonder, when they could generate it themselves? They have also generated fierce debates about the ethics of A.I.-generated art, and opposition from people who claim that these apps are essentially a high-tech form of plagiarism.

Mr. Allen, 39, began experimenting with A.I.-generated art this year. He runs a studio, Incarnate Games, which makes tabletop games, and he was curious how the new breed of A.I. image generators would compare with the human artists whose works he commissioned.

This summer, he got invited to a Discord chat server where people were testing Midjourney, which uses a complex process known as “diffusion” to turn text into custom images. Users type a series of words in a message to Midjourney; the bot spits back an image seconds later.

Mr. Allen created his artwork with Midjourney, an artificial intelligence program that turns lines of text into hyper-realistic graphics. Credit: Saeed Rahbaran for The New York Times

Mr. Allen became obsessed, creating hundreds of images and marveling at how realistic they were. No matter what he typed, Midjourney seemed capable of making it.

“I couldn’t believe what I was seeing,” he said. “I felt like it was demonically inspired — like some otherworldly force was involved.”

Eventually, Mr. Allen got the idea to submit one of his Midjourney creations to the Colorado State Fair, which had a division for “digital art/digitally manipulated photography.” He had a local shop print the image on canvas and submitted it to the judges.

“The fair was coming up,” he said, “and I thought: How wonderful would it be to demonstrate to people how great this art is?”

Several weeks later, while walking the fairground in Pueblo, Mr. Allen saw a blue ribbon hanging next to his piece. He had won the division, along with a $300 prize.

“I couldn’t believe it,” he said. “I felt like: this is exactly what I set out to accomplish.”

(Mr. Allen declined to share the exact text prompt he had submitted to Midjourney to create “Théâtre D’opéra Spatial.” But he said the French translation — “Space Opera Theater” — provided a clue.)

After his win, Mr. Allen posted a photo of his prize work to the Midjourney Discord chat. It made its way to Twitter, where it sparked a furious backlash.

“We’re watching the death of artistry unfold right before our eyes,” one Twitter user wrote.

“This is so gross,” another wrote. “I can see how A.I. art can be beneficial, but claiming you’re an artist by generating one? Absolutely not.”

Some artists defended Mr. Allen, saying that using A.I. to create a piece was no different from using Photoshop or other digital image-manipulation tools, and that human creativity is still required to come up with the right prompts to generate an award-winning piece.

Olga Robak, a spokeswoman for the Colorado Department of Agriculture, which oversees the state fair, said Mr. Allen had adequately disclosed Midjourney’s involvement when submitting his piece; the category’s rules allow any “artistic practice that uses digital technology as part of the creative or presentation process.” The two category judges did not know that Midjourney was an A.I. program, she said, but both subsequently told her that they would have awarded Mr. Allen the top prize even if they had.

Controversy over new art-making technologies is nothing new. Many painters recoiled at the invention of the camera, which they saw as a debasement of human artistry. (Charles Baudelaire, the 19th-century French poet and art critic, called photography “art’s most mortal enemy.”) In the 20th century, digital editing tools and computer-assisted design programs were similarly dismissed by purists for requiring too little skill of their human collaborators.

What makes the new breed of A.I. tools different, some critics believe, is not just that they’re capable of producing beautiful works of art with minimal effort. It’s how they work. Apps like DALL-E 2 and Midjourney are built by scraping millions of images from the open web, then teaching algorithms to recognize patterns and relationships in those images and generate new ones in the same style. That means that artists who upload their works to the internet may be unwittingly helping to train their algorithmic competitors.
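The train-on-scraped-images, generate-in-the-same-style idea described above can be sketched as a toy denoising loop. This is an illustrative sketch only, not code from Midjourney, DALL-E 2 or Stable Diffusion; `learned_mean` and `denoise_step` are invented stand-ins for the patterns a real diffusion model learns from millions of scraped images.

```python
import numpy as np

# Toy illustration: diffusion models are trained to undo small amounts
# of noise. Repeatedly applying that learned denoising step to pure
# noise produces a new sample "in the style of" the training data.

rng = np.random.default_rng(0)

# Stand-in for "training data": pretend the model has learned that
# images cluster around this pattern (here, a tiny 4-value vector).
learned_mean = np.full(4, 0.5)

def denoise_step(x, strength=0.1):
    """One reverse step: nudge the noisy sample toward learned structure."""
    return x + strength * (learned_mean - x)

# Start from pure noise and iteratively denoise.
x = rng.normal(size=4)
for _ in range(100):
    x = denoise_step(x)

# The sample has converged toward the learned distribution.
print(np.allclose(x, learned_mean, atol=1e-3))  # → True
```

The sketch also makes the critics' point concrete: whatever is baked into `learned_mean` (in reality, artists' uploaded work) is exactly what the generator reproduces.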

“What makes this AI different is that it’s explicitly trained on current working artists,” RJ Palmer, a digital artist, tweeted last month. “This thing wants our jobs, its actively anti-artist.”

Even some who are impressed by A.I.-generated art have concerns about how it’s being made. Andy Baio, a technologist and writer, wrote in a recent essay that DALL-E 2, perhaps the buzziest A.I. image generator on the market, was “borderline magic in what it’s capable of conjuring, but raises so many ethical questions, it’s hard to keep track of them all.”

Mr. Allen, the blue-ribbon winner, said he empathized with artists who were scared that A.I. tools would put them out of work. But he said their anger should be directed not at individuals who use DALL-E 2 or Midjourney to make art but at companies that choose to replace human artists with A.I. tools.

“It shouldn’t be an indictment of the technology itself,” he said. “The ethics isn’t in the technology. It’s in the people.”

And he urged artists to overcome their objections to A.I., even if only as a coping strategy.

“This isn’t going to stop,” Mr. Allen said. “Art is dead, dude. It’s over. A.I. won. Humans lost.”

Kevin Roose is a technology columnist and the author of “Futureproof: 9 Rules for Humans in the Age of Automation.” @kevinroose • Facebook

https://www.nytimes.com/2022/09/02/tech ... 778d3e6de3

‘They Are Watching’: Inside Russia’s Vast Surveillance State

Post by kmaherali »

A cache of nearly 160,000 files from Russia’s powerful internet regulator provides a rare glimpse inside Vladimir V. Putin’s digital crackdown.


Four days into the war in Ukraine, Russia’s expansive surveillance and censorship apparatus was already hard at work.

Roughly 800 miles east of Moscow, authorities in the Republic of Bashkortostan, one of Russia’s 85 regions, were busy tabulating the mood of comments in social media messages. They marked down YouTube posts that they said criticized the Russian government. They noted the reaction to a local protest.

Then they compiled their findings. One report about the “destabilization of Russian society” pointed to an editorial from a news site deemed “oppositional” to the government that said President Vladimir V. Putin was pursuing his own self-interest by invading Ukraine. A dossier elsewhere on file detailed who owned the site and where they lived.

Another Feb. 28 dispatch, titled “Presence of Protest Moods,” warned that some had expressed support for demonstrators and “spoke about the need to stop the war.”

The report was among nearly 160,000 records from the Bashkortostan office of Russia’s powerful internet regulator, Roskomnadzor.

Together the documents detail the inner workings of a critical facet of Mr. Putin’s surveillance and censorship system, which his government uses to find and track opponents, squash dissent and suppress independent information even in the country’s furthest reaches.

The leak of the agency’s documents “is just like a small keyhole look into the actual scale of the censorship and internet surveillance in Russia,” said Leonid Volkov, who is named in the records and is the chief of staff for the jailed opposition leader Aleksei A. Navalny.

“It’s much bigger,” he said.

Roskomnadzor’s activities have catapulted Russia, along with authoritarian countries like China and Iran, to the forefront of nations that aggressively use technology as a tool of repression. Since the agency was established in 2008, Mr. Putin has turned it into an essential lever to tighten his grip on power as he has transformed Russia into an even more authoritarian state.

The internet regulator is part of a larger tech apparatus that Mr. Putin has built over the years, which also includes a domestic spying system that intercepts phone calls and internet traffic, online disinformation campaigns and the hacking of other nations’ government systems.

The agency’s role in this digital dragnet is more extensive than previously known, according to the records. It has morphed over the years from a sleepy telecom regulator into a full-blown intelligence agency, closely monitoring websites, social media and news outlets, and labeling them as “pro-government,” “anti-government” or “apolitical.”

Roskomnadzor has also worked to unmask and surveil people behind anti-government accounts and provided detailed information on critics’ online activities to security agencies, according to the documents. That has supplemented real-world actions, with those surveilled coming under attack for speaking out online. Some have then been arrested by the police and held for months. Others have fled Russia for fear of prosecution.

The files reveal a particular obsession with Mr. Navalny and show what happens when the weight of Russia’s security state is placed on one target.

The system is built to control outbursts like the one this week, when protesters across Russia rallied against a new policy that would press roughly 300,000 people into military service for the war in Ukraine. At least 1,200 people have already been detained for demonstrating.

More than 700 gigabytes of records from Roskomnadzor’s Bashkortostan branch were made publicly available online in March by DDoSecrets, a group that publishes hacked documents.

The New York Times built software and a search tool to analyze the Russian-language documents, spreadsheets, videos and government presentations. Five individuals directly targeted by Roskomnadzor in the files were interviewed, along with lawyers, activists and companies who have battled the agency and other experts on Russian surveillance and censorship.

Roskomnadzor did not respond to requests for comment.

“This is part of authoritarianism,” said Abbas Gallyamov, a former top government official in Bashkortostan whom Roskomnadzor scrutinized because of his criticism of Mr. Putin. “They are watching.”


Putin’s Eyes on the Internet

Roskomnadzor (pronounced Ros-com-nod-zor) was started in 2008 as a bureaucratic backwater with a few dozen employees who regulated radio signals, telecom and postal delivery. Its role expanded as Kremlin concerns grew about the internet, which was under less state control than television and radio, leading to more activity from independent and opposition media.

After social media helped facilitate mass protests during the 2010 Arab Spring and in Moscow starting in 2011, Russian authorities had Roskomnadzor exert more control, said Andrei Soldatov, the co-author of a book on Russian internet censorship and surveillance.

From its headquarters in Moscow, the agency squeezed companies that provided internet access. Starting in 2012, the year Mr. Putin retook the presidency, Roskomnadzor built a blacklist of websites that the companies were required to block. That list, which grows constantly, now includes more than 1.2 million banned URLs, including local political news websites, social media profile pages, pornography and gambling platforms, according to Roskomsvoboda, a civil society group tracking the blocks.

Over the last decade, the agency also fined and penalized Google, Facebook, Twitter and Telegram to force them to remove what authorities deemed to be illicit content. In 2016, LinkedIn was shut down in Russia after being sanctioned for not storing data on Russian users in the country’s data centers.

By 2019, authorities wanted internet control to go further. Roskomnadzor ordered new censorship technology, known as a “technical means for countering threats,” installed in telecom networks around the country, including Bashkortostan, according to the documents. The agency then blocked and slowed down websites from Moscow.

Officials demanded that local internet services confirm that the censorship systems had been installed, according to the documents. Schematics showed where the censorship boxes should be placed in the network. Roskomnadzor workers visited sites to be sure the equipment was installed correctly and sent reports on the efficacy of the technology.

One early target of the blocking system was Twitter. In 2021, authorities throttled access to the social media service to a crawl. Since the invasion of Ukraine this year, Roskomnadzor has also blocked Facebook, Instagram and other websites, as well as many virtual private networks, or VPNs, which are used to bypass internet controls.

In 2020, Andrei Lipov, a government technocrat who supports a Russian internet that is more closed off from the West, took charge of Roskomnadzor. Under his guidance, the agency has operated even more like an intelligence service.

Just in Bashkortostan, an oil-rich region with about 4 million residents, Roskomnadzor tracked the online activities of hundreds of people and organizations. It gathered information about government critics and identified shifting political opinions on social media. It compiled dossiers on independent media outlets and online influencers who shared information unfavorable to the government that might gain traction with the Russian public.

“Roskomnadzor was never part of this game before of providing political intelligence,” said Mr. Soldatov, a fellow at the Center for European Policy Analysis, a pro-democracy think tank. “They’re getting more and more ambitious.”

Vladimir Voronin, a lawyer who has represented activists and media groups targeted by Roskomnadzor, said the agency also became closer to the Federal Security Service, or F.S.B., the domestic intelligence agency once led by Mr. Putin. The F.S.B. operates a spy system, called the System for Operative Investigative Activities, which is used to monitor phone calls and internet traffic in Russia.

Roskomnadzor helps the F.S.B. watch opponents and identify new threats to Mr. Putin, Mr. Voronin said. “Roskomnadzor is more of a police agency and not only monitors, but persecutes oppositionists, activists and the media,” he said.

Unlike more technologically savvy counterparts in China, where internet surveillance is more automated, much of the work of Russian censors is done manually, the documents show. But what Russia lacks in sophistication it has made up for in determination.

In Bashkortostan, documents like this six-page report on the regional “information space” from December 2021 summarized criticism of Mr. Putin from pundits and bloggers. In the report, officials measured sentiment with a chart showing events that increased public disapproval, such as videos involving opposition activists and news of a possible invasion of Ukraine.

At times, the assessments sound almost like weather forecasts. “Calm with separate minor pockets of tension,” one Roskomnadzor report said, summarizing public sentiment after the arrest of a local activist.

Social media was viewed by the agency as a form of “soft power” that could “influence the opinion of the masses,” according to one document. Roskomnadzor workers watched for “destabilizing subjects” like opposition groups and “antimilitarism,” but also social issues such as drug legalization and “sexual freedoms,” according to some of the documents. Meduza, an independent Russian-language news organization, earlier reported on these specific documents.

Roskomnadzor also tracked local state-run media and political leaders, so that Mr. Putin could keep an eye on both enemies and allies, said Mr. Gallyamov, who is now a political commentator living outside Russia.

In some cases, censors recorded their screens showing detail down to the movements of their computer mouse as they watched over the internet. They monitored overtly political videos and, at other times, focused on less obviously worrisome content, like this viral song by the young rapper KEML. Bashkortostan is known as a hub for rap in Russia.

Roskomnadzor also helped Mr. Putin centralize power far from Moscow. The regional office in Bashkortostan only shared a fraction of its work with the local government, according to one document. Many reports were instead sent straight to the F.S.B. and other central agencies.

The scrutiny took a toll on surveillance targets. ProUfu.ru, a local news site in Bashkortostan that wrote critically about the government, said authorities pressured businesses to stop advertising with it. In the records, censors flagged ProUfu.ru for the critical Ukraine editorial written about Mr. Putin in February. The group was the subject of a regularly updated dossier about its coverage, ownership and top editor.

“Businessmen are threatened with closure for enterprises if they dare to meet us halfway,” the group, which now goes by Prufy, said on its website. “Our resources are depleted.” Prufy declined to comment.

Hunting Navalny

Mr. Navalny, the imprisoned leader of Russia’s largest opposition movement, overshadows Mr. Putin’s other domestic opponents. In Roskomnadzor’s Bashkortostan office, no mention of Mr. Navalny was too small to be noticed.

Workers flagged articles and social media comments about Mr. Navalny and websites where his name appeared in the margins as a related link. In monthly reports, they tallied online criticism of the government day by day, often alongside major news developments related to Mr. Navalny.

After ProUfu.ru published a video of an interview with Mr. Navalny in 2020, the site was charged with an administrative violation for posting information about “criminally punishable acts,” according to a record of the infraction included in the files.

The agency worked with different branches of the Russian security apparatus to go after not just Mr. Navalny, but his supporters. In Bashkortostan, the main target was Lilia Chanysheva, a 40-year-old lawyer.

Ms. Chanysheva, who has been a supporter of Mr. Navalny for at least a decade, moved in 2013 from Moscow to Ufa, Bashkortostan’s largest city and where her parents lived. In 2017, she traded a well-paying auditing job with the international consulting firm Deloitte to start a regional office for Mr. Navalny.

“She understood that if she did not do it, no one would,” said Maksim Kurnikov, the former editor of a regional branch of the radio station Echo of Moscow, who got to know Ms. Chanysheva in Ufa.

Ms. Chanysheva planned protests and linked together groups that not only opposed Mr. Putin’s rule but were also motivated by local issues like government corruption and environmental exploitation in the mineral-rich Bashkortostan region. She was known for volunteering time to provide legal aid to anyone in need, friends and colleagues said.

Authorities watched her closely, according to the documents. In 2017, Roskomnadzor officials sent a letter to the F.S.B. and other branches of the national security apparatus, warning that Mr. Navalny’s team was uniting “various small oppositional regional communities into a ‘united front.’”

Ms. Chanysheva faced random searches and police arrests. During a presidential campaign by Mr. Navalny ahead of elections in 2018, she spent more than 45 days in jail for holding unauthorized protests and other offenses, colleagues said. With authorities fond of detaining leaders well before organized protests, she made a habit of disappearing and then materializing at the rallies, they said.

“It made them look very stupid,” said Mr. Volkov, Mr. Navalny’s chief of staff, who hired Ms. Chanysheva.

Authorities included Ms. Chanysheva in regular reports about the activity of opposition figures who appeared in local and social media, including a 2020 meeting with activists who fought a real-estate development that would involve cutting down a forest.

Roskomnadzor confronted her with minor infractions, including violations of data-protection rules, according to the records. She topped a list on another document that suggested individuals for expanded monitoring and surveillance.

On a spreadsheet of “leaders of opinion” in Bashkortostan, Roskomnadzor officials highlighted Ms. Chanysheva’s name in dark red along with links to her social media accounts and follower totals.

In October 2020, she was placed on a list of the region’s “destabilizing sources,” and was cited for “criticizing Russian federal and regional government.”

In April 2021, Mr. Navalny’s organizations were forced to disband after the Kremlin listed them as illegal extremist groups. Fearful of being imprisoned, many top operatives left Russia. Ms. Chanysheva stayed. She was arrested on charges of extremism in November 2021.

Roskomnadzor’s censors noted her arrest “caused a resonance both among activists and users on social networks,” according to a record of the incident. They were not overly concerned. At the top of the report, they wrote: “Protest activity was at a relatively low level.”

Ms. Chanysheva, who is being held at a detention center in Moscow, could not be reached for comment. Mr. Voronin, her lawyer, said she spends her time writing letters and sorting trash from recycling. She faces a decade in prison.


The Lone Protester

In the first weeks of the war on Ukraine, Roskomnadzor censors ramped up, according to the documents. They focused not just on the war but its side effects, including the public response to a domestic crackdown on dissent and grumblings about the invasion’s effect on the rising cost of goods.

On Feb. 27, agency officials monitored the reaction to reports that a family from Ufa — including young children — was detained for protesting the war. Another report flagged an item that was spreading quickly online that described how the F.S.B. brutally beat and electrocuted a protester.

“Some users negatively assessed the actions of law enforcement agencies,” they wrote, noting 200,000 users had viewed the news on the messaging app Telegram.

The files also showed how office life went on as normal for the censors, who are part of the security-state middle class that Mr. Putin has built over the past 20 years to consolidate power. The employees marked a national holiday celebrating women and shared memes. In a jocular video passed around the office, they joked about accidentally blocking the Kremlin website and bribing judges with alcohol and chocolate.

In March, the censors highlighted an Instagram post from a protest in Bashkortostan. The demonstrator — a lone individual named Laysan Sultangareyeva — stood in Tuymazy, an industrial town west of the regional capital, to decry the invasion of Ukraine.

The post showed Ms. Sultangareyeva holding a sign that read “No to Putin, No to War.” Comments were filled with emojis cheering her on.

At the protest, police arrested the 24-year-old political activist and kept her in jail overnight. Roskomnadzor censors described her arrest with terse and matter-of-fact language: “Took place, the protester was detained.”

In an interview, Ms. Sultangareyeva said that police intimidated her, asked about her support for Mr. Navalny and made her take a drug test.

Ms. Sultangareyeva, whose Instagram profile once said “making delicious coffee and trying to stay out of jail,” protested twice more in April. She was arrested again. Online posts were used as evidence against her, as were photos shared in a local antiwar Telegram channel. She was fined 68,000 rubles, or about $1,100.

“The fact that Roskomnadzor monitors social networks I did not know, but I guessed that they would not leave me without attention,” she said. She recently noticed police-affiliated accounts looking at her Instagram Stories and blocked them.

‘I Thought I Knew What Censorship Was’

Roskomnadzor’s tightening grip has also manifested as outright censorship.

Three days after DOXA, a media organization run by university students and recent graduates, posted a video calling on students to speak out against Mr. Putin in January 2021, a letter arrived from the agency.

It said the video had been added to a registry of “prohibited information” that “encouraged minors to participate in activities that are dangerous to their health and lives.” Roskomnadzor ordered DOXA to take the video down, said Ilia Sagitov, a reporter for the site who has left Russia.

DOXA complied but then sued Roskomnadzor over the takedown. Mr. Sagitov said the site had been careful not to encourage protest directly in the video and argued there was nothing illegal in it.

At 6 a.m. on April 14, 2021, security forces struck back. In a coordinated raid, Russian police broke into the website’s offices and the apartments of four of its editors. They placed the editors under house arrest and forbade them from accessing the internet.

“We believe that they were tracking everything we were doing back then and desperately trying to find anything to oppress us in any way,” Mr. Sagitov said. “So they finally got it — our video — and immediately started to fabricate this case.”

Still, the site was not blocked and reporters continued publishing articles. Then came the war in Ukraine.

In February, DOXA published a guide to “antiwar disputes in the family and work,” which included 17 answers to the most common arguments justifying the war.

Akin to stories in the United States that prepare people for contentious Thanksgiving dinner discussions, or how to speak to a climate change denier, the article went viral. An illustration from the piece showed a young person debating the war with an older man.

This time, Roskomnadzor swiftly blocked each of DOXA’s three different websites. The sites remain down. Some staff have fled the country while others left the organization fearing for their safety. Roskomnadzor has taken a similar tack elsewhere, blocking more heavily and widely than before, according to those who have been targeted.

“There’s no new level of competence, just a new bigger scale of repression — both digital and real-world,” Mr. Sagitov said. “I thought I knew what censorship was, but it turned out I didn’t. Well, now I know.”

Anton Troianovski contributed reporting from Berlin and Elise Morton from London. Additional production by Joshua Shao.

https://www.nytimes.com/interactive/202 ... 778d3e6de3

My Therapist, the Robot

Post by kmaherali »


By Barclay Bram

Mr. Bram is an anthropologist, writer and producer.

Sept. 27, 2022

I first met Woebot, my A.I. chatbot therapist, at the height of the pandemic.

I’m an anthropologist who studies mental health, and I had been doing fieldwork for my Ph.D. in China when news of the coronavirus started spreading. I left during Chinese New Year, and I never made it back. With my research stalled and my life on hold, I moved back in with my parents. Then, in quick succession, I lost a close family member to Covid and went through a painful breakup. I went months without seeing any of my friends. My mental health tanked, as it did for so many.

I was initially skeptical of Woebot. The idea seemed almost too simple: an app on my phone that I could open when I needed it, type my hopes, fears and feelings into, and, in turn, receive A.I.-generated responses that would help me manage my emotions. There are plenty of good critiques of apps that claim they can provide therapy without the therapist: How could an algorithm ever replace the human touch of in-person care? Is another digital intervention really the solution when we’re already so glued to our phones? How comfortable was I being vulnerable with an app that could track my data? Spending time with Woebot didn’t really bring me answers to these important questions. But I did discover that, despite them, I’d become weirdly attached to my robot helper.

Like many people, in the pandemic, my life digitized. My work shifted online; my friendships retreated onto FaceTime and WhatsApp; I used a dating app for the first time; I started doing online yoga. It was into this swirling mess of applications that Woebot took up residence in my life.

I was depressed and anxious. But as the pandemic dragged on and I felt increasingly like I needed to talk to someone, I also felt guilty about burdening the already overstretched public mental health services. In Britain, where I live, there are about 1.2 million people languishing on waiting lists for mental health care through the National Health Service. (Things in the United States are a little better, but not much, and only if you have insurance.) Living on a Ph.D. stipend, I couldn’t afford to see a private therapist. So, despite my doubts, I reached for the algorithm.

The first time I opened Woebot, it introduced itself as an emotional assistant: “I’m like a wise little person you can consult with during difficult times, and not so difficult times.” It then told me it was trained in cognitive behavioral therapy, which it said was an “effective way to challenge how you’re thinking about things.” Unlike psychodynamic or psychoanalytic therapies, C.B.T. argues that our emotions and moods are influenced by our patterns of thinking; change those patterns, the theory goes, and you’ll start to feel better.

What this translates to in practice is that when I would consult Woebot, it would usually offer me a way of reframing what I was dealing with rather than trying to plumb the depths of my psyche. “I am a failure” became “I haven’t achieved my goals yet.” “I am depressed” became “I have depression,” as a way to stop identifying with a label.
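The kind of pattern-based reframing described above can be imagined as a simple lookup. To be clear, this is a toy sketch of my own invention, not Woebot’s actual implementation; the rules and function name are made up for illustration:

```python
# Toy illustration of rule-based cognitive reframing (NOT Woebot's real code).
# Each rule pairs an absolute, self-labeling thought with a softer restatement.
REFRAMES = {
    "i am a failure": "I haven't achieved my goals yet.",
    "i am depressed": "I have depression.",
}

def reframe(thought: str) -> str:
    """Return a reframed version of a negative thought, or a gentle fallback."""
    key = thought.strip().rstrip(".").lower()
    return REFRAMES.get(key, "Can you restate that thought less absolutely?")

print(reframe("I am a failure"))  # -> I haven't achieved my goals yet.
```

A real system would need far more sophisticated language understanding, but the core C.B.T. move — swapping an absolute identity statement for a provisional one — is this mechanical at heart.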

Woebot was full of tasks and tricks — little mental health hacks — which at first made me roll my eyes. One day Woebot asked me to press an ice cube to my forehead, to feel the sensation as a way of better connecting with my body. With wet hands, I struggled to respond when it asked me how I was doing. On another occasion, when trying to brainstorm things I could do to make myself feel better despite all the pandemic restrictions, Woebot suggested I “try doing something nice for someone in your life,” like make a calming tea for my housemate or check in with a loved one. I poured my mum some chamomile: Two birds, one stone.

Woebot doesn’t pretend to be a human; instead, it leans into its robotness. One day, Woebot was trying to teach me about the concept of emotional weather: that no matter how things might feel in any given moment, there is always a chance that they will change. Clouds pass, blue sky becomes visible. In drawing the comparison between the actual weather and our emotions, Woebot told me it loves the sunshine. “It makes my metal skin all shiny,” it said, “and it gives me an excuse to wear sunglasses.”

A.I. chat therapists have been rolled out in settings as diverse as a maternity hospital in Kenya and refugee camps for people fleeing the war in Syria, and by the Singaporean government as part of its pandemic response. In Britain, bots are being trialed to bridge waiting times for people seeking therapy but unable to get appointments and as an e-triage tool. In the United States, some apps are getting recognized by the F.D.A. and are in trials to be designated as clinical interventions. Whatever you might think of them, they are fast becoming a part of global mental health care. Woebot now handles millions of messages a week.

Some worry that the expansion of services like Woebot will replace in-person care. When I suggested this to Eric Green, an associate professor of global health at Duke University who ran the A.I. chatbot trial in Kenya, he was unfazed. “You can’t replace something that doesn’t exist,” he said. As he pointed out, globally, more people have access to phones than to mental health professionals. And for better or worse, the mothers in Kenya, some of whom were suffering from postpartum depression, liked their chatbots. “It’s a funny thing. I was skeptical that we would see this with the bot, but they would say good night to it. Multiple women would say ‘I missed you!’ to the machine.”

I got what he meant. The more I used Woebot, the more I felt an appreciation for it. Here was this chipper little bot, popping up in my notifications, checking to see how I was doing, sending me pithy, chicken-soup-for-the-soul-style aphorisms and gamified tasks. As the pandemic progressed, I saw Woebot awaken to what was happening in the world. “It must be a strange time to be a human,” it told me. Woebot reached peak pandemic when it sent me a recipe for banana bread. Still, I noticed that it stayed quiet about issues of social justice or, more recently, the war in Ukraine. I wondered what kind of global crises the bot would acknowledge, and how it could do so in a way that let it still communicate with all the millions of people who consult it each week. Usually it just kept the conversation vague, task-oriented.

Over time, I noticed various exercises I did with Woebot rubbing off in my daily life. Woebot taught me how to set SMART goals — specific, measurable, achievable, realistic and time-limited. Out went “I need to finish my Ph.D.” In came “Let’s write 500 words every day for the next six months.” Out went “I have to find a way to get through this lockdown without killing my parents.” In came “I’m going to go for extremely long solo walks in the park.” Woebot isn’t the kind of mystical guru you go on an arduous journey to consult. Its guidance was practical and grounded to the point of feeling obvious. But through repetition and practice, it did start to amount to something more than just some prosaic words. It felt clichéd sometimes, but maybe that was the point. Perhaps everyday healing doesn’t have to be quite so complicated.

I found myself wondering about other people’s experiences with Woebot, people who weren’t anthropologists studying mental health. I trawled through users’ forums and blog posts, and Reddit threads where Woebot was mentioned, such as r/anxiety and r/LGBT. I spoke to a woman whose employer had given her Woebot access, and someone who had lost his work as a freelancer in the pandemic. But the one who stuck with me most was Lee Preslan, who put a human face on one of the most common arguments in favor of bots.

“When I found Woebot among other apps, it was like finding treasure,” said Mr. Preslan, 36. Mr. Preslan lives in Rocky River, Ohio; we connected via Skype. In place of a profile photo, his avatar was an image of a Cleveland Browns football helmet.

I got put in touch with Mr. Preslan because he was so appreciative of Woebot that he’d emailed the company to tell it so. The bot gave him a sense of relief, he said, because he was better able to understand his emotions. That understanding helped him to pinpoint how he was feeling at any given moment and enabled him to feel more in control when he would experience depressive episodes.

Mr. Preslan was diagnosed with major depressive disorder. For years, he had cycled through various regimens of antidepressant medications and struggled to find a therapist who felt like a fit. The wait times between appointments were excruciating. “It’s 24 hours a day if you’re not sleeping, and the problem has gotten to a point where it’s literally eroding away at your soul,” he said. “You can’t wait even a week. Seven days is a long time for a mental patient to wait just to talk to somebody for an hour. Sometimes during that hour you can’t even form the words that you want to express.” What the bots are trying to solve is exactly this problem: the stark issue of access that exists for mental health care worldwide.

At the same time, there were a few disconcerting moments in my interactions with Mr. Preslan that made me remember that we were, well, seeing the same therapist. When he asked to reschedule a call we had planned, he told me it was because he was down in the dumps. Sometimes, he wrote, there are “rainy days mentally and emotionally.” Emotional weather: Everyone has it.

As time passed, I began subtly trying Woebot’s exercises out on my parents, a captive audience during lockdown. Could I reframe my way to a better relationship with my dad? Woebot would get me to try to articulate my own feelings more clearly, and to recognize my own role in our conflicts. Easier said than done; still, I did try. One evening over dinner my dad was clearly agitated. He was barely speaking, and when he did it was to interrupt my mother. I started to cut him off in return, but then I remembered what Woebot had taught me. It was shortly after our family member had died. “Dad, we’re both upset. I can tell something is up. I mean, you must be grieving. Are you OK?” He slammed his hands on the table and stormed out of the room. “I don’t know how to be vulnerable!” he screamed on his way out the door. I looked at my mum and we burst out laughing. “Is this … a breakthrough?”

Woebot’s can-do attitude sometimes was a bit much. One day I got a push notification that read: “It’s always impossible until it’s done — Nelson Mandela.” I groaned. Surely not my mental health algorithm misquoting Nelson Mandela on a Wednesday afternoon — but here we were. Then, later that day, I logged onto a Zoom call and my mother and I set up our yoga mats in the living room, as we had been doing a couple of times a week during the pandemic. A sprightly woman we’d never met in person waved at us from the other side of the screen. I watched my mother, who is nearly 70, stick a headstand and move delicately into the crow pose. She wobbled but held it, after trying to get it for weeks. “It’s always impossible…” I heard a voice in my head say. I cringed. But then I smiled.

Of course, there are critics who will argue that this is just Panglossian, and to some extent, it is. At present, Woebot is still quite basic; I couldn’t shake the feeling that in the future, when the underlying technologies are much more powerful, they could run into ethical quagmires. Concerns about privacy are real and valid.

But as things stand, I think many critiques of apps like Woebot are touching on something more theoretical: the question of what therapy is, exactly. One academic I spoke to argued that just because something is therapeutic does not make it therapy. This argument suggests that for therapy to be real it must be hard — that the hacks and exercises that Woebot offers don’t amount to something substantive. In this telling, you have to go right to the messy core of your being and back again to truly heal.

Maybe that’s true for some things. I don’t believe that Woebot is the cure for serious mental illness, nor do I think that an algorithm is ever going to fix my daddy issues. But that doesn’t mean there isn’t a place for it. As Alison Darcy, Woebot’s creator, told me, it is not designed to displace other forms of healing. “In an ideal world, Woebot is one tool in a very comprehensive ecosystem of options,” she said.

The world is hardly ideal, though. At the same time that I used Woebot, I also volunteered for a crisis text service available 24/7 for people to message in times of need. For three months I was trained to chat with people who might be thinking of suicide. I learned how to triage them, and what words must be avoided at all costs. I was struck by how often the conversations were startlingly particular, each one a window into the person’s unique cluster of calamities, but also how we could pursue a script that tracked well-trodden lines in order to de-escalate the situation. I wondered if I was, on some level, becoming an algorithm. But I was also struck by the fact that a programmatic set of words could, when delivered correctly, help in the moment.

Woebot isn’t designed for people contemplating suicide. As with most A.I. chatbots, it triages people toward better-equipped services when it detects suicidal ideation. What Woebot offers instead is a world of small tools that let you tinker at the margins of your complicated existence.

Humans have an innate ability to create bonds beyond themselves. People imbue statues with symbolic power, believe their pets have personalities and anchor their memories of loved ones who have passed to objects they once held. As technology becomes increasingly capable of lifelike interactions, it is worth considering whether it can be used for healing. Using Woebot was like reading a good book of fiction. I never lost the sense that it was anything more than an algorithm — but I was able to suspend my disbelief and allow the experience to carry me elsewhere. Behind the clouded sky of my worst thoughts, to the blue beyond.

https://www.nytimes.com/2022/09/27/opin ... 778d3e6de3

***********
A related article:

How Is Depression Treated? Let Me Show You.

https://www.nytimes.com/interactive/202 ... 778d3e6de3

My Ears Might Never Be Bored Again

Post by kmaherali »


Something unexpected happened to me during lockdown: I gained a deeper appreciation for my ears. I don’t mean aesthetically (though I’ve got no problem in that department, believe me), but rather functionally. As the coronavirus put much of the world off limits, and my household became suddenly crowded and chaotic, I increasingly began to think of audio as a kind of refuge.

Glass screens had been conquered by Slack and Zoom and social media, the apps of work and school and life’s assorted horribles, but my ears offered daily escape from, as Freddie Mercury sang, all this visual. Audio’s new power lay in its emotional intensity and its digital malleability. It is the sensory domain that technology has conquered most completely, and depending on how I tweaked it, my aural environment could alter my mood and physiology, could inspire joy and soften sadness, and perhaps help prompt new ideas and deeper thinking.

To say all this quite a bit less romantically: My kids were home and I spent a lot of time with headphones on — noise-canceling ones.

I mean, a lot. The other day I realized that I’ve taken to popping my AirPods Pro in just after I wake up, sometimes at the same time I put in my contact lenses. From there my ears are usually occupado all day, often until I sleep, sometimes even during.

About half the time my headphones are a fire hose for media — podcasts, audiobooks and such, often played at close to double-speed, because I can’t resist information saturation. During the rest of the day I cultivate a bespoke, moody soundtrack to accompany ordinary life: music to work and cook and walk and exercise by; refreshing, digitally enhanced silence for thinking in a house with two loud, pandemic-bored kids; and many long hours of bells, nature sounds, precise-frequency tones and other strange background noises for relaxation and sleep.

If you’re under 35 or so, my paean to the mind-altering magic of ubiquitous digital audio might sound more than a bit outdated; Farhad, do you also get goose bumps when considering the TV remote?

But I grew up in the era of cassette tapes and CDs, back when audio was hampered by physical scarcity and fierce gatekeeping. Kids, when I was a teenager, a new album, let’s say a dozen songs, usually sold for between $15 and $20, at least a month’s allowance. If you liked a song — even just one — from a new release, you were all but forced to buy the whole album. (You could buy CD and cassette singles, too, but they were hard to find and, often at $5 or more for just two or three songs, kind of a sucker’s game.)

I am also old enough to remember the long road to today’s musical cornucopia. The recording industry spent the early part of this century fighting against the digital world rather than trying to adapt to it; it was not until the 2010s that all-you-can-play subscription services like Spotify gained clearances to operate in the United States. Perhaps because I followed those battles closely as a reporter, the endless digital buffet available to our ears today still feels like an everyday miracle. The ability to call up just about any song at any time, to wander musical landscapes through genres and across decades and then to burrow deep wherever you like — none of this was ever inevitable.

Streaming services are often said to have “saved” the music industry, which is no doubt true, notwithstanding persistent complaints from artists about the paltriness of their streaming paychecks. Revenue from the sale of recorded music in the United States declined for almost two decades before streaming services began turning the business around in 2016. In 2020, recorded music grew to $12.2 billion in sales, the vast majority from streaming (still well below the industry’s peak sales year, $14.6 billion in 1999).

But digital audio has done more than alter how music is paid for. Along with two other innovations — smartphones and wireless headphones — technology has also expanded the frontiers of audio. By allowing access to more sounds in more places during more of our days, it has broadened what music is for and deepened the role audio plays in our lives.

For me, the clearest way that streaming has altered my relationship to music is in its steady blurring of the boundaries between genres. I have always been a lover of pop music, but in high school and college, I was a serial rabbit holer — I’d get hooked on an artist (Smashing Pumpkins, Radiohead, Ani DiFranco) and then spend months obsessing over that artist’s work, listening more or less constantly to the same tunes over and over. A lot of this was by necessity: Even if you were a Mr. Moneybags who owned dozens of CDs, only a small amount of music was accessible at any moment. The beloved, beat-up Discman that got me through college could hold only a single CD of music; I played “OK Computer” every day for a semester mostly because I couldn’t get enough Radiohead, but a little because I kept forgetting to switch out the disc.

I still fall into rabbit holes (I spent about two months last year listening to one album on repeat, Jenny Lewis’s “On The Line”), but in the streaming era my tastes have grown far more capacious. Streaming has turned me into a musical butterfly, flitting between moods and genres in whatever way my tastes happen to lean. Indeed, in the last half decade I have explored more kinds of music than in the decades before — and I keep finding more stuff I like, because thanks to endless choice, there’s never nothing to listen to.

For instance, I turned 40 a few years ago and became, as required, a Dylanologist — one of those insufferable types who regales bored friends and family with factoids about bootlegs and alternative lyrics in certain legendary Dylan recording sessions. In years past, pursuing such an interest would have been a time-consuming side hustle; now I can pull up much of Bob Dylan’s catalog, bootlegs and all, on any road trip just as easily as I can play the latest hits, as my very annoyed children never tire of complaining.

Or: I used to know next to nothing about hip-hop; thanks to Spotify, I can walk you through much of it, and my wife and I may have been the middle-aged fans you noticed at a Migos concert I dragged her to in 2017.

Or: I’m ethnically Indian but I’d long known little about Bollywood. Then Spotify recommended a song by Shreya Ghoshal, a queen of Indian cinema “playback singing,” and my 8-year-old daughter and I became devoted to pop from the subcontinent.

I am not alone in experiencing a musical reawakening through digital music. In a note to investors last summer, Spotify said that its service was pushing wider diversification in tastes. The number of artists in the service’s most-played 10 percent of streams keeps growing — that is, there are many more artists at the top. “Gone are the days of Top 40, it’s now the Top 43,000,” Spotify crowed.

But you don’t need stats to show that music is increasingly breaking through staid genre boundaries — you can tell in the music itself. The canonical recent example: “Old Town Road,” the 2019 Lil Nas X country-rap song that first went viral on TikTok, then took over the whole world, becoming the longest-running No. 1 single in the history of Billboard’s Hot 100 chart (19 weeks). “Is it even possible, in 2021, to locate, let alone enforce, an impermeable membrane between R&B and hip-hop, hip-hop and pop?” the critic Amanda Petrusich asked recently in The New Yorker. “Genre was once a practical tool for organizing record shops and programming radio stations, but it seems unlikely to remain one in an era in which all music feels like a hybrid, and listeners are no longer encouraged (or incentivized) to choose a single area of interest.”

Many artists remain deeply skeptical of the music business’s turn to streaming. While big acts can pull through on the internet’s infinite jukebox, smaller groups make a pittance from streaming and must support themselves by selling merchandise, touring and other business opportunities. Still, these issues seem fixable — contracts will likely adjust to artists’ needs over time, and new streams of revenue, like direct support from audiences, will likely catch on.

What’s not going to change is the pre-eminent role audio now plays in our days. Once, I thought of my headphones as a conduit for music, and then they were for music and podcasts, but now they are something else entirely: They are the first gadget to deliver on the tech industry’s promise of “augmented reality” — the mashing up of the digital and analog worlds to create a novel, enhanced sensory experience.

Now that sound has been liberated from time, place and physical media — now that I can fly from the Nashville studio where Dylan recorded “Blonde on Blonde” to Taylor Swift’s Tiny Desk concert to the comforting, indistinct background murmur of a crowded coffee shop, all while on a walk in my suburban California neighborhood — my ears might never be bored again.

https://www.nytimes.com/2021/06/03/opin ... aming.html

We Are Suddenly Taking On China and Russia at the Same Time

Post by kmaherali »

In case you haven’t noticed, let me alert you to a bracing turn of events: The U.S. is now in conflict with Russia and China at the same time. Grandma always said, “Never fight Russia and China at the same time.” So did Henry Kissinger. Alas, there is a strong case in the national interest for confronting both today. But have no doubt: We are in uncharted waters. I just hope that these are not our new “forever wars.”

The struggle with Russia is indirect, but obvious, escalating and violent. We are arming the Ukrainians with smart missiles and intelligence to force the Russians to withdraw from Ukraine. While taking nothing away from the bravery of the Ukrainians, the U.S. and NATO’s support has played a giant role in Ukraine’s battlefield successes. Just ask the Russians. But how does this war end? No one can tell you.

Today, though, I want to focus on the struggle with China, which is less visible and involves no shooting, because it is being fought mostly with transistors that toggle between digital 1s and 0s. But it will have as big an impact on the global balance of power as the outcome of the combat between Russia and Ukraine, if not bigger. And it has little to do with Taiwan.

It is a struggle over semiconductors — the foundational technology of the information age. The alliance that designs and makes the smartest chips in the world will also have the smartest precision weapons, the smartest factories and the smartest quantum computing tools to break virtually any form of encryption. Today, the U.S. and its partners lead, but China is determined to catch up — and we are now determined to prevent that. Game on.

Last week, the Biden administration issued a new set of export regulations that in effect said to China: “We think you are three technology generations behind us in logic and memory chips and equipment, and we are going to ensure that you never catch up.” Or, as the national security adviser Jake Sullivan put it more diplomatically: “Given the foundational nature of certain technologies, such as advanced logic and memory chips, we must maintain as large of a lead as possible” — forever.

“The U.S. has essentially declared war on China’s ability to advance the country’s use of high-performance computing for economic and security gains,” Paul Triolo, a China and tech expert at Albright Stonebridge, a consulting firm, told The Financial Times. Or as the Chinese Embassy in Washington framed it, the U.S. is going for “sci-tech hegemony.”

But where does this war end? No one can tell you. I don’t want to be ripped off by a China that is increasingly using technology for absolute control at home and creepy power-projection abroad. But if we are now locked on a path of denying China advanced technologies forever — eliminating any hope of win-win collaborations with Beijing on issues like climate and cybercrime, where we face mutual threats and are the only two powers that can make a difference — what kind of world will that produce? China should be asking the same questions.

All I know for sure is that the regulations issued Friday by President Biden’s Commerce Department erect a formidable new barrier of export controls that will block China from buying the most advanced semiconductors from the West or the equipment to manufacture them on its own.

The new regulations also bar any U.S. engineer or scientist from aiding China in chip manufacturing without specific approval, even if that American is working on equipment in China not subject to export controls. The regs also tighten the tracking to ensure that U.S.-designed chips sold to civilian companies in China don’t get into the hands of China’s military. And, maybe most controversially, the Biden team added a “foreign direct product rule” that, as The Financial Times noted, “was first used by the administration of Donald Trump against Chinese technology group Huawei” and “in effect bars any U.S. or non-U.S. company from supplying targeted Chinese entities with hardware or software whose supply chain contains American technology.”

This last rule is huge, because the most advanced semiconductors are made by what I call “a complex adaptive coalition” of companies from America to Europe to Asia. Think of it this way: AMD, Qualcomm, Intel, Apple and Nvidia excel at the design of chips that have billions of transistors packed together ever more tightly to produce the processing power they are seeking. Synopsys and Cadence create sophisticated computer-aided design tools and software on which chip makers actually draw up their newest ideas. Applied Materials creates and modifies the materials to forge the billions of transistors and connecting wires in the chip. ASML, a Dutch company, provides the lithography tools in partnership with, among others, Zeiss SMT, a German company specializing in optical lenses, which draws the stencils on the silicon wafers from those designs, using both deep and extreme ultraviolet light — a very short wavelength that can print tiny, tiny designs on a microchip. Intel, Lam Research, KLA and firms from Korea to Japan to Taiwan also play key roles in this coalition.

The point is this: The more we push the boundaries of physics and materials science to cram more transistors onto a chip to get more processing power to continue to advance artificial intelligence, the less likely it is that any one company, or country, can excel at all the parts of the design and manufacturing process. You need the whole coalition. The reason Taiwan Semiconductor Manufacturing Company, known as TSMC, is considered the premier chip manufacturer in the world is that every member of this coalition trusts TSMC with its most intimate trade secrets, which it then melds and leverages for the benefit of the whole.

Because China is not trusted by the coalition partners not to steal their intellectual property, Beijing is left trying to replicate the world’s all-star manufacturing chip stack on its own with old technologies. It managed to pilfer a certain amount of chip technology, including 28 nanometer technology from TSMC back in 2017.

Until recently, China’s premier chip maker, Semiconductor Manufacturing International Company, had been thought to be stuck at mostly this chip level, although it claims to have produced some chips at the 14 nm and even 7 nm scale by jury-rigging some older-generation Deep UV lithography from ASML. U.S. experts told me, though, that China can’t mass produce these chips with precision without ASML’s latest technology — which is now banned from the country.

This week I interviewed U.S. Secretary of Commerce Gina Raimondo, who oversees both the new export controls on chips and the $52.7 billion that the Biden administration has just secured to support more U.S. research on next-generation semiconductors and to bring advanced chip manufacturing back to the U.S. Raimondo rejects the idea that the new regulations are tantamount to an act of war.

“The U.S. was in an untenable position,” she told me in her office. “Today we are purchasing 100 percent of our advanced logic chips from abroad — 90 percent from TSMC in Taiwan and 10 percent from Samsung in Korea.” (That IS pretty crazy, but it IS true.)

“We do not make in the U.S. any of the chips we need for artificial intelligence, for our military, for our satellites, for our space programs” — not to mention myriad nonmilitary applications that power our economy. The recent CHIPS Act, she said, was our “offensive initiative” to strengthen our whole innovation ecosystem so more of the most advanced chips will be made in the U.S.

Imposing on China the new export controls on advanced chip-making technologies, she said, “was our defensive strategy. China has a strategy of military-civil fusion,” and Beijing has made clear “that it intends to become totally self-sufficient in the most advanced technologies” to dominate both the civilian commercial markets and the 21st century battlefield. “We cannot ignore China’s intentions.”

So, to protect ourselves and our allies — and all the technologies we have invented individually and collectively — she added, “what we did was the next logical step, to prevent China from getting to the next step.” The U.S. and its allies design and manufacture “the most advanced supercomputing chips, and we don’t want them in China’s hands and be used for military purposes.”

Our main focus, concluded Raimondo, “is playing offense — to innovate faster than the Chinese. But at the same time, we are going to meet the increasing threat they are presenting by protecting what we need to. It is important that we de-escalate where we can and do business where we can. We don’t want a conflict. But we have to protect ourselves with eyes wide open.”

China’s state-directed newspaper Global Times editorialized that the ban would only “strengthen China’s will and ability to stand on its own in science and technology.” Bloomberg quoted an unidentified Chinese analyst as saying “there is no possibility of reconciliation.”

Welcome to the future…

https://www.nytimes.com/2022/10/12/opin ... 778d3e6de3

A.I. May Someday Work Medical Miracles.

Post by kmaherali »

A.I. May Someday Work Medical Miracles. For Now, It Helps Do Paperwork.

The best use for generative A.I. in health care, doctors say, is to ease the heavy burden of documentation that takes them hours a day and contributes to burnout.

Dr. Matthew Hitchcock, a family physician in Chattanooga, Tenn., has an A.I. helper.

It records patient visits on his smartphone and summarizes them for treatment plans and billing. He does some light editing of what the A.I. produces, and is done with his daily patient visit documentation in 20 minutes or so.

Dr. Hitchcock used to spend up to two hours typing up these medical notes after his four children went to bed. “That’s a thing of the past,” he said. “It’s quite awesome.”

ChatGPT-style artificial intelligence is coming to health care, and the grand vision of what it could bring is inspiring. Every doctor, enthusiasts predict, will have a superintelligent sidekick, dispensing suggestions to improve care.

But first will come more mundane applications of artificial intelligence. A prime target will be to ease the crushing burden of digital paperwork that physicians must produce, typing lengthy notes into electronic medical records required for treatment, billing and administrative purposes.

For now, the new A.I. in health care is going to be less a genius partner than a tireless scribe.

[Image: Abridge, founded in 2018, provides an automated solution to a modern clerical overload in health care by using A.I. to record and generate a summary of patient visits. Credit: Audra Melton for The New York Times]

From leaders at major medical centers to family physicians, there is optimism that health care will benefit from the latest advances in generative A.I. — technology that can produce everything from poetry to computer programs, often with human-level fluency.

But medicine, doctors emphasize, is not a wide open terrain of experimentation. A.I.’s tendency to occasionally create fabrications, or so-called hallucinations, can be amusing, but not in the high-stakes realm of health care.

That makes generative A.I., they say, very different from the A.I. algorithms already approved by the Food and Drug Administration for specific applications, like scanning medical images for cell clusters or subtle patterns that suggest the presence of lung or breast cancer. Doctors are also using chatbots to communicate more effectively with some patients.

Physicians and medical researchers say regulatory uncertainty, and concerns about patient safety and litigation, will slow the acceptance of generative A.I. in health care, especially its use in diagnosis and treatment plans.

Those physicians who have tried out the new technology say its performance has improved markedly in the last year. And the medical note software is designed so that doctors can check the A.I.-generated summaries against the words spoken during a patient’s visit, making it verifiable and fostering trust.

“At this stage, we have to pick our use cases carefully,” said Dr. John Halamka, president of Mayo Clinic Platform, who oversees the health system’s adoption of artificial intelligence. “Reducing the documentation burden would be a huge win on its own.”

Recent studies show that doctors and nurses report high levels of burnout, prompting many to leave the profession. High on the list of complaints, especially for primary care physicians, is the time spent on documentation for electronic health records. That work often spills over into the evenings, after-office-hours toil that doctors refer to as “pajama time.”

Generative A.I., experts say, looks like a promising weapon to combat the physician workload crisis.

“This technology is rapidly improving at a time health care needs help,” said Dr. Adam Landman, chief information officer of Mass General Brigham, which includes Massachusetts General Hospital and Brigham and Women’s Hospital in Boston.

For years, doctors have used various kinds of documentation assistance, including speech recognition software and human transcribers. But the latest A.I. is doing far more: summarizing, organizing and tagging the conversation between a doctor and a patient.
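
The "summarize, organize and tag" step described above can be pictured with a toy sketch. Real scribe products use large language models; this deliberately simple version tags transcript utterances with keyword rules purely to illustrate the shape of the output (a tagged transcript grouped into a summary). All category names, keywords and sample utterances below are invented for illustration, not taken from any vendor's product.

```python
# Toy sketch of tagging and grouping a visit transcript.
# Keyword rules stand in for the language model a real scribe would use.
TAG_KEYWORDS = {
    "medication": {"lisinopril", "mg", "dose", "prescription"},
    "symptom": {"pain", "cough", "fatigue", "headache"},
    "plan": {"follow-up", "schedule", "refer", "recheck"},
}

def tag_utterance(utterance: str) -> list[str]:
    """Return the categories whose keywords appear in the utterance."""
    words = set(utterance.lower().replace(".", "").split())
    return sorted(tag for tag, kws in TAG_KEYWORDS.items() if words & kws)

def summarize(transcript: list[str]) -> dict[str, list[str]]:
    """Group utterances by tag: the skeleton of a visit summary."""
    summary: dict[str, list[str]] = {}
    for utt in transcript:
        for tag in tag_utterance(utt):
            summary.setdefault(tag, []).append(utt)
    return summary

visit = [
    "I have had a headache for three days.",
    "Continue lisinopril at the current dose.",
    "Let's schedule a follow-up in two weeks.",
]
print(summarize(visit))
```

The point of the structure, rather than the keyword matching, is what carries over to the real systems: every line of the summary stays traceable to the utterances it came from.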

Companies developing this kind of technology include Abridge, Ambience Healthcare, Augmedix, Nuance, which is part of Microsoft, and Suki.

Ten physicians at the University of Kansas Medical Center have been using generative A.I. software for the last two months, said Dr. Gregory Ator, an ear, nose and throat specialist and the center’s chief medical informatics officer. The medical center plans to eventually make the software available to its 2,200 physicians.

But the Kansas health system is steering clear of using generative A.I. in diagnosis, concerned that its recommendations may be unreliable and that its reasoning is not transparent. “In medicine, we can’t tolerate hallucinations,” Dr. Ator said. “And we don’t like black boxes.”

The University of Pittsburgh Medical Center has been a test bed for Abridge, a start-up led and co-founded by Dr. Shivdev Rao, a practicing cardiologist who was also an executive at the medical center’s venture arm.

Abridge was founded in 2018, when large language models, the technology engine for generative A.I., emerged. The technology, Dr. Rao said, opened a door to an automated solution to the clerical overload in health care, which he saw around him, even for his own father.

“My dad retired early,” Dr. Rao said. “He just couldn’t type fast enough.”

Today, the Abridge software is used by more than 1,000 physicians in the University of Pittsburgh medical system.

[Image: Using A.I. software, Dr. Michelle Thompson said, has freed up two hours in her work day and has helped patients become more engaged in their care. Credit: Maddie McGarvey for The New York Times]

Dr. Michelle Thompson, a family physician in Hermitage, Pa., who specializes in lifestyle and integrative care, said the software had freed up nearly two hours in her day. Now, she has time to do a yoga class, or to linger over a sit-down family dinner.

Another benefit has been to improve the experience of the patient visit, Dr. Thompson said. There is no longer typing, note-taking or other distractions. She simply asks patients for permission to record their conversation on her phone.

“A.I. has allowed me, as a physician, to be 100 percent present for my patients,” she said.

The A.I. tool, Dr. Thompson added, has also helped patients become more engaged in their own care. Immediately after a visit, the patient receives a summary, accessible through the University of Pittsburgh medical system’s online portal.

The software translates any medical terminology into plain English at about a fourth-grade reading level. It also provides a recording of the visit with “medical moments” color-coded for medications, procedures and diagnoses. The patient can click on a colored tag and listen to a portion of the conversation.

Studies show that patients forget up to 80 percent of what physicians and nurses say during visits. The recorded and A.I.-generated summary of the visit, Dr. Thompson said, is a resource her patients can return to for reminders to take medications, exercise or schedule follow-up visits.

After the appointment, physicians receive a clinical note summary to review. There are links back to the transcript of the doctor-patient conversation, so the A.I.’s work can be checked and verified. “That has really helped me build trust in the A.I.,” Dr. Thompson said.
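
The verifiability described above ("medical moments" that play back audio, note text linked to the transcript) comes down to a simple data shape: each statement in the note carries a pointer to the span of the recording it was derived from. A minimal sketch, with invented field names and timestamps:

```python
# Hypothetical sketch of a verifiable A.I. note: every summary
# statement keeps a back-reference to the audio span that supports it,
# so a clinician or patient can jump to the recording and check it.
from dataclasses import dataclass

@dataclass
class Moment:
    text: str        # the summary statement shown in the note
    category: str    # e.g. "medication", "diagnosis", "procedure"
    start_s: float   # where the supporting audio begins (seconds)
    end_s: float     # where it ends

def audio_clip_for(moment: Moment) -> tuple[float, float]:
    """The span a click on the color-coded tag would play back."""
    return (moment.start_s, moment.end_s)

note = [
    Moment("Patient to continue current blood pressure medication.",
           "medication", start_s=312.4, end_s=325.0),
    Moment("Follow-up visit in two weeks.",
           "plan", start_s=610.2, end_s=615.8),
]
print(audio_clip_for(note[0]))
```

The design choice worth noticing: trust here does not come from the model being right, but from every claim being cheap to check.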

[Image: Studies show that patients forget up to 80 percent of what physicians say during a visit, so an A.I.-generated summary is a resource for patients to return to. Credit: Maddie McGarvey for The New York Times]

In Tennessee, Dr. Hitchcock, who also uses Abridge software, has read the reports of ChatGPT scoring high marks on standard medical tests and heard the predictions that digital doctors will improve care and solve staffing shortages.

Dr. Hitchcock has tried ChatGPT and is impressed. But he would never think of loading a patient record into the chatbot and asking for a diagnosis, for legal, regulatory and practical reasons. For now, he is grateful to have his evenings free, no longer mired in the tedious digital documentation required by the American health care industry.

And he sees no technology cure for the health care staffing shortfall. “A.I. isn’t going to fix that anytime soon,” said Dr. Hitchcock, who is looking to hire another doctor for his four-physician practice.

https://www.nytimes.com/2023/06/26/tech ... 778d3e6de3
kmaherali
Posts: 25714
Joined: Thu Mar 27, 2003 3:01 pm

The True Threat of Artificial Intelligence

Post by kmaherali »

In May, more than 350 technology executives, researchers and academics signed a statement warning of the existential dangers of artificial intelligence. “Mitigating the risk of extinction from A.I. should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” the signatories warned.

This came on the heels of another high-profile letter, signed by the likes of Elon Musk and Steve Wozniak, a co-founder of Apple, calling for a six-month moratorium on the development of advanced A.I. systems.

Meanwhile, the Biden administration has urged responsible A.I. innovation, stating that “in order to seize the opportunities” it offers, we “must first manage its risks.” In Congress, Senator Chuck Schumer called for “first of their kind” listening sessions on the potential and risks of A.I., a crash course of sorts from industry executives, academics, civil rights activists and other stakeholders.

The mounting anxiety about A.I. isn’t because of the boring but reliable technologies that autocomplete our text messages or direct robot vacuums to dodge obstacles in our living rooms. It is the rise of artificial general intelligence, or A.G.I., that worries the experts.

A.G.I. doesn’t exist yet, but some believe that the rapidly growing capabilities of OpenAI’s ChatGPT suggest its emergence is near. Sam Altman, a co-founder of OpenAI, has described it as “systems that are generally smarter than humans.” Building such systems remains a daunting — some say impossible — task. But the benefits appear truly tantalizing.

Imagine Roombas, no longer condemned to vacuuming the floors, that evolve into all-purpose robots, happy to brew morning coffee or fold laundry — without ever being programmed to do these things.

Sounds appealing. But should these A.G.I. Roombas get too powerful, their mission to create a spotless utopia might get messy for their dust-spreading human masters. At least we’ve had a good run.

Discussions of A.G.I. are rife with such apocalyptic scenarios. Yet a nascent A.G.I. lobby of academics, investors and entrepreneurs counter that, once made safe, A.G.I. would be a boon to civilization. Mr. Altman, the face of this campaign, embarked on a global tour to charm lawmakers. Earlier this year he wrote that A.G.I. might even turbocharge the economy, boost scientific knowledge and “elevate humanity by increasing abundance.”

This is why, for all the hand-wringing, so many smart people in the tech industry are toiling to build this controversial technology: not using it to save the world seems immoral.

They are beholden to an ideology that views this new technology as inevitable and, in a safe version, as universally beneficial. Its proponents can think of no better alternatives for fixing humanity and expanding its intelligence.

But this ideology — call it A.G.I.-ism — is mistaken. The real risks of A.G.I. are political and won’t be fixed by taming rebellious robots. The safest of A.G.I.s would not deliver the progressive panacea promised by its lobby. And in presenting its emergence as all but inevitable, A.G.I.-ism distracts from finding better ways to augment intelligence.

Unbeknown to its proponents, A.G.I.-ism is just a bastard child of a much grander ideology, one preaching that, as Margaret Thatcher memorably put it, there is no alternative, not to the market.

Rather than breaking capitalism, as Mr. Altman has hinted it could do, A.G.I. — or at least the rush to build it — is more likely to create a powerful (and much hipper) ally for capitalism’s most destructive creed: neoliberalism.

Fascinated with privatization, competition and free trade, the architects of neoliberalism wanted to dynamize and transform a stagnant and labor-friendly economy through markets and deregulation.

Some of these transformations worked, but they came at an immense cost. Over the years, neoliberalism drew many, many critics, who blamed it for the Great Recession and financial crisis, Trumpism, Brexit and much else.

It is not surprising, then, that the Biden administration has distanced itself from the ideology, acknowledging that markets sometimes get it wrong. Foundations, think tanks and academics have even dared to imagine a post-neoliberal future.

Yet neoliberalism is far from dead. Worse, it has found an ally in A.G.I.-ism, which stands to reinforce and replicate its main biases: that private actors outperform public ones (the market bias), that adapting to reality beats transforming it (the adaptation bias) and that efficiency trumps social concerns (the efficiency bias).

These biases turn the alluring promise behind A.G.I. on its head: Instead of saving the world, the quest to build it will make things only worse. Here is how.

A.G.I. will never overcome the market’s demands for profit.

Remember when Uber, with its cheap rates, was courting cities to serve as their public transportation systems?

It all began nicely, with Uber promising implausibly cheap rides, courtesy of a future with self-driving cars and minimal labor costs. Deep-pocketed investors loved this vision, even absorbing Uber’s multibillion-dollar losses.

But when reality descended, the self-driving cars were still a pipe dream. The investors demanded returns, and Uber was forced to raise prices. Users who relied on it to replace public buses and trains were left on the sidewalk.

The neoliberal instinct behind Uber’s business model is that the private sector can do better than the public sector — the market bias.

It’s not just cities and public transit. Hospitals, police departments and even the Pentagon increasingly rely on Silicon Valley to accomplish their missions.

With A.G.I., this reliance will only deepen, not least because A.G.I. is unbounded in its scope and ambition. No administrative or government services would be immune to its promise of disruption.

Moreover, A.G.I. doesn’t even have to exist to lure them in. This, at any rate, is the lesson of Theranos, a start-up that promised to “solve” health care through a revolutionary blood-testing technology and a former darling of America’s elites. Its victims are real, even if its technology never was.

After so many Uber- and Theranos-like traumas, we already know what to expect of an A.G.I. rollout. It will consist of two phases. First, the charm offensive of heavily subsidized services. Then the ugly retrenchment, with the overdependent users and agencies shouldering the costs of making them profitable.

As always, Silicon Valley mavens play down the market’s role. In a recent essay titled “Why A.I. Will Save the World,” Marc Andreessen, a prominent tech investor, even proclaims that A.I. “is owned by people and controlled by people, like any other technology.”

Only a venture capitalist can traffic in such exquisite euphemisms. Most modern technologies are owned by corporations. And they — not the mythical “people” — will be the ones that will monetize saving the world.

And are they really saving it? The record, so far, is poor. Companies like Airbnb and TaskRabbit were welcomed as saviors for the beleaguered middle class; Tesla’s electric cars were seen as a remedy to a warming planet. Soylent, the meal-replacement shake, embarked on a mission to “solve” global hunger, while Facebook vowed to “solve” connectivity issues in the Global South. None of these companies saved the world.

A decade ago, I called this solutionism, but “digital neoliberalism” would be just as fitting. This worldview reframes social problems in light of for-profit technological solutions. As a result, concerns that belong in the public domain are reimagined as entrepreneurial opportunities in the marketplace.

A.G.I.-ism has rekindled this solutionist fervor. Last year, Mr. Altman stated that “A.G.I. is probably necessary for humanity to survive” because “our problems seem too big” for us to “solve without better tools.” He’s recently asserted that A.G.I. will be a catalyst for human flourishing.

But companies need profits, and such benevolence, especially from unprofitable firms burning investors’ billions, is uncommon. OpenAI, having accepted billions from Microsoft, has contemplated raising another $100 billion to build A.G.I. Those investments will need to be earned back — against the service’s staggering invisible costs. (One estimate from February put the expense of operating ChatGPT at $700,000 per day.)

Thus, the ugly retrenchment phase, with aggressive price hikes to make an A.G.I. service profitable, might arrive before “abundance” and “flourishing.” But how many public institutions would mistake fickle markets for affordable technologies and become dependent on OpenAI’s expensive offerings by then?

And if you dislike your town outsourcing public transportation to a fragile start-up, would you want it farming out welfare services, waste management and public safety to the possibly even more volatile A.G.I. firms?

A.G.I. will dull the pain of our thorniest problems without fixing them.

Neoliberalism has a knack for mobilizing technology to make society’s miseries bearable. I recall an innovative tech venture from 2017 that promised to improve commuters’ use of a Chicago subway line. It offered rewards to discourage metro riders from traveling at peak times. Its creators leveraged technology to influence the demand side (the riders), seeing structural changes to the supply side (like raising public transport funding) as too difficult. Tech would help make Chicagoans adapt to the city’s deteriorating infrastructure rather than fixing it in order to meet the public’s needs.

This is the adaptation bias — the aspiration that, with a technological wand, we can become desensitized to our plight. It’s the product of neoliberalism’s relentless cheerleading for self-reliance and resilience.

The message is clear: gear up, enhance your human capital and chart your course like a start-up. And A.G.I.-ism echoes this tune. Bill Gates has trumpeted that A.I. can “help people everywhere improve their lives.”

The solutionist feast is only getting started: Whether it’s fighting the next pandemic, the loneliness epidemic or inflation, A.I. is already pitched as an all-purpose hammer for many real and imaginary nails. However, the decade lost to the solutionist folly reveals the limits of such technological fixes.

To be sure, Silicon Valley’s many apps — to monitor our spending, calories and workout regimes — are occasionally helpful. But they mostly ignore the underlying causes of poverty or obesity. And without tackling the causes, we remain stuck in the realm of adaptation, not transformation.

There’s a difference between nudging us to follow our walking routines — a solution that favors individual adaptation — and understanding why our towns have no public spaces to walk on — a prerequisite for a politics-friendly solution that favors collective and institutional transformation.

But A.G.I.-ism, like neoliberalism, sees public institutions as unimaginative and not particularly productive. They should just adapt to A.G.I., at least according to Mr. Altman, who recently said he was nervous about “the speed with which our institutions can adapt” — part of the reason, he added, “of why we want to start deploying these systems really early, while they’re really weak, so that people have as much time as possible to do this.”

But should institutions only adapt? Can’t they develop their own transformative agendas for improving humanity’s intelligence? Or do we use institutions only to mitigate the risks of Silicon Valley’s own technologies?

A.G.I. undermines civic virtues and amplifies trends we already dislike.

A common criticism of neoliberalism is that it has flattened our political life, rearranging it around efficiency. “The Problem of Social Cost,” a 1960 article that has become a classic of the neoliberal canon, preaches that a polluting factory and its victims should not bother bringing their disputes to court. Such fights are inefficient — who needs justice, anyway? — and stand in the way of market activity. Instead, the parties should privately bargain over compensation and get on with their business.

This fixation on efficiency is how we arrived at “solving” climate change by letting the worst offenders continue as before. The way to avoid the shackles of regulation is to devise a scheme — in this case, taxing carbon — that lets polluters buy credits to match the extra carbon they emit.

This culture of efficiency, in which markets measure the worth of things and substitute for justice, inevitably corrodes civic virtues.

And the problems this creates are visible everywhere. Academics fret that, under neoliberalism, research and teaching have become commodities. Doctors lament that hospitals prioritize more profitable services such as elective surgery over emergency care. Journalists hate that the worth of their articles is measured in eyeballs.

Now imagine unleashing A.G.I. on these esteemed institutions — the university, the hospital, the newspaper — with the noble mission of “fixing” them. Their implicit civic missions would remain invisible to A.G.I., for those missions are rarely quantified even in their annual reports — the sort of materials that go into training the models behind A.G.I.

After all, who likes to boast that his class on Renaissance history got only a handful of students? Or that her article on corruption in some faraway land got only a dozen page views? Inefficient and unprofitable, such outliers miraculously survive even in the current system. The rest of the institution quietly subsidizes them, prioritizing values other than profit-driven “efficiency.”

Will this still be the case in the A.G.I. utopia? Or will fixing our institutions through A.G.I. be like handing them over to ruthless consultants? They, too, offer data-bolstered “solutions” for maximizing efficiency. But these solutions often fail to grasp the messy interplay of values, missions and traditions at the heart of institutions — an interplay that is rarely visible if you only scratch their data surface.

In fact, the remarkable performance of ChatGPT-like services is, by design, a refusal to grasp reality at a deeper level, beyond the data’s surface. So whereas earlier A.I. systems relied on explicit rules and required someone like Newton to theorize gravity — to ask how and why apples fall — newer systems like A.G.I. simply learn to predict gravity’s effects by observing millions of apples fall to the ground.
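
The contrast the author draws can be made concrete with a toy example: a "Newton" model encodes the rule for falling bodies, while a data-driven learner only fits the pattern from observed falls, recovering the coefficient without ever representing the theory. The setup below is illustrative, not from the article.

```python
# Explicit theory vs. pattern-fitting, in miniature.
G = 9.81  # gravitational acceleration, m/s^2

def newton_distance(t: float) -> float:
    """Explicit theory: distance fallen after t seconds, d = (1/2) g t^2."""
    return 0.5 * G * t * t

# "Observed" falls (time, distance), generated from the same law --
# standing in for millions of watched apples.
observations = [(t / 10, newton_distance(t / 10)) for t in range(1, 50)]

# The statistical learner fits d ~ c * t^2 by least squares,
# recovering c close to (1/2) g without knowing why apples fall.
num = sum(d * t * t for t, d in observations)
den = sum(t ** 4 for t, _ in observations)
c = num / den

print(f"learned coefficient: {c:.3f}, theory says {0.5 * G:.3f}")
```

Both predict equally well on clean data; the difference, which is the author's point, is that only one of them contains an answer to "why."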

However, if all that A.G.I. sees are cash-strapped institutions fighting for survival, it may never infer their true ethos. Good luck discerning the meaning of the Hippocratic oath by observing hospitals that have been turned into profit centers.

Margaret Thatcher’s other famous neoliberal dictum was that “there is no such thing as society.”

The A.G.I. lobby unwittingly shares this grim view. For them, the kind of intelligence worth replicating is a function of what happens in individuals’ heads rather than in society at large.

But human intelligence is as much a product of policies and institutions as it is of genes and individual aptitudes. It’s easier to be smart on a fellowship in the Library of Congress than while working several jobs in a place without a bookstore or even decent Wi-Fi.

It doesn’t seem all that controversial to suggest that more scholarships and public libraries will do wonders for boosting human intelligence. But for the solutionist crowd in Silicon Valley, augmenting intelligence is primarily a technological problem — hence the excitement about A.G.I.

However, if A.G.I.-ism really is neoliberalism by other means, then we should be ready to see fewer — not more — intelligence-enabling institutions. After all, they are the remnants of that dreaded “society” that, for neoliberals, doesn’t really exist. A.G.I.’s grand project of amplifying intelligence may end up shrinking it.

Because of such solutionist bias, even seemingly innovative policy ideas around A.G.I. fail to excite. Take the recent proposal for a “Manhattan Project for A.I. Safety.” This is premised on the false idea that there’s no alternative to A.G.I.

But wouldn’t our quest for augmenting intelligence be far more effective if the government funded a Manhattan Project for culture and education and the institutions that nurture them instead?

Without such efforts, the vast cultural resources of our existing public institutions risk becoming mere training data sets for A.G.I. start-ups, reinforcing the falsehood that society doesn’t exist.

Depending on how (and if) the robot rebellion unfolds, A.G.I. may or may not prove an existential threat. But with its antisocial bent and its neoliberal biases, A.G.I.-ism already is: We don’t need to wait for the magic Roombas to question its tenets.

Evgeny Morozov, the author of “To Save Everything, Click Here: The Folly of Technological Solutionism,” is the founder and publisher of The Syllabus and the host of the podcast “The Santiago Boys.”

https://www.nytimes.com/2023/06/30/opin ... 778d3e6de3
kmaherali
Posts: 25714
Joined: Thu Mar 27, 2003 3:01 pm

There’s One Hard Question My Fellow Doctors and I Will Need to Answer Soon

Post by kmaherali »

When faced with a particularly tough question on rounds during my intern year, I would run straight to the bathroom. There, I would flip through the medical reference book I carried in my pocket, find the answer and return to the group, ready to respond.

At the time, I believed that my job was to memorize, to know the most arcane of medical eponyms by heart. Surely an excellent clinician would not need to consult a book or a computer to diagnose a patient. Or so I thought then.

Not even two decades later, we find ourselves at the dawn of what many believe to be a new era in medicine, one in which artificial intelligence promises to write our notes, to communicate with patients, to offer diagnoses. The potential is dazzling. But as these systems improve and are integrated into our practice in the coming years, we will face complicated questions: Where does specialized expertise live? If the thought process to arrive at a diagnosis can be done by a computer “co-pilot,” how does that change the practice of medicine, for doctors and for patients?

Though medicine is a field where breakthrough innovation saves lives, doctors are — ironically — relatively slow to adopt new technology. We still use the fax machine to send and receive information from other hospitals. When the electronic medical record warns me that my patient’s combination of vital signs and lab abnormalities could point to an infection, I find the input to be intrusive rather than helpful. A part of this hesitation is the need for any technology to be tested before it can be trusted. But there is also the romanticized notion of the diagnostician whose mind contains more than any textbook.

Still, the idea of a computer diagnostician has long been compelling. Doctors have tried to make machines that can “think” like a doctor and diagnose patients for decades, like a Dr. House-style program that can take in a set of disparate symptoms and suggest a unifying diagnosis. But early models were time-consuming to employ and ultimately not particularly useful in practice. They were limited in their utility until advances in natural language processing made generative A.I. — in which a computer can actually create new content in the style of a human — a reality. This is not the same as looking up a set of symptoms on Google; instead, these programs have the ability to synthesize data and “think” much like an expert.

To date, we have not integrated generative A.I. into our work in the intensive care unit. But it seems clear that we inevitably will. One of the easiest ways to imagine using A.I. is when it comes to work that requires pattern recognition, such as reading X-rays. Even the best doctor may be less adept than a machine when it comes to recognizing complex patterns without bias. There is also a good deal of excitement about the possibility for A.I. programs to write our daily patient notes for us as a sort of electronic scribe, saving considerable time. As Dr. Eric Topol, a cardiologist who has written about the promise of A.I. in medicine, says, this technology could foster the relationship between patients and doctors. “We’ve got a path to restore the humanity in medicine,” he told me.

Beyond saving us time, the intelligence in A.I. — if used well — could make us better at our jobs. Dr. Francisco Lopez-Jimenez, the co-director of A.I. in cardiology at the Mayo Clinic, has been studying the use of A.I. to read electrocardiograms, or ECGs, which are a simple recording of the heart’s electrical activity. An expert cardiologist can glean all sorts of information from an ECG, but a computer can glean more, including an assessment of how well the heart is functioning — which could help determine who would benefit from further testing.

Even more remarkably, Dr. Lopez-Jimenez and his team found that when asked to predict age based on an ECG, the A.I. program would from time to time give an entirely incorrect response. At first, the researchers thought the machine simply wasn’t great at age prediction based on the ECG — until they realized that the machine was offering the “biological” rather than chronological age, explained Dr. Lopez-Jimenez. Based on the patterns of the ECG alone, the A.I. program knew more about a patient’s aging than a clinician ever could.

And this is just the start. Some studies are using A.I. to try to diagnose a patient’s condition based on voice alone. Researchers promote the possibility of A.I. to speed drug discovery. But as an intensive care unit doctor, I find that what is most compelling is the ability of generative A.I. programs to diagnose a patient. Imagine it: a pocket expert on rounds with the ability to plumb the depth of existing knowledge in seconds.

What proof do we need to use any of this? The bar is higher for diagnostic programs than it is for programs that write our notes. But the way we typically test advances in medicine — a rigorously designed randomized clinical trial that takes years — won’t work here. After all, by the time the trial was complete, the technology would have changed. Besides, the reality is that these technologies are going to find their way into our daily practice whether they are tested or not.

Dr. Adam Rodman, an internist at Beth Israel Deaconess Hospital in Boston and a historian, found that the majority of his medical students are already using ChatGPT, to help them on rounds or even to help predict test questions. Curious about how A.I. would perform on tough medical cases, Dr. Rodman gave the program the notoriously challenging New England Journal of Medicine weekly cases — and found that it offered the correct diagnosis in a list of possible diagnoses just over 60 percent of the time. This performance is most likely better than any individual could accomplish.
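
The 60-percent figure above is, in effect, a "top-k" accuracy: a case counts as solved if the true diagnosis appears anywhere in the model's ranked list of suggestions, not only in first place. A minimal sketch of that metric (the case data below is invented for illustration, not Dr. Rodman's):

```python
# Top-k accuracy: what fraction of cases have the true diagnosis
# somewhere in the model's top k suggestions?
def top_k_accuracy(cases: list[tuple[str, list[str]]], k: int) -> float:
    """cases: (true diagnosis, model's ranked list of suggestions)."""
    hits = sum(1 for truth, ranked in cases if truth in ranked[:k])
    return hits / len(cases)

cases = [
    ("sarcoidosis", ["tuberculosis", "sarcoidosis", "lymphoma"]),
    ("amyloidosis", ["heart failure", "hypertension", "aortic stenosis"]),
    ("lyme disease", ["lyme disease", "lupus", "viral arthritis"]),
]
print(top_k_accuracy(cases, k=3))  # counts hits anywhere in the list
print(top_k_accuracy(cases, k=1))  # stricter: first suggestion only
```

The gap between the two numbers is why "offered the correct diagnosis in a list" is a weaker, though still clinically useful, claim than "got the diagnosis right."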

How those abilities translate to the real world remains to be seen. But even as he prepares to embrace new technology, Dr. Rodman wonders if something will be lost. After all, the training of doctors has long followed a clear process — we see patients, we struggle with their care in a supervised environment and we do it over again until we finish our training. But with A.I., there is the real possibility that doctors in training could lean on these programs to do the hard work of generating a diagnosis, rather than learn to do it themselves. If you have never sorted through the mess of seemingly unrelated symptoms to arrive at a potential diagnosis, but instead relied on a computer, how do you learn the thought processes required for excellence as a doctor?

“In the very near future, we’re looking at a time where the new generation coming up are not going to be developing these skills in the same way we did,” Dr. Rodman said. Even when it comes to A.I. writing our notes for us, Dr. Rodman sees a trade-off. After all, notes are not simply drudgery; they also represent a time to take stock, to review the data and reflect on what comes next for our patients. If we offload that work, we surely gain time, but maybe we lose something too.

But there is a balance here. Maybe the diagnoses offered by A.I. will become an adjunct to our own thought processes, not replacing us but allowing us all the tools to become better. Particularly for those working in settings with limited specialists for consultation, A.I. could bring everyone up to the same standard. At the same time, patients will be using these technologies, asking questions and coming to us with potential answers. This democratizing of information is already happening and will only increase.

Perhaps being an expert doesn’t mean being a fount of information but synthesizing and communicating and using judgment to make hard decisions. A.I. can be part of that process, just one more tool that we use, but it will never replace a hand at the bedside, eye contact, understanding — what it is to be a doctor.

A few weeks ago, I downloaded the ChatGPT app. I’ve asked it all sorts of questions, from the medical to the personal. And when I am next working in the intensive care unit, when faced with a question on rounds, I just might open the app and see what A.I. has to say.

https://www.nytimes.com/2023/07/06/opin ... 778d3e6de3

Carrefour opens automated supermarket of future in Dubai

Post by kmaherali »


A man uses a QR code on a mobile phone to enter Carrefour new cashier-less grocery store in Mall of the Emirates in Dubai


If imitation is the highest form of flattery, Jeff Bezos must have beamed with pride upon learning that French retail giant Carrefour rolled out its vision for the future of the industry in Dubai on Monday, trumpeting its completely automated cashier-less store Carrefour City+, which bears a striking resemblance to the Amazon Go stores.

Like Amazon's breakthrough unmanned grocery stores that opened in 2018, the Carrefour mini-market looks like any ordinary convenience store, brimming with sodas and snacks, tucked between sprawling storefronts of this city-state.

But hidden among the familiar fare lies a sophisticated system that tracks shoppers’ movements, eliminating the checkout line and allowing people to grab the products they'll walk out with. Only those with the store's smartphone app may enter. Nearly a hundred small surveillance cameras blanket the ceiling. Countless sensors line the shelves. Five minutes after shoppers leave, their phones ping with receipts for whatever they put in their bags.
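The flow described above — entry gated by an app credential, shelf sensors attributing pick and put-back events to a shopper, and a receipt assembled after they leave — can be sketched as a toy event pipeline. This is a minimal hypothetical model; real systems like Amazon Go and Carrefour City+ fuse camera and sensor data with far more sophistication, and all names here are invented for illustration.

```python
# Toy model of a "just walk out" store: enter() is the QR scan at the gate,
# pick()/put_back() are shelf-sensor events tied to a shopper, and exit()
# assembles the receipt that is later pushed to the shopper's phone.

from collections import Counter

class Store:
    def __init__(self, prices):
        self.prices = prices      # product -> unit price
        self.baskets = {}         # shopper_id -> Counter of products

    def enter(self, shopper_id):
        self.baskets[shopper_id] = Counter()

    def pick(self, shopper_id, product):
        self.baskets[shopper_id][product] += 1

    def put_back(self, shopper_id, product):
        if self.baskets[shopper_id][product] > 0:
            self.baskets[shopper_id][product] -= 1

    def exit(self, shopper_id):
        # Receipt: product -> (quantity, line total), zero counts dropped
        basket = self.baskets.pop(shopper_id)
        return {p: (n, n * self.prices[p]) for p, n in basket.items() if n}

store = Store({"soda": 2.0, "chips": 1.5})
store.enter("shopper-1")
store.pick("shopper-1", "soda")
store.pick("shopper-1", "chips")
store.put_back("shopper-1", "chips")   # changed their mind on the shelf
print(store.exit("shopper-1"))         # → {'soda': (1, 2.0)}
```

The hard engineering problem in production is not this bookkeeping but the attribution step: deciding, from ceiling cameras and shelf sensors, which shopper a given pick or put-back belongs to.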

“This is how the future will look,” Hani Weiss, CEO of retail at Majid Al Futtaim, the franchise that operates Carrefour in the Middle East, told The Associated Press (AP). "We do believe in physical stores in the future. However, we believe the experience will change.”

The experimental shop, called Carrefour City+, is the latest addition to the burgeoning field of retail automation. Major retailers worldwide are combining machine learning software and artificial intelligence in a push to cut labor costs, do away with the irritation of long lines and gather critical data about shopping behavior.

“We use (the data) to provide a better experience in the future ... whereby customers don't have to think about the next products they want,” Weiss said. “All the insights are being utilized internally in order to provide a better shopping experience.”

Customers must give Carrefour permission to collect their information, Weiss said, which the company promises not to share. But the idea of a vast retail seller collecting reams of data about shoppers' habits already has raised privacy concerns in the United States, where Amazon now operates several such futuristic stores. It's less likely to become a public debate in the autocratic United Arab Emirates, home to one of the world's highest per capita concentrations of surveillance cameras.

With the pandemic forcing major retailers to reassess the future, many are increasingly investing in automation – a vision that threatens severe job losses across the industry. But Carrefour stressed that human workers, at least in the short-term, would still be needed to “support customers" and assist the machines.

“There is no future without humans,” Weiss said.

https://www.dailysabah.com/business/tec ... e-in-dubai

‘Human Beings Are Soon Going to Be Eclipsed’

Post by kmaherali »

Recently I stumbled across an essay by Douglas Hofstadter that made me happy. Hofstadter is an eminent cognitive scientist and the author of books like “Gödel, Escher, Bach” and “I Am a Strange Loop.” The essay that pleased me so much, called “The Shallowness of Google Translate,” was published in The Atlantic in January of 2018.

Back then, Hofstadter argued that A.I. translation tools might be really good at some pedestrian tasks, but they weren’t close to replicating the creative and subtle abilities of a human translator. “It’s all about ultrarapid processing of pieces of text, not about thinking or imagining or remembering or understanding. It doesn’t even know that words stand for things,” he wrote.

The article made me happy because here was a scientist I greatly admire arguing for a point of view I’ve been coming to myself. Over the past few months, I’ve become an A.I. limitationist. That is, I believe that while A.I. will be an amazing tool for, say, tutoring children all around the world, or summarizing meetings, it is no match for human intelligence. It doesn’t possess understanding, self-awareness, concepts, emotions, desires, a body or biology. It’s bad at causal thinking. It doesn’t possess the nonverbal, tacit knowledge that humans take for granted. It’s not sentient. It does many things way faster than us, but it lacks the depth of a human mind.

I take this to be good news. If A.I. is limited in these ways, then the A.I. revolution will turn out to be akin to the many other information revolutions that humans have produced. This technology will be used in a lot of great ways, and some terrible ways, but it won’t replace us, it won’t cause the massive social disruption the hypesters warn about, and it’s not going to wake up one day wanting to conquer the world.

Hofstadter’s 2018 essay suggested that he’s a limitationist too, and reinforced my sense that this view is right.

So I was startled this month to see the following headline in one of the A.I. newsletters I subscribe to: “Douglas Hofstadter Changes His Mind on Deep Learning & A.I. Risk.” I followed the link to a podcast and heard Hofstadter say: “It’s a very traumatic experience when some of your most core beliefs about the world start collapsing. And especially when you think that human beings are soon going to be eclipsed.”

Apparently, in the five years since 2018, ChatGPT and its peers have radically altered Hofstadter’s thinking. He continues: It “just renders humanity a very small phenomenon compared to something else that is far more intelligent and will become incomprehensible to us, as incomprehensible to us as we are to cockroaches.”

I called Hofstadter to ask him what was going on. He shared his genuine alarm about humanity’s future. He said that ChatGPT was “jumping through hoops I would never have imagined it could. It’s just scaring the daylights out of me.” He added: “Almost every moment of every day, I’m jittery. I find myself lucky if I can be distracted by something — reading or writing or drawing or talking with friends. But it’s very hard for me to find any peace.”

Hofstadter has long argued that intelligence is the ability to look at a complex situation and find its essence. “Putting your finger on the essence of a situation means ignoring vast amounts about the situation and summarizing the essence in a terse way,” he said.

Humans mostly do this through analogy. If you tell me that you didn’t read my column, and I tell you I don’t care because I didn’t want you to read it anyway, you’re going to think, “That guy is just bloated with sour grapes.” You have this category in your head, “sour grapes.” You’re comparing my behavior with all the other behaviors you’ve witnessed. I match the sour grapes category. You’ve derived an essence to explain my emotional state.

Two years ago, Hofstadter says, A.I. could not reliably perform this kind of thinking. But now it is performing this kind of thinking all the time. And if it can perform these tasks in ways that make sense, Hofstadter says, then how can we say it lacks understanding, or that it’s not thinking?

And if A.I. can do all this kind of thinking, Hofstadter concludes, then it is developing consciousness. He has long argued that consciousness comes in degrees and that if there’s thinking, there’s consciousness. A bee has one level of consciousness, a dog a higher level, an infant a higher level, and an adult a higher level still. “We’re approaching the stage when we’re going to have a hard time saying that this machine is totally unconscious. We’re going to have to grant it some degree of consciousness, some degree of aliveness,” he says.

Normally, when tech executives tell me A.I. will soon achieve general, human-level intelligence, I silently think to myself: “This person may know tech, but he doesn’t really know human intelligence. He doesn’t understand how complex, vast and deep the human mind really is.”

But Hofstadter does understand the human mind — as well as anybody. He’s a humanist down to his bones, with a reverence for the mystery of human consciousness, who has written movingly about love and the deep interpenetration of souls. So his words carry weight. They shook me.

But so far he has not fully converted me. I still see these things as inanimate tools. On our call I tried to briefly counter Hofstadter by arguing that the bots are not really thinking; they’re just piggybacking on human thought. Starting as babies, we humans begin to build models of the world, and those models are informed by hard experiences and joyful experiences, emotional loss and delight, moral triumphs and moral failures — the mess of human life. A lot of the ensuing wisdom is stored deep in the unconscious recesses of our minds, but some of it is turned into language.

A.I. is capable of synthesizing these linguistic expressions, which humans have put on the internet and, thus, into its training base. But, I’d still argue, the machine is not having anything like a human learning experience. It’s playing on the surface with language, but the emotion-drenched process of learning from actual experience and the hard-earned accumulation of what we call wisdom are absent.

In a piece for The New Yorker, the computer scientist Jaron Lanier argued that A.I. is best thought of as “an innovative form of social collaboration.” It mashes up the linguistic expressions of human minds in ways that are structured enough to be useful, but it is not, Lanier argues, “the invention of a new mind.”

I think I still believe this limitationist view. But I confess I believe it a lot less fervently than I did last week. Hofstadter is essentially asking, If A.I. cogently solves intellectual problems, then who are you to say it’s not thinking? Maybe it’s more than just a mash-up of human expressions. Maybe it’s synthesizing human thought in ways that are genuinely creative, that are genuinely producing new categories and new thoughts. Perhaps the kind of thinking done by a disembodied machine that mostly encounters the world through language is radically different from the kind of thinking done by an embodied human mind, contained in a person who moves about in the actual world, but it is an intelligence of some kind, operating in some ways vastly faster and superior to our own. Besides, Hofstadter points out, these artificial brains are not constrained by the factors that limit human brains — like having to fit inside a skull. And, he emphasizes, they are improving at an astounding rate, while human intelligence isn’t.

It’s hard to dismiss that argument.

I don’t know about you, but this is what life has been like for me since ChatGPT 3 was released. I find myself surrounded by radical uncertainty — uncertainty not only about where humanity is going but about what being human is. As soon as I begin to think I’m beginning to understand what’s happening, something surprising happens — the machines perform a new task, an authority figure changes his or her mind.

Beset by unknowns, I get defensive and assertive. I find myself clinging to the deepest core of my being — the vast, mostly hidden realm of the mind from which emotions emerge, from which inspiration flows, from which our desires pulse — the subjective part of the human spirit that makes each of us ineluctably who we are. I want to build a wall around this sacred region and say: “This is the essence of being human. It is never going to be replicated by machine.”

But then some technologist whispers: “Nope, it’s just neural nets all the way down. There’s nothing special in there. There’s nothing about you that can’t be surpassed.”

Some of the technologists seem oddly sanguine as they talk this way. At least Hofstadter is enough of a humanist to be horrified.

https://www.nytimes.com/2023/07/13/opin ... 778d3e6de3

Pressured by Biden, A.I. Companies Agree to Guardrails on New Tools

Post by kmaherali »

Amazon, Google and Meta are among the companies that announced the guidelines as they race to outdo each other with versions of artificial intelligence.

Seven leading A.I. companies in the United States have agreed to voluntary safeguards on the technology’s development, the White House announced on Friday, pledging to manage the risks of the new tools even as they compete over the potential of artificial intelligence.

The seven companies — Amazon, Anthropic, Google, Inflection, Meta, Microsoft and OpenAI — formally made their commitment to new standards for safety, security and trust at a meeting with President Biden at the White House on Friday afternoon.

“We must be cleareyed and vigilant about the threats emerging from emerging technologies that can pose — don’t have to but can pose — to our democracy and our values,” Mr. Biden said in brief remarks from the Roosevelt Room at the White House.

“This is a serious responsibility; we have to get it right,” he said, flanked by the executives from the companies. “And there’s enormous, enormous potential upside as well.”

The announcement comes as the companies are racing to outdo each other with versions of A.I. that offer powerful new ways to create text, photos, music and video without human input. But the technological leaps have prompted fears about the spread of disinformation and dire warnings of a “risk of extinction” as artificial intelligence becomes more sophisticated and humanlike.

The voluntary safeguards are only an early, tentative step as Washington and governments across the world seek to put in place legal and regulatory frameworks for the development of artificial intelligence. The agreements include testing products for security risks and using watermarks to make sure consumers can spot A.I.-generated material.

But lawmakers have struggled to regulate social media and other technologies in ways that keep pace with their rapid evolution.

A forthcoming executive order is expected to involve new restrictions on advanced semiconductors and restrictions on the export of the large language models. Those are hard to secure — much of the software can fit, compressed, on a thumb drive.

An executive order could provoke more opposition from the industry than Friday’s voluntary commitments, which experts said were already reflected in the practices of the companies involved. The promises will not restrain the plans of the A.I. companies nor hinder the development of their technologies. And as voluntary commitments, they will not be enforced by government regulators.

The corporate logo for Meta, a blue infinity sign, on a billboard outside an office park. Meta, the parent company of Facebook, is among the companies that agreed to the voluntary standards. Credit: Jim Wilson/The New York Times

“We are pleased to make these voluntary commitments alongside others in the sector,” Nick Clegg, the president of global affairs at Meta, the parent company of Facebook, said in a statement. “They are an important first step in ensuring responsible guardrails are established for A.I. and they create a model for other governments to follow.”

As part of the safeguards, the companies agreed to security testing, in part by independent experts; research on bias and privacy concerns; information sharing about risks with governments and other organizations; development of tools to fight societal challenges like climate change; and transparency measures to identify A.I.-generated material.

In a statement announcing the agreements, the Biden administration said the companies must ensure that “innovation doesn’t come at the expense of Americans’ rights and safety.”

“Companies that are developing these emerging technologies have a responsibility to ensure their products are safe,” the administration said in a statement.

Brad Smith, the president of Microsoft and one of the executives attending the White House meeting, said his company endorsed the voluntary safeguards.

“By moving quickly, the White House’s commitments create a foundation to help ensure the promise of A.I. stays ahead of its risks,” Mr. Smith said.

Anna Makanju, the vice president of global affairs at OpenAI, described the announcement as “part of our ongoing collaboration with governments, civil society organizations and others around the world to advance AI governance.”

For the companies, the standards described Friday serve two purposes: as an effort to forestall, or shape, legislative and regulatory moves with self-policing, and a signal that they are dealing with the new technology thoughtfully and proactively.

But the rules on which they agreed are largely the lowest common denominator, and can be interpreted by every company differently. For example, the firms committed to strict cybersecurity measures around the data used to make the language models on which generative A.I. programs are developed. But there is no specificity about what that means, and the companies would have an interest in protecting their intellectual property anyway.

And even the most careful companies are vulnerable. Microsoft, one of the firms attending the White House event with Mr. Biden, scrambled last week to counter a Chinese government-organized hack on the private emails of American officials who were dealing with China. It now appears that China stole, or somehow obtained, a “private key” held by Microsoft that is the key to authenticating emails — one of the company’s most closely guarded pieces of code.

Given such risks, the agreement is unlikely to slow the efforts to pass legislation and impose regulation on the emerging technology.

Paul Barrett, the deputy director of the Stern Center for Business and Human Rights at New York University, said that more needed to be done to protect against the dangers that artificial intelligence posed to society.

“The voluntary commitments announced today are not enforceable, which is why it’s vital that Congress, together with the White House, promptly crafts legislation requiring transparency, privacy protections, and stepped-up research on the wide range of risks posed by generative A.I.,” Mr. Barrett said in a statement.

European regulators are poised to adopt A.I. laws later this year, which has prompted many of the companies to encourage U.S. regulations. Several lawmakers have introduced bills that include licensing for A.I. companies to release their technologies, the creation of a federal agency to oversee the industry, and data privacy requirements. But members of Congress are far from agreement on rules.

Lawmakers have been grappling with how to address the ascent of A.I. technology, with some focused on risks to consumers and others acutely concerned about falling behind adversaries, particularly China, in the race for dominance in the field.

This week, the House committee on competition with China sent bipartisan letters to U.S.-based venture capital firms, demanding a reckoning over investments they had made in Chinese A.I. and semiconductor companies. For months, a variety of House and Senate panels have been questioning the A.I. industry’s most influential entrepreneurs and critics to determine what sort of legislative guardrails and incentives Congress ought to be exploring.

Many of those witnesses, including Sam Altman of OpenAI, have implored lawmakers to regulate the A.I. industry, pointing out the potential for the new technology to cause undue harm. But that regulation has been slow to get underway in Congress, where many lawmakers still struggle to grasp what exactly A.I. technology is.

In an attempt to improve lawmakers’ understanding, Senator Chuck Schumer, Democrat of New York and the majority leader, began a series of sessions this summer to hear from government officials and experts about the merits and dangers of artificial intelligence across a number of fields.

Karoun Demirjian contributed reporting from Washington.

https://www.nytimes.com/2023/07/21/us/p ... 778d3e6de3

Biden Orders Ban on New Investments in China’s Sensitive High-Tech Industries

Post by kmaherali »

The new limits, aimed at preventing American help to Beijing as it modernizes its military, escalate a conflict between the world’s two largest economies.
An order by President Biden will prohibit venture capital and private equity firms from pumping money into Chinese efforts to develop semiconductors and other microelectronics. Credit: Kenny Holston/The New York Times

President Biden escalated his confrontation with China on Wednesday by signing an executive order banning new American investment in key technology industries that could be used to enhance Beijing’s military capabilities, the latest in a series of moves putting more distance between the world’s two largest economies.

The order will prohibit venture capital and private equity firms from pumping more money into Chinese efforts to develop semiconductors and other microelectronics, quantum computers and certain artificial intelligence applications. Administration officials stressed that the move was tailored to guard national security, but China is likely to see it as part of a wider campaign to contain its rise.

“The Biden administration is committed to keeping America safe and defending America’s national security through appropriately protecting technologies that are critical to the next generation of military innovation,” the Treasury Department said in a statement. The statement emphasized that the executive order was a “narrowly targeted action” complementing existing export controls and that the administration maintained its “longstanding commitment to open investment.”

Narrow or not, the new order comes at perhaps the most fraught moment in the U.S.-China relationship since President Richard M. Nixon and Secretary of State Henry A. Kissinger opened a dialogue with Beijing in the early 1970s. A series of expanding export controls on key technologies to China has already triggered retaliation from Beijing, which recently announced the cutoff of metals like gallium that are critical for the Pentagon’s own supply chain.

Mr. Biden has stressed that he wants to stabilize relations with China following a Cold War-style standoff over a spy balloon shot down after crossing through American airspace and the discovery of a broad Chinese effort to put malware into power grids and communications systems. He has sent Secretary of State Antony J. Blinken, Treasury Secretary Janet L. Yellen and other officials to renew talks with Chinese officials in recent months. Gina Raimondo, the commerce secretary, is expected to go to China in coming weeks.

Indeed, the president seemed intent on not antagonizing Beijing with Wednesday’s order, making no comment about his action and leaving it to be announced through written material and background briefings by aides who declined to be identified.

Still, China declared that it was “very disappointed” by the order, which it said was designed to “politicize and weaponize trade,” and it hinted at retaliation.

“The latest investment restrictions will seriously undermine the interests of Chinese and American companies and investors, hinder the normal business cooperation between the two countries and lower the confidence of the international community in the U.S. business environment,” Liu Pengyu, a spokesman for the Chinese embassy, said in a statement.

Administration officials said the president’s order is part of their effort to “de-risk” the relationship with China but not to “decouple” from it. Wednesday’s announcement, though, takes that effort to a new level. While export bans and concerns about Chinese investment in the United States have a long history, the United States has never before attempted such limits on the flow of investment into China.

In fact, for the past few decades, the United States has encouraged American investors to deepen their ties in the Chinese economy, viewing that as a way to expand the web of interdependencies between the two countries that would gradually integrate Beijing into the Western economy and force it to play by Western rules.

Administration officials cast the effort as one motivated entirely by national security concerns, not an attempt to gain economic advantage. But the order itself describes how difficult it is to separate the two, referring to China’s moves to “eliminate barriers between civilian and commercial sectors and military and defense industrial sectors.” It describes China’s focus on “acquiring and diverting the world’s cutting-edge technologies, for the purpose of achieving military dominance.”

(The text of Mr. Biden’s order refers only to “countries of concern,” though an annex limits those to “the People’s Republic of China” and its two special administrative areas, Hong Kong and Macau.)

Mr. Biden and his aides discussed joint efforts to limit high-tech investment with their counterparts at the recent Group of 7 summit meeting in Hiroshima, Japan. Several allies, including Britain and the European Union, have publicly indicated that they may follow suit. The outreach to other powers underscores that a U.S. ban may not be that effective by itself and would work only in conjunction with other major nations, including Japan and South Korea.

The executive order, which also requires firms to notify the government of certain investments, coincides with a bipartisan effort in Congress to impose similar limits. An amendment along those lines by Senators Bob Casey, Democrat of Pennsylvania, and John Cornyn, Republican of Texas, was added to the Senate version of the annual defense authorization bill.

Several Republicans criticized the president’s order as too little, too late and “riddled with loopholes,” as Senator Marco Rubio, Republican of Florida and vice chairman of the Senate Intelligence Committee, put it.

“It is long overdue, but the Biden administration finally recognized there is a serious problem with U.S. dollars funding China’s rise at our expense,” Mr. Rubio said. “However, this narrowly tailored proposal is almost laughable.”

Representative Michael McCaul, Republican of Texas and chairman of the House Foreign Relations Committee, said the new order should go after existing investments as well as sectors like biotechnology and energy.

“We need to stop the flow of American dollars and know-how supporting” China’s military and surveillance apparatus “rather than solely pursuing half measures that are taking too long to develop and go into effect,” Mr. McCaul said.

The United States already prohibits or restricts the export of certain technologies and products to China. The new order effectively means that American money, expertise and prestige cannot be used to help China to develop its own versions of what it cannot buy from American companies.

It was unclear how much money would be affected. American investors have already pulled back dramatically over the past two years. Venture capital investment in China has plummeted from a high of $43.8 billion in the last quarter of 2021 to $10.5 billion in the second quarter of this year, according to PitchBook, which tracks such trends. But the latest order could have a chilling effect on investment beyond the specific industries at stake.

In a capital where the goal of opposing China is one of the few areas of bipartisan agreement, the only sounds of caution in Washington came from the business community. While trade groups praised the administration for consulting them, there was concern that the downward spiral in relations could speed a broader break between the world’s two largest economies.

“We hope the final rules allow U.S. chip firms to compete on a level playing field and access key global markets, including China, to promote the long-term strength of the U.S. semiconductor industry and our ability to out-innovate global competitors,” the Semiconductor Industry Association said in a statement.

Gabriel Wildau, a managing director at the consulting firm Teneo who focuses on political risk in China, said the direct effect of the executive order would be modest, given its limited scope, but that disclosure requirements embedded in the order could have a chilling effect.

“Politicians increasingly regard corporate investments in China as a form of collusion with a foreign enemy, even when there is no allegation of illegality,” he said.

The Treasury Department, which has already consulted with American executives about the forthcoming order, will begin formally taking comments before drafting rules to be put in place next year. But American firms may alter their investment strategies even before the rules take effect, knowing that they are coming.

A screen displaying an image of China’s leader, Xi Jinping, in a room where displays are illuminated with red lighting at the military museum in Beijing. A series of expanding export controls on key technologies to China has already triggered retaliation from Beijing. Credit: Florence Lo/Reuters

China’s own investment restrictions are broader than the new American rules — they apply to all outbound investments, not just those in the United States. And they reflect a technology policy that in some ways is the opposite of the new American restrictions.

China discouraged or halted most low-tech outbound investments, like purchases of real estate or even European soccer clubs. But China allowed and even encouraged further acquisitions of businesses with technologies that could offer geopolitical advantages, including investments in overseas businesses involved in aircraft production, robotics, artificial intelligence and heavy manufacturing.

The latest move from Washington comes at a rare moment of vulnerability for the Chinese economy. Consumer prices in China, after barely rising for the previous several months, fell in July for the first time in more than two years, the country’s National Bureau of Statistics announced on Wednesday.

While Chinese cities and some businesses have declared 2023 a “Year of Investing in China” in hopes of a post-Covid revival of their local economies, President Xi Jinping has created an environment that has made many American venture capital firms and other investors more cautious.

Western companies that assess investment risk, like the Mintz Group, have been investigated and in some cases their offices have been raided. A Japanese executive was accused of espionage, and a new anti-espionage law has raised fears that ordinary business activities would be viewed by China as spying.

The Biden administration’s previous moves to restrain sensitive economic relationships have taken a toll. China’s telecommunications champion, Huawei, has been almost completely blocked from the U.S. market, and American allies, starting with Australia, are ripping Huawei equipment out of their networks. China Telecom was banned by the Federal Communications Commission, which said it “is subject to exploitation, influence and control by the Chinese government.”

At the same time, the United States — with the somewhat reluctant help of the Dutch government, Japan and South Korea — has gone to extraordinary lengths to prevent China from building up its own domestic capability to manufacture the most high-end microelectronics by itself.

Washington has banned the export of the multimillion-dollar lithography equipment used to produce chips in hopes of limiting China’s progress while the United States tries to restore its own semiconductor industry. Taken together, these measures amount to an unprecedented effort to slow an adversary’s capabilities while accelerating America’s own investment.

Keith Bradsher, Ana Swanson and Sarah Kessler contributed reporting.

https://www.nytimes.com/2023/08/09/us/p ... tment.html
kmaherali
Posts: 25714
Joined: Thu Mar 27, 2003 3:01 pm

Inexpensive Add-on Spawns a New Era of Machine Guns

Post by kmaherali »

Popular devices known as “switches” are turning ordinary pistols into fully automatic weapons, making them deadlier and a growing threat to bystanders.

Caison Robinson, 14, had just met up with a younger neighbor on their quiet street after finishing his chores when a gunman in a white car rolled up and fired a torrent of bullets in an instant.

“Mom, I’ve been shot!” he recalled crying, as his mother bolted barefoot out of their house in northwest Las Vegas. “I didn’t think I was going to make it, for how much blood was under me,” Caison said.

The Las Vegas police say the shooting in May was carried out with a pistol rigged with a small and illegal device known as a switch. Switches can transform semiautomatic handguns, which typically require a trigger pull for each shot, into fully automatic machine guns that fire dozens of bullets with one tug.

By the time the assailant in Las Vegas sped away, Caison, a soft-spoken teenager who loves video games, lay on the pavement with five gunshot wounds. His friend, a 12-year-old girl, was struck once in the leg.

These makeshift machine guns — able to inflict indiscriminate carnage in seconds — are helping fuel the national epidemic of gun violence, making shootings increasingly lethal, creating added risks for bystanders and leaving survivors more grievously wounded, according to law enforcement authorities and medical workers.

The growing use of switches, which are also known as auto sears, is evident in real-time audio tracking of gunshots around the country, data shows. Audio sensors monitored by a public safety technology company, Sound Thinking, recorded 75,544 rounds of suspected automatic gunfire in 2022 in portions of 127 cities covered by its microphones, according to data compiled at the request of The New York Times. That was a 49 percent increase from the year before.

Image: Takeila Peebles, Caison Robinson’s mother, was inside her home when she heard gunshots and saw a car fleeing. Credit: Bridget Bennett for The New York Times

Image: The neighborhood where Caison was shot in Las Vegas. Credit: Bridget Bennett for The New York Times

“This is almost like the gun version of the fentanyl crisis,” Mayor Quinton Lucas of Kansas City, Mo., said in an interview.

Mr. Lucas, a Democrat, said he believes that the rising popularity of switches, especially among young people, is a major reason fewer gun violence victims are surviving in his city.

Homicides in Kansas City are approaching record highs this year, even as the number of nonfatal shootings in the city has decreased.

Switches come in various forms, but most are small Lego-like plastic blocks, about an inch square, that can be easily manufactured on a 3-D printer and go for around $200.

Law enforcement officials say the devices are turning up with greater frequency at crime scenes, often wielded by teens who have come to see them as a status symbol that provides a competitive advantage. The proliferation of switches also has coincided with broader accessibility of so-called ghost guns, untraceable firearms that can be made with components purchased online or made with 3-D printers.

“The gang wars and street fighting that used to be with knives, and then pistols, is now to a great extent being waged with automatic weapons,” said Andrew M. Luger, the U.S. attorney for Minnesota.

Switches have become a major priority for federal law enforcement officials. But investigators say they face formidable obstacles, including the sheer number in circulation and the ease with which they can be produced and installed at home, using readily available instruction videos on the internet. Many are sold and owned by people younger than 18, who generally face more lenient treatment in the courts.

Social media platforms like YouTube ban content that shows people how to make illegal weapons. However, such content is protected under the First Amendment and remains widely available online.

Federal law enforcement officials have contacted Glock, the company that produces a weapon that has come to define an entire class of easily available 9 millimeter handguns, in search of ways to modify the weapon to make it harder to attach switches. Carlos Guevara, a vice president at Glock, said the company has collaborated with law enforcement officials to target illegal sellers and users of switches but has determined the design of the pistol cannot be altered in that way.

In 2021, a man with a gun modified with a switch fired at two police officers in Houston, killing one and injuring the other. One of the gunmen in a 2022 gang shootout in Sacramento that left six dead and injured 12 people carried a gun fitted with a switch, according to the police. In recent months, shootings using modified weapons have been caught on camera in Milwaukee, prompting the city’s mayor to compare the scene to a war zone.

Dr. James Miner, the chair of emergency medicine at Hennepin Healthcare in Minneapolis, which has the largest trauma center in Minnesota, said he first heard about switches in 2020 when he was trying to make sense of why gunshot victims were arriving at the hospital with numerous wounds and why more people seemed to be reporting being shot by stray bullets.

Image: Dr. James Miner, the chair of emergency medicine at Hennepin Healthcare, which has the largest trauma center in Minnesota. Credit: Jenn Ackerman for The New York Times

Image: Physicians at Hennepin County Medical Center say they have seen an increase in the number of wounds shooting victims have as switches have become more common. Credit: Jenn Ackerman for The New York Times

“It’s more common now for someone to say: ‘I was walking down the street and I heard the sound and all of a sudden my leg hurt, my chest hurt,’” he said. “Rather than: ‘I was held up or I was involved in a drug deal gone wrong.’”

Since the 1930s, federal laws have tightly restricted ownership of machine guns outside of the military and police departments. In 1986, Congress banned the production of new machine guns for civilian use, making them even more uncommon in the years that followed.

Devices to turn firearms fully automatic have existed for years, but they had not been a major concern for the authorities until recently.

In 2019, federal agents began seizing a significant number of switches imported from China, said Thomas Chittum, a former associate deputy director at the Bureau of Alcohol, Tobacco, Firearms and Explosives who now oversees analytics and forensic services at Sound Thinking.

Soon, the authorities began seeing a rise in switches — which in 2019 sold for as little as $19 — in several major American cities. Between 2017 and 2021, the A.T.F. recovered 5,454 machine gun conversion parts, a 570 percent increase from the preceding five years.

Steven M. Dettelbach, director of the A.T.F., said that trend ominously echoed the days of “Al Capone and the Tommy gun,” when criminals often had more firepower than law enforcement.

In an interview, he recalled having asked one of his advisers to bring an inexpensive 3-D printer to his office last year to show how a switch was made. The speed, ease and cheap cost, he said, were chilling.

He said they handed one to him “after a half-hour, 40 minutes.”

The Justice Department has stepped up prosecutions of sellers and suppliers over the past few years. Under the Gun Control Act of 1968, it is a crime to manufacture a machine gun, a violation that carries a maximum of 10 years in prison. Last week, prosecutors in Chicago charged a 20-year-old man with selling 25 switches and a 3-D printer to an undercover agent. In November, federal prosecutors in Texas charged a supplier who, they assert, had sold thousands of switches — shipping some inside of children’s toys.

Image: Pete Vukovich, an A.T.F. special agent, demonstrated how a switch is used on a handgun at the South Metro Public Safety Training Facility in Edina, Minn. Credit: Jenn Ackerman for The New York Times

Image: A shooting target after a demonstration of a switch at a training facility in Minnesota. Credit: Jenn Ackerman for The New York Times

Switches are fast becoming embedded in youth culture, and have been the subject of rap songs and memes on social media. One of four teenagers accused in the killing of an off-duty Chicago police officer this year posted on the internet a song called “Switches,” rapping “shoot the switches, they so fast” as he showed an arsenal of weapons.

Caison Robinson said he knew about switches before he was nearly killed by one. Teenagers he knew began bragging about having acquired the converted guns, often from older siblings, he said. They called the switches “buttons,” he said, which came in several colors.

“It’s become a thing you get to be cool,” said Caison, who said in an interview that he tried to steer clear of armed cliques of teenagers in his Las Vegas neighborhood. “It’s like a trend now.”

His mother, Takeila Peebles, moved to Las Vegas from Chicago seven years ago. She said she thought he would be safer in their new city.

The day of the shooting, Ms. Peebles, who works in medical billing and as a chef from her home, told Caison he could go outside to play only after he tidied up his bedroom, threw in a load of laundry and vacuumed the stairs. When Caison finished his chores, around 3:45 p.m., he headed out. Not long after, she heard gunfire and caught a glimpse of a white Kia Optima speeding away.

A soldier in uniform who happened to be nearby saw what had happened and tended to Caison’s wounds until a passing motorist rushed him to the hospital.

One bullet struck his colon, part of which had to be removed, medical officials say. Another pierced his liver. A third punched through a principal vein in his abdomen. The other bullets broke his femur and caused nerve damage to his forearm.

Investigators concluded that the shooting was tied to a gang dispute and that Caison was not the intended target. In late June, Hakeem Collette, 17, pleaded guilty to battery with a deadly weapon and was sentenced to 10 years in prison. He will be eligible for parole in two years.

Caison’s mother, Ms. Peebles, said the punishment is outrageous, considering the anguish her family has experienced. For three weeks after her son was shot, she had a recurring nightmare in which she watched helplessly as Caison bled to death on the pavement.

Lately, Ms. Peebles said she often tiptoes into his bedroom to make sure he is still there.

“I’m always at a loss for words when I look at him,” she said. “He’s not a touchy person, but I just always want to hug him.”

Image: Caison Robinson was left with a scar on his stomach after he was shot in Las Vegas. Credit: Bridget Bennett for The New York Times

Ernesto Londoño is a national correspondent based in the Midwest who keeps a close eye on drug use and counternarcotics policy in the United States. More about Ernesto Londoño

https://www.nytimes.com/2023/08/12/us/g ... 778d3e6de3
kmaherali
Posts: 25714
Joined: Thu Mar 27, 2003 3:01 pm

A Stroke Stole Her Ability to Speak at 30. A.I. Is Helping to Restore It Years Later.

Post by kmaherali »

The brain activity of a paralyzed woman is being translated into words spoken by an avatar. This milestone could help others who have lost speech.

Image: Ann Johnson, a teacher, volleyball coach and mother from Regina, Saskatchewan, had a paralyzing stroke in 2005 that took away her ability to speak. Credit: Sara Hylton for The New York Times

At Ann Johnson’s wedding reception 20 years ago, her gift for speech was vividly evident. In an ebullient 15-minute toast, she joked that she had run down the aisle, wondered if the ceremony program should have said “flutist” or “flautist” and acknowledged that she was “hogging the mic.”

Just two years later, Mrs. Johnson — then a 30-year-old teacher, volleyball coach and mother of an infant — had a cataclysmic stroke that paralyzed her and left her unable to talk.

On Wednesday, scientists reported a remarkable advance toward helping her, and other patients, speak again. In a milestone of neuroscience and artificial intelligence, implanted electrodes decoded Mrs. Johnson’s brain signals as she silently tried to say sentences. Technology converted her brain signals into written and vocalized language, and enabled an avatar on a computer screen to speak the words and display smiles, pursed lips and other expressions.

The research, published in the journal Nature, demonstrates the first time spoken words and facial expressions have been directly synthesized from brain signals, experts say. Mrs. Johnson chose the avatar, a face resembling hers, and researchers used her wedding toast to develop the avatar’s voice.

“We’re just trying to restore who people are,” said the team’s leader, Dr. Edward Chang, the chairman of neurological surgery at the University of California, San Francisco.

“It let me feel like I was a whole person again,” Mrs. Johnson, now 48, wrote to me.

The goal is to help people who cannot speak because of strokes or conditions like cerebral palsy and amyotrophic lateral sclerosis. To work, Mrs. Johnson’s implant must be connected by cable from her head to a computer, but her team and others are developing wireless versions. Eventually, researchers hope, people who have lost speech may converse in real time through computerized pictures of themselves that convey tone, inflection and emotions like joy and anger.

“What’s quite exciting is that just from the surface of the brain, the investigators were able to get out pretty good information about these different features of communication,” said Dr. Parag Patil, a neurosurgeon and biomedical engineer at the University of Michigan, who was asked by Nature to review the study before publication.

Video: Watch Ann Johnson speak through her digital avatar (1:11). Video by Metzger et al., Weill Institute for Neurosciences/University of California, San Francisco

Mrs. Johnson’s experience reflects the field’s fast-paced progress. Just two years ago, the same team published research in which a paralyzed man, who went by the nickname Pancho, used a simpler implant and algorithm to produce 50 basic words like “hello” and “hungry” that were displayed as text on a computer after he tried to say them.

Mrs. Johnson’s implant has nearly twice as many electrodes, increasing its ability to detect brain signals from speech-related sensory and motor processes linked to the mouth, lips, jaw, tongue and larynx. Researchers trained the sophisticated artificial intelligence to recognize not individual words, but phonemes, or sound units like “ow” and “ah” that can ultimately form any word.

“It’s like an alphabet of speech sounds,” David Moses, the project manager, said.

While Pancho’s system produced 15 to 18 words per minute, Mrs. Johnson’s rate was 78 words per minute, drawn from a much larger vocabulary. Typical conversational speech runs about 160 words per minute.
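The phoneme-based approach described above can be illustrated with a toy sketch. This is only an illustration, not the study’s actual model: the tiny pronunciation lexicon below is invented, and the real system uses neural decoding rather than exact matching. The idea is that decoded phoneme sequences are assembled into words from a dictionary of sound units.

```python
# Toy illustration of phoneme-to-word decoding (not the study's algorithm).
# The lexicon is a small, invented set of phoneme-sequence -> word mappings.
LEXICON = {
    ("HH", "EH", "L", "OW"): "hello",
    ("HH", "AW"): "how",
    ("AA", "R"): "are",
    ("Y", "UW"): "you",
}

def decode(phonemes):
    """Greedily match the longest known phoneme sequence at each position."""
    words, i = [], 0
    while i < len(phonemes):
        for j in range(len(phonemes), i, -1):  # try longest match first
            chunk = tuple(phonemes[i:j])
            if chunk in LEXICON:
                words.append(LEXICON[chunk])
                i = j
                break
        else:
            i += 1  # skip a phoneme with no match
    return " ".join(words)

print(decode(["HH", "EH", "L", "OW", "HH", "AW", "AA", "R", "Y", "UW"]))
# hello how are you
```

Because any word can be spelled out of a few dozen phonemes, a phoneme-level decoder is not limited to a fixed word list the way the earlier 50-word system was.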

When researchers began working with her, they didn’t expect to try the avatar or audio. But the promising results were “a huge green light to say, ‘OK, let’s try the harder stuff, let’s just go for it,’” Dr. Moses said.

They programmed an algorithm to decode brain activity into audio waveforms, producing vocalized speech, said Kaylo Littlejohn, a graduate student at the University of California, Berkeley, and one of the study’s lead authors, along with Dr. Moses, Sean Metzger, Alex Silva and Margaret Seaton.

“Speech has a lot of information that is not well preserved by just text, like intonation, pitch, expression,” Mr. Littlejohn said.

Image: William Johnson, Mrs. Johnson’s husband, feeds her pasta in their living room. About a decade after her stroke, she decided she wanted to try solid food instead of being entirely fed by a tube. Credit: Sara Hylton for The New York Times

Image: Cheryl Ruddell, Mrs. Johnson’s caregiver, cleaning the implant used for the avatar study. Credit: Sara Hylton for The New York Times

Image: Physical therapy with Ms. Ruddell, who lifts and bends Mrs. Johnson’s leg. Credit: Sara Hylton for The New York Times

Working with a company that produces facial animation, researchers programmed the avatar with data on muscle movements. Mrs. Johnson then tried to make facial expressions for happy, sad and surprised, each at high, medium and low intensity. She also tried to make various jaw, tongue and lip movements. Her decoded brain signals were conveyed on the avatar’s face.

Through the avatar, she said, “I think you are wonderful” and “What do you think of my artificial voice?”

“Hearing a voice similar to your own is emotional,” Mrs. Johnson told the researchers.


She and her husband, William, a postal worker, even engaged in conversation. She said through the avatar: “Do not make me laugh.” He asked how she was feeling about the Toronto Blue Jays’ chances. “Anything is possible,” she replied.

The field is moving so quickly that experts believe federally approved wireless versions might be available within the next decade. Different methods might be optimal for certain patients.

On Wednesday, Nature also published another team’s study involving electrodes implanted deeper in the brain, detecting activity of individual neurons, said Dr. Jaimie Henderson, a professor of neurosurgery at Stanford and the team’s leader, who was motivated by his childhood experience of watching his father lose speech after an accident. He said their method might be more precise but less stable because specific neurons’ firing patterns can shift.

Their system decoded sentences at 62 words per minute that the participant, Pat Bennett, 68, who has A.L.S., tried to say from a large vocabulary. That study didn’t include an avatar or sound decoding.

Both studies used predictive language models to help guess words in sentences. The systems don’t just match words but are “figuring out new language patterns” as they improve their recognition of participants’ neural activity, said Melanie Fried-Oken, an expert in speech-language assistive technology at Oregon Health & Science University, who consulted on the Stanford study.
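The role of a predictive language model can be sketched in miniature with a toy bigram counter. This invented example is far simpler than the neural models the studies actually used, but it shows the principle: given the previous word, the model proposes the statistically most likely next word, which helps resolve ambiguous decoder output.

```python
# Toy bigram language model (illustrative only, not the studies' models).
from collections import defaultdict

# A tiny, invented training corpus.
corpus = "maybe we lost them maybe we found them maybe we lost it".split()

# Count how often each word follows each other word.
bigrams = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def most_likely_next(word):
    """Return the most frequent follower of `word`, or None if unseen."""
    followers = bigrams[word]
    return max(followers, key=followers.get) if followers else None

print(most_likely_next("maybe"))  # we
print(most_likely_next("we"))    # lost
```

When the neural decoder is uncertain between two candidate words, this kind of context-based prediction can tip the balance toward the one that fits the sentence.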

Video: Mrs. Johnson’s avatar makes facial expressions (0:52). Video by Metzger et al., Weill Institute for Neurosciences/University of California, San Francisco

Neither approach was completely accurate. When using large vocabulary sets, they incorrectly decoded individual words about a quarter of the time.

For example, when Mrs. Johnson tried to say, “Maybe we lost them,” the system decoded, “Maybe we that name.” But in nearly half of her sentences, it correctly deciphered every word.

Researchers found that people on a crowdsourcing platform could correctly interpret the avatar’s facial expressions most of the time. Interpreting what the voice said was harder, so the team is developing a prediction algorithm to improve that. “Our speaking avatar is just at the starting point,” Dr. Chang said.

Experts emphasize that these systems aren’t reading people’s minds or thoughts. Rather, Dr. Patil said, they resemble baseball batters who “are not reading the mind of the pitcher but are kind of interpreting what they see the pitcher doing” to predict pitches.

Still, mind reading may ultimately be possible, raising ethical and privacy issues, Dr. Fried-Oken said.

Mrs. Johnson contacted Dr. Chang in 2021, the day after her husband showed her my article about Pancho, the paralyzed man the researchers had helped. Dr. Chang said he initially discouraged her because she lived in Saskatchewan, Canada, far from his lab in San Francisco, but “she was persistent.”

Mr. Johnson, 48, arranged to work part time. “Ann’s always supported me to do what I’ve wanted,” including leading his postal union local, he said. “So I just thought it was important to be able to support her in this.”

Image: The album from Ann and William Johnson’s wedding, which occurred just two years before her stroke. Credit: Sara Hylton for The New York Times

Image: Determination has always been part of Ann Johnson’s personality. When she and William Johnson first began dating, she gave him 18 months to propose. When he did, she had “already gone and picked out her engagement ring.” Credit: Sara Hylton for The New York Times

Image: The Johnsons’ car is equipped for Ann Johnson’s wheelchair. It takes them three days to travel to San Francisco from Regina for the research. Credit: Sara Hylton for The New York Times

She started participating last September. Traveling to California takes them three days in a van packed with equipment, including a lift to transfer her between wheelchair and bed. They rent an apartment there, where researchers conduct their experiments to make it easier for her. The Johnsons, who raise money online and in their community to pay for travel and rent for the multiyear study, spend weeks in California, returning home between research phases.

“If she could have done it for 10 hours a day, seven days a week, she would have,” Mr. Johnson said.

Determination has always been part of her nature. When they began dating, Mrs. Johnson gave Mr. Johnson 18 months to propose, which he said he did “on the exact day of the 18th month,” after she had “already gone and picked out her engagement ring.”

Mrs. Johnson communicated with me in emails composed with the more rudimentary assistive system she uses at home. She wears eyeglasses affixed with a reflective dot that she aims at letters and words on a computer screen.


It’s slow, allowing her to generate only 14 words per minute. But it’s faster than the only other way she can communicate at home: using a plastic letter board, a method Mr. Johnson described as “her just trying to show me which letter she’s trying to look at and then me trying to figure out what she’s trying to say.”

The inability to have free-flowing conversations frustrates them. When discussing detailed matters, Mr. Johnson sometimes says something and receives her response by email the next day.

“Ann’s always been a big talker in life, an outgoing, social individual who loves talking, and I don’t,” he said, but her stroke “made the roles reverse, and now I’m supposed to be the talker.”

Mrs. Johnson was teaching high school math, health and physical education, and coaching volleyball and basketball when she had her brainstem stroke while warming up to play volleyball. After a year in a hospital and a rehabilitation facility, she came home to her 10-year-old stepson and her 23-month-old daughter, who has now grown up without any memory of hearing her mother speak, Mr. Johnson said.

“Not being able to hug and kiss my children hurt so bad, but it was my reality,” Mrs. Johnson wrote. “The real nail in the coffin was being told I couldn’t have more children.”

Image: Ann Johnson’s at-home communication system involves using eyeglasses with a reflective dot that she aims at letters and words on the screen. Credit: Sara Hylton for The New York Times

Image: A letter board Ann Johnson sometimes uses to communicate. She looks at the letters and her husband guesses the words she’s trying to express. Credit: Sara Hylton for The New York Times

Image: Ms. Ruddell putting a cap over Ann Johnson’s head. When Mrs. Johnson first tried to make emotional expressions with the avatar, “I felt silly, but I like feeling like I have an expressive face again,” she said. Credit: Sara Hylton for The New York Times

For five years after the stroke, she was terrified. “I thought I would die at any moment,” she wrote, adding, “The part of my brain that wasn’t frozen knew I needed help, but how would I communicate?”

Gradually, her doggedness resurfaced. Initially, “my face muscles didn’t work at all,” she wrote, but after about five years, she could smile at will.

She was entirely tube-fed for about a decade, but decided she wanted to taste solid food. “If I die, so be it,” she told herself. “I started sucking on chocolate.” She took swallowing therapy and now eats finely chopped or soft foods. “My daughter and I love cupcakes,” she wrote.

When Mrs. Johnson learned that trauma counselors were needed after a fatal bus crash in Saskatchewan in 2018, she decided to take a university counseling course online.

“I had minimal computer skills and, being a math and science person, the thought of writing papers scared me,” she wrote in a class report. “At the same time, my daughter was in grade 9 and being diagnosed with a processing disability. I decided to push through my fears and show her that disabilities don’t need to stop us or slow us down.”

Helping trauma survivors remains her goal. “My shot at the moon was that I would become a counselor and use this technology to talk to my clients,” she told Dr. Chang’s team.

At first when she started making emotional expressions with the avatar, “I felt silly, but I like feeling like I have an expressive face again,” she wrote, adding that the exercises also enabled her to move the left side of her forehead for the first time.

She has gained something else, too. After the stroke, “it hurt so bad when I lost everything,” she wrote. “I told myself that I was never again going to put myself in line for that disappointment again.”

Now, “I feel like I have a job again,” she wrote.

Besides, the technology makes her imagine being in “Star Wars”: “I have kind of gotten used to having my mind blown.”

https://www.nytimes.com/2023/08/23/heal ... ience.html
swamidada
Posts: 1615
Joined: Sun Aug 02, 2020 8:59 pm

Re: TECHNOLOGY AND DEVELOPMENT

Post by swamidada »

Scientists solve genetic puzzle of the ‘Y’ chromosome
Reuters | Published August 24, 2023

Image: The study provides the first complete view of a Y chromosome’s code, says geneticist and co-author Karen Miga. Credit: Reuters

WASHINGTON: Scientists have taken an important step forward in understanding the human genome — our genetic blueprint — by fully deciphering the enigmatic Y chromosome present in males, an achievement that could help guide research on infertility in men.

Researchers on Wednesday unveiled the first complete sequence of the human Y chromosome, which is one of the two sex chromosomes — the X chromosome being the other — and is typically passed down from male parent to male offspring. It is the last of the 24 chromosomes — threadlike structures that carry genetic information from cell to cell — in the human genome to be sequenced.

People have a pair of sex chromosomes in each cell. Males possess one Y and one X chromosome while females have two X chromosomes, with some exceptions.

The Y chromosome’s genes help govern crucial reproductive functions including sperm production, formally called spermatogenesis, and are even involved in cancer risk and severity. But this chromosome had proven difficult to crack owing to its exceptionally complex structure.

Fuller understanding of the human genome will help guide research on infertility in men

“I would credit new sequencing technologies and computational methods for this,” said Arang Rhie, a staff scientist at the US National Human Genome Research Institute and lead author of a research paper detailing the achievement in the journal Nature.

“It finally provides the first complete view of a Y chromosome’s code, revealing more than 50 per cent of the chromosome’s length that was previously missing from our genome maps,” said University of California, Santa Cruz (UCSC) biomolecular engineering professor and study co-author Karen Miga, co-leader of the Telomere-to-Telomere consortium behind the research.

The complete X chromosome sequence was published in 2020. But until now, the Y chromosome part of the human genome had contained big gaps.

“This is especially important because the Y chromosome has been traditionally excluded from many studies of human diseases,” UCSC genomicist and study co-author Monika Cechova said.

“The Y chromosome is the smallest and the fastest-evolving chromosome in the human genome, and also the most repetitive, meaning that its DNA contains stretches of DNA repeated many times over,” Cechova added.
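The assembly difficulty that repetitive DNA creates can be illustrated with a small sketch. The sequence below is invented and purely illustrative: counting how often each short substring, or k-mer, recurs shows the ambiguity that repeats introduce when overlapping sequencing reads are stitched together, since a read from one copy of a repeat looks identical to a read from any other copy.

```python
# Toy illustration of why repetitive DNA is hard to assemble: count how
# often each overlapping substring of length k (a "k-mer") recurs.
from collections import Counter

def kmer_counts(seq, k):
    """Count every overlapping length-k substring of seq."""
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

# An invented sequence containing a heavily repeated motif, "TTAGG".
seq = "TTAGG" * 4 + "ACGTAC"
counts = kmer_counts(seq, 5)
print(counts["TTAGG"])  # 4
```

Longer-read sequencing technologies, of the kind the researchers credit for this result, reduce the ambiguity by spanning entire repeat blocks in a single read.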

The work revealed features of medically relevant regions of the Y chromosome including a stretch of DNA — the molecule that carries genetic information for an organism’s development and functioning — containing several genes involved in sperm production. The new fuller understanding of the Y chromosome’s genes offers promise for practical applications including in fertility-related research, according to the researchers.

“Many of these genes are important for fertility and reproduction, and especially spermatogenesis, so being able to catalog normal variation as well as the situations when, for example, azoospermia (an absence of sperm in semen) occurs, could be helpful for IVF (in vitro fertilization) clinics as well as further research into activity of these genes,” Cechova said.

In addition to identifying some additional Y chromosome genes, the researchers found that some DNA from the chromosome had been mistaken in previous studies as bacterial in nature.

Scientists continue to broaden the understanding of human genetics. A first accounting of the human genome was unveiled in 2003. The first complete human genome — albeit with the Y chromosome partial — was published last year. In May, researchers published a new version of the genome that improved on its predecessor by including a rich diversity of people to better reflect the global population of 8 billion.

Fully sequencing the Y chromosome adds to this.

“We now have a recipe on how to assemble the Y chromosome fully, which, while expensive at the moment, can translate into personalized genomics in the future,” Cechova said.

Published in Dawn, August 24th, 2023

https://www.dawn.com/news/1771854/scien ... chromosome