The Concept of Post-Fact Society


Post by kmaherali »

Misinformation Is Endangering India’s Election

The country’s political parties are spreading propaganda about their opponents to gain votes. It’s working.


NEW DELHI—In the days following a suicide bombing against Indian security forces in Kashmir this year, a message began circulating in WhatsApp groups across the country. It claimed that a leader of the Congress Party, the national opposition, had promised a large sum of money to the attacker’s family, and to free other “terrorists” and “stone pelters” from prison, if the state voted for Congress in upcoming parliamentary elections.

The message was posted to dozens of WhatsApp groups that appeared to promote Prime Minister Narendra Modi’s governing Bharatiya Janata Party, and seemed aimed at painting the BJP’s main national challenger as being soft on militancy in Kashmir, which remains contested between India and Pakistan, just as the two countries seemed to be on the brink of war.

The claim, however, was fake. No member of Congress, at either a national or a state level, had made any such statement. Yet delivered in the run-up to the election, and having spread with remarkable speed, that message offered a window into a worsening problem here.

India is facing information wars of an unprecedented nature and scale. Indians are bombarded with fake news and divisive propaganda on a near-constant basis from a wide range of sources, from television news to global platforms like Facebook and WhatsApp. But unlike in the United States, where the focus has been on foreign-backed misinformation campaigns shaping elections and public discourse, the fake news circulating here isn’t manufactured abroad.

Many of India’s misinformation campaigns are developed and run by political parties with nationwide cyberarmies; they target not only political opponents, but also religious minorities and dissenting individuals, with propaganda rooted in domestic divisions and prejudices. The consequences of such targeted misinformation are extreme, from death threats to actual murders—in the past year, more than two dozen people have been lynched by mobs spurred by nothing more than rumors sent over WhatsApp.

Elections beginning this month will stoke those tensions, and containing fake news will be one of India’s biggest challenges. It won’t be easy.

More....
https://www.theatlantic.com/internation ... MDU2NzM4S0

Post by kmaherali »

How to Fight India’s Fake-News Epidemic

Disinformation can be defeated by treating the crisis the way we treated infectious diseases in the past.


Excerpt:

Fake news is not a technological or scientific problem with a quick fix. It should be treated as a new kind of public health crisis in all its social and human complexity. The answer might lie in looking back at how we responded to the epidemics of infectious disease in the 19th and early 20th centuries, which had similar characteristics.

In response to infectious diseases, over a period of more than a century, nations created the public health infrastructure — a combination of public and private institutions that track outbreaks, fund research, develop medicines and provide health services. We need a similar response to tackle disinformation and fake news.

Epidemics taught us that citizen education is the first and most critical step for a solution. Without the widespread knowledge that washing hands with soap can prevent infections, all other interventions would have sunk under the sheer volume of patients. No number of tweaks to the Facebook algorithm, no size of fact-checking teams, no amount of government regulations can have the same impact as a citizen who critically examines the information being circulated.

Public education might seem a soft measure compared with regulation, but informing the people is the best investment to tackle the problem. In the long term, it will be effective because content distribution will be cheaper and the political and commercial incentives to spread lies will only grow.

More...

https://www.nytimes.com/2019/04/29/opin ... dline&te=1

Post by kmaherali »

Deepfakes Are Coming. We Can No Longer Believe What We See.

It will soon be as easy to produce convincing fake video as it is to lie. We need to be prepared.


On June 1, 2019, the Daily Beast published a story exposing the creator of a now infamous fake video that appeared to show House Speaker Nancy Pelosi drunkenly slurring her words. The video was created by taking a genuine clip, slowing it down, and then adjusting the pitch of her voice to disguise the manipulation.

Judging by social media comments, many people initially fell for the fake, believing that Ms. Pelosi really was drunk while speaking to the media. (If that seems an absurd thing to believe, remember Pizzagate; people are happy to believe absurd things about politicians they don’t like.)

The video was made by a private citizen named Shawn Brooks, who seems to have been a freelance political operative producing a wealth of pro-Trump web content. (Mr. Brooks denies creating the video, though according to the Daily Beast, Facebook confirmed he was the first to upload it.) Some commenters quickly suggested that the Daily Beast was wrong to expose Mr. Brooks. After all, they argued, he’s only one person, not a Russian secret agent or a powerful public relations firm; and it feels like “punching down” for a major news organization to turn the spotlight on one rogue amateur. Seth Mandel, an editor at the Washington Examiner, asked, “Isn’t this like the third Daily Beast doxxing for the hell of it?”

It’s a legitimate worry, but it misses an important point. There is good reason for journalists to expose the creators of fake web content, and it’s not just the glee of watching provocateurs squirm. We live in a time when knowing the origin of an internet video is just as important as knowing what it shows.

More....

https://www.nytimes.com/2019/06/10/opin ... y_20190610

Post by kmaherali »

We Must Prepare for the Next Pandemic

We’ll have to battle both the disease and the fake news.


When the next pandemic strikes, we’ll be fighting it on two fronts. The first is the one you immediately think about: understanding the disease, researching a cure and inoculating the population. The second is new, and one you might not have thought much about: fighting the deluge of rumors, misinformation and flat-out lies that will appear on the internet.

The second battle will be like the Russian disinformation campaigns during the 2016 presidential election, only with the addition of a deadly health crisis and possibly without a malicious government actor. But while the two problems — misinformation affecting democracy and misinformation affecting public health — will have similar solutions, the latter is much less political. If we work to solve the pandemic disinformation problem, any solutions are likely to also be applicable to the democracy one.

Pandemics are part of our future. They might be like the 1968 Hong Kong flu, which killed a million people, or the 1918 Spanish flu, which killed over 40 million. Yes, modern medicine makes pandemics less likely and less deadly. But global travel and trade, increased population density, decreased wildlife habitats, and increased animal farming to satisfy a growing and more affluent population have made them more likely. Experts agree that it’s not a matter of if — it’s only a matter of when.

When the next pandemic strikes, accurate information will be just as important as effective treatments. We saw this in 2014, when the Nigerian government managed to contain a subcontinent-wide Ebola epidemic to just 20 infections and eight fatalities. Part of that success was because of the ways officials communicated health information to all Nigerians, using government-sponsored videos, social media campaigns and international experts. Without that, the death toll in Lagos, a city of 21 million people, would have probably been greater than the 11,000 the rest of the continent experienced.

More...

https://www.nytimes.com/2019/06/17/opin ... y_20190618

Post by kmaherali »

How WhatsApp is used and misused in Africa

The continent’s most popular messaging platform


“When I open my phone, I am swamped by news,” says Matthew Stanley, a driver in Abuja, Nigeria’s capital. He scrolls through WhatsApp, a messaging service, bringing up a slick video forwarded into his church group. In a tone befitting a trailer for a horror film, the narrator falsely claims that Muhammadu Buhari, Nigeria’s Muslim president, is plotting to kill Christians. Mr Stanley squints at the tiny screen. “I think it’s fake news,” he says. “I need to check the source.”

If only everyone were so sceptical. WhatsApp, which has 1.5bn users globally, is especially influential in Africa. It is the most popular social platform in countries such as Nigeria, Ghana, Kenya and South Africa. In the West it is common for people to use multiple platforms such as Facebook and Twitter (see Graphic detail) but in African countries, where money is tighter and internet connections patchy, WhatsApp is an efficient one-stop-shop. The ability to leave audio notes makes it popular among illiterate people. But WhatsApp’s ubiquity also makes it a political tool.

That much is clear from Nigerian presidential and state elections in February and March. As recent research by Nic Cheeseman, Jamie Hitchen, Jonathan Fisher and Idayat Hassan indicates, Nigerians’ use of WhatsApp both reflects and exploits the country’s social structures.

For example, Nigerians belong to much larger WhatsApp groups than Westerners do. A survey by Mr Hitchen and Ms Hassan in Kano, a northern city, found that locals are typically in groups of at least 50 people. These may be made up of school acquaintances, work colleagues or fellow worshippers. The larger the group, the more quickly information can spread. And since these groups often comprise friends and community leaders, recipients are inclined to trust what they read.

More...

https://www.economist.com/middle-east-a ... -in-africa

Post by kmaherali »

How a misleading YouTube video is stoking fears about Shariah law before the federal election

A short, grainy YouTube video circulating on social media purports to show evidence of an imam claiming that if Prime Minister Justin Trudeau is re-elected, he will institute Shariah law, the legal code of Islam, based on the Qur'an.

But the video was taken out of context, according to the man featured in it, and it was created by Sandra Solomon, known for her anti-Islam views.

The video has about 50,000 views on YouTube, a middling amount, but it has been posted on at least three different Facebook groups that are critical of Trudeau. Altogether, the groups have more than 185,000 likes, and posts of the video were shared more than 7,000 times.

The three pages get high engagement in terms of reactions, comments and shares, and they are among the most popular groups spreading memes and disinformation online. These groups equal or often exceed many traditional media outlets for engagement on Facebook.

The video itself includes a short section from a speech about Islam delivered by Mufti Aasim Rashid in Kamloops, B.C., in October 2017. It also features a picture of Justin Trudeau praying at a mosque and ends on a clip of Trudeau championing diversity, which is then covered up by a photo illustration of a small child wearing a "Make Canada Great Again" hat.

More...

https://www.msn.com/en-ca/news/canada/h ... ailsignout

Post by kmaherali »

What is a deepfake?

Computers can generate convincing representations of events that never happened


Susan Sontag understood that photographs are unreliable narrators. “Despite the presumption of veracity that gives all photographs authority, interest, seductiveness,” she wrote, “the work that photographers do is no generic exception to the usually shady commerce between art and truth.” But what if even that presumption of veracity disappeared? Today, the events captured in realistic-looking or -sounding video and audio recordings need never have happened. They can instead be generated automatically, by powerful computers and machine-learning software. The catch-all term for these computational productions is “deepfakes”.

The term first appeared on Reddit, a message board, as the username for an account which was producing fake videos of female celebrities having sex. An entire community sprang up around the creation of these videos, writing software tools that let anyone automatically paste one person’s face onto the body of another. Reddit shut the community down, but the technology was out there. Soon it was being applied to political figures and actors. In one uncanny clip Jim Carrey’s face is melded with Jack Nicholson’s in a scene from “The Shining”.

Tools for editing media manually have existed for decades—think Photoshop. The power and peril of deepfakes is that they make fakery cheaper than ever before. Before deepfakes, a powerful computer and a good chunk of a university degree were needed to produce a realistic fake video of someone. Now some photos and an internet connection are all that is required.

More....

https://www.economist.com/the-economist ... a/291011/n

Post by kmaherali »

Jeffrey Epstein and When to Take Conspiracies Seriously

Sometimes conspiracy theories point toward something worth investigating. A few point toward the truth.


The challenge in thinking about a case like the suspicious suicide of Jeffrey Epstein, the supposed “billionaire” who spent his life acquiring sex slaves and serving as a procurer to the ruling class, can be summed up in two sentences. Most conspiracy theories are false. But often some of the things they’re trying to explain are real.

Conspiracy theories are usually false because the people who come up with them are outsiders to power, trying to impose narrative order on a world they don’t fully understand — which leads them to imagine implausible scenarios and impossible plots, to settle on ideologically convenient villains and assume the absolute worst about their motives, and to imagine an omnicompetence among the corrupt and conniving that doesn’t actually exist.

Or they are false because the people who come up with them are insiders trying to deflect blame for their own failings, by blaming a malign enemy within or an evil-genius rival for problems that their own blunders helped create.

Or they are false because the people pushing them are cynical manipulators and attention-seekers trying to build a following who don’t care a whit about the truth.

For all these reasons serious truth-seekers are predisposed to disbelieve conspiracy theories on principle, and journalists especially are predisposed to quote Richard Hofstadter on the “paranoid style” whenever they encounter one — an instinct only sharpened by the rise of Donald Trump, the cynical conspiracist par excellence.

But this dismissiveness can itself become an intellectual mistake, a way to sneer at speculation while ignoring an underlying reality that deserves attention or investigation. Sometimes that reality is a conspiracy in full, a secret effort to pursue a shared objective or conceal something important from the public. Sometimes it’s a kind of unconscious connivance, in which institutions and actors behave in seemingly concerted ways because of shared assumptions and self-interest. But in either case, an admirable desire to reject bad or wicked theories can lead to a blindness about something important that these theories are trying to explain.

More....

https://www.nytimes.com/2019/08/13/opin ... y_20190813

Post by kmaherali »

This Video May Not Be Real

What should we really be worried about when it comes to “deepfakes”? An expert in online manipulation explains.


In the video Op-Ed above, Claire Wardle responds to growing alarm around “deepfakes” — seemingly realistic videos generated by artificial intelligence. First seen on Reddit with pornographic videos doctored to feature the faces of female celebrities, deepfakes were made popular in 2018 by a fake public service announcement featuring former President Barack Obama. Words and faces can now be almost seamlessly superimposed. The result: We can no longer trust our eyes.

In June, the House Intelligence Committee convened a hearing on the threat deepfakes pose to national security. And platforms like Facebook, YouTube and Twitter are contemplating whether, and how, to address this new disinformation format. It’s a conversation gaining urgency in the lead-up to the 2020 election.

Yet deepfakes are no more scary than their predecessors, “shallowfakes,” which use far more accessible editing tools to slow down, speed up, omit or otherwise manipulate context. The real danger of fakes — deep or shallow — is that their very existence creates a world in which almost everything can be dismissed as false.

Video at:

https://www.nytimes.com/2019/08/14/opin ... y_20190815

Post by kmaherali »

Do We Really Understand ‘Fake News’?

We think we are sharing facts, but we are really expressing emotions in the outrage factory.


Given how much it’s talked about, tweeted about and worried over, you’d think we’d know a lot about fake news. And in some sense, we do. We know that false stories posing as legitimate journalism have been used to try to sway elections; we know they help spread conspiracy theories; they may even cause false memories. And yet we also know that the term “fake news” has become a trope, so widely used and abused that it no longer serves its original function.

Why is that? And why, given all our supposed knowledge of it, is fake news — the actual phenomenon — still effective? Reflection on our emotions, together with a little help from contemporary philosophy of language and neuroscience, suggests an answer to both questions.

We are often confused about the role that emotion plays in our lives. For one thing, we like to think, with Plato, that reason drives the chariot of our mind and keeps the unruly wild horses of emotion in line. But most people would probably admit that much of the time, Hume was closer to the truth when he said that reason is the slave of the passions. Moreover, we often confuse our feelings with reality itself: Something makes us feel bad, and so we say it is bad.

As a result, our everyday acts of communication can function as vehicles for emotion without our noticing it. This was a point highlighted by mid-20th century philosophers of language often called “expressivists.” Their point was that people sometimes think they are talking about facts when they are really expressing themselves emotionally. The expressivists applied this thought quite widely to all ethical communication about right or wrong, good or bad. But even if we don’t go that far, their insight says something about what is going on when we share or retweet news posts — fake or otherwise — online.

More...

https://www.nytimes.com/2019/09/23/opin ... d=45305309

Post by kmaherali »

Mawlana Hazar Imam Aga Khan IV: “technologies alone will not save us – the critical variable … will always lie in the disposition of human hearts and minds”
Posted by Nimira Dewji

“From the development of written language to the invention of printing, to the development of electronic and digital media – quantitative advances in communication technology have not necessarily produced qualitative progress in mutual understanding.

To be sure, each improvement in communications technology has triggered new waves of political optimism. But sadly, if information can be shared more easily as technology advances, so can misinformation and disinformation. If truth can spread more quickly and more widely, then so can error and falsehood.

Throughout history, the same tools – the printing press, the telegraph, the microphone, the television camera, the cell phone, the internet – that promised to bring us together, have also been used to drive us apart.”

Mawlana Hazar Imam
at the International New York Times Athens Democracy Forum, September 15, 2015
Speech

******

“We have more communication, but we also have more confrontation. Even as we exclaim about growing connectivity we seem to experience greater disconnection… technological advance does not necessarily mean human progress. Sometimes it can mean the reverse.”

Mawlana Hazar Imam
Samuel L. and Elizabeth Jodidi Lecture, Harvard University, November 12, 2015
Speech

*****

“In the final analysis, the key to human cooperation and concord has not depended on advances in the technologies of communication, but rather on how human beings go about using – or abusing – their technological tools.”

Mawlana Hazar Imam Aga Khan IV
Stephen Odgen Lecture at Brown University, Providence, USA, March 10, 2014
Speech

******

“It is ironic that a sense of intensified conflict comes at a time of unprecedented breakthroughs in communication technology. At the very time that we talk more and more about global convergence, we also seem to experience more and more social divergence. The lesson it seems to me is that technologies alone will not save us – the critical variable will always be and will always lie in the disposition of human hearts and minds.”

Mawlana Hazar Imam
North-South Prize Ceremony, Lisbon, Portugal, June 12, 2014
Speech

*****

Mawlana Hazar Imam addresses the North-South Prize Ceremony in the Senate Hall of the Portuguese Parliament as His Excellency Aníbal Cavaco Silva, President of the Republic of Portugal, and the President of the Assembly of the Republic, Maria Assunção Esteves, look on. Photo: AKDN/José Manuel Boavida Caria

“Technologies, after all, are merely instruments – they can be used for good or ill. How we use them will depend – in every age and in every culture – not on what sits on our desktops, but on what is in our heads – and in our hearts.”

Mawlana Hazar Imam
The LaFontaine-Baldwin Lecture, Toronto, Canada, October 15, 2010
Speech

https://nimirasblog.wordpress.com/2019/ ... rce=Direct

Post by kmaherali »

Women in public life are increasingly subject to sexual slander. Don’t believe it

As deepfake technology spreads, expect more bogus sex tapes of female politicians


Adulterer, pervert, traitor, murderer. In France in 1793, no woman was more relentlessly slandered than Marie Antoinette. Political pamphlets spread baseless rumours of her depravity. Some drawings showed her with multiple lovers, male and female. Others portrayed her as a harpy, a notoriously disagreeable mythical beast that was half bird-of-prey, half woman. Such mudslinging served a political purpose. The revolutionaries who had overthrown the monarchy wanted to tarnish the former queen’s reputation before they cut off her head.

She was a victim of something ancient and nasty that is becoming worryingly common: sexualised disinformation to undercut women in public life (see article). People have always invented rumours about such women. But three things have changed. Digital technology makes it easy to disseminate libel widely and anonymously. “Deepfake” techniques (manipulating images and video using artificial intelligence) make it cheap and simple to create convincing visual evidence that people have done or said things which they have not. And powerful actors, including governments and ruling parties, have gleefully exploited these new opportunities. A report by researchers at Oxford this year found well-organised disinformation campaigns in 70 countries, up from 48 in 2018 and 28 in 2017.

Consider the case of Rana Ayyub, an Indian journalist who tirelessly reports on corruption, and who wrote a book about the massacre of Muslims in the state of Gujarat when Narendra Modi, now India’s prime minister, was in charge there. For years, critics muttered that she was unpatriotic (because she is a Muslim who criticises the ruling party) and a prostitute (because she is a woman). In April 2018 the abuse intensified. A deepfake sex video, which grafted her face over that of another woman, was published and went viral. Digital mobs threatened to rape or kill her. She was “doxxed”: someone published her home address and phone number online. It is hard to prove who was behind this campaign of intimidation, but its purpose is obvious: to silence her, and any other woman thinking of criticising the mighty.

Similar tactics are used to deter women from running for public office. In the run-up to elections in Iraq last year, two female candidates were humiliated with explicit videos, which they say were faked. One pulled out of the race. The types of image used to degrade women vary from place to place. In Myanmar, where antipathy towards Muslims is widespread, detractors of Aung San Suu Kyi, the country’s de facto leader, circulated a photo manipulated to show her wearing a hijab. By contrast in Iran, an Islamist theocracy, a woman was disqualified from taking the seat she had won when a photo, which she claims is doctored, leaked showing her without one.

High-tech sexual slander has not replaced the old-fashioned sort, which remains rife wherever politicians and their propagandists can get away with it. In Russia, female dissidents are dubbed sexual deviants in pro-Kremlin media. In the Philippines, President Rodrigo Duterte has joked about showing a pornographic video of a female opponent, which she says is a fake, to the pope. In China, mainland-based trolls have spread lewd quotes falsely attributed to Tsai Ing-wen, Taiwan’s first female president. Beijing’s state media say she is “extreme” and “emotional” as a result of being unmarried and childless.

Stamping out the problem altogether will be impossible. Anyone can make a deepfake sex video, or hire someone to do it, for a pittance, and then distribute it anonymously. Politicians will inevitably be targets. Laws against libel or invasion of privacy may deter some abuses, but they are not much use when the perpetrator is unknown. Reputable tech firms will no doubt try to remove the most egregious content, but there will always be other platforms, some of them hosted by regimes that actively sow disinformation in the West.

So the best defence against sexual lies is scepticism. People should assume that videos showing female politicians naked or having sex are probably bogus. Journalists should try harder to expose the peddlers of fake footage, rather than mindlessly linking to it. Some day, one hopes, voters may even decide that it is none of their business what public figures look like under their clothes, or which consenting adults they sleep with.■

https://www.economist.com/leaders/2019/ ... a/341817/n

Post by kmaherali »

The Incredibly True Story of Fake Headlines

Are you still reading? Editors frequently use this space to include important contextual information about a news story


Fake news is back in the news again (thanks to Mark Zuckerberg). But did it ever really leave? For some people, legitimate news from traditional media has become unreliable, no longer to be trusted. Is this at all fair?

Keeping the news in a state of good health, in the age of social media, has become more urgent than ever. The way we talk about things, in debates over the defining issues of our time, ends up determining what we do about them. Fake news can be deliberately deployed by those with vested interests to shape, frame, and control public opinion, resulting in problematic actions (and inactions) on existential issues, such as climate change or human rights.

Many, like Zuckerberg, may not be motivated to see these little words on a page as a major problem. Cynics among us might point out that this is really nothing new, and newsflash, fake news is just a kind of propaganda, which has long lived on the dark side of the printed word. Zuckerberg’s strange reluctance to ban or fact-check certain paid political propaganda that employs the long, global reach of Facebook to intentionally broadcast lies to an unsuspecting public is yet another facet of how powerfully language in the information age can be weaponized by those with the means to do so.

Although the tricks of persuasion may be as old as time, that doesn’t mean we shouldn’t worry. Fake news is sometimes hard to recognize for what it is, constantly evolving to fit seamlessly into the community spaces many of us feel safe and comfortable in, those social places and platforms where we share stories and connect with people we’re inclined to trust: our friends, families, and colleagues (rather than the once widely respected gatekeepers of reliable information, the traditional press).

What is unprecedented is the speed at which massive misinformation, from deliberate propaganda and fake news to trolling to inadvertent misunderstanding, flows around the world like “digital wildfire,” thanks to social media. Hunt Allcott and Matthew Gentzkow’s recent study “Social Media and Fake News in the 2016 Election” noted three things:

“62 percent of US adults get news on social media,”

“the most popular fake news stories were more widely shared on Facebook than the most popular mainstream news stories,” and

“many people who see fake news stories report that they believe them.”

In fact, the World Economic Forum in 2016 considered digital misinformation one of the biggest threats to global society. Researcher Vivian Roese furthermore points out that while traditional media has lost credibility with readers, for some reason internet sources of news have actually gained in credibility. This may do lasting damage to public trust of the news, as well as public understanding of important issues, such as when scientific or political information is being repackaged and retold by the media, especially when coupled with our collectively deteriorating ability to interpret information critically and see propaganda for what it is.

More..

https://daily.jstor.org/the-incredibly- ... dium=email

Post by kmaherali »

Who Will Tell the Truth About the Free Press?

“Concocting fake news to attract eyeballs is a habitual trick of America’s New York Times, and this newspaper suffered a crisis of credibility for its fakery,” the Chinese government declared after The Times broke the news this month of government documents detailing the internment of Uighurs, Kazaks and other Muslims in the northwestern region of Xinjiang.

Who would have guessed that history had such a perverse development in store for us? As the historian Timothy Snyder has written in The Times, Adolf Hitler and the Nazis came up with the slogan “Lügenpresse” — translated as “lying press” — in order to discredit independent journalism. Now the tactic has been laundered through an American president, Donald Trump, who adopted the term “fake news” as a candidate and has used it hundreds of times in office.

That is how, barely a generation after the murder of millions of Jews in Nazi death camps, the term “fake news” has come to be deployed so brazenly by another repressive regime to act against another minority, to cover up the existence of prison camps for hundreds of thousands of Muslims.

Mr. Trump surely didn’t intend this. He’s not a strategic or particularly ideological person. He tends to act instead out of personal or political interest and often on impulse, based on what he thinks his core supporters in the country or the cable television studios want from him. When he yanks troops out of Syria or pardons war criminals, it’s safe to assume he’s not thinking about the long-term balance of power in the Middle East or the reputation and morale of the American military. He is maneuvering, as ever, for some perceived immediate political advantage.

So it is with his attacks on the news media. Mr. Trump loves the press. He has catered to it and been nurtured by it since he first began inventing himself as a celebrity in the 1970s. But he has needed a way to explain to his followers why there are so many upsetting revelations about incompetent administration officials, broken campaign promises and Trump family self-dealing. He’s now tweeted out the term “fake news” more than 600 times.

When an American president attacks the independent press, despots rush to imitate his example. Dozens of officials around the world — including leaders of other democracies — have used the term since Mr. Trump legitimized it. Why bother to contend with facts when you can instead just pretend they don’t exist? That’s what the Chinese government did. It simply called the Times report fake, though it was based on the government’s own documents, and declared it “unworthy of refutation.”

Following the same Oval Office script, a senior government official in Burundi trotted out “fake news” to explain why his government was banning the BBC. In Myanmar, where the government is systematically persecuting an ethnic minority, the Rohingya, an official told The Times that the very existence of such a group is “fake news.” The Russian foreign ministry uses the image of a big red “FAKE” stamp on its website to mark news reports that it does not like.

Jordan has introduced a law allowing the government to punish those who publish “false news.” Cameroon has actually jailed journalists for publishing “fake news.” Chad banned social media access nationwide for more than a year, citing “fake news.”

More...

https://www.nytimes.com/interactive/201 ... 3053091201

Post by kmaherali »

Only You Can Prevent Dystopia

How to survive the internet in 2020. (It’s not going to be easy.)


The new year is here, and online, the forecast calls for several seasons of hell. Tech giants and the media have scarcely figured out all that went wrong during the last presidential election — viral misinformation, state-sponsored propaganda, bots aplenty, all of us cleaved into our own tribal reality bubbles — yet here we go again, headlong into another experiment in digitally mediated democracy.

I’ll be honest with you: I’m terrified. I spend a lot of my time looking for edifying ways of interacting with technology. In the last year, I’ve told you to meditate, to keep a digital journal, to chat with people on the phone and to never tweet. Still, I enter the new decade with a feeling of overwhelming dread. There’s a good chance the internet will help break the world this year, and I’m not confident we have the tools to stop it.

Unless, that is, we are all really careful. As Smokey Bear might say of our smoldering online discourse: Only you can prevent dystopia!

And so: Here are a few tips for improving the digital world in 2020.

Virality is a red flag. Suspect it.

Resist the easy dunk.

Find a well-moderated corner of the internet.

Details and more...

https://www.nytimes.com/2020/01/01/opin ... ogin-email

Post by kmaherali »

Fake News: How to deal with misinformation

It’s likely we all know someone who has unfortunately shared inaccurate information on social media or in a WhatsApp group.

We are living in a different world compared to just three months ago. Critical parts of our lives have been uprooted and turned upside down, which has led to a further spiral of worry and stress. We want to be helpful, so we tend to share information that comforts and reassures us - however, this doesn’t necessarily mean it’s accurate, and in fact, it often contributes to the growing uncertainty.

The spread of false information during the coronavirus outbreak has been rapid, with well-meaning friends and family sharing messages on WhatsApp and Facebook warning of everything from premature government lockdowns to unusual home remedies that claim to beat the virus.

“We’re not just fighting an epidemic; we’re fighting an infodemic,” said Tedros Adhanom Ghebreyesus, Director General of the World Health Organisation. “Fake news spreads faster and more easily than this virus, and is just as dangerous.”

It’s likely we all know someone who has unfortunately shared inaccurate information about the risk of the outbreak; however, according to research, it likely wasn’t intentional.

Studies indicate that 46 percent of Internet-using adults in the UK viewed false or misleading information about the virus in the first week of the country’s lockdown. Furthermore, researchers at King’s College London questioned people about Covid-19 conspiracy theories, such as the false idea that the virus was linked to the rollout of 5G mobile networks. Those who believed these theories were less likely to believe there was a good reason for the lockdown in the UK, potentially increasing the risk to their health if they chose to ignore government instructions.

These forwarded messages may contain useless, incorrect, or even harmful information and advice, which can hamper the public health response and add to social disorder and division.

Even within our own community, the temporary closure of our Jamatkhana spaces has resulted in the appearance of “virtual Jamatkhanas,” the details of which were forwarded onto many others without a second thought. These virtual gatherings are not appropriate, as Jamatkhana may only be established by the Imam-of-the-Time, through his institutions and appointed Mukhi-Kamadias.

Additionally, a recent report from one province in Iran found that hundreds of people had died from drinking industrial-strength methanol — based on a false claim that it could protect from contracting Covid-19.

Regardless of the consequences of taking notice of fake news, the “infodemic” will likely continue. While companies such as Instagram, Facebook, and Twitter have introduced new measures (Facebook recently introduced an Information Centre with a mix of curated information and medical advice), we must be proactive and act rationally, with prudence and sound judgment.

In East Africa in 2016, Mawlana Hazar Imam explained that digital mediums have produced a global flood of voices in the form of websites, blogs, and social media, saying that, “The result is often a wild mix of messages: good information and bad information, superficial impressions, fleeting images, and a good deal of confusion and conflict. And this is true all over the world.”

At a time of intensifying emotions and growing polarisation, this is resulting in a society in which people feel “entitled to their own facts.”

“In such a world, it is absolutely critical – more than ever – that the public should have somewhere to turn for reliable, balanced, objective, and accurate information,” Hazar Imam continued.

In a world filled with a mixture of information, what do these dramatic changes mean for the Jamat worldwide? Here are some research-backed suggestions to combat misinformation:

- Question the source: references have been made to many institutions and experts during this outbreak. Check official mainstream media to see if the story is repeated there. If it was forwarded from a “friend of a friend,” assume it’s a rumour unless proven otherwise.
- Use fact-checking websites: sites like APFactCheck and FullFact separate true claims from false ones. While it’s far easier to simply forward a WhatsApp message that someone sent to you, a quick search on fact-checking sites will tell you whether it has been flagged as fake news by more trusted sources (see the sketch after this list).
- Be wary of over-encouragement to share: if a message urges you to share it, be suspicious - this is how viral messaging works.
- Listen to advice from official institutions: the best places to go for health information about Covid-19 are government health websites and the World Health Organisation website.
- At this time in particular, it is critical that we understand the risks of misinformation and miscommunication, and rely only on credible government and Jamati institutional sources, such as The Ismaili, the official website of the Ismaili community.
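
For readers comfortable with a little code, the fact-checking step can even be automated. The sketch below queries Google’s Fact Check Tools API, one of several aggregators of fact-checkers’ verdicts, and prints each outlet’s rating for a claim. It is a minimal illustration only: the query string and API key are placeholders, and the response field names follow the API documentation at the time of writing.

import requests

# Minimal sketch: look up published fact-checks for a claim.
# "YOUR_API_KEY" and the query are placeholders, not working values.
resp = requests.get(
    "https://factchecktools.googleapis.com/v1alpha1/claims:search",
    params={"query": "5G causes Covid-19", "languageCode": "en", "key": "YOUR_API_KEY"},
    timeout=10,
)
resp.raise_for_status()

# Each returned claim may carry reviews by several fact-checking outlets.
for claim in resp.json().get("claims", []):
    for review in claim.get("claimReview", []):
        publisher = review.get("publisher", {}).get("name", "unknown")
        print(f"{claim.get('text')!r} rated {review.get('textualRating')!r} by {publisher}")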

If we are mindful of our own online behaviour and think about the factual basis of the news we consume and forward on, we can stem the spread of misinformation while helping one another to decide what to trust for the betterment of our lives.

https://the.ismaili/global/news/feature ... -173435533

Post by kmaherali »

Welcome to the Next Level of Bullshit


“One of the most salient features of our culture is that there is so much bullshit.” These are the opening words of the short book On Bullshit, written by the philosopher Harry Frankfurt. Fifteen years after the publication of this surprise bestseller, the rapid progress of research on artificial intelligence is forcing us to reconsider our conception of bullshit as a hallmark of human speech, with troubling implications. What do philosophical reflections on bullshit have to do with algorithms? As it turns out, quite a lot.

In May this year the company OpenAI, co-founded by Elon Musk in 2015, introduced a new language model called GPT-3 (for “Generative Pre-trained Transformer 3”). It took the tech world by storm. On the surface, GPT-3 is like a supercharged version of the autocomplete feature on your smartphone; it can generate coherent text based on an initial input. But GPT-3’s text-generating abilities go far beyond anything your phone is capable of. It can disambiguate pronouns, translate, infer, analogize, and even perform some forms of common-sense reasoning and arithmetic. It can generate fake news articles that humans can barely detect above chance. Given a definition, it can use a made-up word in a sentence. It can rewrite a paragraph in the style of a famous author. Yes, it can write creative fiction. Or generate code for a program based on a description of its function. It can even answer queries about general knowledge. The list goes on.
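
To make concrete what “generate coherent text based on an initial input” means, here is a minimal sketch using the openly released GPT-2, a smaller predecessor of GPT-3 that works the same way (GPT-3 itself is reachable only through OpenAI’s hosted API). It uses the Hugging Face transformers library, and the prompt is an arbitrary example.

from transformers import pipeline

# Load an open autoregressive language model; like GPT-3, it continues
# a prompt by repeatedly predicting the next token.
generator = pipeline("text-generation", model="gpt2")

prompt = "One of the most salient features of our culture is"
result = generator(prompt, max_length=60, num_return_sequences=1)
print(result[0]["generated_text"])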

More...

http://nautil.us/issue/89/the-dark-side ... b00bf1f6eb

Post by kmaherali »

I Spoke to a Scholar of Conspiracy Theories and I’m Scared for Us

The big lesson of 2020 is that everything keeps getting more dishonest.


Lately, I have been putting an embarrassing amount of thought into notions like jinxes and knocking on wood. The polls for Joe Biden look good, but in 2020, any hint of optimism feels dangerously naïve, and my brain has been working overtime in search of potential doom.

I have become consumed with an alarming possibility: that neither the polls nor the actual outcome of the election really matter, because to a great many Americans, digital communication has already rendered empirical, observable reality beside the point.

If I sound jumpy, it’s because I spent a couple of hours recently chatting with Joan Donovan, the research director of the Shorenstein Center on Media, Politics and Public Policy at Harvard’s Kennedy School. Donovan is a pioneering scholar of misinformation and media manipulation — the way that activists, extremists and propagandists surf currents in our fragmented, poorly moderated media ecosystem to gain attention and influence society.

Donovan’s research team studies online lies the way crash-scene investigators study aviation disasters. They meticulously take apart specific hoaxes, conspiracy theories, viral political memes, harassment campaigns and other toxic online campaigns in search of the tactics that made each one explode into the public conversation.

This week, Donovan’s team published “The Media Manipulation Casebook,” a searchable online database of their research. It makes for grim reading — an accounting of the many failures of journalists, media companies, tech companies, policymakers, law enforcement officials and the national security establishment to anticipate and counteract the liars who seek to dupe us. Armed with these investigations, Donovan hopes we can all do better.

I hope she’s right. But studying her work also got me wondering whether we’re too late. Many Americans have become so deeply distrustful of one another that whatever happens on Nov. 3, they may refuse to accept the outcome. Every day I grow more fearful that the number of those Americans will be large enough to imperil our nation’s capacity to function as a cohesive society.

“I’m worried about political violence,” Donovan told me. America is heavily armed, and from Portland to Kenosha to the Michigan governor’s mansion, we have seen young men radicalized and organized online beginning to take the law into their own hands. Donovan told me she fears that “people who are armed are going to become dangerous, because they see no other way out.”

Media manipulation is a fairly novel area of research. It was only when Donald Trump won the White House by hitting it big with right-wing online subcultures — and after internet-mobilized authoritarians around the world pulled similar tricks — that serious scholars began to take notice.

The research has made a difference. In the 2016 election, tech companies and the mainstream media were often blind to the ways that right-wing groups, including white supremacists, were using bots, memes and other tricks of social media to “hack” the public’s attention, as the researchers Alice Marwick and Rebecca Lewis documented in 2017.

But the war since has been one of attrition. Propagandists keep discovering new ways to spread misinformation; researchers like Donovan and her colleagues keep sussing them out, and, usually quite late, media and tech companies move to fix the flaws — by which time the bad guys have moved on to some other way of spreading untruths.

More...

https://www.nytimes.com/2020/10/21/opin ... 778d3e6de3

Post by kmaherali »

To Recognize Misinformation in Media, Teach a Generation While It’s Young

There is no silver bullet to slay internet lies and fictions. But students can be taught to know when information is reliable.


The Instagram post looked strange to Amulya Panakam, a 16-year-old high school student who lives near Atlanta. In February, a friend showed her a sensational headline on her phone that declared, “Kim Jong Un is personally killing soldiers who have Covid-19!” Of course, the news wasn’t real. “I was immediately suspicious,” Ms. Panakam said. She searched online and found no media outlets reporting the fake story. But her friends had already shared it on social media.

Ms. Panakam was startled by how often students “grossly handle and spread misinformation without knowing it,” she said. Yet media literacy is not part of her school’s curriculum.

So Ms. Panakam contacted Media Literacy Now, a nonprofit organization based near Boston that works to spread media literacy education. With its help, she wrote to her state and local representatives to discuss introducing media literacy in schools.

The subject was hardly new. Well before the internet, many scholars analyzed media influence on society. In recent decades, colleges have offered media studies to examine advertising, propaganda, biases, how people are portrayed in films and more.

But in a digital age, media literacy also includes understanding how websites profit from fictional news, how algorithms and bots work, and how to scrutinize suspicious websites that mimic real news outlets.

Now, during the global Covid-19 crisis, identifying reliable health information can be a matter of life or death. And as racial tensions run high in America, hostile actors can harness social media to sow discord and spread disinformation and false voting information, as they did in the 2016 elections and may well be repeating in the current elections.

Indeed, Facebook and Twitter recently shut down fake accounts linked to the Internet Research Agency, backed by Russia. Twitter said this month that it suspended nearly 1,600 accounts, including some in Iran that “amplified conversations on politically sensitive topics” like race and social justice.

Online misinformation might seem like an incurable virus, but social media companies, policymakers and nonprofits are beginning to address the problem more directly. In March, big internet companies like Facebook and Twitter started removing misleading Covid-19 posts. And many policymakers are pushing for tighter regulations about harmful content.

What still needs more attention, however, is more and earlier education. Teaching media literacy skills to teenagers and younger students can protect readers and listeners from misinformation, just as teaching good hygiene reduces disease.

More...

https://www.nytimes.com/2020/10/23/opin ... 778d3e6de3

Post by kmaherali »

When the World Seems Like One Big Conspiracy

Understanding the structure of global cabal theories can shed light on their allure — and their inherent falsehood.


Conspiracy theories come in all shapes and sizes, but perhaps the most common form is the global cabal theory. A recent survey of 26,000 people in 25 countries asked respondents whether they believe there is “a single group of people who secretly control events and rule the world together.”

Thirty-seven percent of Americans replied that this is “definitely or probably true.” So did 45 percent of Italians, 55 percent of Spaniards and 78 percent of Nigerians.

Conspiracy theories, of course, weren’t invented by QAnon; they’ve been around for thousands of years. Some of them have even had a huge impact on history. Take Nazism, for example. We normally don’t think about Nazism as a conspiracy theory. Since it managed to take over an entire country and launch World War II, we usually consider Nazism an “ideology,” albeit an evil one.

But at its heart, Nazism was a global cabal theory based on this anti-Semitic lie: “A cabal of Jewish financiers secretly dominates the world and are plotting to destroy the Aryan race. They engineered the Bolshevik Revolution, run Western democracies, and control the media and the banks. Only Hitler has managed to see through all their nefarious tricks — and only he can stop them and save humanity.”

Understanding the common structure of such global cabal theories can explain both their attractiveness — and their inherent falsehood.

The Structure

Global cabal theories argue that underneath the myriad events we see on the surface of the world lurks a single sinister group. The identity of this group may change: Some believe the world is secretly ruled by Freemasons, witches or Satanists; others think it’s aliens, reptilian lizard people or sundry other cliques.

But the basic structure remains the same: The group controls almost everything that happens, while simultaneously concealing this control.

Global cabal theories take particular delight in uniting opposites. Thus the Nazi conspiracy theory said that on the surface, communism and capitalism look like irreconcilable enemies, right? Wrong! That’s exactly what the Jewish cabal wants you to think! And you might think that the Bush family and the Clinton family are sworn rivals, but they’re just putting on a show — behind closed doors, they all go to the same Tupperware parties.

From these premises, a working theory of the world emerges. Events in the news are a cunningly designed smoke screen aimed at deceiving us, and the famous leaders that distract our attention are mere puppets in the hands of the real rulers.

The Lure

Global cabal theories are able to attract large followings in part because they offer a single, straightforward explanation to countless complicated processes. Our lives are repeatedly rocked by wars, revolutions, crises and pandemics. But if I believe some kind of global cabal theory, I enjoy the comforting feeling that I do understand everything.

The war in Syria? I don’t need to study Middle Eastern history to comprehend what’s happening there. It’s part of the big conspiracy. The development of 5G technology? I don’t need to do any research on the physics of radio waves. It’s the conspiracy. The Covid-19 pandemic? It has nothing to do with ecosystems, bats and viruses. It’s obviously part of the conspiracy.

The skeleton key of global cabal theory unlocks all the world’s mysteries and offers me entree into an exclusive circle — the group of people who understand. It makes me smarter and wiser than the average person and even elevates me above the intellectual elite and the ruling class: professors, journalists, politicians. I see what they overlook — or what they try to conceal.

The Flaw

Global cabal theories suffer from the same basic flaw: They assume that history is very simple. The key premise of global cabal theories is that it is relatively easy to manipulate the world. A small group of people can understand, predict and control everything, from wars to technological revolutions to pandemics.

Particularly remarkable is this group’s ability to see 10 moves ahead on the global board game. When they release a virus somewhere, they can predict not only how it will spread through the world, but also how it will affect the global economy a year later. When they unleash a political revolution, they can control its course. When they start a war, they know how it will end.

But of course, the world is much more complicated. Consider the American invasion of Iraq, for example. In 2003, the world’s sole superpower invaded a medium-size Middle Eastern country, claiming it wanted to eliminate the country’s weapons of mass destruction and end Saddam Hussein’s regime. Some suspected that it also wouldn’t have minded the chance to gain hegemony over the region and dominate the vital Iraqi oil fields. In pursuit of its goals, the United States deployed the best army in the world and spent trillions of dollars.

Fast forward a few years, and what were the results of this tremendous effort? A complete debacle. There were no weapons of mass destruction, and the country was plunged into chaos. The big winner of the war was actually Iran, which became the dominant power in the region.

So should we conclude that George W. Bush and Donald Rumsfeld were actually undercover Iranian moles, executing a devilishly clever Iranian plot? Not at all. Instead, the conclusion is that it is incredibly difficult to predict and control human affairs.

You don’t need to invade a Middle Eastern country to learn this lesson. Whether you’ve served on a school board or local council, or merely tried to organize a surprise birthday party for your mom, you probably know how difficult it is to control humans. You make a plan, and it backfires. You try to keep something a secret, and the next day everybody is talking about it. You conspire with a trusted friend, and at the crucial moment he stabs you in the back.

Global cabal theories ask us to believe that while it is very difficult to predict and control the actions of 1,000 or even 100 humans, it is surprisingly easy to puppet master nearly eight billion.

The Reality

There are, of course, many real conspiracies in the world. Individuals, corporations, organizations, churches, factions and governments are constantly hatching and pursuing various plots. But that is precisely what makes it so hard to predict and control the world in its entirety.

In the 1930s, the Soviet Union really was conspiring to ignite communist revolutions throughout the world; capitalist banks were employing all kinds of dodgy strategies; the Roosevelt administration was planning to re-engineer American society in the New Deal; and the Zionist movement pursued its plan to establish a homeland in Palestine. But these and countless other plans often collided, and there wasn’t a single group of people running the whole show.

Today, too, you are probably the target of many conspiracies. Your co-workers may be plotting to turn the boss against you. A big pharmaceutical corporation may be bribing your doctor to give you harmful opioids. Another big corporation may be pressuring politicians to block environmental regulations and allow it to pollute the air you breathe. Some tech giant may be busy hacking your private data. A political party may be gerrymandering election districts in your state. A foreign government may be trying to foment extremism in your country. These could all be real conspiracies, but they are not part of a single global plot.

Sometimes a corporation, a political party or a dictatorship does manage to gather a significant part of all the world’s power into its hands. But when such a thing happens, it’s almost impossible to keep it hush-hush. With great power comes great publicity.

Indeed, in many cases great publicity is a prerequisite for gaining great power. Lenin, for example, would never have won power in Russia by avoiding the public gaze. And Stalin at first was much fonder of scheming behind closed doors, but by the time he monopolized power in the Soviet Union, his portrait was hanging in every office, school and home from the Baltic to the Pacific. Stalin’s power depended on this personality cult. The idea that Lenin and Stalin were just a front for the real behind-the-scenes rulers contradicts all historical evidence.

Realizing that no single cabal can secretly control the entire world is not just accurate — it is also empowering. It means that you can identify the competing factions in our world, and ally yourself with some groups against others. That’s what real politics is all about.

Yuval Noah Harari is a historian and the author of “Sapiens: A Graphic History.”

https://www.nytimes.com/2020/11/20/opin ... 778d3e6de3
kmaherali
Posts: 25705
Joined: Thu Mar 27, 2003 3:01 pm

Post by kmaherali »

Designed to Deceive: Do These People Look Real to You?

The people in this story may look familiar, like ones you’ve seen on Facebook or Twitter or Tinder. But they don’t exist. They were born from the mind of a computer, and the technology behind them is improving at a startling pace.


There are now businesses that sell fake people. On the website Generated.Photos, you can buy a “unique, worry-free” fake person for $2.99, or 1,000 people for $1,000. If you just need a couple of fake people — for characters in a video game, or to make your company website appear more diverse — you can get their photos for free on ThisPersonDoesNotExist.com. Adjust their likeness as needed; make them old or young or the ethnicity of your choosing. If you want your fake person animated, a company called Rosebud.AI can do that and can even make them talk.

These simulated people are starting to show up around the internet, used as masks by real people with nefarious intent: spies who don an attractive face in an effort to infiltrate the intelligence community; right-wing propagandists who hide behind fake profiles, photo and all; online harassers who troll their targets with a friendly visage.

We created our own A.I. system to understand how easy it is to generate different fake faces.

The A.I. system sees each face as a complex mathematical figure, a range of values that can be shifted. Choosing different values — like those that determine the size and shape of eyes — can alter the whole image.

For other qualities, our system used a different approach. Instead of shifting values that determine specific parts of the image, the system first generated two images to establish starting and end points for all of the values, and then created images in between.
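
The passage above describes moving through what machine-learning researchers call a "latent space": each face is a vector of numbers, and blending two vectors yields the in-between faces. As a minimal sketch of that interpolation idea, in Python with NumPy: here generate_face is a hypothetical stand-in for a trained generator, and the 512-dimensional code size is an assumption borrowed from common face GANs, not a detail of The Times's system.

import numpy as np

def interpolate_latents(z_start, z_end, steps=8):
    """Return evenly spaced latent vectors between two endpoints."""
    # Each intermediate vector is a weighted blend of the endpoints,
    # which a generator would render as a gradually morphing face.
    return [(1 - t) * z_start + t * z_end
            for t in np.linspace(0.0, 1.0, steps)]

# Two random latent codes standing in for the "starting and end points."
rng = np.random.default_rng(seed=0)
z_a, z_b = rng.standard_normal(512), rng.standard_normal(512)

frames = interpolate_latents(z_a, z_b)
# images = [generate_face(z) for z in frames]  # hypothetical generator call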

The creation of these types of fake images only became possible in recent years thanks to a new type of artificial intelligence called a generative adversarial network. In essence, you feed a computer program a bunch of photos of real people. It studies them and tries to come up with its own photos of people, while another part of the system tries to detect which of those photos are fake.

The back-and-forth makes the end product ever more indistinguishable from the real thing. The portraits in this story were created by The Times using GAN software that was made publicly available by the computer graphics company Nvidia.
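
To make that generator-versus-discriminator loop concrete, here is a deliberately tiny training-step sketch in PyTorch. It illustrates only the general GAN recipe, with made-up layer and image sizes; Nvidia's published face models are far larger and more sophisticated.

import torch
import torch.nn as nn

LATENT, IMG = 64, 28 * 28  # toy sizes, not those of a real face model
G = nn.Sequential(nn.Linear(LATENT, 256), nn.ReLU(),
                  nn.Linear(256, IMG), nn.Tanh())       # produces fakes
D = nn.Sequential(nn.Linear(IMG, 256), nn.LeakyReLU(0.2),
                  nn.Linear(256, 1))                    # tries to spot fakes

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(real_batch):
    b = real_batch.size(0)
    # 1) Train the discriminator: real photos are labelled 1, fakes 0.
    fake = G(torch.randn(b, LATENT)).detach()
    loss_d = (bce(D(real_batch), torch.ones(b, 1)) +
              bce(D(fake), torch.zeros(b, 1)))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()
    # 2) Train the generator: reward fakes the discriminator calls real.
    loss_g = bce(D(G(torch.randn(b, LATENT))), torch.ones(b, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()

# One step on a placeholder batch of flattened images scaled to [-1, 1]:
print(train_step(torch.rand(32, IMG) * 2 - 1))

Each call to train_step is one round of the back-and-forth: the discriminator sharpens its eye, then the generator adjusts to fool it.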

Given the pace of improvement, it’s easy to imagine a not-so-distant future in which we are confronted with not just single portraits of fake people but whole collections of them — at a party with fake friends, hanging out with their fake dogs, holding their fake babies. It will become increasingly difficult to tell who is real online and who is a figment of a computer’s imagination.

Fake images and more...

https://www.nytimes.com/interactive/202 ... 778d3e6de3
kmaherali
Posts: 25705
Joined: Thu Mar 27, 2003 3:01 pm

Post by kmaherali »

The QAnon Delusion Has Not Loosened Its Grip

Millions of Americans continue to actively participate in multiple conspiracy theories. Why?


A conspiracy theory promulgated by Donald Trump, the loser of the 2020 presidential election, has gripped American politics since Nov. 3. It has been willingly adopted by millions of his followers, as well as by a majority of Republican members of Congress — 145 to 108 — and by thousands of Republican state and local officials, all of whom have found it expedient to capitulate to the fantastical claim that the election was stolen by the Democratic Party, its officeholders, operatives and supporters.

Trump’s sprawling conspiracy theory is “being reborn as the new normal of the Republican Party,” Justin Ling wrote in Foreign Policy on Jan. 6.

A Dec. 30 NPR/Ipsos poll found that “recent misinformation, including false claims related to Covid-19 and QAnon, are gaining a foothold among some Americans.”

According to the survey, nearly a fifth of American adults, 17 percent, believe that “a group of Satan-worshiping elites who run a child sex ring are trying to control our politics.” Almost a third “believe that voter fraud helped Joe Biden win the 2020 election.” Even more, 39 percent, agree that “there is a deep state working to undermine President Trump.”

The spread of these beliefs has wrought havoc — as demonstrated by the Jan. 6 assault on Congress, as well as by the overwhelming support Republicans continue to offer to the former president.

More...

https://www.nytimes.com/2021/02/03/opin ... 778d3e6de3
kmaherali
Posts: 25705
Joined: Thu Mar 27, 2003 3:01 pm

Post by kmaherali »

Don’t Go Down the Rabbit Hole

Critical thinking, as we’re taught to do it, isn’t helping in the fight against misinformation.


For an academic, Michael Caulfield has an odd request: Stop overthinking what you see online.

Mr. Caulfield, a digital literacy expert at Washington State University Vancouver, knows all too well that at this very moment, more people are fighting for the opportunity to lie to you than at perhaps any other point in human history.

Misinformation rides the greased algorithmic rails of powerful social media platforms and travels at velocities and in volumes that make it nearly impossible to stop. That alone makes information warfare an unfair fight for the average internet user. But Mr. Caulfield argues that the deck is stacked even further against us. That the way we’re taught from a young age to evaluate and think critically about information is fundamentally flawed and out of step with the chaos of the current internet.

“We’re taught that, in order to protect ourselves from bad information, we need to deeply engage with the stuff that washes up in front of us,” Mr. Caulfield told me recently. He suggested that the dominant mode of media literacy (if kids get taught any at all) is that “you’ll get imperfect information and then use reasoning to fix that somehow. But in reality, that strategy can completely backfire.”

In other words: Resist the lure of rabbit holes, in part, by reimagining media literacy for the internet hellscape we occupy.

It’s often counterproductive to engage directly with content from an unknown source, and people can be led astray by false information. Influenced by the research of Sam Wineburg, a professor at Stanford, and Sarah McGrew, an assistant professor at the University of Maryland, Mr. Caulfield argued that the best way to learn about a source of information is to leave it and look elsewhere, a concept called lateral reading.

For instance, imagine you were to visit Stormfront, a white supremacist message board, to try to understand racist claims in order to debunk them. “Even if you see through the horrible rhetoric, at the end of the day you gave that place however many minutes of your time,” Mr. Caulfield said. “Even with good intentions, you run the risk of misunderstanding something, because Stormfront users are way better at propaganda than you. You won’t get less racist reading Stormfront critically, but you might be overloaded by information and overwhelmed.”

Our current information crisis, Mr. Caulfield argues, is an attention crisis.

“The goal of disinformation is to capture attention, and critical thinking is deep attention,” he wrote in 2018. People learn to think critically by focusing on something and contemplating it deeply — to follow the information’s logic and the inconsistencies.

That natural human mind-set is a liability in an attention economy. It allows grifters, conspiracy theorists, trolls and savvy attention hijackers to take advantage of us and steal our focus. “Whenever you give your attention to a bad actor, you allow them to steal your attention from better treatments of an issue, and give them the opportunity to warp your perspective,” Mr. Caulfield wrote.

One way to combat this dynamic is to change how we teach media literacy: Internet users need to learn that our attention is a scarce commodity that is to be spent wisely.

In 2016, Mr. Caulfield met Mr. Wineburg, who suggested modeling the process after the way professional fact checkers assess information. Mr. Caulfield refined the practice into four simple principles:

1. Stop.

2. Investigate the source.

3. Find better coverage.

4. Trace claims, quotes and media to the original context.

Otherwise known as SIFT.
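
The second step, investigating the source, lends itself to a quick automated first pass. Below is a minimal sketch of that lateral-reading move, assuming only the requests library and Wikipedia's public page-summary endpoint; it is my own illustration, not a tool from Mr. Caulfield's curriculum, and a missing article is a caution sign rather than a verdict.

import requests

def investigate_source(name):
    """Fetch Wikipedia's one-paragraph summary of a source, if it has one."""
    title = name.strip().replace(" ", "_")
    url = f"https://en.wikipedia.org/api/rest_v1/page/summary/{title}"
    resp = requests.get(url, timeout=10)
    if resp.status_code != 200:
        return None  # No article found: treat the source with extra caution.
    return resp.json().get("extract")

# Lateral reading in one call, before engaging with the content itself:
print(investigate_source("Robert F. Kennedy Jr."))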

Mr. Caulfield walked me through the process using an Instagram post from Robert F. Kennedy Jr., a prominent anti-vaccine activist, falsely alleging a link between the human papillomavirus vaccine and cancer. “If this is not a claim where I have a depth of understanding, then I want to stop for a second and, before going further, just investigate the source,” Mr. Caulfield said. He copied Mr. Kennedy’s name in the Instagram post and popped it into Google. “Look how fast this is,” he told me as he counted the seconds out loud. In 15 seconds, he navigated to Wikipedia and scrolled through the introductory section of the page, highlighting with his cursor the last sentence, which reads that Mr. Kennedy is an anti-vaccine activist and a conspiracy theorist.

“Is Robert F. Kennedy Jr. the best, unbiased source on information about a vaccine? I’d argue no. And that’s good enough to know we should probably just move on,” he said.

He probed deeper into the method to find better coverage by copying the main claim in Mr. Kennedy’s post and pasting that into a Google search. The first two results came from Agence France-Presse’s fact-check website and the National Institutes of Health. His quick searches showed a pattern: Mr. Kennedy’s claims were outside the consensus — a sign they were motivated by something other than science.

The SIFT method and the instructional teaching unit (about six hours of class work) that accompanies it have been picked up by dozens of universities across the country and in some Canadian high schools. What is potentially revolutionary about SIFT is that it focuses on making quick judgments. A SIFT fact check can and should take just 30, 60, 90 seconds to evaluate a piece of content.

The four steps are based on the premise that you often make a better decision with less information than you do with more. Also, spending 15 minutes to determine a single fact in order to decipher a tweet or a piece of news coming from a source you’ve never seen before will often leave you more confused than you were before. “The question we want students asking is: Is this a good source for this purpose, or could I find something better relatively quickly?” Mr. Caulfield said. “I’ve seen in the classroom where a student finds a great answer in three minutes but then keeps going and ends up won over by bad information.”

More...

https://www.nytimes.com/2021/02/18/opin ... 778d3e6de3
kmaherali
Posts: 25705
Joined: Thu Mar 27, 2003 3:01 pm

Post by kmaherali »

A Better Way to Think About Conspiracies

People will always be interested in conspiracy theories. They need a tool kit for discriminating among different fringe ideas.


No problem concerns journalists and press-watchers so much these days as the proliferation of conspiracy theories and misinformation on the internet. “We never confronted this level of conspiracy thinking in the U.S. previously,” Marty Baron, the former executive editor of The Washington Post, told Der Spiegel in a recent interview. His assumption, widely shared in our profession, is that the internet has forged an age of false belief, encouraged by social media companies and exploited by Donald Trump, that requires new thinking about how to win the battle for the truth.

Some of that new thinking leads to surprising places. For instance, my colleague Kevin Roose recently reported that some experts wish that the Biden administration would appoint a “reality czar” — a dystopian-sounding title, he acknowledged, for an official charged with coordinating anti-disinformation efforts — as “the tip of the spear for the federal government’s response to the reality crisis.”

Meanwhile, my fellow Opinion writer Charlie Warzel recently explored the work of the digital literacy expert Michael Caulfield, who argues that the usually laudable impulse toward critical thinking and investigation is actually the thing that most often leads online information-seekers astray. Instead of always going deeper, following arguments wherever they seem to lead, he suggests that internet users be taught to simplify: to check arguments quickly against mainstream sources, determine whether a given arguer is a plausible authority, and then move on if the person isn’t.

I’m pretty doubtful of the “reality czar” concept, but Caulfield’s arguments were more interesting. We should be skeptical that the scale of conspiracy thinking today is a true historical novelty; the conspiracy theories of the Revolutionary era, for instance, would be entirely at home on today’s internet. But we’re clearly dealing with a new way in which people absorb and spread conspiracies, and a mind-altering technology like the internet probably does require a new kind of education, to help keep people from losing their senses in the online wilds or settling in as citizens of partisan dreamscapes.

But that education won’t be effective if it tells too simplistic a story, where all consensus claims are true and all conspiracy theories empty. In reality, a consensus can be wrong, and a conspiracy theory can sometimes point toward an overlooked or hidden truth — and the approach that Caulfield proposes, to say nothing of the idea of a centralized Office of Reality, seems likely to founder on these rocks. If you tell people not to listen to some prominent crank because that person doesn’t represent the establishment view or the consensus position, you’re setting yourself up to be written off as a dupe or deceiver whenever the consensus position fails or falls apart.

I could multiply examples of how this falling apart happens — I am old enough to remember, for instance, when only cranks doubted that Saddam Hussein had weapons of mass destruction — but the last year has given us a thuddingly obvious case study: In January and February of 2020, using a follow-the-consensus method of online reading could have given you a wildly misleading picture of the disease’s risks, how it was transmitted, whether to wear masks and more.

Is there an alternative to leaning so heavily on the organs of consensus? I think there might be. It would start by taking conspiracy thinking a little more seriously — recognizing not only that it’s ineradicable, but also that it’s a reasonable response to both elite failures and the fact that conspiracies and cover-ups often do exist.

If you assume that people will always believe in conspiracies, and that sometimes they should, you can try to give them a tool kit for discriminating among different fringe ideas, so that when they venture into outside-the-consensus territory, they become more reasonable and discerning in the ideas they follow and bring back.

Here are a few ideas that belong in that kind of tool kit.

Prefer simple theories to baroque ones

Consider two theories about Covid-19: the conceit that it was designed by the Gates Foundation for some sort of world-domination scheme, and the theory that it was accidentally released by a Chinese virology lab in Wuhan, a disaster that the Beijing government then sought to cover up. If you just follow the official media consensus, you’ll see both these theories labeled misinformation and conspiracy. But in fact the two are wildly different, and the latter is vastly more plausible than the former — so plausible that it might even be true.

What makes it plausible is that it doesn’t depend on some complex plot for a one-world government; it just depends on the human and bureaucratic capacity for error and the authoritarian tendency toward cover-up. And this points to an excellent rule for anyone who looks at an official narrative and thinks that something seems suspicious: In following your suspicions, never leap to a malignant conspiracy to explain something that can be explained by incompetence and self-protection first.

Avoid theories that seem tailored to fit a predetermined conclusion

After the November election, I spent a fair amount of time arguing with conservatives who were convinced that it had been stolen for Joe Biden, and after a while I noticed that I was often playing Whac-a-Mole: They would raise a fishy-seeming piece of evidence, I would show them something debunking it, and then they would just move on to a different piece of evidence that assumed a different kind of conspiracy — shifting from stuffed ballot boxes in urban districts to computer shenanigans in suburban districts, say — without losing an iota in their certainty.

That kind of shift doesn’t prove the new example false, but it should make you suspect that what’s happening is a search for facts to fit a predetermined narrative, rather than just the observation of a suspicious fact with an open mind about where it leads. If you’re reading someone who can’t seem to internalize the implications of having an argument proved wrong, or who constantly cites easily discredited examples, you’re not being discerning; you’ve either wandered into someone’s ideological fixation or you’re a mark for intentional fake news.

Take fringe theories more seriously when the mainstream narrative has holes

For example: If you tell me that the C.I.A. killed John F. Kennedy, I will be dismissive, because the boring official narrative of his assassination — hawkish president killed by a Marxist loner who previously tried to assassinate a right-wing general — fits the facts perfectly well on its own. But if you tell me that some mysterious foreign intelligence agency was involved in Jeffrey Epstein’s strange career, I will be more open to your theories, because so much about Epstein’s dizzying ascent from prep school math teacher to procurer to the famous and the rich remains mystifying even now.

Likewise, every fringe theory about U.F.O.s — that they’re some kind of secret military supertechnology, that they’re really aliens, that they’re something stranger still — became a lot more plausible in the last couple of years, because the footage released by Pentagon sources created a mystery that no official or consensus narrative has adequately explained.

Just because you start to believe in one fringe theory, you don’t have to believe them all

This kind of slippage is clearly a feature of conspiratorial thinking: Joining an out-group that holds one specific outlandish opinion seems to encourage a sense that every out-group must be on to something, every outlandish opinion must be right. Thus the person who starts out believing that Epstein didn’t kill himself ends up going full QAnon. Or the person who decides that the Centers for Disease Control is wrong about their chronic illness ends up refusing chemotherapy for cancer.

But at the same time, there is no intellectually necessary reason why believing in one piece of secret knowledge, one specific conspiracy theory, should require a general belief in every fringe idea.

Here revealed religion offers a useful model. To be a devout Christian or a believing Jew or Muslim is to be a bit like a conspiracy theorist, in the sense that you believe that there is an invisible reality that secular knowledge can’t recognize and a set of decisive events in history that fall outside of nature’s laws.

But the great religions are also full of warnings against false prophets and fraudulent revelations. My own faith, Roman Catholicism, is both drenched in the supernatural and extremely scrupulous about the miracles and seers that it validates. And it allows its flock to be simply agnostic about a range of possibly supernatural claims and phenomena, to allow that they might be real, or might not, without making them the basis of your faith.

Some version of that careful agnosticism, that mixture of openness and caution, seems like a better spirit with which to approach the internet and all its rabbit holes than either a naïve credulity or a brittle confidence in mainstream media consensus. And I suspect that’s actually what a lot of polling on conspiracy theories traditionally captures: not a blazing certainty about what really happened on 9/11 or who killed Kennedy or how “they” faked the moon landing, but a kind of studied uncertainty about our strange world and its secrets.

What we should hope for, reasonably, is not a world where a “reality czar” steers everyone toward perfect consensus about the facts, but a world where a conspiracy-curious uncertainty persists as uncertainty, without hardening into the zeal that drove election truthers to storm the Capitol.

It’s that task that our would-be educators should be taking up: not a rigid defense of conventional wisdom, but the cultivation of a consensus supple enough to accommodate the doubter, instead of making people feel as if their only options are submission or revolt.

https://www.nytimes.com/2021/03/02/opin ... 778d3e6de3
kmaherali
Posts: 25705
Joined: Thu Mar 27, 2003 3:01 pm

Post by kmaherali »

Far-Right Extremists Move From ‘Stop the Steal’ to Stop the Vaccine

Extremist organizations are now bashing the safety and efficacy of coronavirus vaccines in an effort to try to undermine the government.


Adherents of far-right groups who cluster online have turned repeatedly to one particular website in recent weeks — the federal database showing deaths and adverse reactions nationwide among people who have received Covid-19 vaccinations.

Although negative reactions have been relatively rare, the numbers are used by many extremist groups to try to bolster a rash of false and alarmist disinformation in articles and videos with titles like “Covid-19 Vaccines Are Weapons of Mass Destruction — and Could Wipe out the Human Race” or “Doctors and Nurses Giving the Covid-19 Vaccine Will be Tried as War Criminals.”

If the so-called Stop the Steal movement appeared to be chasing a lost cause once President Biden was inaugurated, its supporters among extremist organizations are now adopting a new agenda from the anti-vaccination campaign to try to undermine the government.

Bashing of the safety and efficacy of vaccines is occurring in chatrooms frequented by all manner of right-wing groups including the Proud Boys; the Boogaloo movement, a loose affiliation known for wanting to spark a second Civil War; and various paramilitary organizations.

These groups tend to portray vaccines as a symbol of excessive government control. “If less people get vaccinated then the system will have to use more aggressive force on the rest of us to make us get the shot,” read a recent post on the Telegram social media platform, in a channel linked to members of the Proud Boys charged in storming the Capitol.

The marked focus on vaccines is particularly striking on discussion channels populated by followers of QAnon, who had falsely prophesied that Donald J. Trump would continue as president while his political opponents were marched off to jail.

“They rode the shift in the national conversation away from Trump to what was happening with the massive ramp up in vaccines,” said Devin Burghart, the head of the Seattle-based Institute for Research and Education on Human Rights, which monitors far-right movements, referring to followers of QAnon. “It allowed them to pivot away from the failure of their previous prophecy to focus on something else.”

Apocalyptic warnings about the vaccine feed into the far-right narrative that the government cannot be trusted, the sentiment also at the root of the Jan. 6 Capitol riot. The more vaccine opponents succeed in preventing or at least delaying herd immunity, experts noted, the longer it will take for life to return to normal and that will further undermine faith in the government and its institutions.

More...

https://www.nytimes.com/2021/03/26/us/f ... 778d3e6de3
kmaherali
Posts: 25705
Joined: Thu Mar 27, 2003 3:01 pm

Post by kmaherali »

The Fight Against Misinformation Isn’t Just on Facebook

Broadcast television and talk radio are just as problematic as social media.


The plague of misinformation — false rumors about the legitimacy of the 2020 presidential election, the ineffectiveness of face masks and the safety of 5G, to name a few examples — is usually blamed on social media. But false and damaging information isn’t just available online. It’s also abundant in broadcast media, and as politicians debate whether or how to regulate technology companies, they should also consider creating systems to address the dangers implicit in allowing and enabling the spread of misinformation, wherever it’s published.

The Constitution safeguards the freedom of speech from direct government interference, but lawmakers also recognize the need for thoughtful intervention. Politicians have been concerned about the power of online platforms for years. Last week, leaders of Google, Facebook and Twitter were again asked to answer questions from members of Congress about how their platforms handle false or harmful material. Both the House and the Senate are considering legislation that would revise Section 230 of the Communications Decency Act, which currently shields technology companies from legal responsibility for material posted by their users. Facebook has been advocating the law’s reform. Technology companies are also facing congressional scrutiny for potential antitrust violations.

But it is not at all clear that reducing the dominance of technology companies will go far enough. And oversight boards run by tech companies themselves, such as the one that Facebook created to hear issues of online safety and free speech, are not sufficient, as those efforts can never be truly independent if they are assembled by, and are financially tied to, the very companies they are tasked with overseeing. Furthermore, addressing only the technology industry won’t cure the problem, because misinformation that is spread in one medium is reinforced and amplified by falsehoods spread on another. A phrase that’s based on a lie and trends on Facebook and Twitter — “Stop the Steal,” for example — becomes fortified and legitimized when it’s picked up by television and radio reporters or commentators, whose words then reappear on social media, fueling a tornado of misinformation.

More...

https://www.nytimes.com/2021/03/29/opin ... 778d3e6de3
kmaherali
Posts: 25705
Joined: Thu Mar 27, 2003 3:01 pm

Post by kmaherali »

As the Press Weakens, So Does Democracy

I came to The New York Times in 1992, 29 years ago this summer, as the first intern in its graphics department. I arrived in Manhattan, a little Black boy from a hick town in Louisiana, and it blew my mind.

In those first months I saw how one of the best newsrooms in the country covered some of the biggest stories of the era, and it shaped me as a journalist and in my reverence for the invaluable role journalists play in society.

I arrived weeks after the Los Angeles riots following the acquittal of police officers in the Rodney King beating, and just before the Supreme Court reaffirmed Roe v. Wade. The city was under the control of the first Black mayor in its history, David Dinkins.

I would soon watch in person as Bill Clinton was nominated for president at the Democratic National Convention in Madison Square Garden, just about 10 blocks south of The Times’s offices, and I would watch a massive — and very political — gay pride parade march through Times Square as the community reeled from the scourge of AIDS. In 1992, a staggering 33,590 Americans died of the disease as it became “the number one cause of death among men aged 25-44 years,” according to the C.D.C.


This, in many ways, was an extraordinary time to be a journalist.

Newsroom employment was at a high, and throughout the 1990s, and even into the early 2000s, a slight majority of Americans still had a great deal or fair amount of trust in the news media to report the news “fully, accurately and fairly,” according to Gallup.

In 1992, there was no MSNBC or Fox News, no Google, Facebook, Twitter, Instagram or TikTok. Also, there weren’t many, if any, mainstream news organizations online. The Times didn’t start online publication until 1996, and then it was not the truly transformative force it would become.

Since the 1990s, newsrooms have seen tremendous, truly terrifying, contraction. On Tuesday, Pew Research Center issued a report that found “newsroom employment in the United States has dropped by 26 percent since 2008.”

Last month, Poynter reported on a survey that found that “the United States ranks last in media trust — at 29 percent — among 92,000 news consumers surveyed in 46 countries.”

Furthermore, a report last year by the Knight Foundation and the University of North Carolina found:

Since 2004, the United States has lost one-fourth — 2,100 — of its newspapers. This includes 70 dailies and more than 2,000 weeklies or nondailies.

At the end of 2019, the United States had 6,700 newspapers, down from almost 9,000 in 2004.

Today, more than 200 of the nation’s 3,143 counties and equivalents have no newspaper and no alternative source of credible and comprehensive information on critical issues. Half of the counties have only one newspaper, and two-thirds do not have a daily newspaper.

Many communities that lost newspapers were the most vulnerable — struggling economically and isolated.

The news industry is truly struggling, but the public is oblivious to this. A Pew Research Center survey conducted in 2018 found that “most Americans think their local news media are doing just fine financially.”

The report explains, “About seven-in-ten say their local news media are doing either somewhat or very well financially (71 percent).”

I guess I can understand the illusion in some ways. We have celebrity journalists — writers, radio personalities and anchors — in a way that didn’t exist before.

There were popular and trusted news figures, to be sure, but the proliferation of sensational, personality journalists is a newer and growing sector of journalism.

Also, we are now able to access and share more news than ever before. This all leads to a feeling that we are drowning in news, when in fact pond after pond is drying up and the lakes are getting smaller.

I share all that to say this: Democracies cannot survive without a common set of facts and a vibrant press to ferret them out and present them. Our democracy is in terrible danger. The only way that lies can flourish as they now do is because the press has been diminished in both scale and stature. Lies advance when truth is in retreat.

The founders understood the supreme value of the press, and that’s why they protected it in the Constitution. No other industry can claim the same.

But protection from abridgment is not protection from shrinkage or obsolescence.

We are moving ever closer to a country where the corrupt can deal in the darkness with no fear of being exposed by the light.

https://www.nytimes.com/2021/07/18/opin ... 778d3e6de3
kmaherali
Posts: 25705
Joined: Thu Mar 27, 2003 3:01 pm

Re: The Concept of Post Fact Society

Post by kmaherali »

Deepfake Luke Skywalker Should Scare Us

The power to create convincing deepfake icons could destabilize society.



“Tell it to me in Star Wars,” Tracy Jordan demands of Frank Rossitano in an episode of 30 Rock, when Frank tries to explain the “uncanny valley.” As Frank puts it, people like droids like C-3PO (vaguely human-like) and scoundrels with hearts of gold like Han Solo (a real person), but are creeped out by weirdly unnatural CGI stormtroopers (or Tom Hanks in The Polar Express). More than a decade after that scene aired, Star Wars is still clambering out of the uncanny valley: Most fans were thrilled to see a digital cameo from a young Luke Skywalker in The Mandalorian. But he—along with recreations of young Princess Leia and Grand Moff Tarkin from the original Star Wars film—didn’t look real, which for some was unnerving.

These are all examples of what have come to be known as deepfakes, a catchall term for synthetically manipulated or generated images or videos of people or just about anything. The name derives from the fact that most methods for creating deepfakes rely on deep neural networks, the same basic architecture that underlies most of the AI we encounter on a daily basis, like face and voice recognition. And because the technology is constantly and rapidly improving, so are the deepfakes. Deepfake Luke recently appeared again in The Book of Boba Fett, just about a year later, looking impressively realistic.

Human and computer perception are fundamentally different.

Currently, one of the most advanced methods for creating deepfakes is called a generative adversarial network. It’s actually two separate neural networks paired together: one network might generate fake faces (the generative part) and the other tries to discriminate the fake faces generated by the first network from a set of real faces in a database. Like training with a partner, over time, both networks get better together at their respective tasks (the adversarial part). The discriminator network gradually gets better at telling the fake faces from the real faces. And the generator network gradually gets better at fooling the discriminator by generating more realistic fake faces.
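
In the standard formulation (the original GAN objective due to Goodfellow and colleagues; production systems build on refinements of it, and nothing here is specific to the Star Wars work), the discriminator D and generator G play a minimax game over real data x and noise z:

\min_G \max_D \; \mathbb{E}_{x \sim p_{\mathrm{data}}}[\log D(x)] + \mathbb{E}_{z \sim p_z}[\log(1 - D(G(z)))]

The discriminator is rewarded for rating real faces highly and generated faces G(z) poorly, while the generator is trained to drag the discriminator's verdict on its outputs toward “real,” which is exactly the partner-training dynamic described above.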

Great for the scrappy little generator network. Good for Star Wars fans and the now immortal Luke Skywalker. Possibly bad for society.

Creating digital actors for TV shows is relatively harmless. But the power to create convincing deepfakes of world leaders, for example, could severely destabilize society. An inflammatory and well-timed deepfake video of a politician could hypothetically swing an election, spark riots, or raise tensions between two countries to volatile levels.

Whether this is a serious concern yet depends on how good current deepfake technology is and how good people are at discerning fakes from the real thing. In two recent studies, researchers asked thousands of people to judge the authenticity of real and fake images and videos to try to answer this question.

More...

https://nautil.us/deepfake-luke-skywalk ... -us-14254/
kmaherali
Posts: 25705
Joined: Thu Mar 27, 2003 3:01 pm

Re: The Concept of Post Fact Society

Post by kmaherali »

Time to assume that health research is fraudulent until proven otherwise?


Health research is based on trust. Health professionals and journal editors reading the results of a clinical trial assume that the trial happened and that the results were honestly reported. But about 20% of the time, said Ben Mol, professor of obstetrics and gynaecology at Monash Health, they would be wrong. As I’ve been concerned about research fraud for 40 years, I wasn’t as surprised as many would be by this figure, but it led me to think that the time may have come to stop assuming that research actually happened and is honestly reported, and to assume instead that the research is fraudulent until there is some evidence to support it having happened and been honestly reported. The Cochrane Collaboration, which purveys “trusted information,” has now taken a step in that direction.

As he described in a webinar last week, Ian Roberts, professor of epidemiology at the London School of Hygiene & Tropical Medicine, began to have doubts about the honest reporting of trials after a colleague asked if he knew that his systematic review showing that mannitol halved death from head injury was based on trials that had never happened. He didn’t, but he set about investigating the trials and confirmed that they hadn’t ever happened. They all had a lead author who purported to come from an institution that didn’t exist and who killed himself a few years later. The trials were all published in prestigious neurosurgery journals and had multiple co-authors. None of the co-authors had contributed patients to the trials, and some didn’t know that they were co-authors until after the trials were published. When Roberts contacted one of the journals, the editor responded that “I wouldn’t trust the data.” Why, Roberts wondered, did he publish the trial? None of the trials have been retracted.

Later Roberts, who headed one of the Cochrane groups, did a systematic review of colloids versus crystalloids only to discover again that many of the trials included in the review could not be trusted. He is now sceptical about all systematic reviews, particularly those that are mostly reviews of multiple small trials. He likened the original idea of systematic reviews to searching for diamonds: knowledge that was available if only it was brought together. Now he thinks of systematic reviewing as searching through rubbish. He proposed that small, single centre trials should be discarded, not combined in systematic reviews.

Mol, like Roberts, has conducted systematic reviews only to realise that most of the trials included either were zombie trials that were fatally flawed or were untrustworthy. What, he asked, is the scale of the problem? Although retractions are increasing, only about 0.04% of biomedical studies have been retracted, suggesting the problem is small. But the anaesthetist John Carlisle analysed 526 trials submitted to Anaesthesia and found that 73 (14%) had false data, and 43 (8%) he categorised as zombie. When he was able to examine individual patient data in 153 studies, 67 (44%) had untrustworthy data and 40 (26%) were zombie trials. Many of the trials came from the same countries (Egypt, China, India, Iran, Japan, South Korea, and Turkey), and when John Ioannidis, a professor at Stanford University, examined individual patient data from trials submitted from those countries to Anaesthesia during a year, he found that many were false: 100% (7/7) in Egypt; 75% (3/4) in Iran; 54% (7/13) in India; 46% (22/48) in China; 40% (2/5) in Turkey; 25% (5/20) in South Korea; and 18% (2/11) in Japan. Most of the trials were zombies. Ioannidis concluded that there are hundreds of thousands of zombie trials published from those countries alone.
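
The percentages quoted above follow directly from the raw counts, and re-deriving them is a one-minute exercise; a quick sketch:

# Re-deriving the quoted rates of false trials from the raw counts.
counts = {
    "Egypt": (7, 7), "Iran": (3, 4), "India": (7, 13), "China": (22, 48),
    "Turkey": (2, 5), "South Korea": (5, 20), "Japan": (2, 11),
}
for country, (false_trials, total) in counts.items():
    print(f"{country}: {false_trials}/{total} = {false_trials / total:.0%}")

# Carlisle's submission figures check out the same way:
print(f"false data: {73 / 526:.0%}, zombie: {43 / 526:.0%}")  # 14%, 8%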

Others have found similar results, and Mol’s best guess is that about 20% of trials are false. Very few of these papers are retracted.

We have long known that peer review is ineffective at detecting fraud, especially if the reviewers start, as most have until now, by assuming that the research is honestly reported. I remember being part of a panel in the 1990s investigating one of Britain’s most outrageous cases of fraud, when the statistical reviewer of the study told us that he had found multiple problems with the study and only hoped that it was better done than it was reported. We asked if he had ever considered that the study might be fraudulent, and he told us that he hadn’t.

We have now reached a point where those doing systematic reviews must start by assuming that a study is fraudulent until they have some evidence to the contrary. Some supporting evidence comes from the trial having been registered and having ethics committee approval. Andrew Grey, an associate professor of medicine at the University of Auckland, and others have developed a checklist with around 40 items that can be used as a screening tool for fraud. The REAPPRAISED checklist (Research governance, Ethics, Authorship, Plagiarism, Research conduct, Analyses and methods, Image manipulation, Statistics, Errors, Data manipulation and reporting) covers issues like “ethical oversight and funding, research productivity and investigator workload, validity of randomisation, plausibility of results and duplicate data reporting.” The checklist has been used to detect studies that have subsequently been retracted but hasn’t been through the full evaluation that you would expect for a clinical screening tool. (But I must congratulate the authors on a clever acronym: some say that dreaming up the acronym for a study is the most difficult part of the whole process.)
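
To make the idea of checklist-based screening concrete, here is a toy sketch built only from the ten domain names listed above; it is my own illustration, not the actual REAPPRAISED instrument, which has roughly 40 items and, as noted, has not been fully validated.

# Toy illustration of checklist-based screening, using the ten
# REAPPRAISED domain names quoted in the text. Illustrative only.
REAPPRAISED_DOMAINS = [
    "Research governance", "Ethics", "Authorship", "Plagiarism",
    "Research conduct", "Analyses and methods", "Image manipulation",
    "Statistics", "Errors", "Data manipulation and reporting",
]

def screen(concerns):
    """Return the checklist domains in which a paper raised any concern."""
    return [d for d in REAPPRAISED_DOMAINS if concerns.get(d)]

# Example: a paper whose baseline statistics look implausibly uniform.
print(screen({"Statistics": ["baseline variables too similar across arms"]}))

The point of such a tool is triage, not verdicts: papers flagging several domains deserve closer scrutiny before they enter a systematic review.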

Roberts and others wrote about the problem of the many untrustworthy and zombie trials in The BMJ six years ago with the provocative title: “The knowledge system underpinning healthcare is not fit for purpose and must change.” They wanted the Cochrane Collaboration and anybody conducting systematic reviews to take very seriously the problem of fraud. It was perhaps coincidence, but a few weeks before the webinar the Cochrane Collaboration produced guidelines on reviewing studies where there has been a retraction, an expression of concern, or the reviewers are worried about the trustworthiness of the data.

Retractions are the easiest to deal with, but they are, as Mol said, only a tiny fraction of untrustworthy or zombie studies. An editorial in the Cochrane Library accompanying the new guidelines recognises that there is no agreement on what constitutes an untrustworthy study, screening tools are not reliable, and “Misclassification could also lead to reputational damage to authors, legal consequences, and ethical issues associated with participants having taken part in research, only for it to be discounted.” The Collaboration is being cautious but does stand to lose credibility—and income—if the world ceases to trust Cochrane Reviews because they are thought to be based on untrustworthy trials.

Research fraud is often viewed as a problem of “bad apples,” but Barbara K Redman, who spoke at the webinar, insists that it is not a problem of bad apples but of bad barrels, if not, she said, of rotten forests or orchards. In her book Research Misconduct Policy in Biomedicine: Beyond the Bad-Apple Approach she argues that research misconduct is a systems problem—the system provides incentives to publish fraudulent research and does not have adequate regulatory processes. Researchers progress by publishing research, and because the publication system is built on trust and peer review is not designed to detect fraud it is easy to publish fraudulent research. The business model of journals and publishers depends on publishing, preferably lots of studies as cheaply as possible. They have little incentive to check for fraud and a positive disincentive to experience reputational damage—and possibly legal risk—from retracting studies. Funders, universities, and other research institutions similarly have incentives to fund and publish studies and disincentives to make a fuss about fraudulent research they may have funded or had undertaken in their institution—perhaps by one of their star researchers. Regulators often lack the legal standing and the resources to respond to what is clearly extensive fraud, recognising that proving a study to be fraudulent (as opposed to suspecting it of being fraudulent) is a skilled, complex, and time consuming process. Another problem is that research is increasingly international with participants from many institutions in many countries: who then takes on the unenviable task of investigating fraud? Science really needs global governance.

Everybody gains from the publication game, concluded Roberts, apart from the patients who suffer from being given treatments based on fraudulent data.

Stephen Lock, my predecessor as editor of The BMJ, became worried about research fraud in the 1980s, but people thought his concerns eccentric. Research authorities insisted that fraud was rare, didn’t matter because science was self-correcting, and that no patients had suffered because of scientific fraud. All those reasons for not taking research fraud seriously have proved to be false, and, 40 years on from Lock’s concerns, we are realising that the problem is huge, the system encourages fraud, and we have no adequate way to respond. It may be time to move from assuming that research has been honestly conducted and reported to assuming it to be untrustworthy until there is some evidence to the contrary.

Richard Smith was the editor of The BMJ until 2004.

https://blogs.bmj.com/bmj/2021/07/05/ti ... otherwise/

**********
The illusion of evidence based medicine


Evidence based medicine has been corrupted by corporate interests, failed regulation, and commercialisation of academia, argue these authors

The advent of evidence based medicine was a paradigm shift intended to provide a solid scientific foundation for medicine. The validity of this new paradigm, however, depends on reliable data from clinical trials, most of which are conducted by the pharmaceutical industry and reported in the names of senior academics. The release into the public domain of previously confidential pharmaceutical industry documents has given the medical community valuable insight into the degree to which industry sponsored clinical trials are misrepresented.[1-4] Until this problem is corrected, evidence based medicine will remain an illusion.

The philosophy of critical rationalism, advanced by the philosopher Karl Popper, famously advocated for the integrity of science and its role in an open, democratic society. A science of real integrity would be one in which practitioners are careful not to cling to cherished hypotheses and take seriously the outcome of the most stringent experiments.[5] This ideal is, however, threatened by corporations, in which financial interests trump the common good. Medicine is largely dominated by a small number of very large pharmaceutical companies that compete for market share, but are effectively united in their efforts to expand that market. The short term stimulus to biomedical research because of privatisation has been celebrated by free market champions, but the unintended, long term consequences for medicine have been severe. Scientific progress is thwarted by the ownership of data and knowledge because industry suppresses negative trial results, fails to report adverse events, and does not share raw data with the academic research community. Patients die because of the adverse impact of commercial interests on the research agenda, universities, and regulators.

The pharmaceutical industry’s responsibility to its shareholders means that priority must be given to their hierarchical power structures, product loyalty, and public relations propaganda over scientific integrity. Although universities have always been elite institutions prone to influence through endowments, they have long laid claim to being guardians of truth and the moral conscience of society. But in the face of inadequate government funding, they have adopted a neo-liberal market approach, actively seeking pharmaceutical funding on commercial terms. As a result, university departments become instruments of industry: through company control of the research agenda and ghostwriting of medical journal articles and continuing medical education, academics become agents for the promotion of commercial products.[6] When scandals involving industry-academe partnerships are exposed in the mainstream media, trust in academic institutions is weakened and the vision of an open society is betrayed.

The corporate university also compromises the concept of academic leadership. Deans who reached their leadership positions by virtue of distinguished contributions to their disciplines have in places been replaced with fundraisers and academic managers, who are forced to demonstrate their profitability or show how they can attract corporate sponsors. In medicine, those who succeed in academia are likely to be key opinion leaders (KOLs in marketing parlance), whose careers can be advanced through the opportunities provided by industry. Potential KOLs are selected based on a complex array of profiling activities carried out by companies; for example, physicians are selected based on their influence on the prescribing habits of other physicians.[7] KOLs are sought out by industry for this influence and for the prestige that their university affiliation brings to the branding of the company’s products. As well paid members of pharmaceutical advisory boards and speakers’ bureaus, KOLs present results of industry trials at medical conferences and in continuing medical education. Instead of acting as independent, disinterested scientists and critically evaluating a drug’s performance, they become what marketing executives refer to as “product champions.”

Ironically, industry sponsored KOLs appear to enjoy many of the advantages of academic freedom, supported as they are by their universities, the industry, and journal editors for expressing their views, even when those views are incongruent with the real evidence. While universities fail to correct misrepresentations of the science from such collaborations, critics of industry face rejections from journals, legal threats, and the potential destruction of their careers.[8] This uneven playing field is exactly what concerned Popper when he wrote about suppression and control of the means of science communication.[9] The preservation of institutions designed to further scientific objectivity and impartiality (i.e., public laboratories, independent scientific periodicals and congresses) is entirely at the mercy of political and commercial power; vested interest will always override the rationality of evidence.[10]

Regulators receive funding from industry and use industry funded and performed trials to approve drugs, without in most cases seeing the raw data. What confidence do we have in a system in which drug companies are permitted to “mark their own homework” rather than having their products tested by independent experts as part of a public regulatory system? Unconcerned governments and captured regulators are unlikely to initiate necessary change to remove research from industry altogether and clean up publishing models that depend on reprint revenue, advertising, and sponsorship revenue.

Our proposals for reforms include: liberation of regulators from drug company funding; taxation imposed on pharmaceutical companies to allow public funding of independent trials; and, perhaps most importantly, anonymised individual patient level trial data posted, along with study protocols, on suitably accessible websites so that third parties, self-nominated or commissioned by health technology agencies, could rigorously evaluate the methodology and trial results. With the necessary changes to trial consent forms, participants could require trialists to make the data freely available. The open and transparent publication of data is in keeping with our moral obligation to trial participants—real people who have been involved in risky treatment and have a right to expect that the results of their participation will be used in keeping with principles of scientific rigour. Industry concerns about privacy and intellectual property rights should not hold sway.

https://www.bmj.com/content/376/bmj.o702
kmaherali
Posts: 25705
Joined: Thu Mar 27, 2003 3:01 pm

Defeating disinformation

Post by kmaherali »

If you happen to receive unverified content, pause and think about the source and its accuracy before forwarding it on.

We’re experts at ignoring adverts: slogans on billboards, jingles on TV – we tend to dismiss them as hype. But attempts to manipulate us are becoming more effective and difficult to avoid.

From libellous Roman emperors two thousand years ago to “deepfake” videos today, false information holds the power to form public opinion and change the course of history. How can we defend ourselves and others from deliberate disinformation?

What is disinformation?

Not all misleading information is malicious. It might be satire, or just a joke. Many British TV viewers were taken in by a news report on spaghetti trees on 1 April 1957, becoming sceptical only when told to grow their own tree by placing spaghetti into a tin of tomatoes.

Sensational content known as clickbait aims to lure readers to a webpage. The site may make money simply from displaying ads - or by selling information about our browsing behaviour. Look out for ‘too good to be true’ headlines like ‘Drink this miracle herbal potion and stop feeling tired!’, or for excited promises: ‘You’ll never believe this…’

Or we might be targeted by a scam, with emails supposedly from stranded friends requesting an emergency loan, or a text message from a parcel delivery company demanding a redelivery fee.

In recent times, a number of unverified and untrue claims and stories have circulated among the Jamat. These have been designed to capture our attention, provoke an emotional reaction, and make us want to share with others. If you happen to receive this type of content, pause and think about the source and its accuracy before forwarding it on.

Malicious disinformation

Disinformation is false information designed to mislead people. Some societies today seem polarised about everything from climate change to vaccines. Scepticism about proven narratives is widespread and conspiracy theories abound. This has led to extreme views which a few decades ago might have remained marginal but can now take hold rapidly. Propaganda churned out by a few influencers is spread with one click by thousands of social media users. Fake accounts, often run by automated software called bots, multiply the problem.

Meanwhile, some seize the opportunity to deny that facts even exist, calling them ‘fake news’.

Digital self-defence

Three ways to avoid being tricked are to find more reliable information, to understand how to evaluate what we see, and to be aware of our own reactions.

Find out more

The internet offers us an unprecedented amount of verifiable information. We can cross-check against news stories, academic sites, online archives and image banks, educating ourselves on interpreting information and arguments.

Sometimes an image is real, but misattributed to another place, time, or situation in order to construct a lie. Or a statistic may be based on data selected to give a misleading fragment of the whole picture. When it becomes hard to judge, we can consult a fact checking site; these may geolocate and date images and video, check for digital manipulation, look up original records to verify or disprove claims, and perhaps trace the content back to its creator, whether human or bot.
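
Some of that verification can start on your own machine. As a minimal sketch, assuming the Pillow imaging library is installed: the EXIF metadata embedded in many photos records when and on what device they were made. Professional fact-checkers use far more robust forensic tools, and metadata is easily stripped or edited, so treat it as a clue, never proof.

from PIL import Image
from PIL.ExifTags import TAGS

def exif_summary(path):
    """Return a few EXIF fields that can help date and attribute a photo."""
    exif = Image.open(path).getexif()
    # Map numeric EXIF tag IDs to readable names like 'DateTime'.
    named = {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}
    wanted = ("DateTime", "Make", "Model", "Software")
    return {k: v for k, v in named.items() if k in wanted}

# print(exif_summary("photo_to_verify.jpg"))  # hypothetical file path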

Who benefits?

Second, consider who has created the content and why.

- Do they have access to verifiable facts? What is their expertise on this subject? What do they want us to feel or do as a result? How do they benefit? If the source and evidence are not transparent, be wary.

- Are they using emotive language to engage our heart rather than our brain? Are they appealing to fear or anger? Read up on the principles of rhetoric, persuasion, and how to spot flawed logic. If content is emotive, unsourced, makes an extreme claim, or wants us to change our attitude or behaviour - look out.

Know ourselves

Third, we must know how our own personality will prime us to respond. Suppose:

- Someone receives a forwarded post saying that 5G causes Covid.

- They see that their friends – selected for their similar outlooks – trust this information enough to share it, and they trust their friends’ judgement more than that of strangers.

- Following the media that appeals to their perspective, they think that this view is mainstream - and are persuaded by what they see as the wisdom of crowds.

- The story may fit with their preferred world view - that technology is frightening, or that the government wants to harm them.

- They may also enjoy feeling more enlightened than others who don’t subscribe to this conspiracy theory.

- One click and a hundred new people start looking suspiciously at their local cellphone tower.

Knowing about these universal tendencies, becoming aware of our own biases, and being willing to read or watch content that challenges our views can help protect us against disinformation.

In a speech made in Athens in 2015, Mawlana Hazar Imam spoke of how fast and easy information and falsehoods can be spread: “…if information can be shared more easily as technology advances, so can misinformation and disinformation,” he said. “If truth can spread more quickly and more widely, then so can error and falsehood.”

If in any doubt, avoid the trap of forwarding lies. In some circumstances, sharing really isn’t caring.

https://the.ismaili/global/news/feature ... nformation