fake news Archives - The Cincinnati Herald
https://thecincinnatiherald.com/tag/fake-news/

The Herald is Cincinnati and Southwest Ohio's leading source for Black news, offering health, entertainment, politics, sports, community and breaking news.

Strategies to combat misinformation with media overload
Published Sun, 09 Mar 2025

Don’t tune out. Do be strategic about where, how and when you get your information. A media literacy expert explains how to have good ‘news hygiene.’


By Seth Ashley, Boise State University

Political spin is nothing new, and identifying reliable news and information can be hard to do during any presidency. But the return of Donald Trump to the White House has reignited debates over truth, accountability and the role of media in a deeply divided America.

Misinformation is an umbrella term that covers all kinds of false and misleading content, and there is lots of it out there.

During Trump’s chaotic first presidency, the president himself promoted false claims about COVID-19, climate change and the 2020 election.

Now, in his second term, Trump is again using the bully pulpit of the presidency to spread false claims – for example, on Ukraine and Canada as well as immigration, inflation and, still, the 2020 election.

Meanwhile, social media platforms such as Meta have ended fact-checking programs created after Trump’s first election win, and presidential adviser Elon Musk continues to use social media platform X to amplify Trump’s false claims and his own conspiracy theories.

To stay informed while also arming yourself against misinformation, it’s crucial to practice what I call good “news hygiene” by developing strong news literacy skills.

News literacy, as I argue in my open-access 2020 book “News Literacy and Democracy” and in recent research with colleagues, is about more than fact-checking and detecting AI-generated fakes. It’s about understanding how modern media works and how content is influenced, from TikTok “newsfluencers” to FOX News to The New York Times.

Here are six ways to become a smarter, saner news consumer.

1. Recognize the influence of algorithms

Algorithms are the hidden computer formulas that mediate everything news consumers read, watch, click on and react to online. Despite the illusion of neutrality, algorithms shape people’s perceptions of reality and are designed to maximize engagement.

Algorithmic recommendation engines that power everything from X to YouTube can even contribute to a slow-burn destabilization of American society by shoving consumers into partisan echo chambers that increase polarization and erode social trust.
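To make the mechanism concrete, here is a minimal, hypothetical sketch of an engagement-weighted feed ranker in Python. The weights and signals are invented for illustration (no platform publishes its actual formula), but they show why content that provokes reactions rises to the top regardless of accuracy.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    clicks: int
    shares: int
    comments: int
    watch_seconds: float

def engagement_score(post: Post) -> float:
    """Toy score that rewards reactions and never checks accuracy.
    The weights are invented for illustration; real platforms tune
    thousands of signals, but the incentive is similar."""
    return (1.0 * post.clicks
            + 4.0 * post.shares       # shares spread content the furthest
            + 2.0 * post.comments     # heated arguments count as engagement
            + 0.1 * post.watch_seconds)

def rank_feed(posts: list[Post]) -> list[Post]:
    # Most engaging first; nothing here asks whether a post is true.
    return sorted(posts, key=engagement_score, reverse=True)

feed = rank_feed([
    Post("Calm, accurate explainer", 120, 5, 8, 900.0),
    Post("Outrage-bait false claim", 300, 90, 210, 400.0),
])
print([p.text for p in feed])  # the false claim ranks first
```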

Sometimes, algorithms can feed falsehoods that warp people’s perceptions or nudge them toward dangerous behavior. Facebook groups spreading “Stop the Steal” messages contributed to the Jan. 6, 2021, Capitol insurrection. TikTok algorithms had people drinking borax, a laundry additive, in the “borax challenge.” Dylann Roof killed nine Black people based on falsehoods from hate groups he found in search results.

Rather than passively consuming whatever appears in your feeds – allowing brain rot to set in – actively seek out a variety of sources to inform you about current events. The news shouldn’t just tell you what you want to hear.

And spread the word. People who simply understand that algorithms filter information are more likely to take steps to combat misinformation.

2. Understand the economics of corporate news

Media outlets operate within economic systems that shape their priorities.

For-profit newsrooms, which produce the bulk of news consumed in the U.S., rely heavily on advertising revenue, which can reduce the quality of news and create a commercial bias. Outlets such as ABC, CNN and FOX, as well as local network TV affiliates, can still do good work, but their business model helps to explain sensational horse-race election coverage and false-balance reporting that leaves room for doubt on established facts about climate change and vaccines.

At the same time, the economic outlook for news is not good. Declining revenues and staff cuts also reduce the quality of news.

Nonprofit newsrooms and public media provide alternatives that generally prioritize public interest over profit. And if you have the budget, paying for quality journalism with a subscription can help credible outlets survive.

Traditional journalism has never been perfect, but the collapse of the news business is unquestionably bad for democracy. Countries with better funding for public media tend to have stronger democracies, and compared with other rich nations, the U.S. spends almost nothing on public service broadcasting.

3. Focus on source evaluation and verification

Particularly with AI-generated content on the rise, source evaluation and verification are essential skills. Here are some ways to identify trustworthy journalism:

  • Quality of evidence: Are claims verified with support from a variety of informed individuals and perspectives?
  • Transparency about sources: Is the reporter clear about where their information came from and who shared it?
  • Adherence to ethical guidelines: Does the outlet follow the basic journalistic principles of accuracy and independence?
  • Corrections: Does the outlet correct its errors and follow up on incomplete reporting?

Be cautious with content that lacks the author’s name, relies heavily on anonymous sources – or uses no sources at all – or is published by outlets with a clear ideological agenda. These aren’t immediate disqualifiers – some credible news magazines such as The Economist have no bylines, for example, and some sources legitimately need anonymity for protection – but watch out for news operations that routinely engage in these practices and obscure their motive for doing so.

A good online verification practice is called “lateral reading.” That’s when you open new browser tabs to verify claims you see on news sites and social media. Ask: Is anyone else covering this, and have they reached similar conclusions?

4. Examine your emotional reactions

One of the hallmarks of misinformation is its ability to provoke strong emotional responses, whether outrage, fear or validation.

These reactions, research shows, can cloud judgment and make people more susceptible to false or misleading information. The primitive brains of humans are wired to reject information that challenges our beliefs and to accept information we like, a phenomenon known as confirmation bias.

When encountering content that sparks an emotional reaction, ask yourself: Who benefits from this narrative? What evidence supports it? Is this information informative or manipulative?

If the answers make you suspicious, investigate further before acting or sharing.

5. Guard against propaganda

Everyone in politics works to shape narratives in order to gain support for their agenda. It’s called spin.

But Trump goes further, spreading documented lies to pump up his followers and undermine the legitimacy of basic democratic institutions.

He also targets media he doesn’t like. From discrediting critical outlets as “fake news” to calling journalists the “enemy of the people,” these tactics silence dissent, undermine public trust in journalism and alter perceptions of acceptable public discourse and behavior.

Meanwhile, he amplifies information and people who support his political causes. This is called propaganda.

Understanding the mechanics of propaganda – its use of repetition, emotional appeal, scapegoating, scare tactics and unrealistic promises – can help inoculate people against its influence.

6. Stay engaged

Democracy relies on an informed and active citizenry to hold accountable its government, the officials who work in it, and other powerful players in society. Yet the sheer volume of misinformation and bad news these days can feel overwhelming.

Rather than tuning out – what scholars call “news avoidance” – you can practice critical consumption of news.

Read deeply, look beyond headlines and short video clips, question the framing of stories, and encourage discussions about the role of media in society. Share reliable information with your friends and colleagues, and model good news hygiene for others.

Correcting misinformation is notoriously hard, so if someone you know shares it, start a dialogue by asking – privately and gently – where they heard it and whether they think it’s really true.

Finally, set goals for your consumption. What are your information needs at any given moment, and where can you meet them? Some experts say 30 minutes a day is enough. Don’t waste your time on garbage.

Touch grass

While it’s important to stay engaged, so is getting outside and connecting with nature to calm and soothe your busy brain. Logging off and connecting with people in real life will keep your support system strong for when things are tough. Protect your mental health by turning off notifications and taking breaks from your phone.

Practicing good news hygiene isn’t just about protecting ourselves – it’s about fostering a media environment that supports democracy and informed participation.

This article is republished from The Conversation, a nonprofit, independent news organization bringing you facts and trustworthy analysis to help you make sense of our complex world. It was written by: Seth Ashley, Boise State University


Seth Ashley does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

Feature Image: Not all news sources are created equal. Noah Berger/AP Images

Meta shifts to crowdsourcing in misinformation fight
Published Thu, 16 Jan 2025

Content moderation is a thorny issue, often pitting safety against free speech. But does it even work, and which approach is best?


By Anjana Susarla, Michigan State University

Meta’s decision to change its content moderation policies by replacing centralized fact-checking teams with user-generated community labeling has stirred up a storm of reactions. But taken at face value, the changes raise questions about the effectiveness of Meta’s old approach, fact-checking, and of its new one, community notes.

With billions of people worldwide accessing their services, platforms such as Meta’s Facebook and Instagram have a responsibility to ensure that users are not harmed by consumer fraud, hate speech, misinformation or other online ills. Given the scale of this problem, combating online harms is a serious societal challenge. Content moderation plays a role in addressing these online harms.

Moderating content involves three steps. The first is scanning online content – typically, social media posts – to detect potentially harmful words or images. The second is assessing whether the flagged content violates the law or the platform’s terms of service. The third is intervening in some way. Interventions include removing posts, adding warning labels to posts, and diminishing how much a post can be seen or shared.
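As a rough sketch of those three steps, consider the toy Python pipeline below. The flag terms, severity score and thresholds are placeholders invented for this example; real systems rely on machine-learned classifiers and human reviewers at each stage.

```python
from enum import Enum, auto

class Action(Enum):
    ALLOW = auto()
    LABEL = auto()     # add a warning label
    DOWNRANK = auto()  # diminish how widely the post is seen or shared
    REMOVE = auto()

# Step 1: scan. A stand-in for classifiers that flag potentially
# harmful words or images. The term list is invented for this example.
FLAG_TERMS = ("miracle cure", "wire me money")

def scan(post: str) -> bool:
    text = post.lower()
    return any(term in text for term in FLAG_TERMS)

# Step 2: assess. A toy severity score stands in for checking the post
# against the law and the platform's terms of service.
def assess(post: str) -> float:
    text = post.lower()
    words = max(len(text.split()), 1)
    return sum(text.count(term) for term in FLAG_TERMS) / words

# Step 3: intervene, with a graduated response.
def intervene(severity: float) -> Action:
    if severity > 0.2:
        return Action.REMOVE
    if severity > 0.05:
        return Action.DOWNRANK
    return Action.LABEL

def moderate(post: str) -> Action:
    return intervene(assess(post)) if scan(post) else Action.ALLOW

print(moderate("Try this miracle cure today!"))  # Action.DOWNRANK
print(moderate("Lovely weather in Cincinnati"))  # Action.ALLOW
```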

Content moderation can range from user-driven moderation models on community-based platforms such as Wikipedia to centralized content moderation models such as those used by Instagram. Research shows that both approaches are a mixed bag.

Does fact-checking work?

Meta’s previous content moderation policy relied on third-party fact-checking organizations, which brought problematic content to the attention of Meta staff. Meta’s U.S. fact-checking organizations were AFP USA, Check Your Fact, Factcheck.org, Lead Stories, PolitiFact, Science Feedback, Reuters Fact Check, TelevisaUnivision, The Dispatch and USA TODAY.

Fact-checking relies on impartial expert review. Research shows that it can reduce the effects of misinformation but is not a cure-all. Also, fact-checking’s effectiveness depends on whether users perceive the role of fact-checkers and the nature of fact-checking organizations as trustworthy.

Crowdsourced content moderation

In his announcement, Meta CEO Mark Zuckerberg highlighted that content moderation at Meta would shift to a community notes model similar to X, formerly Twitter. X’s community notes is a crowdsourced fact-checking approach that allows users to write notes to inform others about potentially misleading posts.
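A distinguishing feature of community notes is "bridging": a note is surfaced only when raters who normally disagree both find it helpful. The toy Python sketch below illustrates that principle under a big simplifying assumption (two known rater camps); X's production system infers viewpoints via matrix factorization over rating histories, so treat this as an illustration of the idea, not the real algorithm.

```python
# Toy bridging rule: surface a note only if raters in *each* camp rate
# it helpful, so a one-sided pile-on is not enough. Camp labels are a
# simplifying assumption; X infers viewpoints from rating histories.

def helpful_rate(ratings: list[tuple[str, bool]], camp: str) -> float:
    votes = [helpful for c, helpful in ratings if c == camp]
    return sum(votes) / len(votes) if votes else 0.0

def show_note(ratings: list[tuple[str, bool]], threshold: float = 0.7) -> bool:
    return all(helpful_rate(ratings, camp) >= threshold
               for camp in ("left", "right"))

partisan_note = [("left", True)] * 40 + [("right", False)] * 10
bridging_note = [("left", True)] * 30 + [("right", True)] * 25 + [("right", False)] * 5

print(show_note(partisan_note))  # False: only one camp finds it helpful
print(show_note(bridging_note))  # True: cross-camp agreement
```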

Studies are mixed on the effectiveness of X-style content moderation efforts. A large-scale study found little evidence that the introduction of community notes significantly reduced engagement with misleading tweets on X. Rather, it appears that such crowd-based efforts might be too slow to effectively reduce engagement with misinformation in the early and most viral stage of its spread.

There have been some successes from quality certifications and badges on platforms. However, community-provided labels might not be effective in reducing engagement with misinformation, especially when they’re not accompanied by appropriate training about labeling for a platform’s users. Research also shows that X’s Community Notes is subject to partisan bias.

Crowdsourced initiatives such as the community-edited online reference Wikipedia depend on peer feedback and rely on having a robust system of contributors. As I have written before, a Wikipedia-style model needs strong mechanisms of community governance to ensure that individual volunteers follow consistent guidelines when they authenticate and fact-check posts. People could game the system in a coordinated manner and up-vote interesting and compelling but unverified content.

Misinformation researcher Renée DiResta analyzes Meta’s change in content moderation policy.

Content moderation and consumer harms

A safe and trustworthy online space is akin to a public good, but without motivated people willing to invest effort for the greater common good, the overall user experience could suffer.

Algorithms on social media platforms aim to maximize engagement. However, given that policies that encourage engagement can also result in harm, content moderation also plays a role in consumer safety and product liability.

This aspect of content moderation has implications for businesses that either use Meta for advertising or to connect with their consumers. Content moderation is also a brand safety issue because platforms have to balance their desire to keep the social media environment safer against that of greater engagement.

AI content everywhere

Content moderation is likely to be further strained by growing amounts of content generated by artificial intelligence tools. AI detection tools are flawed, and developments in generative AI are challenging people’s ability to differentiate between human-generated and AI-generated content.

In January 2023, for example, OpenAI launched a classifier that was supposed to differentiate between texts generated by humans and those generated by AI. However, the company discontinued the tool in July 2023 due to its low accuracy.

There is potential for a flood of inauthentic accounts – AI bots – that exploit algorithmic and human vulnerabilities to monetize false and harmful content. For example, they could commit fraud and manipulate opinions for economic or political gain.

Generative AI tools such as ChatGPT make it easier to create large volumes of realistic-looking social media profiles and content. AI-generated content primed for engagement can also exhibit significant biases, such as race and gender. In fact, Meta faced a backlash for its own AI-generated profiles, with commentators labeling it “AI-generated slop.”

More than moderation

Regardless of the type of content moderation, the practice alone is not effective at reducing belief in misinformation or at limiting its spread.

Ultimately, research shows that a combination of fact-checking approaches, audits of platforms, and partnerships with researchers and citizen activists is important in ensuring safe and trustworthy community spaces on social media.

This article is republished from The Conversation, a nonprofit, independent news organization bringing you facts and trustworthy analysis to help you make sense of our complex world. It was written by: Anjana Susarla, Michigan State University


Anjana Susarla receives funding from the National Institutes of Health.

Feature Image: Meta stirred up controversy when it ditched fact-checking. Chesnot/Getty Images

Musk’s feud raises questions on regulation
Published Wed, 11 Sep 2024

Elon Musk and Brazil's Supreme Court Justice Alexandre de Moraes are engaged in a public spat over platform regulation and disinformation, which has raised questions about the balance between free speech and combating disinformation in a polarized environment.


By Yasmin Curzi de Mendonça, University of Virginia

Feature Image: Brazil’s Supreme Court Justice Alexandre de Moraes faces off against X’s Elon Musk. Ton Molina/NurPhoto via Getty Images / AP Photo/Kirsty Wigglesworth

It is easy to get distracted by the barbs, swipes and bluster of the ongoing and very public spat between the world’s richest man and a fierce justice on Brazil’s highest court. Elon Musk, the billionaire owner of X, posts regularly about his contempt for Supreme Court Justice Alexandre de Moraes – a man Musk has labeled a “dictator” and “Brazil’s Darth Vader.” He makes these comments on a social media platform that Moraes has banned in Latin America’s most populous country as part of a lengthy campaign against disinformation.

But as an expert on Brazilian digital law, I see this as more than just a bitter personal feud. X’s legal battle with Brazil’s Supreme Court raises important questions about platform regulation and how to combat disinformation while protecting free speech. And while the focus is on Brazil and Musk, it is a debate being echoed around the world.

Countdown to the big fight

Things came to a head between Musk and Moraes in August 2024, but the battle has been years in the making.

In 2014, Brazil passed the “Marco Civil da Internet” or the “Internet Bill of Rights,” as it is commonly known. Backed by bipartisan support, this framework for internet regulation outlined principles for protecting user privacy and free speech while also creating penalties for platforms that break the rules.

It included a “judicial notice and takedown” system under which internet platforms are liable for harmful user-generated content only if they fail to remove the content after receiving a specific court order.

The approach was designed to strike a balance between protecting free speech and ensuring that illegal and harmful content can be removed. It prevents social media platforms, messaging apps and online forums from being held automatically responsible for users’ posts, while empowering courts to intervene when necessary.

But the 2014 law stops short of creating detailed rules for content moderation, leaving much of the responsibility in the hands of platforms such as Facebook and X.

And the rise of disinformation in recent years, especially around Brazil’s 2022 presidential elections, exposed the limitations of the framework.

The president at the time, far-right populist Jair Bolsonaro, and his supporters were accused of using social media platforms such as X to spread falsehoods, sow doubts about the integrity of Brazil’s electoral system and incite violence. When Bolsonaro was defeated at the ballot box by the leftist Luiz Inácio Lula da Silva, an online campaign of election denialism flourished. It culminated in the Jan. 8, 2023, storming of the Brazilian Congress, Supreme Court and the presidential palace by Bolsonaro’s supporters – an event similar to the U.S. Capitol riots two years earlier.

The fight gets personal …

In response to the disinformation campaigns and the attacks, Brazil’s Supreme Court initiated two inquiries – the digital militias inquiry and the antidemocratic acts inquiry – to investigate groups involved in the plot.

As part of those inquiries, the Supreme Court requested social media platforms – such as Facebook, Instagram and X – to hand over the IP addresses and suspend accounts linked to those illegal activities.

But by this time, Musk, who has described himself as a free-speech fundamentalist, had acquired the platform, promising to support free speech, reinstate banned accounts and significantly scale back the platform’s content moderation.

Security forces arrest supporters of President Jair Bolsonaro after retaking control of the presidential palace on Jan. 8, 2023. Ton Molina/AFP via Getty Images

As a result, Musk has been openly defying the Supreme Court’s orders since the beginning. In April 2024, X’s global government affairs team began sharing information with the public about what it deemed “illegal” demands from the Supreme Court.

The feud escalated in late August when X’s legal representative in Brazil resigned and Musk refused to name a new legal representation – a move that was interpreted by Moraes as an attempt to evade the law. In response, Moraes ordered the platform’s ban on Aug. 31, 2024.

The move was accompanied by heavy penalties for Brazilians attempting to circumvent the ban. Anyone using virtual private networks, or VPNs, to access X faces daily fines of nearly US$9,000 – more than the average annual income of many Brazilians. Those decisions were confirmed by a panel consisting of five Supreme Court justices on Sept. 2, 2024. Amid criticism of judicial overreach, however, the full court of 11 justices will discuss and potentially revisit this part of Moraes’ decision.

… then turns political

The X v. Brazil Supreme Court fight has become deeply politicized. On Sept. 7, thousands of Bolsonaro supporters took part in a “pro-free speech” protest. Lula’s administration and the Supreme Court have become targets, with the opposition and right-wing factions framing the platform’s suspension as a symbol of state overreach.

The rhetoric contrasts sharply with the more balanced, deliberative efforts to regulate platforms that began over a decade ago with the Marco Civil da Internet. It also highlights the challenge of finding a balance between free speech and combating disinformation in a deeply polarized environment – an issue that Brazil is far from alone in grappling with.

The political heat surrounding the banning of X doesn’t bode well for Brazil’s ongoing efforts to counter online disinformation and hold platforms accountable for harmful content.

A “fake news bill,” as it has been dubbed by Brazilian media, was introduced in the country’s Congress in 2020. It seeks to create oversight mechanisms and increase transparency around political advertising and content moderation policies.

But despite its good intentions and a very light “regulated self-regulation” approach, the last version of the draft bill was blocked in the Brazilian Congress after three years of debate.

It follows a campaign by right-wing politicians and Big Tech lobbyists who labeled the legislation a “censorship bill,” arguing that it would infringe on free speech and stifle political discourse. As of now, the fate of the bill looks uncertain.

Meanwhile, on Aug. 23, the Supreme Court announced that it will look at two key parts of the Marco Civil as part of a judicial review taking place in November.

The first is the “judicial notice and takedown” process that critics complain is too slow and allows platforms to choose not to adopt more robust content moderation mechanisms. Supporters, however, maintain that judicial oversight is necessary to prevent platforms from arbitrarily removing content, which could lead to censorship.

The second area under review is the part of the Marco Civil that outlines the penalties for companies that fail to follow the rules. The debate centers on whether the current penalties, particularly service suspensions, are proportionate and constitutional. Critics argue that suspending an entire platform violates users’ rights to free speech and access to information, while proponents insist that it is a necessary tool to ensure compliance with Brazilian law and safeguard sovereignty.

The fate of both the “fake news bill” and the Supreme Court’s review could set in place new legal standards for platforms in Brazil and determine how far the country can go in enforcing its laws against global tech companies as it seeks to battle disinformation.

And while the Supreme Court did not directly link the review to the ongoing feud with X, the fight with Musk forms the unavoidable political backdrop to discussions over the future direction of Brazil’s experiment in platform regulation. As such, the fallout of this seemingly personal spat could have major regulatory consequences for Brazil and potentially other countries.

This article is republished from The Conversation, a nonprofit, independent news organization bringing you facts and trustworthy analysis to help you make sense of our complex world. It was written by: Yasmin Curzi de Mendonça, University of Virginia


Yasmin Curzi de Mendonça does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.
