AI (Artificial Intelligence) Archives - The Cincinnati Herald
https://thecincinnatiherald.com/tag/ai-artificial-intelligence/
The Herald is Cincinnati and Southwest Ohio's leading source for Black news, offering health, entertainment, politics, sports, community and breaking news.

Balanced approach to AI governance: A path for innovation, accountability
https://thecincinnatiherald.com/2025/03/08/ai-governance-model/ (March 8, 2025)

AI innovation and governance can coexist. The key is combining public-private partnerships, market audits and accountability.


By Paulo Carvão, Harvard Kennedy School

Imagine a not-too-distant future where you let an intelligent robot manage your finances. It knows everything about you. It follows your moves, analyzes markets, adapts to your goals and invests faster and smarter than you can. Your investments soar. But then one day, you wake up to a nightmare: Your savings have been transferred to a rogue state, and they’re gone.

You seek remedies and justice but find none. Who’s to blame? The robot’s developer? The artificial intelligence company behind the robot’s “brain”? The bank that approved the transactions? Lawsuits fly, fingers point, and your lawyer searches for precedents, but finds none. Meanwhile, you’ve lost everything.

This is not the doomsday scenario of human extinction that some people in the AI field have warned could arise from the technology. It is a more realistic one and, in some cases, already present. AI systems are already making life-altering decisions for many people, in areas ranging from education to hiring and law enforcement. Health insurance companies have used AI tools to determine whether to cover patients’ medical procedures. People have been arrested based on faulty matches by facial recognition algorithms.

By bringing government and industry together to develop policy solutions, it is possible to reduce these risks and future ones. I am a former IBM executive with decades of experience in digital transformation and AI. I now focus on tech policy as a senior fellow at Harvard Kennedy School’s Mossavar-Rahmani Center for Business and Government. I also advise tech startups and invest in venture capital.

Drawing from this experience, my team spent a year researching a way forward for AI governance. We conducted interviews with 49 tech industry leaders and members of Congress, and analyzed 150 AI-related bills introduced in the last session of Congress. We used this data to develop a model for AI governance that fosters innovation while also offering protections against harms, like a rogue AI draining your life savings.

Striking a balance

The increasing use of AI in all aspects of people’s lives raises a new set of questions to which history has few answers. At the same time, the urgency to address how it should be governed is growing. Policymakers appear to be paralyzed, debating whether to let innovation flourish without controls or risk slowing progress. However, I believe that the binary choice between regulation and innovation is a false one.

Instead, it’s possible to chart a different approach that can help guide innovation in a direction that adheres to existing laws and societal norms without stifling creativity, competition and entrepreneurship.

Bloomberg Intelligence analyst Tamlin Bason explains the regulatory landscape and the need for a balanced approach to AI governance.

The U.S. has consistently demonstrated its ability to drive economic growth. The American tech innovation system is rooted in entrepreneurial spirit, public and private investment, an open market and legal protections for intellectual property and trade secrets. From the early days of the Industrial Revolution to the rise of the internet and modern digital technologies, the U.S. has maintained its leadership by balancing economic incentives with strategic policy interventions.

In January 2025, President Donald Trump issued an executive order calling for the development of an AI action plan for America. My team and I have developed an AI governance model that can underpin an action plan.

A new governance model

Previous presidential administrations have waded into AI governance, including the Biden administration’s since-rescinded executive order. A growing number of AI regulations have also been passed at the state level. But the U.S. has mostly avoided imposing regulations on AI. This hands-off approach stems in part from a disconnect between Congress and industry, with each doubting the other’s understanding of the technologies requiring governance.

The industry is divided into distinct camps, with smaller companies allowing tech giants to lead governance discussions. Other contributing factors include ideological resistance to regulation, geopolitical concerns and insufficient coalition-building that have marked past technology policymaking efforts. Yet, our study showed that both parties in Congress favor a uniquely American approach to governance.

Congress agrees on extending American leadership, addressing AI’s infrastructure needs and focusing on specific uses of the technology – instead of trying to regulate the technology itself. How to do it? My team’s findings led us to develop the Dynamic Governance Model, a policy-agnostic and nonregulatory method that can be applied to different industries and uses of the technology. It starts with a legislative or executive body setting a policy goal and consists of three subsequent steps:

  1. Establish a public-private partnership in which public and private sector experts work together to identify standards for evaluating the policy goal. This approach combines industry leaders’ technical expertise and innovation focus with policymakers’ agenda of protecting the public interest through oversight and accountability. By integrating these complementary roles, governance can evolve together with technological developments.
  2. Create an ecosystem for audit and compliance mechanisms. This market-based approach builds on the standards from the previous step and executes technical audits and compliance reviews. Setting voluntary standards and measuring against them is good, but it can fall short without real oversight. Private sector auditing firms can provide oversight so long as those auditors meet fixed ethical and professional standards.
  3. Set up accountability and liability for AI systems. This step outlines the responsibilities that a company must bear if its products harm people or fail to meet standards. Effective enforcement requires coordinated efforts across institutions. Congress can establish legislative foundations, including liability criteria and sector-specific regulations. It can also create mechanisms for ongoing oversight or rely on existing government agencies for enforcement. Courts will interpret statutes and resolve conflicts, setting precedents. Judicial rulings will clarify ambiguous areas and contribute to a sturdier framework.

Benefits of balance

I believe that this approach offers a balanced path forward, fostering public trust while allowing innovation to thrive. In contrast to conventional regulatory methods that impose blanket restrictions on industry, like the one adopted by the European Union, our model:

  • is incremental, integrating learning at each step.
  • draws on the existing approaches used in the U.S. for driving public policy, such as competition law, existing regulations and civil litigation.
  • can contribute to the development of new laws without imposing excessive burdens on companies.
  • draws on past voluntary commitments and industry standards, and encourages trust between the public and private sectors.

The U.S. has long led the world in technological growth and innovation. Pursuing a public-private partnership approach to AI governance should enable policymakers and industry leaders to advance their goals while balancing innovation with transparency and responsibility. We believe that our governance model is aligned with the Trump administration’s goal of removing barriers for industry but also supports the public’s desire for guardrails.

This article is republished from The Conversation, a nonprofit, independent news organization bringing you facts and trustworthy analysis to help you make sense of our complex world. It was written by: Paulo Carvão, Harvard Kennedy School


Carvão advises tech startups and invests in venture capital.

Feature Image: One of President Donald Trump’s first executive orders in his second term called for developing an AI action plan. Photo by Anna Moneymaker/Getty Images

Generative AI: The double-edged sword for creative control
https://thecincinnatiherald.com/2025/02/26/generative-ai-the-double-edged-sword-for-creative-control/ (February 26, 2025)

The technology is always on call and is quite competent. But it lacks the contemplation and attention to detail that yield great works of art.


By John P. Nelson, Georgia Institute of Technology

Generative AI tools such as ChatGPT and Midjourney can produce text, images and videos far more quickly than any one person can accomplish by hand.

But as someone who studies the societal impacts of AI, I’ve noticed an interesting trade-off: The technology can certainly save time, but it does so precisely to the extent that the user is willing to surrender control over the final product.

For this reason, generative AI is probably most useful for things we care about the least.

Ceding creative control

Let’s use the example of AI image generators. You probably have a rough idea of how they work. Just type what you want – “a panda surfing,” “a piece of toast that is also a car” – and the generative tool draws it.

But this glosses over the countless possible iterations of the desired image.

Will the image appear as a watercolor painting or a pencil sketch? How lifelike will the panda be? How big is the wave? Is the toast-car parked or moving? Is there anyone inside of it?

When the images are generated, these questions have been answered – but not by the user. Rather, the generative AI tool has “decided.”

Of course, the user can be more specific: Imitate the style of Monet. Make the wave twice the height of the panda. Maybe the panda should look worried, since it isn’t used to surfing.

You can also pop open an image editor and modify the output yourself, down to the individual pixel. But, of course, drafting detailed instructions and revising the image take time, effort and skill. Generative AI promises to lighten the load. But as every manager knows, exercising control is work.

The devil is in the details

In all art and expression, power lies in the details.

In great paintings, not every brushstroke is planned – but each is carefully considered and accepted. And its overall effect on the viewer depends on all those considered brushstrokes together.

Filmmakers shoot take after take of the same scene, each subtly or radically different. Only a small fraction of that footage makes it into the final cut – the fraction that the editors feel does the job best. Great artists use their judgment to ensure every detail helps to achieve the effect they want.

Of course, there’s nothing new about putting someone else in charge of the details. People are used to delegating authority – even about matters of expression – to marketers, speechwriters, social media managers and the like.

Generative AI makes a new sort of contractor available. It’s always on call, and in certain ways it is very technically competent.

But compared with skilled humans, it has a limited ability to understand what you want. Moreover, it lacks intention, contemplation and the comprehensive mastery of detail that yield great expressive achievements – or even the thoroughgoing idiosyncrasy that spawns truly singular ones.

Ask ChatGPT for a film script, plus casting and shooting instructions. It will give you neither Francis Ford Coppola’s masterpiece “The Godfather” nor Tommy Wiseau’s bizarre “The Room.”

You could, perhaps, approach a masterpiece, or a true oddity. But to do so, you’d have to invest more and more time and effort, and exercise more and more control.

An era of ‘cheap speech’

What generative AI makes possible, above all, is low-effort, low-control expression.

In the time I took to write and revise this article, I could have used ChatGPT to generate 200 grammatically correct, well-structured articles, and then I could have posted them online without even reading them. I wouldn’t have had to carefully parse each word and decide whether it really helped me make my point. I wouldn’t have even had to decide whether I agreed with any of the AI-generated write-ups.

This is not a merely hypothetical example. Low-quality, AI-generated e-books of ambiguous provenance are already making their way into online vendors’ catalogs – and into the libraries those vendors serve.

Similarly, using image generators, I could now flood the internet with superficially appealing images, dedicating only a fraction of a second to decide whether any of them express what I want them to express or achieve what I want them to achieve.

But in doing so, I would not just be skipping over drudgery. Writing, drawing and painting are not just labor but processes of considering, reviewing and deciding exactly what I want to put out into the world. By skipping over those processes, I surrender the decision-making to the AI tool.

Some scholars argue that the internet has produced an era of “cheap speech.” People no longer have to invest a lot of resources – nor even face the judgment of their neighbors – to broadcast whatever they want to the world.

With generative AI, expression is even cheaper. You don’t even have to make things yourself to put them out into the world. For the first time in human history, the ability to produce writing, art and expression has been decoupled from the necessity of actually paying attention to what you’re making or saying.

Generative AI allows you to blow through the thousands of little decisions that go into a work of art. Illustration by C.J. Burton/The Image Bank via Getty Images

When intention and effort matter

I suspect that great art, journalism and scholarship will still demand great attention and effort. Some of that effort may even include custom-developing AI tools tailored to an individual artist’s concerns.

But unless people become much better at curation, great work will be increasingly difficult to locate amid the flood of low-effort content, which is also known as “AI slop.”

It’s appropriate that generative AI becomes more useful the sloppier its users are willing to be – that is, the less they care about the details.

I could end with some dire prognosis – that working artists and writers will be replaced with mediocre automation, that online discourse will get even stupider, that people will isolate themselves in personalized cocoons of AI-generated media.

All these things are possible. But it’s probably more useful to offer a suggestion to you, the reader.

When you need an image or a piece of writing, take a moment to decide: How important are the details? Would the process of making this yourself, or working with a collaborator or contractor, be useful? Would it yield a better output, give you the chance to learn, begin or strengthen a relationship, or help you reflect on something important to you?

In short, is it worth putting in real care and effort? The answer will not always be yes. But it often will.

Art, writing, films – these are not just products, but acts. They are things humans make, through a process of thousands of little decisions that encompass what we stand for and what we want to say.

So when it comes to art, expression and argument, if you want it done right, it’s probably still best to do it yourself.

This article is republished from The Conversation, a nonprofit, independent news organization bringing you facts and trustworthy analysis to help you make sense of our complex world. It was written by: John P. Nelson, Georgia Institute of Technology


John P. Nelson does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

Feature Image: The creative process involves choices that lead artists to places they couldn’t have imagined. Eoneren/E+ via Getty Images

We are enough: Writing, teaching and owning our history
https://thecincinnatiherald.com/2025/01/30/we-are-enough-writing-teaching-and-owning-our-history/ (January 30, 2025)


As a child, I was naturally curious, always asking, “Why?” and “How?” I think back to my third-grade teacher, Mrs. Gregory, at Winton Terrace, who taught me to seek answers through the five W’s and H: who, what, when, where, why and how. But my first real lesson in curiosity came from my mother, Alice, who often said, “Be careful what you wish for, you just might get it.” That lesson came after one of my famous foot stomps and whispered rebellions: “I can’t wait until I’m grown and move out.”

One day, a door-to-door salesman knocked on our door, selling encyclopedias. I begged my mother to buy a set. Eventually, after much pleading, she did. Those books became my first portal to the world. I spent hours flipping through pages, absorbing stories and histories — but even then, I knew there were gaps.

Fast forward to today, and my journey for knowledge continues — from encyclopedias to modern tools like Statista for data analysis and AI tools like ChatGPT. As technology evolves, I have grown increasingly curious about how AI handles historical information, especially Black cultural narratives, and what I have discovered is concerning.

There’s a problem with AI and history. Recently, I tested ChatGPT’s Consensus feature, which pulls from peer-reviewed studies. My question was historical rather than scientific: “In the 19th century, Samuel George Morton published Crania Americana, claiming that brain size determines intelligence. How did his work impact society?” Morton’s “findings” were used to justify slavery and racial hierarchies. His work was a lie, disguised as science, reinforcing systemic oppression.

What’s troubling is that AI tools are perpetuating these same biases.

As 2025 began, I realized we are at risk of losing more than we have gained in this digital age. If we do not take control of our stories now, algorithms will replicate the very biases that sought to silence us in the past, reminiscent of the Reconstruction Era, Jim Crow laws and countless other moments of cultural erasure throughout U.S. history.

The danger lies in algorithmic bias. AI systems are trained on data riddled with historical gaps and prejudice. They absorb societal norms that elevate certain stories while erasing others. Black voices already marginalized in traditional archives face an even greater threat of digital invisibility. It is unsettling to think that as book bans spread across the country, AI could amplify this suppression. What happens when these systems recommend content? Whose stories get prioritized? Without active intervention, AI risks becoming a gatekeeper of knowledge, reinforcing exclusion instead of dismantling it.

1955 Lockland Wayne Team Row 1: Roland Bolds, Earl Fredricks, Dennie Ballew, Virgil Thompson, Alton Smith, Coach Joe Martin. Row 2: George Lewis, Taylor Penn, Joseph Martin, Richard Ellison, Clifford Ralls, Richard Lewis. Row 3: G. Henderson, Lloyd Johnson, Jim Johnson, Leroy Cauthen, Albert Seay. Photo from Wikipedia: Lockland Wayne High School

Here’s a real example of digital erasure. I put my theory to the test with a simple question: “Who was the first all-Black high school boys’ basketball team to win a state championship?” Within seconds, ChatGPT responded: “Crispus Attucks High School in Indiana, 1955.”

It was not wrong, but it was not complete. The system omitted Lockland Wayne High School in Ohio, a significant piece of Black history that I personally helped document. In 2016, my close family friend Albert Seay, a proud graduate of Lockland Wayne High School, asked me to help document its basketball team’s story for Wikipedia. The research took 12 months of trial and error, persistence and countless edits. But the result was a Wikipedia entry that told the story of the first all-Black high school team to win a state championship, in 1952, three years before Crispus Attucks.

This achievement was monumental. It happened during segregation, at a time when Black athletes faced enormous barriers. A key figure in the story was Coach Joe Martin, whose leadership guided Lockland Wayne to historic victories. Coach Martin later became an assistant coach at the University of Cincinnati, where he helped shape the city’s basketball legacy.

Yet when I asked ChatGPT, Lockland Wayne’s legacy and Coach Martin’s contributions were missing. It makes me wonder what else is missing. What other stories are being erased, distorted or overlooked?

AI Bias: A Digital Warning

When I pressed ChatGPT further, it apologized and confirmed that Lockland Wayne’s victories were historically accurate, but the damage was done. Imagine your child preparing a Black history presentation using AI as their primary source. What critical stories will be left out?

One of my mother’s greatest lessons was, “You can never go back.” The omission of Lockland Wayne shows what happens when we do not actively protect our stories. AI tools are only as good as the data they are fed, and right now, that data is incomplete and biased. When others control our narrative, they distort it.

Preserving our history is wealth building. Some might ask, “Why focus on history when we need to build wealth?” To that, I say: “We cannot build wealth if we do not own our stories.” Our history is part of our cultural capital. The stories we tell shape how we see ourselves, how others see us and how we move in the world. Preserving our history is preserving intellectual property, and ownership of that narrative is essential to building generational wealth.

Think about how other communities build wealth through media, education and culture. Hollywood, publishing, museums and universities all profit from storytelling. Who owns the archives? Who controls the images of our ancestors, our movements and our contributions? If we do not claim it, others will.

Wealth is not just about money. It is about power, influence and legacy. Protecting our history means ensuring future generations see themselves reflected with dignity and pride.

When we own our stories, we create the industries (books, films, tech, education) that keep wealth circulating within our communities. So yes, we must build wealth. But if we lose control of our history, we lose control of the narrative that underpins all wealth-building efforts.

Our 2025 Playbook: Reclaim, Protect, Pass Down

The 2025 playbook outlined in “The Mandate for Leadership: The Conservative Promise” is a 920-page agenda aimed at rolling back Civil Rights protections, suppressing Black history and silencing our contributions. It is not just about AI bias — it is about political forces actively working to dismantle our progress.

But we have our own playbook. We are the Griots, the keepers of stories. Like the Indigenous people of the Americas, we cannot rely on mainstream narratives or AI tools to tell our stories. We must do it ourselves. For centuries, Indigenous communities have preserved their histories through oral traditions, songs and art. We must do the same. Our stories are living archives of truth, resilience and identity. They endure — even when the world tries to erase them.

What we must do now to reclaim our stories: support Black museums and archives, donate to Black history initiatives, make Black history part of everyday life — not just a once-a-year celebration — and teach our children to question AI outputs and dig deeper.

Because we are enough. Our stories are enough. We do not need validation from AI to know our worth, but we do have a responsibility to ensure our history is preserved, celebrated and passed down to future generations.

Together, we can say with pride: ‘We Are Enough,’ and build our wealth. In this digital age, storytelling is resistance. If we do not take control of our narratives now, future generations will inherit stories written by those who never lived our experiences. Let us ensure that does not happen. We are more than data points. We are living history. Let us write a 2025 Playbook that ensures our legacy endures.

AI misunderstands some people’s words more than others
https://thecincinnatiherald.com/2025/01/27/ai-misunderstands-some-peoples-words-more-than-others/ (January 27, 2025)

Speaking with an AI bot can be amusing and even helpful – if it understands you. How well AIs do that is a matter of whose speech they’ve been trained on.


By Roberto Rey Agudo, Dartmouth College

The idea of a humanlike artificial intelligence assistant that you can speak with has been alive in many people’s imaginations since the release of “Her,” Spike Jonze’s 2013 film about a man who falls in love with a Siri-like AI named Samantha. Over the course of the film, the protagonist grapples with the ways in which Samantha, real as she may seem, is not and never will be human.

Twelve years on, this is no longer the stuff of science fiction. Generative AI tools like ChatGPT and digital assistants like Apple’s Siri and Amazon’s Alexa help people get driving directions, make grocery lists, and plenty else. But just like Samantha, automatic speech recognition systems still cannot do everything that a human listener can.

You have probably had the frustrating experience of calling your bank or utility company and needing to repeat yourself so that the digital customer service bot on the other line can understand you. Maybe you’ve dictated a note on your phone, only to spend time editing garbled words.

Linguistics and computer science researchers have shown that these systems work worse for some people than for others. They tend to make more errors if you have a non-native or regional accent, are Black, speak African American Vernacular English, code-switch, are a woman, are old or very young, or have a speech impediment.

Tin ear

Unlike you or me, automatic speech recognition systems are not what researchers call “sympathetic listeners.” Instead of trying to understand you by taking in other useful clues like intonation or facial gestures, they simply give up. Or they take a probabilistic guess, a move that can sometimes result in an error.

As companies and public agencies increasingly adopt automatic speech recognition tools in order to cut costs, people have little choice but to interact with them. But the more that these systems come into use in critical fields, ranging from emergency first responders and health care to education and law enforcement, the more likely there will be grave consequences when they fail to recognize what people say.

Imagine sometime in the near future you’ve been hurt in a car crash. You dial 911 to call for help, but instead of being connected to a human dispatcher, you get a bot that’s designed to weed out nonemergency calls. It takes you several rounds to be understood, wasting time and raising your anxiety level at the worst moment.

What causes this kind of error to occur? Some of the inequalities that result from these systems are baked into the reams of linguistic data that developers use to build large language models. Developers train artificial intelligence systems to understand and mimic human language by feeding them vast quantities of text and audio files containing real human speech. But whose speech are they feeding them?

If a system scores high accuracy rates when speaking with affluent white Americans in their mid-30s, it is reasonable to guess that it was trained using plenty of audio recordings of people who fit this profile.

With rigorous data collection from a diverse range of sources, AI developers could reduce these errors. But building AI systems that can understand the infinite variations in human speech arising from things like gender, age, race, first vs. second language, socioeconomic status, ability and plenty else requires significant resources and time.

‘Proper’ English

For people who do not speak English – which is to say, most people around the world – the challenges are even greater. Most of the world’s largest generative AI systems were built in English, and they work far better in English than in any other language. On paper, AI has lots of civic potential for translation and increasing people’s access to information in different languages, but for now, most languages have a smaller digital footprint, making it difficult for them to power large language models.

Even within languages well-served by large language models, like English and Spanish, your experience varies depending on which dialect of the language you speak.

Right now, most speech recognition systems and generative AI chatbots reflect the linguistic biases of the datasets they are trained on. They echo prescriptive, sometimes prejudiced notions of “correctness” in speech.

In fact, AI has been shown to “flatten” linguistic diversity. There are now AI startups that offer to erase the accents of their users, on the assumption that their primary clientele is customer service providers with call centers in countries like India or the Philippines. The offering perpetuates the notion that some accents are less valid than others.

Human connection

AI will presumably get better at processing language, accounting for variables like accents, code-switching and the like. In the U.S., public services are obligated under federal law to guarantee equitable access to services regardless of what language a person speaks. But it is not clear whether that alone will be enough incentive for the tech industry to move toward eliminating linguistic inequities.

Many people might prefer to talk to a real person when asking questions about a bill or medical issue, or at least to have the ability to opt out of interacting with automated systems when seeking key services. That is not to say that miscommunication never happens in interpersonal communication, but when you speak to a real person, they are primed to be a sympathetic listener.

With AI, at least for now, it either works or it doesn’t. If the system can process what you say, you are good to go. If it cannot, the onus is on you to make yourself understood.

This article is republished from The Conversation, a nonprofit, independent news organization bringing you facts and trustworthy analysis to help you make sense of our complex world. It was written by: Roberto Rey Agudo, Dartmouth College


Roberto Rey Agudo does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

Feature Image: Speech recognition systems are less accurate for women and Black people, among other demographics. Jacob Wackerhausen/iStock via Getty Images

Tech law in 2025: A look ahead at policies under Trump
https://thecincinnatiherald.com/2025/01/10/tech-law-in-2025-a-look-ahead-at-policies-under-trump/ (January 10, 2025)

The Trump administration has different interests and priorities than those of the Biden administration for regulating technology. For some issues like AI regulation, big changes are on tap.


By Sylvia Lu, University of Michigan

Artificial intelligence harms, problematic social media content, data privacy violations – the issues are the same, but the policymakers and regulators who deal with them are about to change.

As the federal government transitions to a new term under the renewed leadership of Donald Trump, the regulatory landscape for technology in the United States faces a significant shift.

The Trump administration’s stated approach to these issues signals changes. It is likely to move away from the civil rights aspect of Biden administration policy toward an emphasis on innovation and economic competitiveness. While some potential policies would pull back on stringent federal regulations, others suggest new approaches to content moderation and ways of supporting AI-related business practices. They also suggest avenues for state legislation.

I study the intersection of law and technology. Here are the key tech law issues likely to shape the incoming administration’s agenda in 2025.

AI regulation: innovation vs. civil rights

The rapid evolution of AI technologies has led to an expansion of AI policies and regulatory activities, presenting both opportunities and challenges. The federal government’s approach to AI regulation is likely to undergo notable changes under the incoming Trump administration.

The Biden administration’s AI Bill of Rights and executive order on AI established basic principles and guardrails to protect safety, privacy and civil rights. These included requirements for developers of powerful AI systems to report safety test results, and a mandate for the National Institute of Standards and Technology to create rigorous safety standards. They also required government agencies to use AI in responsible ways.

Unlike the Biden era, the Trump administration’s deregulatory approach suggests a different direction. The president-elect has signaled his intention to repeal Biden’s executive order on AI, citing the need to foster free speech. Trump’s nominee to head the Federal Trade Commission, Andrew Ferguson, has echoed this sentiment. He has stated his opposition to restrictive AI regulations and the adoption of a comprehensive federal AI law.

AI policy experts discuss likely changes in federal regulation of technology in the Trump administration.

With limited prospects for federal AI legislation under the Trump administration, states are likely to lead the charge in addressing emerging AI harms. In 2024, at least 45 states introduced AI-related bills. For example, Colorado passed comprehensive legislation to address algorithmic discrimination. In 2025, state lawmakers may either follow Colorado’s example by enacting broad AI regulations or focus on targeted laws for specific applications, such as automated decision-making, deepfakes, facial recognition and AI chatbots.

Data privacy: federal or state leadership?

Data privacy remains a key area of focus for policymakers, and 2025 is a critical year to see whether Congress will enact a federal privacy law. The proposed American Privacy Rights Act, introduced in 2024, represents a bipartisan effort to create a comprehensive federal privacy framework. The bill includes provisions for preempting state laws and allowing private rights of action, meaning individuals could sue over alleged violations. The bill aims to simplify compliance and reduce the patchwork of state regulations.

These issues are likely to spark key debates in the year ahead. Lawmakers are also likely to wrestle with balancing regulatory burdens on smaller businesses with the need for comprehensive privacy protections.

In the absence of federal action, states may continue to dominate privacy regulation. Since California passed the California Consumer Privacy Act in 2018, 19 states have passed comprehensive privacy laws. Recent state privacy laws have differing scopes, rights and obligations, which creates a fragmented regulatory environment. In 2024, key issues included defining sensitive data, protecting minors’ privacy, incorporating data minimization principles, and addressing compliance challenges for medium and small businesses.

At the federal level in 2024, the Biden administration issued an executive order authorizing the U.S. attorney general to restrict cross-border data transfers to protect national security. These efforts may continue in the new administration.

Cybersecurity, health privacy and online safety

States have become key players in strengthening cybersecurity protections, with roughly 30 states requiring businesses to adhere to cybersecurity standards. The California Privacy Protection Agency Board, for example, has proposed rulemaking on cybersecurity audits, data protection risk assessments and automated decision-making.

Meanwhile, there is a growing trend toward strengthening health data privacy and protecting children online. Washington state and Nevada, for example, have adopted laws that expand the protection of health data beyond the scope of the federal Health Insurance Portability and Accountability Act.

Numerous states, such as California, Colorado, Utah and Virginia, have recently expanded protections for young users’ data. In the absence of federal regulation, state governments are likely to continue leading efforts to address pressing privacy and cybersecurity concerns in 2025.

Social media and Section 230

Online platform regulation has been a contentious issue under both the Biden and Trump administrations. There are federal efforts to reform Section 230, which shields online platforms from liability for user-generated content, and federal- and state-level efforts to address misinformation and hate speech.

While Trump’s previous administration criticized Section 230 for allegedly enabling censorship of conservative voices, the Biden administration focused on increasing transparency and accountability for companies that fail to remove concerning content.

Section 230 explained.

With Trump coming back to office, Congress is likely to consider proposals to prohibit certain forms of content moderation in the name of free speech protections.

On the other hand, states like California and Connecticut have recently passed legislation requiring platforms to disclose information about hate speech and misinformation. Some existing state laws regulating online platforms are facing U.S. Supreme Court challenges on First Amendment grounds.

In 2025, debates are likely to continue on how to balance platform neutrality with accountability at both federal and state levels.

Changes in the wind

Overall, while federal efforts on issues like Section 230 reform and children’s online protection may advance, federal-level AI regulation and data privacy laws could potentially slow down due to the administration’s deregulatory stance. Whether long-standing legislative efforts like federal data privacy protection materialize will depend on the balance of power between Congress, the courts and the incoming administration.

This article is republished from The Conversation, a nonprofit, independent news organization bringing you facts and trustworthy analysis to help you make sense of our complex world. It was written by: Sylvia Lu, University of Michigan


Sylvia Lu does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

Feature Image: The incoming Trump administration is poised to shake up tech regulation. Adam Gray/AFP via Getty Images

Language AIs in 2024: Size, guardrails and steps toward AI agents
https://thecincinnatiherald.com/2025/01/02/language-ais-in-2024-size-guardrails-and-steps-toward-ai-agents/ (January 2, 2025)

The rubber met the road for language AIs in 2024. The hard realities led to new, smaller models and safety measures for the big ones. 2024’s R&D also set the stage for the next big thing: AI agents.


By John Licato, University of South Florida

I research the intersection of artificial intelligence, natural language processing and human reasoning as the director of the Advancing Human and Machine Reasoning lab at the University of South Florida. I am also commercializing this research in an AI startup that provides a vulnerability scanner for language models.

From my vantage point, I observed significant developments in the field of AI language models in 2024, both in research and the industry.

Perhaps the most exciting of these are the capabilities of smaller language models, support for addressing AI hallucination, and frameworks for developing AI agents.

Small AIs make a splash

At the heart of commercially available generative AI products like ChatGPT are large language models, or LLMs, which are trained on vast amounts of text and produce convincing humanlike language. Their size is generally measured in parameters, which are the numerical values a model derives from its training data. The larger models like those from the major AI companies have hundreds of billions of parameters.
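As a concrete illustration of what counting parameters means in practice, here is a minimal sketch, not from the article. It assumes the Hugging Face transformers library and PyTorch are installed; “microsoft/phi-2” is just one example of a publicly available small model.

```python
# A minimal sketch: measure a model's "size" by counting its parameters.
# Assumes the Hugging Face "transformers" library and PyTorch are installed;
# "microsoft/phi-2" is one example of a publicly available small model.
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("microsoft/phi-2")

# A model's "size" is simply the total count of these trained numerical values.
num_params = sum(p.numel() for p in model.parameters())
print(f"{num_params / 1e9:.2f} billion parameters")
```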

There is an iterative interaction between large language models and smaller language models, which seems to have accelerated in 2024.

First, organizations with the most computational resources experiment with and train ever larger and more powerful language models. Those yield new large language model capabilities, benchmarks, training sets and training or prompting tricks. In turn, those are used to make smaller language models – in the range of 3 billion parameters or less – which can be run on more affordable computer setups, require less energy and memory to train, and can be fine-tuned with less data.

No surprise, then, that developers have released a host of powerful smaller language models – although the definition of small keeps changing: Phi-3 and Phi-4 from Microsoft, Llama-3.2 1B and 3B, and Qwen2-VL-2B are just a few examples.

These smaller language models can be specialized for more specific tasks, such as rapidly summarizing a set of comments or fact-checking text against a specific reference. They can work with their larger cousins to produce increasingly powerful hybrid systems.

What are small language model AIs – and why would you want one?

Wider access

Increased access to highly capable language models large and small can be a mixed blessing. As there were many consequential elections around the world in 2024, the temptation for the misuse of language models was high.

Language models can give malicious users the ability to generate social media posts and deceptively influence public opinion, a threat that drew a great deal of concern during 2024’s many elections.

And indeed, a robocall faking President Joe Biden’s voice asked New Hampshire Democratic primary voters to stay home. OpenAI had to intervene to disrupt over 20 operations and deceptive networks that tried to use its models for deceptive campaigns. Fake videos and memes were created and shared with the help of AI tools.

Despite the anxiety surrounding AI disinformation, it is not yet clear what effect these efforts actually had on public opinion and the U.S. election. Nevertheless, U.S. states passed a large amount of legislation in 2024 governing the use of AI in elections and campaigns.

Misbehaving bots

Google started including AI overviews in its search results, yielding some results that were hilariously and obviously wrong – unless you enjoy glue in your pizza. However, other results may have been dangerously wrong, such as when it suggested mixing bleach and vinegar to clean your clothes.

Large language models, as they are most commonly implemented, are prone to hallucinations. This means that they can state things that are false or misleading, often with confident language. Even though I and others continually beat the drum about this, 2024 still saw many organizations learning about the dangers of AI hallucination the hard way.

Despite significant testing, a chatbot playing the role of a Catholic priest advocated for baptism via Gatorade. A chatbot advising on New York City laws and regulations incorrectly said it was “legal for an employer to fire a worker who complains about sexual harassment, doesn’t disclose a pregnancy or refuses to cut their dreadlocks.” And OpenAI’s speech-capable model forgot whose turn it was to speak and responded to a human in her own voice.

Fortunately, 2024 also saw new ways to mitigate and live with AI hallucinations. Companies and researchers are developing tools for making sure AI systems follow given rules pre-deployment, as well as environments to evaluate them. So-called guardrail frameworks inspect large language model inputs and outputs in real time, albeit often by using another layer of large language models.
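To make the guardrail idea concrete, here is a minimal sketch of that input-and-output inspection pattern. It is illustrative only: the banned-phrase list and the generate() function are hypothetical stand-ins for a real policy engine and a real model call, and production frameworks often use another language model rather than string matching.

```python
# A minimal sketch of the guardrail pattern described above, not a real
# framework: inspect input and output before either crosses the boundary.
# BANNED and generate() are hypothetical stand-ins.
BANNED = ["mix bleach and vinegar"]  # real rule sets are far richer

def generate(prompt: str) -> str:
    return "stub model reply"  # stands in for an actual LLM call

def guarded_chat(prompt: str) -> str:
    if any(rule in prompt.lower() for rule in BANNED):
        return "Request declined by the input guardrail."
    reply = generate(prompt)
    if any(rule in reply.lower() for rule in BANNED):
        return "Response withheld by the output guardrail."
    return reply

print(guarded_chat("Should I mix bleach and vinegar to clean my clothes?"))
```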

And the conversation on AI regulation accelerated, causing the big players in the large language model space to update their policies on responsibly scaling and harnessing AI.

But although researchers are continually finding ways to reduce hallucinations, in 2024, research convincingly showed that AI hallucinations are always going to exist in some form. It may be a fundamental feature of what happens when an entity has finite computational and information resources. After all, even human beings are known to confidently misremember and state falsehoods from time to time.

The rise of agents

Large language models, particularly those powered by variants of the transformer architecture, are still driving the most significant advances in AI. For example, developers are using large language models to not only create chatbots, but to serve as the basis of AI agents. The term “agentic AI” shot to prominence in 2024, with some pundits even calling it the third wave of AI.

To understand what an AI agent is, think of a chatbot expanded in two ways: First, give it access to tools that provide the ability to take actions. This might be the ability to query an external search engine, book a flight or use a calculator. Second, give it increased autonomy, or the ability to make more decisions on its own.

For example, a travel AI chatbot might be able to perform a search of flights based on what information you give it, but a tool-equipped travel agent might plan out an entire trip itinerary, including finding events, booking reservations and adding them to your calendar.
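In code, that two-part recipe of tools plus autonomy reduces to a loop. The sketch below is illustrative only: model_decide() is a stub standing in for a real language model, and the tool names and trip details are hypothetical.

```python
# A minimal sketch of the agent loop described above, not a real framework.
# model_decide() is a stub standing in for an LLM; the tools are hypothetical.
def search_flights(destination: str) -> str:
    return f"Found a flight to {destination} departing 9:00 a.m."

def add_to_calendar(event: str) -> str:
    return f"Added to calendar: {event}"

TOOLS = {"search_flights": search_flights, "add_to_calendar": add_to_calendar}

def model_decide(goal, history):
    # A real agent would ask a language model which tool to call next;
    # this stub walks a fixed two-step plan so the loop runs as-is.
    plan = [("search_flights", "Lisbon"), ("add_to_calendar", "Flight to Lisbon")]
    return plan[len(history)] if len(history) < len(plan) else None

def run_agent(goal):
    history = []
    # Autonomy: keep acting, without asking the user, until the
    # decision step reports there is nothing left to do.
    while (step := model_decide(goal, history)) is not None:
        tool_name, argument = step
        history.append(TOOLS[tool_name](argument))
    return history

print(run_agent("Plan my trip to Lisbon"))
```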

AI agents can perform multiple steps of a task on their own.

In 2024, new frameworks for developing AI agents emerged. Just to name a few, LangGraph, CrewAI, PhiData and AutoGen/Magentic-One were released or improved in 2024.

Companies are just beginning to adopt AI agents. Frameworks for developing AI agents are new and rapidly evolving. Furthermore, security, privacy and hallucination risks are still a concern.

But global market analysts forecast this to change: 82% of organizations surveyed plan to use agents within 1-3 years, and 25% of all companies currently using generative AI are likely to adopt AI agents in 2025.

This article is republished from The Conversation, a nonprofit, independent news organization bringing you facts and trustworthy analysis to help you make sense of our complex world. It was written by: John Licato, University of South Florida


John Licato is the founder and owner of an AI startup called Actualization AI, LLC. He receives funding from various federal agencies, including the National Science Foundation, Army Research Office, Air Force Office of Scientific Research, and Air Force Research Lab.

Feature Image: 2024 saw smaller models and new guardrails for language AIs. pagadesign/E+ via Getty Images

How to avoid the latest generation of scams this holiday season
https://thecincinnatiherald.com/2024/12/22/how-to-avoid-the-latest-generation-of-scams-this-holiday-season/ (December 22, 2024)

Today’s scams aren’t like yesteryear’s.


By Shaila Rana, Purdue University and Nelly Mulleneaux, Purdue University

Imagine this: Two days before your family holiday party, you get a text about an online order you placed a week ago, saying the package is at your door. It comes with a photo – of someone else’s door. When you click the attached link, it takes you to the online store, where you enter your username and password. Somehow that doesn’t work, even though you answered your security questions.

Frustrated, you call customer service. They tell you not to worry since your package is still on the way. You receive your package a day later and forget all about the earlier hassle. In the end, it was just a mistake.

You are unaware of the terrifying thing happening in the background.

You’ve fallen for a classic package-delivery scam, a form of “smishing,” or SMS phishing. And you’re not alone. One in three Americans has fallen victim to cybercrime, according to a 2023 poll. That’s up from one in four in 2018. As cybersecurity researchers, we want to spread the word to help people protect themselves.

Old-fashioned threats haven’t disappeared – identity thieves still steal wallets, dumpster dive for personal information and skim cards at ATMs – but the internet has made scamming easier than ever.

Digital threats include phishing attacks that use fake emails and websites, data breaches at major companies, malware that steals your information, and unsecured Wi-Fi networks in public places.

A whole new world of scams

Generative AI – which refers to artificial intelligence that generates text, images and other things – has improved dramatically over the past few years. That’s been great for scammers trying to make a buck during the holiday season.

Consider online shopping. In some cases, scammers craft deepfake videos of fake testimonials from satisfied “customers” to trick unsuspecting shoppers. Scam victims can encounter these videos on cloned versions of legitimate sites, social media platforms, messaging apps and forums.

Scammers also generate AI-cloned voices of social media influencers appearing to endorse counterfeit products and create convincing but fraudulent shopping websites populated with AI-generated product photos and reviews. Some scammers use AI to impersonate legitimate brands through personalized phishing emails and fake customer service interactions. Since AI-generated content can appear remarkably authentic, it’s become harder for consumers to distinguish legitimate online stores from sophisticated scam operations.

But it doesn’t stop there. “Family emergency scams” exploit people’s emotional vulnerability through deepfake technology. Scammers use AI to clone the voices of family members, especially children, and then make panic-inducing calls to relatives where they claim to be in serious trouble and need immediate financial help.

Some scammers combine voice deepfakes with AI-generated video clips showing the “loved one” in apparent distress. These manufactured emergency scenarios often involve hospital bills, bail money or ransom demands that must be paid immediately. The scammer may also use AI to impersonate authority figures like doctors, police officers and lawyers to add credibility to the scheme.

Since the voice sounds authentic and the emotional manipulation is intense, even cautious people can be caught off guard and make rushed decisions.

How to protect yourself

Protecting yourself against scams requires a multilayered defense strategy.

When shopping, verify retailers through official websites by checking the URL carefully – it should start with the letters “HTTPS” – and closely examining the site design and its content. Since fake websites often provide fake contact information, checking the “Contact Us” section can be a good idea. Before making purchases from unfamiliar sites, cross-reference the business on legitimate review platforms and verify their physical address.

It’s essential to keep all software updated, including your operating system, browser, apps and antivirus software. Updates often include security patches that fix vulnerabilities hackers could exploit.

For more information on the importance of software updates and how to manage them, check out resources like StaySafeOnline or your device manufacturer’s official website. Regular updates are a crucial step in maintaining a secure online shopping experience.

Make sure you only provide necessary information for purchases – remember, no one needs your Social Security number to sell you a sweater. And keeping an eye on your bank statements will help you catch any unauthorized activity early. It may seem like another chore, and it probably is, but this is the reality of our digital world.

To protect against family emergency scams, establish family verification codes, or a safe word, or security questions that only real family members would know. If you do get a distressed call from loved ones, remain calm and take time to verify the situation by contacting family members directly through known and trusted phone numbers. Educate your relatives about these scams and encourage them to never send money without first confirming the emergency with other family members or authorities through verified channels.

If you discover that your identity has been stolen, time is critical. Your first steps should be to immediately contact your banks and credit card companies, place a fraud alert with the credit bureaus, and file a report with the Federal Trade Commission and your local police.

In the following days, you’ll need to change all passwords, review your credit reports, consider a credit freeze, and document everything. While this process can be overwhelming – and extremely cumbersome – taking quick action can significantly limit the damage.

Staying informed about AI scam tactics through reputable cybersecurity resources is essential, and reporting suspected scams to the relevant authorities protects not only you but others as well. The key takeaway: staying vigilant is critical to defending against these threats.

Awareness helps communities push back against digital threats. More importantly, it's key to understand how today's scams differ from yesteryear's.

Recognizing the signs of a scam can provide a stronger defense this holiday season. And as you hone your ability to spot threats, don't forget to share what you learn with family and friends.

Who knows? You could save someone from becoming a victim.

This article is republished from The Conversation, a nonprofit, independent news organization bringing you facts and trustworthy analysis to help you make sense of our complex world. It was written by: Shaila Rana, Purdue University and Nelly Mulleneaux, Purdue University


The authors do not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and have disclosed no relevant affiliations beyond their academic appointment.

Feature Image: Photo by Alexander Grey on Unsplash

The post How to avoid the latest generation of scams this holiday season appeared first on The Cincinnati Herald .

Teens find social media algorithms reflect their identities https://thecincinnatiherald.com/2024/04/30/teens-social-media-algorithms-identity/ Tue, 30 Apr 2024 19:00:00 +0000

Teens are exposed to algorithmically selected content on social media, which presents a mirror image of themselves, but the risks of algorithms to their self-identity and privacy are often overlooked.


By Nora McDonald, George Mason University

Teens say ‘for you’ algorithms get them right. Photo illustration by Spencer Platt/Getty Images

Social media apps regularly present teens with algorithmically selected content often described as “for you,” suggesting, by implication, that the curated content is not just “for you” but also “about you” – a mirror reflecting important signals about the person you are.

All users of social media are exposed to these signals, but researchers understand that teens are at an especially malleable stage in the formation of personal identity. Scholars have begun to demonstrate that technology is having generation-shaping effects, not merely in the way it influences cultural outlook, behavior and privacy, but also in the way it can shape personality among those brought up on social media.

The prevalence of the “for you” message raises important questions about the impact of these algorithms on how teens perceive themselves and see the world, and the subtle erosion of their privacy, which they accept in exchange for this view.

Teens like their algorithmic reflection

Inspired by these questions, my colleagues John Seberger and Afsaneh Razi of Drexel University and I asked: How are teens navigating this algorithmically generated milieu, and how do they recognize themselves in the mirror it presents?

In our qualitative interview study of teens 13-17, we found that personalized algorithmic content does seem to present what teens interpret as a reliable mirror image of themselves, and that they very much like the experience of seeing that social media reflection.

Teens we spoke with say they prefer a social media experience completely customized for them, depicting what they agree with, what they want to see and, thus, who they are. As one teen told us:

“If I look up something that is important to me that will show up as one of the top posts [and] it’ll show, like, people [like me] that are having a nice discussion.”

It turns out that the teens we interviewed believe social media algorithms like TikTok’s have gotten so good that they see the reflections of themselves in social media as quite accurate. So much so that teens are quick to dismiss content inconsistent with their self-image as an anomaly – for instance, the result of inadvertent engagement with past content, or just a glitch. As one participant put it:

“At some point I saw something about that show, maybe on TikTok, and I interacted with it without actually realizing.”

When personalized content is not agreeable or consistent with their self-image, the teens we interviewed say they scroll past it, hoping never to see it again. Even when these perceived anomalies take the form of extreme hypermasculine or “nasty” content, teens do not attribute this to anything about themselves specifically, nor do they claim to look for an explanation in their own behaviors. According to teens in our interviews, the social media mirror does not make them more self-reflective or challenge their sense of self.

One thing that surprised us was that while teens were aware that what they see in their “for you” feed is the product of their scrolling habits on social media platforms, they were largely unaware or unconcerned that data captured across apps contributes to this self-image. Regardless, they don’t see their “for you” feed as a challenge to their sense of self, much less a risk to their self-identity – nor, for that matter, any basis for concern at all.

Image: The human brain continues to develop during adolescence.

Shaping identity

Research on identity has come a long way since sociologist Erving Goffman proposed the “presentation of self” in 1959. He posited that people manage their identities through social performance to maintain equilibrium between who they think they are and how others perceive them.

When Goffman first proposed his theory, there was no social media interface available to hold up a handy mirror of the self as experienced by others. People were obligated to create their own mosaic image, derived from multiple sources, encounters and impressions. In recent years, social media recommender algorithms have inserted themselves into what is now a three-way negotiation among self, public and social media algorithm.

“For you” offerings create a private-public space through which teens can access what they feel is a largely accurate test of their self-image. At the same time, they say they can easily ignore it if it seems to disagree with that self-image.

The pact teens make with social media, exchanging personal data and relinquishing privacy to secure access to that algorithmic mirror, feels to them like a good bargain. They represent themselves as confidently able to tune out or scroll past recommended content that seems to contradict their sense of self, but research shows otherwise.

They have, in fact, proven themselves highly vulnerable to self-image distortion and other mental health problems driven by social media algorithms explicitly designed to create and reward hypersensitivities, fixations and dysmorphia – a disorder in which people fixate on perceived flaws in their appearance.

Given what researchers know about the teen brain and that stage of social development – and given what can reasonably be surmised about the malleability of self-image based on social feedback – teens are wrong to believe that they can scroll past the self-identity risks of algorithms.

Video: U.S. Surgeon General Vivek Murthy discusses the harms teens face from social media.

Interventions

Part of the remedy could be to build new tools using artificial intelligence to detect unsafe interactions while also protecting privacy. Another approach is to help teens reflect on these “data doubles” that they have constructed.

My colleagues and I are now exploring more deeply how teens experience algorithmic content and what types of interventions can help them reflect on it. We encourage researchers in our field to design ways to challenge the accuracy of algorithms and expose them as reflections of behavior rather than of being. Another part of the remedy may involve arming teens with tools to restrict access to their data, including limiting cookies, maintaining separate search profiles and turning off location services when using certain apps.

We believe that these are all steps that are likely to reduce the accuracy of algorithms, creating much-needed friction between algorithm and self, even if teens are not necessarily happy with the results.
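To illustrate the point that a “for you” feed reflects logged behavior rather than identity, here is a deliberately oversimplified Python sketch. It is not any platform’s actual algorithm – real recommender systems are vastly more complex – and the topics and posts are made up for illustration:

```python
from collections import Counter

# Toy illustration only: each logged interaction is a topic the user
# engaged with; the last one was a single stray click.
interaction_log = ["music", "music", "basketball", "music", "true-crime"]

# Hypothetical catalog mapping topics to available posts.
catalog = {
    "music": ["concert clip", "album review"],
    "basketball": ["game highlights"],
    "true-crime": ["case breakdown"],
}

def build_feed(log):
    """Rank topics purely by past engagement counts: behavior, not identity."""
    topic_weights = Counter(log)
    feed = []
    for topic, _count in topic_weights.most_common():
        feed.extend(catalog.get(topic, []))
    return feed

print(build_feed(interaction_log))
# ['concert clip', 'album review', 'game highlights', 'case breakdown']
# The one stray "true-crime" click now shapes the feed, even though it
# says nothing about who the user is.
```

Removing or down-weighting those logged interactions – the kind of friction described above – changes the feed, which is exactly why restricting access to data reduces an algorithm’s “accuracy.”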

Getting the kids involved

Recently, my colleagues and I conducted a Gen Z workshop with young people from Encode Justice, a global organization of high school and college students advocating for safe and equitable AI. The aim was to better understand how they are thinking about their lives under algorithms and AI. Gen Zers say they are concerned but also eager to be involved in shaping their future, including mitigating algorithm harms. Part of our workshop goal was to call attention to and foster the need for teen-driven investigations of algorithms and their effects.

What researchers are also confronting is that we don’t actually know what it means to constantly negotiate identity with an algorithm. Many of us who study teens are too old to have grown up in an algorithmically moderated world. For the teens we study, there is no “before AI.”

I believe that it’s perilous to ignore what algorithms are doing. The future for teens can be one in which society acknowledges the unique relationship between teens and social media. This means involving them in the solutions, while still providing guidance.

This article is republished from The Conversation, a nonprofit, independent news organization bringing you facts and trustworthy analysis to help you make sense of our complex world. It was written by: Nora McDonald, George Mason University


Nora McDonald does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

The post Teens find social media algorithms reflect their identities appeared first on The Cincinnati Herald .

Hollywood mogul Tyler Perry fears job losses from new AI model Sora https://thecincinnatiherald.com/2024/03/05/tyler-perry-artificial-intelligence-job-loss/ Tue, 05 Mar 2024 20:00:00 +0000

Tyler Perry is putting his $800 million studio expansion plans on hold due to a new text-to-video artificial intelligence model, which is expected to cause job loss in the movie industry.


By Lauren Victoria Burke, NNPA Newswire Contributor

Tyler Perry was planning an $800 million expansion of his studio in Atlanta. Now those plans are on hold. Why? Because of a new text-to-video artificial intelligence (AI) model. The model, called “Sora” and made by OpenAI, the company behind ChatGPT, creates video from a text prompt.

In an interview with the Hollywood Reporter on Feb. 23, Perry, who is worth over $1 billion, said that the new technology will cause job loss in the movie industry. The question of how artificial intelligence technology will impact employment across fields is a growing concern.

In creative fields such as special effects and animation design, artificial intelligence is all but certain to have an impact and eliminate jobs. But other jobs are likely to be affected as well:

Retail salespersons: With the rise of e-commerce and automated checkout systems, traditional retail roles may diminish.

Cashiers: Similar to retail salespersons, automated checkout systems are reducing the need for human cashiers.

Telemarketers: AI-driven chatbots and voice recognition systems are increasingly handling customer inquiries.

Data entry clerks: Automation tools can handle routine data entry tasks more efficiently.

Bookkeepers and accounting clerks: AI can automate many financial tasks, potentially reducing the need for manual bookkeeping.

Over the years, Tyler Perry has expanded his career across filmmaking, television production and writing. In 2006 he established Tyler Perry Studios in Atlanta, one of the largest film production studios in the United States. Perry’s films often explore themes of faith, family and resilience, resonating strongly with Black audiences.

Some of Perry’s notable films include: “Diary of a Mad Black Woman” (2005), “Madea’s Family Reunion” (2006), “Why Did I Get Married?” (2007), and “For Colored Girls” (2010).

In addition to his film work, Perry has created successful television series such as “Tyler Perry’s House of Payne” and “The Haves and the Have Nots.”

Now, like so many others in an industry being reshaped by technology, Perry must navigate the changes brought on by AI.

The post Hollywood mogul Tyler Perry fears job losses from new AI model Sora appeared first on The Cincinnati Herald .
