Anthropic introduces Prompt Shield ahead of US elections

Ali Rees, Tech journalist

    Anthropic, an AI safety-focused company backed by Amazon, has announced a new precaution ahead of the 2024 US elections. First reported by TechCrunch, the company is testing a new technology called Prompt Shield.

    Claude is Anthropic’s AI chatbot, released in 2023. It has been trained with a focus on being a ‘helpful, honest, and harmless’ assistant, according to Anthropic’s website.

    Prompt Shield uses a set of AI rules to detect when a US-based user is asking about politics, elections, and related topics. Instead of answering, Claude will redirect the user to TurboVote, a tool from the organization Democracy Works that provides accurate, nonpartisan political information.

    Because Claude is not trained on a continuous stream of new data, it lacks up-to-date information on many topics, including politics. It cannot provide real-time information about specific elections, a shortcoming Anthropic has acknowledged and which the new tool is intended to address.

    “We’ve had ‘prompt shield’ in place since we launched Claude — it flags several different types of harms, based on our acceptable user policy,” an Anthropic spokesperson told TechCrunch. “We’ll be launching our election-specific prompt shield intervention in the coming weeks and we intend to monitor use and limitations … We’ve spoken to a variety of stakeholders including policymakers, other companies, civil society and nongovernmental agencies, and election-specific consultants [in developing this].”

    AI and the US Election

    Earlier this year, state legislatures began pushing for laws to control and limit the impact of AI on elections, particularly generative AI and deepfakes. No definitive law has yet been passed on the issue.

    OpenAI, creator of ChatGPT and the newly announced video generation tool Sora, announced its measures for handling elections in January. The company said it intends to focus on anticipating abuse, providing transparency around AI-generated content, and giving users access to authoritative political information.

    OpenAI also introduced a new layer of human fact-checking to support its safeguarding plans. Like Anthropic, the company placed an emphasis on providing reliable, authoritative information to users.

    Both Anthropic and OpenAI have reinforced their commitment to prohibiting the use of their tools for political lobbying and campaigning.

    Featured image credit: Anthropic
