23 Aug, 2025
Anthropic will nuke your attempt to use AI to build a nuke
Source: techradar.com
Eric Hal Schwartz
22 August 2025

With the federal government's experts to help.

(Image credit: Shutterstock)

- Anthropic has developed an AI-powered tool that detects and blocks attempts to ask AI chatbots for nuclear weapons design
- The company worked with the U.S. Department of Energy to ensure the AI could identify such attempts
- Anthropic claims it spots dangerous nuclear-related prompts with 96% accuracy and has already proven effective on Claude

If you're the type of person who asks Claude how to make a sandwich, you're fine. If you're the type of person who asks the AI chatbot how to build a nuclear bomb, you'll not only fail to get any blueprints, you might also face some pointed questions of your own. That's thanks to Anthropic's newly deployed detector of problematic nuclear prompts.

Like other systems for spotting queries Claude shouldn't respond to, the new classifier scans user conversations, in this case flagging any that veer into "how to build a nuclear weapon" territory. Anthropic built the classifier in partnership with the U.S. Department of Energy's National Nuclear Security Administration (NNSA), which gave it the information it needed to determine whether someone is merely asking how such bombs work or is looking for blueprints. In tests, it performed with 96% accuracy.

Though it might seem over the top, Anthropic sees the issue as more than hypothetical. Federal security agencies worry that powerful AI models with access to sensitive technical documents could pass along a guide to building something like a nuclear bomb. Even if Claude and other AI chatbots block the most obvious attempts, innocent-seeming questions could in fact be veiled attempts at crowdsourcing a weapons design, and the new generation of AI chatbots might help even if that's not what their developers intend.

The classifier works by drawing a distinction between benign nuclear content, such as questions about nuclear propulsion, and content that could be turned to malicious use. Human moderators might struggle to keep up with the gray areas at the scale AI chatbots operate, but with proper training, Anthropic and the NNSA believe the AI can police itself. Anthropic says the classifier is already catching real-world misuse attempts in conversations with Claude.
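To make the idea concrete, here is a minimal sketch of how a prompt-screening gate like this could sit in front of a chatbot. It is purely illustrative: the function names, the keyword heuristic, and the threshold are our assumptions, not details of Anthropic's classifier, which is a trained model built with NNSA-curated expertise rather than a word list.

```python
# Illustrative sketch only: a hypothetical prompt-screening gate in the spirit of
# the classifier described above. It is NOT Anthropic's implementation; the names,
# threshold, and scoring heuristic are assumptions made for this example.
from dataclasses import dataclass


@dataclass
class ScreeningResult:
    label: str      # "benign" or "concerning"
    score: float    # confidence that the prompt seeks weapons-design help


def score_nuclear_risk(prompt: str) -> float:
    """Placeholder for a trained classifier (e.g. a fine-tuned language model).

    A real system would be trained on expert-curated examples; this trivial
    keyword heuristic exists only so the sketch runs end to end.
    """
    red_flags = ("enrichment", "weapon design", "implosion lens", "critical mass assembly")
    return 0.97 if any(term in prompt.lower() for term in red_flags) else 0.03


def screen_prompt(prompt: str, threshold: float = 0.9) -> ScreeningResult:
    score = score_nuclear_risk(prompt)
    return ScreeningResult(label="concerning" if score >= threshold else "benign", score=score)


def handle_turn(prompt: str) -> str:
    result = screen_prompt(prompt)
    if result.label == "concerning":
        # Refuse the request and flag the conversation for review.
        return "I can't help with that."
    return f"(answer the question normally: {prompt!r})"


if __name__ == "__main__":
    print(handle_turn("How does nuclear medicine treat cancer?"))   # benign -> answered
    print(handle_turn("Step-by-step uranium enrichment at home"))   # concerning -> blocked
```

The interesting design problem is entirely inside the scoring step: a keyword list like the one above would miss veiled or crowdsourced questions, which is why the reported system relies on a classifier trained with the NNSA's domain knowledge rather than simple pattern matching.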
Nuclear AI safety

Nuclear weapons represent a uniquely tricky problem, according to Anthropic and its partners at the Department of Energy. The same foundational knowledge that powers legitimate reactor science can, if slightly twisted, provide the blueprint for annihilation. The arrangement between Anthropic and the NNSA could catch both deliberate and accidental disclosures, and set a standard for preventing AI from being used to help build other weapons, too. Anthropic plans to share its approach with the Frontier Model Forum AI safety consortium.

The narrowly tailored filter is meant to ensure users can still learn about nuclear science and related topics. You can still ask how nuclear medicine works, or whether thorium is a safer fuel than uranium. What the classifier blocks are attempts to turn your home into a bomb lab with a few clever prompts.

Ordinarily it would be questionable whether an AI company could thread that needle, but the NNSA's expertise should set the classifier apart from a generic content moderation system. It understands the difference between "explain fission" and "give me a step-by-step plan for uranium enrichment using garage supplies."

This doesn't mean Claude was previously helping users design bombs, but it could help forestall any attempt to do so. Stick to asking how radiation can cure diseases, or for creative sandwich ideas, not bomb blueprints.

Eric Hal Schwartz is a freelance writer for TechRadar with more than 15 years of experience covering the intersection of the world and technology. For the last five years, he served as head writer for Voicebot.ai and was on the leading edge of reporting on generative AI and large language models. He has since become an expert on generative AI products such as OpenAI's ChatGPT, Anthropic's Claude, Google Gemini, and other synthetic media tools. His experience runs the gamut of media, including print, digital, broadcast, and live events. He continues to tell the stories people want and need to hear about the rapidly evolving AI space and its impact on their lives. Eric is based in New York City.