📖 **Source**: WIRED
📂 **Category**: Business / Artificial Intelligence, Business / Big Tech, Robot Generals
💡 **What You’ll Learn**:
When the user asks “What enemy military unit is in the area?” the AIP assistant responds that it is “likely an armored assault battalion based on the pattern of equipment.” This prompts the analyst to request an MQ-9 Reaper drone to survey the scene. They then ask the AIP assistant to “establish three courses of action to target enemy equipment,” and within moments, the assistant suggests attacking the unit using either “air assets,” “long-range artillery,” or a “tactical team.” The user asks the assistant to send these options to a fictional commander, who ultimately chooses the tactical team.
The final steps happen quickly: the analyst asks the AIP assistant to “analyze the battlefield,” then to “establish a route” for troops to reach the enemy, and finally to “assign jammers” to sabotage their communications equipment. Within seconds, the analyst finalizes the battle plan and orders troops to mobilize.
In this scenario, Claude would be the “voice” of the AIP Assistant, as well as the “logic” it uses to generate responses. Other AIP demos show that users interact with large language models in much the same way. In a blog post last week, for example, Palantir detailed how NATO, a Maven Smart System customer, could use an AIP agent within the tool.
In one infographic, Palantir shows how an outside defense contractor can choose from several AI models built into Palantir, including different versions of OpenAI’s ChatGPT and Meta’s Llama. The user chooses OpenAI’s GPT-4.1, but this is presumably also where a soldier would have the option to choose Claude instead.
The analyst then displays a digital map showing the locations of troops and weapons. In a panel labeled “COA” (Courses of Action), they click a button that triggers a tool powered by GPT-4.1 to generate five possible military strategies, including one called “Support with Fire, Penetrate, Shock, and Destroy.”
Another example shows how the system can help interpret satellite images: the analyst selects three detections of tanker trucks on a map, loads them into the AIP assistant’s chat interface, and asks it to “interpret” the images and suggest options for what to do next.
Claude can also be used by the military to create intelligence assessments that may inform strike planning later. In June 2025, WIRED saw a demo by Kunal Sharma, a public sector leader at Anthropic, showing how the enterprise version of Claude could be used to generate “advanced” reports about a real-life Ukrainian drone strike dubbed “Operation Spider’s Web.” Sharma explained that Claude was relying only on publicly available information in the demo. But by partnering with Palantir, the federal government can also tap into internal data sets.
“This is the thing where I might sit for about five hours with a cup of coffee, read Google, go to think tanks, start writing reports, writing citations, etc.,” Sharma said. “But I don’t have that kind of time.”
In the demo, Sharma asked Claude to create an “interactive dashboard” containing information about Operation Spider’s Web, then translate it into “object types” that could be analyzed in Foundry, one of Palantir’s off-the-shelf software products. He also asked Claude to write a detailed analysis of recent developments in Russia’s border provinces, as well as a 200-word summary of the “military and political implications” of the operation.
“Honestly, I’ve been reading these kinds of things for 20 years, and I’ve been writing them, and I’m an academic myself, and that’s actually pretty good,” Sharma said.
#️⃣ **#Palantirs #demos #show #military #AIpowered #chatbots #devise #war #plans**
🕒 **Posted on**: March 15, 2026
