Anduril and Palantir Battlefield Communication System Has Flaws, Army Memo Says
Posted 3 months ago · Active 3 months ago
Source: cnbc.com
Key topics: Defense Technology, Cybersecurity, Prototyping
The US Army has identified flaws in the Anduril and Palantir-developed NGC2 battlefield communication system prototype, but commenters debate whether this is a significant issue or normal for prototyping.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion · First comment after 12m · Peak period: 49 comments in 0-3h · Avg per period: 13.1
Based on 92 loaded comments
Key moments
- Story posted: Oct 3, 2025 at 11:46 AM EDT (3 months ago)
- First comment: Oct 3, 2025 at 11:58 AM EDT (12m after posting)
- Peak activity: 49 comments in 0-3h, the hottest window of the conversation
- Latest activity: Oct 4, 2025 at 11:35 PM EDT (3 months ago)
> The Army chief information officer, Chiulli's supervisor, said in a statement to Reuters that the report was part of a process that helped in “triaging cybersecurity vulnerabilities” and mitigating them.
So it's a brand new prototype and this is a run of the mill cybersecurity review while it undergoes some internal testing?
This sounds like a nothingburger.
Given this line from the article:
and it seems like there was a SIGNIFICANT mismatch in expectations between the team delivering the prototype and the people receiving it. Everyone's time was wasted as a result. Especially when the cost of busted security in this context is “exceptionally grave damage.”
Given that segmentation of data access is a core part of the pitch (see e.g. https://www.palantir.com/docs/gotham) - if security controls were intentionally omitted from the prototype scope, that seems like a reckless scoping decision to make. And if security controls were unintentionally bypassed, this speaks to insufficient red-teaming of the prototype before launch.
I couldn't be more pleased that this is coming to light, though. Perhaps it opens decisionmakers' eyes to the dangers of over-centralizing military operations on a system that simultaneously allows operators to diffuse accountability to a semi-autonomous target-calling system, and is foundationally connected to surveillance-state systems tracking U.S. citizens. Entire genres of movies posit the negative outcomes of this kind of system on civilians and military personnel alike; they are cautionary tales to Not Create The Torment Nexus. Sometimes decentralized, human-in-the-loop, need-to-look-the-target-in-the-eyes operational coordination is a feature, not a bug.
It is important to understand that the customers don't have a clear view of the types of access controls they want until they start field exercises. By having relatively limited access controls in the prototype, they discover in real use cases that certain kinds of data access which never would have occurred to them are highly valuable, and these can then be refined into specific types of data sharing. In a default locked-down environment, these beneficial interactions would never be discovered because they can't occur. All of the ways the users access and use data in the prototypes is logged and studied.
Similarly, other types of data sharing expose real risks that can be reduced to specific scenarios developed during operational exercises. The problem with an exhaustive access control model is that it simply has too many degrees of freedom to be usable for most people.
During development, the universe of all possible uses of access control is reduced to a much simpler and more understandable model of the key data it is important to restrict and the key data it is important to share, grounded in real-world operational learnings. These models are simpler and more precise to implement, and also easier to verify the correctness of, than a default "access control all the things" approach.
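To make the progression concrete, here is a toy sketch (all names and rules hypothetical, not drawn from NGC2 or any Palantir/Anduril product): a permissive prototype policy that audits every access, later replaced by an explicit allow-list distilled from the observed usage.

```python
from datetime import datetime, timezone

# Shared audit trail: in the prototype phase, logging is how you learn
# which accesses actually matter.
AUDIT_LOG = []

def log_access(user, resource, decision):
    AUDIT_LOG.append({
        "time": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "resource": resource,
        "decision": decision,
    })

class PrototypePolicy:
    """Permit everything, record everything."""
    def check(self, user, resource):
        log_access(user, resource, "allow")
        return True

class RefinedPolicy:
    """Explicit rules distilled from observed prototype usage."""
    def __init__(self, allow_rules):
        # allow_rules: set of (role, resource_class) pairs
        self.allow_rules = allow_rules

    def check(self, user, resource):
        allowed = (user["role"], resource["class"]) in self.allow_rules
        log_access(user["name"], resource["id"], "allow" if allowed else "deny")
        return allowed

proto = PrototypePolicy()
proto.check("analyst1", "feed/targeting")  # allowed, but audited

refined = RefinedPolicy({("analyst", "sensor_feed")})
alice = {"name": "alice", "role": "analyst"}
print(refined.check(alice, {"id": "feed/1", "class": "sensor_feed"}))  # True
print(refined.check(alice, {"id": "plan/7", "class": "ops_plan"}))     # False
```

The design point is that the refined rule set is small and auditable precisely because it was grounded in logged real-world usage rather than an exhaustive up-front model.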
We reject the notion of gating, pay-walling, or upselling core security controls like audit logging, single sign-on, and multi-factor authentication. Whether you’re a small business or a federal agency, you get access to every core enterprise security feature in our standard offering:
- Mandatory encryption of all data, both in transit and at rest, that uses robust, modern cryptography standards.
- Strong authentication and identity protection controls, including single sign-on and multi-factor authentication.
- Strong authorization controls, including mandatory and discretionary access controls.
- Robust security audit logging for detecting and investigating potential abuse.
- Highly extensible information governance, management, and privacy controls to meet the needs of any use case.
There's been no launch. It's a prototype.
Access control and user level logging seem like kind of basic feature requirements for a military C2 system? And Lattice isn't a completely immature product either IIRC.
Odds are this one will join the scrap heap of projects that never make it to operation, like thousands of systems before it.
DoD intentionally pushes hard to get testable capabilities as early as possible to shorten feedback loops, understanding that features ancillary to the capability will be limited, stubbed out, or implemented using a stopgap that you would never use in production. This will all be cleaned up in the production implementation once everyone is happy with how the capability works. Basically an agile customer development approach, similar to what is used in startups.
In my experience, the fine-grained control and security features are never implemented in the prototypes. This can be extremely fussy and slow development that isn't needed to evaluate capability. It also requires a lot of customer involvement, so they usually aren't willing to invest the time until they are satisfied that they want to move forward with the capability. The security architecture is demonstrably the kind of thing that can be mechanically added later so DoD takes the view that there is no development risk by not implementing it in the prototype.
There may be fair criticisms of the system but it looks like the article is going out of its way to mislead and misrepresent.
There is a separate concern around denied data environments in the software realm but that is not on many people's radar. Most software devs would not know where to even start to protect systems from this.
A tension with access controls is that if you implement it to the level of granularity the most demanding parts of DoD say they want, it never actually gets used because it is too complicated for users to reason about and manage. Or worse, they make mistakes and leak data because it is complicated. A simpler model is always implemented on top of it. At the same time, fine-grained and scalable access controls impose a high operational cost in the software even if they are not being used and some parts of DoD care a lot about software performance. Many parts of DoD are realistic enough to not want to pay for access controls they'll never actually use.
On top of this, security architecture is designed to be somewhat pluggable because different users will have mutually exclusive design requirements for the security architecture. It would be nice if this wasn't the case but it is what it is.
The concept of a denied environment is pretty clear to me when it comes to physical space, or radio communications - but could you clarify what you mean by a "denied data environment"? I have some notion of what you _might_ mean, but I can't find a clear definition of the idea anywhere.
Most systems, including military systems, use data from many exogenous sources. Critical systems may use data diodes and formally verified software interfaces to consume this data that make them extremely hardened against outright exploitation. However, these systems are vulnerable in another way: they use data structures and algorithms to serve their purpose, often with a bunch of architecture to allow scalability like multithreading.
You can subtly generate or dynamically edit data in exogenous data streams of target systems to produce pathological computer science outcomes that will pass all human inspection and formal verification as legitimate. Nonetheless, it is engineered to enable cascading pathological scenarios in common implementations of data structures and algorithms in popular systems. The attacks are usually targeted against lock graphs and subtle quadratic corner cases in algorithm implementations. Many years ago I engineered a prototype of this for fun targeting a well-known commercial database and it completely locked up the system for more than 10 minutes. For many purposes, that is almost as good as a system kill. The obvious way to recover your system is to disconnect it from data sources and users.
There is a requirement in DoD for systems that are designed to automatically detect these types of attacks and to preserve operational performance in these types of adversarial data environments. I think it isn't on anyone's radar because it hasn't been used in any real-world attacks against commercial systems. It definitely requires a sophisticated adversary, random hackers aren't going to pull it off.
The "data denied environments" are like the above, where your adversary is injecting these kinds of system attacks in all of your exogenous data feeds. If you have to shut those sources off to keep your systems up, you are running blind.
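The class of attack described above can be shown in miniature. The sketch below is illustrative only (not the commenter's actual prototype): records that are individually well-formed, and would pass any content inspection, share a hash value and push a hash table's inserts from amortized O(1) toward O(n²).

```python
import time

# Illustrative algorithmic-complexity attack: every adversarial record
# hashes to the same bucket, so each insert must probe (and compare
# against) all previously inserted records.

class Colliding:
    def __init__(self, key):
        self.key = key

    def __hash__(self):
        return 42  # all adversarial records collide

    def __eq__(self, other):
        return self.key == other.key

def ingest(records):
    """Insert records into a dict, returning elapsed wall-clock time."""
    table = {}
    start = time.perf_counter()
    for r in records:
        table[r] = True
    return time.perf_counter() - start

n = 2000
benign = ingest([(i,) for i in range(n)])               # tuples hash well: ~O(n)
adversarial = ingest([Colliding(i) for i in range(n)])  # all collide: ~O(n^2)
print(f"benign: {benign:.4f}s  adversarial: {adversarial:.4f}s")
```

Scaled up to production data volumes, the same quadratic blow-up is what turns a legitimate-looking data feed into something close to a system kill, which is why CPython and many other runtimes randomize hash seeds for untrusted input.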
>>One application revealed 25 high-severity code vulnerabilities
What part of the above two quotes is hard to understand?
>>"The security architecture is demonstrably the kind of thing that can be mechanically added later so DoD takes the view that there is no development risk by not implementing it in the prototype."
Are you actually serious about this? Because to me, that is the most nonsensical thing said in a post of nonsensical things.
ARCHITECTURE is the FIRST thing that needs to be determined — what is the structure of the modules, how they communicate, and what underlying tech will be used? Sure, if you are making a true proof-of-concept prototype you can skip it but no one would be discussing the above kinds of issues evaluating a POC. Tacking on some afterthought of a 'security architecture' at the end of a development project is a recipe for disaster in countless ways.
And yes, you said "fine details" added later; agreed. But even a prototype of a secure communications system must AT LEAST show some security architecture; I mean, at least show you can keep Joe-user from accessing root privs... here, they said anyone can access anything. That is not fine details; that is simply not even close to meeting the spec.
(and yes, I do DOD work, and your account is not even close to how I've seen it work)
Um, yeah, they would be communicating that it's a bad idea to put sensitive data into it. You have to make that stuff explicit, or somebody will misuse it.
I'm looking for the scandal here. Did the contractors misrepresent the prototype systems as being secure in ways they're not? Is the project behind schedule, and proper authorization controls were supposed to be implemented by now? Is the prototype system being improperly used to share sensitive data? Any of those would be concerning, but the article doesn't make any of those claims.
All I see is, a prototype version of software doesn't have all the capability that it will need when it's finished.
The evaluations quoted in my previous comment show it is VERY obvious the Army expected to see at least the capabilities to prevent "an adversary gaining persistent undetectable access", expected to see capabilities preventing low-level users from accessing higher-clearance-level materials, and did not expect 25+ high-severity code vulnerabilities. The suppliers failed all of those expectations.
True, the article did not cite info about the exact specifications and expected operational readiness level at this date detailed in the contracts.
However, we can safely assume the Army officers who wrote the scathing memo do have access to and understand the contractual expectations, and wrote the report in the proper context of the contractual expectations.
Instead, you want to do a rhetorical sleight-of-hand: to claim that since the article necessarily is not a complete report (it's a short news article), the missing bits mean the Army officers are just speaking out of context and there is no scandal. That is wrong.
Below are some of the direct quotes from the Army Report quoted in the article.
>> “fundamental security” problems and vulnerabilities,
>> should be treated as a “very high risk”
>> “We cannot control who sees what, we cannot see what users are doing, and we cannot verify that the software itself is secure,”
>> “very high risk” because of the “likelihood of an adversary gaining persistent undetectable access,” wrote Gabrielle Chiulli, the Army chief technology officer authorizing official.
>> One application revealed 25 high-severity code vulnerabilities. Three additional applications under review each contain over 200 vulnerabilities requiring assessment, according to the document.
>> “Any user can potentially access and misuse sensitive” classified information, the memo states, with no logging to track their actions.
The report on a supposedly secure communications system prototype contains these exact criticisms about 1) fundamental lack of access control for users of classified info at multiple levels, 2) complete lack of logging for any access authorized or unauthorized, and 3) likelihood of an adversary gaining persistent undetectable access.
To anyone with a clue, this is exactly the opposite of what a secure military communications system needs.
For people on a site who supposedly care about software quality to defend such obvious slop is ... surprising. It looks from this collection of quotes that Palantir and Anduril just tossed over some Slack/Discord clone and thought they'd "wow" the Army with the new (presumably to those backwards customers) tech and slap on some security later, and are now finding out their customer is a bit more sophisticated than they expected.
There are very good reasons consumer tech is often unsuited for military use.
(OFC, if the press piece is lying about the direct quotes and numbers from the document, we have an entirely different discussion, but no one here has argued that is the case)
Especially surprising that on a site unmatched by any other in its near-reflexive pro-liberty and anti-surveillance attitudes, one of the most aggressive and far-reaching surveillance corporations finds such support...
1. The Army gave a reasonable analysis of the security issues with the prototype.
2. This is normal.
3. But most people don't have experience in this field.
4. A reporter got access to the report, and published the most salacious-sounding parts — and if you don't know that this is normal, it sounds really bad.
5. Oh no, the sky is falling. Click here to watch our ads, oops I mean, to read all about it.
Generally my read on tech-adjacent reporters is they have very little understanding of what they're reporting on, and mostly try to manufacture outrage to get clicks.
The impulse may be stronger in some and weaker in others, and the incentive structure at their outlet may be stronger or weaker, but all of them know, even subconsciously, what type of stories are likelier to capture attention, make $ for the outlet, and hopefully return a portion of that to them. Outrage is high on the list of qualities that guarantee attention.
Since learning about social media algorithms, I've come to think of all news media, even old timey newspapers, in the same light. IMO The only difference between TikTok's For You page and newspaper headlines from 200 years ago was the precision of the engagement feedback and the latency.
It's true for everyone. Over the years, I've been on the business end of several "tech outrage" stories that made the headlines on HN, and over time, became a sort of IT canon: "company X did something really bad".
And they all had the same origin: a party with an ax to grind latched onto a fairly boring story, distorted the facts to tell a narrative, and that narrative "clicked" with everyone else because it confirmed their preexisting biases.
It doesn't mean that there are no companies that are truly evil or incompetent, but I'd wager it's <50% of the ones that get raked over the coals here and elsewhere.
I think the U.S. slander laws are probably too light. I'm generally supportive of the idea that "democracy dies in darkness," but I think we've gone further than that in allowing people to lie in public for money. I don't think that helps democracy, and in fact probably hurts it. I think anyone, anywhere on the political spectrum, can point to damning examples of the other side lying for their cause, and I think we'd all be better off if that stopped.
The reporters are extra aggressive when it involves companies with right of center founders or VCs involved. Both of these companies fit that profile.
Journos wouldn’t have to chase clicks if you people (yes, you people) would pay for journalism, but no, let’s continue to post archive links on HN.
I’m thoroughly disappointed by the pampered, know-it-all tech class who continually complain about the world but refuse to make investments or engage in collective bargaining to challenge the tech billionaire class.
Apparently all it takes to sell one’s soul is the feeling of superiority, $150k+ wages and stock options.
The operations chosen for that purpose are selected for their limited risk if something goes wrong. DoD wants capability evaluations to be as realistic as possible even if that means taking some risks. They understand the risk/reward tradeoff.
This is the issue right here.
> In my experience, the fine-grained control and security features are never implemented in the prototypes.
Those features are the whole system. What else is there to prototype, the ability to send and receive signals in the radio spectrum? I'm pretty sure we already have that. It's the secure features that matter to the military.
> it looks like the article is going out of its way to mislead and misrepresent.
Sounds like bullshit to me, and I'd extend that to most of the comments in this subthread that are just based on tropes about the media being mean and lacking technical sophistication. This is a straightforward, soberly written article that provides a reasonable summary of the information. There's no suggestion here that the memo was leaked or is classified in any way.
> it feels like the bar for what are acceptable teething issues is set unrealistically low here.
Wow holy calamity lol JTRS never shipped. Paid a lot of mortgages though.
Donald Rumsfeld's term managed to get the US into a decades-long morass, and basically all of his procurement plans failed (FCS, JTRS, EMALS, LCS, EFV, DDG-1000). Rumsfeld's best program outcome (LCS) was still worse than McNamara's worst (F-111).
1: Not counting the current occupant where we can't fully judge him yet, but he's not doing well.
Basically, Rummie's approach was to emphasize speed and mobility over all other aspects of fighting, and it created ships with high speed and little sustainability or effective mission packages (LCS) and ground vehicles which were hideously expensive and way too vulnerable (EFV, FCS). Some things that happened under him (like the F-22 line being closed permanently) were more due to Congress than to him, but those programs were closely associated with him and his office at the time, and were gigantic CF's that, in the best case, ended without a single production unit ever reaching service.
I say this as someone who worked in Army procurement doing FCS things. In my opinion as someone who was there, it was Rumsfeld's baby, and it was such a disaster both Presidential candidates in 2008 promised to kill it. And it deserved that killing.
It's sad when partisan politics so overwhelms discourse that people just make up nonsense because they don't like the CEO of a company.
What are they making that would intimidate China, that China doesn’t already?
There's a massive difference between something in a lab or in a paper presentation and code that is running in production and provably works. There are lots of individuals pushing up daisies that would attest to the effectiveness of Palantir's applications if they could still speak.
Palantíri are the seeing stones.
I think this article has a good summary of Tolkien's experiences in the war: https://en.wikipedia.org/wiki/The_Great_War_and_Middle-earth
Example: Saruman clear-cutting a good portion of Fangorn forest in order to strip-mine it and produce weapons.
Paywalled article in the Atlantic on this subject
https://www.theatlantic.com/technology/archive/2012/07/fall-...
Brief excerpt:
'So it makes sense, then, that the chief exponents of technology in The Lord of the Rings are a demonic figure bent on world domination (Sauron) and a wizard (Saruman). Treebeard, the Ent or tree-shepherd, says of Saruman, "He is plotting to become a Power. He has a mind of metal and wheels; and he does not care for growing things, except as far as they serve him for the moment." '
Please don't sabotage national defense just for cheap shots at Palantir. This is the right way to develop defense tech. Make a prototype, see what works, iterate. Come on.
Criticising a prototype because it doesn't do everything a finished product does is... interesting.
Nice toys. Not sure the tech has anything substantial or innovative under the hood.
> "The Army should treat the NGC2 prototype version as “very high risk” because of the “likelihood of an adversary gaining persistent undetectable access," wrote Gabrielle Chiulli, the Army chief technology officer authorizing official."
It sounds like they are testing prototypes, and it’s not ready for mass fielding. Nothing new here.
Access/permissions control in military applications is VERY different when compared to the consumer or B2B space.
It takes real field testing to figure out what is best for the customer's needs.
https://breakingdefense.com/2025/10/army-says-its-mitigated-...
Crying about being unable to audit a proprietary system isn’t the same thing as demonstrating that the system itself is flawed.
https://en.wikipedia.org/wiki/Skin_in_the_Game_(book)