Evidex
getevidex.com
Swapping them to production keys right now. Thanks for the heads up!
> When building a product for a medical audience that might care a lot about privacy, maybe don't use components that are shady enough to end up on blocklists.
1. Re: Clerk/uBlock: You were spot on. The default Clerk domain often gets flagged by strict blocklists. I just updated the DNS records to serve auth from a first-party subdomain (clerk.getevidex.com) to resolve this. It should be working now.
2. Re: Freshness & 'Rubbish': You are absolutely right that standard of care doesn't (and shouldn't) change overnight based on one new paper.
However, the decision to ditch the Vector DB for Live Search wasn't about pushing 'experimental treatments'; it was about Safety and Engineering constraints:
Retractions & Safety Alerts: A stale vector index is a safety risk. If a major paper is retracted or a drug gets a black-box warning today, a live API call to PubMed/EuropePMC reflects that immediately. A vector store is only as good as its last re-index.
The 'Long Tail': Vectorizing the entire PubMed corpus (35M+ citations) is expensive and hard to keep in sync. By using the search APIs directly, we get the full breadth of the database (including older, obscure case reports for rare diseases) without maintaining a massive, potentially stale index.
The goal isn't to be 'bleeding edge'; it's to be 'currently accurate'.
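For anyone curious what 'live retrieval' means mechanically, the shape of the call is roughly this. A simplified TypeScript sketch against NCBI's public esearch/esummary endpoints, not my production code; retries and rate limiting (NCBI asks for <=3 req/s without an API key) are omitted:

```typescript
// Simplified sketch of a live PubMed lookup via the public E-utilities API.
// No local index: whatever PubMed has *right now* (including retractions
// and papers published today) is what comes back.
const EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils";

interface PubMedHit {
  pmid: string;
  title: string;
  pubdate: string;
}

async function searchPubMedLive(term: string, retmax = 15): Promise<PubMedHit[]> {
  // Step 1: esearch resolves a Boolean/MeSH query to matching PMIDs.
  const search = await fetch(
    `${EUTILS}/esearch.fcgi?db=pubmed&retmode=json&sort=relevance` +
      `&retmax=${retmax}&term=${encodeURIComponent(term)}`
  ).then((r) => r.json());
  const ids: string[] = search.esearchresult.idlist;
  if (ids.length === 0) return [];

  // Step 2: esummary resolves those PMIDs to titles/dates in one batch call.
  const summary = await fetch(
    `${EUTILS}/esummary.fcgi?db=pubmed&retmode=json&id=${ids.join(",")}`
  ).then((r) => r.json());

  return ids.map((pmid) => ({
    pmid,
    title: summary.result[pmid].title,
    pubdate: summary.result[pmid].pubdate,
  }));
}

// A retraction notice or a trial published today shows up on the very
// next query, with no re-indexing step.
searchPubMedLive('"atrial fibrillation"[MeSH Terms] AND apixaban').then(console.log);
```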
> Now you get why those systems are not cheap. Keeping indexes fresh, maintaining high quality at large scale, and being extremely precise is challenging. By having distributed indexes you are at the mercy of the API providers, and I can tell you from previous experience that it won't be 'currently accurate'.
> For transparency: I am building a search API, so I am biased. But I have also been building medical retrieval systems for some time.
You are spot on that maintaining a fresh, high-quality index at scale is the 'hard problem' (and why tools like OpenEvidence are expensive).
However, I found that for clinical queries, Vector/Semantic Search often suffers from 'Semantic Drift': fuzzily matching concepts that sound similar but are medically distinct.
My architectural bet is on Hybrid RAG:
Trust the MeSH: I rely on PubMed's strict Boolean/MeSH search for the retrieval because for specific drug names or gene variants, exact keyword matching beats vector cosine similarity.
LLM as the Reranker: Since API search relevance can indeed be noisy, I fetch a wider net (top ~30-50 abstracts) and use the LLM's context window to 'rerank' and filter them before synthesis.
It's definitely a trade-off (latency vs. index freshness), but for a bootstrapped tool, leveraging the NLM's billions of dollars in indexing infrastructure feels like the right lever to pull vs. trying to out-index them.
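Mechanically, the rerank step is simple. A simplified sketch, assuming the @google/genai Node SDK; the production prompt wording differs:

```typescript
import { GoogleGenAI } from "@google/genai";

interface Abstract {
  pmid: string;
  title: string;
  text: string;
}

const ai = new GoogleGenAI({ apiKey: process.env.GEMINI_API_KEY });

// Cast a wide net with keyword/MeSH search, then let the model pick the
// handful of abstracts that actually answer the clinical question.
async function rerank(question: string, candidates: Abstract[], keep = 8): Promise<Abstract[]> {
  const numbered = candidates
    .map((a, i) => `[${i}] ${a.title}\n${a.text}`)
    .join("\n\n");

  const response = await ai.models.generateContent({
    model: "gemini-2.5-flash",
    contents:
      `Clinical question: ${question}\n\n` +
      `Candidate abstracts:\n${numbered}\n\n` +
      `Return a JSON array with the indices of the ${keep} abstracts most ` +
      `relevant to the question, best first, e.g. [3,0,12]. Indices only.`,
  });

  // Parse the index list; fall back to the original search order on failure.
  try {
    const indices: number[] = JSON.parse(response.text ?? "");
    return indices
      .filter((i) => i >= 0 && i < candidates.length)
      .map((i) => candidates[i]);
  } catch {
    return candidates.slice(0, keep);
  }
}
```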
Re: the trackers: The SVG is just the icon inside the Clerk login button, but you're right that loading Tailwind via CDN isn't ideal for strict GDPR IP-masking. I'll look into self-hosting the assets to clean that up.
1. Prioritization: I instruct the model to prioritize evidence in this hierarchy: Meta-Analyses & Systematic Reviews > RCTs > Observational Studies > Case Reports. It explicitly deprioritizes non-human studies unless specified (see the prompt sketch after this list).
2. Why not OpenEvidence? OE is excellent! But we made two architectural choices to solve different problems:
'Long Tail' Coverage: OE relies on a pre-indexed vector store, which often creates a blind spot for niche/rare diseases where papers aren't in the 'Top 1% of Journals.' Because Evidex queries live APIs, we catch the obscure case reports that static indexes often prune out.
Workflow: OE is a 'Consultant' (Q&A). Evidex is a 'Resident' (Grunt work). The 'Case Mode' is built to take messy patient histories and draft the actual documentation (SOAP Notes/Appeals) you have to write after finding the answer.
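On point 1, the hierarchy lives in the system prompt. An illustrative fragment, not the exact production wording:

```typescript
// Illustrative system-prompt fragment encoding the evidence hierarchy.
const EVIDENCE_HIERARCHY = `
When weighing sources, prioritize in this strict order:
1. Meta-analyses and systematic reviews
2. Randomized controlled trials
3. Observational studies (cohort, then case-control)
4. Case reports and case series
Deprioritize animal and in-vitro studies unless the user asks for them.
If tiers conflict, say so explicitly and cite both sides.
`;
```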
> $150M RR on just ads, +3x from August. On <1M users.
Thanks for sharing that source. It really validates the thesis that unless the user pays (SaaS), the Pharma companies are the real customers.
You are definitely right that Live APIs come with their own headaches (mostly latency and rate limits).
For now, I chose this path to avoid the infrastructure overhead of maintaining a massive fresh index as a solo dev. However, I suspect that as usage grows, I will have to move toward a hybrid model where I cache or index the 'head' of the query distribution to improve performance.
Always great to meet others tackling this space. I’d love to swap notes sometime if you are open to it.
Users have noted that some current tools heavily overweight citations from 'Partner Journals' (like NEJM/JAMA) because they index the full text, effectively burying better papers from non-partner journals in the vector retrieval.
My goal is strictly Neutral Retrieval. By hitting the PubMed/OpenAlex APIs live, Evidex treats a niche pediatric journal with the same relevance weight as a major publisher, ensuring the 'Long Tail' of evidence isn't drowned out by business partnerships.
> How would biomedical researchers use tons of time-series data? A better question is: what questions are biomedical researchers asking with time-series data? I'm a lot more interested in generalized querying over time-series data than just financial data. What would be a great proof of concept?
To answer your question: In the biomedical world, the 'Time-Series' equivalent is Patient Telemetry (Continuous Glucose Monitors, ICU Vitals, Wearables).
The Question Researchers Ask: 'Can we predict sepsis/stroke 4 hours before it happens based on the velocity of change in Heart Rate + BP?'
Right now, Evidex is focused on the Unstructured Text (Literature/Guidelines) rather than the structured time-series data, but the 'Holy Grail' of medical AI is eventually combining them: Using the Literature to interpret the Live Vitals in real-time.
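To make 'velocity of change' concrete, here is a toy sketch. This is not an Evidex feature today, and the 30-minute window and 0.5 bpm/min threshold are invented purely for illustration:

```typescript
interface VitalsSample {
  t: number;  // minutes since admission
  hr: number; // heart rate, bpm
}

// Least-squares slope of heart rate over a window, in bpm per minute.
function hrSlope(window: VitalsSample[]): number {
  const n = window.length;
  if (n < 2) return 0;
  const meanT = window.reduce((s, v) => s + v.t, 0) / n;
  const meanHr = window.reduce((s, v) => s + v.hr, 0) / n;
  let num = 0, den = 0;
  for (const v of window) {
    num += (v.t - meanT) * (v.hr - meanHr);
    den += (v.t - meanT) ** 2;
  }
  return den === 0 ? 0 : num / den;
}

// Invented trigger: a sustained climb of >0.5 bpm/min over the last 30 min,
// i.e. flagging on the *velocity* of the vital, not its absolute value.
function earlyWarning(samples: VitalsSample[]): boolean {
  if (samples.length === 0) return false;
  const cutoff = samples[samples.length - 1].t - 30;
  return hrSlope(samples.filter((v) => v.t >= cutoff)) > 0.5;
}
```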
I will send you an email shortly to get connected. I'd love to get your teams set up with a pilot instance. Appreciate the reach out.
The gap Evidex fills isn't 'Intelligence'. It is Provenance and Liability.
Strict Sourcing: Even advanced models can hallucinate a plausible-sounding study. Evidex constrains the model to answer only using the abstracts returned by the API. This reduces the risk of a 'creative' citation.
Explorer vs. Operator: You mentioned using AI as an 'explorer' (Patient use case). Doctors are usually 'operators'. They need to find the specific dosage or guideline quickly to close a chart.
I view this less as replacing Gemini/GPT. It is more of a 'Safety Wrapper' around them for a high-stakes environment.
My hope is that by reducing the time it takes to verify a paper from 20 minutes to 30 seconds, we can make it easier for providers to actually engage with the research a patient brings in. It helps prevent them from dismissing it just because they 'don't have time to read it'.
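To make the 'Strict Sourcing' point above concrete: the model only ever sees abstracts numbered [1]..[N], and the answer is rejected or regenerated if it cites anything outside that set. A simplified sketch of that guard, not the exact production code:

```typescript
// Check every bracketed citation in the answer against the retrieved set.
// A citation outside [1..retrievedCount] means the model invented a source.
function validateCitations(
  answer: string,
  retrievedCount: number
): { ok: boolean; badCitations: number[] } {
  const cited = [...answer.matchAll(/\[(\d+)\]/g)].map((m) => parseInt(m[1], 10));
  const bad = [...new Set(cited)].filter((n) => n < 1 || n > retrievedCount);
  return { ok: bad.length === 0, badCitations: bad };
}

// Example: the model cited [4], but only 3 abstracts were retrieved.
console.log(validateCitations("Apixaban reduced stroke risk [1][4].", 3));
// -> { ok: false, badCitations: [4] }
```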
Currently, I handle this via Smart Routing. The engine analyzes the intent of your query (e.g. identifying if you’re looking for an RCT, a specific guideline, or drug dosing) and routes it to the most relevant clinical database using high-precision keyword matching.
I chose this deterministic approach for the launch to ensure clinical precision. While vector/semantic search is great for general concepts, it can sometimes surface 'similar-ish' papers that miss the specific medical nuances (like a specific ICD-10 code or dosage) required for clinical evidence.
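A simplified sketch of what that routing looks like; the patterns here are illustrative, not the full production rule set:

```typescript
type Source = "clinicaltrials" | "guidelines" | "pubmed";

// Deterministic intent routing: no embeddings, just keyword/regex checks.
function routeQuery(query: string): Source {
  const q = query.toLowerCase();
  if (/\b(rct|randomized|randomised|phase (i{1,3}|[1-4])|nct\d{8})\b/.test(q)) {
    return "clinicaltrials"; // trial-shaped queries go to ClinicalTrials.gov
  }
  if (/\b(guideline|recommendation|first[- ]line|standard of care)\b/.test(q)) {
    return "guidelines"; // local SQLite FTS over stored guidelines
  }
  return "pubmed"; // default: literature search (dosing, etiology, etc.)
}

console.log(routeQuery("first-line guideline for community acquired pneumonia"));
// -> "guidelines"
```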
The LLM (Gemini 2.5 Flash) currently lives in the Synthesis Layer. It takes the raw, high-precision results and synthesizes them into the clinical summaries you see.
I actually have LLM-based query expansion (translating natural language into robust MeSH/Boolean strings) built into the infrastructure, but I am keeping it in 'staging' right now. I want to ensure that as I bridge that semantic gap, I don't sacrifice the deterministic accuracy that medical professionals expect.
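The expansion step itself is a single constrained generation. A simplified sketch, again assuming the @google/genai Node SDK, with illustrative prompt wording:

```typescript
import { GoogleGenAI } from "@google/genai";

const ai = new GoogleGenAI({ apiKey: process.env.GEMINI_API_KEY });

// Translate free text into a strict Boolean/MeSH string, which then flows
// through the same deterministic search path as a hand-written query.
async function expandQuery(naturalLanguage: string): Promise<string> {
  const response = await ai.models.generateContent({
    model: "gemini-2.5-flash",
    contents:
      `Translate this clinical question into a PubMed Boolean query, using ` +
      `MeSH terms where possible. Return only the query string.\n\n` +
      `Question: ${naturalLanguage}`,
  });
  // e.g. '"atrial fibrillation"[MeSH Terms] AND apixaban'
  // for "does apixaban work for afib?"
  return (response.text ?? naturalLanguage).trim();
}
```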
> For review of meta-analyses you would need prompts developed by expert methodologists and discipline specialists. Here is the prompt that worked: 'You are an environmental epidemiologist and exposure scientist; critically review this paper's claim that the measured levels of unconventional gas emissions provide evidence of excess cancer risk: https://link.springer.com/article/10.1186/1476-069X-13-82'
1. The Garbage Filter: Right now, I rely on a strict Hierarchy of Evidence to mitigate this (prioritizing Cochrane/Meta-analyses over observational studies), but you are absolutely right that LLMs can miss fatal methodological flaws in a single, high-ranking paper.
2. The 'Critic' Agent: I’m currently experimenting with a secondary 'Critic' pass. This is an LLM agent specifically prompted to act as a skeptic/methodologist to flag limitations before the main synthesis happens.
3. Multi-discipline prompting: The prompt you provided is a great case study in persona-based auditing. I’d love to learn more about the specific 'disciplines' or archetypes you’ve found most effective at catching these flaws. That is exactly the kind of domain expertise I’m trying to encode into the system.
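A simplified sketch of that Critic pass; the prompt wording is illustrative, and it assumes the @google/genai Node SDK:

```typescript
import { GoogleGenAI } from "@google/genai";

const ai = new GoogleGenAI({ apiKey: process.env.GEMINI_API_KEY });

const CRITIC_PROMPT = `
You are a skeptical research methodologist. List the most serious
methodological limitations a clinician should know before trusting the
paper below: confounding, selection bias, underpowering, surrogate
endpoints, exposure misclassification, conflicts of interest. Be terse.
If the abstract lacks the detail needed to judge, say so.
`;

// Run on each high-ranking abstract before the main synthesis; the
// flagged limitations are passed along as context for the final answer.
async function critique(abstract: string): Promise<string> {
  const response = await ai.models.generateContent({
    model: "gemini-2.5-flash",
    contents: `${CRITIC_PROMPT}\nAbstract:\n${abstract}`,
  });
  return response.text ?? "";
}
```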
> As for individual studies, if a study is important, it often gets tested by others, although sometimes it doesn't, and then it's a decision-theoretic play.
> Cochrane in my estimation examines things from very narrow angles, and this can miss wide-ranging applicability to the real world.
My default right now is Clinical Safety. I prioritize high-grade evidence to prevent harm at the bedside.
However, for Research/Discovery, you are absolutely right. Excessive 'Gatekeeping' can slow down innovation.
The long-term fix is likely a 'Filter Dial'. We need tight constraints for treatment decisions, but loose constraints for hypothesis generation. I plan to support both modes.
I am adding 'Author Reputation/Bias Analysis' to the long-term roadmap. Thanks for the rigorous stress-test today.
While we might be able to detect 'Insular Citation Clusters' mathematically to flag systemic bias, no model can catch a private signal like an ignored email. It reinforces why the human expert is indispensable. The tool is a force multiplier for judgment, not a substitute.
I’m a solo dev building a clinical search engine to help my wife (a resident physician) and her colleagues.
The Problem: Current tools (UpToDate/OpenEvidence) are expensive, slow, or increasingly heavy with pharma ads.
The Solution: I built Evidex to be a clean, privacy-first alternative. Search Demo (GIF): https://imgur.com/a/zoUvINt
Technical Architecture (Search-Based RAG): Instead of using a traditional pre-indexed vector database (like Pinecone) which can serve stale data, I implemented a Real-time RAG pattern:
Orchestrator: A Node.js backend performs "Smart Routing" (regex/keyword analysis) on the query to decide which external APIs to hit (PubMed, Europe PMC, OpenAlex, or ClinicalTrials.gov).
Retrieval: It executes parallel fetches to these APIs at runtime to grab the top ~15 abstracts.
Local Data: Clinical guidelines are stored locally in SQLite and retrieved via full-text search (FTS), ensuring exact matches on medical terminology (see the FTS sketch after this list).
Inference: I’m using Gemini 2.5 Flash to process the concatenated abstracts. The massive context window allows me to feed it distinct search results and force strict citation mapping without latency bottlenecks.
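The guideline lookup from the Local Data step above is roughly this shape. A simplified sketch assuming the better-sqlite3 driver and SQLite's built-in FTS5; the schema is illustrative:

```typescript
import Database from "better-sqlite3";

const db = new Database("guidelines.db");

// FTS5 virtual table over locally stored guidelines.
db.exec(`
  CREATE VIRTUAL TABLE IF NOT EXISTS guidelines
  USING fts5(org, title, body);
`);

// MATCH does exact tokenized matching, so "apixaban" only ever matches
// apixaban -- no semantic near-neighbors sneaking in.
function searchGuidelines(term: string, limit = 5) {
  return db
    .prepare(
      `SELECT org, title,
              snippet(guidelines, 2, '[', ']', '...', 20) AS excerpt
       FROM guidelines
       WHERE guidelines MATCH ?
       ORDER BY rank
       LIMIT ?`
    )
    .all(term, limit);
}
```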
Workflow Tools (The "Integration"): I also built a "reasoning layer" to handle complex patient histories (Case Mode) and draft documentation (SOAP Notes). Case Mode Demo (GIF): https://imgur.com/a/h01Zgkx Note Gen Demo (GIF): https://imgur.com/a/DI1S2Y0
Why no Vector DB? In medicine, "freshness" is critical. If a new trial drops today, a pre-indexed vector store might miss it. My real-time approach ensures the answer includes papers published today.
Business Model: The clinical search is free. I plan to monetize by selling billing automation tools to hospital admins later.
Feedback Request: I’d love feedback on the retrieval latency (fetching live APIs is slower than vector lookups) and the accuracy of the synthesized answers.