Launch HN: BlankBio (YC S25) – Making RNA Programmable
- mRNA therapies: These therapies deliver a synthetically created messenger RNA (mRNA) molecule, typically protected within a lipid nanoparticle (LNP), to a patient's cells. The cell's own machinery then uses this mRNA as a temporary blueprint to produce a specific protein.
The big example here is CAR-T therapy from Capstan, which just got acquired for $2.1B. Their asset, CPTX2309, is currently in Phase 1. Previously, to do CAR-T therapy you had to extract a patient's T cells and genetically engineer them in a special facility. Now the mRNA gets delivered directly to the patient's T cells, which significantly lowers the cost and technical hurdles.
- RNA interference (RNAi): Used for gene-expression knockdown via natural cellular mechanisms that evolved for viral detection. The big example here is Alnylam, with 5 approved therapies and a number in clinical trials.
- Antisense Oligonucleotides (ASOs): Short single-stranded RNA molecules that get delivered directly to the cell and target an existing mRNA. The big win here is Spinraza, the first approved treatment for Spinal Muscular Atrophy (SMA), which previously had no treatment. The Spinraza clinical trial (ENDEAR) was so effective that continuing it was deemed unethical because the control arm wasn't receiving the treatment. Prior to Spinraza, most patients would pass away before two years of age.
I have to admit, at a _glance_ this feels like a promising idea with few results and lots of marketing. I'll try to be clear about my confusion, feel free to explain if I'm off base.
- There's not a lot of talk of your "ground truth" for evaluations. Are you using mRNABench?
- Has your mRNABench paper been peer reviewed? You linked a preprint. (I know paper submission can be tough or stressful, and it's a superficial metric to be judged on!)
- Do any of your results suggest that this foundation model might be any good on out of sequence mRNA sequences? If not, then is the (current) model supposed to predict properties of natural mRNA sequences rather than of synthetic mRNA sequences?
- Did a lot of mRNA sequences have experimental verification of their predicted properties? At a quick glance, I see this 66 number in the paper---but I truly have no idea.
I'm super happy to praise both incremental progress and putting forth a vision, I just also want to have a clear understanding of the current state-of-the-art as well!
Hey yes, the ground truth for our evaluations is measured experimental data. Our models are benchmarked using mRNABench, which aggregates results from high-throughput wet lab experiments.
Our goal, however, is to move beyond predicting existing experimental outcomes. We intend to design novel sequences and validate their function in our own lab. At that stage, the functional success of the RNA we design will become the ground truth.
> peer reviewed?
Both mRNABench and Orthrus are in submission (at a big ML conference and a big-name journal). Unfortunately the academic system moves slowly, but we're working on getting them out there.
> synthetic mRNA sequences
I think you're asking about generalizing out of distribution to unnatural sequences. There are two ways we test this: (1) There are screens called Massively Parallel Reporter Assays (MPRAs), and we evaluate, for example, on https://pubmed.ncbi.nlm.nih.gov/31267113/
Here all the sequences are synthetic and randomly designed and we do observe generalization. Ultimately it depends on the problem that we're tackling: some tasks like gene therapy design require endogenous sequences.
(2) The other angle is variant effect prediction (VEP). It can be thought of as a counterfactual prediction problem where you ask the model whether a small change in the input predicts a large change in the output. This study is a good example (https://www.biorxiv.org/content/10.1101/2025.02.11.637758v2)
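To make the counterfactual framing concrete, here's a minimal, purely illustrative sketch. The `embed` function is a hypothetical stand-in for a real encoder like Orthrus (here just a random projection of one-hot bases so the snippet runs), and `probe_w` stands in for a fitted linear probe; the VEP score is the change in the probe's predicted property between the reference and variant sequence:

```python
import numpy as np

# Hypothetical stand-in for the foundation model's encoder: a fixed
# random projection of mean-pooled one-hot bases, just to be runnable.
rng = np.random.default_rng(0)
PROJ = rng.normal(size=(4, 16))
BASES = {"A": 0, "C": 1, "G": 2, "U": 3}

def embed(seq: str) -> np.ndarray:
    onehot = np.zeros((len(seq), 4))
    for i, base in enumerate(seq):
        onehot[i, BASES[base]] = 1.0
    return onehot.mean(axis=0) @ PROJ  # mean-pooled sequence embedding

def variant_effect(ref: str, alt: str, probe_w: np.ndarray) -> float:
    """Counterfactual score: change in the probe's predicted property
    when the sequence is edited."""
    return float((embed(alt) - embed(ref)) @ probe_w)

probe_w = rng.normal(size=16)        # pretend this probe was fitted
ref = "AUGGCCAUUGUAAUGGGCCGC"
alt = ref[:5] + "A" + ref[6:]        # single-nucleotide variant
print(variant_effect(ref, alt, probe_w))
```

A large score for a single-base edit is the kind of "small input change, large output change" signal VEP benchmarks look for.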
> experimental verification of their predicted properties
All our model evaluations are predictions of experimental results! The datasets we use are collections of wet lab measurements, so the model is constantly benchmarked against ground-truth biology.
The evaluation method involves fitting a linear probe on the model's learned embeddings to predict the experimental signal. This directly tests whether the model's learned representation of an RNA sequence contains a linear combination of features that can predict its measured biological properties.
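As a rough sketch of that linear-probe setup (with synthetic stand-in data; in practice the embeddings would come from the frozen RNA foundation model and `y` would be a measured property such as half-life):

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split
from scipy.stats import pearsonr

# Toy stand-ins: one embedding row per RNA sequence, plus a measured
# property that happens to be (noisily) linear in the embedding.
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(500, 64))
true_w = rng.normal(size=64)
y = embeddings @ true_w + rng.normal(scale=0.1, size=500)

X_train, X_test, y_train, y_test = train_test_split(
    embeddings, y, random_state=0)

# The "linear probe": a regularized linear model on frozen embeddings.
probe = Ridge(alpha=1.0).fit(X_train, y_train)
r, _ = pearsonr(probe.predict(X_test), y_test)
print(f"Pearson r on held-out sequences: {r:.2f}")
```

The frozen-encoder-plus-linear-head design is what makes this a test of the representation itself rather than of a task-specific network.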
Thanks for the feedback I understand the caution around pre-prints. We believe a self-supervised learning approach is well-suited for this problem because it allows the model to first learn patterns from millions of unlabeled sequences before being fine-tuned on specific, and often smaller, experimental datasets.
Just curious, in other areas of ML, I think it's widely acknowledged that benchmarks have pretty limited real world value, just end up getting saturated, and (my view) are all pretty correlated, regardless of their ostensible speciality and don't really tell you that much.
Do you think mRNABench is different, or where do you see the limitations? Do you imagine this or any benchmark will be useful for anything beyond comparing how different models do on the benchmark?
We think the situation is similar here - one of the challenges is aligning the benchmark with the function of the models. Genomic benchmarks for gLMs and RNA foundation models have been very resistant to saturation.
I think in NLP the problem is that they are victims of their own success where the models can be overfit to particular benchmarks really fast.
In genomics we're a bit behind. A good paper on this is DART-Eval, which provides levels of task complexity: https://arxiv.org/abs/2412.05430
In RNA, the models work much better than for DNA prediction, but it's key to have benchmarks to measure progress.
"We have internal benchmarks. Yeah. But we don't we don't publish them."
"we have internal benchmarks that the team focuses on and improving and then we also have a bunch of tasks like I think that accelerating our own engineers is like a top top priority for us"
The equivalent for us would ultimately be to improve experimental results. Benchmarks are a good intermediate point but not the ultimate goal.
It feels like things are further ahead in synthetic biology than I realized, and that's so, so exciting!
(yes, I meant "out of distribution"---but in today's day 'n age typos are proof of human creation :p )
founders@blankbio.com
I find it takes a large amount of effort to parse what the authors are doing, whether the data is high quality, and how to pre-process it in a way that makes sense for the task at hand.
Would love to chat more about how you're thinking of evaluating quality of these agents.
How would your AI solution help with finding natural analogs of or alternatives to or foils of mRNA procedures?
Re: "Sensitization of tumours to immunotherapy by boosting early type-I interferon responses enables epitope spreading" (2025) https://www.nature.com/articles/s41551-025-01380-1
How is this relevant to mRNA vaccines?:
"Ocean Sugar Makes Cancer Cells Explode" (2025) https://scitechdaily.com/ocean-sugar-makes-cancer-cells-expl... ... “A Novel Exopolysaccharide, Highly Prevalent in Marine Spongiibacter, Triggers Pyroptosis to Exhibit Potent Anticancer Effects” (2025) DOI: 10.1096/fj.202500412R https://faseb.onlinelibrary.wiley.com/doi/10.1096/fj.2025004...
The immune system recognizes a sugar as a PAMP, or Pathogen-Associated Molecular Pattern, which is a signature of a potential microbial threat.
This initiates pyroptosis, an inflammatory form of programmed cell death that causes the cell to burst. The rupture releases tumor antigens and DAMPs (Damage-Associated Molecular Patterns), which are "danger signals" from the dying cell.
The release of DAMPs shifts the Tumor Microenvironment (TME) from an immunologically "cold" to a "hot" state, promoting a potent Type I Interferon (IFN-I) response.
This response recruits Antigen Presenting Cells (APCs), which engulf the newly released tumor antigens.
---
mRNA vaccines are somewhat of a parallel approach where the antigen selection and delivery happens manually. An mRNA vaccine delivers the encoding sequence for specific tumor antigens to drive production and presentation, training the immune system. One of the big challenges of this space is optimal antigen selection from the patient's tumor.
One thing I'm not fully clear on is why only tumor cells react to the PAMP and not healthy cells. Could be a promising approach, but molecular biology is pretty tricky and the devil is always in the details.
I am not a scientist, but I believe that "normal" cells do not seek long-chain alien sugars like those produced by ocean bacteria. Conversely, "cancerous" cells may find these uncommon sugars appealing, and they consume sugar eagerly (Warburg effect).
After the alien sugars are metabolized, fragments migrate to the cell membrane and might be recognized by the immune system as foreign.
The fact that large molecules trigger pyroptosis may be helpful.
I'm curious about what your strategy is for data collection to fuel improved algorithmic design. Are you building out experimental capacity to generate datasets in house, or is that largely farmed out to partners?
For the data: Orthrus is trained on non-experimentally collected data, so our pre-training dataset is large by biological standards. It adds up to about 45 million unique sequences, and assuming ~1k tokens per sequence, that's on the order of 50B tokens.
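A quick back-of-envelope check on that scale, using the figures above:

```python
# ~45M unique sequences at roughly 1k tokens per sequence.
sequences = 45_000_000
tokens_per_sequence = 1_000
total_tokens = sequences * tokens_per_sequence
print(f"{total_tokens / 1e9:.0f}B tokens")  # prints "45B tokens"
```

So the ~50B figure is a round-up of roughly 45B tokens.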
We're thinking about this as a large pre-training run on annotation data from RefSeq and GENCODE, in conjunction with more specialized orthology datasets that pool data across hundreds of species.
Then for specific applications we are fine tuning or doing linear probing for experimental prediction. For example we can predict half life using publicly available data collected by the awesome paper from: https://genomebiology.biomedcentral.com/articles/10.1186/s13...
Or translation efficiency: https://pubmed.ncbi.nlm.nih.gov/39149337/
Eventually, as we ramp up our wet lab data generation, we're thinking about what post-training looks like. There is an RL analog here that we can use on these generalizable embeddings to demonstrate "high quality samples".
There are some early attempts at post-training in bio and I think it's a really exciting direction
> As compilers freed programmers from low-level details, we're building the abstraction layer for RNA.
That’s all fun and games when it’s literally fun and games. When it’s mRNA injected into living beings it’s the stuff of nightmares.
Will technologists ever _ever_ stop and think for a second?
From where we sit - there are people with diseases and mRNA is an effective way to revert them to a healthy state.
I'd be interested to hear more where you're coming from
It's the usual process for all new medicines. We already had a lot of bad cases with other potential medicines, so all new candidates must pass a lot of tests.
I had some fun one evening asking Claude how I could string together sequences for an imaginary therapeutic and it gave me enough to put into alphafold and get a render :) (Worst therapeutic ever: deliver mRNA into macrophages to target those pesky bacteria who happily just choose to reside there)
Also: How do you plan to navigate the unfortunate part of our country trying to write mRNA out of the American vocabulary?
Almost! Yes most of the data is on NIH sub-institutes. For us we take most of the data from NCBI and intelligently pair it together. The training objective of our model takes pairs of sequences (thus the Joint Embedding Architecture) and trains the model to recognize that they are semantically similar but differ in appearance. This is conceptually similar to a lot of the contrastive learning literature from computer vision.
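The pairing idea above can be sketched as a standard contrastive objective. This is an illustrative InfoNCE-style loss in the spirit of that computer-vision literature, not BlankBio's actual training code; the toy "positive pairs" stand in for semantically similar sequence pairs such as splice isoforms or orthologs:

```python
import numpy as np

def info_nce(z_a: np.ndarray, z_b: np.ndarray, temp: float = 0.1) -> float:
    """Contrastive loss over a batch of paired embeddings: row i of z_a
    and row i of z_b are a "semantically similar" pair; every other row
    in the batch acts as a negative."""
    z_a = z_a / np.linalg.norm(z_a, axis=1, keepdims=True)
    z_b = z_b / np.linalg.norm(z_b, axis=1, keepdims=True)
    logits = (z_a @ z_b.T) / temp                   # cosine similarities
    logits -= logits.max(axis=1, keepdims=True)     # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
    return float(-np.log(np.diag(probs)).mean())    # pull true pairs together

rng = np.random.default_rng(0)
anchor = rng.normal(size=(8, 32))
positive = anchor + 0.05 * rng.normal(size=(8, 32))  # similar meaning, different look
unrelated = rng.normal(size=(8, 32))
print(info_nce(anchor, positive) < info_nce(anchor, unrelated))  # True
```

Minimizing this loss pushes paired sequences toward the same region of embedding space while pushing unrelated sequences apart, which is the "same semantics, different appearance" property described above.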
Sounds like a fun side project :)
There are some great tools out there for putting together plasmids for gene therapies where you can plug in different "elements": promoters, UTRs, payloads. Check out SnapGene - I believe they have a free version.
I personally am hopeful that the political headwinds will blow over. When it comes to cancer vaccines it's one of the most exciting new modalities for treating cancer.
Roughly 1 in 2 Americans will get cancer in their lifetime, so regardless of political affiliation, the need for health will ultimately drive people to invest in the modality.