Show HN: ChartDB Agent – Cursor for DB schema design
app.chartdb.io

prompts:
https://github.com/chartdb/chartdb/blob/c3c646bf7cbb1328f4b2...
These are all easily testable in Docker containers. There's a concerning lack of attention to detail here, or perhaps the prompt itself was also created using AI and it bakes in hallucinations from the get-go.
As for the AGPL: in ChartDB's previous Show HN (only 6 weeks ago), I asked how they were running an enhanced paid SaaS when they had so many external contributions prior to adopting a CLA, and I did not receive a response [1].
When code is contributed to a FOSS project, and there's no CLA, the contribution is licensed by the third-party contributor under the same terms as the project, and furthermore the contributor retains copyright (assuming there's no CAA, which is stronger than a CLA).
So if you start an AGPL project and offer a paid SaaS, you need to do one of the following:
a) don't accept outside pull requests
b) accept outside pull requests with a CLA or CAA
c) provide the full source code to the paid SaaS to users, as per the terms of the AGPL
d) keep those pull requests in the AGPL repo but rewrite or remove them in the paid SaaS
Any other path opens you up to risk of copyright infringement lawsuits if your business is successful.
The OP eventually added a CLA, but had already accepted a lot of code contributions prior to that. So in their previous Show HN, I asked what path they took to resolve the AGPL hurdles, and they did not respond.
I choose to believe that it's a brilliant feint. The author will run all these comments back into their LLM to generate fixes for all of these issues.
"Design a schema like Calendly" --> Did it
"OK let's scale this to 100m users" --> Tells me how it would. No schema change.
"Did you update the schema?" --> Updates the schema, tells me what it did.
We've been running into this EXACT failure mode with current models, and it's so irritating. Our agent plans migrations, so it's code-adjacent, but the output is a structured plan (basically tasks, each a prompt plus a regex: what to do, and where to do it).
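To make the "prompt + regex" task idea concrete, here's a minimal sketch of what such a plan item might look like. The `MigrationTask` name and fields are hypothetical, just illustrating the "what to do / where to do it" split described above:

```python
import re
from dataclasses import dataclass

@dataclass
class MigrationTask:
    """One step in a structured migration plan (hypothetical shape)."""
    prompt: str   # instruction for the model: "what to do"
    locator: str  # regex matching the target region: "where to do it"

    def find_targets(self, source: str) -> list[str]:
        """Return the regions of `source` this task applies to."""
        return re.findall(self.locator, source)

# Example task: rename a column everywhere it appears in the DDL.
task = MigrationTask(
    prompt="Rename column user_name to username",
    locator=r"\buser_name\b",
)
ddl = "CREATE TABLE users (user_name TEXT, email TEXT);"
print(task.find_targets(ddl))  # ['user_name']
```

The point of the structured output is that the "where" half is mechanically checkable, even when the "what" half still needs a model.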
The agent really wants to talk to you about it. Claude wants to write code about it. None of the models want to communicate with the user primarily through tool use, even when (as I'm sure ChartDB is) HEAVILY prompted to do so.
I think there's still a lot of value there, but it's a bummer that, for a while yet, we as users will have to keep reminding LLMs to use their tools beyond the first prompt.
It was easier to close the tab than fire a human, but other than that not a great experience.
Have you thought about making a tool to help preview/dry run migrations?
I feel that's something I would want a ton of confidence in if an LLM is making migration scripts.
Especially if it's doing scary stuff like breaking up a table.
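One cheap way to get that confidence is to run the generated statements inside a transaction and roll back, inspecting the would-be state before committing anything. A minimal sketch using SQLite (the `dry_run` helper is my own invention, not a ChartDB feature; SQLite works here because its DDL is transactional, which isn't true of every database):

```python
import sqlite3

def dry_run(conn: sqlite3.Connection, statements: list[str]) -> list[str]:
    """Execute migration statements in a transaction, then roll back.
    Returns the resulting table list so the would-be schema can be inspected."""
    conn.execute("BEGIN")
    try:
        for stmt in statements:
            conn.execute(stmt)
        tables = [r[0] for r in conn.execute(
            "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name")]
    finally:
        conn.rollback()  # nothing is ever committed
    return tables

# isolation_level=None puts sqlite3 in autocommit mode,
# so BEGIN/ROLLBACK above are under our explicit control.
conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE users (id INTEGER, name TEXT, email TEXT)")

# Preview breaking emails out into their own table.
preview = dry_run(conn, [
    "CREATE TABLE user_emails (user_id INTEGER, email TEXT)",
])
print(preview)  # ['user_emails', 'users']

# The real schema is untouched:
print([r[0] for r in conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table'")])  # ['users']
```

For the genuinely scary cases (splitting a table, backfilling data), you'd also want to diff row counts and sample rows inside the transaction, not just the table list.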
A SQL MCP + mermaid is all you need.
Btw, it can also do an awesome job turning query plans into readable mermaid flow diagrams, adding details to the chart, using color to highlight bottlenecks.
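That query-plan-to-mermaid trick doesn't even need a model once you have the plan in structured form. A minimal sketch of the idea, assuming a simplified nested-dict plan shape (real EXPLAIN output varies by database, and the field names here are made up):

```python
def plan_to_mermaid(plan: dict, threshold_ms: float = 100.0) -> str:
    """Render a nested query-plan dict as a Mermaid flowchart,
    coloring nodes slower than `threshold_ms` as bottlenecks."""
    lines = ["flowchart TD"]
    slow: list[str] = []
    counter = [0]  # mutable counter shared across recursive calls

    def walk(node: dict) -> str:
        nid = f"n{counter[0]}"
        counter[0] += 1
        lines.append(f'    {nid}["{node["op"]} ({node["ms"]} ms)"]')
        if node["ms"] > threshold_ms:
            slow.append(nid)
        for child in node.get("children", []):
            cid = walk(child)
            lines.append(f"    {cid} --> {nid}")  # data flows child -> parent
        return nid

    walk(plan)
    for nid in slow:  # highlight bottlenecks in red
        lines.append(f"    style {nid} fill:#f88")
    return "\n".join(lines)

# Hypothetical plan: a hash join fed by two scans.
plan = {
    "op": "Hash Join", "ms": 240.0,
    "children": [
        {"op": "Seq Scan on orders", "ms": 180.0, "children": []},
        {"op": "Index Scan on users", "ms": 12.0, "children": []},
    ],
}
print(plan_to_mermaid(plan))
```

Paste the output into any Mermaid renderer and the slow join and seq scan show up red while the index scan stays plain, which is most of what you want from a bottleneck chart.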
I'll probably stick with Claude for now.
A killer feature would be a discussion loop that slowly refines the model in plaintext before committing to a diagram.
Either way, reasoning about sharding and scale seems like a place where a good AI could provide recommendations and tradeoffs, so it's disappointing that the model doesn't seem to handle it well. For basic relational schema design, I don't see much benefit to having AI.
The only thing I kinda dislike is the low readability of the connections, because the border around each table is the same color as the connection lines. I think things would look cleaner without it.