Notion API Importer, with Databases to Bases Conversion Bounty
Posted 4 months ago · Active 4 months ago
github.com · Tech story · High profile
Tone: heated, mixed · Debate: 80/100
Key topics
Notion API
Obsidian
Bounty
Open-Source
The Obsidian team has posted a $5,000 bounty for implementing a Notion API importer, sparking discussion about the feasibility of the task, the adequacy of the bounty, and the use of LLMs in development.
Snapshot generated from the HN discussion
Discussion Activity
Active discussion
- First comment: 1h after posting
- Peak period: 17 comments in 3-6h
- Avg per period: 8.3
- Comment distribution: 58 data points (based on 58 loaded comments)
Key moments
- 01 Story posted: Sep 17, 2025 at 1:11 AM EDT (4 months ago)
- 02 First comment: Sep 17, 2025 at 2:22 AM EDT (1h after posting)
- 03 Peak activity: 17 comments in 3-6h (hottest window of the conversation)
- 04 Latest activity: Sep 18, 2025 at 12:54 PM EDT (4 months ago)
ID: 45271942 · Type: story · Last synced: 11/20/2025, 4:53:34 PM
Suddenly $5k does not sound as good
My first clue was that the PR description was absurdly detailed and well structured... yet the actual changes were really scattershot. A human with the experience and attention to detail to produce that description would likely also have broken the work into separate PRs.
And the code seemed alright until I noticed a small one-line change: a UI component had been replaced with a comment that stated "Instantiating component now requires X".
Except the new instantiation wasn't anywhere. Their coding agent had commented out instantiating the component instead of figuring out dependency injection.
That component was the container for all of the app's settings.
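To make the failure mode concrete, here is a minimal hypothetical reconstruction; all names are invented, and this is not the actual code from the PR:

```typescript
// Invented names; a sketch of the pattern, not the real diff.

class SettingsContainer {
  constructor(readonly owner: object) {}
}

// Before the PR: the settings container is instantiated at startup.
class AppShellBefore {
  settings = new SettingsContainer(this);
}

// After the PR: the instantiation is gone, replaced by a comment,
// and the promised new wiring never appears anywhere else.
class AppShellAfter {
  // "Instantiating SettingsContainer now requires an AppContext"
}
```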
It's interesting because the PR wasn't entirely useless: individual parts of it were good enough that even if I took over the PR I'd be fine keeping them.
But whatever coded it couldn't understand architecture well enough. I suspect whoever was piloting it probably tested the core functionality and assumed their small UI changes wouldn't break anything.
I hope we normalize just admitting when most of a piece of code is AI generated. I'm not a luddite about these tools, but it also changes how I'll approach a piece of code.
Things that are easy for humans get very hard for AI, and vice versa.
The thing is, if you’re talking about making laws (as GP is), your “surely people understand this difference” strategy doesn’t matter squat and the impact will be larger than you think.
>Even if AI contributed just a tiny bit.
Which would imply autocorrect should be reported as AI use.
I assume the OP was talking about things like LLMs and diffusion models which one could definitely single out for regulatory purposes. At the end of the day I don't think it would ever be realistically possible to have a law like this anyway, at least not one that wouldn't come with a bunch of ambiguity that would need to be resolved in court.
Laws are built on definitions and this hand-wavy BS is how we got nonsense like the current version of the AI act.
People using AI tools in their work is becoming normal. In the end, it doesn't matter how the code is created if the code works and is otherwise high quality. The person contributing is responsible for checking the quality of their contributions. Generally, a pull request that changes half the system for no good reason is clearly not acceptable in most OSS projects. Likewise, a pull request that ignores existing design and conventions is also not acceptable. If you submit such a pull request manually, it will probably also get rejected, and you'll get told off by the repository maintainers.
The beauty of the pull request system is that it puts the responsibility on the PR creator to make sure their pull request is good enough. Creating huge pull requests is generally not appreciated and creates a lot of review overhead. It's also good practice to work via the issue tracker and discuss your plans before you start the work, especially with bigger changes. The problem here is not necessarily AI, but people using AI to create low-quality pull requests and people not communicating properly.
I've not yet received any obvious AI generated pull requests on any of my projects. But I've used codex on my own projects for a few pull requests. I'd probably disclose that fact if I was going to contribute something to somebody else's code base and would also take the time to properly clean up the pull request and make sure it delivers as promised.
I also don't believe it can be one-shotted (there are too many deltas between Notion's API and Obsidian).
With that said, LLMs are great for enumerating edge-cases, and this feels like the perfect task for Codex/Claude Code.
I'd implore the Obsidian team/maintainers to take a stab at building this with LLMs. Based on personal experience, the cost is likely within the same order of magnitude ($100-$1k in API cost + dev time), but the additional context (tests, docs, etc.) will be invaluable for future changes to either API surface.
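To give a feel for those deltas, here is a rough sketch of per-block-type conversion (block shapes follow the public Notion API, version 2022-06-28; the helper names are invented), where the default branch is the long tail:

```typescript
// Sketch only: maps a handful of Notion block types to Markdown.
type RichText = { plain_text: string };

interface NotionBlock {
  type: string;
  paragraph?: { rich_text: RichText[] };
  heading_1?: { rich_text: RichText[] };
  bulleted_list_item?: { rich_text: RichText[] };
  to_do?: { rich_text: RichText[]; checked: boolean };
}

const text = (rt: RichText[] = []) => rt.map((t) => t.plain_text).join("");

function blockToMarkdown(block: NotionBlock): string {
  switch (block.type) {
    case "paragraph":
      return text(block.paragraph?.rich_text);
    case "heading_1":
      return `# ${text(block.heading_1?.rich_text)}`;
    case "bulleted_list_item":
      return `- ${text(block.bulleted_list_item?.rich_text)}`;
    case "to_do":
      return `- [${block.to_do?.checked ? "x" : " "}] ${text(block.to_do?.rich_text)}`;
    default:
      // Synced blocks, databases, columns, callouts, equations...
      // each needs its own decision about what Obsidian should get.
      return `<!-- unsupported block type: ${block.type} -->`;
  }
}
```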
At least have it be split into some files and structured in some way.
spaghetti code is code that snakes through the code base in a hard-to-follow way
My guess is this is close to the level of testing they put forth for ensuring the AI-generated code works (based on my experience with other AI-heavy devs). They didn't take any time to thoroughly review or understand the code. A large file doesn't necessarily mean shoddy work, but it certainly indicates that it's likely.
We all lean on LLMs in one way or another, but HN is becoming infested with wishful prompt engineers. Show, don't tell. Compete on the market instead of shipping yet another PoC.
The bounty here is just $5k, and if you read my comment, I’m suggesting that the maintainer(s), even with LLMs, will likely spend a similar amount in inference + cost of their own time, however they’ll likely produce a solution more robust than what the bounty alone will produce.
To be clear: I’m not advocating that someone simply vibe-codes this up.
That’s already happening (2 PRs when this hit HN), both with negative feedback.
I’m suggesting that the maintainers should give LLM-assisted dev a try here, as they already have context on the Obsidian-side API.
Do you think the person you are replying to is Sam Altman?
A month ago I migrated the company's website and blog from Framer to Astro (https://quesma.com/blog/ if you would like to see the end result).
This weekend I created a summary of Grafana dashboard data. LLMs are tireless at checking hypotheses, running grunt code, seeing results, and iterating on that.
What takes more than a single pass is checking whether the result is correct (nothing missed, nothing confabulated, no default fallbacks) and maintaining code quality (I refactor early and often; this is one place in Claude Code where there is no way around using Opus 4.1). Most of my time spent talking with Claude Code is on refactoring, and that requires the most knowledge of tooling, abstraction, etc.
From my own experience, I don't think that's the case. I wrote a similar Obsidian <-> Notion databases sync script myself some months ago, and I also used AI in the beginning; but I learned really fast what an annoying mess Notion's API is, and how quickly LLMs get hung up on its edge cases. AI is good for getting started with the API, but in the end you still have to fix things up yourself.
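One concrete example of an edge case that first drafts routinely miss: pagination. Database queries return at most 100 results per call. A minimal sketch, assuming a NOTION_TOKEN environment variable and the public query endpoint (API version 2022-06-28):

```typescript
// Sketch: collect every row of a Notion database, following cursors.
async function queryAllPages(databaseId: string): Promise<unknown[]> {
  const results: unknown[] = [];
  let cursor: string | undefined;

  do {
    const res = await fetch(
      `https://api.notion.com/v1/databases/${databaseId}/query`,
      {
        method: "POST",
        headers: {
          Authorization: `Bearer ${process.env.NOTION_TOKEN}`,
          "Notion-Version": "2022-06-28",
          "Content-Type": "application/json",
        },
        body: JSON.stringify(cursor ? { start_cursor: cursor } : {}),
      },
    );
    const page = await res.json();
    results.push(...page.results);
    cursor = page.has_more ? page.next_cursor : undefined;
  } while (cursor);

  return results;
}
```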
- https://github.com/orgs/com-lihaoyi/discussions/6
If you look at that thread, you'll see I've paid out quite a lot in bounties, somewhere around 50-60k USD (the amount is not quite precise, because some bounties I completed myself without paying, and others I paid extra when the work turned out to be more than expected). In exchange, I did manage to get quite a lot of work done for that cost.
You do get some trash, it does take significant work to review, and not everything is amenable to bounties. But for projects that already have interested users and potential collaborators, sometimes 500-1000USD in cash is enough motivation for someone to go from curious to engaged. And if I can pay someone 500-1000USD to save me a week of work (and associated context switching) it can definitely be worth the cost.
The bounties are certainly not a living wage for people, especially compared to my peers making 1mUSD/yr at some big tech FAANG. It's just a token of appreciation that somehow feels qualitatively different from the money that comes in your twice-monthly paycheck
Granted this does feel a bit less like asking for spec work so I can see why they might have chosen to go this way instead of generically accepting bounties.
I posted a list of projects offering bounties elsewhere [1] in the thread.
[1] https://news.ycombinator.com/item?id=45278787
https://github.com/Quorafind/Bases-Toolbox
https://tinygrad.org/#worktiny
That being said, yay open source bounties! People should do more of those.
1. Tenstorrent https://github.com/tenstorrent/tt-metal/issues?q=is%3Aissue%... $200 - $3,000 bounties
2. microG https://github.com/microg/GmsCore/issues/2994 $10,000 bounty
3. Li Haoyi https://github.com/orgs/com-lihaoyi/discussions/6 multiple bounties (already mentioned upthread)
4. Algora also hosts bounties for COSS (Commercial OSS) https://algora.io/bounties
Although Obsidian isn’t open source, the community has a very similar vibe. Very anti-big-corporate-overlord.
But maybe not; maybe the world of bounties is just one I'm not in the loop on, and this is common.
I can’t figure out why it’s so damn slow. But also, how is it any better than Notion at that point?
As for why it's better than Notion at that point: well, if you wanted to, you could use a "faster" app like iA Writer on your phone, or anything that can open Markdown. That remains its biggest benefit; you're never locked in to files that exist only on someone else's server.
Today I learned that Obsidian has an API for this. Still, I wonder why it's not just easier to use Notion's "download your pages as markdown". Notion would very much dislike an API that allows users to migrate away from it, and would probably actively sabotage one. "Download notes as markdown" is, however, a user service, which they probably don't want to break (maybe that has changed now that they added an offline mode, too late; I don't know).
I would love a two-way sync between Notion and an Obsidian vault. Notion is focused on online collaboration; Obsidian is focused on file-over-app personal software customization. There's so much Obsidian can do, especially with plugins, that Notion isn't able to address. If we can make the two work together, even if it's not perfectly seamless, it would extend the usefulness of both tools by uniting their strengths and avoiding the tradeoff of their weaknesses.
If only I had an extra 24h per day I'd build it myself, but it needs some fairly complex machinery for change tracking and merging, which would require ongoing support, so it's not something I can tackle responsibly as a weekend project.
At the least we could offer YAML frontmatter as an option for Notion's markdown export feature. Maybe I can get to that today; I have a few spare hours.
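Here is roughly what that option could look like; a hedged sketch assuming Notion's documented property shapes (the per-type mapping is just one possible convention, and production YAML would need quoting and escaping):

```typescript
// Sketch: flatten Notion page properties into naive YAML frontmatter.
type NotionProperty =
  | { type: "title"; title: { plain_text: string }[] }
  | { type: "select"; select: { name: string } | null }
  | { type: "multi_select"; multi_select: { name: string }[] }
  | { type: "date"; date: { start: string } | null }
  | { type: "checkbox"; checkbox: boolean };

function toFrontmatter(props: Record<string, NotionProperty>): string {
  const lines = ["---"];
  for (const [name, prop] of Object.entries(props)) {
    switch (prop.type) {
      case "title":
        lines.push(`${name}: ${prop.title.map((t) => t.plain_text).join("")}`);
        break;
      case "select":
        if (prop.select) lines.push(`${name}: ${prop.select.name}`);
        break;
      case "multi_select":
        lines.push(`${name}: [${prop.multi_select.map((o) => o.name).join(", ")}]`);
        break;
      case "date":
        if (prop.date) lines.push(`${name}: ${prop.date.start}`);
        break;
      case "checkbox":
        lines.push(`${name}: ${prop.checkbox}`);
        break;
    }
  }
  lines.push("---");
  return lines.join("\n");
}
```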
That's why I've been working on Relay [0], a privacy-preserving, local-first collaboration plugin for Obsidian.
Our customers really like being able to self-host Relay Servers for complete document privacy while using our global identity system to do cross-org collaboration with anyone in the world.
[0] https://relay.md
Some things I couldn't manage through the API:
- add a date tag to a text block
- create a code block longer than 2000 characters
- set a code block to wrap
This is based on very limited interaction but it feels like I’ve hit many snags early on. I imagine things get harder when you get into some of the more advanced or newer functionality.
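For what it's worth, the code-block snag has a documented root cause: the API caps each rich_text item's text.content at 2000 characters, but a single code block can carry multiple rich_text items. A sketch of the chunking workaround (function name invented):

```typescript
// Sketch: build a Notion code block from source of arbitrary length
// by splitting it into 2000-character rich_text chunks.
function codeBlock(source: string, language = "plain text") {
  const chunks: string[] = [];
  for (let i = 0; i < source.length; i += 2000) {
    chunks.push(source.slice(i, i + 2000));
  }
  return {
    type: "code",
    code: {
      language,
      rich_text: chunks.map((content) => ({
        type: "text",
        text: { content },
      })),
    },
  };
}
```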
Please invest in the API. I will only love Notion more.
1 more comment available on Hacker News