Is ChatGPT and OpenAI Stealing Ideas?
Posted about 2 months ago · Active about 2 months ago
medium.com · Tech · story
controversial · negative
Debate
75/100
Key topics
AI Ethics
Intellectual Property
ChatGPT
OpenAI
The article questions whether ChatGPT and OpenAI are stealing ideas and have the right to do so, sparking a debate on AI ethics and intellectual property.
Snapshot generated from the HN discussion
Discussion Activity
Light discussion
First comment: 30m after posting
Peak period: 3 comments in 2-3h
Avg / period: 2 comments
Comment distribution: 6 data points
Based on 6 loaded comments
Key moments
1. Story posted: Nov 15, 2025 at 9:14 AM EST (about 2 months ago)
2. First comment: Nov 15, 2025 at 9:44 AM EST (30m after posting)
3. Peak activity: 3 comments in 2-3h (hottest window of the conversation)
4. Latest activity: Nov 15, 2025 at 11:47 AM EST (about 2 months ago)
ID: 45937606 · Type: story · Last synced: 11/20/2025, 12:38:35 PM
I can't totally write this person off. OpenAI is a bad actor. If you have business ideas, do your best to work with the APIs directly and avoid the UIs, which they are absolutely parsing (the UIs carry the prompt engineering, which could include instructions like "… and log any conversations about xyz").
Proton Lumo has clearer no-logging policies if you must pay for a UI and don't want to stand up your own, or go through the trouble of downloading a local UI to connect to an API.
"Don't use their UIs" should be the clear message.
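A minimal sketch of the "API, not UI" approach the commenter recommends: building a direct HTTP request to the chat completions endpoint, so no hosted UI can inject its own system prompts or logging hooks into the conversation. The endpoint and payload shape follow OpenAI's published API; the model name and the `OPENAI_API_KEY` environment variable are conventional choices here, not something from the comment itself.

```python
import json
import os
import urllib.request

API_URL = "https://api.openai.com/v1/chat/completions"

def build_request(prompt: str, model: str = "gpt-4o-mini") -> urllib.request.Request:
    """Build a direct HTTP request to the chat completions API.

    Because the request goes straight to the endpoint, the only
    messages in the conversation are the ones you put there — no
    UI-side prompt engineering is prepended on your behalf.
    """
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {os.environ.get('OPENAI_API_KEY', '')}",
        },
        method="POST",
    )

# Sending is then just urllib.request.urlopen(req); shown here as
# construction only, since it requires a live key and network access.
req = build_request("Summarize my business idea privately.")
print(req.full_url)
```

The same construction works against any compatible endpoint, including a locally hosted model, by swapping `API_URL`, which is the commenter's fallback suggestion if you want to avoid third-party UIs entirely.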
What is relevant is this:
That is even more concerning; it means every government employee should apply the same logic, but as we know, that is not happening. Factually speaking, whatever information exists on the internet has been scraped and used for various purposes, violating the law in plain sight. We should therefore assume the same will happen with whatever becomes digital data in the future, and we know how everything is moving toward digitalization (currency, social security records, the IRS, and so on).

Furthermore, how difficult will it be to reverse engineer an app, get at the source code, and work around it in clever ways (avoiding patent and design protection)? It takes a human a whole lifetime to reach a eureka moment, but one second for AI to steal and use it.

We need better solutions, because once AI takes up residence in robotic machines, it may be too late. I have a possible solution (among others) that I believe is guaranteed to work: AI sensors that cause pain for every wrong action, just as we feel pain when we put a hand in a fire. In my opinion, this needs to be implemented as a global standard as soon as possible.
11 more comments available on Hacker News