We Are Open Sourcing the Mathematical Universe
Posted 3 months ago · Active 3 months ago
github.com · Research · story
supportive · positive
Debate: 20/100
Key topics
Mathematics
Open-Source
Artificial Intelligence
Knowledge Representation
The Mathematical Universe project is being open-sourced on GitHub, sparking interest and discussion among HN users about its potential applications and implications for mathematical knowledge representation.
Snapshot generated from the HN discussion
Discussion Activity
- Activity level: Light discussion
- First comment: N/A
- Peak period: 4 comments in 0-6h
- Avg / period: 2.2
Comment distribution: 11 data points
Based on 11 loaded comments
Key moments
- 01 Story posted: Oct 10, 2025 at 3:26 PM EDT (3 months ago)
- 02 First comment: Oct 10, 2025 at 3:26 PM EDT (0s after posting)
- 03 Peak activity: 4 comments in 0-6h, the hottest window of the conversation
- 04 Latest activity: Oct 13, 2025 at 6:30 AM EDT (3 months ago)
ID: 45542758 · Type: story · Last synced: 11/20/2025, 3:44:06 PM
Want the full context?
Jump to the original sources
Read the primary article or dive into the live Hacker News thread when you're ready.
This framework gives you a playground to explore, manipulate, and apply the patterns that shape the universe’s most intricate structures, unlocking new possibilities in AI, quantum computing, and scientific discovery.
Link: https://www.uor.foundation/blog/universe
Greetings from Colorado, and we welcome any feedback.
This just happens to match the original GPT-3 parameters below, with the key difference that Atlas enables unique deterministic data addressing, which does not require expensive transformer training.
https://dugas.ch/artificial_curiosity/GPT_architecture.html
"Decoding. We're almost there! Having passed through all 96 layers of GPT-3's attention/neural net machinery, the input has been processed into a 2048 x 12288 matrix. This matrix is supposed to contain, for each of the 2048 output positions in the sequence, a 12288-vector of information about which word should appear."
https://youtu.be/RXJKdh1KZ0w?si=ARMURmLUqvZZ8kd9
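The comment does not spell out how Atlas's addressing actually works. As a rough illustration of what "deterministic data addressing" can mean in general (content-addressing by hash, so an address is recomputable from the data itself with no training step), here is a sketch; the `address_of` helper, the 64-bit address width, and the use of SHA-256 are illustrative assumptions, not the project's actual scheme.

```python
import hashlib

def address_of(data: bytes, bits: int = 64) -> int:
    """Derive an address from content alone (assumed scheme, for
    illustration): the same bytes always map to the same address,
    and no model has to be trained to compute it."""
    digest = hashlib.sha256(data).digest()
    return int.from_bytes(digest[: bits // 8], "big")

store: dict[int, bytes] = {}

def put(data: bytes) -> int:
    """Store data under its content-derived address."""
    addr = address_of(data)
    store[addr] = data
    return addr

addr = put(b"e = mc^2")
assert store[addr] == b"e = mc^2"
assert address_of(b"e = mc^2") == addr  # deterministic: recomputable anytime
```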
You will probably need to read a book or two to understand all of this, but it has so many applications that actually reading them would not be time wasted: data structures, deep learning using topology, lossless embedding, an introduction to P vs NP, and so on. A lot of ideas may emerge from this kind of work.