IBM z17 Time Synchronization Resiliency Enhancements
Posted 25 days ago · Active 23 days ago
Source: planetmainframe.com · Tech Discussion · story
Key moments
- Story posted: Dec 8, 2025 at 2:07 PM EST (25 days ago)
- First comment: Dec 8, 2025 at 2:20 PM EST (13m after posting)
- Peak activity: 1 comment in the 0-3h window
- Latest activity: Dec 10, 2025 at 9:13 AM EST (23 days ago)
All that would remain is an eye-watering hardware and licensing bill.
Given that, having to manage two layers of parallelism just to maximize some super-expensive hardware seems like a non-starter. The appeal of zArchitecture, I think, is that you can use a set of well-developed tools and frameworks like DB2 and CICS to build a certain sort of application. The early motivation for Sysplex was that IBM had to transition from bipolar to CMOS transistors, and the first CMOS mainframes could not match the performance of the biggest bipolar mainframes, so they needed N CMOS mainframes to do the job of one bipolar machine, where N is a small number.
The vision I do get out of this idea is some kind of system with a very smart compiler that looks at things in a fractal manner: it knows you can apply SIMD to a calculation, then apply SMP to it, then apply clustering techniques, and who knows, a "cluster of clusters" might make sense for geographically distributed situations. I think of
https://en.wikipedia.org/wiki/Cache-oblivious_algorithm
but not so much the algorithm being oblivious; rather, the compiler very much models the opportunities for parallelism and the costs of moving data around, so the application developer can be "oblivious" to it all.
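To make the cache-oblivious idea concrete, here is a minimal sketch of the classic divide-and-conquer matrix transpose (the standard example from the linked article, not anything IBM-specific). The recursion keeps halving the longer dimension, so at some depth every subproblem fits in each level of the cache hierarchy without the code ever knowing the cache sizes; the base-case threshold of 16 elements is an arbitrary illustrative choice.

```python
def transpose(src, dst, r0, r1, c0, c1):
    """Cache-obliviously transpose src[r0:r1][c0:c1] into dst[c0:c1][r0:r1].

    src and dst are plain lists of lists; dst must be pre-allocated with
    shape (cols of src) x (rows of src).
    """
    rows, cols = r1 - r0, c1 - c0
    if rows * cols <= 16:
        # Base case: the block is small, transpose it element by element.
        for i in range(r0, r1):
            for j in range(c0, c1):
                dst[j][i] = src[i][j]
    elif rows >= cols:
        # Split along the row dimension (the longer one).
        mid = (r0 + r1) // 2
        transpose(src, dst, r0, mid, c0, c1)
        transpose(src, dst, mid, r1, c0, c1)
    else:
        # Split along the column dimension.
        mid = (c0 + c1) // 2
        transpose(src, dst, r0, r1, c0, mid)
        transpose(src, dst, r0, r1, mid, c1)
```

The same recursive structure is what makes the "fractal" compiler vision plausible: each split point is also a natural place to fork SMP threads or distribute work across a cluster, since the subproblems are independent.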