
Lucent 7 R/E 5ESS Telephone Switch Rescue (2024)

49 points
27 comments

Mood

thoughtful

Sentiment

positive

Category

tech

Key topics

telecom history

vintage technology

infrastructure reliability

Debate intensity

20/100

The rescue and restoration of a Lucent 7 R/E 5ESS Telephone Switch highlights the ingenuity of past telecom systems and sparks discussion on the trade-offs between old and new technologies. The story and comments reflect a nostalgia for robust, understandable systems and a concern about the reliability of modern infrastructure.

Snapshot generated from the HN discussion

Discussion Activity

Light discussion

First comment

31m

Peak period

5 comments in Hour 2

Avg / period

2.2

Comment distribution

22 data points

Based on 22 loaded comments

Key moments

  1. Story posted

    11/18/2025, 11:59:14 PM

    19h ago
  2. First comment

    11/19/2025, 12:30:42 AM

    31m after posting
  3. Peak activity

    5 comments in Hour 2

    Hottest window of the conversation
  4. Latest activity

    11/19/2025, 6:07:40 PM

    1h ago


Discussion (27 comments)
Showing 22 comments of 27
yborg
18h ago
2 replies
I wonder how many operating 5ESS are left now.
Aloha
17h ago
5 replies
A fairly large number. A bigger question is what happens to all the CO buildings once all the copper is turned down.

There is a huge opportunity about 5 years from now for edge datacenters. You have these buildings with highly reliable power and connectivity; all that's needed is servers that can live in a NEBS environment.

bluedino
17h ago
1 reply
Most of those COs are in buildings that don't have all that much space, were built in the '40s and '50s, and likely aren't suitable for that kind of thing. Cooling would be a big deal.
Aloha
17h ago
I have been in ~15 COs. There is tons of floor space in them, and the only thing telephone switching equipment has done since the '50s is shrink. Beyond that, most existing CO buildings had expansions when electronic switching came about, because they couldn't add the new electronic switches (1/1A/5 ESS) without additional floor space. Cooling is addressed by the requirement for NEBS-compliant equipment.
kjs3
15h ago
The CO closest to me was turned into condos. A friend was the general contractor. It was by all accounts a nightmare.
kayfox
17h ago
COs are already being used for edge datacenters; it's just not talked about much outside the industry.
bediger4000
15h ago
Central offices are everywhere, too. You've driven or walked by any number of them, and the most you noticed was a Bell System logo. The downtown COs in big cities are on expensive real estate.
shrubble
12h ago
Across the USA? Very likely a few thousand.
g-mork
18h ago
1 reply
Talk about a gargantuan project... also awesome to bag such a thing. He's lucky to even have the resources to store^W warehouse it.
userbinator
15h ago
It's not that much space in some parts of the US where properties are measured in acres.
jakedata
18h ago
1 reply
Visiting Bletchley Park and seeing step-by-step telephone switching equipment repurposed for computing reinforced my appreciation for the brilliance of the telecommunication systems we created in the past 150 years. Packet switching was inevitable and IP-everything makes sense in today's world, but something was lost in that transition too. I am glad to see that enthusiasts with the will and means are working to preserve some of that history. -Posted from SC2025-
dekhn
18h ago
1 reply
I wanted to learn more about computer hardware in college, so I took a class called "Cybernetics" (taught by D. Huffman). I thought we were going to focus on modern stuff, but instead it was a tour of information theory, which included various mathematical routing concepts (kissing spheres/spherical codes, Karnaugh maps). At the time I thought it was boring, but a couple of decades later, when working on Clos topologies, it came in handy.

Other interesting notes: the invention of telegraphy and the improvements to the underlying electrical systems really helped me understand communications in the 1800s better. And reading/watching The Cuckoo's Egg (with the German relay-based telephones) made me appreciate modern digital transistor-based systems.

Even today, when I work on electrical projects in my garage, I am absolutely blown away with how much people could do with limited understanding and technology 100+ years ago compared to what I'm able to cobble together. I know Newton said he saw farther by standing on the shoulders of giants, but some days I feel like I'm standing on a giant, looking backwards and thinking "I am not worthy".
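
The Clos reference above is easy to make concrete: a three-stage Clos network needs far fewer crosspoints than one big crossbar for the same port count. A minimal sketch in Python (my own illustration, not from the thread; the port counts are hypothetical, and it uses the classic strict-sense non-blocking condition m = 2n - 1):

    # A minimal sketch (my own, not from the thread): crosspoint counts for
    # a strict-sense non-blocking three-stage Clos network versus a single
    # N x N crossbar. All port counts below are hypothetical.
    def clos_crosspoints(n: int, r: int) -> int:
        # r ingress switches (n x m), m middle switches (r x r),
        # r egress switches (m x n); m = 2n - 1 gives strict-sense
        # non-blocking operation (Clos, 1953).
        m = 2 * n - 1
        return 2 * r * n * m + m * r * r

    def crossbar_crosspoints(ports: int) -> int:
        return ports * ports

    N, n = 1024, 32          # hypothetical: 1024 total ports, 32 per edge switch
    r = N // n               # number of edge switches
    print(crossbar_crosspoints(N))  # 1048576 crosspoints
    print(clos_crosspoints(n, r))   # 193536 -- roughly 5x fewer for the same ports

The same crosspoint-economy argument that motivated Clos's 1953 paper for telephone switch fabrics is why the topology reappeared in datacenter networks.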

Animats
13h ago
When the Bell System broke up, the old guys wrote a 3-volume technical history of the Bell System.[1] So all that is well documented.

The history of automatic telephony in the Bell System is roughly:

- Step-by-step switches. 1920s. Very reliable in terms of hardware failure, but about 1% of calls misdirected or failed. Totally distributed: you could remove any switch, and all it would do is reduce the capacity of the system slightly. Too much hardware per line.

- Panel. 1930s. Scaled better, to large-city central offices. Less hardware per line. Beginnings of common control. Too complex mechanically. Lots of driveshafts, motors, and clutches.

- Crossbar. 1940s. #5 crossbar was a big dumb switch fabric controlled by a distributed set of microservices, all built from relays. The most elegant architecture. All reliable wire relays, no more motors and gears. If you have to design high-reliability systems, it's worth knowing how #5 crossbar worked.

- 1ESS - first US electronic switching. 1960s. Two mainframe computers (one spare) controlling a big dumb switch fabric. Worked, but clunky.

- 5ESS - good US electronic switching. Two or more minicomputers controlling a big dumb switch fabric. Very good.

The Museum of Communications in Seattle has step-by-step, panel, and crossbar systems all working and interconnected.

In the entire history of electromechanical switching in the Bell System, no central office was ever fully down for more than 30 minutes for any reason other than a natural disaster and, in one case, a fire in the cable plant. That record has not been maintained in the computer era. It is worth understanding why.

[1] https://archive.org/details/bellsystem_HistoryOfEngineeringA...
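
That 30-minute figure is worth turning into a number. A back-of-the-envelope availability calculation (my own sketch, with an assumed 40-year electromechanical era; the era length is hypothetical, not from the thread):

    # Rough availability arithmetic (my own sketch; the 40-year era length
    # is an assumption): at most 30 minutes of total downtime per office.
    era_minutes = 40 * 365.25 * 24 * 60   # assumed 40 years of service
    downtime = 30                         # worst case cited above, in minutes
    availability = 1 - downtime / era_minutes
    print(f"{availability:.8f}")          # 0.99999857 -- nearly six nines

Under those assumptions, the worst office in the system would still clear five nines of availability, a bar most modern platforms state as an aspiration rather than a record.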

CaliforniaKarl
18h ago
1 reply
(2024), but still a good read!
gjvc
13h ago
This date obsession is moronic, especially when we are talking about technology over forty years old. Next time you are tempted to spam the date, wait, and see if conversation still happens without your vital input.

There are many articles missing a (2025) addition, so get to work!

kev009
10h ago
1 reply
Author here, surprised to see this here, but happy to see interest in these historical machines.

I will also plug the Connections Museum, which has an already neat installation in Seattle and is working on its own 5ESS recovery for display at a new site in Colorado: https://www.youtube.com/watch?v=3X3-xeuGI5o

Aloha
4h ago
1 reply
Did you end up bringing up the AM and playing around with it?
kev009
1h ago
I've been squeezed a bit hard since then, so not yet. I did get a couple of DC power supplies with enough capacity to at least get the AM and a few other shelves up, so if things slow down over the holidays I may try to image the disks for a recovery point and then see about bringing it up.
hasbot
6h ago
My first development job was as a software developer at Bell Labs in Naperville working on the 5E. I started at the end of 5E4 (the 4th revision) and then worked on 5E5 and 5E6. I went from school, writing maybe 1000-line programs, to maintaining and enhancing a system composed of millions of lines of code and hundreds of developers. Most of the code itself was very simple, but it was the interactions between modules and switching features that were very complex.
palmotea
11h ago
> In particular, the machine had an uptime of approximately 35 years including two significant retrofits to newer technology culminating in the current Lucent-dressed 7 R/E configuration...

Pretty impressive. It makes me sad that the trend is to move away from rock-solid stuff towards more and more unreliable and unpredictable stuff (e.g. LLMs that need constant human monitoring because they mess up so much).

luckyturkey
10h ago
This is such a stark contrast with how "critical infrastructure" is built now.

A university bought a 5ESS in the 80s, ran it for ~35 years, did two major retrofits, and it just kept going. One physical system, understandable by humans with schematics, that degrades gracefully and can be literally moved with trucks and patience. The whole thing is engineered around physical constraints: -48V, cable management, alarm loops, test circuits, rings. You can walk it, trace it, power it.

Modern telco / "UC" is the opposite: logical sprawl over other people's hardware, opaque vendor blobs, licensing servers, soft switches that are really just big Java apps hoping the underlying cloud doesn't get "optimized" out from under them. When the vendor loses interest, the product dies no matter how many 9s it had last quarter.

The irony is that the 5ESS looks overbuilt until you realize its total lifecycle cost was probably lower than three generations of forklifted VoIP, PBX, and UC migrations, plus all the consulting. Bell Labs treated switching as a capital asset with a 30-year horizon. The industry now treats it as a revenue stream with a 3-year sales quota.

Preserving something like this isn't just nostalgia, it's preserving an existence proof: telephony at planetary scale was solved with understandable, serviceable systems that could run for decades. That design philosophy has mostly vanished from commercial practice, but it's still incredibly relevant if you care about building anything that's supposed to outlive the current funding cycle.
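
The lifecycle-cost point above can be made concrete with deliberately invented numbers. A purely illustrative sketch (every dollar figure below is hypothetical; nothing here comes from the thread or from real pricing):

    # Purely illustrative arithmetic -- all dollar figures are invented.
    # One switch kept for 30 years with two mid-life retrofits, versus
    # three successive ~10-year platforms, each with migration/consulting.
    long_lived = 5.0 + 2 * 1.5     # initial capex + two retrofits ($M)
    churn = 3 * (2.5 + 1.5)        # three platforms: capex + migration ($M)
    print(f"30-year switch: ${long_lived}M vs three migrations: ${churn}M")
    # 8.0 vs 12.0: replacement cycles can dominate even when each
    # individual platform is cheaper than the long-lived system.

The structure of the comparison, not the invented figures, is the point: migration and consulting costs recur with every forklift cycle, while a retrofit amortizes the original plant.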

5 more comments available on Hacker News

ID: 45973955 · Type: story · Last synced: 11/19/2025, 7:26:53 PM
