Can Text Be Made to Sound More Than Just Its Words? (2022)
Posted 2 months ago · Active about 2 months ago
arxiv.org · Tech story
Tone: calm/neutral
Debate: 20/100
Key topics: NLP, Digital Communication, Tone Analysis, Human-Computer Interaction
The post discusses whether text can be made to convey more than its literal words, exploring potential technological or cultural solutions.
Snapshot generated from the HN discussion
Discussion Activity
First comment: 13d after posting
Peak period: Day 13 (18 comments)
Avg per period: 18 comments
Based on 18 loaded comments
Key moments
1. Story posted: Nov 2, 2025 at 5:17 PM EST (2 months ago)
2. First comment: Nov 15, 2025 at 7:00 AM EST (13d after posting)
3. Peak activity: 18 comments on Day 13, the hottest window of the conversation
4. Latest activity: Nov 15, 2025 at 11:21 AM EST (about 2 months ago)
ID: 45793920 · Type: story · Last synced: 11/20/2025, 7:50:26 PM
In Akan languages it is not difficult to conceive of how the same word can be written in different ways to convey another dimension.
Anyone who speaks an Akan language will understand that each of the words below means "good", but with a slightly different emphasis.
papa papaaapa papapapapapa
What is the linguistic term for this concept?
ChatGPT also explained the concept of ideophones, which was helpful:
https://chatgpt.com/share/69187b3e-7948-8001-9fea-2b4412d5a7...
I don't know if 20% is correct, but I feel it's very close. I also think a lot of internet arguments happen as a direct result of miscommunication. Emojis are great, but they get abused to the point that HN filters them out. Perhaps readers could toggle whether they want to see emojis?
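A per-reader emoji toggle like the one suggested above could be sketched as a simple display filter. This is a minimal sketch, not HN's actual behavior; the function name and the Unicode ranges covered are illustrative assumptions.

```python
import re

# Rough emoji matcher over a few common emoji code-point blocks.
# Illustrative only: real emoji detection needs more ranges (and
# handling of ZWJ sequences, skin-tone modifiers, etc.).
EMOJI_RE = re.compile(
    "["
    "\U0001F300-\U0001F5FF"   # symbols & pictographs
    "\U0001F600-\U0001F64F"   # emoticons
    "\U0001F680-\U0001F6FF"   # transport & map symbols
    "\U0001F900-\U0001F9FF"   # supplemental symbols
    "\u2600-\u27BF"           # misc symbols & dingbats
    "]+"
)

def render_comment(text: str, show_emojis: bool) -> str:
    """Return the comment unchanged, or with emoji runs stripped."""
    return text if show_emojis else EMOJI_RE.sub("", text)

comment = "Great point \U0001F600\U0001F44D thanks!"
print(render_comment(comment, show_emojis=False))  # emojis removed
print(render_comment(comment, show_emojis=True))   # unchanged
```

The toggle lives entirely on the display side, so the stored comment text is never altered; each reader just picks a rendering.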
How much of the book will you understand if you only read page 1?
I would load up audio files in Audacity and look at them to see how the audio "looked", as a function of how intense each frequency is over time.
You can even set a track to spectrogram view while recording, which lets you see the sound in real time.
Music also tends to be very beautiful in the spectrogram, and birdsong too. Sometimes I would see a bird first, and only afterwards notice it in my field of hearing.
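The "how intense each frequency is over time" view described above is a short-time Fourier transform. Here is a minimal NumPy sketch of that idea (a simplified illustration, not Audacity's actual implementation; all names are my own):

```python
import numpy as np

def spectrogram(signal, sample_rate, frame_len=1024, hop=256):
    """Magnitude spectrogram: slice the signal into overlapping
    windowed frames, FFT each frame, and stack the magnitudes so
    rows are frequency bins and columns are time."""
    window = np.hanning(frame_len)
    n_frames = 1 + (len(signal) - frame_len) // hop
    frames = np.stack([
        signal[i * hop : i * hop + frame_len] * window
        for i in range(n_frames)
    ])
    spec = np.abs(np.fft.rfft(frames, axis=1))          # (time, freq)
    freqs = np.fft.rfftfreq(frame_len, d=1.0 / sample_rate)
    return spec.T, freqs                                # (freq, time)

# A pure 440 Hz tone should light up the bin nearest 440 Hz.
sr = 8000
t = np.arange(sr) / sr
spec, freqs = spectrogram(np.sin(2 * np.pi * 440 * t), sr)
peak_bin = spec.mean(axis=1).argmax()
print(freqs[peak_bin])  # close to 440 Hz
```

Speech, music, and birdsong each leave distinctive shapes in this frequency-vs-time picture, which is why voices and common words become visually recognizable with practice.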
I noticed while analyzing a podcast that I began to recognize common words like "you." I also noticed that I was able to easily distinguish between different people's voices.
I had to wonder: if I were deaf, or if I became deaf, would I suddenly have a strong motivation to learn how to read these things? To develop some kind of device that would show them to me 24 hours a day?
I have not done this, but the project has remained in the back of my mind for over a decade.
Does anyone else know more about this? Does such a device exist?
I think that only some linguists learn how to read spectrograms, but it seems like something that might be extremely useful to any hearing-impaired person.
Relating to the article, I think one could quickly learn to read them fluently (e.g. as subtitles, perhaps overlaid on real life), and of course you get the tonal information built in for free: that's what a spectrogram is!
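The "tonal information for free" point can be illustrated by tracking the dominant frequency in each analysis frame. This is a crude stand-in for real pitch tracking (which must handle harmonics and noise), and every name here is a hypothetical sketch:

```python
import numpy as np

def dominant_freq_track(signal, sr, frame_len=2048, hop=512):
    """Per-frame dominant frequency: a rough 'tone contour' read
    straight off the spectrogram columns."""
    window = np.hanning(frame_len)
    freqs = np.fft.rfftfreq(frame_len, 1.0 / sr)
    track = []
    for start in range(0, len(signal) - frame_len + 1, hop):
        mag = np.abs(np.fft.rfft(signal[start:start + frame_len] * window))
        track.append(freqs[mag.argmax()])   # loudest bin in this frame
    return track

# A rising tone: 200 Hz for half a second, then 400 Hz.
sr = 8000
t = np.arange(sr // 2) / sr
sig = np.concatenate([np.sin(2 * np.pi * 200 * t),
                      np.sin(2 * np.pi * 400 * t)])
track = dominant_freq_track(sig, sr)
print(track[0], track[-1])  # near 200 Hz, then near 400 Hz
```

A rising or falling contour like this is exactly the kind of tonal dimension that plain text drops, and that a spectrogram overlay would restore visually.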
12 more comments available on Hacker News