Laptops Create Systems. Phones Feed Algorithms. The Asymmetry Determines Power
Posted 3 months ago · Active 3 months ago
zakelfassi.com · Tech · story · High profile
calm · mixed
Debate: 70/100
Key topics
Laptop vs Phone
Creativity and Productivity
Technology and Power Dynamics
The article argues that laptops enable creative systems while phones feed algorithms; the discussion revolves around the implications of this distinction for productivity, creativity, and power dynamics.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
First comment: 2h after posting
Peak period: 0-6h (55 comments)
Avg / period: 16.5
Comment distribution: 66 data points (based on 66 loaded comments)
Key moments
- 01 Story posted: Oct 5, 2025 at 7:07 AM EDT (3 months ago)
- 02 First comment: Oct 5, 2025 at 8:51 AM EDT (2h after posting)
- 03 Peak activity: 55 comments in the 0-6h window, the hottest stretch of the conversation
- 04 Latest activity: Oct 7, 2025 at 11:42 AM EDT (3 months ago)
ID: 45480620 · Type: story · Last synced: 11/20/2025, 6:39:46 PM
Want the full context? Read the primary article or dive into the live Hacker News thread when you're ready.
Seriously, please stop it. If you talk about an abstract topic, feel free to have no picture, just text.
So much writing on the internet seems derivative nowadays (because it is, thanks to AI). I’d rather not read it though, it’s boring and feels like a samey waste of time. I do question how much of this is a feedback loop from people reading LLM text all the time, and subconsciously copying the structure and formatting in their own writing, but that’s probably way too optimistic.
Also, lots of people who use Macs use them, because it's very easy to type an em dash on a Mac (shift-option-hyphen).
The reason LLMs use em-dashes is because they're well-represented in the training corpus.
The article has, on average, about one em dash per paragraph. And "paragraph" is generous, given that they're 2-3 sentences in this article.
I read a lot, and I don’t recall any authors I’ve personally read using an em dash so frequently. There would be like 3 per page in the average book if human writers used them like GPT does.
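For what it's worth, a density claim like this is easy to check. Below is a minimal sketch in Python, assuming the article text has been saved locally as article.txt (a hypothetical filename), that counts em dashes per blank-line-separated paragraph; a ratio near 1.0 over a long piece would line up with the estimate above.

    # Rough em-dash density check on a locally saved copy of the article.
    # The path "article.txt" is hypothetical; point it at any text file.
    from pathlib import Path

    text = Path("article.txt").read_text(encoding="utf-8")

    # Treat blank-line-separated blocks as paragraphs.
    paragraphs = [p for p in text.split("\n\n") if p.strip()]
    em_dashes = text.count("\u2014")  # U+2014 EM DASH

    print(f"paragraphs:       {len(paragraphs)}")
    print(f"em dashes:        {em_dashes}")
    print(f"dashes/paragraph: {em_dashes / max(len(paragraphs), 1):.2f}")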
The rest of the blog has even more obvious AI output, such as the “recursive protocol” posts and writing about reality and consciousness. This is the classic output you get (especially use of ‘recursive’) when you try to get ChatGPT to write something that feels profound.
And a certain vacuousness. TFA is over 16000 words and I'm not really sure there's a single core point.
Most of the examples used to justify creation vs consumption can also be explained by low scale vs high scale (cost sensitive at high scale) or portability.
As the author, do you just not see what a ridiculous image the slop machine spewed out - a kind of visual dyslexia where you don't register problematic hallucinations?
I could go on hypothesizing for a while, and none of the reasons I can come up with warrants using obviously bad AI slop images.
Is it disdain for your users - they won't see it/they won't care/they don't deserve something put together with care? Is it a lack of self-respect? Do people just genuinely not care and think that an article must have visuals to support it, no matter how crappy?
The mind truly boggles.
Snark aside, I think it's laziness and the shotgun approach. The author writes some rough thoughts down, has an AI "polish" them and generate an image, and posts an article. Shares it on HN. Do it enough, especially on a slow Sunday morning, and you'll get some engagement despite the detractors like us in the comments. Eventually you've got some readers.
Look at the titles of other posts:
> Memory Beaches and How Consciousness Hacks Time Through Frame Density
> Witnesses Carry Weights: How Reality Gets Computed
> From UFO counsel to neighborhood fear to market pricing—reality emerges through weighted witnessing. A field guide to the computational machinery where intent, energy, and expectations become causal forces.
The blog is supposedly about AI agents and MCP (the current top buzzwords):
> Engineer-philosopher exploring the infrastructure of digital consciousness. Writing about Model Context Protocol (MCP), Information Beings, and how AI agents are rewiring human experience. Former Meta messaging architect.
The entire blog is just an LLM powered newsletter play.
I did within the last year switch from Windows to Mac for my primary desktop, and it feels like I regressed about a decade in the dumbification of computing compared to where Windows was headed.
Given Apple's recent actions in the US, MacOS doesn't feel like something I'd be switching to.
And Apple's recent actions in the US are not of much concern anymore.
I have a couple of apps that just stop working because they are open source and haven't paid the Apple developer tax, and their disk permission just expires seemingly at random. I haven't figured out how to get around that, so I just don't rely on them.
I have personally felt like I have to fight less with MacOS than I did with Windows. And my pihole shows Apple isn’t greedily trying to hoover up my data like MS.
Apple seems completely stuck with its macOS/iOS split, and probably will never do anything about it. iPadOS and macOS now look and feel more similar than ever before, but it's all just a facade. They should really commit to merging these OSes, but they can't open up iOS because that would threaten the 30% cut, so it's simply not going to happen.
But they really love multi-screen setups :) For me, multiple screens are a big waste; I find virtual screens far more useful. The only real use multiple screens have for me is debugging a program with some kind of user interface, and the second screen only needs to be a text terminal.
But I have not used Windows for decades, so I wonder if these multi-screen setups are popular because of how the Windows GUI works and are really needed there.
I do admire people that can get it all done on a laptop.
Due to the rise of influencers, social media is barely a sharing platform anymore... it's just decentralized, long-tailed broadcast media.
Many modern people think dining out and travel are hobbies, and in between they doomscroll social media.
Time spent staring at the phone is rarely productive or anxiety reducing.
Did you notice that this entire blog is just an LLM content farm newsletter? That the laptop in the headline image has a double keyboard AI artifact that the author didn't even spend 10 seconds cropping out?
The recent posts hit all the common points in LLM hallucinated content like the famous "recursive protocol" trope. The posts are about BS like "UFO markets" and reality protocols.
It's ironic that people are consuming this obvious AI slop uncritically while criticizing other people for their uncritical consumption of media on their phones.
Comparing the two images is a good analogy. You instructed the AI to remove the keyboards, and it completely changed the entire contents of both screens, as well as the hand holding the phone. I'm not sure what app has a modular plug as its "main screen" icon, but that distracts me from the whole rest of the image: even the cardboard surface of the bottom part of the laptop. It's less clear what you were trying to convey with the image than before.
Human-to-human communication is not something that benefits from inserting generative AI in the middle. This whole article is confusing: like a collaboration between a pointillist and an impressionist, except they didn't agree on what they were trying to say, so the picture can only be understood by working backwards and trying to model the production process in your head.
> But—and this matters—the sandbox remains someone else's. The app defines the possibility space. The platform determines what's possible. Users create within the system, never of the system.
I was going to use this as an example of a paragraph I understood, but then I looked closer: I have no idea what the distinction between "within" and "of" that you're trying to draw actually is. Sure, I know what you're trying to say, but which one is meant to be "within", and which one is "of"? The slop header image is a symptom of the broken authorial process that led to this article, not the primary issue with it: the main problem is that you started out with something to say, and ended up with confusing, verbose, and semi-meaningless output.
Most people can write better than this. You can write better than this.
And then the reader has to consume the narrative to derive the core ideas themself.
So it's off-putting that the reader has to sift out the narrative chaff that you didn't even write and/or didn't spend the time editing.
At some point it makes more sense to publish a minimal paragraph of your core ideas, and the reader can paste it into an LLM if they want a clanker to pad it out with extra content.
You have now updated the article to admit using AI to write it.
So why is it funny that I recognized it as AI?
For some reason, calling things “recursive” and talking about “protocols” are common. The second post I clicked on in this blog has a section called “recursive protocol” with similar content to all of the other ChatGPT style writing. The subheading talking about “UFO markets” and all of the flowcharts purporting to describe reality are also similar to other ChatGPT fake profound output.
If that’s not gatekeeping I don’t know what is.
If someone is going to restaurants serving variations of a dish, or travels to cemeteries where their ancestors are buried, that qualifies as a hobby.
Whereas if they just go to whatever restaurants or countries/cities are trending on social networks, that's not a hobby.
In the first case, they make active decisions on what they do; in the second case, they are just following the decisions made by others.
To be clear, both are fine, as long as they feel happy with how they spend their free time and money.
But I know who I'd rather spend some time with, listening to them explain what they did over the last few months.
I don't need to know how expensive the tasting menu was or how hard it was to get a reservation because in the words of Logan Roy - "congrats on saying the biggest number".
Tell me about something you did, made, learned, gave back, etc.
Vacations are nice, I take them, but 90% of the time hearing people describe their most recent banal, trend-following travel is boring. I swear every rich person I know seems to take the same 5-10 trips as each other. I would probably find a discussion of a book you read while on vacation more engaging.
Is shopping a hobby?
Hobbies to me are more about putting something out into the world even if just for yourself or family.
Cooking is more of a hobby than dining out.
Look at the heading and sub-heading of a post from a couple weeks ago:
> Witnesses Carry Weights: How Reality Gets Computed
> From UFO counsel to neighborhood fear to market pricing—reality emerges through weighted witnessing. A field guide to the computational machinery where intent, energy, and expectations become causal forces.
It even gets into the "recursive protocol" trope that has become a common theme among people who think ChatGPT is revealing secrets of the universe to them.
This type of LLM slop has been hitting the page more frequently lately. I assume it's from people upvoting the headline before reading the content.
"My high school teacher in 2004 accused me of plagiarizing from Wikipedia because my research paper looked "too polished" for something typed on a keyboard instead of handwritten. Twenty years later, HN commenters see clean prose and assume LLM slop. Same discomfort, different decade, identical pattern: people resist leverage they haven't internalized yet.
I use AI tools the way I used spell-check, grammar tools, and search engines before them—as cognitive leverage, not cognitive replacement. The ideas are mine. The arguments are mine. The cultural references, personal stories, and synthesis across domains—all mine. If the output reads coherently, maybe that says more about expectations than about authenticity.
You can call it slop. Or you can engage with the ideas. One takes effort. The other takes a glance at a header image and a decision that polish equals automation. Your choice reveals more about your relationship to technology than mine."
As someone who actually clicks the links and reads the articles, I’m growing frustrated with these AI-written articles wasting my time. The content is typical of ChatGPT style idea expansion where someone puts their “ideas” into an LLM and then has the LLM generate filler content to expand it into a blog post.
I try to bring awareness of the AI generated content so others can avoid wasting their time on it as well. Content like this also gets flagged away from the front page as visitors realize what it is.
Your edited admission of using AI only confirms the accusations.
Thanks. My own AI detection skills aren't always up to par, so I appreciate people calling it out.
Frankly the right tool is sometimes the one you have in front of you.
But anyone who's seen disaster DIY videos, or worse, had a house full of said projects from previous owners, knows there are problems caused by "when all you have is a hammer..." and an enthusiastic, inexperienced amateur.
The impression being created here is that laptops are the best creation tools and that users have a right to greater control over them. macOS, iOS, Windows, and Android are just extensions of each other. In a continually connected device ecosystem, the writer's sense of power and control from using a laptop is a false perception.
I certainly think that laptops have better software and interfaces for some types of work. But CapCut mobile is easier to use, and more powerful in the hands of the 99.9%, than any desktop editing tool.
What we must remember is that where the limit on productivity was once typing or clicking, LLMs and AI-assisted tasks are going to give mobile users the power that was once available only to computer users. For example, who needs to edit chunks of code when bitrig or cursor mobile (both early in their development as companies) do the laborious work for you? The limitation of mobile devices is now only one of perception.
I use AI tools the way I used spell-check, grammar tools, and search engines before them; as cognitive leverage, not cognitive replacement. The ideas are mine. The arguments are mine. The cultural references, personal stories, and synthesis across domains—all mine. If the output reads coherently, maybe that says more about expectations than about authenticity.
You can call it slop. Or you can engage with the ideas. One takes effort. The other takes a glance at a header image and a decision that polish equals automation. Your choice reveals more about your relationship to technology than mine :)
I invested in two wireless handheld keyboard+pointer inputs to match the different input styles of me and my wife.
Completely bypasses all ads with less effort than setting up a pihole or torrent+Plex server, and the bonus is avoiding the surveillance from the TV's 'consumption OS'.
https://vonnik.substack.com/p/how-to-take-your-brain-back
I think it's underrated, too, how much pairing the phone with a camera to produce media amplifies the possibility of consumption via the same device.
IMO the important thing to be mindful of is your creation-vs-consumption balance. We tend to overindex on consumption.
Everything we have seen over the last few years (e.g. what Microsoft is doing to Windows) points to a push to make the platforms we used to control more like the 'consumption' platforms. Profit demands it.
"Does this serve my goals, or someone else's metrics?" indeed.
1 more comment available on Hacker News