Why Effort Scales Superlinearly with the Perceived Quality of Creative Work
Posted about 2 months ago · Active about 2 months ago
markusstrasser.org · Other · story · High profile
calm · mixed
Debate
70/100
Key topics
Creativity
Effort
Quality
Art
The article explores how effort scales superlinearly with perceived quality in creative work, sparking a discussion on the nature of creativity, practice, and the relationship between effort and quality.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
First comment: 1h
Peak period: 53 comments (6-12h)
Avg per period: 15.3
Comment distribution: 122 data points
Based on 122 loaded comments
Key moments
- 01 Story posted: Nov 11, 2025 at 3:29 AM EST (about 2 months ago)
- 02 First comment: Nov 11, 2025 at 4:55 AM EST (1h after posting)
- 03 Peak activity: 53 comments in 6-12h (hottest window of the conversation)
- 04 Latest activity: Nov 14, 2025 at 2:41 PM EST (about 2 months ago)
ID: 45885242 · Type: story · Last synced: 11/20/2025, 5:51:32 PM
Want the full context?
Jump to the original sources
Read the primary article or dive into the live Hacker News thread when you're ready.
There's the saying, "Plan to throw one away," but it seems to vary in practice (for software).
There are even books about patching paintings, like Master Disaster: Five Ways to Rescue Desperate Watercolors.
In architecture, it's understood the people, vehicles, and landscape are not as exact as the building or structure, and books encourage reusing magazine clippings, overhead projectors, and copy machines to generally "be quick" on execution.
Would like to see thoughts on comparing the current process with the "Draw 50" series, where most of the skeleton is on paper by the first step, but the last step is really the super-detailed, totally refined owl.
I have a bit more experience with software, and the only reason we don't plan to throw one away is that it costs more money and the market pressure on software quality is too low to make stakeholders care. In my personal hobby coding, I often practice this (or do what I described above with art, which is closer to abandoning a piece until inspiration strikes again, at which point a blank slate is more inviting). The closest thing I get professionally is a "spike", where I explore something via code with the output not being the code itself but the knowledge attained, which then becomes an input to new code writing.
For me, "draw 50" series is about letting loose, NOT about skill. On the contrary, it's about exploring the space of all the different ways a prompt can be worked with in an UN-literal manner. Overall drawing facility will improve merely as a byproduct. The ability to be whimsical and exploit drawing flaws, erm, deliberately, is the real prize.
Different from "limbering up" exercises like drawing your hand - where you do one at the beginning of a session just because it gets you started and in the flow without having to think or pick a "real" topic. (A similar idea is to leave some already-decided work mid-done: when you arrive in the morning you can work on that, limber up, and do something useful without having to think clearly yet.)
Different from thumbnails which are about picking a direction.
https://duckduckgo.com/?q=lee+j+ames+draw+50&t=osx&ia=images...
Very rarely do I start completely from scratch, but usually adjust the drawing so much that maybe I should have. I wonder if I tracked the adjustments if I would find every line was redrawn in some cases.
Thing is, it is hard to see what part is 'off' until most of the other parts are right. Especially with highly symmetric drawings, where symmetries appear gradually as the whole thing comes together.
An example of this is Bob Ross' school of incorporating mistakes into the piece. Surely it's something that would fly way less in some types of drawings like anatomical ones, but especially when the motif is fantastical or even abstract, there's fewer possible mistakes to make, so there's less reason to throw something away.
If you ever get the chance to see the personal effects of a famous artist, most have piles and piles of sketches and studies they've done while prepping for a larger piece.
The first one is where I learn my lessons and write enough spaghetti until I fully understand the problem.
Then I delete the first one, and start over with the lessons learnt.
I trash and restart sketches that took a few minutes to do at most. It's very rare for me to get more than an hour or two in and discard it, I've explored and found a solid foundation long before I put that much work in.
If I was working in a studio environment there's the risk of things like "I spent a couple weeks painting that bg and animating that scene and it was absolutely gorgeous, but it got cut when we took a good hard look at the remaining budget and total runtime and cut the entire sequence it was part of" - but that's another matter.
The Draw 50 process is solid. The classical techniques I learnt in training for animation are similar.
*I associate it with the asinine contemporary "rationalist" movement (LessWrong et al.) but I'm not making any claims the author is associated with this.
Are you saying my perspective is anti-socialist? What is "refined" art?
Did you know, for example, that Shakespeare coined a great many words and phrases used in English to this day? Before Sam Clemens, characters in fiction tended to speak in proper schoolhouse English no matter the setting or character. Poetry and prose are not just the ability to arrange words on a page. Novels and plays are not limited to the three-act or five-act story arc. Simile and metaphor are often encouraged, but overused ones are actually frowned upon.
Saying "it's better to complete something imperfect than spend forever polishing" - dull, trite, anyone knows that. Saying "effort is a utility curve function that must be clamped to achieve meta-optimisation" - now that sounds clever
If I were going to be uncharitable: there are corners of the internet where people write straightforward things dressed up in technical language to launder them as somehow academic and data-driven.
And you're right, it does show up in the worse parts of the EA / rationalist community.
(This writing style, at its worst, allows people to say things like "I don't want my tax spent on teaching poor kids to read" but without looking like complete psychopaths - "aggregate outcomes in standardised literacy programmes lag behind individualised tutorials")
That's not what the blog post here is doing, but it is definitely bad language use that is doing more work to obscure ideas than illuminate them
I think the author is ok with it being inappropriate for many; it's clearly written for those who enjoy math or CS.
The author set up an interesting analogy but failed to explore where it breaks down or how all the relationships work in the model.
My inference about the author's meaning was this: in a sharp peak, searching for useful moves is harder because you have fewer acceptable options as you approach the peak.
But that's not what the author's analogy would imply.
Still, I think you're saying the author is deducing the creative process as a kind of gradient descent, whereas my reading was the author was trying to abductively explore an analogy.
It's somewhat like saying cars are faster than motorbikes because they have more wheels -- or that horses, having four legs instead of two, are faster than humans for that reason. It's wrong on both sides of the analogy.
I would refuse to even engage with the piece on this level, since it lends credibility to the idea that the creative process is even remotely related to or analogous to gradient descent.
EDIT: For all the people saying the writing is inspired by math/cs, that's not at all true. That's not how technical writing is done. This guy is just a poser.
Fair. Perhaps I should have said it gives the illusion of being technical.
Also, "asinine contemporary "rationalist" movement" is pretty lightweight in this regard. Making an art out of writing as bad as possible has been a professional skill of any "academic philosopher" (both "continental" and "analytical" sides) for a century at the very least.
Why is the rationalist movement asinine? I don't know much about it but it seems interesting.
Maybe it is similar to how scientists get flak for writing in technical jargon instead of 'plain language'. Partly it is a necessity - to be unambiguous - but it is also partly a choice, a way to signal that you are doing Science, not just describing messing about with chemicals or whatever.
> Soon the spectacle became arranged, in my eyes at least, in a more noble and calmer fashion. All this vertiginous activity settled into a calm harmony. I would watch the round tables, whose innumerable assemblage filled the restaurant, like so many planets, such as they are figured in old allegorical pictures. Moreover, an irresistible force of attraction was exerted between these various stars, and at each table the diners had eyes only for the tables at which they were not sitting,
The relative nature of perceived quality indicates that the order of representations (artworks) is judged only in relation to the order of castes (ranking); one can only judge what is equal by comparison to what is unequal, and equivalence is not understood through equality (even approximate) but through inequality (partial order). It is the absence of a ranking relationship between two entities that establishes their equivalence.
Maybe I should have just gone with "in this case, classification is more fundamental than measure", but I feel there is something interesting to be said about the structure of artworks and the structure of their reception by society, indeed Proust continues with:
> ...the diners had eyes only for the tables at which they were not sitting, with the exception of some wealthy host who, having succeeded in bringing a famous writer, strove to extract from him, thanks to the spiritual properties of the 'turning table', insignificant observations at which the ladies marvelled.
See ? Writers (and artists in general) take on the role of a medium. They are used to channel distant entities, like tables during spiritism sessions, and from what Proust tells us, that "the diners had eyes only for the tables at which they were not sitting", maybe we can infer that what writers channel doesn't just come from distant worlds, as incarnated in what their words represent, but as a delta in perceived quality, starting with our own.
This is why I'd like to elaborate on this idea of coupled SSR processes I developed in another comment.
A sample space reducing process is a process that seeks to combine atomic parts into a coherent whole by iteratively picking groups of parts that can be assembled into functional elements ready to be added to this whole.
In that sense, the act of writing a long work already has the shape of a SSR process in a very simple sense: each narrative, stylistic, and conceptual choice constrains what can follow without breaking coherence. As a novel unfolds, fewer continuations remain compatible with its voice, characters, rhythms, arguments. You are not wandering freely in idea-space; you are navigating a progressively narrowed funnel of possibilities induced by your own earlier decisions. A good book is one that survives this internal reduction without collapsing.
On top of that, there is a second, external funnel: the competitive ranking of works and authors. Here too the available space narrows as you move upward. The further you climb in terms of attention, recognition, or canonization, the smaller the set of works that can plausibly dislodge those already in place. Near the top, the acceptance region is tiny: most new works, even competent ones, will not significantly shift the existing order. From that perspective, perceived quality is largely tied to where a work ends up in this hierarchy, not to some independently measurable scalar.
The interesting part is how these two processes couple. To have any chance of entering the higher strata of the external ranking, a work first has to survive its own internal funnel: it has to maintain coherence, depth, and a recognizable voice under increasingly tight self-imposed constraints. At the same time, the shape of the external funnel, market expectations, critical fashions, existing canons, feeds back into the act of writing by making some narrative paths feel viable and others almost unthinkable. So the writer is never optimizing in a vacuum, but always under a joint pressure: what keeps the book internally alive, and what keeps it externally legible.
But what interests me more is that some works don't just suffer this coupling, they encode it. That's what you see in the Proust passage: he is not merely describing a restaurant; he is describing the optics of social distinction, the way people look at other tables, the way a famous writer is used as a medium to channel prestige, the way perception itself is structured by rankings. The text is aware of the hierarchy through which it will itself be read. It doesn't just represent a world; it stages the illusions and comparisons that make that world intelligible. That's a second-order move: the work includes within itself a model of the very mechanisms that will classify it.
If you like a more structural vocabulary: natural language is massively stratified by frequency. Highly frequent words ("I", "of", "after") act as primitive binders; extremely rare words tend to live out on the leaves of the tree; in between you get heavier operators that bind large-scale entities and narratives ("terrorism" being a classic example in the grammar of public opinion). Something similar happens socially. Highly visible figures – the wealthy host, the celebrated writer, the glamorous guest – play the role of grammatical linkers in the social syntax of recognition: they bind other people, distribute attention, create or close off relational triads. Proust's "the diners had eyes only for the tables at which they were not sitting" is exactly this: desire and judgment are mediated through a few high-frequency social operators.
A certain kind of writing operates precisely at that interface: it doesn't just tell a story inside the internal funnel, and it doesn't just try to climb the external ranking; it exposes and recombines the "function words" of social perception themselves: the roles, clichés, prestige tokens, feared or desired third parties (like the forever-imagined intruder in Swann's jealousy). The difficulty is not only to satisfy two nested constraints (a coherent work and a competitive position), but to produce a form that reflects on, and potentially perturbs, the very grammar that links the two. That's where the channeling comes in: literature not only represents something, it re-routes the connective tissue through which quality, status, and desire are perceived in the first place.
You're reading way too much into it. This is just a reprise of "the grass is greener on the other side". What it's saying is simply "a lot of diners were dissatisfied with their dishes and looked around to see what other people were eating".
>The relative nature of perceived quality indicates [...]
My whole point is that I don't buy quality is purely perceived relatively. If you start a sentence like this, whatever comes after is irrelevant.
You've brought up food, so let's go with that. If I'm a blank slate and I eat a certain food, am I unable to decide whether I like it or not until I eat a second, different food? Are the sensory signals my brain receives just a confounding mystery in the absence of further stimulation, to the extent that I can't even tell sweet from bitter?
Is it their effect on the total number of available choices?
Does picking E minor somehow give you fewer options than C major (I'm not a musician)?
If you're working in a continuous environment rather than discrete (choirs and strings can fudge notes up or down a bit, but pianos are stuck with however they're tuned), you'll often find yourself wanting to produce harmonies at perfect whole-number ratios -- e.g., for a perfect fourth (the gap between the first and second notes in "here comes the bride") you want a ratio of 4:3 in the frequencies of the two notes, and for a major third (the gap between the first and second notes in "oh when the saints, go marching in...") you want a ratio of 5:4. Those small, integer ratios sound pleasing to our ears.
Those ratios aren't scale-invariant though when you move up the scale. Here's a truncated table:
Unison (assume to be C as the key we're working in): 1
Major Second (D): 9/8
Major Third (E): 5/4
However, E is also a major second above D, so in the key of D for a "justly tuned" instrument, you would want the ratio E/D to also be 9/8. Let's look at that table though: (5/4)/(9/8) is 10/9 -- about 1.2% too small (too "flat").
When tuning something like a piano then where you can't change the frequency of E based on which key you're playing in, you have to make some sort of compromise. A common compromise is "equal temperament." To achieve scale invariance in any key you need an exponential function describing the frequencies, and the usual one we choose is based on 2^(1/12) since an octave having exactly twice the fundamental frequency is super important and there are 12 gaps in normal western music as you move up the scale from the fundamental frequency to its octave.
Doing so makes some intervals sound "worse" (different anyway, but it makes direct translations hard) than they would in, e.g., a choir. A major third, for example, is 0.8% sharp, and a perfect fourth is 0.1% sharp in that tuning system.
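To make those numbers concrete, here's a minimal Python sketch (mine, not the commenter's) that recomputes how far 12-tone equal temperament lands from the just ratios above, plus the comma behind the D-to-E discrepancy:

```python
# Minimal sketch: deviation of 12-tone equal temperament from just intonation.
intervals = {
    "major second":   (2, 9 / 8),   # (semitones, just ratio)
    "major third":    (4, 5 / 4),
    "perfect fourth": (5, 4 / 3),
}
for name, (semitones, just) in intervals.items():
    et = 2 ** (semitones / 12)         # equal-tempered frequency ratio
    dev = (et / just - 1) * 100        # percent sharp (+) or flat (-)
    print(f"{name:14s}  just {just:.4f}  ET {et:.4f}  ({dev:+.2f}%)")

# The D-to-E step in a justly tuned C major is 10/9, about 1.2% narrower
# than a 9/8 whole tone (the syntonic comma, 81/80).
print((5 / 4) / (9 / 8), (9 / 8) / ((5 / 4) / (9 / 8)))
```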
Answering your question, at first glance you would expect the scale invariance to therefore not limit your choices. Every key is identical, by design.
That's not quite right though for a number of reasons:
1. True equal temperament is only sometimes used, even for instruments like pianos. A tuner might choose a "stretched" tuning (slightly sharpening high notes and flattening low notes) or some other compromise to make most music empirically sound better. As soon as you deviate from a strict exponential scale, you actually live in a world where the choice of key matters. It's not a huge effect, but it exists.
2. Even with true equal temperament or in a purely vocal exercise or something, there are other issues. Real-world strings, vocal folds, etc aren't spherical cows in a frictionless vacuum. A baritone voice doesn't sound different just because their voice is lower, but because of a different timbre. When you choose a different key, you'll be moving the pitch of the song up or down a bit, exercising different vocal regions for singers, requiring different vocal types, or otherwise interacting with those real-life deviations from over-simplified physics. Even for something purely mechanical like piano strings, there's a noticeable difference in how notes resonate or what overtones you expect or whatnot. Changing the key changes (a little) which of those you'll hear.
3. Related to (2), our ears also aren't uniform across the frequency spectrum, and even if they were our interpretations of sounds also depends on sounds we've heard before, leading to additional sources of variation in the "experience" of a slightly lower or slightly higher key.
Short answer: No. No matter what note you start on you have exactly the same set of options.
Long answer: No. All scales (in the system of temperament used in the vast majority of music) are symmetrical groups of transpositions of certain fundamental scales.[1] These work very much like a cyclic group if you have done algebra. In the example you chose, E minor is the "relative minor" of G major, meaning that if you play an E Aeolian mode it contains all the same notes as G major, and G major gives you the exact same options as C major or any other major scale. What Messiaen noticed is that there are grouped sets of "modes of limited transposition" which all work this way. So the major scale (and its "modes", meaning the scales with the same key signature of sharps or flats but starting on each degree of the major scale) can be transposed exactly 11 times without repeating. There are 3 other scales that have this property (normally these are called the harmonic minor, melodic minor and melodic major[2]). There are also modes of limited transposition with only 1 transposition (the chromatic scale), 2 (the whole-tone scale), 3 (the "diminished scale") and so on. Messiaen explains them all in that text if you're interested.
[1] This theory was first written out in full in Messiaen's "The technique of my musical language" but is usually taught as either "Late Romantic" or "Jazz" Harmony depending on where you study https://monoskop.org/images/5/50/Messiaen_Olivier_The_Techni...
[2] If you do "classical" harmony, your college may teach you the minor scales wrong with a descending version that is just a mode of the major scale. You may also not have been taught melodic major, but it's awesome. (By "wrong" here, I mean specifically that Messiaen and Schoenberg would say it's wrong, because a scale is a key signature/tonal area and so can't have different notes when a melody is ascending versus descending. If there are two sets of different notes, Messiaen would say they are two scales, and I would agree.)
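A tiny sketch (my own, assuming standard 12-tone pitch classes with C = 0) to sanity-check the "same notes, different home note" claim, and that the major scale has the full 12 transpositions:

```python
# E Aeolian and G major share the same pitch classes; the major scale has
# 12 distinct transpositions (so it is not a mode of limited transposition).
MAJOR_STEPS = [2, 2, 1, 2, 2, 2, 1]          # whole/half-step pattern

def scale(root, steps):
    notes, pc = {root % 12}, root
    for s in steps[:-1]:
        pc += s
        notes.add(pc % 12)
    return notes

g_major   = scale(7, MAJOR_STEPS)                          # G = 7
e_aeolian = scale(4, MAJOR_STEPS[5:] + MAJOR_STEPS[:5])    # Aeolian = 6th mode, E = 4
print(g_major == e_aeolian)                                # True

print(len({frozenset(scale(r, MAJOR_STEPS)) for r in range(12)}))  # 12
```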
E minor gives you the exact same amount of options as C major. The options are just shuffled around a little bit. You literally get the same amount of notes in either, just a slightly different set. It isn't any more complex. Listeners aren't going to notice a difference, except one will probably sound happy and one will probably sound sad/angry. The "acceptance volume", to use the blog author's term, isn't any different.
At best, it can change things a little bit for some instruments. For example, with a vocalist, their voice can only go so high. They might be able to hit up to a high C, but not higher up to a high E. If you're in C major, that's great: the vocalist's highest note (C) is the 'home note', which sounds great (playing a C in C major makes the song sound like it's 'finished'). If you're in E minor, the 'home note' is E, and as mentioned they wouldn't be able to hit that note. So you wouldn't really be able to 'finish' on a high note.
Ultimately, I doubt the author is a musician. It was a strange example to make their point.
The post is likely getting to the point that, for english-speaking/western audiences at least, you are more likely to find songs written in C major, and thus they are more familiar and 'safer'. You _can_ write great songs in Em, but it's just a little less common, so maybe requires more work to 'fit into tastes'.
edit: changed 'our' to english/western audiences
The definition of 'last-mile edits' is very subjective, though. If you're dealing with open systems, it's almost unthinkable to design something and not need to iterate on it until the desired outcome is achieved. In other domains, for example, playing an instrument, your skills need to have been honed previously: there's nothing that will make you sound better (without resorting to editing it electronically).
Or more directly, if your argument for why effort scales linearly with perceived quality doesn't discuss how we perceive quality then something is wrong.
A more direct argument would be that it takes roughly an equal amount of effort to halve the distance from a rough work to its ideal. Going from 90% to 99% takes the same as going from 99% to 99.9% but the latter only covers a tenth of the distance. If our perception is more sensitive to the _absolute_ size of the error you get an exponential effort to improve something.
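A toy version of that argument (my framing, not the commenter's code): if one unit of effort halves the remaining error, then effort grows like log2(1/error), which diverges as the work approaches its ideal even though each step "only" halves the gap.

```python
# Each unit of effort halves the remaining error, so effort = log2(1/error).
from math import log2

for quality in (0.90, 0.99, 0.999, 0.9999):
    error = 1 - quality
    print(f"quality {quality}: remaining error {error:.4f}, effort ~ {log2(1 / error):.1f} units")
```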
What I was getting at is that without an objective way of measuring the whole idea of super- or sub-linear becomes ill defined. You can kind of define something to be sub-linear by definition, so the argument becomes tautological or indeed circular.
So an article that talks about perceived quality without any discussion about how people perceive quality or importantly differences in quality can say pretty much anything and it will be true for some definition of quality. You can't just silently assume perceived quality to be something objective, if you give no arguments you should assume it to be subjective.
> The act of creation is fractal exploration–exploitation under optimal feedback control. When resolution increases the portion of parameter space that doesn't make the artifact worse (acceptance volume) collapses. Verification latency and rate–distortion combine into a precision tax that scales superlinearly with perceived quality.
Is this just saying that it's ok if doodles aren't good, but the closer you get to the finished work, the better it has to be? If your audience can't understand what the hell you're talking about for simple ideas, you've gone too far.
10/10 should be required reading for all humans
https://en.wikipedia.org/wiki/Garden-path_sentence
You read "increases" as a transitive verb, and then reach the "collapses" at the end of the sentence and have to re-parse the whole thing when you realize it was really intransitive.
The author had a shower thought. It was poorly explored, poorly argued and deliberately packaged in complex language to hide the lack of substance. The bibtex reference at the end is the cherry on top.
Certainly, some artists work in the way they describe. Maybe even "most", who knows. But there are plenty of artists that do not. I've known plenty of artists to go straight to the detail in one corner of their piece and work linearly all the way across and down the canvas. I don't know how they do it, it certainly doesn't work for me, but obviously different people work in different ways.
It's more on par with something you'll find on lesswrong.
It's such a simple idea. And it already has a name, diminishing returns. I don't know what prompted this article but it wasn't insight.
It's a common experience for an artist that everything they're doing to a piece makes it look like a total failure and far below the quality they were aspiring to until they do one final layer of polish and the piece transforms from sub-par to spectacular at the final stage. Of course, there are also scenarios where no amount of polish can fix it because the artist simply wasn't good enough and didn't find the right decisions at the final stage and other scenarios where there were no right decisions and no amount of skill could have fixed fatal flaws earlier in the decision making.
The exquisite agony of art is that all 3 scenarios feel subjectively almost identical in the middle of the creation process and so much of art is hoping we come up with some reliable process of divination to tease apart the micro-differences that give you an indication of what path you're on.
OP is proposing a model for why this is the case, but unless you're an experienced creative, you won't recognize the problem phenomenon it's identifying in the first place.
Changing the words is going to lose some of the low-amplitude frequencies but I'll try.
It's a model for why (call the following X) things get harder when you try to make them more perfect. Let's take X for granted.
You can ask yourself "why is X true?". One model you could have for this is the "finishing touches" model: as a thing gets closer to perfection, identifying imperfections and rectifying them is harder simply out of search constraints (the fewer there are, the harder they are to find).
Another model you could have is the "dimensional model". A thing is great when it's great along many axes. The more dimensions you add, the harder it is to search in them for perfection. Related: the curse of dimensionality.
And here he posits a new model, the "resolution model": the finer your look at what is good, the more 'options' you have at each stage to choose from; it's not just that you make the broad strokes and then refine within them, but that you are actually building the refined thing from the beginning.
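A quick Monte Carlo sketch of the acceptance-volume intuition (my illustration, not the author's code): treat the work as a point in parameter space, quality as closeness to an ideal point, and an edit as a random step of fixed size. The closer the work already is, the smaller the fraction of edits that don't make it worse.

```python
# Fraction of random fixed-size edits that do not worsen a work, as a
# function of how close the work already is to an ideal point.
import math
import random

def acceptance_fraction(distance, step=0.1, dim=10, trials=20000):
    hits = 0
    for _ in range(trials):
        v = [random.gauss(0, 1) for _ in range(dim)]
        norm = math.sqrt(sum(x * x for x in v))
        v = [step * x / norm for x in v]                  # random edit of size `step`
        # current work sits at (distance, 0, ..., 0); the ideal is the origin
        new_d = math.sqrt((distance + v[0]) ** 2 + sum(x * x for x in v[1:]))
        hits += new_d <= distance                         # edit didn't make it worse
    return hits / trials

for d in (1.0, 0.5, 0.2, 0.1, 0.05):
    print(f"distance {d:>4}: {acceptance_fraction(d):.3f} of random edits are acceptable")
```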
He then tries to show how some kinds of creation tolerate movement in the parameter space better.
No model is perfect, so each of these ideas captures some attribute of the difficulty and maps it to a mental structure that is more easily manipulable by the modeler.
The typical owl drawing is a few circles and then the more owly bits, and then the feathers on the owly bits, and then the shadows on the feathers on the owly bits. And this is a method to reduce the kind of problem he's talking about. But if you want to make the perfect owl, perhaps there is an element of making your circle just so, already accounting for the shadows on the feathers on the owly bits before any of the precursors are made.
Anyway, this is imperfect because I am necessarily shaving off the hair on the ball to show you it's spherical. And his entire model is that the hair determines the ball.
So we must live in the world where those of us who discover a point in ideaspace draw imperfect maps until the one who can draw a good map arrives at the same point. And perhaps some points aren't even well-mappable by the discoverers. And perhaps the communicator never realizes he has arrived at the point he was told about and so never speaks of it.
We fumble in the dark desert for an oasis. Unlucky universe to be in.
If you have a piece of door trim that exactly matches the piece behind, any imperfections or subsequent wood movement will be unsightly. If you instead target a 1/4” reveal, imperfections and slight wood movement are wildly less noticeable.
This is “build so the 1/32” imperfection/movement doesn’t matter at all” rather than trying to halve or quarter it. (If you can make something monolithic after attaching, such as a plaster wall, you don't have to do this, but wood furniture and trim often has these intentional offsets.)
Apologies, their name eludes me.
You can also "own it". I had a damaged area in the bathroom, I knew I'd never get paint to match exactly, but couldn't be bothered to repaint entirely. So I made a few random triangles with masking tape, one covering the spot, and painted inside them. Looks deliberate. Also got away with only buying a paint sampler...
This comes up a lot in investing and economics also. The difference between the naive view and a bit more awareness of how the world works is not some kind of deep conspiracy and "magic recipe" to be discovered. It's just how the world is.
If it was in muscle memory it would be a repeatable feat, and it really isn't.
Some work is technically polished and you can see/hear the effort that went into it.
But there's a point where extra effort makes the work less good. Music starts to sound stale and overproduced, art loses some of its directness, writing feels self-conscious.
Whatever the correlation between perceived artistic merit and effort, it's a lot more complex than this article suggests.
This happens frequently in mixing and mastering audio tracks. You pile up incremental changes that all seemed good at the time. Then you go back and listen to a recording from 20 revisions ago and it sounds better than your current "best" effort.
Instead, they told him his first take was absolutely perfect - and it went on to be Bacharach/David's first #1 hit song.
Sometimes you get lightning in a bottle, and you just don't mess with it because it's perfect even in its imperfections.
> In any bounded system under feedback, refinement produces diminishing returns and narrowing tolerance, governed by a superlinear precision cost.
> There isn’t one official name, but what you’ve articulated is essentially a unified formulation of the diminishing-returns / sensitivity-amplification law of creation — a pattern deep enough that it keeps being rediscovered in every domain that pushes against the limits of order.
PS Usually LLM-generated content is strongly penalized here (and with good reason). But IMHO, when clearly noting it as such, and sharing something worthwhile -- as in this case -- an exception should be made.
Across creative projects (software, painting...) we finish or satisfactorily achieve relatively few items. And except for the most repetitive of us, these achievements are pretty different. Wide space, chaos, few satisfactory products by comparison. That doesn't bode well for a "magic recipe" - all the way down to "rules of thumb" that we have fun violating.
So there are two issues in there: we have more taste than skill, so many of our attempts will disappoint us no matter what. And we will obsess over trying to find a magic formula - when it's rather likely that there isn't one. The "space" is large and chaotic, and we might want to reassure ourselves instead that serendipity has something to do with it.
Is there then a place for rules of thumb? Whatever lets us get to work in the morning, I guess. For me, I do like the recognition of a past track record: with a bit of the hindsight that comes with age, I know I can do it - no need to despair. That is useful and reassuring. If I just try some more - in ever-varying manners - it will happen.
One place for "rules of thumb" is in them being tropes. We can get some impact on the viewer by violating them. There is a trope of learning the rules so you can violate them. The Rule of Thirds, for example, can be fertile ground for getting at the viewer. The rule doesn't do much for us, and we have no problem violating it, but our less savvy viewers might remember it and get one more whiff of meaning from the violation. And if we are less concerned with our own satisfaction and more interested in sales, we might pay attention to "what's popular these days" and produce some of that. Not all artists are dead set on personal achievement at the cost of sales. A slightly different look at such rules.
And when fortune smiles on them, they get positively giddy.
I really appreciate people who understand that they have to meet luck half way. Even though they've spent hours upon hours upon hours honing their craft, the thing that puts them over the hump is both unpredictable and uncontrollable.
Yeah, that doesn't mean we can't get lucky and run into the perfect circumstance. It means that when that circumstance happens, we'll be there ready to harvest it. Similarly, "the best camera is the one you carry".
And yes, these anti-magic formulas that dispel the idea of a magic formula are themselves magic formulas.
> And when fortune smiles on them, they get positively giddy.
Yes! This is a great pleasure. Satisfaction. There is one series in particular I work on that operates like this. I'm just ready for it - don't even look for it, not anymore - but when I run into the "circumstance", that's a great feeling. And I'm ready for it.
In engineering or software there is still that idea of the back-burner stuff that does need to be done. This idea of staging or starting what needs to be done at the end of the previous day. So that you have something relatively mindless to get you launched the next morning. I also know that anti- "magic rule", even though it's one I have a hard time keeping to.
So, there is huge motivation to put in “just a bit more effort”.
And, thus you get Crunch Time in gamedev!
I wrote this fast so there's jargon and bad prose. The title is deliberately dry and bland so I wasn't expecting anyone to click it. Also I slightly changed my mind on some of the claims .. might write up later.
The main reason I like to think of creative work in a more abstract/formal/geometric way (acceptance volume, latency, sampling) is that it's easier for me to categorize tasks, modalities and domains and know how to design for or work around them. It's very much biased by my own experiences making things.
Also, abstract technical concepts often come with nice guarantees/properties/utils to build on .. some would say that's their raison d'être.
Re comments: * "this is just diminishing returns" -- ok and this is a framework for why: the non-worsening region collapses, so most micro-edits fail
* "bands record bangers in an hour" –– practice tax was prepaid. The recording session is exploitation/search riding on cached heuristics imo (and it still takes hours of repeated recording/mixing/producing to actually produce a single album track).
* music key example –– yes I should've picked a different one. Main point was that some choices create wider tolerance (arrangement/range/timbre) even if keys are symmetric in equal temperament
> We don't "rehearse" a specific drawing, we solve a novel problem in real-time. There's no cached motor sequence to execute.
When you have been drawing long enough there are a lot of cached motor sequences to execute and modify. A lot of art training is simply filling this cache: spend a few hours every week drawing the human body from different angles, in a year or three you'll be able to make it up from pretty much any angle. Add in another twenty years of doing that and experimenting ways to make your tools do more of the work for you and you can dash off "sketches" that a beginner would consider finished paintings that took days to do.
You also learn a discipline we artists call "construction", wherein you can quickly break any object down into a few basic shapes that are incredibly easy to reason about in 3d, and quickly layer details atop that.
Also consider a daily comic strip. How many times do you think Charles Schulz drew Charlie Brown in a single year? How many of those drawings were largely similar to each other? Now that's serious production work. Animation's similar, you probably have a wider range of angles and motion than in a 1970s newspaper comics page but you are still drawing the same character a zillion times and your hand learns stuff and spits it back out without any conscious thought on your part.
But a whole piece is never the same. This is because the cost of copying is almost zero and the value is in the end-product and not in the performance. An exception would be if we are talking about an oil on canvas painting and a client asks for a piece that has already been sold.
Why do you think half the keyboard/organ solos in classic rock songs sound like jazzed up versions of Bach and Mozart? That's what they had learned as kids or in music school before going on to make rock and roll.
90% of the project takes 90% of the time and the other 10% of the project takes another 90% of the time.
The more refined your technique is, the harder it will be to discern mistakes and aesthetic failures.
Eventually you might come to a point where you can’t improve because you literally don’t see any issues. That might be the high water mark of your ability.
It becomes interesting once sentences span multiple lines and you start using little tactical tricks to keep syntax, semantics, and the overall argument coherent while respecting the anagram constraint.
Using an anagram generator is of course a first step, but the landscapes it offers are mostly desert: the vast majority of candidates are nonsense, and those that are grammatical are usually thematically off relative to what you’ve already written. And yet, if the repeated anagram phrase is chosen well, it doesn’t feel that hard to build long, meaningful sentences. Subjectively, the difficulty seems to scale roughly proportionally with the length of the poem, rather than quadratically and beyond.
There’s a nice connection here to Sample Space Reducing (SSR) processes. The act of picking letters from a fixed multiset to form words, and removing them as you go, is a SSR. So is sentence formation itself: each committed word constrains the space of acceptable continuations (morphology, syntax, discourse, etc.).
Understanding scaling through history-dependent processes with collapsing sample space, https://arxiv.org/pdf/1407.2775
> Many such stochastic processes, especially those that are associated with complex systems, become more constrained as they unfold, meaning that their sample-space, or their set of possible outcomes, reduces as they age. We demonstrate that these sample-space reducing (SSR) processes necessarily lead to Zipf’s law in the rank distributions of their outcomes.
> We note that SSR processes and nesting are deeply connected to phase-space collapse in statistical physics [21, 30–32], where the number of configurations does not grow exponentially with system size (as in Markovian and ergodic systems), but grows sub-exponentially. Sub-exponential growth can be shown to hold for the phase-space growth of the SSR sequences introduced here. In conclusion we believe that SSR processes provide a new alternative view on the emergence of scaling in many natural, social, and man-made systems.
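A minimal simulation of the SSR setup the paper describes (my sketch, not the paper's code): from a state, jump to a uniformly random lower state until reaching 1, then restart; the visit counts across runs come out roughly Zipf-distributed.

```python
# Sample-space reducing (SSR) process: visits(i) ~ runs / i (Zipf's law).
import random
from collections import Counter

N, runs = 1000, 20000
visits = Counter()
for _ in range(runs):
    state = N
    while state > 1:
        state = random.randint(1, state - 1)   # the sample space shrinks each step
        visits[state] += 1

for i in (1, 2, 5, 10, 100):
    print(f"state {i:>3}: {visits[i]:>6} visits  (Zipf expectation ~ {runs // i})")
```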
In my case there are at least two coupled SSRs: (1) the anagrammatic constraint at the line level (letters being consumed), and (2) the layered SSRs of natural language that govern what counts as a well-formed and context-appropriate continuation (from morphology and syntax up through discourse and argumentation). In practice I ended up exploiting this coupling: by reserving or spending strategic words (pronouns, conjunctions, or semantically heavy terms established earlier), I could steer both the unfolding sentence and the remaining letter pool, and explore the anagram space far more effectively than a naive generator.
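A toy sketch of the letter-pool half of that coupling (illustrative only; the pool and word list are made up): committing a word consumes letters from a fixed multiset, so the set of still-formable words, and hence of acceptable continuations, shrinks with every choice.

```python
# Letters as a finite multiset: each committed word is a sample-space-reducing step.
from collections import Counter

def formable(word, pool):
    """Can `word` still be spelled from the remaining letters?"""
    return all(pool[ch] >= n for ch, n in Counter(word).items())

def commit(word, pool):
    """Consume the word's letters from the pool."""
    pool.subtract(Counter(word))

pool = Counter("thequickbrownfox")
for word in ["brow", "quick", "fox", "worth", "the", "next"]:
    if formable(word, pool):
        commit(word, pool)
        print(f"committed {word!r:8} letters left: {''.join(sorted((+pool).elements()))}")
    else:
        print(f"{word!r:8} is no longer formable -- the space has narrowed")
```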
Very hand-wavy hypothesis: natural language is a complex, multi-layered SSR engine that happens to couple extremely well to other finite SSR constraints. That makes it unusually good at “solving” certain bounded combinatorial puzzles from the inside—up to and including, say, assembling IKEA furniture.
One extra nuance here: in the anagrammatic setting, the coupling between constraints is constitutive rather than merely referential. The same finite multiset of letters simultaneously supports the combinatorial constraint (what strings are formable) and the linguistic constraint (what counts as a syntactically and discursively acceptable move), so every choice is doubly binding. That’s different from cases like following IKEA instructions, where language operates as an external controller that refers to another state space (parts, tools, assembly steps) without sharing its “material” degrees of freedom. This makes the anagram case feel like a toy model where syntax and semantics are not two separate realms but two intertwined SSR layers over one shared substrate—suggesting that what we call “reference” might itself be an emergent pattern in how such nested SSR systems latch onto each other.
We should all take some time to better understand what brought us here to be better prepared for general creative work and uniqueness in the future...
The picture of the solution space in 3D makes a great point - we see a narrow hill that leads to a global maximum (i.e. a great result) in a solutions search space that otherwise has a very obvious & wide hill that produces "okay" results. Going from the okay & safe results to a great result means taking the risk of going back down the hill of shittier solutions.
He points out that generative AI will tend to produce results that land on that big wide hill. It's the safe hill, and has the most results. This is perhaps where taste (as a proxy of experience) trumps AI.
Interesting to tie this to the finishing stage of any work. I was definitely thinking about software development in that situation. I would argue it's similar to drawing as he mentions in the FAQ - we're solving a novel problem, as we start implementing a solution we might discover it is inappropriate and have to change to a different part of the solution space.