It’s Not Wrong That "\u{1f926}\u{1f3fc}\u200d\u2642\ufe0f".length == 7 (2019)
Posted 4 months ago · Active 4 months ago
hsivonen.fi · Tech story · High profile
calm · mixed · Debate · 80/100
Key topics: Unicode, String Encoding, Programming Languages
The article discusses the complexities of measuring string length due to Unicode, and the HN discussion explores the various approaches to handling string length in different programming languages.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion · First comment: 1h · Peak period: 90 comments in 0-12h · Avg per period: 20
Comment distribution: 160 data points (based on 160 loaded comments)
Key moments
- Story posted: Aug 22, 2025 at 2:18 AM EDT (4 months ago)
- First comment: Aug 22, 2025 at 3:19 AM EDT (1h after posting)
- Peak activity: 90 comments in 0-12h (hottest window of the conversation)
- Latest activity: Aug 29, 2025 at 3:10 PM EDT (4 months ago)
ID: 44981525 · Type: story · Last synced: 11/20/2025, 8:28:07 PM
For context, the actual post features an emoji with multiple Unicode codepoints in between the quotes.
You never know, when you don’t know CSS and try to align your pixels with spaces. Some programmers should start a trend where 1 tab = 3 hairline-width spaces (smaller than 1 char width).
Next up: The <half-br/> tag.
Is there a way to represent this string with escaped codepoints? It would be both amusing and in HN's plaintext spirit to do it that way in the title above, but my Unicode is weak.
Might be a little long for a title :)
- Number of UTF-8 code units (17 in this case)
- Number of UTF-16 code units (7 in this case)
- Number of UTF-32 code units or Unicode scalar values (5 in this case)
- Number of extended grapheme clusters (1 in this case)
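For a concrete illustration, here is a small Python sketch that reproduces those four counts for the emoji in the title. The grapheme-cluster count uses the third-party `regex` module, since the standard library has no grapheme API (an assumption about the environment, not something from the thread).

```python
# Counting "🤦🏼‍♂️" (U+1F926 U+1F3FC U+200D U+2642 U+FE0F) four different ways.
# Requires the third-party `regex` module for the grapheme-cluster count.
import regex

s = "\U0001F926\U0001F3FC\u200D\u2642\uFE0F"

print(len(s.encode("utf-8")))            # 17 UTF-8 code units (bytes)
print(len(s.encode("utf-16-le")) // 2)   # 7  UTF-16 code units
print(len(s))                            # 5  Unicode scalar values (code points)
print(len(regex.findall(r"\X", s)))      # 1  extended grapheme cluster
```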
We would not have this problem if we all agree to return number of bytes instead.
Edit: My mistake. There would still be inconsistency between different encodings. My point is, if we all decided to report the number of bytes that a string uses instead of the number of printable characters, we would not have the inconsistency between languages.
UTF-8 code units _are_ bytes, which is one of the things that makes UTF-8 very nice and why it has won
Only if you are using a new enough version of Unicode. If you were using an older version, it is more than 1. As new Unicode updates come out, the number of grapheme clusters a string has can change.
But that isn't the same across all languages, or even across all implementations of the same language.
I don't understand. It depends on the encoding, doesn't it?
- Number of bytes this will be stored as in the DB
- Number of monospaced font character blocks this string will take up on the screen
- Number of bytes that are actually being stored in memory
"String length" is just a proxy for something else, and whenever I'm thinking shallowly enough to want it (small scripts, mostly-ASCII, mostly-English, mostly-obvious failure modes, etc) I like grapheme cluster being the sensible default thing that people probably expect, on average.
Notably Rust did the correct thing by defining multiple slightly incompatible string types for different purposes in the standard library and regularly gets flak for it.
My understanding of the current "always and only utf-8/unicode" zeitgeist is that it comes mostly from encoding issues, among which the complexity of detecting the encoding.
I think that the current status quo is better than what came before, but I also think it could be improved.
But if you do want a sequence of bytes for whatever reason, you can trivially obtain that in any version of Python.
I'll probably just use rust for that script if python2 ever gets dropped by my distro. Reminds me of https://gregoryszorc.com/blog/2020/01/13/mercurial%27s-journ...
Show me.
This is a script created by someone on #nethack a long time ago. It works great with other things as well like old BBS games. It was intended to transparently rewrite single byte encodings to multibyte with an optional conversion array.
It almost works as-is in my testing. (By the way, there's a typo in the usage message.) Here is my test process:
I suppressed random output of C0 control characters to avoid messing up my terminal, but I added a test that basic ANSI escape sequences can work through this. (My initial version of this didn't flush the output, which mistakenly led me to try a bunch of unnecessary things in the main script.)
After fixing the `print` calls, the only thing I was forced to change (although I would do the code differently overall) is the output step:
I've tried this out locally (in gnome-terminal) with no issue. (I also compared to the original; I have a local build of 2.7 and adjusted the shebang appropriately.) There's a warning that `bufsize=1` no longer actually means a byte buffer of size 1 for reading (instead it's magically interpreted as a request for line buffering), but this didn't cause a failure when I tried it. (And setting the size to e.g. `2` didn't break things, either.)
I also tried having my test process read from standard input; the handling of ctrl-C and ctrl-D seems to be a bit different (and in general, setting up a Python process to read unbuffered bytes from stdin isn't the most fun thing), but I generally couldn't find any issues here, either. Which is to say, the problems there are in the test process, not in `ibmfilter`. The input is still forwarded to, and readable from, the test process via the `Popen` object. And any problems of this sort are definitely still fixable, as demonstrated by the fact that `curses` is still in the standard library.
Of course, keys in the `special` mapping need to be defined as bytes literals now. Although that could trivially be adapted if you insist.
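As a rough illustration of the kind of 2-to-3 changes being described (the `special` table and the forwarding function below are made-up stand-ins, not the actual script): keys become bytes literals, and raw bytes are written to `sys.stdout.buffer` instead of going through `print()`.

```python
import sys

# Hypothetical stand-in for a byte-translation table: in Python 3 the keys
# must be bytes literals, and the values here are UTF-8 encoded replacements.
special = {
    b"\x01": "\u263a".encode("utf-8"),   # 0x01 -> ☺ (CP437 smiley)
    b"\x03": "\u2665".encode("utf-8"),   # 0x03 -> ♥ (CP437 heart)
}

def forward(chunk: bytes) -> None:
    # Translate known single bytes, pass everything else through unchanged,
    # and write/flush raw bytes so interactive programs keep working.
    out = b"".join(special.get(bytes([b]), bytes([b])) for b in chunk)
    sys.stdout.buffer.write(out)
    sys.stdout.buffer.flush()

forward(b"hi \x01\x03")
```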
As for typo, yep. But then, I've left this script essentially untouched for a couple of decades since I was given it.
The languages that I really don't get are those that force valid UTF-8 everywhere but don't enforce NFC. Which is most of them, but it seems like the worst of both worlds.
Non-normalized Unicode is just as problematic as non-validated Unicode, IMO.
It uses Latin-1 for ASCII strings, UCS-2 for strings that contain code points in the BMP and UCS-4 only for strings that contain code points outside the BMP.
It would be pretty silly for them to explode all strings to 4-byte characters.
They need at most 21 bits. The bits may only be available in multiples of 8, but the implementation also doesn't byte-pack them into 24-bit units, so that's moot.
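A quick way to see that flexible representation (PEP 393) in action, for anyone curious; the exact numbers include a fixed per-object header and can vary by CPython version, so treat this as a sketch rather than a spec.

```python
import sys

# CPython picks the narrowest storage that fits the widest code point in the
# string: 1 byte/char (Latin-1), 2 bytes/char (BMP), or 4 bytes/char (astral).
for label, s in [("latin-1", "a" * 100),
                 ("bmp",     "\u20ac" * 100),       # € , U+20AC
                 ("astral",  "\U0001F926" * 100)]:  # 🤦 , U+1F926
    print(label, len(s), sys.getsizeof(s))
# Growth per character is roughly 1, 2 and 4 bytes respectively.
```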
I disagree. Not all text is human prose. For example, there is nothing wrong with a programming language that only allows ASCII in the source code, and there are many downsides to allowing non-ASCII characters outside string constants or comments.
Lots of people around the world learn programming from sources in their native language, especially early in their career, or when software development is not their actual job.
Enforcing ASCII is the same as enforcing English. How would you feel if all cooking recipes were written in French? If all music theory was in Italian? If all industrial specifications were in German?
It's fine to have a dominant language in a field, but ASCII is a product of technical limitations that we no longer have. UTF-8 has been an absolute godsend for human civilization, despite its flaws.
You severely underestimate how far you can get without any real command of the English language. I agree that you can't become really good without it, just like you can't do haute cuisine without some French, but the English language is a huge and unnecessary barrier to entry that you would put in front of everyone in the world who isn't immersed in the language from an early age.
Imagine learning programming using only your high school Spanish. Good luck.
And frequently, there is no other name. There are a lot of diseases, and no language has names for all of them.
Identifiers in code are not a limited vocabulary, and understanding the structure of your code is important, especially so when you are in the early stages of learning.
This + translated materials + locally written books is how STEM fields work in East Asia, the odds of success shouldn't be low. There just needs to be enough population using your language.
Andreas Rumpf, the designer of Nim, is Austrian. All the keywords of Nim are in English, the library function names are in English, the documentation is in English, Rumpf's book Mastering Nim is in English, the other major book for the language, Nim In Action (written by Dominik Picheta, nationality unknown but not American) is in English ... this is not "American imperialism" (which is a real thing that I don't defend), it's for easily understandable pragmatic reasons. And the language parser doesn't disallow non-ASCII characters but it doesn't treat them linguistically, and it has special rules for casefolding identifiers that only recognize ASCII letters, hobbling the use of non-ASCII identifiers because case distinguishes between types and other identifiers. The reason for this lack of handling of Unicode linguistically is simply to make the lexer smaller and faster.
Maybe I'm tired, but I've read this multiple times and can't quite figure out your desired position.
I *think* you are in favor of non-ASCII identifiers?
Like I said, I must be tired.
No, it is actually for security reasons. Once you allow non-ASCII identifiers, identifiers become non-identifiable. Only Zig recognized that. Nim allows insecure identifiers. https://github.com/rurban/libu8ident/blob/master/doc/c11.md#...
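To make the "identifiers become non-identifiable" point concrete, here is a small Python sketch. Python applies NFKC normalization to identifiers (PEP 3131), but that normalization does not fold homoglyphs, so two visually identical names can still be different identifiers.

```python
import unicodedata

latin = "payload"
confusable = "p\u0430yload"   # Cyrillic а (U+0430) in place of Latin a (U+0061)

print(latin == confusable)                                 # False
print(unicodedata.normalize("NFKC", confusable) == latin)  # False: NFKC keeps them distinct
# Two identifiers that render identically in most fonts can therefore
# name two different variables.
```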
The motte: non-ASCII identifiers should be allowed
The bailey: disallowing non-ASCII identifiers is American imperialism at its worst
In fact it's awesome that we have one common very simple character set and language that works everywhere and can do everything.
I have only encountered source code using my native language (German) in comments or variable names in highly unprofessional or awful software and it is looked down upon. You will always get an ugly mix and have to mentally stop to figure out which language a name is in. It's simply not worth it.
Please stop pushing this UTF-8 everywhere nonsense. Make it work great on interactive/UI/user facing elements but stop putting UTF-8-only restrictions in low-level software. Example: Copied a bunch of ebooks to my phone, including one with a mangled non-UTF-8 name. It was ridiculously hard to delete the file as most Android graphical and console tools either didn't recognize it or crashed.
I was with you until this sentence. UTF-8 everywhere is great exactly because it is ASCII-compatible (e.g. all ASCII strings are automatically also valid UTF-8 strings, so UTF-8 is a natural upgrade path from ASCII). Both are just encodings for the same UNICODE codepoints; ASCII just cannot go beyond the first 128 codepoints, but that's where UTF-8 comes in, and in a way that's backward compatible with ASCII - which is the one ingenious feature of the UTF-8 encoding.
And bytes can conveniently fit both ASCII and UTF-8.
If you want to restrict your programming language to ASCII for whatever reason, fine by me. I don't need "let wohnt_bei_Böckler_STRAẞE = ..." that much.
But if you allow full 8-bit bytes, please don't restrict them to UTF-8. If you need to gracefully handle non-UTF-8 sequences graphically show the appropriate character "�", otherwise let it pass through unmodified. Just don't crash, show useless error messages or in the worst case try to "fix" it by mangling the data even more.
This string cannot be encoded as ASCII in the first place.
> But if you allow full 8-bit bytes, please don't restrict them to UTF-8
UTF-8 has no 8-bit restrictions... You can encode any 21-bit UNICODE codepoint with UTF-8.
It sounds like you're confusing ASCII, Extended ASCII and UTF-8:
- ASCII: 7-bits per "character" (e.g. not able to encode international characters like äöü) but maps to the lower 7-bits of the 21-bits of UNICODE codepoints (e.g. all ASCII character codes are also valid UNICODE code points)
- Extended ASCII: 8-bits per "character" but the interpretation of the upper 128 values depends on a country-specific codepage (e.g. the interpretation of a byte value in the range between 128 and 255 is different between countries and this is what causes all the mess that's usually associated with "ASCII". But ASCII did nothing wrong - the problem is Extended ASCII - this allows to 'encode' äöü with the German codepage but then shows different characters when displayed with a non-German codepage)
- UTF-8: a variable-width encoding for the full range of UNICODE codepoints, uses 1..4 bytes to encode one 21-bit UNICODE codepoint, and the 1-byte encodings are identical with 7-bit ASCII (e.g. when the MSB of a byte in an UTF-8 string is not set, you can be sure that it is a character/codepoint in the ASCII range).
Out of those three, only Extended ASCII with codepages are 'deprecated' and should no longer be used, while ASCII and UTF-8 are both fine since any valid ASCII encoded string is indistinguishable from that same string encoded as UTF-8, e.g. ASCII has been 'retconned' into UTF-8.
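A small Python sketch of both points: ASCII bytes decode unchanged as UTF-8, while a codepage ("Extended ASCII") byte means different things depending on which codepage you decode it with.

```python
# ASCII is a strict subset of UTF-8: identical bytes, identical meaning.
assert "hello".encode("ascii").decode("utf-8") == "hello"

# Codepages are where mojibake comes from: the same byte, three readings.
b = "\u00e4".encode("cp1252")     # ä under the Western European codepage -> b'\xe4'
print(b.decode("cp1252"))         # ä
print(b.decode("cp437"))          # Σ  (the old IBM PC codepage reads 0xE4 as sigma)
try:
    b.decode("utf-8")
except UnicodeDecodeError as e:
    print(e)                      # 0xE4 alone is not a valid UTF-8 sequence
```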
The problem they're describing happens because file names (in Linux and Windows) are not text: in Linux (so Android) they're arbitrary sequences of bytes, and in Windows they're arbitrary sequences of UTF-16 code points not necessarily forming valid scalar values (for example, surrogates can be present alone).
And yet, a lot of programs ignore that and insist on storing file names as Unicode strings, which mostly works (because users almost always name files by inputting text) until somehow a file gets written as a sequence of bytes that doesn't map to a valid string (i.e., it's not UTF-8 or UTF-16, depending on the system).
So what's probably happening in GP's case is that they managed somehow to get a file with a non-UTF-8-byte-sequence name in Android, and subsequently every App that tries to deal with that file uses an API that converts the file name to a string containing U+FFFD ("replacement character") when the invalid UTF-8 byte is found. So when GP tries to delete the file, the App will try to delete the file name with the U+FFFD character, which will fail because that file doesn't exist.
GP is saying that showing the U+FFFD character is fine, but the App should understand that the actual file name is not UTF-8 and behave accordingly (i.e. use the original sequence-of-bytes filename when trying to delete it).
Note that this is harder than it should be. For example, with the old Java API (from java.io[1]) that's impossible: if you get a `File` object from listing a directory and ask if it exists, you'll get `false` for GP's file, because the `File` object internally stores the file name as a Java string. To get the correct result, you have to use the new API (from java.nio.file[2]) using `Path` objects.
[1] https://developer.android.com/reference/java/io/File
[2] https://developer.android.com/reference/java/nio/file/Path
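For what it's worth, Python exposes both views of the same problem; a minimal sketch assuming a POSIX filesystem that accepts arbitrary non-UTF-8 bytes and a UTF-8 locale (the file name is made up for illustration):

```python
import os, tempfile

d = tempfile.mkdtemp()
raw = b"caf\xe9.txt"                       # Latin-1 byte 0xE9, not valid UTF-8
open(os.path.join(os.fsencode(d), raw), "w").close()

print(os.listdir(os.fsencode(d)))   # [b'caf\xe9.txt']   -- the real bytes
print(os.listdir(d))                # ['caf\udce9.txt']  -- surrogateescape stand-in, not U+FFFD

# Because surrogateescape round-trips the original byte, deleting by the
# decoded name still works -- unlike an API that substitutes U+FFFD and
# throws the original byte away.
os.remove(os.path.join(d, os.listdir(d)[0]))
```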
Sure, it's backward compatible, as in ASCII handling codes work on systems with UTF-8 locales, but how important is that?
It's only Windows which is stuck in the past here, and Microsoft had 3 decades to fix that problem and migrate away from codepages to locale-agnostic UTF-8 (UTF-8 was invented in 1992).
None of the signals were intuitive because they weren’t the typical English abbreviations!
Restricting the program part to ASCII is fine for me, but as a fellow German it's also important to recognize that we don't lose much by not having ä cömplete sät of letters. Everyone can write comprehensible German using ASCII characters only. So I would listen to what people from languages that really don't fit into ASCII have to say.
More relevantly though, good things can come from people who also did bad things; this isn't to justify doing bad things in hopes something good also happens, but it doesn't mean we need to ideologically purge good things based on their creators.
UNICODE is essentially a superset of ASCII, and the UTF-8 encoding also contains ASCII as compatible subset (e.g. for the first 127 UNICODE code points, an UTF-8 encoded string is byte-by-byte compatible with the same string encoded in ASCII).
Just don't use any of the Extended ASCII flavours (e.g. "8-bit ASCII with codepages") - or any of the legacy 'national' multibyte encodings (Shift-JIS etc...) because that's how you get the infamous `?????` or `♥♥♥♥♥` mismatches which are commonly associated with 'ASCII' (but this is not ASCII, but some flavour of Extended ASCII decoded with the wrong codepage).
ASCII wasn't "imperialism," it was pragmatism. Yes, it privileged English -- but that's because the engineers designing it _spoke_ English and the US was funding + exporting most of the early computer and networking gear. The US Military essentially gave the world TCP/IP (via DARPA) for free!
Maybe "cultural dominance", but "imperialism at its worst" is a ridiculous take.
That's a tradeoff you should carefully consider because there are also downsides to disallowing non-ASCII characters. The downsides of allowing non-ASCII mostly stem from assigning semantic significance to upper/lowercase (which is itself a tradeoff you should consider when designing a language). The other issue I can think of is homographs but it seems to be more of a theoretical concern than a problem you'd run into in practice.
When I first learned programming I used my native language (Finnish, which uses 3 non-ASCII letters: åäö) not only for strings and comments but also identifiers. Back then UTF-8 was not yet universally adopted (ISO 8859-1 character set was still relatively common) so I occasionally encountered issues that I had no means to understand at the time. As programming is being taught to younger and younger audiences it's not reasonable to expect kids from (insert your favorite non-English speaking country) to know enough English to use it for naming. Naming and, to an extent, thinking in English requires a vocabulary orders of magnitude larger than knowing the keywords.
By restricting source code to ASCII only you also lose the ability to use domain-specific notation like mathematical symbols/operators and Greek letters. For example in Julia you may use some mathematical operators (eg. ÷ for Euclidean division, ⊻ for exclusive or, ∈/∉/∋ for checking set membership) and I find it really makes code more pleasant to read.
Not saying the trade-off isn't worth it, but I do feel like there is a tendency to overuse unicode somewhat in Julia.
Now list anything as important from your list of downsides that's just as unfixable
In addition to separate string types, they have separate iterator types that let you explicitly get the value you want.
Really my only complaint is I don't think String.len() should exist; it's too ambiguous. We should have to explicitly state what we want/mean via the iterators.
ugrapheme and ucwidth are one way to get the grapheme count from a string in Python.
It's probably possible to get the grapheme cluster count from a string containing emoji characters with ICU?
Most people aren't living in that world. If you're working at Amazon or some business that needs to interact with many countries around the globe, sure, you have to worry about text encoding quite a bit. But the majority of software is being written for a much narrower audience, probably for one single language in one single country. There is simply no reason for most programmers to obsess over text encoding the way so many people here like to.
Here's a better analogy: in the 70s "nobody planned" for names with 's in them. SQL injections, separators, "not in the alphabet", whatever. In the US. Where a lot of people with 's in their names live... Or double-barrelled names.
It's a much simpler problem and it still tripped up a lot of people.
And then you have to support a user with a "funny name" or a business with "weird characters", or you expand your startup to Canada/Mexico and lo and behold...
Even plain English text can't be represented with plain ASCII (although ISO-8859-1 goes a long way).
There are some cases where just plain ASCII is okay, but there are quite few of them (and even those are somewhat controversial).
The solution is to just use UTF-8 everywhere. Or maybe UTF-16 if you really have to.
Just never ever use Extended ASCII (8-bits with codepages).
If I do s.charAt(x) or s.codePointAt(x) or s.substring(x, y), I'd like to know which values for x and y are valid and which aren't.
If you take a substring of a(bc) and compare it to string (bc) are you looking for bitwise equivalence or logical equivalence? If the former it's a bit easier (you can just memcmp) but if the latter you have to perform a normalization to one of the canonical forms.
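In Python terms, the difference looks something like this (NFC chosen arbitrarily; NFD works just as well for the comparison):

```python
import unicodedata

a = "\u00e9"       # é as one precomposed code point
b = "e\u0301"      # é as e + combining acute accent

print(a == b)      # False: bitwise (code-point-wise) comparison
print(unicodedata.normalize("NFC", a) ==
      unicodedata.normalize("NFC", b))   # True: canonical equivalence
```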
I feel like if you’re looking for bitwise equivalence or similar, you should have to cast to some kind of byte array and access the corresponding operations accordingly
UTF-8 is a byte code format; Unicode is not. In Python, where all strings are arrays of Unicode code points, substrings are likewise arrays of Unicode code points.
(Also that's not what "character" means in the Unicode framework--some code points correspond to characters and some don't.)
P.S. Everything about the response to this comment is wrong, especially the absurd baseless claim that I misunderstood the claim that I quoted and corrected (that's the only claim I responded to).
My comment explains that you have misunderstood what the claim is. "Byte code format" was nonsensical (Unicode is not interpreted by a VM), but the point that comment was trying to make (as I understood it) is that not all subsequences of a valid sequence of (assigned) code points are valid.
> Also that's not what "character" means in the Unicode framework--some code points correspond to characters and some don't.
My definition does not contradict that. A code point is an integer in the Unicode code space which may correspond to a character. When it does, "character" trivially means the thing that the code point corresponds to, i.e., represents, as I said.
Neither of these are really useful unless you are implementing a font renderer or low level Unicode algorithm - and even then you usually only want to get the next code point rather than one at an arbitrary position.
- letter
- word
- 5 :P
Never thought of it, but maybe there are rules that allow visually presenting the code point for ß as ss? At least (from experience as a user) there seems to be a singular "ss" codepoint.
From a user experience perspective though it might be beneficial to pretend that "ß" == "ss" holds when parsing user input.
I never said it was ambiguous, I said it depends on the unicode version and the font you are using. How is that wrong? (Seems like the capital of ß is still SS in the latest unicode but since ẞ is the preferred capital version now this should change in the future)
I don't know how or if systems deal with this, but ß should be printed as ss if ß is unavailable in the font. It's possible this is completely up to the user.
[1] https://unicode.org/faq/casemap_charprop.html [2] https://www.rechtschreibrat.com/DOX/RfdR_Amtliches-Regelwerk...
It's not. Uppercase of ß has always been SS.
Before we had a separate codepoint in Unicode this caused problems with round-tripping between upper and lower case. So Unicode rightfully introduced a separate codepoint specifically for that use case in 2008.
This inspired designers to design a glyph for that codepoint looking similar to ß. Nothing wrong with that.
Some liked the idea and it got some foothold, so in 2017, the Council for German Orthography allowed it as an acceptable variant.
Maybe it will win, maybe not, but for now in standard German the uppercase of ß is still SS and Unicode rightfully reflects that.
Thanks, that is interesting!
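Python's casing tables reflect exactly that history, for anyone who wants to poke at it:

```python
print("ß".upper())                                  # 'SS'  -- the longstanding default mapping
print("\u1e9e")                                     # 'ẞ'   -- the capital form added in 2008
print("\u1e9e".lower())                             # 'ß'
print("straße".casefold() == "STRASSE".casefold())  # True  -- casefolding maps ß to 'ss'
```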
> In the case of Wordle, you know the exact set of letters you’re going to be using
This holds for the generator side too. In fact, you have a fixed word list, and the fixed alphabet tells you what a "letter" is, and thus how to compute length. Because this concerns natural language, this will coincide with grapheme clusters, and with English Wordle, that will in turn correspond to byte length because it won't give you words with é (I think). In different languages the grapheme clusters might be larger than 1 byte (e.g. [1], where they're codepoints).
Strings should be thought of more like opaque blobs, and you should derive their length exclusively in the context in which you intend to use it. It's an API anti-pattern to have a context-free length property associated with a string because it implies something about the receiver that just isn't true for all relevant usages and leads you to make incorrect assumptions about the result.
Refining your list, the things you usually want are:
- Number of bytes in a given encoding when saving or transmitting (edit: or more generally, when serializing).
- Number of code points when parsing.
- Number of grapheme clusters for advancing the cursor back and forth when editing.
- Bounding box in pixels or points for display with a given font.
Context-free length is something we inherited from ASCII where almost all of these happened to be the same, but that's not the case anymore. Unicode is better thought of as compiled bytecode than something you can or should intuit anything about.
It's like asking "what's the size of this JPEG." Answer is it depends, what are you trying to do?
You shouldn't really ever care about the number of code points. If you do, you're probably doing something wrong.
Grapheme cluster counts can’t be used because they’re unstable across Unicode versions. Some algorithms use UTF8 byte offsets - but I think that’s a mistake because they make input validation much more complicated. Using byte offsets, there’s a whole lot of invalid states you can represent easily. Eg maybe insert “a” at position 0 is valid, but inserting at position 1 would be invalid because it might insert in the middle of a codepoint. Then inserting at position 2 is valid again. If you send me an operation which happened at some earlier point in time, I don’t necessarily have the text document you were inserting into handy. So figuring out if your insertion (and deletion!) positions are valid at all is a very complex and expensive operation.
Codepoints are way easier. I can just accept any integer up to the length of the document at that point in time.
You have the same problem with code points, it's just hidden better. Inserting "a" between U+0065 and U+0308 may result in a "valid" string but is still as nonsensical as inserting "a" between UTF-8 bytes 0xC3 and 0xAB.
This makes code points less suitable than UTF-8 bytes as mistakes are more likely to not be caught during development.
> This makes code points less suitable than UTF-8 bytes as mistakes are more likely to not be caught during development.
Disagree. Allowing 2 kinds of bugs to slip through to runtime doesn’t make your system more resilient than allowing 1 kind of bug. If you’re worried about errors like this, checksums are a much better idea than letting your database become corrupted.
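The U+0065/U+0308 example from a couple of comments up is easy to reproduce in Python: an insertion at a code-point index can't split a code point, but it can still split a grapheme.

```python
s = "e\u0308"                 # "ë" spelled as e + combining diaeresis
broken = s[:1] + "a" + s[1:]  # insert "a" at code-point index 1
print(broken)                 # 'eä' -- the diaeresis now attaches to the inserted 'a'
print(len(s), len(broken))    # 2 3  -- both are "valid" strings; the text is still mangled
```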
Like it or not, code points are how Unicode works. Telling people to ignore code points is telling people to ignore how data works. It's of the same philosophy that results in abstraction built on abstraction built on abstraction, with no understanding.
I vehemently dissent from this view.
Trying to handle code points as atomic units fails even in trivial and extremely common cases like diacritics, before you even get to more complicated situations like emoji variants. Solving pretty much any real-world problem involving a Unicode string requires factoring in canonical forms, equivalence classes, collation, and even locale. Many problems can’t even be solved at the _character_ (grapheme) level—text selection, for example, has to be handled at the grapheme _cluster_ level. And even then you need a rich understanding of those graphemes to know whether to break them apart for selection (ligatures like fi) or keep them intact (Hangul jamo).
Yes, people should learn about code points. Including why they aren’t the level they should be interacting with strings at.
Ironic.
> The advice wasn’t to ignore learning about code points
I didn't say "learning about."
Look man. People operate at different levels of abstraction, depending on what they're doing.
If you're doing front-end web dev, sure, don't worry about it. If you're hacking on a text editor in C, then you probably ought to be able to take a string of UTF-8 bytes, decode them into code points, and apply the grapheme clustering algorithm to them, taking into account your heuristics about what the terminal supports. And then probably either printing them to the screen (if it seems like they're supported) or printing out a representation of the code points. So yeah, you kind of have to know.
So don't sit there and presume to tell others what they should or should not reason about, based solely on what you assume their use case is.
Nobody is saying that, the point is that if you're parsing Unicode by counting codepoints you're doing it wrong. The way you actually parse Unicode text (in 99% of cases) is by iterating through the codepoints, and then the actual count is fairly irrelevant, it's just a stream.
Other uses of codepoint length are also questionable: for measurement it's useless, for bounds checking (random access) it's inefficient. It may be useful in some edge cases, but TFA's point is that a general purpose language's default string type shouldn't optimize for edge cases.
No, it's telling people that they don't understand how data works; otherwise they'd be using a different unit of measurement.
size(JPG) == bytes? sectors? colors? width? height? pixels? inches? dpi?
Most people care about the length of a string in terms of the number of characters.
Treating it as a proxy for the number of bytes has been incorrect ever since UTF-8 became the norm (basically forever), and if you're dealing with anything beyond ASCII (which you really should, since East Asian users alone number in the billions).
Same goes for the "string width".
Yes, Unicode scalar values can combine into a single glyph and cause discrepancies, as the article mentions, but that is a much rarer edge case than simply handling non-ASCII text.
And before that the only thing the relative rarity did for you was that bugs with code working on UTF-8 bytes got fixed while bugs that assumed UTF-16 units or 32-bit code points represent a character were left to linger for much longer.
The metrics you care about are likely number of letters from a human perspective (1) or the number of bytes of storage (depends), possibly both.
[1]: https://tomsmeding.com/unicode#U+65%20U+308 [2]: https://tomsmeding.com/unicode#U+EB
In an environment that supports advanced Unicode features, what exactly do you do with the string length?
I want to make sure that the password is between a given number of characters. Same with phone numbers, email addresses, etc.
This seems to have always been known as the length of the string.
This thread sounds like a bunch of scientists trying to make a simple concept a lot harder to understand.
Even this has to deal with the halfwidth/fullwidth split in CJK. Even worse, Devanagari has complex rendering rules that actually depend on font choices. AFAIU, the only globally meaningful category here is rendered bounding box, which is obviously font-dependent.
But I agree with the general sentiment. What we really care about is how much space these text blobs take up, whether that be in a DB, in memory, or on the screen.
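Even "monospace blocks on the screen" needs Unicode data, as noted above. A small sketch using the standard library's East Asian Width property (terminals generally give "W"/"F" characters two cells):

```python
import unicodedata

for ch in "aあ字Ａ":
    print(ch, unicodedata.east_asian_width(ch))
# a Na  (narrow: one cell)
# あ W   (wide: two cells)
# 字 W   (wide: two cells)
# Ａ F   (fullwidth: two cells)
```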
114 more comments available on Hacker News