Who cares? It’s going to be hashed anyway. If the same user can generate the same input, it will result in the same hash. If another user can’t generate the same input, well, that’s really rather the point. And I can’t think of a single backend, language, or framework that doesn’t treat a single Unicode character as one character. Byte length of the character is irrelevant as long as you’re not doing something ridiculous like intentionally parsing your input in binary and blithely assuming that every character must be 8 bits in length.
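To make that concrete, here's a minimal Python 3 sketch (bare SHA-256 stands in for a real password hash like bcrypt or argon2, purely for illustration). The hash consumes bytes, so the per-character byte width never enters into it:

    import hashlib

    # 6 two-byte Cyrillic letters plus one four-byte emoji:
    # 7 code points, 16 UTF-8 bytes.
    password = "пароль🔑"
    data = password.encode("utf-8")
    print(len(password), len(data))  # 7 16

    # The digest depends only on the bytes fed in, so the same
    # input always yields the same hash, whatever the width of
    # each character.
    digest_1 = hashlib.sha256(data).hexdigest()
    digest_2 = hashlib.sha256(password.encode("utf-8")).hexdigest()
    print(digest_1 == digest_2)  # True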
Hmm. I wonder about this one. Different ways to encode the same character (Unicode normalization forms). Different ways to calculate the length (code points vs. UTF-8 bytes vs. grapheme clusters). No obvious maximum byte size.
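For instance, a minimal Python 3 sketch of the first two points (SHA-256 is again just a stand-in for a real password hash): "é" can arrive either as the precomposed code point U+00E9 or as "e" plus a combining acute accent, U+0065 U+0301. The two render identically but differ in code points, in bytes, and therefore in hash, unless the server normalizes first:

    import hashlib
    import unicodedata

    precomposed = "\u00e9"   # 'é' as a single code point
    decomposed = "e\u0301"   # 'e' + combining acute accent

    print(precomposed == decomposed)          # False
    print(len(precomposed), len(decomposed))  # 1 2  (code points)
    print(len(precomposed.encode("utf-8")),
          len(decomposed.encode("utf-8")))    # 2 3  (UTF-8 bytes)

    # Visually identical passwords, different digests...
    print(hashlib.sha256(precomposed.encode("utf-8")).hexdigest() ==
          hashlib.sha256(decomposed.encode("utf-8")).hexdigest())  # False

    # ...unless both are normalized (here to NFC) before hashing.
    nfc_a = unicodedata.normalize("NFC", precomposed)
    nfc_b = unicodedata.normalize("NFC", decomposed)
    print(hashlib.sha256(nfc_a.encode("utf-8")).hexdigest() ==
          hashlib.sha256(nfc_b.encode("utf-8")).hexdigest())       # True

So "the same user can generate the same input" only holds if every client emits the same normalization form, or the server normalizes before hashing; different operating systems and input methods don't all agree on this.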