• 2 Posts
  • 16 Comments
Joined 1 year ago
Cake day: June 12th, 2023



  • I arrived in China in 2001.

    I experienced the harshest and largest lockdown in all of history: Wuhan, January 23rd, 2020. A real lockdown, not the cosplay bullshit you experienced outside of China. (Yes, this is me saying you’ve never fucking set foot in the country.)

    The rest you’re just flat-out lying about. Sorry, Sparky. Did pet killings happen? Yes. They were not the mass shit that the press you’re so obviously reciting acts like they were. Did some doors get welded? Yes. But nowhere near you and, again, nowhere near in the masses the press you’re basing your lies on made it seem like. The local salaries are garbage iff you’re a fuckwit sitting in the west applying western prices to Chinese salaries. (Which, naturally, you are, good little fuckwit liar that you are.) And you’ve changed your tune from 14 hours to 12 hours really fucking quickly there, Sparky, not to mention using the proper slang only after I gave it to you.

    So yeah, you’re just a west-dwelling fuckwit lying about being here. Go toddle off in your China Watcher corners and play with the rest of the intellectual children you belong with. There’s a good boy.




  • The author, w/o explicitly mentioning it anywhere, is clearly talking about distributed systems where you’ve got plenty of resources, stable network connectivity, and a log/trace ingestion solution (like Sumo or Datadog) alongside your setup.

    That is the very core of my objection. He hasn’t identified the warrants for his argument, meaning his argument is literally gibberish to people working from a different set of warrants. Dudebro here could learn a thing or two from Toulmin.

    This is a problem endemic to techbros writing about tech. They assume, quite incorrectly, that the entire world is just clones of themselves perhaps a little bit behind on the learning curve. (It never occurs, naturally, that others might be ahead of them on the learning curve or *gasp!* that there may be more than one curve! That would be silly!)

    So they write without establishing their warrants. (Hell, they often write without bothering to define their terms, because “trace” means the same thing in all forms of computer technology, amirite?!) They write as if they have The Answer instead of merely a possible answer in a limited set of circumstances (which they fail to identify). And they write as if they’re at the top of the learning heap instead of, as is statistically far more likely, somewhere in the middle.

    Which makes it funny when he sings the praises of a tracing library that, when I investigated it briefly, made me choke with laughter at just how painfully ineffective it is compared to tools I’ve used in the past; specifically Erlang’s tracing tools. The library he’s text-wanking to is pitifully weak compared to what comes out of the box in an Erlang environment. You have to manually insert tracing calls (error-prone, tedious, obfuscatory), for example. Whatever you don’t decide to trace in advance can’t be traced. Erlang’s tracing system (and, presumably, Ruby-on-BEAM’s, a.k.a. Elixir’s), by contrast, lets you make ad hoc tracing calls on live systems as they’re executing. This means you can trace a live system as it’s fucking up without having to be a precognitive psychic when coding, leaving the cost of tracing at zero until such time as you genuinely need it. (There’s a sketch of the difference at the end of this comment.)

    So he doesn’t identify his warrants, he writes as if he has the One True Answer, he assumes all programming forms use the same jargon in the same way, and he acts as if he’s the guru sharing his wisdom when he’s actually way behind the curve on the very tech he’s pitching.

    He is, in a word, a programmer.
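
    To make that concrete, here’s a minimal sketch, in Go against the stock OpenTelemetry API (the same family of tooling he’s pitching). The package, function, and span names are mine, invented purely for illustration:

    package billing

    import (
    	"context"

    	"go.opentelemetry.io/otel"
    )

    // A named tracer pulled from the globally registered TracerProvider
    // (a no-op provider until you wire up a real exporter).
    var tr = otel.Tracer("billing_api")

    func ChargeCustomer(ctx context.Context, customerID string) error {
    	// This span exists only because somebody typed these two lines
    	// before the binary shipped.
    	ctx, span := tr.Start(ctx, "charge_customer")
    	defer span.End()

    	// Anything inside applyCharge that wasn't wrapped the same way is
    	// invisible to tracing; there is no "attach to the live system and
    	// trace whatever is misbehaving right now" option here.
    	return applyCharge(ctx, customerID)
    }

    func applyCharge(ctx context.Context, customerID string) error { return nil }

    Compare that with attaching to a running BEAM node and switching tracing on for whichever module is misbehaving right now: no redeploy, no foresight required.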


  • My own thoughts.

    1. Instead of defining the difference between logging and tracing, the author spams the screen with pages’ worth of examples of why logging is bad, then jumps into tracing by immediately referencing code that uses a specific tracing library (OpenTelemetry Tracer) without at any point explaining what that code is actually doing to someone who is not familiar with it already. To me this smacks of preaching to the choir since if you’re already familiar with this tool, you’re likely already a) familiar with what “tracing” is compared to “logging”, and b) probably a tracing advocate to begin with. If you want to persuade an undecided or unfamiliar audience, confusing them and/or making assumptions about what they know or don’t know is … suboptimal.

    2. If you’re going to screen dump your code in your rant, FUCKING COMMENT IT YOU GIT! I don’t want to have to read through 100 lines of code in an unfamiliar language written to an unfamiliar architecture to find the three (!) lines that are actually on the fucking topic!

    3. If you’re going to show changes in your code, put before/after snapshots side by side so I don’t have to go scrolling back to the uncommented hundred-line blob to see what changed. It’s not that hard. Using his own damned example from “Step 1”:

    // BEFORE
    func PrepareContainer(ctx context.Context, container ContainerContext, locales []string, dryRun bool, allLocalesRequired bool) (*StatusResult, error) {
    	logger.Info(`Filling home page template`)
    
    // AFTER
    var tr = otel.Tracer("container_api")
    
    func PrepareContainer(ctx context.Context, container ContainerContext, locales []string, dryRun bool, allLocalesRequired bool) (*StatusResult, error) {
    	ctx, span := tr.Start(ctx, "prepare_container")
    	defer span.End()
    

    (And while you’re at it, how 'bout explaining the fucking code you wrote? How hard is it to add a line explaining what that defer span.End() nonsense is? There’s a commented sketch after this list showing just how little effort that takes. Remember, you’re trying to sell people on the need for tracing. If they already know what you’re talking about, you’re preaching to the choir, son.)

    Of course in “The Result” he talks about the diff between the two functions … but doesn’t actually provide that diff. Instead he provides another hundred-line blob kept far away from the original so you have to bounce back and forth between them to spot the differences. Side-by-side diffs are a thing and there’s plenty of tools that make supplying them trivial. Maybe the author should think about using them.

    4. The technique this guy is espousing, if I’m reading it right, sounds fine but only in limited realms. This would kill development in my realm (small embedded systems), for example. If you have (effectively, from my domain’s perspective) infinite RAM, CPU, persistent storage, and bandwidth, then yes, this is likely a very good technique. (I can’t be certain, of course, because he hasn’t actually explained anything, just blasted uncommented code while referencing a library he assumes we know about. The only reason I followed any of it is that I’m familiar with Erlang’s tooling for this kind of stuff, which puts what he’s showing off to shame.) But if your RAM is limited (hint: measured in 2-digit KB and shared by your stack(s), heap, and static memory), if your CPU is a blazing-fast 80MHz, if you think 1MB of persistent storage (which your program binary has to share) is a true bucket of gold in wealth, and, yes, if you’re transmitting over a communications link that would have '80s-era modem jockeys looking at you with pity, then maybe, just maybe, tracing isn’t so great an idea after all.
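
    Since he couldn’t be bothered, here are his own two added lines from “Step 1” again, this time with the comments I’d have wanted. This is my reading of the stock OpenTelemetry Go API after a brief look, so take it as a sketch, not gospel:

    // Grab a named Tracer from the globally registered TracerProvider.
    // (Until you configure a real provider/exporter this is a no-op.)
    var tr = otel.Tracer("container_api")

    func PrepareContainer(ctx context.Context, container ContainerContext, locales []string, dryRun bool, allLocalesRequired bool) (*StatusResult, error) {
    	// Start a span named "prepare_container". If ctx already carries a
    	// span (say, from the HTTP handler), this one is recorded as its
    	// child; the returned ctx carries the new span so anything called
    	// from here can hang its own children off it.
    	ctx, span := tr.Start(ctx, "prepare_container")
    	// End() stamps the span's finish time and hands it off for export;
    	// defer makes that happen on every return path, panics included.
    	defer span.End()

    (Presumably the logger.Info(`Filling home page template`) call from the BEFORE block turns into an event or attribute on the span, e.g. via span.AddEvent, but he never shows that bit either, so who knows.) That’s a handful of comment lines. It wasn’t hard.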

  • For months at one place I worked, senior developers and even junior managers had been haranguing the higher-ups, ringing the alarm bell about how important the Internet was going to be and how we needed to start pivoting toward outfitting our product with the ability to interact properly on the Internet. We were steadfastly ignored and our concerns were quietly scoffed at because our product was a “best of breed” product in our space.

    Then we got hit by a huge wave of lost sales because we had no viable scheme in place to interact properly with Internet-based applications.

    The then-CEO called a “developers all-hands” meeting in which he pranced around on the stage at the front of the auditorium to complain to us that nobody had been telling him how important this Internet thing was going to be and that we were supposed to be keeping an eye on the leading edge of technology so he could make plans for these things.

    This sparked a VERY LOUD outcry as about 150 software developers who’d been ignored and scoffed at for months just flipped a switch into revolution mode. Lots of people started talking loudly (then shouting). One guy with a laptop connected it to the big projector display and started scrolling through an email folder where he’d collected the notices warning about the importance of the Internet and management’s (including the CEO’s) condescending replies. By the end of that little skirmish the CEO was making a lame excuse that he was “joking” and was “taking our feedback very seriously” after 20 people (half of them very senior) just flatly quit in front of him and walked out of the auditorium.

    That’s probably the worst “read the fucking room, dude!” moment I ever saw.


  • English has gendered pronouns, for example. There are also some gender divides in nouns, such as actor/actress. (These are slowly being replaced, however.)

    Languages like Farsi and Mandarin and such don’t. With Farsi, in fact, the only difference in pronouns is “courteous” vs. “common”. And even that isn’t used as much as it used to be. And the only time nouns are gendered is if the thing they describe has an actual physical gender, like “man” or “woman”. There are no gendered declensions of any kind.

    It’s more complicated in Chinese. In oral Chinese there are no gendered pronouns: the third-person pronoun is pronounced [tā] whether you mean man, woman, or other.1 As with Farsi, however, there are no gendered nouns outside of those describing literally physically-gendered things. And unlike Farsi, not only are there no gendered declensions of any kind, there are hardly any declensions at all2.


    1 In written Chinese, for complicated reasons, there are three different pronouns in common usage: 他 for masculine (he), 她 for feminine (she), and 它 for everything else (it). This “modernization” was first proposed in the very late 19th century and came into its final form sometime in the 1920s. It was a deliberate attempt to make Chinese easier to translate into western languages (and since at the time the Chinese had somewhat of an inferiority complex it was also couched as making Chinese a “modern” language). (There were a couple of others added, including one for deities and one for animals, but those never caught on and are hardly ever seen in modern Chinese.)

    But they’re all pronounced the same: [tā].

    And now, full circle, Chinese is “modernizing” again. While official laws, forms, scholarly papers, regulations, etc. use that three-way split in pronouns, increasingly in commercial settings (like the world’s largest digital souq: Taobao) all pronouns are being replaced with “TA”. Yes. Latin letters. Uppercased.

    This I find completely hilarious: Chinese developed gendered pronouns (in writing only!) to soothe western tastes … only to pick up an ungendered pronoun again … to match western tastes. And before westerners have solved the problem themselves in their own languages!

    2 Chinese does not decline for number except for a tiny handful of cases you can learn completely in 30 minutes. (And even here it’s not quite ‘declension’ like that word applies in the Indo-European family of languages.) There’s no “car” vs. “cars”. They’re both 汽车. If you want to specify that you mean more than one car, you would modify it by saying “some” or “three” or whatever in front of it: 一些汽车 [yī xiē qì chē], literally “one (small number) car” or “some cars”.


  • Speaking one language that is mildly gendered (English), two that are strongly (and in the case of the second bizarrely!) gendered (French, German) and one that is almost entirely ungendered (Mandarin), I have not found any utility whatsoever in grammatical gender.

    I suspect that grammatical gender is just an ur-form of grammatical classifiers that has stuck around for non-useful amounts of time. I suspect this because one of the grammatical “gender” divisions in use in many languages isn’t masculine/feminine(/neuter) but rather animate/inanimate. So I suspect that grammatical gender was a classification mechanism whose system and utility were distorted into uselessness over the thousands of years of spread and development.

    So why do we have classification mechanisms? Well, in Mandarin there’s classifier words. (In English too: “a sheet of paper”, not “a paper”, but it’s waaaaaaaaaaaaay stricter in Mandarin.) The classifiers in Mandarin, given the sheer amount of punning potential in oral language, are likely a redundant piece of information to help nail down which specific word you mean in contexts where it might be unclear. For example in a noisy environment, or if someone is speaking unclearly, “paper” (纸张 [zhǐ zhāng]) might be confused with “spider” (蜘蛛 [zhī zhū]). But if I say 一只蜘蛛 [yī zhī zhī zhū]—a spider—it’s harder to confuse that with 一张纸张 [yī zhāng zhǐ zhāng]—a piece of paper.

    So I’m positing that perhaps at some point grammatical gender was used as a primitive form of classification for disambiguation that some languages just never grew out of. Which is why in German men are masculine, women are feminine, boys are masculine, and girls are neuter. It has nothing to do with actual physical gender and is just a weird, atrophied, and somewhat useless remnant of language.