• 0 Posts
  • 11 Comments
Joined 1 year ago
Cake day: August 6th, 2023

  • What I’m trying to push back on is your assertion that everyone can do it.

    Security auditing is an extremely complex and specialized field within the already complex and specialized field of software development. Everyone cannot do it.

    Even if it were as straightforward as you imply, the sheer prevalence of major security flaws across thousands of open source packages shows that everyone doesn’t do it.

    If I were to leave out piles of aggregate and cement, barrels of water, hand tools, materials for forms, a grader and a compactor, and tell the neighborhood “now you can all pave your driveways,” I’d be looked at like a crazy person: even when presented with the materials, tools and equipment to do a job, most people still lack the training and experience to perform it.


  • Idk what the person you’re arguing with is trying to say, but as a prolific user of open source software I can tell you that thousands of serious vulnerabilities are discovered every time some auditing company passes its eye over GitHub.

    Malicious commits are a whole nother thing, and with the spaghetti-code nightmare that Python is nowadays, it’s extremely hard to figure out which commits are malicious.

    Open source software is not more secure by default and the possibility of audit by anyone does not mean that it’s actually getting done. The idea that anyone who can write software can audit software is also absurd. Security auditing is a specialized subset of programming that requires significant training, skill and experience.







  • Woof.

    I’m not gonna ape your style of argumentation or adopt a tone that’s not conversational, so if that doesn’t suit you don’t feel compelled to reply. We’re not machines here and can choose how or even if we respond to a prompt.

    I’m also not gonna stop anthropomorphizing the technology. We both know it’s a glorified math problem that can fake it till it makes it (hopefully); if we’ve both accepted calling it intelligence, there’s nothing keeping us from generalizing the inference “behavior” as “feeling”. In lieu of intermediate jargon, it’s damn near required.

    Okay:

    Failing to output correct information isn’t just a problem for one use case; it’s a deep and fundamental flaw in the technology. Teaching might be considered one use case, but it’s predicated on not imagining or hallucinating the answer. Ai can’t teach for this reason.

    If ai were profitable, why are there articles ringing the bubble alarm bell? Bubbles form when a bunch of money gets pumped in as investment but doesn’t come out as profit. Now it’s possible that there’s no bubble and all this worry is for nothing, but read the room.

    But let’s say you’re right and there’s not a bubble: why would you suggest community college as a place where ai could be profitable? Community colleges are run as public goods, not profit generating businesses. Ai can’t put them out of business because they aren’t in it! Now there are companies that make equipment used in education, but their margins aren’t usually wide enough to pay back massive vc investment.

    It’s pretty silly to suggest that billionaire philanthropy is a functional or desirable way to make decisions.

    EdX isn’t for the people who go to Harvard. It’s a rent-seeking cash grab intended to buoy the cash raft that keeps the school in operation. EdX isn’t an example of the private school class using machine teaching on itself, and certainly not on a broad scale. At best you could see private schools use something like EdX as supplementary coursework.

    I already touched on your last response up at the top, but clearly the people who work on ai don’t worry about precision or clarity because it can’t do those things reliably.

    Summarizing my post with GPT-4 is a neat trick, but it doesn’t actually prove what you seem to be going for, because both summaries were less clear and muddied the point.

    Now just a tiny word on tone: you’re not under any compulsion to talk to me or anyone else a certain way, but the way you wrote and set up your reply makes it seem like you feel under attack. What’s your background with the technology we call ai?



  • You got two problems:

    First, ai can’t be a tutor or teacher because it gets things wrong. Part of pedagogy is consistency and correctness, and ai offers neither. So it can’t do what you’re suggesting.

    Second, even if it could (it can’t get to that point, the technology is incapable of it, but we’re just spitballing here), that’s not profitable. I mean, what are you gonna do, replace public school teachers? The people trying to do that aren’t interested in replacing the public school system with a new gee whiz technology that provides access to infinite knowledge, that doesn’t create citizens. The goal of replacing the public school system is streamlining the birth to workplace pipeline. Rosie the robot nanny doesn’t do that.

    The private school class isn’t gonna go for it either, currently because they’re ideologically opposed to subjecting their children to the pain tesseract, but more broadly because they are paying big bucks for the best educators available, they don’t need a robot nanny, they already have plenty. You can’t sell precision mass produced automation to someone buying bespoke handcrafted goods.

    There’s a secret third problem which is that ai isn’t worried about precision or communicating clearly, it’s worried about doing what “feels” right in the situation. Is that the teacher you want? For any type of education?