
  • Eh, not much nefarious you can do by pushing data around. Burning a lot of CPU/GPU? Certainly, you can do a lot of evil with distributed computing. But bandwidth?

    It costs a lot to host all that data and stream it to so many people, all for them to just… throw it out? Users certainly don’t have the storage to keep even a constant 100 Mb/s of sneaky evil data (quick numbers below), let alone do any hidden compute with it, since the game’s CPU/GPU usage isn’t particularly out of the ordinary.

    So not much you could do here. Ockham’s razor just says… planes are fast, MSFS is a high-fidelity game, they’ve gotta load a lot of high-accuracy data very quickly, and they probably can’t spare the CPU for terribly complicated decompression.
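
    Quick back-of-the-envelope on that, for anyone curious; the 100 Mb/s figure is just the hypothetical rate from above:

    ```python
    # Back-of-the-envelope: how fast would a constant 100 Mb/s of
    # "sneaky evil data" pile up if a user actually kept it all?
    MBIT_PER_S = 100                          # hypothetical stream rate
    BYTES_PER_S = MBIT_PER_S * 1_000_000 / 8  # 12.5 MB per second

    for hours in (1, 24, 24 * 30):
        gb = BYTES_PER_S * hours * 3600 / 1e9
        print(f"{hours:>4} h -> {gb:,.0f} GB")

    # prints:
    #    1 h -> 45 GB
    #   24 h -> 1,080 GB
    #  720 h -> 32,400 GB
    ```

    So roughly a terabyte a day. Nobody’s drive is quietly absorbing that, which is the point.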


  • I think it is a problem. Maybe not for people like us, who understand the concept and its limitations, but “formal reasoning” is exactly how this technology is being pitched to the masses. “Take a picture of your homework and OpenAI will solve it”, “have it reply to your emails”, “have it write code for you”. All reasoning-heavy tasks.

    On top of that, Google and Bing have it answering user questions directly, it’s commonly pitched as a “tutor” or an “assistant”, the OpenAI API is being shoved everywhere under the sun for anything you can imagine, and nobody is attempting to clarify its weaknesses in their marketing.

    As it becomes more and more common, more and more users will crop up who don’t understand that it’s fundamentally incapable of reliably doing these things.



  • Yeah, this is the problem with frankensteining two systems together. Giving an LLM a prompt, plus a separate module that interprets images for it, leads to exactly this.

    The image parser goes “a crossword, with the following hints”, when what the AI needs to do the job is an actual understanding of the grid (roughly like the sketch below). If one single system understood both images and text, it could hypothetically pull the information it needed straight from the image. But LLMs aren’t really an approach to any true “intelligence”, so they’ll forever be unable to do that as one piece.
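
    A minimal sketch of what I mean; both functions are made-up stand-ins, not any real API:

    ```python
    # Hypothetical two-stage pipeline: an image module captions the
    # picture, and the LLM only ever sees that caption as text.

    def describe_image(image_bytes: bytes) -> str:
        """Stand-in image module. The grid geometry (cell positions,
        black squares, where answers intersect) never survives this step."""
        return 'A crossword, with the following hints: 1 Across: "Feline pet" (3), ...'

    def llm(prompt: str) -> str:
        """Stand-in LLM: it knows nothing beyond the text it's handed."""
        return "1 Across: CAT ..."  # can answer clues, can't fill a grid it never saw

    caption = describe_image(b"<crossword.png>")
    print(llm(f"Solve this crossword:\n{caption}"))
    ```

    Everything the LLM knows about the image has to squeeze through that one string, so any structure the captioner doesn’t put into words is simply gone.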



  • Storytime! Earlier this year, I had an Amazon package stolen. We had reason to be suspicious, so we immediately contacted the landlord and within six hours we had video footage of a woman biking up to the building, taking our packages, and hurriedly leaving.

    So of course, I go to Amazon and try to report my package as stolen… which traps me for a whole hour in a loop with Amazon’s “chat support” AI, repeatedly insisting that I wait 48 hours “in case my package shows up”. I cannot explain to this thing clearly enough that, no, it’s not showing up; I literally have video evidence of it being stolen that I’m willing to send. It cuts off the conversation once it gives its final “solution”, and I have to restart the convo over and over.

    Takes me hours to wrench a damn phone number out of the thing, and a human being actually understands me and sends me a refund within 5 minutes.


  • Eh, there are a lot of valid things to be skeptical about. Using these tools as a DM is fundamentally different from using them as a massive corporation, as you’re not considering replacing a team of talented artists and writers to cut costs.

    That said, done right, I also think this could be amazing. Legally train these models on the wealth of historical D&D art, and provide them to DMs to use during their campaigns: maps, art for places the DM is describing on the fly, all of these things that no artist could possibly make because the locations are being invented mid-session as the players throw a skilled DM curveballs. D&D feels like an ideal “problem” for a lot of the “solutions” AI has to offer.