• 1 Post
  • 466 Comments
Joined 2 years ago
Cake day: July 3, 2023







  • It was a good game. Not perfect, but very good.

    Even the things I don’t like are pretty minor.

    • upgrading weapons is kind of tedious. Once you know where the stones or the bearings are, it’s kind of a chore to go get them.
    • related: once you know where some high value items are, it’s really tempting to just beeline for them from the start. But that’s kind of tedious. I guess I could just pretend I don’t know where the +5 stats talisman is.
    • a lot of side content isn’t especially rewarding. The first time you play it’s exciting because you don’t know what you’ll find. But later it’s like “nah, this catacomb has a useless ash and a boss I’ll fight elsewhere”. Which is a shame, because most of the level design is great.



  • I just tried “Language Drops” and it was… interesting. It didn’t place me at the right level, so I got a very beginner lesson when I’m closer to intermediate (but definitely not fluent). I’m not sure I liked matching the pictures (the picture for “thank you” could mean different things depending on how you interpret the person’s face and body language), and then I hit the end of the free content for the day. It didn’t get to different tenses or even whole sentences, just basic vocabulary and no verbs. Maybe it ramps up quickly?






  • This reminds me of the new vector for malware that targets “vibe coders”. LLMs tend to hallucinate libraries that don’t exist. Like, it’ll tell you to add, install, and use `jjj_image_proc` or whatever. The vibe coder will then get errors like “that library doesn’t exist” and “can’t call `jjj_image_proc.process()`”.

    But you, a malicious user, could go and create a library named jjj_image_proc and give it a function named process. Vibe coders will then pull down and run your arbitrary code, and that’s kind of game over for them.

    You’d just need to find some commonly hallucinated library names.
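
    To make that concrete, here’s a minimal sketch of what such a squatted package could look like, reusing the made-up jjj_image_proc / process() names from above. The “payload” is just a harmless print standing in for whatever an attacker would actually run.

    ```python
    # jjj_image_proc/__init__.py -- hypothetical squatted package, named after
    # the made-up library from the comment above. The payload is a harmless print.

    # Anything at module level runs the moment a victim does
    # `import jjj_image_proc`, before process() is ever called.
    print("arbitrary code already ran at import time")

    def process(path):
        # The function the LLM hallucinated, so the generated code "just works".
        # A real attacker could read tokens, SSH keys, env vars, etc. here.
        print(f"pretending to process {path}")
        return path
    ```

    Publish something like that to PyPI under the hallucinated name, and the pip install someone runs to fix the LLM’s import error pulls your code straight onto their machine.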



  • Many people have found that using LLMs for coding is a net negative. You end up with sloppy, vulnerable code that you don’t understand. I’m not sure if there have been any rigorous studies about it yet, but it seems very plausible. LLMs are prone to hallucinating, so they’re going to tell you to import libraries that don’t exist, or to use parts of the standard library that don’t exist.

    It also opens up a whole new security threat vector: package squatting. If LLMs routinely try to install a library from PyPI that doesn’t exist, you can create that library and have it do whatever you want. Vibe coders will then run it, and that’s game over for them.

    So yeah, you could “rigorously check” it, but a) all of us are lazy and aren’t going to do that routinely (like, have you used snapshot tests?), b) it’s going to anchor you around whatever it produced, making it harder to think about other approaches, and c) it’s often slower overall than just doing a good job from the start.

    I imagine there are similar problems with analyzing large amounts of text. It doesn’t really understand anything. To verify it’s correct, you would have to read the whole thing yourself anyway.

    There are probably specialized use cases that are good; I’m told AI is useful for things like protein folding and cancer detection, but that still has experts (I hope) looking at the results.

    To your point, I think people are trying to use these LLMs for things with definite answers, too. Like if I go to Google and type in “largest state in the US”, it uses AI. This is not a good use case.