• 0 Posts
  • 9 Comments
Joined 1 year ago
Cake day: June 29th, 2023


  • Personally, I’ve found that LLMs are best as discussion partners, to put it in the broadest terms possible. They do well for things you would use a human discussion partner for IRL.

    • “I’ve written this thing. Criticize it as if you were the recipient/judge of that thing. How could it be improved?” (Then address its criticisms in your thing… it’s surprisingly good at revealing ways to make your “thing” better, in my experience)
    • “I have this personal problem.” (Tell it to keep responses short. Have a natural conversation with it. This is best done spoken out loud if you are using ChatGPT; prevents you from overthinking responses, and forces you to keep the conversation moving. Takes fifteen minutes or more but you will end up with some good advice related to your situation nearly every time. I’ve used this to work out several things internally much better than just thinking on my own. A therapist would be better, but this is surprisingly good.)
    • I’ve also found it useful, for various reasons, to tell it to play a character as I describe, and then speak to the character in a pretend scenario to work something out. Use your imagination for how this might be helpful to you. In this case, tell it not to ask you so many questions, and to only ask a question when the character would truly want to. That helps keep it more normal; otherwise (in the case of ChatGPT, which I’m most familiar with) it will always end every response with a question. Often that’s useful, like in the previous example, but in this case it is not.
    • etc.

    For anything but criticism of something written, I find that the “spoken conversation” features are most useful. I use it a lot in the car during my commute.

    For what it’s worth, in case this makes it sound like I’m a writer and my examples are only writing-related, I’m actually not a writer. I’m a software engineer. The first example can apply to writing an application or a proposal or whatever. Second is basically just therapy. Third is more abstract, and often about indirect self-improvement. There are plenty more things that are good for discussion partners, though. I’m sure anyone reading can come up with a few themselves.
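Since I mentioned the character/role-play pattern above, here’s a rough sketch of how I’d set it up against any chat-style LLM API. The character and opener below are made-up placeholders, and the exact system-prompt wording is just my guess at what works; the point is pinning the character and suppressing the constant trailing questions:

```python
# Sketch of the role-play pattern: pin the model to a character and
# discourage it from ending every response with a question.
# The character and opening line here are hypothetical placeholders.

def build_roleplay_messages(character: str, opener: str) -> list[dict]:
    """Build a chat transcript that puts the model in character, asking
    questions only when the character naturally would."""
    return [
        {
            "role": "system",
            "content": (
                f"Play this character: {character}. Stay in character. "
                "Keep responses short and conversational. Do not end every "
                "response with a question; only ask one when the character "
                "would genuinely want to ask it."
            ),
        },
        {"role": "user", "content": opener},
    ]

messages = build_roleplay_messages(
    "a skeptical but fair hiring manager",  # hypothetical character
    "Thanks for meeting with me today.",    # hypothetical opener
)
```

The returned `messages` list is in the common system/user chat format, so it can be handed to whatever chat-completion endpoint you happen to use.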




  • That’s a very interesting suggestion and I’d love to see it done, actually, regardless of what I’m about to write.

    The problem is that mods aren’t bot sweepers or disinformation sniffers. They’re just regular people… and there are relatively few of them. They probably have, on average, a better radar than most users, but when it comes to malicious actors they aren’t going to be perfect. More importantly, they have a finite amount of time and effort they can put into moderation. It’s way better to organically crowd-source these kinds of things if it’s possible, and the kind of community Lemmy has makes it possible.

    Banning these comments makes the community susceptible to all kinds of manipulation, especially in the run-up to a US election (let alone this one). The benefit of banning these comments is comparatively very minimal: effectively removing one type of ad hominem attack in arguments that have always featured ad hominem attacks, in one form or another.



  • keegomatic@lemmy.world to World News@lemmy.world · Sidebar Update: Civility · 4 months ago

    I think that public call-outs of suspicious behavior are the only real and continuous way to teach new or under-informed users what bots and disinformation actors (ESPECIALLY these) sound like. I don’t remember the last time I personally called out someone I thought was a paid/malicious account or a bot… maybe I never have on Lemmy. But despite the incivility, I truly believe the publicity of these comments is good for creating a resilient community.

    I’ve been on forums or aggregators similar to Lemmy for a long time, and I think I have a pretty good radar when it comes to identifying suspicious account behavior. I think reading occasional accusations from within your community helps you think critically about what’s being espoused in the thread, what the motivations of different users are, and whether to believe or disbelieve the accuser.

    Yes, sometimes it’s used as a personal attack. But it’s better to have it out in the open so that the reality of online discourse (extremely frequent attempted manipulation of opinions) is clear to everyone, and the community can respond positively or negatively to it and organically support users that are likely victims.



  • I have used it several times for long-form writing as a critic, rather than as a “co-writer.” I write something myself, tell it to pretend to be the person who would be reading this thing (“Act as the beepbooper reviewing this beepboop…”), and ask for critical feedback. It usually has some genuinely great advice, and then I incorporate that advice into my thing. It ends up taking just as long as writing the thing normally, but the result is materially better than what I would have written without it.

    I’ve also used it to generate an outline to use as a skeleton while writing. Its own writing is often really flat and written in a super passive voice, so it kinda sucks at doing the writing for you if you want it to be good. But it works in these ways as a useful collaborator and I think a lot of people miss that side of it.
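To make the critic pattern concrete, here’s a rough sketch in the same spirit. The reviewer role and draft text are hypothetical placeholders, and the prompt wording is only an assumption about what elicits critique rather than a rewrite; the output is a standard system/user messages list usable with any chat-completion API:

```python
# Sketch of the "act as the reviewer" critic pattern: ask the model to
# critique a draft from the point of view of its eventual reader/judge,
# without rewriting it. Role and draft below are hypothetical placeholders.

def build_critic_messages(reviewer_role: str, draft: str) -> list[dict]:
    """Build a chat transcript asking for critical feedback on a draft,
    in character as the person who would actually receive it."""
    return [
        {
            "role": "system",
            "content": (
                f"Act as {reviewer_role}, reviewing the text the user "
                "submits. Give concrete, critical feedback: what is weak, "
                "what is missing, and how each point could be improved. "
                "Do not rewrite the text yourself."
            ),
        },
        {"role": "user", "content": draft},
    ]

messages = build_critic_messages(
    "a grant committee member reading this proposal",  # hypothetical role
    "We request funding to study...",                  # hypothetical draft
)
```

You’d then address its criticisms in your own revision, rather than letting it write for you, which is the whole point of the pattern.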