• 0 Posts
  • 15 Comments
Joined 1 year ago
Cake day: June 11, 2023

  • I’m pretty sure I’ve seen several different clips where he repeats the same “I’m a moron” spiel.

    While I’ve only watched the few clips that came my way, I was under the impression that this is the entire point of his podcast: invite interesting* people, then validate them in discussion by agreeing with most of their takes, however bizarre, so that they speak freely about their topic.

    *where “interesting” usually means someone from the categories of fringe beliefs (often conspiracies), drugs, culturally influential people, or experts on whatever is a big topic for his viewership at the time.

    Many of the experts are also of the fringe-belief kind.


    Basically, if you take Rogan’s views significantly more seriously than the beliefs of your local meth head, you are doing it wrong.


  • Someone made a website compiling them, which you might be able to find, but here’s what I remember:

    • Shipping the extraordinarily unstable test release of a package in their normal releases. That package specifically included disclaimers that it was for testing only and very clearly not meant for general release to unsuspecting end users.

    • Getting banned from the AUR (twice?) for DDoS-ing it with their faulty code. As I recall, every machine queried the AUR for updates constantly, or something like that (see the sketch after this list).

    • Breaking AUR dependencies by holding back releases for a few weeks, which they do regularly in the name of stability. Basically, don’t use the AUR on Manjaro.
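
    As an aside, here’s a minimal sketch of the polling mistake described above, in Python. The intervals and function names are my own assumptions for illustration; this is not Manjaro’s actual code.

        # Illustrative only: why a careless update checker can flood a server.
        import random
        import time
        import urllib.request

        # Real AUR RPC endpoint format; the package name is a placeholder.
        AUR_RPC = "https://aur.archlinux.org/rpc/?v=5&type=info&arg[]=example"

        def check_updates_buggy():
            # Bug pattern: poll in a tight loop with no delay and no caching.
            # Multiplied across every installed machine, the server sees a
            # constant stream of requests -- an accidental DDoS.
            while True:
                urllib.request.urlopen(AUR_RPC).read()

        def check_updates_polite(interval_hours=6):
            # Saner pattern: poll on a long, jittered interval so that
            # machines spread their requests out over time.
            while True:
                urllib.request.urlopen(AUR_RPC).read()
                time.sleep(interval_hours * 3600 + random.uniform(0, 600))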




  • Direct link to the (short) report this article refers to:

    https://stacks.stanford.edu/file/druid:vb515nd6874/20230724-fediverse-csam-report.pdf

    https://purl.stanford.edu/vb515nd6874


    After reading it, I’m still unsure what exactly they consider to be CSAM and how much of each category they found. Here is what they count as CSAM categories, as far as I can tell. No idea how much the categories overlap, and therefore no idea how many instances beyond the 112 PhotoDNA images are of actual children.

    1. 112 instances of known CSAM of actual children (identified by PhotoDNA; see the sketch after this list).
    2. 713 instances of assumed CSAM, based on hashtags.
    3. 1,217 text posts about grooming/trading-related topics. Includes no actual CSAM or CSAM trading/selling on Mastodon itself, but some links to other sites?
    4. Drawn and computer-generated images. (No quantity given; possibly not counted, or part of the 713 posts above?)
    5. Self-generated CSAM. (The example given is someone literally selling pics of their dick for Robux.) (No quantity given here either.)
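
    For context on item 1: PhotoDNA-style detection boils down to matching uploads against a database of hashes of known material. Here’s a minimal sketch of the idea in Python. Note the stand-ins: real PhotoDNA uses a proprietary perceptual hash that survives resizing and re-encoding, whereas the cryptographic hash used here only matches byte-identical files, and the hash set and function name are made up for illustration.

        # Minimal sketch of known-hash matching (the idea behind PhotoDNA).
        # SHA-256 stands in for PhotoDNA's proprietary perceptual hash.
        import hashlib

        # Hypothetical known-hash database; real lists come from
        # clearinghouses. This entry is the SHA-256 of b"test" so the
        # demo below actually fires.
        KNOWN_HASHES = {
            "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
        }

        def is_known_match(upload: bytes) -> bool:
            # Hash the upload and check membership in the known-hash set.
            return hashlib.sha256(upload).hexdigest() in KNOWN_HASHES

        # A server would run this on every new upload before publishing it.
        if is_known_match(b"test"):
            print("match: block the upload and file a report")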

    Personally, I’m not sure what the takeaway is supposed to be from this. It’s impossible to moderate all user-generated content quickly, and that’s not a Fediverse-specific issue: the same is true for Twitter, Reddit, and every other big content-generating site. It’s a hard problem to solve. Known CSAM being deleted within hours is already pretty good, imho.

    Meta-discussion in particular is hard to police. Based on the report, it seems that most CSAM, by volume, is traded through other services (chat rooms).

    For me, there’s a huge difference between actual children being directly exploited and virtual depictions of fictional children. Personally, I consider the latter the same as any other fetish imagery that would be illegal with actual humans (guro/vore/bestiality/rape, etc.).