floofloof@lemmy.ca to Technology@lemmy.world · English · 10 months ago

AI models collapse when trained on recursively generated data - Nature
www.nature.com · 246 points · 34 comments · cross-posted to: [email protected]

Analysis shows that indiscriminately training generative artificial intelligence on real and generated content, usually done by scraping data from the Internet, can lead to a collapse in the ability of the models to generate diverse high-quality output.
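
The dynamic the abstract describes can be illustrated with a toy simulation (a minimal sketch, not the Nature paper's actual setup): repeatedly fit a simple model to data, then train the next "generation" only on samples drawn from that fit. The fitted distribution narrows generation after generation, which is the loss of diverse output in miniature.

```python
# Toy illustration of model collapse (illustrative only, not the paper's method):
# each "generation" is a Gaussian fit to samples drawn from the previous fit.
import numpy as np

rng = np.random.default_rng(0)

def run_generations(n_samples=100, n_generations=300):
    data = rng.normal(0.0, 1.0, n_samples)       # original "real" data
    for gen in range(n_generations):
        mu, sigma = data.mean(), data.std()      # fit this generation's model
        data = rng.normal(mu, sigma, n_samples)  # next gen sees only generated data
        if gen % 50 == 0:
            print(f"generation {gen:3d}: fitted std = {sigma:.3f}")
    return data.std()

print(f"std after recursive training: {run_generations():.3f}")
```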
  • BombOmOm@lemmy.world · +102/-5 · 10 months ago

    Yep. It leads to a positive feedback loop. They just continue to self-reinforce whatever came out before.

    And with increasing amounts of the internet being polluted with AI text output…

    • Ensign_Crab@lemmy.world · +91/-2 · 10 months ago

      … AI inbreeding.

      • skillissuer@discuss.tchncs.de · +63/-1 · 10 months ago

        hapsburgGPT

      • Boozilla@lemmy.world · +15/-1 · 10 months ago

        We call it the GRRM model.

        • Sibbo@sopuli.xyz · +30/-1 · 10 months ago

          In the USA, they call it the AlaLlama model.

        • bionicjoey@lemmy.ca · +5 · 10 months ago

          GPTargaryen

        • sp3ctr4l@lemmy.zip · +1 · 10 months ago

          What about the Grrr! model after that astoundingly XD So Random! thing from Invader Zim?

          He’s an android or robot, right?

      • LilaOrchidee@feddit.org · +6/-1 · 10 months ago

        AInbreeding

    • MagicShel@programming.dev · +17 · 10 months ago

      That seems so obviously predictable.

    • kevincox@lemmy.ml · +17/-1 · 10 months ago

      To be fair, this doesn’t sound much different from your average human using the internet.

      • sp3ctr4l@lemmy.zip · +4 · 10 months ago

        2024, Reverse Turing Test Challenge:

        Can an LLM AI differentiate between human input and LLM AI input?
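
The "reverse Turing test" above is easy to mock up in outline; the sketch below is purely hypothetical (ask_judge is a placeholder heuristic, not a real model or API) and just shows the shape of such an evaluation: label each sample, compare against ground truth, report accuracy.

```python
# Hypothetical reverse-Turing-test harness: a judge labels text as "human" or "ai"
# and is scored against known labels. `ask_judge` is a stand-in, not a real API.
def ask_judge(text: str) -> str:
    """Placeholder judge; swap in a call to an actual LLM to run the real test."""
    return "ai" if "as an ai language model" in text.lower() else "human"

samples = [
    ("As an AI language model, I cannot browse the internet in real time.", "ai"),
    ("lol my cat just knocked the router off the shelf again", "human"),
]

correct = sum(ask_judge(text) == label for text, label in samples)
print(f"judge accuracy: {correct}/{len(samples)}")
```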

    • Even_Adder@lemmy.dbzer0.com · +11/-2 · edited · 10 months ago

      You have to pretty much intentionally give it enough synthetic data to wreck it. OpenAI and Anthropic train their models on generated data to improve them. As long as there’s supervision during training, which there always will be, this isn’t really a problem.

      https://openai.com/index/prover-verifier-games-improve-legibility/

      https://www.anthropic.com/research/claude-character
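
A rough sketch of what "supervision during training" can look like for synthetic data (placeholder functions, not OpenAI's or Anthropic's actual pipelines, which are described in the links above): generated examples only enter the training mix if a verifier accepts them, and their share is capped so real data still anchors the distribution.

```python
# Hypothetical curation step for synthetic training data (illustrative only;
# generate_candidate and verifier_score are placeholders, not real APIs).
import random

random.seed(0)

def generate_candidate() -> str:
    """Stand-in for sampling one synthetic example from a model."""
    return random.choice(["clean example", "decent example", "garbled exxample"])

def verifier_score(example: str) -> float:
    """Stand-in for a verifier / reward model scoring the example."""
    return 0.1 if "garbled" in example else 0.9

def build_training_mix(real_data, n_synthetic=200, threshold=0.5, synthetic_share=0.3):
    candidates = (generate_candidate() for _ in range(n_synthetic))
    kept = [c for c in candidates if verifier_score(c) >= threshold]  # filter
    cap = int(len(real_data) * synthetic_share)                       # cap the share
    return real_data + kept[:cap]

mix = build_training_mix(real_data=["real document"] * 50)
print(f"{len(mix)} examples in the training mix")
```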

    • Tobberone@lemm.ee · +8 · 10 months ago

      Well… It’s built on statistics, and statistical inference will return to the mean eventually. If all it ever gets to train on is closer and closer to the mean, there will be nothing left to work with. It will all be the average…
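
That "return to the mean" intuition can be made concrete in the simplest possible case (a sketch under toy assumptions, not the paper's analysis): if each generation fits a Gaussian by maximum likelihood to n samples drawn from the previous generation's fit, the expected variance shrinks by a fixed factor every step, so the tails disappear geometrically.

```latex
% Toy Gaussian case: n samples per generation, MLE fit at every step.
% The MLE variance of n i.i.d. draws from N(\mu, \sigma_t^2) is biased low:
\mathbb{E}\!\left[\hat{\sigma}_{t+1}^{2}\right] = \frac{n-1}{n}\,\sigma_{t}^{2}
\quad\Longrightarrow\quad
\mathbb{E}\!\left[\hat{\sigma}_{t}^{2}\right] = \left(\frac{n-1}{n}\right)^{t}\sigma_{0}^{2}
\;\xrightarrow[\,t\to\infty\,]{}\; 0.
```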

Technology@lemmy.world

This is a most excellent place for technology news and articles.


Our Rules


  1. Follow the lemmy.world rules.
  2. Only tech-related news or articles.
  3. Be excellent to each other!
  4. Mod-approved content bots can post up to 10 articles per day.
  5. Threads asking for personal tech support may be deleted.
  6. Politics threads may be removed.
  7. No memes allowed as posts; they are OK to post as comments.
  8. Only approved bots from the list below are allowed; this includes bots using AI responses and summaries. To ask if your bot can be added, please contact a mod.
  9. Check for duplicates before posting; duplicates may be removed.
  10. Accounts 7 days old and younger will have their posts automatically removed.

Approved Bots


  • @[email protected]
  • @[email protected]
  • @[email protected]
  • @[email protected]