• halcyoncmdr@lemmy.world · 2 months ago

    You know guys, I’m starting to think what we heard about Altman when he was removed a while ago might actually have been real.

    /s

  • ThePowerOfGeek@lemmy.world · 2 months ago

    Altman is the latest from the conveyor belt of mustache-twirling frat-bro super villains.

    Move over Musk and Zuckerberg, there’s a new shit-heel in town!

  • JustARaccoon@lemmy.world · 2 months ago

    I’m confused. How can a company that’s gained numerous advantages from being a non-profit just switch to a for-profit model? Weren’t a lot of those advantages (like access to data and scraping) granted on the stipulation that it’s a non-profit? My brain says this should be illegal.

  • N0body@lemmy.dbzer0.com · 2 months ago

    There’s an alternate timeline where the non-profit side of the company won, Altman the Conman was booted and exposed, and OpenAI kept developing machine learning in ways that actually benefit real use cases.

    AI-assisted cancer screenings, signed off by a doctor, could become accurate enough to save so many lives, and spare so much suffering, through early detection.

    Instead, Altman turned a promising technology into a meme stock with a product released too early to ever fix properly.

    • patatahooligan@lemmy.world · 2 months ago

      No, there isn’t really any such alternate timeline. Good honest causes are not profitable enough to survive against the startup scams. Even if the non-profit side won internally, OpenAI would just be left behind, funding would go to its competitors, and OpenAI would shut down. Unless you mean a radically different alternate timeline where our economic system is fundamentally different.

      • rsuri@lemmy.world · 2 months ago (edited)

        I mean wikipedia managed to do it. It just requires honest people to retain control long enough. I think it was allowed to happen in wikipedia’s case because the wealthiest/greediest people hadn’t caught on to the potential yet.

        There’s probably an alternate timeline where wikipedia is a social network with paid verification by corporate interests who write articles about their own companies and state-funded accounts spreading conspiracy theories.

      • mustbe3to20signs@feddit.org · 2 months ago

        AI models can outmatch most oncologists and radiologists at recognizing early tumor stages in MRI and CT scans.
        Further developing this strength could lead to earlier diagnoses with less-invasive methods, not only saving countless lives and prolonging the remaining quality life time for individuals, but also saving a shit ton of money.

        • T156@lemmy.world · 2 months ago

          That is a different kind of machine learning model, though.

          You can’t just plug your pathology images into their multimodal generative models and expect something usable to pop out.

          And those image recognition models aren’t something OpenAI is currently working on, iirc.

        • msage@programming.dev · 2 months ago

          Wasn’t it shown that one AI got amazing results because it noticed the cancer screens had the doctor’s signature at the bottom? Or did they do another run with the signatures hidden?

          • mustbe3to20signs@feddit.org · 1 month ago (edited)

            More than one system has been shown to “cheat” because of biased training materials. One model told ducks and chickens apart because it was trained with pictures of ducks in water and chickens on sandy ground, if I remember correctly.
            Since multiple medical image recognition systems are in development, I can’t imagine they’re all trained this badly on unsuitable materials.
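That duck/chicken story is the classic “shortcut learning” failure, and it is easy to reproduce in miniature. The sketch below is purely illustrative (every feature name and number is invented): a deliberately simple classifier trained on data where a background feature perfectly tracks the label latches onto the background, then collapses to chance once that correlation goes away.

```python
import numpy as np

# Toy reproduction of shortcut learning; all values are made up for illustration.
rng = np.random.default_rng(0)

def make_data(n, spurious_corr):
    """Each sample is [background_brightness, shape_feature].
    Label 0 = "duck", 1 = "chicken". The shape feature carries a weak real
    signal; the background ("water" vs "sand") agrees with the label at
    rate `spurious_corr`."""
    y = rng.integers(0, 2, n)
    shape = y + rng.normal(0.0, 1.5, n)                  # weak genuine cue
    agrees = rng.random(n) < spurious_corr
    background = np.where(agrees, y, 1 - y) + rng.normal(0.0, 0.1, n)
    return np.column_stack([background, shape]), y

def pick_best_feature(X, y):
    """A deliberately dumb learner: choose the single feature whose
    'value > 0.5' rule scores best on the training set."""
    accs = [np.mean((X[:, j] > 0.5) == y) for j in range(X.shape[1])]
    return int(np.argmax(accs))

X_tr, y_tr = make_data(2000, spurious_corr=1.0)   # biased training set
X_te, y_te = make_data(2000, spurious_corr=0.5)   # background now uninformative

feat = pick_best_feature(X_tr, y_tr)              # latches onto the background
train_acc = np.mean((X_tr[:, feat] > 0.5) == y_tr)
test_acc = np.mean((X_te[:, feat] > 0.5) == y_te)
print(feat, round(train_acc, 2), round(test_acc, 2))  # near-perfect vs ~chance
```

The same mechanism is at work when a model keys on a radiologist’s signature or a scanner watermark instead of the pathology itself.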

            • msage@programming.dev · 2 months ago

              They aren’t ‘faulty’; they were fed the wrong training data.

              This is the most important aspect of any AI: it’s only as good as its training dataset. If you don’t know the dataset, you know nothing about the AI.

              That’s why every claim of ‘super efficient AI’ needs to be investigated more deeply. But that goes against the line-goes-up principle, so don’t expect that to happen a lot.

  • NeoNachtwaechter@lemmy.world · 2 months ago (edited)

    Altman downplayed the major shakeup.

    "Leadership changes are a natural part of companies…"

    Is he just trying to tell us he is next?

    /s

    • Avg@lemm.ee · 2 months ago

      The CEO at my company said that 3 years ago. We’re going through execs like I go through amlodipine.

  • Chaotic Entropy@feddit.uk · 2 months ago

    The restructuring could turn the already for-profit company into a more traditional startup and give CEO Sam Altman even more control — including likely equity worth billions of dollars.

    I can see why he would want that, yes. We’re supposed to ooh and aah at a technical visionary, who is always ultimately a money-guy executive wanting more money and more executive power.

    • toynbee@lemmy.world · 2 months ago

      I saw an interesting video about this. It’s outdated (from ten months ago, apparently) but added some context that I, at least, was missing - and that also largely aligns with what you said. Also, though it’s not super evident in this video, I think the presenter is fairly funny.

      https://youtu.be/L6mmzBDfRS4

      • Melatonin@lemmy.dbzer0.com · 2 months ago

        That was a worthwhile watch, thank you for making my life better.

        I await the coming AI apocalypse with hope that I am not awake, aware, or sensate when they do whatever it is they’ll do to use or get rid of me.

        • sunbeam60@lemmy.one · 2 months ago

          You will be kept alive at subsistence level to buy the stuff you’ve been told to buy, don’t worry.

    • dan@upvote.au · 2 months ago (edited)

      It’s amusing. Meta’s AI team is more open than "Open"AI ever was: they publish loads of research papers for free, and the latest versions of Llama are very capable models that you can run on your own hardware (if it’s powerful enough), free as long as you don’t use them in an app with more than 700 million monthly users.

      • a9cx34udP4ZZ0@lemmy.world · 2 months ago

        That’s because Facebook is selling your data and access to advertise to you. The better AI gets across the board, the more money they make. AI isn’t the product, you are.

        OpenAI makes money off selling AI to others. AI is the product, not you.

        The fact that Facebook releases more code, in this instance, isn’t a good thing. It’s a reminder of how fucked we all are: they make so much off our personal data that they can afford to give away literally BILLIONS of dollars’ worth of IP.

        • dan@upvote.au · 2 months ago

          Facebook doesn’t sell your data, nor does Google. That’s a common misconception. They sell your attention. Advertisers can show ads to people based on some targeting criteria, but they never see any user data.

            • wischi@programming.dev · 2 months ago

              Selling your data would be stupid, because the fact that they have data about you nobody else has is exactly what they make their money from. Selling it would completely break their business model.

  • FlashMobOfOne@lemmy.world · 2 months ago

    Sounds like another WeWork or Theranos in the making, except we already know the product doesn’t do what it promises.

    • lando55@lemmy.world · 2 months ago

      What does it actually promise? AI (namely generative and LLM) is definitely overhyped in my opinion, but admittedly I’m far from an expert. Is what they’re promising to deliver not actually doable?

      • naught101@lemmy.world · 2 months ago

        It literally promises to generate content, but I think the implied promise is that it will replace parts of your workforce wholesale, with no drop in quality.

        It’s that last bit where the drama is going to happen.

      • frezik@midwest.social · 2 months ago (edited)

        They want AGI, which would match or exceed human intelligence. Current methods seem to be hitting a wall. It takes exponentially more inputs and more power to see the same level of improvement seen in past years. They’ve already eaten all the content they can, and they’re starting to talk about using entire nuclear reactors just to power it all. Even the more modest promises, like pictures of people with the correct number of fingers, seem out of reach.

        Investors are starting to notice that these promises aren’t going to happen. Nvidia’s stock price is probably going to be the bellwether.
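A back-of-envelope calculation shows why “exponentially more inputs and power” is the right intuition. Assuming a power-law scaling fit of the form loss ∝ C^(−α), where C is training compute (the exponent below is illustrative, not a measured value), each further fixed improvement costs multiplicatively more compute:

```python
# Illustrative power-law scaling: loss ~ C**(-alpha).
# alpha is a made-up but plausibly small exponent, not a published measurement.
alpha = 0.05

# Compute multiplier needed to cut loss by 10% (loss ratio 0.9):
# C2 / C1 = (loss1 / loss2) ** (1 / alpha)
step = (1 / 0.9) ** (1 / alpha)
print(f"{step:.1f}x compute per 10% loss reduction")

# Two successive 10% reductions compound multiplicatively:
print(f"{step ** 2:.0f}x compute for two such steps")
```

With a small exponent like this, each incremental gain costs roughly an order of magnitude more compute than the last, which is why the conversation turns to dedicated nuclear reactors.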

      • Smokeydope@lemmy.world · 2 months ago (edited)

        It delivers on what it promises for many of the people who use LLMs. They can be used for coding assistance, setting up automated customer support, tutoring, processing documents, structuring lots of complex information, generally accurate knowledge on many topics, acting as an editor for your writing, and lots more.

        It’s a rapidly advancing pioneer technology, like computers were in the 90s, so every 6 months to a year brings a new breakthrough in overall intelligence or a new ability. The new LLM models can now process images and audio as well as text.

        The problem for OpenAI is that they have serious competitors who will absolutely show up to eat their lunch if they sink as a company: Facebook/Meta with their Llama models, Mistral AI with all their models, Alibaba with Qwen, and some good smaller competition too, like the OpenHermes team. All of these big tech companies have open-sourced some models, so you can tinker with and fine-tune them at home, while OpenAI remains closed source, which is ironic given the company name. Most of these AI companies also offer cloud access to their models at very competitive pricing, especially Mistral.

        The people who say AI is a trendy, useless fad don’t know what they’re talking about or are just upset at AI. I’m part of the local LLM community and have been playing around with open models for months, pushing my computer’s hardware to its limits. It’s very cool seeing just how smart they really are, and what a computer that simulates human thought processes and knows a little bit of everything can actually do to help me in daily life.

        Terence Tao, the superstar genius mathematician, described the newest high-end model from OpenAI as improving from an “incompetent graduate” to a “mediocre graduate”, which essentially means AI is now smarter than the average person in many regards.

        This month several competitor LLM models were released that, while much smaller than OpenAI’s o1, somehow beat or equaled that big OpenAI model on many benchmarks.

        Neural networks are here, and they are only going to get better. We’re in for a wild ride.

        • Stegget@lemmy.world · 2 months ago

          My issue is that I have no reason to think AI will be used to improve my life. All I see is a tool that will rip, rend and tear through the tenuous social fabric we’re trying to collectively hold on to.

          • Smokeydope@lemmy.world · 2 months ago (edited)

            A tool is a tool. It has no say in how it’s used. AI is no different from the computer software you use to browse the internet or do other digital tasks.

            When it’s used badly, as an outlet for escapism or a substitute for social connection, it can have bad consequences for your personal life.

            It’s at its best as a tool to help reason through a tough task, or as a step in a creative process. As on-demand assistance for the disabled. Or as a non-judgemental conversational partner the neurodivergent and emotionally traumatized can open up to. Or to help a super genius rubber-duck their novel ideas and work through complex thought processes. It can improve people’s lives for the better if applied to the right use cases.

            It’s about how you choose to interact with it in your personal life, and how society, businesses, and your governing bodies choose to use it in their own processes. And believe me, they will find ways to use it.

            I think comparing LLMs to computers in the 90s is accurate. Right now only nerds, professionals, and industry/business/military see their potential. As the tech gets figured out, utility improves, and LLM desktops start getting sold as consumer-grade appliances, maybe the attitude will change.

  • Kyrgizion@lemmy.world · 2 months ago

    Canceled my sub as a means of protest. I used it for research and testing purposes, and $20 wasn’t that big of a deal. But I will not knowingly support this asshole if whatever his company produces isn’t going to benefit anyone other than him and his cronies. Voting with our wallets may be the very last vestige of freedom we have left, since money equals speech.

    I hope he gets raped by an irate Roomba with a broomstick.

  • barnaclebutt@lemmy.world · 2 months ago

    I’m sure they were dead weight. I trust OpenAI completely, and all tech gurus named Sam. BTW, what happened to that crypto guy? He seemed so nice.

  • celsiustimeline@lemmy.dbzer0.com · 2 months ago (edited)

    Whoops. We made the most expensive product ever designed, paid for entirely by venture capital seed funding. Wanna pay per ChatGPT query, now that you’ve been using it free for 1.5 years with barely usable results? What a clown. Aside from the obvious abuse that will occur with image, video, and audio generating models, these other glorified chatbots are complete AIDS.

    • flo@infosec.pub · 2 months ago

      barely usable results

      Using chatgpt and copilot has been a huge productivity boost for me, so your comment surprised me. Perhaps its usefulness varies across fields. May I ask what kind of tasks you have tried chatgpt for, where it’s been unhelpful?

      • wholookshere@lemmy.blahaj.zone · 2 months ago (edited)

        Literally anything that requires knowing facts to inform the writing. That’s something LLMs are incapable of doing right now.

        Just look up how many R’s are in “strawberry” and see how ChatGPT gets it wrong.
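The strawberry test is a good litmus precisely because the task is trivial for ordinary code; an LLM operates on sub-word tokens, so individual letters are largely invisible to it. The ground truth is a one-liner:

```python
# Counting letters is trivial for ordinary code. LLMs often miss the
# third "r" because tokenization hides individual characters from the model.
word = "strawberry"
r_count = word.count("r")
print(r_count)  # 3
```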

    • assassin_aragorn@lemmy.world · 2 months ago

      paid for entirely by venture capital seed funding.

      And stealing from other people’s works. Don’t forget that part