• EnderMB@lemmy.world
    8 months ago

    As a software engineer who works in AI, the “breakthrough” we’ve made is in proving that LLMs can perform well at scale, and that hallucinations aren’t as big a problem as initially thought. Most tech companies didn’t do what OpenAI did because hallucinations are brand-damaging, whereas OpenAI didn’t give a fuck. In the next few years, most existing AI systems will be built on LLMs, and probably be about as good as ChatGPT.

    We might make more progress now that researchers and academics see the value in LLMs, but my weakly held opinion is that it’s mostly surrounded by hype.

    We’re nowhere near what most would call AGI, although to be blunt, I don’t think the average person on here could truly tell you what AGI looks like without disagreeing with AI researchers.