“The real benchmark is: the world growing at 10 percent,” he added. “Suddenly productivity goes up and the economy is growing at a faster rate. When that happens, we’ll be fine as an industry.”

Needless to say, we haven’t seen anything like that yet. OpenAI’s top AI agent — the tech that people like OpenAI CEO Sam Altman say is poised to upend the economy — still moves at a snail’s pace and requires constant supervision.

  • funkless_eck@sh.itjust.works · 5 hours ago

    I’ve been working on an internal project for my job - a quarterly report on the most bleeding-edge use cases of AI - and the stuff achieved is genuinely really impressive.

    So why is the AI at the top end amazing yet everything we use is a piece of literal shit?

    The answer is the chatbot. If you have the technical nous to program machine learning tools, it can accomplish truly stunning things at speeds not seen before.

    If you don’t know how to do - for example - a Fourier transform, you lack the skills to use the tools effectively. That’s no one’s fault; not everyone needs that knowledge, but it does explain the gap between promise and delivery. It can only help you do what you already know how to do, faster.
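    To make the Fourier-transform point concrete: a minimal sketch (using NumPy; the sample rate and 50 Hz test tone are made up for illustration) of the kind of check someone with that background can do - if you know what the transform should return, you can tell whether a tool's output makes sense.

```python
# Hypothetical illustration: recover the dominant frequency of a signal
# with an FFT. Knowing what the transform means is what lets you
# sanity-check any AI-assisted result against it.
import numpy as np

sample_rate = 1000                     # samples per second
t = np.arange(0, 1, 1 / sample_rate)   # one second of samples
signal = np.sin(2 * np.pi * 50 * t)    # a pure 50 Hz tone

spectrum = np.fft.rfft(signal)
freqs = np.fft.rfftfreq(len(signal), d=1 / sample_rate)
dominant = freqs[np.argmax(np.abs(spectrum))]
print(dominant)  # 50.0
```

    Someone who has never seen a frequency spectrum has no way to judge whether that answer is right, which is the gap being described.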

    Same for coding: if you understand what your code does, it’s a helpful tool for unsticking part of a problem, but it can’t write the whole thing from scratch.

    • earphone843@sh.itjust.works · 4 hours ago (edited)

      For coding, it’s also useful for doing the menial grunt work that’s easy but just takes time.

      You’re not going to replace a senior dev with it, of course, but it’s a great tool.

      My previous employer was using AI for intelligent document processing, and the results were absolutely amazing. They did sink a few million dollars into getting the LLM fine-tuned properly, though.

    • raspberriesareyummy@lemmy.world · 2 hours ago

      So why is the AI at the top end amazing yet everything we use is a piece of literal shit?

      The fact that you call an LLM “AI” shows how unqualified you are to comment on the “successes”.

      • Lifter@discuss.tchncs.de · 2 hours ago

        Not this again… LLM is a subset of ML, which is a subset of AI.

        AI is very very broad and all of ML fits into it.

        • SoftestSapphic@lemmy.world · 1 hour ago

          A Large Language Model is not a Machine Learning program.

          An LLM is a program that translates human speech into sentiment instead of trying to achieve literal translations. It’s a layer that sits on other tech to make it easier for a program to talk with a person. It is not intelligent; an LLM does not learn.

          You really don’t know what you are talking about. A perfect example of how obfuscating tech to make it sound cool invites any random person to have an opinion on “AI”

          When people say AI is not real or intelligent, they are speaking from a computer scientist’s perspective rather than trying to make sense of something they don’t understand from scratch.