OpenAI has released a new benchmark for testing AI systems in healthcare. Called HealthBench, it’s designed to evaluate how well language models handle realistic medical conversations. According to OpenAI, its latest models outperform doctors on the test.
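
For context on what evaluating "realistic medical conversations" can look like, here is a minimal sketch of rubric-style scoring, assuming a setup where each conversation has weighted criteria and a grader checks which ones a response meets. The Criterion fields, keywords, and keyword-matching grader are hypothetical stand-ins for illustration, not OpenAI's actual rubrics or grading code.

```python
# Minimal sketch of rubric-style scoring in the spirit of what OpenAI describes
# for HealthBench: each conversation comes with weighted criteria, and a grader
# decides which criteria a model's reply satisfies. The criteria, keywords, and
# keyword-matching grader below are hypothetical stand-ins, not OpenAI's code.

from dataclasses import dataclass


@dataclass
class Criterion:
    description: str  # what the ideal response should (or should not) do
    points: int       # positive for desired behavior, negative for harmful behavior
    keyword: str      # toy cue used by the stand-in grader below


def grade_criterion(response: str, criterion: Criterion) -> bool:
    """Stand-in for a model-based grader: a naive keyword check."""
    return criterion.keyword in response.lower()


def score_response(response: str, rubric: list[Criterion]) -> float:
    """Points earned over the maximum achievable positive points, clamped at 0."""
    earned = sum(c.points for c in rubric if grade_criterion(response, c))
    maximum = sum(c.points for c in rubric if c.points > 0)
    return max(0.0, earned / maximum) if maximum else 0.0


rubric = [
    Criterion("recommends in-person medical care", 5, "see a doctor"),
    Criterion("asks about symptom duration", 3, "how long"),
    Criterion("states a definitive diagnosis", -5, "you definitely have"),
]
reply = "You should see a doctor in person. How long have the symptoms lasted?"
print(f"rubric score: {score_response(reply, rubric):.2f}")  # 1.00 for this toy reply
```

The real benchmark reportedly uses a model-based grader rather than keyword matching; the sketch is only meant to illustrate the structure of weighted rubric criteria with positive and negative point values.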

  • orclev@lemmy.world · 19 points · 4 hours ago

    Wake me up when someone besides OpenAI says they’re the best at something. When a company releases a benchmark it designed itself, and its own tool, generally regarded as not very good, suddenly comes out on top, that’s not news. At best it’s PR, at worst propaganda. This reeks of “we investigated ourselves and found we did nothing wrong”.

  • Bezier@suppo.fi · 34 points, 1 down · edited · 5 hours ago

    Tl;dr: After performing poorly on existing benchmarks, OpenAI created its own. OpenAI products perform much better on the OpenAI benchmark.

    • Opinionhaver@feddit.uk · 1 point · 2 hours ago

      The bar exam wasn’t created by OpenAI, yet the now-outdated GPT-4 model still ranked in the 90th percentile on it.

  • taladar@sh.itjust.works · 7 points, 1 down · 6 hours ago

    So they created a test so broken and warped that no actual professional can understand it but their AI performs well on it?

      • Kalvin@lemmy.world · 3 points, 1 down · 6 hours ago

        Those are analytical AIs, right? They’re more procedural, and the data used to train them was given with consent. I tried Ada; it didn’t give me an official diagnosis but rather helped me prepare to talk to a doctor.