I’m usually the one saying “AI is already as good as it’s gonna get, for a long while.”

This article, in contrast, is full of quotes from the folks building the next generation of AI, saying the same thing.

  • WalnutLum@lemmy.ml · 29 days ago

    Seeing as how the full unquantized FP16 for Llama 3.1 405B requires around a terabyte of VRAM (16 bits per parameter + context), I’d say way more than several.
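For a rough sanity check of that number, here's a quick back-of-the-envelope in Python. It assumes 2 bytes per FP16 parameter and counts only the weights, ignoring the KV cache and activations, which add more on top:

```python
# Rough VRAM estimate for holding a model's FP16 weights in memory.
# Weights only: KV cache (context) and activations come on top of this.

def fp16_weight_bytes(n_params: float) -> float:
    """Bytes needed just for the weights at 16 bits (2 bytes) per parameter."""
    return n_params * 2

params = 405e9  # Llama 3.1 405B
gib = fp16_weight_bytes(params) / 1024**3
print(f"{gib:,.0f} GiB for weights alone")  # ~754 GiB (~810 GB)
```

So the weights alone are roughly 810 GB, and once you add context you land near the terabyte figure quoted above.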