• gravitas_deficiency@sh.itjust.works
    2 months ago

    Unless they’ve come up with some truly novel architecture that makes training and/or inference MUCH more efficient… why? The generational creep is obviously unsustainable, and building current-style GPUs for ML applications is a recipe for obsolescence for a hardware manufacturer these days.

  • C1pher@lemmy.world
    2 months ago

    Why not, now that they got $5B from Nvidia? Let’s start pissing money away, or get even more controlled by a competitor.