Mistral NeMo 12B is the new AI model presented this week by Nvidia and Mistral AI. “We are fortunate to collaborate with the NVIDIA team, leveraging their top-tier hardware and software,” said Guillaume Lample, cofounder and chief scientist of Mistral AI. “Together, we have developed a model with unprecedented accuracy, flexibility, high-efficiency and enterprise-grade support and security thanks to NVIDIA AI Enterprise deployment.”

The promise of the new AI model is significant. Whereas previous LLMs were tied to data centers, Mistral NeMo 12B is meant to run on workstations, and to do so without sacrificing performance. At least, that's the promise.

  • bamboo@lemm.ee · 3 months ago

    There are already lots of models in the 7B and 14B ranges that are quite capable and run on commodity hardware. What makes this one so special?

      • bamboo@lemm.ee · 3 months ago

        Oh wow. Yeah, a large context size is a significant improvement; doesn’t seem like the article included that detail.

    • MudMan@fedia.io · 3 months ago

      Yeah, it seems more interesting to reverse engineer why they chose this line of marketing. They are clearly misrepresenting the challenge and cost of running an LLM locally, so… why?