  • WhatSay@slrpnk.net · +8 · 46 minutes ago

    People like this should not be government representatives. But I guess that’s the trend lately

  • sudoer777@lemmy.ml · +6 · 2 hours ago

    Sooner or later we’re gonna have to bypass the Great Firewall of America by VPNing into China

  • ERROR: Earth.exe has crashed@lemmy.dbzer0.com · +25 / -1 · edited 4 hours ago

    Print the code in a book and mail it.

    Surely, they cannot ban books… right? Right?

    Edit: Wait wait wait… the Comstock Act says mail cannot be used for anything that can be used for abortion. And an AI can theoretically be used to get instructions for an abortion. BOOM, it’s banned! 👀

  • hendrik@palaver.p3x.de · +121 · edited 5 hours ago

    Wow, bold choice to ban the import of technology and knowledge. Usually governments worry about exports, so their technology doesn’t fall into the wrong hands.

    Btw, how is the Nvidia stock price doing?

    • forrgott@lemm.ee · +23 · 3 hours ago

      Right? Like, seriously, we all know somebody is just butthurt because their stock options tanked.

      Oh, wait, I’m sorry! That was very unpatriotic of me, wasn’t it? I mean, we all know that winning an election guarantees being heavily rewarded with insider trading, right? It’s not like they’re there to represent constituents or anything; I mean, doesn’t everyone know we’re a republic, not a democracy?!

      Sigh…

  • theunknownmuncher@lemmy.world · +43 · edited 5 hours ago

    Hawley’s statement called DeepSeek “a data-harvesting, low-cost AI model that sparked international concern and sent American technology stocks plummeting.”

    data-harvesting

    ???

    It runs offline… using open-source software that provably does not collect or transmit any data…

    It is low-cost and out-competes American technology, though, true

      • cyd@lemmy.world · +7 · edited 4 hours ago

        Base models are general purpose language models, mainly useful for AI researchers and people who want to build on top of them.

        Instruct or chat models are chatbots. They are made by fine-tuning base models.

        The V3 models linked by OP are Deepseek’s non-reasoning models, similar to Claude or GPT-4o. These are the “normal” chatbots that reply with whatever comes to mind. Deepseek also has a reasoning model, R1. Such models take time to “think” before supplying their final answer; they tend to perform better on things like math problems, at the cost of being slower to produce an answer.

        It should be mentioned that you probably won’t be able to run these models yourself unless you have a data center style rig with 4-5 GPUs. The Deepseek V3 and R1 models are chonky beasts. There are smaller “distilled” forms of R1 that are possible to run locally, though.
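
        To make the “distilled” option concrete, here is a minimal sketch using the ollama Python client; the deepseek-r1:7b tag and the prompt are illustrative assumptions (pick whichever distilled variant fits your hardware), not something from the comment above.

        ```python
        # Minimal local chat with a distilled R1 variant via the ollama Python client.
        # Assumes the Ollama daemon is installed and running, and that the
        # "deepseek-r1:7b" distill tag is available in its model library.
        import ollama

        ollama.pull("deepseek-r1:7b")  # downloads the model on first use

        response = ollama.chat(
            model="deepseek-r1:7b",
            messages=[{"role": "user", "content": "In one paragraph, what is model distillation?"}],
        )
        print(response["message"]["content"])
        ```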

      • thefartographer@lemm.ee · +4 / -5 · 4 hours ago

        r1 is lightweight and optimized for local environments on a home PC. It’s supposed to be pretty good at programming and logic and kinda awkward at conversation.

        v3 is powerful and meant to run on cloud servers. It’s supposed to make for some pretty convincing conversations.

        • Pennomi@lemmy.world · +5 · 4 hours ago

          R1 isn’t really runnable with a home rig. You might be able to run a distilled version of the model though!

          • thefartographer@lemm.ee · +2 / -1 · 2 hours ago

            You’re absolutely right. I wasn’t trying to get that in-depth, which is why I said “lightweight and optimized” instead of “when using a distilled version”, since that raises more questions than it answers. But I probably overgeneralized by making it a blanket statement like that.

            • Pennomi@lemmy.world · +4 / -1 · 3 hours ago

              That likely is one of the distilled versions I’m talking about. R1 is 720 GB and wouldn’t even fit into memory on a normal computer. Heck, even the 1.58-bit quant is 131 GB, which is outside the range of a normal desktop PC.

              But I’m sure you know what version you’re running better than I do, so I’m not going to bother guessing.
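
              For anyone wondering where numbers like that come from, here is a quick back-of-the-envelope sketch (weights only, ignoring KV cache and the fact that real quants mix precisions; the 671B figure is R1’s published total parameter count):

              ```python
              def approx_weights_gb(params: float, bits_per_weight: float) -> float:
                  """Rough weight-only footprint in GB (10^9 bytes): params * bits / 8."""
                  return params * bits_per_weight / 8 / 1e9

              PARAMS = 671e9  # DeepSeek R1's total parameter count (mixture-of-experts)

              print(approx_weights_gb(PARAMS, 8))     # ~671 GB at 8-bit, in the ballpark of the ~720 GB figure
              print(approx_weights_gb(PARAMS, 1.58))  # ~132 GB, close to the ~131 GB 1.58-bit quant
              ```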

  • Cantaloupe877@lemmy.world · +6 · edited 3 hours ago

    I’d get LM Studio or Ollama and download whichever R1 variant your system can handle. If you’re on Linux, Alpaca is on Flathub; you can get it and it’ll download models and run them for you, including Deepseek R1.
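
    Both LM Studio and Ollama also expose an OpenAI-compatible local HTTP endpoint, so once a model is downloaded you can script against it. The port, model tag, and prompt below are assumptions about a typical default setup, not anything specific to the comment above:

    ```python
    # Talk to a locally served distilled R1 through the OpenAI-compatible API
    # that Ollama (default port 11434) and LM Studio (default port 1234) expose.
    from openai import OpenAI

    client = OpenAI(base_url="http://localhost:11434/v1", api_key="not-needed-locally")

    reply = client.chat.completions.create(
        model="deepseek-r1:7b",  # whichever distilled variant you pulled
        messages=[{"role": "user", "content": "Summarize what you are in one sentence."}],
    )
    print(reply.choices[0].message.content)
    ```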