• radix@lemmy.world · +59/-5 · 1 year ago

      It was “Blockchain” in 2017. “NFT” in 2020. “AI” in 2023. In a few years, there will be a new buzzword that companies throw a bunch of money at in hopes of being on the leading edge of the ‘next big thing.’

      • DreamButt@lemmy.world · +22/-2 · 1 year ago

        While I appreciate the sentiment, I think the key difference here is that ML is actually helping people do their work better or more easily, while Blockchain and NFTs mostly amounted to autofellatio: those technologies are only helpful if you’re interested in the technologies themselves. ML, by contrast, has clearly been helpful for all kinds of professions, not just brogrammers.

        • astropenguin5@lemmy.world · +2 · 1 year ago

          I’m not sure if AR has already had its big spike yet or not. It already basically exists; it just isn’t as good or useful as VR yet. As it gets better we may see it become a buzzword again, but AR/VR has already kinda had its big moment and is becoming more and more mainstream.

  • borkcorkedforks@kbin.social · +101 · 1 year ago

    To me it looks like an overestimation of the tech’s capabilities. It’s the same kind of thinking that led to lawyers submitting fake cases as supporting citations in court. The current tech can be useful, but its output has to be verified and generally tweaked a bit to be good enough. It certainly has room for improvement in quality, and in just not lying. Real-world use also raises copyright questions about what the training data was. Applying it to something creative is questionable and more or less yields uninspired remixes.

    Also the whole graphic is kinda suspect to me when “Blockchain engineers” is a job category and it’s produced by an org working on AI.

    • Betty White In HD@lemmy.world · +35/-1 · 1 year ago

      It’s fine, I’m a “pile driver operator” and my wife is a “tire repairer and changer”, so we’re in “low exposure”.

      Who wrote this shit?

      • bfr0@lemmy.world · +11/-1 · 1 year ago

        I love how specific the labor jobs on the left are and the right side is like… All mathematicians.

        • macrocephalic@lemmy.world · +7 · 1 year ago

          Which is funny, because the one thing LLM AI is not good at is coming up with new ideas and confirming they’re correct. It’s good at producing language (or other knowledge objects) based on existing input.

      • tarjeezy@lemmy.ca · +9 · 1 year ago

        Maybe they already replaced their Writers, Authors, and Proofreaders, and generated the whole thing using ChatGPT.

  • Calcharger@kbin.social · +88/-1 · 1 year ago

    Ah yes, because programming and critical thinking do not go hand in hand. We are going to have so many software vulnerabilities in the coming years. Better learn to hack, ladies and lads.

    • stravanasu@lemmy.ca · +37/-3 · edited · 1 year ago

    This image/report doesn’t make much sense – it was probably generated by chatGPT itself.

    1. “What makes your job exposed to GPT?” – OK I expect a list of possible answers:
      • “Low wages”: OK, having a low wage makes my job exposed to GPT.
      • “Manufacturing”: OK, manufacturing makes my job exposed to GPT. …No wait, what does that mean?? You mean if my job is about manufacturing, then it’s exposed to GPT? OK but then shouldn’t this be listed under the next question, “What jobs are exposed to GPT?”?
      • “Jobs requiring low formal education”: what?! The question was “what makes your job exposed to GPT?”. From this answer I get that “jobs requiring low formal education make my job exposed to GPT”. Or I get that who/whatever wrote this knows no syntax or semantics. OK, sorry, you meant “If your job requires low formal education, then it’s exposed to GPT”. But then shouldn’t this answer also be listed under the next question??

      

    2. “What jobs are exposed to GPT?”
      • “Athletes”. Well, “athletes” semantically speaking is not a job; maybe “athletics” is a job. But who gives a shirt about semantics? There’s chatGPT today, after all.
      • The same with the rest. “Stonemasonry” is a job, “stonemasons” are the people who do that job. At least the question could have been “Which job categories are exposed to GPT?”.
      • “Pile driver operators”: this very specific job category is thankfully Low Exposure. “What if I’m a pavement operator instead?” – sorry, you’re out of luck then.
      • “High exposure: Mathematicians”. Mmm… wait, wait. Didn’t you say that “Science skills” and “Critical thinking skills” were “Low Exposure”, in the previous question?

      

    Icanhazcheezeburger? 🤣

    (Just to be clear, I’m not making fun of people who do any of the specialized, difficult, and often risky jobs mentioned above. I’m making fun of the fact that the infographic is so randomly and inexplicably specific in some points.)

    • Urist@lemmy.ml · +16 · 1 year ago

      I’ve seen GPT struggling with pretty basic maths and “abstract” tasks such as making the letters add up in an anagram. Math requires insight that a language model cannot possess. I don’t really get why people like infographics so much. The format usually just distracts from the data presented, which is convenient given that the data is usually garbage too.
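
      A minimal sketch of the kind of exact letter bookkeeping involved, which is trivial for code but easy for a token predictor to fumble (names here are purely illustrative):

        # Checking an anagram is simple counting for code, yet a language
        # model predicting tokens often gets the letter counts wrong.
        from collections import Counter

        def is_anagram(a: str, b: str) -> bool:
            """True if b uses exactly the letters of a (case/spacing ignored)."""
            letters = lambda s: Counter(c for c in s.lower() if c.isalpha())
            return letters(a) == letters(b)

        print(is_anagram("listen", "silent"))    # True
        print(is_anagram("listener", "silent"))  # False: counts differ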

      • stravanasu@lemmy.ca · +4 · 1 year ago

        Math requires insight that a language model cannot possess

        Amen to that! Good maths & science teachers have struggled for decades (if not centuries) so that students understand what they’re doing and don’t simply give answers based on some words or symbols they see in questions [there are also bad teachers who promote exactly this], because on closer inspection such answers always collapse. And now comes chatGPT, which does exactly that – and collapses in the same way – and gets glorified.

        Amen to what you say on infographic content as well 😂

  • Candelestine@lemmy.world · +53/-31 · 1 year ago

    It’s not bad.

    There’s one thing that people tend to neglect that I like to remember: it’s going to be a while yet before an AI can walk up to your door, knock, come in, find the specific nature of a plumbing/electrical/HVAC or whatever problem, and then diagnose and fix it. And then get safely home without getting hit by a truck or vandalized by bored teenagers, or both.

    That’s such a complex suite of different “problems” that we’re going to need nothing less than a general AI to navigate them all. Thus, among the last jobs to be replaced will be the various kinds of repair professionals who do house calls. The constant novelty of the career, where every call is its own unique situation, is a nightmare for a current-method AI.

    • jmp242@sopuli.xyz · +24 · 1 year ago

      I kind of feel like it’s a bit overwrought, and not supported by current tech anyway. You could try to predict where the tech will go, but I don’t think that’s possible to do in a reasonable way over a useful time-span for this.

      Let’s look at the proposed affected jobs (I’ll leave out the ones I just don’t have enough knowledge about to even hazard a guess):

      Interpreters + Translators: I haven’t tried GPT for this, but I imagine it’s not much more disruptive than Google Translate. For people and situations where machine translation is good enough, this has been happening for quite a while, and I have my doubts that GPT will change the trajectory of that field. Translation seems like something you can’t “edit after the fact”: you have to do the whole translation anyway to see if the machine translation is right or missing important non-literal parts.
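
      As a rough illustration of that “do the whole translation anyway” point, here’s a sketch of round-trip (“back”) translation as a crude quality signal; the Helsinki-NLP model names are my assumption, not anything from the graphic, and a human still has to judge the result:

        # Sketch: back-translation as a crude machine-translation check.
        # Model names are assumptions; any MT model pair would do.
        from transformers import pipeline

        en_de = pipeline("translation", model="Helsinki-NLP/opus-mt-en-de")
        de_en = pipeline("translation", model="Helsinki-NLP/opus-mt-de-en")

        original = "The spirit is willing, but the flesh is weak."
        german = en_de(original)[0]["translation_text"]
        back = de_en(german)[0]["translation_text"]

        # If the round trip drifts, non-literal meaning was likely lost --
        # exactly the thing a human translator has to catch.
        print(german)
        print(back)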

      Writers and Authors: I can see this speeding their work up, and enabling people who might have story ideas and be a decent editor but not a good first draft writer to become authors. However, writers have been dealing with both lowered standards for technical writing and content glut for many years - I don’t think this changes that appreciably.

      Public Relations Specialists: I feel like this massively undervalues the psychology and experience involved in PR. It might well replace press-release writing, but I’d bet there’s more to the job than is obvious to everyone.

      Tax Preparers: If you’re doing fine with TurboTax - you’ve been doing this for decades now. If you can’t solve it with existing traditional tax software, it’s often because you just aren’t sure about vague tax rules, or complex tax rules. And you usually want someone else to take on some liability and ability to represent you if you’re audited. I don’t see how GPT changes this fundamentally.

      Mathematicians: Really? It’s horrible at math.

      Proofreaders and Copy Markers: Also really? I feel like for a while at least there’s going to be more proofreading of the output of GPT for factual content and style.

      • orclev@lemmy.world · +19 · 1 year ago

        It’s mostly BS. The only people who really need to worry are in jobs where doing things accurately doesn’t really matter and where the work is very time-consuming or there’s an absolute ton of it (and it also doesn’t involve physically doing anything).

        The types of jobs impacted would be tier-one call center staff whose primary role is to function as a filter for tier two. Technically their job could be replaced with a pre-recorded message already, but people tend to ignore those, so they’re less effective than having a person just read a script to the caller. Other impacted jobs would be movie extras or very cheap actors, where a wooden or slightly off performance can be ignored.

        Lastly some jobs will be changed but not replaced. A lot of the initial work of things like concept artists, editors, certain kinds of script writers, and analysts will be generated and then they’ll spend their time refining or fixing that initial copy. Ironically this will most likely lead to even more demand for those kinds of jobs as finding and fixing mistakes in generated content can be more time consuming than doing it right the first time.

        The big elephant in the room, of course, is that it’s going to be very expensive to run these systems, and you’re going to need a whole group of high-skill specialists to maintain and operate them. Just like with self-driving trucks, you’re replacing a large group of low-skill, cheap workers with a medium-sized group of high-skill, very expensive workers. Ultimately this will be far more expensive for companies, but for those that can afford it, the increased volumes will offset the increased costs.

        • effingjoe@kbin.social · +6/-1 · 1 year ago

          This feels like wishful thinking. Any automated system (cars, LLMs, etc.) only needs to be better than a human doing that job. Your example, for, um, example, ignores that self-driving trucks don’t need to take sleep breaks, or bathroom breaks, or spend time with their families, etc.

          Assuming that this is the bottom of the curve for this LLM technology and that we still have a lot of expansion in the tech coming in a relatively short amount of time, I would guess that any job making art that is “work for hire” will cease to exist, and I imagine programming is going to take a pretty big hit in available jobs. I don’t think you’ll be able to get rid of human programmers altogether, but you’ll need way fewer of them.

          • orclev@lemmy.world · +5/-2 · 1 year ago

            Any automated system (cars, LLMs, etc.) only needs to be better than a human doing that job. Your example, for, um, example, ignores that self-driving trucks don’t need to take sleep breaks, or bathroom breaks, or spend time with their families, etc.

            I’m not ignoring that, but you’re ignoring that these systems have their own associated costs. A self-driving truck might not need bathroom breaks, but it does need regular maintenance, which, given the increased complexity of such a system, is going to be significantly more expensive than for a normal truck and require more skilled labor. That’s why I said it’s more expensive, but large companies can make it up in volume: the extra expense only makes sense if you can take advantage of, e.g., the increased transport capacity.

            Assuming that this is the bottom of the curve for this LLM technology and that we still have a lot of expansion in the tech coming in a relatively short amount of time, I would guess that any job making art that is “work for hire” will cease to exist, and I imagine programming is going to take a pretty big hit in available jobs. I don’t think you’ll be able to get rid of human programmers altogether, but you’ll need way fewer of them.

            You’re assuming that LLMs can ever be made accurate. I think you might be able to make them somewhat more accurate, but you’ll never be able to trust their output implicitly. You will always need someone reviewing and fixing what they produce. For something entirely subjective like art that’s probably acceptable, but not for anything that requires any amount of accuracy.

            As a programmer I am absolutely not worried in the slightest that LLMs are coming for my job. I’ve seen LLM-produced programs; they’re an absolute trash fire, and most of them won’t even compile, let alone produce correct output. LLMs might be coming for really, really bad programmers’ jobs, but anyone with even a shred of talent has nothing to worry about.

            There’s a famous saying out there about programming that goes:

            You can write a program that’s so simple there are obviously no problems, or a program that’s so complicated there are no obvious problems.

            LLMs are very much an exercise in the latter, not the former. I’m sure there will soon be a bunch of jobs for programmers checking and fixing LLM-generated code, but you couldn’t pay me to do that job. That’s going to be absolutely miserable work and way harder than just writing the code yourself in the first place. Ultimately companies are going to figure out it’s cheaper to just skip the LLM in the first place, and then the whole thing will be dead. One thing’s for sure, though: you won’t need fewer programmers, you’ll need more of them.

            • effingjoe@kbin.social · +5 · 1 year ago

              That’s why I said it’s more expensive, but large companies can make it up in volume: the extra expense only makes sense if you can take advantage of, e.g., the increased transport capacity.

              Isn’t this functionally the same thing? What happens to smaller companies in this hypothetical? Are you not assuming that they get pushed out of the market shortly thereafter?

              You’re assuming that LLMs can ever be made accurate. I think you might be able to make them somewhat more accurate, but you’ll never be able to trust their output implicitly.

              I am assuming this. I am assuming that we’re at the bottom of this technology’s sigmoid curve and there is going to be a ton of growth in a relatively short amount of time. I guess we’ll have to wait and see which one of us has the better prediction.

              As a programmer I am absolutely not worried in the slightest that LLMs are coming for my job. I’ve seen LLM-produced programs; they’re an absolute trash fire, and most of them won’t even compile, let alone produce correct output. LLMs might be coming for really, really bad programmers’ jobs, but anyone with even a shred of talent has nothing to worry about.

              You have described the state of LLMs right now. Programming languages seem like a perfect fit for an LLM; they’re extremely structured and meticulously (well, mostly) defined. The concepts and algorithms used are not overly complex for an LLM, and there doesn’t need to be much in the way of novel creativity to create solutions for standard use cases. The biggest difficulty I’ve seen is just getting the prompting clear enough. I think a majority of the software engineering field is on the chopping block, just like the “art for hire” crowd. People pushing the limits of the field will be safe, but that’s a catch-22, isn’t it? If low-level entry is impossible, how does one get to be a high-level professional?

              And even if we take your [implied] stance that this is the top of the S-curve and LLMs aren’t going to get much better, it will still be a useful tool for human programmers that increases productivity and reduces the number of available jobs.

      • skillissuer@lemmy.world · +8 · 1 year ago

        Re: proofreading, this could maybe work for technical writing by ESL authors, but I won’t trust chatGPT with confidential technical data. So we’re back to square one.

        • effingjoe@kbin.social · +3 · 1 year ago

          You shouldn’t trust ChatGPT with that, but your company could definitely spin up its own LLM, and then we’re back at the same problem.
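
          A minimal sketch of what “spin up their own” might look like with an open-weights model run in-house, so confidential text never leaves the building (the model name is an assumption, not a recommendation):

            # Sketch: in-house proofreading with a locally run open-weights
            # model. Model choice is an assumption for illustration only.
            from transformers import pipeline

            proofread = pipeline(
                "text-generation",
                model="mistralai/Mistral-7B-Instruct-v0.2",
            )

            draft = "The experiment were conducted over three weeks."
            prompt = ("Correct the grammar of this sentence, "
                      f"changing nothing else:\n{draft}")
            out = proofread(prompt, max_new_tokens=60, do_sample=False)
            print(out[0]["generated_text"])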

      • LibertyLizard@slrpnk.net · +6/-2 · 1 year ago

        I think you’re underestimating the impact here. It obviously won’t replace all of the jobs in these fields, but even shortening or eliminating enough tasks will have impacts on employment levels. If fewer people can do the same amount of work, some of those people will be laid off.

        • Shurimal@kbin.social · +6/-2 · 1 year ago

          Hasn’t this always been the case? One excavator operator can dig a house-foundation hole way faster than 10 guys with shovels; one truck driver can deliver more cargo than a caravan of horse-drawn carriages; one electronic computer can solve math problems way faster than a room full of humans doing pencil-and-paper calculations; e-mail is faster and carries way more data than the telegraph. AI is just the next step on this path. AI is not the problem; our neoliberal capitalist economic system that seeks unlimited growth of profit is.

          • LibertyLizard@slrpnk.net · +1 · 1 year ago

            I largely agree that the system is the problem and not AI. But we live in that system and will need to prepare for the impacts that will unfold within it.

        • jmp242@sopuli.xyz · +3 · 1 year ago

          Maybe my phrasing wasn’t clear, but in the areas where I said I didn’t see it changing the trajectory much, I meant that (as I mentioned with writers) those fields already have lots and lots of competition, and only a very small percentage of the people who’d like to do the job can actually make a living at it. The odds are already like winning the lottery; I just don’t see AI making it like winning the Powerball (the multi-state lottery) as a substantive difference to people trying to “make it” in that field. If I’m already at 1 in 10 million, my decision-making isn’t going to be much affected if AI makes it 1 in 20 million. I don’t think people make decisions at that level of granularity.

          And as for government interventions: do we subsidize writers now? If not, I just don’t see how it would make sense politically, economically, or even philosophically to start doing so because of AI.

      • garyyo@lemmy.world · +3 · 1 year ago

        One thing to note is that making an industry more efficient (like translation, which GPT is really good at – much better than Google Translate, though not necessarily better than existing professional tools) comes with a decrease in the number of jobs. Tech doesn’t have to eliminate the human portion: if it makes even one human twice as efficient at their job, that’s half the humans you need doing that job for the same amount of work output.

        That being said this is not a great infographic for this topic.

      • assassin_aragorn@lemmy.world · +1 · 1 year ago

        I think the utility for technical papers and documents may be a bit overstated as well. There are usually templates for these documents where possible; if not, the topic is broad enough that I don’t think you could provide a suitable prompt to generate meaningful text.

        ’Course, that’s just my 2 cents as someone approaching this highly skeptically. We should see how it performs in these areas and test it out. It’s just premature to make employment or policy decisions, imo.

    • dragnucs@lemmy.ml · +10/-1 · 1 year ago

      For me it is hard to understand why blockchain engineers would be highly impacted by GPT.

      • ShittyBeatlesFCPres@lemmy.world · +16/-1 · 1 year ago

        Because blockchain is useless, and no one is investing in that space anymore now that GPT is the new shiny object. Kind of hard to become a blockchain engineer when interest rates aren’t 0%, investors are doing due diligence, and the marketing buzzword train has left the station. Even clueless VCs and MBAs now know blockchain was all hype around selling libertarian fantasies to people who don’t read books.

          • ShittyBeatlesFCPres@lemmy.world · +4/-1 · 1 year ago

            It’s pretty useless. It’s proven too slow and inefficient for 99% of the promising use cases, and despite having been out for like 15 years, no one has done anything particularly important (and also legal) with it. I’m not sure a typical person would miss anything if the whole ecosystem disappeared tomorrow.

            • HubertManne@kbin.social · +2/-1 · 1 year ago

              I’m curious about the slow and inefficient part. It’s a ledger, and I was under the impression it was perfect for following a supply chain to keep track of shipments. I’ve sorta been surprised it has not been used for this.

              • dandi8@kbin.social · +3 · 1 year ago

                Why would you use blockchain to track shipments when you can easily do the same with a standard database? What would be the gain?
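
                A plain append-only table already covers the tracking use case; a minimal sketch, with an invented schema:

                  # Sketch: shipment tracking in an ordinary database.
                  # Schema and values are invented for illustration.
                  import sqlite3

                  db = sqlite3.connect(":memory:")
                  db.execute("""CREATE TABLE shipment_events (
                      id          INTEGER PRIMARY KEY,
                      shipment_id TEXT NOT NULL,
                      location    TEXT NOT NULL,
                      recorded_at TEXT DEFAULT CURRENT_TIMESTAMP)""")
                  db.executemany(
                      "INSERT INTO shipment_events (shipment_id, location) VALUES (?, ?)",
                      [("SHIP-42", "Rotterdam"), ("SHIP-42", "Hamburg")])
                  for row in db.execute(
                          "SELECT * FROM shipment_events WHERE shipment_id = ?",
                          ("SHIP-42",)):
                      print(row)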

              • Bipta@kbin.social · +1 · 1 year ago

                Blockchain itself is not slow. No notion of slow or fast exists without an implementation, and the implementation people probably have in mind is Bitcoin, which is slow as shit.
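
                To that point, the chain structure itself is just cheap hashing; what makes Bitcoin slow is proof-of-work consensus. A toy sketch:

                  # Toy sketch: hash-linking records is fast; Bitcoin is slow
                  # because of consensus, not the ledger data structure.
                  import hashlib, json

                  chain = []

                  def append_block(data):
                      prev = chain[-1]["hash"] if chain else "0" * 64
                      body = json.dumps({"prev": prev, "data": data}, sort_keys=True)
                      chain.append({"prev": prev, "data": data,
                                    "hash": hashlib.sha256(body.encode()).hexdigest()})

                  append_block({"shipment": "SHIP-42", "location": "Rotterdam"})
                  append_block({"shipment": "SHIP-42", "location": "Hamburg"})
                  # Tampering with any earlier block changes every later hash.
                  print(chain[-1]["hash"])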

    • hellishharlot@lemmy.world · +4/-1 · 1 year ago

      An orchestrator AI could determine which context it’s in, with specialized AIs handling those tasks.
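
      A minimal sketch of that idea, with a keyword matcher standing in for a real classifier and all names invented:

        # Sketch of an orchestrator: classify the request, then dispatch
        # to a specialist. The classifier here is a stub, not a model.
        def classify(request: str) -> str:
            if "leak" in request or "pipe" in request:
                return "plumbing"
            if "breaker" in request or "outlet" in request:
                return "electrical"
            return "general"

        SPECIALISTS = {
            "plumbing":   lambda r: f"[plumbing model] diagnosing: {r}",
            "electrical": lambda r: f"[electrical model] diagnosing: {r}",
            "general":    lambda r: f"[general model] triaging: {r}",
        }

        def orchestrate(request: str) -> str:
            return SPECIALISTS[classify(request)](request)

        print(orchestrate("water leak under the sink"))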

      • Candelestine@lemmy.world · +9/-14 · 1 year ago

        You’re going to have an unacceptably high failure rate as you attempt to trial-and-error your way through all the lower-probability problems. Meanwhile, independent research paths aiming at general AI, which absolutely could handle all these problems, are racing you.

    • Brkdncr@kbin.social · +3/-2 · 1 year ago

      An HVAC company that is able to adopt AI for first-call processing and scheduling will be able to eliminate a number of jobs and remain open 24x7. It will undercut its local competitors, and the HVAC techs will soon find themselves out of a job or working for that competitor.

      Small companies won’t be able to compete.

      I’m all for this, but we need to offset these immense productivity gains with economic safety nets. I don’t know how the next 100 years will look if we don’t adopt UBI, universal healthcare, and some amount of subsidized housing.

      • effingjoe@kbin.social · +3/-2 · 1 year ago

        if we don’t adopt UBI, universal healthcare, and some amount of subsidized housing

        This has been my stance for years. Automation is coming for all of us. The only reason LLMs are so controversial is that everyone in power assumed automation was coming for the blue collar jobs first, and now that it looks like white collar and creative jobs are on the chopping block, suddenly it’s important to protect people’s jobs from automation, put in safety nets, etc, etc.

        Forgive my cynicism. haha

  • ShittyBeatlesFCPres@lemmy.world · +23/-2 · 1 year ago

    I think realistically it’ll be decades before people comfortable with GPT enter the workforce and actually make most of those jobs redundant. It’s like when the internet blew up and older managers had no clue what to do with it: they hired web developers, and eventually the web developers wrote things like WordPress so the staff could edit the website themselves.

    And guess what happened? The web devs didn’t get laid off. Staff kept sending the web developers changes in Word documents for like a decade before a generation of young people comfortable with posting text on the web entered the workforce and actually wanted to do it themselves. (Even then, the web developers didn’t disappear but, instead, were freed to build more complicated things.)

    So, basically, I think the concepts there are fine but that it’ll take a generation for businesses to fully take advantage of the new tech. Some firms will embrace it quickly but these things almost always take longer than technology enthusiasts assume.

  • kenbw2@lemmy.world · +18 · 1 year ago

    Seems like OpenAI lobbying the government to pave the way for their technology to become indispensable

    No opinion on the accuracy of it, but this is lobbying, not independent opinion