I must confess to getting a little sick of seeing the endless stream of articles about this (along with the season finale of Succession and the debt ceiling), but what do you folks think? Is this something we should all be worrying about, or is it overblown?

EDIT: have a look at this: https://beehaw.org/post/422907

  • @years_past_matter@lemmy.ml

    A lot of the fearmongering surrounding AI, especially LLMs like GPT, is very poorly directed.

    People with a vested interest in LLMs (i.e. shareholders) play into the idea that we’re months away from the AI singularity, because it generates hype for their technology.

    In my opinion, a much more real and immediate risk of the widespread use of ChatGPT, for example, is that people believe what it says. ChatGPT and other LLMs are bullshitters - they give you an answer that sounds correct, without ever checking whether it is correct.

    • TheRtRevKaiser

      The thing I’m more concerned about is “move fast and break things” techbros implementing these technologies in stupid ways without considering: A) whether the tech is mature enough for the uses they’re putting it to, and B) the biases inherited from training data and methods.

      LLMs inherit biases from their data because their data is shitloads of people talking and writing, and often we don’t even know or understand the biases without close examination. Trying to apply LLMs or other ML models to things like medicine, policing, or housing without being very careful about understanding the potential for incredible biases in things that seem like impartial data is reckless and just asking for negative outcomes for minorities. And the ways that I’m seeing most of these ML companies try to mitigate those biases seem very much like band-aids slapped on as they rush these products out the door to be first to market.

      I’m not at all concerned about the singularity, or AGI, or any of that crap. But I’m quite concerned about ML models being applied in medicine without understanding the deep racial and gender inequities that are inherent in medical datasets. I’m quite concerned with any kind of application in policing or security, or anything making decisions about finance or housing or really any area with a history of systemic biases that will show up in a million ways in the datasets that these models are being trained on.
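
      A minimal sketch of the kind of check being described here: before training or deploying a model on historical decision data, compare outcomes across groups in that data. The column names and numbers below are made up purely for illustration.

```python
# Hypothetical example: auditing "impartial" historical data for group disparities
# before a model is trained on it. All names and values here are invented.
import pandas as pd

decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [ 1,   1,   1,   0,   1,   0,   0,   0 ],
})

# Outcome rate per group in the training data.
rates = decisions.groupby("group")["approved"].mean()
print(rates)  # A: 0.75, B: 0.25

# A crude disparate-impact check: ratio of lowest to highest rate.
# A model fit to this data will reproduce the gap unless it is addressed.
print("disparate impact ratio:", rates.min() / rates.max())  # ~0.33
```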

      • Gaywallet (they/it)

        Medicine is quite resistant to this kind of hype and already has a lot of math and stats nerds. I know this because I do data science in healthcare. We’re not pushing to overuse AI in poor ways. We definitely get stuff wrong, but I wouldn’t expect this field to be the one that worsens inequality.

        Low-wage jobs that have a high impact on inequality, such as governmental jobs, however, are a huge and problematic target. VCs will spin up companies to automate away the need for tax auditors and welfare auditors, to replace or reduce the number of law clerks, and to proactively identify people for extra scrutiny by police and child services, etc. This is where we’re gonna create a lot of inequality real fast if we’re not careful. These fields are neither led by math and stats nerds, nor do they have any on hand to tell them to slow down. They’re also already at risk and frequently on the chopping block, which puts a lot of pressure on them to be ‘efficient’.

        • TheRtRevKaiser

          Hey @Gaywallet, I was hoping I’d see you chime in given your background. I don’t have any particular expertise when it comes to this subject, so it’s somewhat reassuring to see your confidence that folks in the healthcare industry will be more careful than I assume. I work in an adjacent field and know there are a lot of folks doing really good work with ML in healthcare, and that most of those people are very cognizant of the risks. I still worry that there are a lot of spaces in healthcare and especially in areas like claims payment/processing where that care is not going to be taken and folks are going to be harmed.

          • Gaywallet (they/it)

            Ahhh yeah, claims and payment processing is typically done by the government or by insurance companies, and it’s definitely a valid risk. They already automate as much as they can - they’re mostly concerned with making a profit or keeping costs down. That is absolutely a sector to be worried about.

            • TheRtRevKaiser

              Yeah, I work for a company that builds and runs all kinds of healthcare related systems for state and local governments. I work on a Title XIX (Medicaid) account and while we are always looking for ways to increase access, budgets are very tight. One of my concerns is that payors in this space will look to AI as a way to cut costs, without enough understanding or care for the potential risks, and the lowest bidder model that most states are forced into will mean that the vendors that are building and running these systems won’t put in the time or expertise needed to really make sure those risks are accounted for.

              When it comes to private insurance, I don’t expect anything from them but absolute commitment to profit over any other concern, and I’m deeply concerned about the ways that they may use AI to try and automate to the detriment of patients, but especially minorities. I absolutely don’t expect somebody like UHC to take the kind of care needed to mitigate those biases when applying AI to their processes and systems.

              • Gaywallet (they/it)

                If it’s of any consolation, most of these places already have systems in place which automate away most of the work - the article that broke recently about physicians reviewing appeals for an average of under one second is an example of how they’re currently doing it. There are some protections in place, but I am also very pessimistic about this sector, and, as you have mentioned, about all sectors which operate under a lowest-bidder model.

      • @spoonful@beehaw.org

        AI being biased is not a new problem. All of our tools are biased, and we have humans to moderate and adjust them. This goes all the way back to basic algorithms and even physics - e.g. people with darker skin were harder to capture on early film, but we managed it.

        I think addressing bias is an easy problem to solve, as we’ll have a surplus workforce, and AI moderation/training might well become a big new career path.

        Also, regarding finance and politics - there’s really nowhere to go but up, imo. As someone who has worked in fintech and real estate, it’s all a complete and utter joke that can’t get any worse.

  • @jherazob@beehaw.org

    This video by supermarket-brand Thor should give you a much more grounded perspective. No, the chances of any of the LLMs turning into Skynet are astronomically low, probably zero. The AIs are not gonna lead us into a robot apocalypse. So, what are the REAL dangers here?

    • As is already happening, greedy CEOs are salivating at the idea of replacing human workers with chatbots. For many workers this might actually stick and leave them unemployable.
    • LLMs by their very nature will “hallucinate” when asked for stuff, because they only have a model of language, not of the world. As an expert put it, “What the large language models are good at is saying what an answer should sound like, which is different from what an answer should be. And this is a fundamental limitation of all these models; it’s not something that can be patched out of them.” So, bots will spew extremely convincing bullshit and cause lots of damage as a result.
    • NVidia recently reported blowout earnings thanks to all this AI training (it’s heavily dependent on powerful GPUs), and its valuation shot up to around a trillion dollars. This has created a huge gold rush. In the USA in particular, it’s anticipated that this will kill any sort of regulation that might slow down the money. The EU might not go that route, and Japan recently went all in on AI, declaring that training AIs doesn’t break copyright. So, we’re gonna see an arms race that will move billions.
    • Both the art and text AIs will get to the point where they can replace low-level workers. They’re not any danger to proper experts and artists, but students and learners will be affected. This will kill entry-level jobs. How will the upcoming generations get the experience to actually become trained? “Not my problem,” the AI companies and their customers will say. I hope this ends up being the catalyst for a serious move towards UBI, but who knows.

    So no, we’re not gonna see Endos crushing skulls, but if measures aren’t taken we’re gonna see inequality get way, WAAAY worse very quickly all around the world.

    • English Mobster

      Yep. I’m already seeing AI start to displace some jobs. And what we’re seeing right now is the “baby” form of AI. What will it look like in 5 years? 10? 20?

      But really AI is just part of the puzzle.

      Everyone talks about Tesla as it is now and how bad their self-driving is - without considering that one day it will get better. And it’s not just Tesla; it’s places like Waymo too. Let’s not forget that Tesla now sells semi trucks, and there’s no reason why the tech won’t apply to them as well. One day - not today, but maybe in a decade - self-driving will be the norm. And that kills off Uber, taxis, semi-truck drivers, and anyone else who drives for a living.

      And that applies to other delivery methods too. Right now, Domino’s has a pizza-delivery robot. At scale, those can replace DoorDash, Amazon, and even the USPS. Any job which is “move a thing from one place to another” is at risk within a decade. Even things which don’t exist now - like automated garbage trucks - will one day soon exist. Like, within our lifetime.

      It doesn’t stop there, either. Amazon has a store without cashiers. Wal-Mart has robots which restock shelves. That’s a good chunk of stores now completely automated. If you’re a stocker or a cashier, your job is on the chopping block too.

      Japan has automated hotels. You don’t need to interact with a human, at all. There are also robot chefs flipping hamburgers. And I’m sure you’ve seen the self-order kiosks at McDonald’s. Between all that - that’s the entire service industry automated.

      Did I mention that ChatGPT can write code? It’s not good code, but it’s code. Given enough time, this tech will replace a good chunk of programmers, too. Do you primarily use Excel in your job? This same AI can replace you, too.

      AI is coming for all kinds of jobs. Even construction workers are at risk now, for example. And even if the AI isn’t good yet - one day it will be.

      Just like how computers in the 1970s weren’t good. But they are now.

      It will happen. You can’t stop it. CGP Grey did a great video on this a while back.

      His analogy was basically: just because earlier technology like the stagecoach meant more jobs for horses doesn’t mean the invention of the car created even more horse jobs. You can’t presume that new technology will always create jobs; at some point it’s going to cause a net decline.

      And what happens when entire industries disappear overnight? What will happen to college students who now can’t get a simple customer service job to put them through college? What happens to entry-level jobs?

      Like I said. The genie is out of the bottle.

      Now. It’s in the capitalist’s best interest to have money entering the workforce. If the workforce doesn’t have money, they don’t spend that money, and the capitalist doesn’t get more money.

      It’s in the politician’s best interest to keep the masses happy. They are what decide elections, and automation isn’t going to stop elections from happening.

      Because of that, there are 3 ways things can go down:

      1. A complete ban on AI capable of a certain level of automation. I think this is unlikely but conceivable. I expect conservative parties to start championing this in 10-20 years.

      2. A universal basic income/expanded social safety net. Notably this is what Andrew Yang was talking about in the US 2020 primaries, and - whether you like Yang or not - it’s something that has gained traction.

      3. Fully automated luxury gay space communism. I find this the most unlikely option, but if the politicians/capitalists for whatever reason decide to ignore the fact that 3/4 of the workforce doesn’t have a job… well, something’s gotta give. But like I said - I don’t think this will actually happen, or even come close to happening.

      I expect that politicians will be reasonable and nip this in the bud with something like UBI. The reaction will be similar to what happened during the pandemic - nobody has a job and nobody can work, but the economy needs to go on. So the government gives people a stipend to go spend on stuff to keep the economy going.

      But honestly… who knows? It could really go either way.

    • art

      supermarket-brand Thor

      Kyle Hill looks like if Thor and Aquaman had a baby. A very nerdy baby.

    • hedge (OP)

      Hi @jherazob@beehaw.org, finally got around to watching the video, thanks for letting me know about it.👍 One thing that really befuddles me about AI is the fact that we don’t know how it gets from point A to point Z as Mr. Dudeguy mentioned in the video. Why on earth would anyone design something that way? And why can’t you just ask it, “ChatGPT, how did you reach that conclusion about X?” (Possibly a very dumb question, but anyway there it is 🤷).

  • @uzay@beehaw.org

    All that AI apocalypse talk by the tech billionaires who are financing AI development is a ploy to oversell their capabilities, generate hype, and divert attention from the very real risks to society these so-called AI models pose right now: surveillance, discrimination, replacing real workers with crappy chatbots etc.

  • @mobyduck648@beehaw.org

    I think AI attracts a similar breed of grifter to the one cryptocurrency attracts. It’s also no coincidence, in my opinion, that those with a proprietary competitive lead, or an interest in gaining one, are the main people (other than outright grifters) pushing the AI doom narrative. It’s telling that they latch onto the paperclip-optimiser scenario as though that’s not your average VC-funded business model in the social media space.

  • Melody Fwygon

    It probably is.

    That doesn’t stop foolish CEOs from wrecking lives with it; nor does it prevent anyone else who is uninitiated or uninformed about AI from misusing it.

    Therefore; we will most certainly have to deal with the repercussions of this technology.

  • @darkfoe@lemmy.one

    Way overblown. It’s either someone trying to sell me AI products by overhyping what they can do, or someone trying to sell clicks on an ad-infested website so I’ll read probably-AI-generated content about why AI is bad.

    But, as a tool it’s great. Copilot has saved me some time on boilerplate stuff on personal projects, and ChatGPT has helped refine some writing for family when I’ve pointed them in the direction of it.

    • @catacomb@beehaw.org

      I studied some AI at university. Not overhyped frameworks or tools, just fundamentals. Believe me when I say the dropout rate is high in the first few weeks, when people realise that a lot of it is fancy statistical models and inference.
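
      For a sense of what “statistical models of language” means at the most basic level, here is a toy sketch (not tied to any real framework or course material): a bigram model that “predicts” the next word purely by counting.

```python
# Toy bigram "language model": next-word prediction is just counting and
# conditional probability. The corpus is made up for illustration.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict(word):
    """Return the most likely next word and its estimated probability."""
    counts = following[word]
    best, n = counts.most_common(1)[0]
    return best, n / sum(counts.values())

print(predict("the"))  # ('cat', 0.5) -- a plausible continuation, no understanding
print(predict("cat"))  # ('sat', 0.5)
```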

      You’re right, it’s a helpful tool and people have found it useful. That’s great. As for whether we should be scared? I’m quite honestly more scared of what the already-intelligent people around me can do, especially when empowered by these shiny new tools.

      • @darkfoe@lemmy.one

        Yeah, what smart folks can do with these tools, combined with the mountains of data most people leave in the wake of their online presence, is a little frightening. Not that the data itself wasn’t frightening in the first place, but with these tools it’s a lot easier to process.

  • @shufflerofrocks@beehaw.org

    Definitely overblown - the capabilities are overblown by grifters and VCs who want to sell their AI services, and the doomsday consequences by clickbaiters.

    That said, there are many legitimate concerns - exploitation of artists, non-consensual deepfakes, huge amounts of misinformation, and the intense push towards chatbots and the like as a replacement for human connection.

  • @kyrla@beehaw.org

    The apocalypse will never be “Skynet launches all the nukes”. It’ll be “the Skynet AI has predicted with 85% certainty that China has launched all of their nukes”.

  • Butterbee (She/Her)

    Ohhh boy I have concerns about AI, I really do. I’m afraid of the misinformation we’re going to have to sort through in the next few years. But as far as those overblown EXTINCTION claims? Boy howdy!

    Those billionaires who are claiming that have already got all 10 grubby little fingers and all 10 greasy toes in every pie that is already leading to our extinction.

    I’m far less worried about AI ending us than I am about humans doing it.

    • hedge (OP)

      So maybe it’s the misinformation we should be worrying about, not a Skynet-type nuclear holocaust scenario . . . 🤔

      • Butterbee (She/Her)

        I think that’s realistically what we’re going to have to deal with in our daily lives. The big spooky AI threat feels like a distraction to keep us from looking at climate change, socio-economic imbalance, and the rising boldness of bigotry. All things those people profit from.

        • hedge (OP)

          It’s funny that with all this tech that’s supposed to increase human knowledge (at least from a positive pre-cyberpunk sci-fi-type mindset), it just ends up doing the opposite and confusing everyone to the point where people can’t distinguish between what’s real and what isn’t.

  • art

    I honestly believe that the doom and gloom around AI is just to pump up venture capitalist funding and try to get large defense contracts from governments.

    There’s a lot of cool things that it can do and just like any tool it can be used for awesome projects or for evil. It is neutral. It’s a fancy hammer.

  • @uthredii@beehaw.org

    IMO the concerns are that the latest AIs:

    • have capabilities that are/were unexpected.
    • have novel applications that are changing job markets and power balances.
    • have acted deceptively in some cases.
    • have been very persuasive in some cases.

  • @cavemeat@beehaw.org

    Honestly, I don’t think it’ll take us over; computers are a lot stupider than average consumers think they are. However, I am concerned that they’ll help further spread misinformation and worsen the signal-to-noise ratio of the internet.

  • @spoonful@beehaw.org

    I’m actually very optimistic and here’s why - it changes education and research completely.

    Generally, when learning new things, the initial step is the hardest. Figuring out where to start and what to learn is extremely overwhelming, and we’ve basically got rid of that. It’s amazing.

    I’m a fullstack engineer and honestly I feel that with LLMs I have the tools to switch to basically any career I’d want. If AI takes away coding, then I’d happily let it build stuff for me and pivot somewhere else. Things get a bit weirder for people who can’t do that, but that’s not a new issue - we already have people who need assistance, and if anything we should be able to support them better now.

    • Scott Harper

      @spoonful @hedge I look at LLMs very much with the lens of accessibility as well. There’s a lot of things an AI can do for you that helps with various developmental disorders which people can’t do on their own, and having an AI take over some of the more difficult aspects could really be a game changer (and already has been for me)!

    • @orclev@lemmy.ml

      The problem with ALL the LLMs is that they don’t actually understand anything at all. They produce output that looks like other similar things but may or may not have any actual relationship to reality. So they’re incredibly advanced bullshit generators.

      I would never trust a piece of code written by one of these things and you’d spend just as much time debugging what it wrote as it would have taken to write it in the first place.

      For that matter, you’d never be able to trust anything one told you either, since you never know whether what it said is true; you’d literally need to research everything it tells you just to find out.

      It could maybe work as a better interface to a search engine though. You ask it a question and it redirects you to what it thinks are the most relevant search results. E.G. “how do I do X” and it tells you “People who wanted to do X often needed to know about Y and Z, here are some of the top search results for that”, but you’d need to actually follow the links provided, not let it summarize them.
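
      A rough sketch of that idea, purely to illustrate its shape - `ask_llm()` and `web_search()` below are hypothetical placeholders for whatever LLM and search APIs you would actually use, not real libraries:

```python
# Sketch of "LLM as a front-end to a search engine": the model only rephrases
# the question into queries; the user gets links to follow, not a summary.
# ask_llm() and web_search() are hypothetical placeholders, not real APIs.

def ask_llm(prompt: str) -> str:
    """Placeholder: send a prompt to your LLM of choice and return its text."""
    raise NotImplementedError

def web_search(query: str, limit: int = 5) -> list[str]:
    """Placeholder: query a search engine and return result URLs."""
    raise NotImplementedError

def find_links(question: str) -> list[str]:
    # Ask the model for search queries, not for an answer.
    queries = ask_llm(
        "Rewrite this question as two short web search queries, one per line: "
        + question
    ).splitlines()
    # Return the raw links so the user reads the sources themselves,
    # instead of trusting a possibly hallucinated summary.
    links = []
    for query in queries:
        links.extend(web_search(query))
    return links
```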

  • @glad_cat9187@beehaw.org

    I think it’s a bit overblown right now, like outsourcing was 10 years ago, but it may change in the future and put people out of a job.

    I am scared for junior programmers though. It feels like they will try to “learn” by reading answers from ChatGPT without actually learning, searching, thinking, and failing. Failing is good when you begin, because you learn what works and what doesn’t. If everyone does this, we will have a whole generation of programmers who can’t produce good code or change it.

    I already saw this trend on /r/learn_programming where half the posts were beginners asking simple questions that could be answered by “read a book,” “read the official tutorial,” or “use Google.”

    Edit: I can add that this bad trend also happened in /r/programming and /r/coding where all the posts were either about technology (off-topic) or basic programming advice.