The ubiquity of audio communication technologies, particularly telephone, radio, and TV, has had a significant effect on language. They further spread English around the world, making it more accessible and more necessary for lower social and economic classes; they led to the blending of dialects and the death of some smaller regional dialects; and they enabled the rapid adoption of new words and concepts.

How will LLMs affect language? Will they further cement English as the world’s dominant language, or lead to the adoption of a new lingua franca? Will they be able to adapt to differences in dialects, or will they force us to further consolidate how we speak? What about programming languages? Will the model best able to generate usable code determine which language or languages are used in the future? Thoughts and beliefs generally follow language, at least on the social scale; how will LLMs’ effects on language affect how we think and act? What we believe?

  • elshandra@lemmy.world · 9 months ago

    Do you actually believe this?

    LLMs are the opposite of a dead end. More like the opening of a pipe. It’s not that they will burn out; it’s just that they’ll reach a point where they’re one function of a more complete AI, perhaps.

    At the very least they tackle a very difficult problem: communication between human and machine. That is their purpose. We have to tell machines what to do, when to do it, and how to do it, with such precision that there is no room for error. LLMs are not tools to prove truth, or anything like that.

    If you ask an LLM a question, and it gives you a response that indicates it has understood your question correctly, and you are able to understand its response that far, then the LLM has done its job, regardless of whether the answer is correct.

    Validating the facts of the response is another function again, which would employ LLMs as a translation tool.

    It’s not a long leap from there to a language translation tool between humans, where an AI is an interpreter. DeepL on ’roids.

    • Lvxferre@mander.xyz · 9 months ago

      My belief is that LLMs are a dead end that will eventually burn out, but only because they’ll be replaced with better models. In other words, machine text generation will outlive them, and OP’s concerns are mostly about machine text generation, not that specific technology.

    • HelloThere@sh.itjust.works · 9 months ago

      Do you actually believe this?

      Yes. I’m also very happy to be proven wrong in the years to come.

      If you ask an LLM a question, and it gives you a response that indicates it has understood your question correctly, and you are able to understand its response that far, then the LLM has done its job, regardless of whether the answer is correct

      I don’t want to get too philosophical here, but you cannot detach understanding / comprehension from the accuracy of the reply, given how LLMs work.

      An LLM, through its training data, establishes what an answer looks like based on similarity to what it’s been taught.

      I’m simplifying here, but it’s like an actor in a medical drama. The actor is given a script that they repeat; that doesn’t mean they are a doctor. After a while the actor may be able to point out an inconsistency in the script, because they remember that last time a character had X they needed Y. That doesn’t mean they are right, or wrong, nor does it make them a doctor, but they sound like they are.

      This is the fundamental problem with LLMs. They don’t understand, and in generating replies they just repeat. It’s a step forward from what came before, that’s definitely true, but repetition is a dead end because it doesn’t hold up to application or interrogation.
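      The “answers look like training data” point can be sketched with a toy bigram model (the corpus here is hypothetical and purely illustrative): it picks the most frequent continuation it has seen, with no notion of whether that continuation is true.

```python
from collections import Counter, defaultdict

# Tiny hypothetical "training corpus" for illustration only.
corpus = "the patient has a fever so give the patient fluids".split()

# Count which word follows which in the training text.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word(word):
    """Return the most common continuation seen in training, or None."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(next_word("the"))    # "patient" -- the most frequent continuation
print(next_word("fever"))  # "so" -- plausible-sounding, truth never checked
```

      Real LLMs are vastly more sophisticated, but the failure mode is the same shape: output is driven by what followed what in training, not by any model of correctness.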

      The human-machine interface part, of being able to process natural language requests and then handing off those requests to other systems, operating in different ways, is the most likely evolution of LLMs. But generating the output themselves is where it will fail.

      • elshandra@lemmy.world · 9 months ago

        So I feel like we agree here. LLMs are a step towards solving a low-level human problem; I just don’t see that as a dead end. If we don’t take the steps, we’re still in the oceans. We’re also learning a lot in the process ourselves, and that experience will carry on.

        I appreciate your analogy, I am well aware LLMs are just clever recursive conditional queries with big semi self-updating datasets.

        Regardless of whether or not something replaces LLMs in the future, the data, and the processing that’s gone into that data, will likely be used along with the lessons we’re learning now. I think they’re a solid investment from any angle.

        • HelloThere@sh.itjust.works · 9 months ago

          Regardless of whether or not something replaces LLMs in the future, the data, and the processing that’s gone into that data, will likely be used along with the lessons we’re learning now. I think they’re a solid investment from any angle.

          I’m a big proponent of research for the sake of research, so I agree that lessons will be learnt.

          But to go back to OP’s original question of how LLMs will affect spoken language: they won’t.

          • elshandra@lemmy.world · 9 months ago

            But to go back to OP’s original question of how LLMs will affect spoken language: they won’t.

            That’s a rather closed-minded conclusion. It makes it sound like you don’t think they have a chance.

            LLMs have the potential to pave the way to aligning spoken language, perhaps even evolving human communication to a point where speech is an occasional thing because it’s really inefficient.

            • HelloThere@sh.itjust.works · 9 months ago

              You’re putting the cart very much before the horse here.

              For what you describe to happen requires global ubiquity. For ubiquity to happen, it must be something with sufficient utility that people from all walks of life, and in all contexts (i.e. not just professional ones), gain value from it.

              For that to happen, given the interface is natural language, the LLM must work across languages to a very high level, which works against the idea that human language will adapt to it. To work across languages at that level it must adapt to humans, not the other way around.

              This is different to the technologies that came before, like post or email, where a technical restriction on a particular format/structure (e.g. a postal or email address) was secondary to the main content (the message).

              For LLMs to affect language, you’re basically talking about human-to-human communication adopting “prompt engineering” characteristics. I just don’t see this happening on the scale you describe; human-to-human communication is woolly and imperfect, with large non-verbal elements, and while most people make do most of the time, we all broadly speaking suck at making points with perfect clarity and no misunderstanding.

              For any LLM to be successful, it must be able to handle that, and being able to handle that dramatically reduces the likelihood of effecting change, because if change is required it won’t be successful.

              It’s basically a tautology, which is why it’s such a difficult thing, and why our current generation of models is supported mainly through hype and FOMO.

              Lastly, the closest example to a highly structured prompt that currently exists is a programming language. These are used by millions of people every day, and still developers do not talk to each other in their preferred language’s syntax.

              • elshandra@lemmy.world · 9 months ago

                This is an interesting and thought-provoking discussion, ty.

                You’re absolutely right, I was looking for the dead end: plugging an LLM into a solution.

                I’m thinking more that LLMs used in conjunction with other tech will have these effects on our communication. LLMs, or whatever replaces them to do that interpretation, are necessary to facilitate that.

                When we come up with something better, to do the same job better, then of course, LLMs will be redundant. If that happens, great.

                We are already seeing a boom in the popularity of LLMs outside of professional use. Global ubiquity for anything is never going to happen unless we can fix communication, which we probably can’t. We certainly can’t alone. It’s very much a chicken-and-egg problem, one that we can only gain from by progressing towards it.

                Imagining vocalising using programming languages gave me a chuckle. I have been known to do things like use s/x/y/ to correct myself in written chats, though.
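                The s/x/y/ convention mentioned above comes from sed/ed; a chat correction like “s/teh/the/” is mechanical enough that a few lines of Python can apply it to the previous message (a minimal sketch; it assumes no escaped slashes in the pattern):

```python
import re

def apply_correction(message, correction):
    """Apply a sed-style 's/old/new/' correction to a previous message.
    Illustrative only: no escaping, flags, or regex patterns supported."""
    match = re.fullmatch(r"s/(.*?)/(.*?)/", correction)
    if not match:
        return message  # not a correction; leave the message unchanged
    old, new = match.groups()
    return message.replace(old, new, 1)

print(apply_correction("I'll meet you at teh station", "s/teh/the/"))
# I'll meet you at the station
```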

                Programming languages allow us to talk to and listen to machines. LLMs will hopefully allow machines to listen and talk to/between us.

                • elshandra@lemmy.world · 9 months ago

                  I’m going to take the time to illustrate here how I can see LLMs affecting human speech, through existing applications and technologies that are (or could be) made both available and popular enough to achieve this. We’re far enough down the comment chain that I can reply to myself now, right?

                  So, we can all agree that people are increasingly using LLMs, in the form of ChatGPT and the like, to acquire knowledge/information, the same way they would use a search engine to follow a link to that knowledge.

                  Speech-to-text has been a thing for at least three decades (yeah, it was pretty hopeless once, but not so much now), so let’s not argue about speech vs text. People already talk to Google and Siri and whoever else to this end, and people have their LLM responses read out via TTS.

                  I remember being blown away in 1998, watching a blind sysadmin interact with a Linux shell via TTS at rates where I couldn’t even understand the words. How far we’ve come. I digress, so.

                  We’ve all experienced trouble getting the information we’re looking for, even with all these tools, because there’s so much information, and it can be very difficult to find the needle in the haystack. So we constantly have to refine our queries, either to be more specific or to exclude relationships to other information.

                  This, in turn, causes us to think more frequently about the words we’re using to get the results we want, because otherwise we spend too much time on recursion.
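                  The refine-or-exclude loop described above can be sketched as a trivial filter over a document set (the documents and terms here are made up for illustration): when a query matches too much, you either add required terms or exclude ones.

```python
# Hypothetical mini-corpus standing in for search results.
docs = [
    "python snake care and feeding",
    "python programming tutorial",
    "python programming snake game",
]

def search(required, excluded=()):
    """Return docs containing every required term and no excluded term."""
    return [d for d in docs
            if all(t in d for t in required)
            and not any(t in d for t in excluded)]

print(search(["python"]))                      # all three match: too broad
print(search(["python"], excluded=["snake"]))  # narrowed to one result
```

                  The point is the human side of the loop: each overly broad result set pushes the querier to choose words more deliberately on the next attempt.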

                  In turn, the more we do this, and are trained to do this, the more it will bleed into human communication.

                  Now look, there is absolutely a lot of hopium smoking going on here, but damn, this could have an everlasting impact on verbal communication. If technology can train people, through inaccurate/incorrect results, to think about the communication going out when they speak, we could drastically reduce the amount of miscommunication between people by that alone.

                  Imagine:

                  get me a chair

                  wheels out an office chair from the study

                  no I meant a chair for at the kitchen table

                  Vs

                  get me a chair for at the kitchen table

                  You can apply the same thing to human prompted image generation and video generation.

                  Now… we don’t need LLMs to do this, or to know this. But we are never going to achieve this without a third party (the “LLM”, and whatever it’s plugged into), because a human recipient will usually be more capable of translating these variances, or will employ other contexts not as accessible via a single output such as speech or text.

                  But if machines train us to communicate better (more accurately, precisely and/or concisely), that is an effect I can’t welcome enough.

                  Realistically, the machines will learn to deal with us being dumb, before we adapt.

                  e: formatting.

                  • HelloThere@sh.itjust.works · 9 months ago

                    My question is simple.

                    Given that humans have not already achieved this clarity of communication, when we are social animals, have been utterly dependent on each other for the entire existence of our species, the importance of communication was literally a matter of life and death, and for the vast majority of that time we only communicated through speech (the written word dates to approx 4k BCE)… then why would an LLM, or any human-machine interface for that matter, achieve this as a side effect of usage?

                    I fully accept that people, everyone, can be trained in precise speech, but we aren’t talking about purposeful training here.