The question I'm asking is this: Wittgenstein seems to have nailed the "use" argument, but the question remains, who or what gives use its meaning? Nobody today uses language better than LLMs (it could be argued), yet they don't have a clue what the language means; it's the classic case of John Searle's Chinese room thought experiment. So language has meaning for each person; how does that come about?
-
1"it could be argued" really? how much cutting edge philosophy or literature has it written? i'm not saying it cannot happen...– patientCommented Aug 1 at 23:21
-
2LLMs do not "use" language in the sense Wittgenstein was talking about.– David GudemanCommented Aug 2 at 1:14
-
Used by the speaker, and specifically by a community of speakers sharing a common set of rules, often implicitly.– Mauro ALLEGRANZACommented Aug 2 at 7:41
-
1A well-posed question. Witty, John Searle, and ChatGPT (our very first AI) are the 3 points of our mystery triangle. I recall someone on a math forum claiming he wrote code that could solve triangles. I wish he were here to give us his valuable input and advice. If only wishes were horses.– HudjefaCommented Aug 2 at 8:25
-
1In Searle's thought experiment I think it's generally understood that the person in the room doesn't actually "know" Chinese. So it's a good analogy for LLMs.– BarmarCommented yesterday
5 Answers
There are two questions here:
- "Who or what gives use its meaning?"
- "language has meaning for each person, so how does that come about?"
You throw in a comment about LLMs, to the effect that they "don't have a clue what the language means", and I want to say something about that. It will help to clarify my answers.
Who or what gives use its meaning? In a way, this is rather puzzling. Use is meaning. But I think you mean who or what gives language its meaning.
The short answer is language users. It's the interchange or interaction between users that establishes and maintains the meaning.
We're all familiar with the ways of learning new words and new uses for old words, and many of us have learnt new languages. But all that is for people who already use language, and so it does not give the radical answer to your question.
We learn our first language - almost all of us - right from the beginning. But it is entirely a matter of interacting with language users in the context of daily life. Learning language is inseparable from learning how to navigate the world. I don't think anyone knows much about the details of how language as such developed for human beings. But I'm sure that it developed in similar ways, out of the context of social life.
As to your second question, I think that this also explains how each of us learns about language and what language means. We are each initiated into it as a part of learning about the world, and especially other people.
But it is important to add the rider that natural language is not a formal system like Euclid's Geometry or Newton's mechanics. We all follow the same rules - that is essential for communication - but we do not necessarily always apply them in the same way.
That produces a drift in meaning, where new, possibly deviant, uses by individuals are imitated if they are useful, and so survive and are incorporated into the language. If they are not found useful, they get forgotten.
On the question of LLMs, it seems to me that the right answer to give emerges from what I have said. Understanding and using language is not merely understanding a linguistic system or being able to produce coherent sentences. It's about understanding and participating in the practices and way of life in which the language is rooted. That's why LLMs don't - and can't - understand language as humans do.
All of this is based on Wittgenstein, so if you look at the beginning of the Philosophical Investigations, up to about section 20, you will see him explaining how it works.
-
Right, does a Stop sign really mean it when it says Stop? If not, can we safely ignore it? Commented yesterday
-
1Good question. It depends where it is and why it is where it is. In the right circumstances, it can say Stop. It's a bit pointless to deny that - and possibly actually misleading. Given that "Stop" means stop, I'm inclined to say that the sign means stop; but it is possible that it is not the sign that does the meaning, but the person who put it where it is. Others would say that it is the person reading the sign who does the meaning. I think they are all in it together. Is it a conspiracy, do you think?– Ludwig VCommented yesterday
A slight misunderstanding: in Wittgenstein's view, we don't "give use its meaning"; we abstract meaning from (collective) use. If we go back to his Language Game #2 (§2 of Philosophical Investigations), he holds that we come to understand the meaning of a word like 'brick' because there are collective activities (like building walls) that require the creation/exchange of objects of a certain type, material, shape, etc. We start by pointing, showing, and using other physical gestures, then we transition to using vocal gestures: sounds that perform the same gesturing activity. A meaningful word is a sound that points to something others want, need, or are interested in. In essence, no one gives a word its meaning, just as no one produces a child by themselves. A word (like a child) is a product of interaction.
This is where Large Language Model AIs fall short. LLMs learn to use words based on correlations and structural regularities of language production; they have no grounding in collective activities. They are a kind of parrot. A parrot hears sounds in its environment and learns to mimic them; if those sounds are human language, the parrot will mimic the sound of human language, producing what seem like complex linguistic utterances. But a parrot can't use language to collaborate with — literally 'work with' — humans, except in extremely simple ways. It's merely reproducing the sounds it hears with a detailed and exacting ear. Likewise, an LLM gathers large amounts of verbal production from its 'environment' and mimics it. But it can't use language to collaborate. It cannot abstract meaning because it can only 'see' words in terms of their structural relation to other words within language, not as a gesture towards something outside of language.
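To make that "structural relation to other words" point concrete, here is a deliberately tiny sketch of my own (an illustration, not how any real LLM is built): a toy model that predicts the next word purely from co-occurrence counts in its training text, with no reference to bricks, walls, or builders.

```python
# A toy "language model": it learns only which words tend to follow which,
# and predicts continuations from those counts alone. Nothing here knows
# what a brick is or what it is for -- there is no grounding in activity.
from collections import Counter, defaultdict

corpus = "pass me the brick . pass me the slab . bring the brick here .".split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent continuation of `word` seen in training."""
    candidates = follows[word]
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("the"))  # 'brick' -- chosen by frequency, not because anyone needs a brick
```

Real LLMs replace the counting with neural networks over much longer contexts, but the relevant feature is the same: the model only ever sees words in relation to other words.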
-
1It's kind of like the plastic food that tends to show up at the front of Japanese restaurants (at least that's where I've seen it): it sends the same photons to your eyes as real food does, so your mind can interpret it as food. But try to take a bite of it. Also, the first time you run into it you've never considered that someone might make something that looks so much like food but isn't. I'd say we're currently at that same point with "it's language, but there isn't meaning behind it" with LLMs. Commented yesterday
Meaning -- understood as the capacity of humans to communicate meaningfully by using a natural language -- is established (learned, acquired, perceived, changed) in the interactions of humans with each other and with their environment. These interactions provide the full linguistic and non-linguistic situational context against which meaning arises, i.e. against which certain external signals or symbols become meaningful to a group of people. This context is what Wittgenstein refers to as shared forms of life -- including other actions, habits, expectations. The environment can, I think, best be conceptualized as an open-ended set of questions or tasks. Tasks as varied as "Can you pass me the cheese?" or "What's the weather today?" or "Tell me a joke". (In Wittgenstein's terminology these would correspond to various language games.)
Some tasks can be performed or resolved by non-verbal action (the other passes you the cheese in silence). Some tasks can be resolved by purely verbal means, i.e. by producing language (Siri or Alexa tells a joke). Yet others may require both language production and non-verbal action.
Human individuals acquire linguistic meaning -- i.e. become skilful in language usage and communication -- through a rather long developmental process, participating in such task-oriented contexts. Purely verbal communication becomes meaningful embedded in those contexts, these forms of life, encompassing overt actions, shared habits, shared perceptions, feelings and expectations.
LLMs do not use language in precisely the same way:
- They are trained (mostly) on tasks that only require language production to resolve a task.
- They often don't have access to "trivial" information or knowledge that would be required to answer some questions.
- They either don't monitor their own language production or this monitoring is much more restricted, much less informed than our self-monitoring. They cannot really respond to correction (since the core model is static).
- They cannot lie. Their language generation may be in accordance with certain (ethical and aesthetic) values, but we do not hold the programs responsible for those values; rather, we hold responsible the companies who deploy the programs. The values are in some sense not (yet?) intrinsic to the programs themselves.
Related to the third point: An LLM as such does not have a confidence model that can modulate the tone of its statements. It may generate (non-overtly) a list of candidate responses to a verbal prompt and then select whatever is the statistically most likely candidate given its model. But in this kind of architecture it's extremely hard to integrate (or use) a confidence model: the LLM itself is its own confidence model, and training a separate model next to it is very hard. Human language production seems to be very different, presumably because the underlying mechanisms (the architecture) are very different.
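To illustrate the "select the statistically most likely candidate" point in the paragraph above, here is a minimal sketch; the candidate words and scores below are made up purely for illustration and do not come from any actual model:

```python
# The model's own output distribution is its only "confidence" signal:
# raw scores (logits) over candidate tokens are turned into probabilities
# with a softmax, and the most probable one is picked. The size of that
# probability is a crude proxy for confidence, not a separately trained
# confidence model.
import math

def softmax(scores):
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

candidates = ["Paris", "Lyon", "banana"]  # hypothetical candidate continuations
logits = [4.1, 1.3, -2.0]                 # hypothetical raw model scores

probs = softmax(logits)
best = max(zip(candidates, probs), key=lambda pair: pair[1])
print(best)  # ('Paris', ~0.94): a high probability, but still no separate model of certainty
```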
However, insofar as any meaning or information can be encoded in language, LLMs do, at least partially, encode this. Whether this constitutes "real" understanding or not seems a question that only makes sense with regard to particular tasks. If we abstract from actual tasks, abstract from any communicative use of language, the question whether or not LLMs "really" understand something seems to me a question without sense: I see no way to differentiate "real" understanding from "fake" understanding or "mere simulation" or non-understanding, without reference to actual use cases.
For instance, a little while ago, someone asked an LLM what the best way would be to remove the wrinkles from one's scrotum. The LLM apparently responded that ironing might be okay, and that using a steam iron would be best. (Unfortunately, I cannot find a reference. Perhaps this is a new urban legend, but I'm merely mentioning it as an example.) Now, we may assume that the LLM was not joking. So, the joke is on the clueless LLM that, indeed, doesn't have enough world-knowledge. But a philosophical sceptic would have to say: how do we know the LLM was not joking? How do we know it wasn't answering a perhaps somewhat absurd question with a joke of its own? Based only on this interaction we cannot say the LLM was not joking. (Note that "ball ironing" actually seems to be a thing, so it's good to always keep an open mind...)
Now, the fundamental criticism is that an LLM "of course" cannot joke, since it doesn't "have" intentional states. It "merely" produces text, but we cannot really, justifiably say that it is serious or nonserious, we cannot say that it means what it says (or doesn't mean what it says), since, again, "there is" no intentionality "behind" its text production. In other words, it doesn't have an inner life.
This criticism is at least partially true given current services: an LLM doesn't (for all we know) have an inner life, so it cannot express itself, it cannot mean what it says and cannot lie. But this criticism overlooks a key aspect of linguistic interaction: that meaningfulness is not (and cannot be) solely anchored in the presence of inner states, but can only be anchored in public criteria for interpreting utterances. Wittgenstein repeatedly emphasized that the meaning of a statement lies in its use within a form of life, not in some hidden mental accompaniment. We take a person to be joking, or not joking, based on tone, timing, situational cues, shared expectations: not because we have access to their (or our own) private mental state. In other words, while it's (still) true that LLM-generated language lacks intentionality (it lacks -- for all we know -- a subjectively experienced desire to be funny or a subjective appreciation of fun or what we call "a sense of humor"), it can still function as humor, given the right context. And if something functions as humor, if it's appreciated as such by a reader, then, at least to that reader, it is humor. From a pragmatic point of view, at least, there is no difference to be found.
-
Strictly speaking, it's nonsense to say (as I did) that the language lacks intentionality. Usually there is nothing in the language - the visible expressions - itself that shows or manifests intentionality or that can be used as actual criteria for its presence. So, when we (or I) say something like that, it means that we (I) don't ascribe intentionality to the language-generating system. Commented yesterday
-
I believe that the single most important thing we've learned from LLMs is that highly advanced language-related tasks can be accomplished by purely computational means, without having to assume any intentionality or conscious experience in the generating system. It's kind of irrelevant to then wonder whether the system "really" understands something. For most actual purposes it does, and sometimes it doesn't. Commented yesterday
-
People make smalltalk at parties too, and may well have little knowledge of and no interest in the topics, but it's a game and few people can resist a game. The intent only goes as far as passing time by participating. AIs can't choose to participate or opt out. (As soon as they can, I'll bet we'll know about it.) Commented yesterday
-
1@ScottRowe - I think that's the game of "Oh, that reminds me of ...(next story)". Which reminds me that I once read a study (I believe by Robert Axelrod, but cannot find a reference now) where these kinds of linked narratives were studied as explanations: so being-reminded-of would be seen as an attempt at explaining what one just heard someone else tell. Another way to model it is as a random walk through topic space... Commented yesterday
Wittgenstein was only highlighting the fact that words lack essence, contrary to the assumption philosophers were working under when trying to solve philosophical problems. Wittgenstein framed this positively as "meaning is use", a tool analogy that plays on the versatility of tools. It reflects the fact that words can be assigned arbitrary meanings, and they are in the natural environment of words; there's no regulatory mechanism for word usage.
I don't know much about LLMs but they seem to be semantically-endowed, at least from their textual outputs.
LLMs are John Searle's Chinese rooms (actualized).
-
1Right out of the horse's mouth - Mr Chat Gpt: "No, large language models (LLMs) like GPT do not have semantics in the way humans do. They manipulate syntactic and statistical patterns in language without inherent understanding of meaning. They simulate semantics—but don't possess it intrinsically." Commented yesterday
-
Most welcome. I didn't quite understand your point. Since ChatGPT isn't semantically enabled, I was wrong.– HudjefaCommented 23 hours ago
-
Your 1st paragraph is spot on. You said "they are in the natural environment of words"; I said "Language is like that. It hovers in a field of structured potential." Same idea, different analogy. My main point is that individual consciousness is the carrier of semantics, and so Wittgenstein's public language argument was not wrong but rather structured incorrectly; there is no question of language being public or private. Commented 13 hours ago
We often credit Wittgenstein with transforming how we think about language, and rightly so. In his later work, he argued that meaning doesn't come from mental pictures or internal definitions. It comes from use. Words mean what they do because of how we use them in real-life contexts: commands, questions, stories, jokes, bargains.
But here’s a question he didn’t really ask (or maybe avoided on purpose): Use by what? Or more precisely: Who or what gives use its meaning?
Large language models, like GPT, simulate human language almost flawlessly. They perform the use part spectacularly, predicting words, matching tone, following the dance of conversation. But they don’t experience anything. They have no awareness, no intention, no memory in the way we do. So… do they really use language in the way Wittgenstein meant?
Maybe not. Maybe they just simulate use. Because without consciousness, there’s no one there to mean anything.
This leads to a different take: what if language, whether spoken, written, or generated, is not meaning until it's "rendered" by a conscious mind? (By rendering, I mean putting our conscious attention on the word(s) and giving it our personal meaning.)
Think of it like a quantum wave function: a cloud of possibilities, superpositions, not-yet-decided outcomes. Language is like that. It hovers in a field of structured potential. Only when a conscious being observes it, attends to it, interprets it, remembers it, does it "collapse" into actual meaning.
LLMs generate potential. But it’s you who renders meaning. The use is real, yes—but meaning comes only when a conscious mind engages it. That’s when the language game becomes more than just moves—it becomes understanding.
This turns the “private language” problem upside down. Wittgenstein said we couldn’t have a private language because no one could check its rules. But if language only becomes meaning through the private, conscious rendering of each mind, then all language collapses privately—even when it looks shared. Agreement isn’t proof of shared inner meaning—it’s just a sign that our private renderings happen to align.
So yes, Wittgenstein was right: meaning is use. But use only matters because there’s something there—someone there—to use it. Language, in this view, isn’t a fixed system. It’s a field of possibility, waiting to be rendered into lived experience.
Until then, it’s not language. It’s potential.
-
1if you cannot do philosophy with a calculator/machine, then what is it?– patientCommented Aug 1 at 23:41
-
Books are full of flawless language too. Movies are full of very human-looking... images. Commented yesterday
-
1Absolutely, books are full of flawless language too and movies are full of very human-looking... images, but it is still the individual book reader or movie watcher who gives meaning to the use. A baby looking at the words in a book or the images on a screen will give a different meaning to the use than adults do, etc. Commented yesterday