# Linguist Forum

## Specializations => Historical Linguistics => Topic started by: Daniel on May 23, 2014, 10:16:42 PM

Title: Man vs. Beast
Post by: Daniel on May 23, 2014, 10:16:42 PM
It was once said that man was separated from the beasts by using tools... before Jane Goodall showed that was just wrong.

And now it is claimed that humans are superior to animals because of the complexity of our communication system: Language.

...I'm skeptical of that. And I've been wondering about a few things, like whether there really is a sharp distinction between human and animal communication. But I just thought of a more interesting question:

If human language is really superior to animal communication, then why can't we all easily learn to speak cat? Or dog? Or dolphin?

And I mean that as a serious question. If humans have animal communication plus added complexity (recursion or whatever you want to claim), then doesn't that mean we should be able to also still speak cat, dog and dolphin?

If instead it turns out that the communication systems of different species are just different, then the entire assumption that humans have some special Language Faculty or UG or whatever is an odd one-- every animal would then have one of its own. Sure, for us it might add to the "complexity" in the way we observe daily, but if we can't learn to speak cat, then isn't cat also a pretty complicated language?

Of course one lazy (and irrelevant?) answer is that we don't have the same physical speech organs to produce what animals produce. But I don't think that's an important consideration.

The only other way around this would be to suggest that "speak cat" is an inaccurate description of an action, given that cats don't speak-- but they do seem to make some sounds with certain associations, in a way that we don't/can't. Right?
Title: Re: Man vs. Beast
Post by: jkpate on May 23, 2014, 11:37:05 PM
Quote from: Daniel
If human language is really superior to animal communication, then why can't we all easily learn to speak cat? Or dog? Or dolphin?

And I mean that as a serious question. If humans have animal communication plus added complexity (recursion or whatever you want to claim), then doesn't that mean we should be able to also still speak cat, dog and dolphin?

A system that learns one thing doesn't necessarily learn every simplification of that thing. This is because a more complex hypothesis space is larger, and so learning a simple system within that complex hypothesis space requires two kinds of evidence: 1) evidence that the system is simple, and 2) evidence about the behavior of that system. A learner that does not consider the large hypothesis space needs only the second kind of evidence. Indeed, a complex learner might assume from the outset that the system is complex, in which case no amount of the first kind of evidence will be enough.
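As a toy illustration of this point (my own construction, not from the post): a Bayesian learner weighing a simple hypothesis (every branching decision goes right) against a complex one (each decision may go either way) needs data to accumulate evidence of simplicity, while a learner that only ever considers the simple hypothesis is certain from the start; and a learner whose prior rules out the simple hypothesis never accepts it, no matter how much data it sees.

```python
# Toy Bayesian comparison (illustrative sketch, hypothetical numbers).
# H_simple: every branching decision goes right (likelihood 1 for such data).
# H_complex: each branching decision goes left or right with probability 0.5.
def posterior_simple(n_right, prior_simple=0.5):
    """Posterior probability of H_simple after observing n_right
    rightward branching decisions (and no leftward ones)."""
    p_simple = prior_simple * 1.0                   # likelihood under H_simple is 1
    p_complex = (1 - prior_simple) * 0.5 ** n_right # each decision halves the likelihood
    return p_simple / (p_simple + p_complex)

# Evidence "that the system is simple" accumulates only gradually:
print(posterior_simple(1))    # 0.666...
print(posterior_simple(20))   # close to 1

# A learner whose prior excludes the simple hypothesis never learns it:
print(posterior_simple(20, prior_simple=0.0))   # 0.0
```

The last line corresponds to the learner that "assumes that the system is complex": its posterior for the simple hypothesis stays at zero regardless of the evidence.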

For a concrete example, we can view context free grammars as a generalization of linear structures by saying that a linear structure is a tree where every local subtree branches to the right.

A learner that assumes a linear structure on data that has only linear structure will discover the linear regularities very quickly. A learner that assumes that local subtrees may branch in either direction, however, will need both evidence for the linear regularities and evidence that there is nothing except rightward branching structure. Moreover, if the learner refuses to consider the possibility that all trees branch to the right, then the learner will never learn the correct linear structure.
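The linear-structure-as-right-branching-tree idea can be sketched in a few lines (a minimal illustration of my own, using nested Python tuples for trees):

```python
def as_right_branching(tokens):
    """Encode a flat (linear) sequence as a strictly right-branching binary tree."""
    if len(tokens) == 1:
        return tokens[0]
    # The first token attaches above the right-branching tree of the rest.
    return (tokens[0], as_right_branching(tokens[1:]))

# A linear sequence and its right-branching tree carry the same information:
print(as_right_branching(["the", "cat", "sat"]))
# ('the', ('cat', 'sat'))
```

A learner restricted to this tree shape has nothing to decide about structure; a learner that also allows leftward branching must additionally rule all the other tree shapes out.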
Title: Re: Man vs. Beast
Post by: freknu on May 24, 2014, 12:09:31 AM
I'm always quite sceptical of things that say humans are "special" or somehow "superior" to other animals. Sure, our intellect seems to be quite astounding compared to most other animals, but that is one tiny aspect of the animal that we are — other animals do other things far better than we do.

I would say that our intellect (and our idea of "humanity") is simply an evolutionary trait equivalent to that of the Galapagos finches' beaks. From our point of view our intellect is the survival trait which has shaped us over the millennia.

To say that humans are superior because we ended up with one trait over another is, well, childish. Oh, and dare I say, circular.
Title: Re: Man vs. Beast
Post by: Corybobory on May 24, 2014, 03:25:58 AM
All species have special things that other species don't do - that is why they have their own niche and why adaptive pressures resulted in them speciating in the first place. Humans, like other animals, have unique traits. I believe one of these is language, but there are a number of other cognitive skills humans have that are not found in other animals. Language isn't monolithic: it is made up of a huge number of interacting skills, and some of these skills can be found in other species to varying degrees.

But humans are the only ones that intentionally communicate with a learned system of symbols that have been agreed upon by a community. And humans are the only animals that live in this niche, with this type of social structure and life strategies. It's not superior, because 'superior' is a subjective designation and is in this context meaningless, like discussing which communication system is more beautiful.

For my research I'm looking at the relationship between theory of mind and language. Theory of mind is also a trait that is rare and possibly unique to the human species. The two traits scaffold each other's development and are required for each other to work properly. There are a number of developments required before either theory of mind or language can emerge - and it's the lack of these prerequisites, which haven't been evolutionarily adaptive and so haven't developed in other species, that explains why other species can't be taught to speak and why we can't be taught their communication systems. So it's a much more complicated story than language being just 'communication plus complexity', or our system minus a little complexity giving you 'cat'.

I don't find the term 'language faculty' helpful at all - I suggest tossing it away and things might be a bit easier to talk about!
Title: Re: Man vs. Beast
Post by: Daniel on May 24, 2014, 06:35:13 AM
jkpate, that's an interesting point. However, wouldn't that mean that we'd at least be able to understand cat? Maybe we would inherently speak it in a way that is too complicated for other animals to understand, but a simpler system would be within what we would understand, as part of understanding the more complex system. I think.

Quote from: freknu
To say that humans are superior because we ended up with one trait over another is, well, childish. Oh, and dare I say, circular.
I'd agree with you there. I've never understood the need to keep making and adjusting such claims in light of how frequently they're falsified either :)

Quote from: Cory
But humans are the only ones that intentionally communicate with a learned system of symbols that have been agreed upon by a community.
What's unique to humans is the arbitrariness of the sign? I don't think that's true. The details are not something I'm particularly familiar with, but I think some other species use arbitrary signs. For example, apes have been taught to use bits and pieces of (arbitrary) sign languages, and prairie dogs have different dialects in different locations.

To phrase the question another way, then, why can't humans speak cat?

Quote
All species have special things that other species don't do - that is why they have their own niche and adaptive pressures resulted in them speciating in the first place. Humans, like other animals, have unique traits.
But then why do we assume that there is some special trait that is inherently better than the traits of all other species that allows us to use Language?

The impression I get from Chomskian approaches to language evolution is that there was a time when humans communicated like other animals, and then there was a genetic mutation (he says Merge) that allowed humans to have Language. And that missing linguistic link is all that separates us from animals and all that critically supports our ability to speak. Its presence and development are deduced from the "fact" that we clearly have a more evolved communication system, and so forth.
I'm questioning those assumptions.

If all species (or some?) just communicate "differently", then there's no reason to assume a basic logical/mathematical difference between the systems; rather, they may differ in complex ways, not just one better system replacing an inferior one.
Title: Re: Man vs. Beast
Post by: freknu on May 24, 2014, 06:50:16 AM
One thing that I always come to think of in arguments like this is that other animals don't require language to the degree that humans do. Whether there is a biological/neurological aspect of human language, or whether other animals lack it, is irrelevant — other animals do not need language like we do.

If any other species developed the same need for communication, what is there to say that they wouldn't develop something similar to human language, or that it couldn't be analysed similar to human language?

These are quite big assumptions being made, all hinging upon some axiom that human language is "special".

If you compare pinhole eyes to vertebrate eyes, one could say that the pinhole eyes are more primitive, but if you compare fish eyes to human eyes, which one is more primitive?

And do all animals need vertebrate eyes? Why would vertebrate eyes be special?
Title: Re: Man vs. Beast
Post by: Daniel on May 24, 2014, 08:19:27 AM
Quote
One thing that I always come to think of in arguments like this is that other animals don't require language to the degree that humans do. Whether there is a biological/neurological aspect of human language or whether other animals lack this, is irrelevant — other animals do not need language like we do.
Humans don't necessarily need to be able to fly. But that doesn't mean that we happen to have a dormant ability to fly, even though we don't use it.
While I agree that there may be issues of whether species need language or not, we can't assume that all have the potential for it. It may develop specifically when a species does need it, for one thing.

Quote
If any other species developed the same need for communication, what is there to say that they wouldn't develop something similar to human language, or that it couldn't be analysed similar to human language?
I'd imagine this to be the case, yes. But that suggests that human language is some abstract thing, for example something that could be represented mathematically. As it is, I get the impression that Human Language is viewed as a human-specific trait. I'm not sure WHY that is believed, but it is a popular opinion.

Quote
These are quite big assumptions being made, all hinging upon some axiom that human language is "special".
Indeed. It all comes back to that, and nothing more.

Quote
And do all animals need vertebrate eyes? Why would vertebrate eyes be special?
Squid eyes are said to be incredibly similar to human eyes.
Title: Re: Man vs. Beast
Post by: freknu on May 24, 2014, 08:28:36 AM
Quote from: Daniel
While I agree that there may be issues of whether species need language or not, we can't assume that all have the potential for it.

Of course not!

Quote from: Daniel
It may develop specifically when a species does need it, for one thing.

Which is why it seems so strange: why judge animal communication based on our form of language and our evolution, when no other animal seems to have developed any similar form of language?

That in itself seems like quite an assumption.
Title: Re: Man vs. Beast
Post by: Daniel on May 24, 2014, 09:13:23 AM
Quote
Which is why it seems so strange: why judge animal communication based on our form of language and our evolution, when no other animal seems to have developed any similar form of language?
Seems to line up with what Con Slobodchikoff says on the subject-- judge animal communication systems within the context of the animals' own behavior, not by comparison to ours.
Title: Re: Man vs. Beast
Post by: jkpate on May 24, 2014, 06:36:54 PM
Quote from: Daniel
jkpate, that's an interesting point. However, wouldn't that mean that we'd at least be able to understand cat? Maybe we would inherently speak it in a way that is too complicated for other animals to understand, but a simpler system would be within what we would understand, as part of understanding the more complex system. I think.

I don't think so. If a learner assumes that there is hierarchical structure when there is not hierarchical structure, then the learner could produce systematically wrong parses. If accurate parses are necessary for understanding, then such a learner would not understand the system. Whether this happens would depend on exactly how the learner is set up (whether it has a strong bias for having branches in both directions, for example) and what the data look like (in terms of amount and the likelihood of spurious hierarchical regularities).
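To make "systematically wrong parses" concrete (again a hypothetical sketch of mine, not jkpate's actual model): a learner biased toward left-branching structure will assign the wrong constituents to data whose true structure is strictly right-branching, i.e. linear.

```python
def left_biased_parse(tokens):
    """A learner that greedily combines the leftmost pair first (left-branching bias)."""
    tree = tokens[0]
    for tok in tokens[1:]:
        tree = (tree, tok)
    return tree

def true_parse(tokens):
    """Ground truth: strictly right-branching (linear) structure."""
    if len(tokens) == 1:
        return tokens[0]
    return (tokens[0], true_parse(tokens[1:]))

toks = ["a", "b", "c"]
print(left_biased_parse(toks))   # (('a', 'b'), 'c')  -- posits a spurious constituent ('a', 'b')
print(true_parse(toks))          # ('a', ('b', 'c'))
```

On every sequence of three or more tokens, the biased learner's parse disagrees with the true structure, so if understanding depends on accurate parses, such a learner would systematically fail to understand the simpler system.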
Title: Re: Man vs. Beast
Post by: MalFet on May 24, 2014, 08:06:38 PM
My four-year-old is pretty convinced that he can speak Cat (or, "Catanese" as he calls it). I am surprised by my inability to come up with persuasive arguments to demonstrate that he can't.

More concretely, do any animal communication systems engage in metapragmatic functions (http://en.wikipedia.org/wiki/Metapragmatics)? I'm not aware of any that do, and it's hard to imagine a more defining feature of human language use (cf. threads like this one).
Title: Re: Man vs. Beast
Post by: Daniel on May 24, 2014, 09:35:46 PM
Quote
More concretely, do any animal communication systems engage in metapragmatic functions? I'm not aware of any that do, and it's hard to imagine a more defining feature of human language use (cf. threads like this one).
Humans are indeed above the beasts, simply because we proclaim that we are. Interesting thought.
Title: Re: Man vs. Beast
Post by: MalFet on May 25, 2014, 01:21:06 AM
Quote from: MalFet
More concretely, do any animal communication systems engage in metapragmatic functions? I'm not aware of any that do, and it's hard to imagine a more defining feature of human language use (cf. threads like this one).
Quote from: Daniel
Humans are indeed above the beasts, simply because we proclaim that we are. Interesting thought.

Maybe, but I'm not sure what that has to do with metapragmatics, so I'm afraid it's not the same as my thought.

(That was a metapragmatic sentence. So was that. And this. :D)
Title: Re: Man vs. Beast
Post by: Daniel on May 25, 2014, 06:11:08 AM
I'm speaking a little more broadly, but I also don't know of any species other than humans that discusses such things. I'm skeptical about that implying any sort of "superiority", but you certainly do seem to be right that metapragmatics (along with other kinds of self-analysis) seems to be lacking in other animals, at least as far as we know.
Title: Re: Man vs. Beast
Post by: MalFet on May 25, 2014, 08:13:25 AM
If non-human communication systems don't allow for metapragmatics, it certainly seems fair to say that human languages are superior tools for metapragmatic discourse. I've never found human languages particularly good at describing the precise location of pollen, but then again our metapragmatic capabilities do allow us to specify bum-waggling protocols on the fly!
Title: Re: Man vs. Beast
Post by: Daniel on May 25, 2014, 12:42:14 PM
Quote
If non-human communication systems don't allow for metapragmatics
Hmm... don't allow? Or aren't used for?
Title: Re: Man vs. Beast
Post by: MalFet on May 25, 2014, 05:46:56 PM
Whichever. If animals don't use metapragmatics, humans are better at metapragmatics.
Title: Re: Man vs. Beast
Post by: Daniel on May 25, 2014, 06:04:36 PM
Right. But not necessarily due to any inherent properties of the system.
Title: Re: Man vs. Beast
Post by: MalFet on May 25, 2014, 07:12:24 PM
To my knowledge, no animal communication systems treat communication itself as an object to be communicated about. All human languages do, and doing so is precisely the thing that makes language function as a cultural fact.

That distinction is both categorical and consequential. Whether it is "inherent" or "whatever-the-opposite-of-inherent-is" seems like so many questions about angels dancing on the heads of pins. How would we even go about measuring inherentness?
Title: Re: Man vs. Beast
Post by: Daniel on May 25, 2014, 09:40:30 PM
Quote
All human languages do, and doing so is precisely the thing that makes language function as a cultural fact.
Proposing a universal, are you? Do all cultures discuss language? I'd imagine at least the Pirahã don't.
Title: Re: Man vs. Beast
Post by: MalFet on May 25, 2014, 09:56:18 PM
Quote from: MalFet
All human languages do, and doing so is precisely the thing that makes language function as a cultural fact.
Quote from: Daniel
Proposing a universal, are you? Do all cultures discuss language? I'd imagine at least the Pirahã don't.

Of course they do. This is an explicit topic across the Pirahã literature, whatever the politics.

Quote
Each evening for eight months my wife would try to teach Pirahã men and women to count to ten in Portuguese. They told us that they wanted to learn this because they knew that they did not understand nonbarter economic relations and wanted to be able to tell whether they were being cheated. After eight months of daily efforts, without ever needing to call them to come for class (all meetings were started by them with much enthusiasm), the people concluded that they could not learn this material, and classes were abandoned. (Everett 2005)
Title: Re: Man vs. Beast
Post by: Daniel on May 26, 2014, 10:04:15 AM
That's a stretch. To me, it appears that they were interested in and discussed the behavior of others (as I would be in their behavior), not metapragmatics.
Quote
They told us that they wanted to learn this because they knew that they did not understand nonbarter economic relations and wanted to be able to tell whether they were being cheated.
Purely behavior and discussing behavior.
It would be interesting if they have a word for "lie", and that would then go against many of Everett's claims, though.

Quote
the people concluded that they could not learn this material, and classes were abandoned.
That's an interpretation. That doesn't mean they metapragmatically discussed the issue. Rather, they gave up on an activity. It's certainly possible they could have metapragmatic discussions on the matter, but the interpretation by Everett doesn't require such discussions.
Title: Re: Man vs. Beast
Post by: MalFet on May 26, 2014, 07:59:46 PM
Quote from: Daniel
That's a stretch. To me, it appears that they were interested in and discussed the behavior of others (as I would be in their behavior), not metapragmatics...Purely behavior and discussing behavior.

Yes, they discussed the verbal behavior of others, particularly with regards to the efficacy of that behavior. That is the textbook definition of metapragmatic discourse.
Title: Re: Man vs. Beast
Post by: Daniel on May 26, 2014, 10:04:51 PM
Quote
That is the textbook definition of metapragmatic discourse.
Does that necessarily include metalinguistic analysis? I guess I'm having a little trouble seeing something that could be seen as behavior (what is accomplished with language) as strictly linguistic, as opposed to discussions about language itself.

Quite a bit of what Everett has said has suggested limited metalinguistic knowledge. For example, there's the anecdote about how he was trying to teach them to write: he showed them how to write the word "sky", and they laughed. He asked why, and they said it was funny because it seemed like he was saying something like their word for "sky".
Title: Re: Man vs. Beast
Post by: MalFet on May 26, 2014, 11:43:23 PM
Quote from: MalFet
That is the textbook definition of metapragmatic discourse.
Quote from: Daniel
Does that necessarily include metalinguistic analysis? I guess I'm having a little trouble seeing something that could be seen as behavior (what is accomplished with language) as strictly linguistic, as opposed to discussions about language itself.

Quite a bit of what Everett has said has suggested limited metalinguistic knowledge. For example, there's the anecdote about how he was trying to teach them to write: he showed them how to write the word "sky", and they laughed. He asked why, and they said it was funny because it seemed like he was saying something like their word for "sky".

I'm not really clear on what distinction you're making between "analysis" and "behavior", and I can't for the life of me figure out how it's relevant to metapragmatics. When people talk about the consequentiality of talk — be they graduate students analyzing syntax trees or isolated villagers laughing at the weird-o spoken behaviors of weird-o foreigners — that's metapragmatic discourse. I'm not sure where you're getting your sense of the term, but it seems to be leading you in inaccurate directions. At the very minimum, metapragmatics is not the same as metalinguistic knowledge...especially not in an academic sense.

So, metapragmatics: Seemingly all humans do it. Seemingly no animals do it. Given the tremendous role that metapragmatic functions play in situating language as a social phenomenon, that seems like a big deal.
Title: Re: Man vs. Beast
Post by: Daniel on May 27, 2014, 12:45:33 AM
Quote
metapragmatics is not the same as metalinguistic knowledge...especially not in an academic sense.
I'm not sure I see any difference (based on how you're using it; I thought I did, and was attempting to use them distinctively), except that, obviously, "metalinguistic" is associated with western/academic approaches to language, but I don't mean to imply that.

Quote
Given the tremendous role that metapragmatic functions play in situating language as a social phenomenon
Can you expand a bit? Why does it matter, beyond simply a behavioral difference? Does this change language? Is this, for example, why we get arbitrary form-meaning pairs?
Title: Re: Man vs. Beast
Post by: MalFet on May 27, 2014, 01:45:25 AM
Quote from: MalFet
metapragmatics is not the same as metalinguistic knowledge...especially not in an academic sense.
Quote from: Daniel
I'm not sure I see any difference (based on how you're using it; I thought I did, and was attempting to use them distinctively), except that, obviously, "metalinguistic" is associated with western/academic approaches to language, but I don't mean to imply that.

I'm having a very hard time following you here. Any talk about the consequentiality of language is metapragmatic discourse. If that's the same as whatever you mean by metalinguistic knowledge, so be it. If it's not, also so be it. I'm not trying to mince definitions with you. But, however you want to configure these terms, what the Pirahã are doing in both of our examples is quintessential, textbook, bread-and-butter, vanilla-with-no-toppings metapragmatic discourse. It's talk about the efficacy of talk. That's all it needs to be. Nothing more, and nothing less.

All that said, I'm only belaboring this Pirahã engagement with metapragmatics because I strongly suspect that your objections are based on a misunderstanding of the term. More generally, I'm not really that interested in hunting for absolutely exceptionless universals. That seems to me an ambition built on faulty understandings of evolution. Even if we were to find some example of people who do not talk about talk (which does not include the Pirahã, at least not from the data both of us have presented), who cares? If 99.lots-of-nines% of humans organize their language in one way and 0% of animals do, why on earth would we call that anything less than a distinctly human property of language?

Quote from: MalFet
Given the tremendous role that metapragmatic functions play in situating language as a social phenomenon
Quote from: Daniel
Can you expand a bit? Why does it matter, beyond simply a behavioral difference? Does this change language? Is this, for example, why we get arbitrary form-meaning pairs?

There have been hundreds of thousands of pages worth of ink spilled on why metapragmatic functions are fundamental to language, and the scope of that literature is vast. I'm not going to try to rehash any of it here, but at absolute barest minimum metapragmatics allows language to be its own agent of change. As a silly example, I can say things like "From now on, whenever I say 'chuchurocketbop' I actually mean 'slice of pizza'. Please pass me a chuchurocketbop." As a less silly example, metapragmatics is what allows humans to expand language (by, say, introducing jargon like "metapragmatic" or "transformational grammar" in an academic paper) at a rate faster than generational evolution. In other words, metapragmatics does to a communication system what Turing completeness does to a computation engine. If that's not a big deal, I don't know what would be!
Title: Re: Man vs. Beast
Post by: Guijarro on May 27, 2014, 09:59:04 AM
We are universally aware, aren't we, that:

(1) Many other living species engage in inter-specific communication.

(2) A lot of other living species have their own languages (whales, dolphins, bees, and whatnot)

What we don't seem to share universally is that:

(3) Most animal languages are the other side of their communicative "coin" (faculty, event, or whatever!)

(4) Human language is definitely not (3); instead, it is the other side of the cognitive coin.

(5) Human cognition is the ability to use formalised representations in lieu of real objects, events, etc. out there, and to do all sorts of things with them.

I am not aware that other species have such an evolved cognitive system. They may point to things out there, like we do, but hardly to things inside their minds (supposing we may describe their brains in that way, which is not at all clear to me).

(6) Humans may, then, communicate complex cognitive states with little or no external reference. They may use (or not) their language to help them in that task.

(7) What is different from other species, then, is not really that linguistic tool, but the faculty to communicate their representations. Other species may communicate their feelings, of course. But the representations of their feelings? ... I would be surprised if they could.

(8) True enough, however, our human linguistic tool has evolved in a very sophisticated way. Does that make it superior to other animal languages? Och, I do not ken! We will have to define superiority as "more complex", if we wish to have that superiority feeling.

(9) It is true that humans have a greater power over the environment than other species, because we are able to amplify our mental representations by communicative processes and use them to engage in dealings with the world.

(10) This power will probably bugger up the world we now live in.

Is that really an evolutionary advantage?

Title: Re: Man vs. Beast
Post by: Daniel on May 27, 2014, 11:57:09 AM
Quote from: MalFet
But, however you want to configure these terms, what the Pirahã are doing in both of our examples is quintessential, textbook, bread-and-butter, vanilla-with-no-toppings metapragmatic discourse. It's talk about the efficacy of talk. That's all it needs to be. Nothing more, and nothing less.
And no animals ever notice or communicate about the fact that other species are communicating or sound funny? No cat has ever meowed because of a loud barking dog? When the definition is so broad, I'm not sure it's uniquely human.

Quote
As a less silly example, metapragmatics is what allows humans to expand language (by, say, introducing jargon like "metapragmatic" or "transformational grammar" in an academic paper) at a rate faster than generational evolution. In other words, metapragmatics does to a communication system what Turing completeness does to a computation engine. If that's not a big deal, I don't know what would be!
Another way to change language faster than generational evolution is simply creative language use. All that is required is some small flexibility in the system, and language can evolve within an individual. For example, I might coin a phrase like "pizzaslice" if for whatever reason I wanted a single concept to refer to "pizza slice"-- doing so is not necessarily metapragmatic, though. That may be a completely natural use of language (where performance influences competence!). In fact, this might be what we do every time we utter a new linguistic form based on our knowledge of our language.

Guijarro, I'm with you for most of your post. A few details:
Quote
(5) Human cognition is the ability to use formalised representations in lieu of real objects, events, etc. out there, and to do all sorts of things with them.
Why don't other animals do this? Bees are a great example.

Quote
They may point to things out there, like we do, but hardly to things inside their minds (supposing we may describe their brains in that way, which is not at all clear to me).
Indeed-- I'm not sure we do that either. I think we merely articulate our perceptions of the world, and that animals do the same. Arguably a dog barking "cat" is expressing a mental state much beyond merely pointing out that there is a cat-- there's excitement as well.

Quote
(6) Humans may, then, communicate complex cognitive states with little or no external reference. They may use (or not) their language to help them in that task.
No reference? Doesn't your approach to language assume a truth-conditional semantics? If so, then that's all based on reference to truth and falsity.
I don't know what it would mean for humans to talk without any reference to the world.
I think what you mean is the general tendency for humans to talk beyond the "here and now" (with the exception of Pirahã, which is why I keep bringing it up). I see this as gradient rather than a strict dichotomy.

Quote
(7) What is different from other species, then, is not really that linguistic tool, but the faculty to communicate their representations. Other species may communicate their feelings, of course. But the representations of their feelings? ... I would be surprised if they could.
When a cat hisses, I think I know what it's feeling. It's not as articulate as a person, but I do believe that the cat is attempting to convey a message to the "listener" and expects the Cooperative Principle to be in effect.

Quote
(8) True enough, however, our human linguistic tool has evolved in a very sophisticated way. Does that make it superior to other animal languages? Och, I do not ken! We will have to define superiority as "more complex", if we wish to have that superiority feeling.
Indeed. And the problem then is that we assume we are superior and then look for the reason why. Unscientific, obviously. So... why all of the conclusions about language being a complex system and such? (It may very well turn out to be the case, but at the moment it seems axiomatic rather than empirical.)

Quote
(9) It is true that humans have a greater power over the environment than other species, because we are able to amplify our mental representations by communicative processes and use them to engage in dealings with the world.

(10) This power will probably bugger up the world we now live in.

Is that really an evolutionary advantage?
Indeed. But egocentrically it's "better", in the same sense that western technology may be viewed as "better" than the resources in other cultures. And therefore, we reach the (unfounded) conclusion that man is better / more complex than beast, and that man has a more complex and unique communication system. I remain skeptical.

The problem with complexity is that measuring it is complex. Without a specific way to operationalize the criteria, we don't have a single number. I know that 5<6, but without a way to measure two systems in a one-dimensional metric for complexity, we can't compare them in such a way. Certainly human communication is complex, but that doesn't necessarily lead to any more conclusions than just that simple observation. We don't know, for example, that humans have one more neural circuit (called Merge?) than animals. We don't know much at all.
Title: Re: Man vs. Beast
Post by: MalFet on May 27, 2014, 09:02:31 PM
Quote from: MalFet
But, however you want to configure these terms, what the Pirahã are doing in both of our examples is quintessential, textbook, bread-and-butter, vanilla-with-no-toppings metapragmatic discourse. It's talk about the efficacy of talk. That's all it needs to be. Nothing more, and nothing less.
And no animals ever notice or communicate about the fact that other species are communicating or sound funny? No cat has ever meowed because of a loud barking dog? When the definition is so broad, I'm not sure it's uniquely human.

Huh? Why would a cat meowing at a barking dog be metapragmatic? Again, that's just not what the term means.

You seem to be conflating structured talk about verbal efficacy (metapragmatics) with mere verbal response to a signal (not metapragmatics). If the cat said something like, "You know dog, you'd be a lot more likely to get your owner's attention with whining than barking" or "Your barking sounds ridiculous because it's very similar to my word for asparagus", that would be metapragmatic. Meowing at barks, however, isn't on its face metapragmatic in the slightest.

Quote
As a less silly example, metapragmatics is what allows humans to expand language (by, say, introducing jargon like "metapragmatic" or "transformational grammar" in an academic paper) at a rate faster than generational evolution. In other words, metapragmatics does to a communication system what Turing completeness does to a computation engine. If that's not a big deal, I don't know what would be!
Another way to change language faster than generational evolution is simply creative language use. All that is required is some small flexibility in the system, and language can evolve within an individual. For example, I might coin a phrase like "pizzaslice" if for whatever reason I wanted a single concept to refer to "pizza slice"-- doing so is not necessarily metapragmatic, though. That may be a completely natural use of language (where performance influences competence!). In fact, this might be what we do every time we utter a new linguistic form based on our knowledge of our language.

Actually, coining a new term in that way would almost certainly be metapragmatic.

I don't know why we're at this impasse, but your objections are based wholly on misappropriations of the term "metapragmatic". I don't usually have this much difficulty explaining the concept, but for some reason I am failing to do so effectively here. For now, however, I'm just repeating myself again. If you're interested in a good explanation of metapragmatics, including why it is important to language function and why it appears to be distinctly human, consider checking out Silverstein's "Limits of Awareness" (http://eric.ed.gov/?id=ED250941).
Title: Re: Man vs. Beast
Post by: Daniel on May 27, 2014, 10:40:55 PM
Quote
You seem to be conflating structured talk about verbal efficacy (metapragmatics) with mere verbal response to a signal (not metapragmatics). If the cat said something like, "You know dog, you'd be a lot more likely to get your owner's attention with whining than barking" or "Your barking sounds ridiculous because it's very similar to my word for asparagus", that would be metapragmatic. Meowing at barks, however, isn't on its face metapragmatic in the slightest.
Regarding the Pirahã discourse described above, I see it as much closer to the "response to stimulus" you describe than what you're calling "metapragmatic". *shrug*

Quote
Actually, coining a new term in that way would be almost certainly be metapragmatic.
To be clear, I meant the coining would be an unconscious, uncontrolled process through natural language use. It would not be something I "decided" to do. My phrasing may have been ambiguous.

I'll take a look at the link.
Title: Re: Man vs. Beast
Post by: MalFet on May 27, 2014, 10:59:35 PM
Regarding the Pirahã discourse described above, I see it as much closer to the "response to stimulus" you describe than what you're calling "metapragmatic". *shrug*

You don't see a significant difference between people saying they want to learn Portuguese because they fear they are being swindled by Portuguese speakers, on the one hand, and a cat meowing at a barking dog, on the other? I find that baffling, to say the least.
Title: Re: Man vs. Beast
Post by: Guijarro on May 28, 2014, 12:48:25 AM
I think that this METAPRAGMATIC concept (if I understand it well enough) is another way to say metapragmatically what I intend to make cognitively manifest about my cognitive/linguistic coin instead of your communicative/linguistic one.
Title: Re: Man vs. Beast
Post by: MalFet on May 28, 2014, 12:53:25 AM
I think that this METAPRAGMATIC concept (if I understand it well enough) is another way to say metapragmatically what I intend to make cognitively manifest about my cognitive/linguistic coin instead of your communicative/linguistic one.

With a few caveats, I think you are absolutely correct.
Title: Re: Man vs. Beast
Post by: Daniel on May 28, 2014, 06:44:48 AM
Quote
You don't see a significant difference between people saying they want to learn Portuguese because they fear they are being swindled by Portuguese speakers, on the one hand, and a cat meowing at a barking dog, on the other? I find that baffling, to say the least.
1. I thought we were discussing noticing the similarities between Everett's utterance (a pronunciation of a written word) and Pirahã, not their desire to learn Portuguese. Is that even established? I didn't know they wanted to learn Portuguese. If so, it goes against a lot of what Everett says about their culture in the here and now! [This is aside from the Brazilian government's creepy intervention by adding buildings and a Portuguese school with a TV!]
2. It's not that I don't see a difference, but that I don't see a categorical one, especially when we don't know the full thought process of the cat. Observing verbal behavior doesn't give us that information. From the behavior itself, I'm not sure it isn't metapragmatic.
3. This all seems behavioral to me, not linguistic. The Pirahã might also want guns. Would that also be classified as metapragmatic?
Title: Re: Man vs. Beast
Post by: jkpate on May 28, 2014, 07:14:46 AM
3. This all seems behavioral to me, not linguistic. The Pirahã might also want guns. Would that also be classified as metapragmatic?

If they want guns as symbols, such as to communicate power, wealth, or political allegiances, then I think it would be meta-pragmatic. If they want guns for non-symbolic purposes, such as to use them to hunt, then that would not be meta-pragmatic on my understanding.
Title: Re: Man vs. Beast
Post by: MalFet on May 28, 2014, 08:21:26 AM
1. I thought we were discussing noticing the similarities between Everett's utterance (a pronunciation of a written word) and Pirahã, not their desire to learn Portuguese. Is that even established? I didn't know they wanted to learn Portuguese.

Are you really not clear on what I'm referring to here? If so, that's frustrating.
http://linguistforum.com/historical-linguistics/man-vs-beast/msg2406/#msg2406

2. It's not that I don't see a difference, but that I don't see a categorical one, especially when we don't know the full thought process of the cat. Observing verbal behavior doesn't give us that information. From the behavior itself, I'm not sure it isn't metapragmatic.

If you can find me an animal behaviorist who suggests that feline communication involves metapragmatics, let's talk. Until then, if we're attributing complex cognition in the absence of evidence, why not just assume the cat is meowing because he's fed up with post-structuralist social theory and its critical obsession with Cartesian epistemology? I know I am!

3. This all seems behavioral to me, not linguistic. The Pirahã might also want guns. Would that also be classified as metapragmatic?

Well, no. Of course not. At least not without the addition of something else...i.e., as jkpate suggests, an argument that "guns make words more powerful", or something like that.

I can't help but feel that you have absolutely no idea what the word "metapragmatics" means. That's fine, of course, and it is indeed a moderately complicated idea, but if you don't know what it means that makes your very strong opinions about what it does and doesn't apply to a bit premature, no?

All I can suggest at this point is that you look at the article I posted. Alternatively, the more recent article "Metapragmatic Discourse and Metapragmatic Function" might be a better overview, but "Limits of Awareness" was the very first introduction of the idea. I can recommend many others if either of those is not to your liking.
Title: Re: Man vs. Beast
Post by: Daniel on May 28, 2014, 10:32:02 AM
Quote
Are you really not clear on what I'm referring to here? If so, that's frustrating.
Quote
They told us that they wanted to learn this because they knew that they did not understand nonbarter economic relations and wanted to be able to tell whether they were being cheated.
As I said earlier, I'm not clear on exactly what that means. For example, I don't know how much interpretation is involved in "want to learn Portuguese". To put it in more technical terms, I'm not sure whether they meant that extensionally or intensionally.
If someone orders a pizza (let's say in Italian) and then has food, someone might say "I want to do that". I'm tentatively (conservatively) suggesting that may be entirely behavioral (extension). You are claiming it is linguistic/metapragmatic (intensional). You might be right, but I don't know that you're right.

Quote
If you can find me an animal behaviorist who suggests that feline communication involves metapragmatics, let's talk. Until then, if we're attributing complex cognition in the absence of evidence, why not just assume the cat is meowing because he's fed up with post-structuralist social theory and its critical obsession with Cartesian epistemology? I know I am!
Again, I don't know. I know little of the field of animal communication (one reason I'm interested in it at the moment). But, for example, Con Slobodchikoff (whose book I've been reading-- that's why I keep mentioning the name) would probably say we can't know until we actually understand their communicative behavior/system from their perspective. And that's all I'm saying-- we don't know.

Quote
Well, no. Of course not. At least not without the addition of something else...i.e., as jkpate suggests, an argument that "guns make words more powerful", or something like that.
Ok, and that would then be metapragmatic, even completely outside the domain of language?

In the end, I'm questioning the strict dichotomy. I don't question that humans are farther along the spectrum than other animals.
As a hypothetical, let's imagine that there is some intermediate level in the spectrum. What would that look like?
It may turn out to be a completely false assumption (that there is some spectrum to consider), but let's falsify it, rather than assuming it doesn't exist.

Just to add another animal communication act to the discussion, consider parrots that imitate humans. They then have a collection of signs including normal bird sounds and some human words like "hello". What I'd wonder is if there is any evidence whatsoever that they distinguish between the two types of signs as a (human) bilingual would, even to a very small extent. Do they say "hello" to other birds? Do they make normal bird noises to humans?
Title: Re: Man vs. Beast
Post by: MalFet on May 28, 2014, 11:50:18 PM
If someone orders a pizza (let's say in Italian) and then has food, someone might say "I want to do that". I'm tentatively (conservatively) suggesting that may be entirely behavioral (extension). You are claiming it is linguistic/metapragmatic (intensional). You might be right, but I don't know that you're right.

I'm not really following you in the slightest. How could any act of speech be entirely extensional (let alone one using deictic shifters like "I" and "that"!)?

You seem to want metapragmatic function to depend on some sort of distinction between concept and behavior but -- again -- that's just not what's at stake. That's just not what it's about. Kripke himself talks about the extensional dimensions of metapragmatics at some length. Heck, that's precisely what his baptismal events are: extensional metapragmatics.

If a person interprets the arrival of pizza as the effect of speaking in some particular way or under some particular set of circumstances, that's a pragmatic interpretation. If that person goes on to talk about wanting to be able to produce that same effect themselves, that's metapragmatic discourse. Full stop. Whatever other conditions you are imposing on the term are just incorrect.

Quote
If you can find me an animal behaviorist who suggests that feline communication involves metapragmatics, let's talk. Until then, if we're attributing complex cognition in the absence of evidence, why not just assume the cat is meowing because he's fed up with post-structuralist social theory and its critical obsession with Cartesian epistemology? I know I am!
Again, I don't know. I know little of the field of animal communication (one reason I'm interested in it at the moment). But, for example, Con Slobodchikoff (whose book I've been reading-- that's why I keep mentioning the name) would probably say we can't know until we actually understand their communicative behavior/system from their perspective. And that's all I'm saying-- we don't know.

And we do understand a great deal about animal communication systems, particularly primate systems. Is it possible that we will one day realize that some animals have language complexity vastly beyond what we currently appreciate, including even metapragmatic functions? Sure. Is it possible that one day we will realize that the moon actually is made out of cheese? Also sure. This room for skepticism is not unique to these particular problems but rather is built into the scientific process itself.

In the meantime, at least, everything we know about animal communication tells us that they don't engage in metapragmatic functions. In the absence of dramatic new findings, that is and will continue to be the state of the art. Unevidenced speculations about what we might discover in the future don't change that.

Quote
Well, no. Of course not. At least not without the addition of something else...i.e., as jkpate suggests, an argument that "guns make words more powerful", or something like that.
Ok, and that would then be metapragmatic, even completely outside the domain of language?

I have absolutely no idea how you could make a statement about the power of words "outside the domain of language". That's just a contradiction in terms on several different levels.

In the end, I'm questioning the strict dichotomy. I don't question that humans are farther along the spectrum than other animals. As a hypothetical, let's imagine that there is some intermediate level in the spectrum. What would that look like? It may turn out to be a completely false assumption (that there is some spectrum to consider), but let's falsify it, rather than assuming it doesn't exist.

Of course there is a wide spectrum of complexity in metapragmatic functions. The article I keep pointing you towards is chiefly about exactly that continuum. But, at minimum and by definition, metapragmatic function requires a speaker to treat the consequentiality of language as itself an object of language. Despite vast quantities of research, there is exactly no evidence that I am aware of to suggest that animals do this. If you happen to find any, I'd be very interested to learn of it.

Just to add another animal communication act to the discussion, consider parrots that imitate humans. They then have a collection of signs including normal bird sounds and some human words like "hello". What I'd wonder is if there is any evidence whatsoever that they distinguish between the two types of signs as a (human) bilingual would, even to a very small extent. Do they say "hello" to other birds? Do they make normal bird noises to humans?

No clue, but whether they do these things or not is irrelevant to their metapragmatic capabilities (or lack thereof).
Title: Re: Man vs. Beast
Post by: MrChiLambda on June 01, 2015, 05:45:10 AM
Vibrations are fairly complex.

A dolphin communicates with clicks, vibrations that are better able to traverse water.

A cat most probably uses tone: emotion expressed through varied pitches. I am unsure whether they are attempting to learn to speak to humans and the language is incomplete, or whether they already have a formal language and are bending it towards us. When watching mother and daughter cats communicate, they tend to be silent and use their eyes.

A dog? Maybe they are similar to cats in the sense that they communicate in other ways, primarily instinct, impulse, and scent, correct? I am unsure whether they have any formal language, or a heightened sense of observation when compared to cats.

Title: Re: Man vs. Beast
Post by: Copernicus on June 09, 2015, 11:05:22 AM
My problem with discussing what other animals with complex brains do and don't think is that our empathic abilities can sometimes fail pretty miserably with other human cultures, and it is much, much harder to put ourselves in the heads of different animal species.  First of all, we need to remember that human language need not be speech-based.  It is at least plausible that our speaking ability evolved in conjunction with equally complex gestural communication.  One major evolutionary advantage of speech over gesture (or any visual mode) is that it works in the dark.

Charles Fillmore once told me that he liked to define language as "word-guided mental telepathy".  Of course, that metaphor works very well in the context of Frame Semantics (http://en.wikipedia.org/wiki/Frame_semantics_%28linguistics%29), but it is a really profound statement about the nature of human communication.  Ultimately, it is about the replication of thought in different minds.  Animals clearly make inferences about the behavior of other animals and things in their environment, and it is really useful for social animals to be able to communicate those inferences across a species by any means at all.  Spoken communication is just a very useful, efficient way of doing that in so many social interactions.  And we are able to transfer spoken communication to other formats, as well, e.g. gesture, writing, touch (touch-typing, braille).

So, why don't we understand "cat"?  The fact is that cats, like humans, have a call system.  They can convey thoughts through sound, and we can learn to understand their calls on some level and even imitate them.  Cats can also understand our behavior on that level.  When we cry, scream, and laugh, I do think that animals we socialize with--cats, dogs, parrots, dolphins--understand that level of communication, at least up to a point.  We do a better job of communicating with other primates, especially other apes.  Do the other apes have what we would call "language"?  That is, can they string "words" together to communicate the finer nuances of their world models?  I think it is still very much an open question how much we can interact with them on that level, but I suspect that they do a much better job of reading the minds of other members of their species than we do of reading theirs.
Title: Re: Man vs. Beast
I hope this thread is still alive. I realize I am jumping on about a year late.

I also want to address your question simply and at face value: you say that, given the assumption that human language is more complex than a beast's language (e.g. a cat's), we should be able to speak to beasts, since we would understand a higher form of their language.

In reality, I don't think you can call a lot of what the animal kingdom uses for communication a "language". Without looking up the word, I say that language is a form of communication that aids its communication with symbols, either phonetically or graphically or gesturally, and those symbols follow some sort of syntax. Language is also GREATLY supported by other forms of communication that we share with nearly all animals that have a social element, such as expressing emotion through our tone of voice and posture.

So when you ask why we can't "speak cat", I would say we can pretty well, if you mean communicate with cats. I have a dog and I know that I can fairly reliably communicate to him when I am ready to play by adopting a certain posture, beating my hands on the ground, and tossing his toy around. I am communicating with my dog and I believe we both understand each other (to some degree), which I find beautiful and really motivating. But at the same time, I'm not "speaking dog" because I don't think there is a "dog language."

Please excuse any poor form in my response. I am new to the forum environment. If you see any formatting or structural ways I can improve, please let me know :). And thanks for the thought-provoking question!

Matthew
Title: Re: Man vs. Beast
Post by: panini on July 06, 2015, 09:14:43 AM
I say that language is a form of communication that aids its communication with symbols, either phonetically or graphically or gesturally, and those symbols follow some sort of syntax.
I hope you are not intentionally saying that the purpose of language is communication, since that is incorrect. Communication is a possible use of language, as is obfuscation; but the purpose of language is cognition.
Title: Re: Man vs. Beast
Post by: Daniel on July 06, 2015, 10:38:31 AM
Quote from: panini
but the purpose of language is cognition.
Why are you assuming there is some purpose to language, and not just effects of language? Evolution doesn't have goals, just results.
(And the idea that language=cognition rather than language=communication is controversial; further, isn't obfuscation a kind of communicative goal?)

Matthew, welcome to the forum!

Of course you're right on some level. But is it that simple? You seem to be approaching it as a subset relationship where your communication with dogs is a subset of how you communicate with humans. (Obviously it must be: you are a human, and therefore you are only capable of human things. So any communication with other species is a subset of what we're capable of.) But that doesn't mean the dogs can't also do other things. One obvious case is smell, which I assume you don't communicate with as effectively as dogs.
Whether it's a "language" may just be definitional, but it still seems complicated to me.
In the end, it seems that language is circularly defined as "human language", with open questions as to things like whether it's associated more with cognition or communication.

And still, why is there [assumed to be] a sharp distinction between the two categories? That's what puzzles me. I may agree with you that dogs don't have language, but I can't exactly express why in testable terms. (I do know they don't speak English or any other human language, though, but that's not very informative.)
Title: Re: Man vs. Beast
Post by: panini on July 06, 2015, 06:13:59 PM
Why are you assuming there is some purpose to language, and not just effects of language?
You're right, that is not literally true; I don't mean to imply that there is a purpose, I was simply harmonizing my wording with the statement that "language is a form of communication", which is also not literally true. Language is a cognitive system that can be used for all sorts of cognitive purposes.

I don't know of a definition of "communication" whereby not conveying information is communication, and the idea that language=communication is also controversial, thus we can't avoid controversy. Since cognition subsumes communication (and not vice versa), my claim is that the nature of language is broader than the popular "language is communication" position.
Quote
In the end, it seems that language is circularly defined as "human language", with open questions as to things like whether it's associated more with cognition or communication.
I disagree with the premise that a definition is necessary. Take a natural term like "dog" – how do you define "dog"? You can say "Dog is defined as the things called 'dog'", which would indeed be circular, or you can semi-circularly define it as "canis familiaris" (which is defined as "dog"). Anybody can declare that they define a given term however they say they define it, so I can define "language" as a motor vehicle that gets at least 32 MPG. In ordinary use, "language" refers to the things that people speak, and doesn't refer to the noises that cats or cows make. I grant that there are sectors of the population which use "language" to refer to anything systematic, but we have the right to correct people when they make confused statements, whether they be about "the language of dance" or "the language of genetics", or weird ideas about "primitive languages".

I also don't see that it is an open question as to whether language is associated with cognition or communication -- I think it is uncontroversial that it is associated with both. Perhaps you mean something more specific than "associated with".

Title: Re: Man vs. Beast
Post by: Daniel on July 07, 2015, 09:51:52 AM
Quote
I also don't see that it is an open question as to whether language is associated with cognition or communication -- I think it is uncontroversial that it is associated with both. Perhaps you mean something more specific than "associated with".
What I mean is that others would completely disagree with you. It's a controversial topic, more than an open question: most people seem to have an answer, but that answer differs, with each perspective having a very strong view.
Title: Re: Man vs. Beast
Post by: panini on July 07, 2015, 09:36:34 PM
Quote
I also don't see that it is an open question as to whether language is associated with cognition or communication -- I think it is uncontroversial that it is associated with both. Perhaps you mean something more specific than "associated with".
What I mean is that others would completely disagree with you. It's a controversial topic, more than an open question: most people seem to have an answer, but that answer differs, with each perspective having a very strong view.
I still don't understand the nature of the controversy or disagreement that you are pointing to. I am utterly unaware of there being any controversy over the proposition that communication is a specific instance of the concept cognition. What exactly is it that you think is controversial or not obviously true about my position, which is, specifically, that language is fundamentally a tool for cognition, and that communication is a sub-case of cognition?
Title: Re: Man vs. Beast
Post by: Daniel on July 07, 2015, 10:32:21 PM
It seems to me that a lot of people link language directly to communication, while others (perhaps more) make the argument you're making. That's all.
Title: Re: Man vs. Beast
Post by: Guijarro on July 08, 2015, 03:49:57 AM
“It is often pointed out that, thanks to their grammars and huge lexicons, human languages are incomparably richer codes than the small repertoire of signals used in animal communication.

Another striking difference –but one that is hardly ever mentioned– is that human languages are quite defective when regarded simply as codes. In an optimal code, every signal must be paired with a unique message, so that the receiver of the signal can unambiguously recover the initial message. Typically, animal codes (and also artificial codes) contain no ambiguity. Linguistic sentences, on the other hand, are full of semantic ambiguities and referential indeterminacies, and do not encode at all many other aspects of the meaning they are used to convey. This does not mean that human languages are inadequate for their function. Instead, what it strongly suggests is that the function of language is not to encode the speaker’s meaning, or, in other terms, that the code model of linguistic communication is wrong. (pg.332).

[…]

The human mind is characterized by two cognitive abilities with no real equivalent in other species on Earth: language and naive psychology (that is, the faculty to represent the mental state of others). […] It is because of the interaction between these two abilities that human communication was able to develop and acquire its incomparable power. From a pragmatic perspective, it is quite clear that language faculty and human languages with their richness and flaws, are only adaptive in a species that is already capable of naive psychology and inferential communication. The relatively rapid evolution of languages themselves, and their relative heterogeneity within one and the same linguistic community –we see these two features as linked– can only be adequately explained if the function of language in communication is only  to provide evidence of the speaker’s meaning, and not to encode it.

In these conditions, research on the evolution of language faculty must be closely linked to research on the evolution of naive psychology”. (pg.338).

(Sperber, Dan & Gloria Origgi (2012): “A pragmatic perspective on the evolution of language” in Wilson, Deirdre & Dan Sperber (2012): Meaning and Relevance, Cambridge, C.U.P. )

Title: Re: Man vs. Beast
Post by: Daniel on July 08, 2015, 06:20:58 AM
I've always found that argument (regarding imperfections in human communication) to be very confusing, for two reasons:

1. It may just be inherent that, given the complexity of our message and use in the real world (rather than, say, transferring a file by email with machines that are inherently accurate for every bit of information-- literally), such imperfections are just natural. Nothing "human" about it, just part of having a complex code.

2. It is, I think, a distraction from the real issue: it confuses interpretation and information, in that no code is fully unambiguous regarding interpretation, and no code is fully explicit. For example, the data stream for an audio file of the human voice has exactly the same ambiguities and underspecifications as that human voice, plus more when the channel is noisy and certain sounds are less clear.

So we must distinguish between the code itself (the combination of sounds->words being transmitted from one individual to another) and their purpose in communication (where these imperfections really arise).

There are of course some legitimate cases of ambiguity in the code itself. But this is just an effect of a natural evolution of the code by trial and error, where some messages happen to look alike. They are also rarely truly ambiguous in context and with intonation. Furthermore, this ambiguity is an effect of linearization of the signal more than anything.

The reason it (that is, ambiguity) doesn't come up for animals is that their codes aren't that complicated (at least as far as we understand them), but remember that they are certainly underspecified quite often ("danger" -- of what?).
Title: Re: Man vs. Beast
Post by: Guijarro on July 08, 2015, 09:26:17 AM
Sperber & Origgi do not talk about imperfect communication, as far as I understand their text. On the contrary, they think that the underdetermination of the linguistic code when trying to represent the speaker's meaning favours human freedom to use other means to arrive at the speaker's intention in a richer and more accurate communicative process. What they claim is that linguistic coded material couldn't have evolved and developed at such a pace if coding/decoding were the only process involved in human communication (in their paper they give an extensive example of how this could work in reality to account for the constant changes of human linguistic codes, which, understandably, do not show up in other species' codes). Our languages are as complex and rich as they are, in comparison with other animal codes, precisely because changing elements of the code does not hinder the achievement of accurate communication. Animal codes are so unchangeable (and unable to expand) because they are the only means some species have to communicate, and a mismatch is counterproductive and may hinder communication.

ASIDE: They don't mention it, but I believe that we are able to understand some of the messages sent by our pets for we use our naive psychology to arrive at some kind of "mind reading" and, therefore, become aware of some of their intentions. The more I hear people talking about their pets (for instance, Malfet's son with his cat), the more I think this process is taking place in their relationships.

But I may be wrong, of course.
Title: Re: Man vs. Beast
Post by: Copernicus on July 14, 2015, 06:31:31 PM
I'm not at all sure why one would conclude that animal communication is unambiguous or less ambiguous than human language.  If language evolved to facilitate replication of a train of thought--an understanding of the intentions of other animals--then it may well be that the more limited modes of expression available to non-human animals are open to far more interpretations.  The human "call system"--cries, laughter, screams, etc.--is certainly open to lots of different interpretations.  For example, crying can indicate sorrow, but also relief.  Human language gives us the advantage that it can be far more precise about thought content in different contexts than a limited set of calls, postures, and expressions can.
Title: Re: Man vs. Beast
Post by: jkpate on July 16, 2015, 11:24:54 AM
“It is often pointed out that, thanks to their grammars and huge lexicons, human languages are incomparably richer codes than the small repertoire of signals used in animal communication.

Another striking difference –but one that is hardly ever mentioned– is that human languages are quite defective when regarded simply as codes. In an optimal code, every signal must be paired with a unique message, so that the receiver of the signal can unambiguously recover the initial message. Typically, animal codes (and also artificial codes) contain no ambiguity. Linguistic sentences, on the other hand, are full of semantic ambiguities and referential indeterminacies, and do not encode at all many other aspects of the meaning they are used to convey. This does not mean that human languages are inadequate for their function. Instead, what it strongly suggests is that the function of language is not to encode the speaker’s meaning, or, in other terms, that the code model of linguistic communication is wrong.” (p. 332)

It depends on what you mean by "optimal" here. One definition could be no errors, but another could be an acceptable error rate. Formally, we can understand the error rate by considering the conditional entropy of the message $M$ given the signal $S$, denoted $H(M | S)$. If the signal is completely unambiguous, $H(M | S) = 0$ and there is no risk of an error. If there are two equally likely messages for $S$, then $H(M | S) = 1$ bit and there is a risk of an error. If there are two possible messages but one is much more likely, then $H(M | S) < 1$ bit. The Noisy Channel Theorem (https://en.wikipedia.org/wiki/Noisy-channel_coding_theorem) uses this conditional entropy to bound what codes can achieve, both for codes with a pre-specified error rate greater than 0 and for codes with an error rate arbitrarily close to zero. A non-zero error rate might well be tolerable if it is easy to recover from the errors (indeed, this is how lossy compression algorithms (https://en.wikipedia.org/wiki/Lossy_compression), such as JPEG and MP3, manage to provide such exceptional compression).
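To make the entropy arithmetic concrete, here is a minimal Python sketch of those three cases. The cat-signal distributions are invented purely for illustration:

```python
from math import log2

def conditional_entropy(joint):
    """H(M | S) for a joint distribution given as {(signal, message): prob}."""
    p_s = {}
    for (s, m), p in joint.items():
        p_s[s] = p_s.get(s, 0.0) + p
    return -sum(p * log2(p / p_s[s]) for (s, m), p in joint.items() if p > 0)

# Unambiguous code: one message per signal -> 0 bits of residual uncertainty
h0 = conditional_entropy({("meow", "feed-me"): 1.0})

# Two equally likely messages for one signal -> exactly 1 bit
h1 = conditional_entropy({("meow", "feed-me"): 0.5, ("meow", "let-me-out"): 0.5})

# Two messages, one much more likely -> less than 1 bit (about 0.47)
h2 = conditional_entropy({("meow", "feed-me"): 0.9, ("meow", "let-me-out"): 0.1})
```

The function just computes $-\sum p(s,m)\log_2 p(m|s)$; any joint distribution over signal-message pairs can be dropped in.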

Moreover, many potential ambiguities are ruled out by the real world context -- let's call it $C$. By a general property of entropy, the conditional entropy of the message given the signal and the context is less than or equal to the entropy of the message given the signal alone: $H( M | S, C ) \leq H( M | S )$, with equality iff the message and the context are conditionally independent given the signal. Presumably for natural language they are not: in a given context, some interpretations of an utterance are more likely than others (just as some utterances are more likely in some contexts than others, i.e. $P(S | C) \neq P(S)$). So, for natural language, an information-theoretic approach entails that isolated utterances are more ambiguous than situated utterances.
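The conditioning inequality can be checked numerically on a toy joint distribution over context, signal, and message (all probabilities invented): the ambiguous signal "bank" usually means 'riverside' in an outdoor context and 'institution' in financial talk.

```python
from math import log2

# Invented joint distribution P(context, signal, message)
joint = {
    ("river",   "bank", "riverside"):   0.45,
    ("river",   "bank", "institution"): 0.05,
    ("finance", "bank", "institution"): 0.45,
    ("finance", "bank", "riverside"):   0.05,
}

def H_M_given(cond_vars):
    """H(M | cond_vars), where cond_vars is a subset of ("c", "s")."""
    num, den = {}, {}  # marginals P(cond, m) and P(cond)
    for (c, s, m), p in joint.items():
        k = tuple({"c": c, "s": s}[v] for v in cond_vars)
        num[k + (m,)] = num.get(k + (m,), 0.0) + p
        den[k] = den.get(k, 0.0) + p
    return -sum(p * log2(p / den[k[:-1]]) for k, p in num.items() if p > 0)

h_s  = H_M_given(("s",))       # signal alone: 1 bit of ambiguity
h_sc = H_M_given(("s", "c"))   # signal plus context: about 0.47 bits
```

Here the isolated utterance leaves a full bit of uncertainty, while the situated one leaves roughly half that, exactly as $H(M|S,C) \leq H(M|S)$ requires.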

This is all just to say that the concerns you raise do not challenge the "language as code" view, and to show how an information-theoretic approach provides a natural treatment.
Title: Re: Man vs. Beast
Post by: Guijarro on July 19, 2015, 09:46:37 AM
Your argument seems impressive --at least for a fellow like me who becomes dizzy when I see formulae in a text. As I am a perfect nullity in information theory, I could not make heads or tails of your last posting. But I am also a member of another forum concentrating on Relevance Theory, where I wrote asking for help deciphering (and, if possible, responding to) your text.

Here is what I got:

if I understand it correctly, your interlocutor might have a point in saying that, since (if only because of channel noise) there is no such thing in reality as an absolutely unambiguous signal, the optimality of a code is a gradual (sc. statistical) notion in code theory; from their point of view, this would make an "ideal" code a straw man.

Where this could be countered, I think, is the introduction of "the real world context" C as a factor in disambiguation. As shown by Sperber & Wilson, the context is not given, but "chosen". And this choice is not a matter of decoding (even in a more sophisticated sense of decoding); moreover, in metaphor, irony or ad hoc concepts, this choice may even make us override linguistic code rather than disambiguate it.

In fact, there has been some work in information science, esp. with respect to information retrieval systems, on the problem of "relevance" and the need to include context and users' knowledge and preferences in the definition of relevance. So possibly an overly simple code model could even be challenged on its home ground?
(Jan Straßheim)
Title: Re: Man vs. Beast
Post by: jkpate on July 27, 2015, 11:49:46 PM
Could you clarify how the Sperber and Wilson notion of context selection enters into the argument? My understanding of context selection in Relevance Theory is that listeners select features from the current environment and their background knowledge, but the set of features from which they select is fixed (when interpreting a given utterance). The conditional entropy $H(M|S,C)$ similarly will be sensitive to only those features $c_i$ of $C$ that are not statistically independent of the message given the utterance. For example, if a listener's background knowledge can be represented as a (potentially infinite) causal graph, only those features of the context that are not d-separated (http://www.andrew.cmu.edu/user/scheines/tutor/d-sep.html) from the meaning by the features of the current situation will change $H(M|S,C)$.
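For what it's worth, the d-separation point can be checked on the smallest possible example. In a chain $X \to Y \to Z$, $X$ is d-separated from $Z$ given $Y$, so conditioning on $X$ in addition to $Y$ leaves the entropy of $Z$ unchanged. A Python sketch with made-up probability tables:

```python
from math import log2
from itertools import product

# Chain X -> Y -> Z with invented conditional probability tables
P_x   = {0: 0.3, 1: 0.7}
P_y_x = {0: {0: 0.8, 1: 0.2}, 1: {0: 0.4, 1: 0.6}}    # P(y | x)
P_z_y = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.25, 1: 0.75}}  # P(z | y)

joint = {(x, y, z): P_x[x] * P_y_x[x][y] * P_z_y[y][z]
         for x, y, z in product((0, 1), repeat=3)}

def H_Z_given(cond_vars):
    """H(Z | cond_vars), where cond_vars is a subset of ("x", "y")."""
    num, den = {}, {}  # marginals P(cond, z) and P(cond)
    for (x, y, z), p in joint.items():
        k = tuple({"x": x, "y": y}[v] for v in cond_vars)
        num[k + (z,)] = num.get(k + (z,), 0.0) + p
        den[k] = den.get(k, 0.0) + p
    return -sum(p * log2(p / den[k[:-1]]) for k, p in num.items() if p > 0)

h_y  = H_Z_given(("y",))        # conditioning on Y alone
h_xy = H_Z_given(("x", "y"))    # adding the d-separated X changes nothing
h_x  = H_Z_given(("x",))        # X alone, by contrast, leaves more uncertainty
```

Only variables that are not d-separated from $Z$ by the conditioning set move the conditional entropy, which is the formal sense in which $H(M|S,C)$ is sensitive only to relevant features of $C$.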

Under this view, I understand this context selection business as saying that finding all non-d-separated nodes is computationally intractable, and listeners employ a variety of accessibility heuristics to find most of the most important ones. Is that inconsistent with your understanding?

Title: Re: Man vs. Beast
Post by: Copernicus on July 28, 2015, 12:35:15 PM
I can't speak for Guijarro, but I can try to express how I understood his point.  It might help to consider Michael Reddy's conduit metaphor of language (https://en.wikipedia.org/wiki/Conduit_metaphor), which he developed in the 1970s.  Basically, the idea is that language is thought of as a "pipe" through which information flows.  It is packaged or encoded at one end and decoded at the other.  This metaphor has dominated thinking about language for a very long time, and it is very powerful.  Formal languages tend to conform to it.  So to extract information from a linguistic signal, all one has to do is simply decode the content contained in it.  Information theory is essentially about signal processing, not natural human language processing.  The conduit metaphor is misleading for human language, but not for formal symbolic systems.

An alternative metaphor--one that Charles Fillmore once expressed to me--is that language is "word-guided mental telepathy".  That is, it is thought replication that uses linguistic expressions as keys to unlock associative clumps of information.  So, if you look at his FrameNet (https://framenet.icsi.berkeley.edu/fndrupal/) approach to semantics, the "frames" represent clumps of conceptual associations that one can then map words onto.  So the semantic structure of a sentence is not actually structured like the linguistic signal, but the linguistic signal evokes it through association with frames.  Frames then structurally represent the information that the speaker is trying to communicate.  So I suppose that that is one way of looking at what you call "accessibility heuristics".  However, there could be non-linguistic information that also contributes to the slot-filling activity of assigning roles to entities.  Formal languages are always literal and compositional in usage, whereas natural language can have layers of non-literal and conventional significance.  The natural linguistic signal is always going to be a defective component of a discourse context.
Title: Re: Man vs. Beast
Post by: jkpate on July 30, 2015, 02:16:34 PM
Hmm, I still don't see how these points argue against the code view. Formal languages may be literal and compositional, but information-theoretically optimal codes often aren't. Arithmetic coding (https://en.wikipedia.org/wiki/Arithmetic_coding), for example, approaches information-theoretic limits and is not compositional.
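As a concrete illustration (a toy float-precision sketch, not a production coder): arithmetic coding narrows an interval by each symbol's probability, so the total code length tracks $-\log_2 P(\text{whole sequence})$ rather than a sum of integer-length codewords for the parts.

```python
from math import log2

def narrow(seq, probs):
    """Narrow [0, 1) by each symbol's probability under a static model."""
    low, high = 0.0, 1.0
    for sym in seq:
        span = high - low
        cum = 0.0
        for s in sorted(probs):  # a fixed symbol order defines the sub-intervals
            if s == sym:
                low, high = low + span * cum, low + span * (cum + probs[s])
                break
            cum += probs[s]
    return low, high

low, high = narrow("aab", {"a": 0.7, "b": 0.3})
width = high - low   # = 0.7 * 0.7 * 0.3 = P("aab")
bits = -log2(width)  # about 2.77 bits for the whole string: no individual
                     # symbol gets its own integer-length codeword
```

Any number inside the final interval identifies the whole string, so the signal is a property of the sequence as a unit rather than a concatenation of per-symbol codes.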

I think Fillmore's metaphor is fully compatible with an information-theoretic approach. Listeners don't just have a model for relating semantic representations to strings; they also have models for relating real world situations to semantics, models of likely real-world situations, models of other talkers, etc. Probability theory provides a natural and mathematically well-grounded language for expressing and testing potential models, and information theory is so profoundly tied up in probability theory that probabilistically sensible behavior is bound to be also information-theoretically sensible.

Maybe probability theory will end up being inadequate, but opponents of the code view are going to need to make much more specific criticisms to convince me.

---

Incidentally, the compositionality of natural language is actually an argument against the view that natural language achieves an information-theoretic optimum (http://jkpate.net/random_words/2014/10/N13/why-does-linguistic-structure-exist/). Briefly, if language is compositional, then the length of the signal $l_{\mathbf m}$ for a message $\mathbf m$ is equal to the sum of the lengths of each component $m_i$ of the message:

$l_{\mathbf m} = \sum_{m_i \in \mathbf m} l_{m_i}$

It turns out that this summation over lengths corresponds to an assumption that the message components are statistically independent from each other (more discussion and derivation at the above link to my blog). In natural language, of course, what we usually consider to be components (words and constructions) are not statistically independent -- a sentence that mentions scrambled eggs is more likely to mention orange juice or coffee. Non-compositional language phenomena provide an opportunity to provide much shorter signals than would be possible in an exclusively compositional approach (think of the difference in length between "United States of America" and "USA"). So, the question then is whether talkers choose non-compositional forms in a way that moves language closer to the information-theoretic optimum, and there's some evidence that they do (e.g. Frank and Jaeger, 2008 (http://www.bcs.rochester.edu/people/fjaeger/papers/frankjaeger08cogsci.pdf)).
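A quick numerical sketch of that independence point, with invented co-occurrence probabilities: when message components are correlated, coding the pair as one unit has a strictly smaller expected length than summing per-component code lengths.

```python
from math import log2

# Invented joint distribution over two message components
p_joint = {("eggs", "coffee"): 0.4, ("eggs", "tax"): 0.1,
           ("law",  "coffee"): 0.1, ("law",  "tax"): 0.4}

# Marginal distributions of each component
p1, p2 = {}, {}
for (a, b), p in p_joint.items():
    p1[a] = p1.get(a, 0.0) + p
    p2[b] = p2.get(b, 0.0) + p

# Expected length of a purely compositional code: H(A) + H(B) = 2 bits here
compositional = sum(p * (-log2(p1[a]) - log2(p2[b]))
                    for (a, b), p in p_joint.items())

# Expected length when coding the pair as one unit: H(A, B), about 1.72 bits
joint_code = sum(p * -log2(p) for p in p_joint.values())
```

The gap between the two expected lengths is exactly the mutual information between the components, which is the room that non-compositional forms like "USA" can exploit.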
Title: Re: Man vs. Beast
Post by: Copernicus on July 30, 2015, 08:42:28 PM
Quote from: jkpate
Hmm, I still don't see how these points argue against the code view. Formal languages may be literal and compositional, but information-theoretically optimal codes often aren't. Arithmetic coding (https://en.wikipedia.org/wiki/Arithmetic_coding), for example, approaches information-theoretic limits and is not compositional.
IMO, the problem is that you still think of the meaning of an expression as somehow fully encoded in the signal.  The point I was trying to make is that it isn't.  The signal is semantically defective, but it contains information that enables the receiver to assemble the meaning, given assumptions made by the sender.  That is, linguistic meanings are essentially inferred from components of the signal, not encoded in it.  My criticism of the information-theoretic approach is that it fully buys into the "conduit metaphor" view of language.  That metaphor seems to hold up at the sentence level, but it ignores the fact that sentences only convey meaning in an assumed context.  However, once you start looking at the level of discourse processing, it breaks down rather quickly.

Quote from: jkpate
I think Fillmore's metaphor is fully compatible with an information-theoretic approach. Listeners don't just have a model for relating semantic representations to strings; they also have a models for relating real world situations to semantics, models of likely real-world situations, models of other talkers, etc.. Probability theory provides a natural and mathematically well-grounded language for expressing and testing potential models, and information theory is so profoundly tied up in probability theory that probabilistically sensible behavior is bound to be also information-theoretically sensible.
I think you are basically agreeing with me that the meaning of linguistic expressions requires context in order for a listener to discover it, but the information-theoretic approach is basically about signal processing, not meaning comprehension.  To understand an expression is essentially to integrate it with one's experiences--what you might call a "world model".  I'm not saying that the problem is computationally impossible, but that it involves a lot more than mere signal processing.  Probabilistic approaches work very well on a gross level for disambiguating word senses primarily because mutual information is an extremely powerful concept.  I think that they have proven their worth in applications such as text mining large amounts of data.

Quote from: jkpate
Maybe probability theory will end up being inadequate, but opponents of the code view are going to need to make much more specific criticisms to convince me.
How does probability theory actually help you generate linguistic structure?  There are two sides to language--production and comprehension.  The best that probability theory can do for you is provide you with a cloud of more or less related words.  How do you assemble those words into structured phrases that can be understood in a given context?   Where does probability help you to decide the quantifier scope?  You can extract meaning from clouds of words in a document, but constructing the document requires knowledge about how to structure the information for a discourse context.  Probabilistic approaches help with certain types of linguistic processing, but they are a dead end when it comes to real text understanding.  In fact, I don't think any approach that relies just on signal processing is scalable.  However, the conduit metaphor is well-ensconced in our thinking about language, so signal processing approaches sound very promising at first blush.

Quote from: jkpate
Incidentally, the compositionality of natural language is actually an argument against the view that natural language achieves an information-theoretic optimum (http://jkpate.net/random_words/2014/10/N13/why-does-linguistic-structure-exist/). Briefly, if language is compositional, then the length of the signal $l_{\mathbf m}$ for a message $\mathbf m$ is equal to the sum of the lengths of each component $m_i$ of the message:

$l_{\mathbf m} = \sum_{m_i \in \mathbf m} l_{m_i}$

It turns out that this summation over lengths corresponds to an assumption that the message components are statistically independent from each other (more discussion and derivation at the above link to my blog). In natural language, of course, what we usually consider to be components (words and constructions) are not statistically independent -- a sentence that mentions scrambled eggs is more likely to mention orange juice or coffee. Non-compositional language phenomena provide an opportunity to provide much shorter signals than would be possible in an exclusively compositional approach (think of the difference in length between "United States of America" and "USA"). So, the question then is whether talkers choose non-compositional forms in a way that moves language closer to the information-theoretic optimum, and there's some evidence that they do (e.g. Frank and Jaeger, 2008 (http://www.bcs.rochester.edu/people/fjaeger/papers/frankjaeger08cogsci.pdf)).
If your assumption is that all of the information necessary to decode the signal is in the signal "pipeline", then you are right about that length metric.  I do not believe that that assumption is correct.  The expression "scrambled eggs" represents a very complex web of associations.  The trick is to get the right set of associations in the given discourse context.  If you base your notion of "discourse context" on just the literal meanings of the word cloud, you are going to miss critical information that relates to an analogical mapping.  That is, you aren't going to be able to handle a fundamental aspect of human language--metaphor.
Title: Re: Man vs. Beast
Post by: Daniel on July 30, 2015, 11:44:21 PM
It seems to me that the idea of "information" in the signal is at least trivially true: something is transmitted and we can quantify that in terms of information. I believe jkpate's point is that we can theoretically optimize over that information at various levels including context. There are some questions of what the best way is, but I don't see a problem with the basic point.
Title: Re: Man vs. Beast
Post by: Copernicus on July 31, 2015, 12:49:14 AM
Quote from: Daniel
It seems to me that the idea of "information" in the signal is at least trivially true: something is transmitted and we can quantify that in terms of information. I believe jkpate's point is that we can theoretically optimize over that information at various levels including context. There are some questions of what the best way is, but I don't see a problem with the basic point.
I'm not really attacking information theory per se.  From my perspective, the problem is that "information" in "information theory" is not the same thing as "meaning" in linguistic semantics.  The criticism that Michael Reddy originally made regarding the conduit metaphor was that much of what a sentence means is not actually contained in the signal.  It is not just a matter of decoding a signal.  Rather, it is "constructed" or inferred from that information (in an information-theoretic sense of "information").  Reddy talks about a "toolmakers paradigm" as an alternative metaphor.  His seminal work actually had a very powerful influence on the development of the rather eclectic school of Cognitive Linguistics, which grounds meaning in embodied cognition (https://en.wikipedia.org/wiki/Embodied_cognition).
Title: Re: Man vs. Beast
Post by: Daniel on July 31, 2015, 01:33:21 AM
But isn't that just a question of what kind of information you are measuring? So a particular information theoretic analysis may very well be incorrect. But can the entire approach of measuring information content in a domain be wrong? We just need to find the right domain.
Title: Re: Man vs. Beast
Post by: jkpate on August 01, 2015, 01:48:18 AM
Quote from: Copernicus
That metaphor seems to hold up at the sentence level, but it ignores the fact that sentences only convey meaning in an assumed context.  However, once you start looking at the level of discourse processing, it breaks down rather quickly.

...

How does probability theory actually help you generate linguistic structure?  There are two sides to language--production and comprehension.  The best that probability theory can do for you is provide you with a cloud of more or less related words.  How do you assemble those words into structured phrases that can be understood in a given context?   Where does probability help you to decide the quantifier scope?  You can extract meaning from clouds of words in a document, but constructing the document requires knowledge about how to structure the information for a discourse context.  Probabilistic approaches help with certain types of linguistic processing, but they are a dead end when it comes to real text understanding.

Probability theory helps with generating structure by using graphical models (https://en.wikipedia.org/wiki/Graphical_model). A graphical model describes relationships between (potentially infinite) random variables. Standard results in probability theory show how to perform inference for the values of some of those variables even if we never observe them. For example, for dependency parsing, we may propose a graphical model that includes a variable for each word, a variable for each potential directed arc between words, and a constraint that only variable configurations that correspond to a tree receive non-zero probability. To parse a particular sentence, or to update our grammar, we "clamp" the word variables to the values of the words of the sentence, and then use probabilistic inference techniques to compute the probability distribution over possible trees given those words or find the tree with the highest probability. This is how the Dependency Model with Valence (http://www.cs.berkeley.edu/~klein/papers/acl04-factored_induction.pdf) works (and subsequent variants). The same basic strategy has since been pursued for CCG (http://nlp.cs.illinois.edu/HockenmaierGroup/Papers/TACL2013/HDP-CCG.pdf) and Tree Substitution Grammar (http://www.jmlr.org/papers/volume11/cohn10b/cohn10b.pdf), and inspired a new kind of grammar called Adaptor Grammars (http://papers.nips.cc/paper/3101-adaptor-grammars-a-framework-for-specifying-compositional-nonparametric-bayesian-models.pdf).
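The "clamp and infer" step can be sketched by brute force for a tiny sentence. To be clear, this is not the actual Dependency Model with Valence, just a hypothetical arc-weight model over three words: we enumerate every head assignment that forms a tree and normalise the scores into a posterior over trees.

```python
from itertools import product

words = ["dogs", "chase", "cats"]   # word indices 1..3; 0 is an artificial ROOT
n = len(words)

# Hypothetical arc weights (head, dependent) -> score; numbers invented
weight = {
    (0, 2): 1.0, (0, 1): 0.1, (0, 3): 0.1,   # ROOT strongly prefers the verb
    (2, 1): 0.9, (2, 3): 0.9,                # the verb takes both nouns
    (1, 2): 0.1, (1, 3): 0.05,
    (3, 1): 0.05, (3, 2): 0.1,
}

def is_tree(heads):
    """True if following head links from every word reaches ROOT (no cycles)."""
    for d in heads:
        seen, h = set(), d
        while h != 0:
            if h in seen:
                return False
            seen.add(h)
            h = heads[h]
    return True

# Score every head assignment that forms a tree, then normalise
scores = {}
for assign in product(range(n + 1), repeat=n):
    heads = {d: assign[d - 1] for d in range(1, n + 1)}
    if any(h == d for d, h in heads.items()) or not is_tree(heads):
        continue
    score = 1.0
    for d, h in heads.items():
        score *= weight.get((h, d), 0.01)  # small default for unlisted arcs
    scores[tuple(sorted(heads.items()))] = score

Z = sum(scores.values())
posterior = {tree: s / Z for tree, s in scores.items()}
best = max(posterior, key=posterior.get)   # ((1, 2), (2, 0), (3, 2)):
                                           # "chase" heads both nouns
```

Real parsers replace the brute-force enumeration with dynamic programming, but the probabilistic content -- a distribution over unobserved structures given clamped words -- is the same.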

There has also been work on graphical models that relate strings to logical forms via unobserved syntactic structure, using CCG (http://www.aclweb.org/anthology/D10-1119) or Hyper-edge Replacement Grammars (http://homepages.inf.ed.ac.uk/s1051107/hrg-lang-mod.pdf) (like a context-free grammar for graphs). I'm not aware of a model that includes a probability distribution over discourse structures, such as those provided by Discourse Representation Theory (https://en.wikipedia.org/wiki/Discourse_representation_theory), but I don't see any reason in principle that would be impossible. It's still just variables (that presumably have an infinite domain) that have various relationships with each other.

Broadly speaking, linguistic theories build structures by selecting reusable components (such as local subtrees and lambda expressions) from a potentially-infinite bag of possible components. Graphical models work exactly the same way, except they also define a probability distribution over different ways to assemble the components. All of the information-theoretic quantities are defined with respect to probability distributions, so any time we have a probability distribution we also have all the potentially-useful information-theoretic quantities.
Title: Re: Man vs. Beast
Post by: Copernicus on August 01, 2015, 09:45:07 AM
But isn't that just a question of what kind of information you are measuring? So a particular information theoretic analysis may very well be incorrect. But can the entire approach of measuring information content in a domain be wrong? We just need to find the right domain.
The point I was trying to make is that context is not actually part of the signal, but information-theoretic approaches are all about signal processing.  They are very useful for many different types of text processing tasks, but they don't really work well for a model of how humans actually process language.  They analyze structure in signals, but they tend not to help us understand how the structure got into the signal or what it is there for in the first place.  The term "information" in "information theory" is really about the transformational processing of data from one form to another.  It is not really about understanding what natural language expressions mean.  For that, you need to have a theory that explains the relationship between thought and language.  Signal processing approaches do not.
Title: Re: Man vs. Beast
Post by: Copernicus on August 01, 2015, 10:49:05 AM
Probability theory helps with generating structure by using graphical models (https://en.wikipedia.org/wiki/Graphical_model). A graphical model describes relationships between (potentially infinite) random variables. Standard results in probability theory show how to perform inference for the values of some of those variables even if we never observe them. For example, for dependency parsing, we may propose a graphical model that includes a variable for each word, a variable for each potential directed arc between words, and a constraint that only variable configurations that correspond to a tree receive non-zero probability. To parse a particular sentence, or to update our grammar, we "clamp" the word variables to the values of the words of the sentence, and then use probabilistic inference techniques to compute the probability distribution over possible trees given those words or find the tree with the highest probability. This is how the Dependency Model with Valence (http://www.cs.berkeley.edu/~klein/papers/acl04-factored_induction.pdf) works (and subsequent variants). The same basic strategy has since been pursued for CCG (http://nlp.cs.illinois.edu/HockenmaierGroup/Papers/TACL2013/HDP-CCG.pdf) and Tree Substitution Grammar (http://www.jmlr.org/papers/volume11/cohn10b/cohn10b.pdf), and inspired a new kind of grammar called Adaptor Grammars (http://papers.nips.cc/paper/3101-adaptor-grammars-a-framework-for-specifying-compositional-nonparametric-bayesian-models.pdf).
OK, but I'm familiar with all of that.  I've worked in Natural Language Processing for a few decades, so I've seen variations on all of those approaches.  Right now, people are very interested in building "hybrid" parsers, which I think is what you are suggesting here.  Language generation is quite a bit more challenging than language analysis, but people have come up with marvelously clever techniques for dialog interactions.  Dialog modeling (which non-computational linguists like to call "discourse modeling") is now a very active area of research, and language generation is a part of that.  Speaking as a computational linguist, I would say that all of these approaches show varying degrees of promise for human-computer linguistic interfaces, but, speaking as a theoretical linguist, I would say that they cannot scale up to a plausible model of linguistic behavior in humans.  And, to be clear, I am talking about a causal model, not a statistical or probabilistic one.

Quote
There has also been work on graphical models that relate strings to logical forms via unobserved syntactic structure, using CCG (http://www.aclweb.org/anthology/D10-1119) or Hyper-edge Replacement Grammars (http://homepages.inf.ed.ac.uk/s1051107/hrg-lang-mod.pdf) (like a context-free grammar for graphs). I'm not aware of a model that includes a probability distribution over discourse structures, such as those provided by Discourse Representation Theory (https://en.wikipedia.org/wiki/Discourse_representation_theory), but I don't see any reason in principle that would be impossible. It's still just variables (that presumably have an infinite domain) that have various relationships with each other.
I am more or less familiar with those approaches.  Always been a fan of Mark Steedman and categorial grammars.  The linguistic work is all very important as a contribution to our understanding of how linguistic signals are structured, and I do see a role for probabilistic approaches in human-computer discourse interactions.  However, if we are interested in more than just simulated discourse, I become more pessimistic that such approaches lead us in a useful direction.

Why do people choose to use the words they do?  The interesting thing about a linguistic expression is that the same expression can be used to convey completely different thoughts in different contexts, but different expressions can be used to convey the same thought in a specific discourse.  You can "canoe across a lake", "cross a lake in a canoe", or "go across a lake with a canoe".  The information content differs slightly in those three expressions in that they aren't interchangeable in all discourse contexts, but they are interchangeable in some.  In some contexts, "the boy died in a fire" is understood to mean that the fire caused his death.  In others, it could just mean that he died of some other cause while in a fire.  The expression itself has no inherent meaning, although it does contain information.  It only means something in a discourse context.

Quote
Broadly speaking, linguistic theories build structures by selecting reusable components (such as local subtrees and lambda expressions) from a potentially-infinite bag of possible components. Graphical models work exactly the same way, except they also define a probability distribution over different ways to assemble the components. All of the information-theoretic quantities are defined with respect to probability distributions, so any time we have a probability distribution we also have all the potentially-useful information-theoretic quantities.
Broadly speaking, I would agree with you.  However, "potentially-useful information-theoretic quantities" raises the question of whether those quantities are useful as explanatory models of human linguistic behavior.  For that, you really need a causal model.  My view is that the causal model is essentially that of causing mental events to take place, i.e. understanding or comprehension.  It is about meaningful exchanges.  The communicative function of language ultimately drives its structural properties, although linguists have shown that one can describe those structural properties while largely ignoring their communicative function.  You can use statistical modeling to predict how likely a person is to use a relative clause, but that doesn't explain why that person uses a relative clause or how it affects the thinking of a listener.
Title: Re: Man vs. Beast
Post by: Daniel on August 01, 2015, 07:35:51 PM
Quote
The point I was trying to make is that context is not actually part of the signal, but information-theoretic approaches are all about signal processing.  They are very useful for many different types of text processing tasks, but they don't really work well for a model of how humans actually process language.  They analyze structure in signals, but they tend not to help us understand how the structure got into the signal or what it is there for in the first place.  The term "information" in "information theory" is really about the transformational processing of data from one form to another.  It is not really about understanding what natural language expressions mean.  For that, you need to have a theory that explains the relationship between thought and language.  Signal processing approaches do not.
Why not include context in "the signal"? The "signal" component is based on a literal transmission over a certain channel (like radio) due to the history of Information Theory (e.g., Shannon's work). There's no reason we need to assume the auditory linguistic information is the only Information in the model. So then the question is, as I suggested earlier, picking the right signal, not whether somehow context is embedded within someone's speech.
Title: Re: Man vs. Beast
Post by: Copernicus on August 01, 2015, 10:14:00 PM
Quote
The point I was trying to make is that context is not actually part of the signal, but information-theoretic approaches are all about signal processing.  They are very useful for many different types of text processing tasks, but they don't really work well for a model of how humans actually process language.  They analyze structure in signals, but they tend not to help us understand how the structure got into the signal or what it is there for in the first place.  The term "information" in "information theory" is really about the transformational processing of data from one form to another.  It is not really about understanding what natural language expressions mean.  For that, you need to have a theory that explains the relationship between thought and language.  Signal processing approaches do not.
Why not include context in "the signal"? The "signal" component is based on a literal transmission over a certain channel (like radio) due to the history of Information Theory (e.g., Shannon's work). There's no reason we need to assume the auditory linguistic information is the only Information in the model. So then the question is, as I suggested earlier, picking the right signal, not whether somehow context is embedded within someone's speech.
My response to that is that a signal is something different from the information content that it "contains".  Thoughts are not really transmitted, unless we are talking about real "mental telepathy".  In that case, one might consider brain waves, or some such thing, to be the "signal".  But that isn't the case.  Thought takes place independently of the linguistic signal.
Title: Re: Man vs. Beast
Post by: Daniel on August 02, 2015, 04:58:45 AM
It isn't thought that is transmitted, no. It is linguistic information (e.g., acoustics) and the information regarding context (location of utterance, shared knowledge regarding the history of the conversation, shared goals, etc.).

We might say that the message is the thought, but the signal of course is not. The thought is encoded using both a literal linguistic signal and the information in the context. Taken together, we could see this as the whole linguistic signal: the speaker and hearer share information and thereby have related (but probably not identical) thoughts.
Title: Re: Man vs. Beast
Post by: Copernicus on August 02, 2015, 05:50:14 PM
It isn't thought that is transmitted, no. It is linguistic information (e.g., acoustics) and the information regarding context (location of utterance, shared knowledge regarding the history of the conversation, shared goals, etc.).

We might say that the message is the thought, but the signal of course is not. The thought is encoded using both a literal linguistic signal and the information in the context. Taken together, we could see this as the whole linguistic signal: the speaker and hearer share information and thereby have related (but probably not identical) thoughts.
I used "thought" rather than "meaning", because I wanted to emphasize that thought is not linguistic in nature and does not require language to take place.  Generative semanticists back in the 1970s typically assumed that meaning was linguistically structured (so-called "natural logic").  That assumption failed.  So I don't have a problem with calling it the "message".  Ultimately, though, linguistic structure must somehow be tightly integrated with thought, because its purpose is to communicate thought.  Language is the "RNA" to mental "DNA".

To get back to the original topic here, it seems clear that other animals can think and plan very much like humans.  Broca's aphasia (or motor aphasia--loss of command of parts of the "grammar") does not seem to impair comprehension too seriously.  We can understand people who speak ungrammatically or agrammatically.  However, it is clear that language production strategies necessarily require some command of grammar.  For that reason, I consider linguistic grammars not to be neutral between perception and production strategies, but to be production-biased.  I would say that what generative linguists refer to as "the grammar" is essentially a mental process for producing language--an important component of what Chomsky called "performance".  He was wrong to assume that the purpose of the grammar was to calculate grammaticality intuitions.
Title: Re: Man vs. Beast
Post by: Guijarro on August 05, 2015, 03:47:13 AM
I see that the debate went on and on and my contribution (or rather, Jan's contribution) may be a bit late to make sense. However, this is what he has just written to me today in response to JKpate's answer to his message (a page above this one):

As far as I understand jkpate's point on context selection, I think a number of doubts are raised by findings about relevance.

1. Relevance crucially has to do with changes to one's beliefs (surprises, new information, funniness etc.) rather than with merely conforming to probabilistic frames of what is typically expected.

2. If the model is still meant to be a code model, all speakers and hearers would have to use the same frames of the world around them to construct the relevant contexts (as with a codebook). But relevance is always relevance to an individual, and the "sophisticated understanding" (Sperber) required in adult communication takes differences and changes in individual perspectives into account. I suspect that this is one thing which makes humans different from animals.

3. "Sophisticated understanding" is not just based in processes in individual brains, but in social processes happening between people. So a simulation of what goes on in an individual would not be enough to solve the puzzle.

I don't know if this makes any sense? Personally, I tend to think that the task is ultimately not tractable in an information-theoretic framework or even in a cognitive-science one, but that we need a relevance theory integrated with a social theory (I argued some of this in a Journal of Pragmatics article in 2010).

I think that I am getting lost in the information theory framework, so I would just need to understand (as simply as you may express it --think of me as a moron and try to put some sense into my thick skull as clearly as you can) the point you seem to be making.

For you, I take it, linguistic meaning is exactly the same as the speaker's meaning.

So, for instance,

If I say,

e.1. Here is a school for boys and girls of wealthy parents

There is no ambiguity in the syntactico-semantic meaning of the sentence, and, therefore, there's no need to indulge in non-coded inferencing processes to get MY meaning, which, in this case, is, say, that the school allows all kinds of boys and only those girls whose parents are rich.

Or, suppose I say:

e.2. The beach is full

The "fullness" I am thinking about is not fullness of sand, nor of pets, nor of people, but rather of empty coke bottles, which disgusts me. Are you saying that, to get to this thought of mine, your linguistic decoding is enough?

Suppose, now, I tell you

e.3. Don't you dare!

Do you maintain that the coded meaning of the sentence is enough to make my wish clear, namely, that you don't dare smoke in the hospital, or that you abstain from cheating in an exam? Or thousands of other thoughts that may be UNDER-covered by the use of that linguistic expression at different moments? One does not need to indulge in inferencing processes? Is that what you mean?

But these operations (solving ambiguities, determining scope, and fixing references) are not the only problems to solve with a code model. Take a simple straightforward coded linguistic expression, like, say, I have been to X, and tell me please how you do account for the altogether different consequences you may extract in the following two examples:

e.4.a. I have been to the bar

e.4.b. I have been to the Republic of Congo

How come that in interpreting e.4.a. one assumes that the past is quite near the present, and that you have been not just once or a few times in your life but quite regularly; whereas the reverse is true when interpreting e.4.b?

Now, if you answer me with little formulae, I will be stuck as a non-winged duck in the desert, and will not be able to respond, unless Jan comes to my help again --which will perhaps stretch his patience a bit too much.

You see, I thought it was obvious that semantics (coded organisation of linguistic material) cannot cover the whole range of human mental representations one wants, at one moment or another, to make manifest through a communicative process.

It seems, I was wrong. It is far from obvious to intelligent and dedicated scientists like you seem to be.

I am astonished!

[Afterthought: if this fact is far from obvious, although we indulge in communication processes trillions of times in our life and we can watch what happens DIRECTLY, can we imagine what a debate on other less familiar ideas (i.e., evolution, the existence of god, art, heliocentric systems ... and whatnot) will be? An enormous fuss!]

Title: Re: Man vs. Beast
Post by: Daniel on August 05, 2015, 05:49:55 AM
Quote
You see, I thought it was obvious that semantics (coded organisation of linguistic material) cannot cover the whole range of human mental representations one wants, at one moment or another, to make manifest through a communicative process.
Where did we say it did? I for one do make that distinction. But why can't you measure pragmatic/contextual information as information as well? That's where this seems to be getting controversial, not what semantics and pragmatics refer to.
Title: Re: Man vs. Beast
Post by: jkpate on August 05, 2015, 08:39:18 AM
Where did we say it did? I for one do make that distinction. But why can't you measure pragmatic/contextual information as information as well? That's where this seems to be getting controversial, not what semantics and pragmatics refer to.

I think this is exactly right. Information in information theory is just about ruling out alternatives, and it's certainly possible to define probability distributions, and so codes, over infinite spaces of possible pragmatic interpretations. Guijarro's examples show that there are many possible alternative interpretations for a non-situated utterance, and that the situation provides more information so that situated utterances may be less ambiguous (and may rule out what seems to be the most likely interpretation of the non-situated utterance, such as the one that the beach is full of people). Graphical models provide a language for exploring how these information sources are integrated.

Copernicus may well be right that a graphical modeling approach can't scale to real world situations (but then I would wonder how any approach would succeed at a task that is information-theoretically impossible). It certainly is a difficult task, and the positive case in favor of an information-theoretic view is far from complete. My only gripe is that the negative case has not been made in a serious way.
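A minimal Bayesian sketch of the integration described above, using Guijarro's "the beach is full" example (all the numbers are invented): the situation supplies a prior over interpretations, the utterance supplies a likelihood, and the posterior combines the two, so a change of situation can overturn the most likely non-situated reading.

```python
# Posterior over interpretations: P(i | utterance, situation) ~ prior * likelihood.
# Priors and likelihoods are hypothetical values for illustration only.
def posterior(prior, likelihood):
    unnorm = {i: prior[i] * likelihood[i] for i in prior}
    z = sum(unnorm.values())
    return {i: p / z for i, p in unnorm.items()}

# P(interpretation | situation): situational priors for two situations.
prior_default = {"full of people": 0.7, "full of litter": 0.1, "full of pets": 0.2}
prior_cleanup = {"full of people": 0.2, "full of litter": 0.7, "full of pets": 0.1}

# P("The beach is full" | interpretation): the bare sentence fits each
# interpretation about equally well, so the situation does the work.
lik = {"full of people": 0.5, "full of litter": 0.5, "full of pets": 0.5}

post_d = posterior(prior_default, lik)
post_c = posterior(prior_cleanup, lik)
print(max(post_d, key=post_d.get))  # full of people
print(max(post_c, key=post_c.get))  # full of litter
```

This is the sense in which graphical models provide a language for integrating information sources: each conditional distribution is one edge in the model, and inference rules out alternatives jointly.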
Title: Re: Man vs. Beast
Post by: Guijarro on August 05, 2015, 12:58:25 PM

However, after all my efforts to understand your (apparently) simple arguments, I confess I don't have a hint of what you are claiming. My problem, of course.

Where's the back door, pray, so that I may silently step out of this blatant proof of my intellectual inability, without losing too much face?

Cheers!

Title: Re: Man vs. Beast
Post by: Daniel on August 06, 2015, 09:39:46 AM
Guijarro, very basically, you might think of Information Theory as a theory of decision making. Not about what you'll have for breakfast or where you'll go on your next vacation (though I suppose you could come up with some application like that too), but decisions regarding the information content in a signal.

And likewise, Pragmatics is the study of how humans determine the intended (information) content of an utterance. So it's also a study of decisions.

I'm not claiming anything specific (so if you found that lacking in my posts, you're right). But there's no reason to rule out information theory as a sort of quantified theory of pragmatics.
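A tiny worked example of that "decisions" view (the readings and probabilities are invented): information here is just how many binary decisions remain open about the intended reading, and context reduces that number.

```python
import math

# Shannon entropy in bits: the number of yes/no decisions still needed,
# on average, to pin down the intended reading.
def entropy(dist):
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

# Four equally likely readings of "Don't you dare!" out of context...
before = {r: 0.25 for r in ["smoke", "cheat", "jump", "interrupt"]}
# ...and one near-certain reading once the hospital setting is known.
after = {"smoke": 0.97, "cheat": 0.01, "jump": 0.01, "interrupt": 0.01}

print(entropy(before))  # 2.0 bits
print(entropy(after))   # well under 1 bit
```

The drop in entropy is the quantified version of "the hearer decided what was meant", which is why a quantified theory of pragmatics is not obviously ruled out.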

--

jkpate, regarding some of these more difficult problems, I think the challenge comes from attempting to apply methods that try to solve them rather than just optimize over the possible answers using heuristics. If we could somehow understand the heuristics that the human brain uses, we might come very close to understanding how "language" works.
Title: Re: Man vs. Beast
Post by: Copernicus on August 06, 2015, 12:02:26 PM
Guijarro, very basically, you might think of Information Theory as a theory of decision making. Not about what you'll have for breakfast or where you'll go on your next vacation (though I suppose you could come up with some application like that too), but decisions regarding the information content in a signal.

And likewise, Pragmatics is the study of how humans determine the intended (information) content of an utterance. So it's also a study of decisions.

I'm not claiming anything specific (so if you found that lacking in my posts, you're right). But there's no reason to rule out information theory as a sort of quantified theory of pragmatics.
Insofar as pragmatic information is encoded in a signal.  If some (or most) of the information is not extracted from the signal, then the relevance of an information-theoretic approach becomes less obvious.
Title: Re: Man vs. Beast
Post by: Daniel on August 06, 2015, 04:50:48 PM
Again, what is "the signal"?

The entire point is that context is not some vague unknown completely unassociated with the utterance. It's not part of the acoustic signal transmitted to a listener, no, but it's part of what they receive. If we're talking about syntax or semantics, the signal is obviously the language itself; if we're talking about pragmatics, then the signal is the Information in the world that is relevant to the utterance as well as the utterance itself.
Title: Re: Man vs. Beast
Post by: Copernicus on August 06, 2015, 09:14:19 PM
Again, what is "the signal"?
In the case of language, it is the perceived linguistic medium--acoustic signal (speech), visual signal (signing, writing), touch (braille), etc.  You yourself mentioned "signal" in your prior post, so I would ask you to explain what you think it is.  Do you take private thoughts to be part of the signal?  Are presuppositions part of the signal?  Presuppositions are propositions that must be true in order for a speech act to carry off.  In what sense are they necessarily part of a signal?

Quote
The entire point is that context is not some vague unknown completely unassociated with the utterance. It's not part of the acoustic signal transmitted to a listener, no, but it's part of what they receive. If we're talking about syntax or semantics, the signal is obviously the language itself; if we're talking about pragmatics, then the signal is the Information in the world that is relevant to the utterance as well as the utterance itself.
I'm not sure what you mean by "information in the world".  I don't see information as having an independent existence outside of a mind.  It is interpreted data, which means that there has to be an interpreter for it to exist.  We agree that language is inherently a signal, but this idea that pragmatic knowledge is basically a signal strikes me as very strange.  I tend to think of it as a dynamic model.  I mean "dynamic" in the sense that it is a web of associations that is constantly being changed as new information arrives from sensory signals.  The model itself is not a signal, and communication can only take place to the extent that both the sender and the receiver have overlapping models of reality or shared metaphors.  Human cognition is fundamentally associative--a web of associations.
Title: Re: Man vs. Beast
Post by: Daniel on August 06, 2015, 09:53:48 PM
Quote
In the case of language, it is the perceived linguistic medium
That's a general assumption, but not one implied by the idea of Information Theory. The idea of IT in this case would be to quantify over relevant Information. It would be, as I said, beyond the linguistic signal, to include other kinds of input:
Quote
In what sense are they necessarily part of a signal?
Because they are available to the speaker and hearer (generally speaking), and because they affect the message.

Another very strange way of looking at this would be to see the signal as the language itself and the context as the channel through which it is transmitted, thereby adding noise (which would model reduced information due to implicatures). Then the job of the listener is to factor out the noise of the channel (context) to recover the originally intended message. This would essentially reconstruct what the speaker intended, beyond their original words.

The question is at what level the analysis is being done. Clearly this is beyond just transcribing phonemes or identifying the semantics of the signal.

If you take a look at the second page of this pdf, you might get a sense of how context could be considered "noise", although maybe I'm reaching a bit:
http://worrydream.com/refs/Shannon%20-%20A%20Mathematical%20Theory%20of%20Communication.pdf
Either way, there's no reason we can't consider the signal to include context if needed.

What's interesting, in a way, is that human language may be the only case of a signal that adapts itself to align with a context for efficient communication (so that the whole signal becomes the linguistic signal and the context as well). I can either say "the man with the red hat" or point and say "that man" in order to be more efficient if the context allows.
I don't know if there are other cases of adaptive signals like this. Maybe there are.
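A sketch of the noisy-channel framing above (the referents, gestures, and probabilities are all invented): the listener reconstructs the intended message as the one maximizing P(message) x P(utterance | message, context).

```python
# MAP decoding of the intended message, with context as part of the channel.
# All probabilities are hypothetical values for illustration.
def decode(utterance, context, prior, channel):
    """Return argmax_m P(m) * P(utterance | m, context)."""
    return max(prior, key=lambda m: prior[m] * channel.get((utterance, m, context), 0.0))

prior = {"the man with the red hat": 0.5, "the man by the door": 0.5}

# P(utterance | message, context): a pointing gesture makes the short form
# "that man" a good code for the pointed-at referent; without a gesture,
# the short form is ambiguous between the two.
channel = {
    ("that man", "the man with the red hat", "pointing at red hat"): 0.9,
    ("that man", "the man by the door",      "pointing at red hat"): 0.05,
    ("that man", "the man with the red hat", "no gesture"):          0.3,
    ("that man", "the man by the door",      "no gesture"):          0.3,
}

print(decode("that man", "pointing at red hat", prior, channel))
# -> the man with the red hat
```

On this view the efficiency gain is exactly the usual noisy-channel trade-off: the shorter utterance suffices only when the context makes the channel reliable.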

Quote
I don't see information as having an independent existence outside of a mind.  ...  Human cognition is fundamentally associative--a web of associations.
Correct. The listener receives information from the linguistic signal narrowly defined as well as the context. That information is the input and IT deals with it to identify the message.
Title: Re: Man vs. Beast
Post by: Copernicus on August 07, 2015, 12:34:19 PM
Quote
In the case of language, it is the perceived linguistic medium
That's a general assumption, but not one implied by the idea of Information Theory. The idea of IT in this case would be to quantify over relevant Information. It would be, as I said, beyond the linguistic signal, to include other kinds of input:
Quote
In what sense are they necessarily part of a signal?
Because they are available to the hearer and listener (generally speaking), and because they affect the message.
I think it begs the question to assume that everything that affects a "message" is part of a signal.  Shannon himself talks about the act of communication as a message replication process.  I have no problem at all with his schema, but it really only seems suited for very limited types of signal transmission.  The "message" replicated in an act of linguistic communication is far more complex than in a telephone transmission, and it is not at all clear how one would go about quantifying what we have been loosely calling relevant "context".

Shannon treats messages as something of a black box in his schema--the end points of a communicative act.  It makes a lot of sense when you start out with a good idea of what "message" means, but it is not well defined for a linguistic act.  So, from the perspective of a speech understanding system, one could claim that the goal is to identify word tokens so they can be converted to text and looked up in a dictionary.  We can very successfully convert acoustic speech signals into what ASR researchers call "phonemes" but which bear little or no resemblance to linguistic phonemes.  And we can map those to alphabetic symbols algorithmically and make some pretty good guesses for speech that falls within a range of pronunciations.  What we cannot do is detect the real linguistic phonemes that humans actually perceive in the speech signal.  AFAICT, information theory does not point us in any direction that can account for phonemic hearing.  Phonemes are not detectable from the acoustic signal alone, although they are a filter that is imposed on that signal at a higher level of processing.  Again, I think that the problem with information-theoretic approaches is that they conceive of information as passing through a "channel"--Reddy's "conduit metaphor".  What comes out at the receiving end of the acoustic channel is not "selected" in the straightforward manner that other kinds of signals are.

Quote from: djr33
Another very strange way of looking at this would be to see the signal as the language itself and the context as the channel through which it is transmitted, thereby adding noise (which would model reduced information due to implicatures). Then the job of the listener is to factor out the noise of the channel (context) to recover the originally intended message. This would essentially reconstruct what the speaker intended, beyond their original words.

The question is at what level the analysis is being done. Clearly this is beyond just transcribing phonemes or identifying the semantics of the signal.
I agree, but I do not see how the simplistic communication schema devised by Shannon (and Weaver) helps us there, because they really don't have any theory of semantics to help with the analysis.  Information theory is not about meaning.  It is about encoding and decoding the signals from which meaning is calculated.  The "message" in their schema is just an input to much more complex thought processing in the brain.

Quote from: djr33
If you take a look at the second page of this pdf, you might get a sense of how context could be considered "noise", although maybe I'm reaching a bit:
http://worrydream.com/refs/Shannon%20-%20A%20Mathematical%20Theory%20of%20Communication.pdf
Either way, there's no reason we can't consider the signal to include context if needed.
I'm not quite with you on this.  Maybe I misunderstand what they mean by "noise", but my understanding is that it is something that exists in a channel.  We are talking about processing that goes on where "messages" are theoretically encoded and decoded.

Quote from: djr33
What's interesting, in a way, is that human language may be the only case of a signal that adapts itself to align with a context for efficient communication (so that the whole signal becomes the linguistic signal and the context as well). I can either say "the man with the red hat" or point and say "that man" in order to be more efficient if the context allows.
I don't know if there are other cases of adaptive signals like this. Maybe there are.
I don't think that human language is unique in that way.  Animals think like we do, and they don't actually have our type of linguistic skills.  What you are talking about here is sometimes called "data fusion" in computer science.  You have multiple sources of information that somehow get integrated into a coherent model in a command and control center of some sort.  Data fusion is a huge problem for developing situational awareness in complex social operations--e.g. military maneuvers.  In a way, the same problem occurs at a psychological level.  Somehow, the brain has to assemble a coherent model of its host body's "situation".  And we know that phrase structure encodes a lot of situational information that helps shape the model.  To my way of thinking, phrasal meanings map to parts of the dense associative cloud that creates the model.  So language has to be integrated with data coming in from the other senses, i.e. the peripheral nervous system.
Title: Re: Man vs. Beast
Post by: Daniel on August 07, 2015, 04:54:23 PM
Reasonable reply. I think the open question now is whether IT is viable as an explanation for language. There are various ways to model it (including whether context is included as part of the signal, etc.), and moving the conversation to the next productive stage would require having some results from that. In theory, I'm confident that it is possible to include context within the approach of IT, but I have no info either way on whether it would be useful or work out (though in some sense, it might be inevitable that it would, assuming we could process all the Information well). But it isn't the normal interpretation of the model, no.

(As for phonemes, I know there is research going on that attempts to do automatic speech recognition with models like this and eventually map it onto a linguistic level. There may be intermediate steps, but ruling out IT for that approach is certainly not a given for all researchers.)
Title: Re: Man vs. Beast
Post by: Copernicus on August 07, 2015, 07:58:16 PM
I've had some good exposure to speech recognition and understanding technologies, including productive interactions with some of the luminaries in the field.  I am incredibly impressed with what they have achieved, and I suspect that the field of robotics will continue to spin off ideas and applications that have relevance to theoretical linguistics.

Text processing research has already yielded benefits for theoreticians.  Towards the end of his life, Chuck Fillmore pointed out that one of the AI researchers who had the greatest effect on his thinking was Roger Schank, a very vocal critic of theoretical linguists (albeit less so of generative semanticists).  Having shifted to the field of Natural Language Processing and AI research after 1990, I began to run into Fillmore at Association for Computational Linguistics conferences.  CL had come to hold great interest for him, especially as the major developer of FrameNet (https://framenet.icsi.berkeley.edu/fndrupal/) at Berkeley.

You can see what impressed Fillmore so much in the early work of Schank, Abelson, and others on conversational slot-filler programs.  Schank basically showed that a conversation could not be interpreted coherently without information that existed outside the overt linguistic signal.  For example, he would describe a conversation about a visit to a restaurant in terms of shared "world knowledge" in the background, which Schank implemented as a descriptive list of events, activities, and roles that the conversation would be about.  So he would look at the inferences that would license understanding of narratives like:

1.  John was hungry, so he went to the diner.
2.  The food was so bad that he could not finish.
3.  He left without leaving a tip.

Schank represented the "world knowledge" in terms of a generic script that would allow a computer program to instantiate information from the text and answer questions about the text that depended on the unspoken information in the script.  For example, you can answer the question "Do you think the waiter was happy?", because the script would contain information that waiters served food, received tips, and were happy to receive tips.  The script might also contain information about what tips are and the motivation for leaving them or not leaving them.  So the coherence of those three sentences as a narrative really depends on information that is not in the linguistic signal--for example, no reference whatsoever to a "waiter", even though one could ask questions about the probable state of mind of a waiter from reading that conversation.

This kind of observation really inspired Fillmore.  So he and his colleagues at Berkeley decided to build a repository of more sophisticated "scripts" in terms of recursively-structured networks of "frames" that one could map to words in a lexicon.  He could then describe a "family" of words in terms of an explicit representation of the concepts that tied the words together.  As a lexicologist, he delighted in the power that the idea gave him.  There are plenty of limitations to his approach, but the result is a set of information structures that can actually be used to extract useful information from text.  So FrameNet became one of several useful tools that NLP researchers can use to perform useful information extraction from text.
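A toy slot-filler version of the restaurant script described above (the structure and inference rules here are invented for illustration, not Schank's actual representation): world knowledge outside the linguistic signal is what licenses an answer about a waiter the text never mentions.

```python
# Generic restaurant script: roles, event sequence, and inference rules
# that live in world knowledge, not in the narrative text itself.
RESTAURANT_SCRIPT = {
    "roles": ["customer", "waiter"],
    "events": ["enter", "order", "serve", "eat", "pay", "tip", "leave"],
    # Inferences attached to script slots (hypothetical rules).
    "inferences": {
        ("tip", True):  "waiter happy",
        ("tip", False): "waiter unhappy",
    },
}

def instantiate(narrative_facts):
    """Fill script slots from facts extracted from the text."""
    return {"tip": narrative_facts.get("left_tip", True)}

def answer_waiter_mood(state):
    """Answer a question the text never addresses, via the script."""
    return RESTAURANT_SCRIPT["inferences"][("tip", state["tip"])]

# "He left without leaving a tip." -> left_tip = False
facts = {"left_tip": False}
print(answer_waiter_mood(instantiate(facts)))  # waiter unhappy
```

The question "Do you think the waiter was happy?" is answerable only because the script supplies the waiter, the tipping convention, and the inference, none of which appear in the three sentences of the narrative.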
Title: Re: Man vs. Beast
Post by: Daniel on August 07, 2015, 08:39:43 PM
Have you seen Dan Jurafsky's work on event structure in computational models? I saw an interesting talk a few years ago by him that described, for example, the event structure of a fire-- the initial smoke/flames, the call to emergency services, the arrival of the fire trucks, the news reporters showing up, etc.
Title: Re: Man vs. Beast
Post by: Copernicus on August 07, 2015, 09:55:45 PM
Actually, I've never met Dan, but I keep thinking that I should.  We have a lot of mutual acquaintances.  Anyway, I haven't read the work you are referring to, but I have heard good things about him.  Unfortunately, I no longer have any affiliation that would give me access to academic materials.
Title: Re: Man vs. Beast
Post by: Daniel on August 07, 2015, 09:59:57 PM
You might find a few things on his website:
http://web.stanford.edu/~jurafsky/pubs/lrec2014_ds.pdf

I haven't looked into this for a few years for exactly what is the best overview, but I think there's information around if you're just curious.
Title: Re: Man vs. Beast
Post by: Pon on November 17, 2019, 01:18:12 PM
Interesting thread. I think we could understand 'cat' or any other animal (that uses sounds we can hear, and gestures); to a large degree we already do understand 'cat' when it comes to both gestures and sounds, and dogs as well, partly because both of these animals have evolved alongside us as companions.

If we would live the life of a cat with the needs that a cat has and the understanding that a cat has of what it is to be a cat, then we might understand the language that a cat has as well. Some of the language, I think, is set in the fundamental structure of what it is to be a cat, and we need that foundation to build the language and to understand it.
Title: Re: Man vs. Beast
Post by: cladking on January 27, 2020, 10:37:00 AM
I've finally figured this all out but nobody can believe me.

This is really simple but it flies in the face of everything we believe we know and it wholly contradicts limited areas of what we call "linguistics".  We can't speak "Cat" because the formatting for our language changed in ~2000 BC.  Humans have a somewhat more complex language than other species because there was a mutation 40,000 years ago that tied our speech center more closely to higher brain functions.  But this mutation didn't directly cause more complex language simply because humans are not significantly more "intelligent" than other species.  Rather, people became capable of communicating with one another and the cooperation allowed the advancement of their primitive "science" based on observation and the logical formatting of their animal yet increasingly complex language.

Like ALL ANIMAL language this language was metaphysical.  Words were representative rather than symbolic and meaning was tied to what was already known.  There were no "definitions" since every word had a fixed meaning.  Because the language contained all human knowledge it had to change and adapt as more was learned and it became increasingly complex.  People "think" in language and as theirs became more complex fewer people could master it and new language arose for those who were otherwise tongue-tied.  The new pidgin languages used the exact same vocabulary but there was no longer any tie to reality or the formatting of the four dimensional brain that created language.  You could say literally anything in these modern languages.  Ancient Language obeyed a strict formatting that kept it tied to observation and logic EXACTLY like animal languages.  This is why we must teach English to animals to communicate.  The change from metaphysical to pidgin language is one way.  We can model animal languages but to do so we must understand what they know and how they know it!  We can even decipher some of it to a limited extent and this has been begun with "Prairie Dog".

https://www.cbc.ca/news/technology/prairie-dogs-language-decoded-by-scientists-1.1322230

There is one HUGE difference between animal language and human language besides the fact that we all think in terms of language so animal "thought" is logical and logic can't even be expressed in modern language because all statements can be deconstructed.

This difference is that there are no words for "thought" because animals don't experience consciousness in this way.  There are no words for "belief" because animals have no use for "belief", only knowledge.  There are no taxonomies because animals don't use such terms as mnemonics.  They have other types of mnemonics.   They have no words for reductionism because such ideas lie outside of logic and observation.  Ancient Language breaks Zipf's Law for all of these reasons.

There is a common denominator to these words.  Animals don't use or understand abstraction.  We use abstraction to try to understand animals and then when we do discover a word we start parsing them.  But the difference in formatting between Ancient Language and modern languages means that translation is impossible.  Even the vocabulary is different because the nature of words in the two languages is different.  We can only model and interpret Ancient Languages and other "animal" languages because they are wholly dissimilar to our own but very similar in many ways to each other.