Author Topic: Man vs. Beast  (Read 17271 times)

Offline Copernicus

  • Linguist
  • ***
  • Posts: 60
  • Country: us
    • Natural Phonology
Re: Man vs. Beast
« Reply #75 on: August 06, 2015, 09:14:19 PM »
Again, what is "the signal"?
In the case of language, it is the perceived linguistic medium--acoustic signal (speech), visual signal (signing, writing), touch (braille), etc.  You yourself mentioned "signal" in your prior post, so I would ask you to explain what you think it is.  Do you take private thoughts to be part of the signal?  Are presuppositions part of the signal?  Presuppositions are propositions that must be true in order for a speech act to come off.  In what sense are they necessarily part of a signal?

Quote
The entire point is that context is not some vague unknown completely unassociated with the utterance. It's not part of the acoustic signal transmitted to a listener, no, but it's part of what they receive. If we're talking about syntax or semantics, the signal is obviously the language itself; if we're talking about pragmatics, then the signal is the Information in the world that is relevant to the utterance as well as the utterance itself.
I'm not sure what you mean by "information in the world".  I don't see information as having an independent existence outside of a mind.  It is interpreted data, which means that there has to be an interpreter for it to exist.  We agree that language is inherently a signal, but this idea that pragmatic knowledge is basically a signal strikes me as very strange.  I tend to think of it as a dynamic model.  I mean "dynamic" in the sense that it is a web of associations that is constantly being changed as new information arrives from sensory signals.  The model itself is not a signal, and communication can only take place to the extent that both the sender and the receiver have overlapping models of reality or shared metaphors.  Human cognition is fundamentally associative--a web of associations.
« Last Edit: August 06, 2015, 09:15:59 PM by Copernicus »

Offline Daniel

  • Administrator
  • Experienced Linguist
  • *****
  • Posts: 1581
  • Country: us
    • English
Re: Man vs. Beast
« Reply #76 on: August 06, 2015, 09:53:48 PM »
Quote
In the case of language, it is the perceived linguistic medium
That's a general assumption, but not one implied by the idea of Information Theory. The idea of IT in this case would be to quantify over relevant Information. It would be, as I said, beyond the linguistic signal, to include other kinds of input:
Quote
In what sense are they necessarily part of a signal?
Because they are available to the speaker and hearer (generally speaking), and because they affect the message.

Another very strange way of looking at this would be to see the signal as the language itself and the context as the channel through which it is transmitted, thereby adding noise (which would model reduced information due to implicatures). Then the job of the listener is to factor the channel's noise (context) out of what is received. This would essentially reconstruct what the speaker intended, beyond their original words.

The question is at what level the analysis is being done. Clearly this is beyond just transcribing phonemes or identifying the semantics of the signal.

If you take a look at the second page of this pdf, you might get a sense of how context could be considered "noise", although maybe I'm reaching a bit:
http://worrydream.com/refs/Shannon%20-%20A%20Mathematical%20Theory%20of%20Communication.pdf
Either way, there's no reason we can't consider the signal to include context if needed.
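To make the noise analogy slightly more concrete, here is a minimal sketch of Shannon's quantitative point: as a channel gets noisier, the amount of information it can carry drops. The binary symmetric channel used below is a textbook toy, not anything specific to language; it is only meant to illustrate the "context as noisy channel" framing.

```python
import math

def binary_entropy(p):
    """Entropy H(p) in bits of a binary event with probability p."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def bsc_capacity(p):
    """Capacity (bits per symbol) of a binary symmetric channel with
    crossover probability p.  As noise grows toward p = 0.5, the channel
    carries less and less information about the original message."""
    return 1.0 - binary_entropy(p)
```

A noiseless channel (p = 0) carries a full bit per symbol; at p = 0.5 the output is statistically independent of the input and capacity falls to zero.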

What's interesting, in a way, is that human language may be the only case of a signal that adapts itself to align with a context for efficient communication (so that the effective signal comes to include both the linguistic signal and the context). I can either say "the man with the red hat" or point and say "that man" in order to be more efficient if the context allows it.
I don't know if there are other cases of adaptive signals like this. Maybe there are.
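The "red hat" example can be sketched as a tiny decision rule: pick the shortest expression that still picks out exactly one referent in the current context. The expressions and referent names below are invented for illustration; this is a sketch of the efficiency trade-off, not a model of actual production.

```python
def choose_expression(candidates, context):
    """Return the shortest expression that denotes exactly one referent
    in the current context, or None if no expression is unambiguous.

    candidates: dict mapping an expression to the set of referents it
                could denote in principle.
    context:    set of referents that are currently salient."""
    unambiguous = [expr for expr, refs in candidates.items()
                   if len(refs & context) == 1]
    return min(unambiguous, key=len) if unambiguous else None

# Hypothetical scene: "that man" (with a pointing gesture) could denote
# either man; the longer description denotes only one.
candidates = {
    "that man": {"man_with_red_hat", "man_with_blue_hat"},
    "the man with the red hat": {"man_with_red_hat"},
}
```

With only one man salient, the cheap deictic form wins; with both men present, the speaker must fall back on the longer description.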

Quote
I don't see information as having an independent existence outside of a mind.  ...  Human cognition is fundamentally associative--a web of associations.
Correct. The listener receives information both from the linguistic signal narrowly defined and from the context. That information is the input, and IT's task is to identify the message from it.
Welcome to Linguist Forum! If you have any questions, please ask.

Offline Copernicus

  • Linguist
  • ***
  • Posts: 60
  • Country: us
    • Natural Phonology
Re: Man vs. Beast
« Reply #77 on: August 07, 2015, 12:34:19 PM »
Quote
In the case of language, it is the perceived linguistic medium
That's a general assumption, but not one implied by the idea of Information Theory. The idea of IT in this case would be to quantify over relevant Information. It would be, as I said, beyond the linguistic signal, to include other kinds of input:
Quote
In what sense are they necessarily part of a signal?
Because they are available to the speaker and hearer (generally speaking), and because they affect the message.
I think it begs the question to assume that everything that affects a "message" is part of a signal.  Shannon himself talks about the act of communication as a message replication process.  I have no problem at all with his schema, but it really only seems suited for very limited types of signal transmission.  The "message" replicated in an act of linguistic communication is far more complex than in a telephone transmission, and it is not at all clear how one would go about quantifying what we have been loosely calling relevant "context".

Shannon treats messages as something of a black box in his schema--the end points of a communicative act.  It makes a lot of sense when you start out with a good idea of what "message" means, but it is not well defined for a linguistic act.  So, from the perspective of a speech understanding system, one could claim that the goal is to identify word tokens so they can be converted to text and looked up in a dictionary.  We can very successfully convert acoustic speech signals into what ASR researchers call "phonemes" but which bear little or no resemblance to linguistic phonemes.  And we can map those to alphabetic symbols algorithmically and make some pretty good guesses for speech that falls within a range of pronunciations.  What we cannot do is detect the real linguistic phonemes that humans actually perceive in the speech signal.  AFAICT, information theory does not point us in any direction that can account for phonemic hearing.  Phonemes are not detectable from the acoustic signal alone; rather, they are a filter that is imposed on that signal at a higher level of processing.  Again, I think that the problem with information-theoretic approaches is that they conceive of information as passing through a "channel"--Reddy's "conduit metaphor".  What comes out at the receiving end of the acoustic channel is not "selected" in the straightforward manner that other kinds of signals are.
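The point that the acoustic signal alone underdetermines what is heard can be illustrated with the standard noisy-channel decision rule that ASR systems descend from: choose the transcription maximizing prior times likelihood. The words and numbers below are invented for illustration; the sketch shows only that when the acoustics are ambiguous, knowledge outside the signal (the prior) does the deciding.

```python
def decode(acoustic_likelihood, prior):
    """Toy noisy-channel decoder: choose the transcription w maximizing
    P(w) * P(acoustics | w).  When the acoustic likelihoods are equal,
    the prior -- knowledge outside the signal -- breaks the tie."""
    return max(prior, key=lambda w: prior[w] * acoustic_likelihood.get(w, 0.0))

# Two transcriptions the acoustics (notoriously) cannot distinguish:
likelihood = {"recognize speech": 0.5, "wreck a nice beach": 0.5}
prior = {"recognize speech": 0.9, "wreck a nice beach": 0.1}
```

Here the acoustic term is a tie, so the decision rests entirely on the prior, i.e. on information that was never in the channel.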

Quote from: djr33
Another very strange way of looking at this would be to see the signal as the language itself and the context as the channel through which it is transmitted, thereby adding noise (which would model reduced information due to implicatures). Then the job of the listener is to factor the channel's noise (context) out of what is received. This would essentially reconstruct what the speaker intended, beyond their original words.

The question is at what level the analysis is being done. Clearly this is beyond just transcribing phonemes or identifying the semantics of the signal.
I agree, but I do not see how the simplistic communication schema devised by Shannon (and Weaver) helps us there, because they really don't have any theory of semantics to help with the analysis.  Information theory is not about meaning.  It is about encoding and decoding the signals from which meaning is calculated.  The "message" in their schema is just an input to much more complex thought processing in the brain.

Quote from: djr33
If you take a look at the second page of this pdf, you might get a sense of how context could be considered "noise", although maybe I'm reaching a bit:
http://worrydream.com/refs/Shannon%20-%20A%20Mathematical%20Theory%20of%20Communication.pdf
Either way, there's no reason we can't consider the signal to include context if needed.
I'm not quite with you on this.  Maybe I misunderstand what they mean by "noise", but my understanding is that it is something that exists in a channel.  We are talking about processing that goes on where "messages" are theoretically encoded and decoded.

Quote from: djr33
What's interesting, in a way, is that human language may be the only case of a signal that adapts itself to align with a context for efficient communication (so that the effective signal comes to include both the linguistic signal and the context). I can either say "the man with the red hat" or point and say "that man" in order to be more efficient if the context allows it.
I don't know if there are other cases of adaptive signals like this. Maybe there are.
I don't think that human language is unique in that way.  Animals think like we do, and they don't actually have our type of linguistic skills.  What you are talking about here is sometimes called "data fusion" in computer science.  You have multiple sources of information that somehow get integrated into a coherent model in a command and control center of some sort.  Data fusion is a huge problem for developing situational awareness in complex social operations--e.g. military maneuvers.  In a way, the same problem occurs at a psychological level.  Somehow, the brain has to assemble a coherent model of its host body's "situation".  And we know that phrase structure encodes a lot of situational information that helps shape the model.  To my way of thinking, phrasal meanings map to parts of the dense associative cloud that creates the model.  So language has to be integrated with data coming in from the other senses, i.e. the peripheral nervous system.
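The "data fusion" idea can be sketched with the simplest possible scheme: treat each evidence source as a distribution over the same hypotheses and combine them by pointwise multiplication (naive-Bayes fusion, assuming the cues are independent). The cue values below are invented; real fusion systems are of course far more elaborate.

```python
def fuse(*cues):
    """Naive-Bayes fusion: multiply independent evidence sources over the
    same hypotheses pointwise, then renormalize into a distribution."""
    hypotheses = set().union(*(cue.keys() for cue in cues))
    scores = {h: 1.0 for h in hypotheses}
    for cue in cues:
        for h in hypotheses:
            scores[h] *= cue.get(h, 0.0)
    total = sum(scores.values())
    return {h: s / total for h, s in scores.items()} if total else scores

# The linguistic cue ("that man") is ambiguous between two referents;
# a pointing gesture observed by the hearer is not.
linguistic = {"man_1": 0.5, "man_2": 0.5}
pointing = {"man_1": 0.9, "man_2": 0.1}
```

Neither cue alone is the "signal" in the narrow sense, but the fused distribution over referents is what the hearer's model actually works with.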

Offline Daniel

  • Administrator
  • Experienced Linguist
  • *****
  • Posts: 1581
  • Country: us
    • English
Re: Man vs. Beast
« Reply #78 on: August 07, 2015, 04:54:23 PM »
Reasonable reply. I think the open question now is whether IT is viable as an explanation for language. There are various ways to model it (including whether context is included as part of the signal, etc.), and moving the conversation to the next productive stage would require having some results from one of them. In theory, I'm confident that it is possible to include context within the approach of IT, but I have no info either way on whether it would be useful or work out (though in some sense, it might be inevitable that it would, assuming we could process all the Information well). But it isn't the normal interpretation of the model, no.

(As for phonemes, I know there is research going on that attempts to do automatic speech recognition with models like this and eventually map it onto a linguistic level. There may be intermediate steps, but ruling out IT for that approach is certainly not a given for all researchers.)

Offline Copernicus

  • Linguist
  • ***
  • Posts: 60
  • Country: us
    • Natural Phonology
Re: Man vs. Beast
« Reply #79 on: August 07, 2015, 07:58:16 PM »
I've had some good exposure to speech recognition and understanding technologies, including productive interactions with some of the luminaries in the field.  I am incredibly impressed with what they have achieved, and I suspect that the field of robotics will continue to spin off ideas and applications that have relevance to theoretical linguistics.

Text processing research has already yielded benefits for theoreticians.  Towards the end of his life, Chuck Fillmore pointed out that one of the AI researchers who had the greatest effect on his thinking was Roger Schank, a very vocal critic of theoretical linguists (albeit not so much of generative semanticists).  After I shifted to the field of Natural Language Processing and AI research after 1990, I began to run into Fillmore at Association for Computational Linguistics conferences.  CL had come to hold great interest for him, especially as the major developer of FrameNet at Berkeley.

You can see what impressed Fillmore so much in the early work of Schank, Abelson, and others on conversational slot-filler programs.  Schank basically showed that a conversation could not be interpreted coherently without information that existed outside the overt linguistic signal.  For example, he would describe a conversation about a visit to a restaurant in terms of shared "world knowledge" in the background, which Schank implemented as a descriptive list of events, activities, and roles that the conversation would be about.  So he would look at the inferences that would license understanding of narratives like:

1.  John was hungry, so he went to the diner.
2.  The food was so bad that he could not finish.
3.  He left without leaving a tip.

Schank represented the "world knowledge" in terms of a generic script that would allow a computer program to instantiate information from the text and answer questions about the text that depended on the unspoken information in the script.  For example, a program could answer the question "Do you think the waiter was happy?", because the script would contain information that waiters serve food, receive tips, and are happy to receive tips.  The script might also contain information about what tips are and the motivation for leaving or not leaving them.  So the coherence of those three sentences as a narrative really depends on information that is not in the linguistic signal--for example, there is no reference whatsoever to a "waiter", even though one could ask questions about the probable state of mind of a waiter after reading that narrative.
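The script idea above can be sketched in a few lines. The scene names, role set, and "assumed/negated" bookkeeping here are illustrative simplifications, not Schank's actual notation; the point is only that the inference about the waiter comes from the script, not from the text.

```python
# A toy Schank-style "script": a stereotyped event sequence with roles.
RESTAURANT = {
    "roles": {"customer", "waiter", "cook"},
    "scenes": ["enter", "order", "eat", "pay", "tip", "leave"],
}

def instantiate(script, narrated):
    """Bind narrated events to scenes; scenes the text never mentions are
    assumed to have happened by default (the script's unspoken knowledge)."""
    return {scene: narrated.get(scene, "assumed") for scene in script["scenes"]}

def waiter_happy(state):
    """Inference licensed by the script, not by the text: waiters receive
    tips and are happy to receive them."""
    return state["tip"] != "negated"

# "John was hungry... He left without leaving a tip."
story = {"enter": "stated", "eat": "stated", "tip": "negated", "leave": "stated"}
```

Note that "order" and "pay" are never mentioned in the narrative, yet the instantiated script supplies them, and the question about the waiter's mood is answered from a role the text never names.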

This kind of observation really inspired Fillmore.  So he and his colleagues at Berkeley decided to build a repository of more sophisticated "scripts" in terms of recursively-structured networks of "frames" that one could map to words in a lexicon.  He could then describe a "family" of words in terms of an explicit representation of the concepts that tied the words together.  As a lexicologist, he delighted in the power that the idea gave him.  There are plenty of limitations to his approach, but the result is a set of information structures that can actually be used to extract useful information from text.  FrameNet has thus become one of several practical tools that NLP researchers use for information extraction.
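A frame in this sense can be sketched as a single conceptual structure with named frame elements, shared by a family of lexical units. The frame below echoes Fillmore's classic commercial-event example, but the data structure is a drastic simplification of FrameNet's actual representation, and the word list is illustrative.

```python
# A toy frame in the FrameNet spirit.
COMMERCIAL_TRANSACTION = {
    "name": "Commercial_transaction",
    "frame_elements": {"Buyer", "Seller", "Goods", "Money"},
    "lexical_units": {"buy", "sell", "pay", "charge", "cost", "spend"},
}

def frames_evoked(word, frames):
    """Return the names of the frames whose lexical units include the word."""
    return [f["name"] for f in frames if word in f["lexical_units"]]
```

The payoff is exactly the one described above: "buy", "sell", and "pay" are tied together not by their forms but by the shared frame they evoke, with their differences captured by which frame elements each word foregrounds.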
« Last Edit: August 07, 2015, 08:05:00 PM by Copernicus »

Offline Daniel

  • Administrator
  • Experienced Linguist
  • *****
  • Posts: 1581
  • Country: us
    • English
Re: Man vs. Beast
« Reply #80 on: August 07, 2015, 08:39:43 PM »
Have you seen Dan Jurafsky's work on event structure in computational models? I saw an interesting talk a few years ago by him that described, for example, the event structure of a fire-- the initial smoke/flames, the call to emergency services, the arrival of the fire trucks, the news reporters showing up, etc.

Offline Copernicus

  • Linguist
  • ***
  • Posts: 60
  • Country: us
    • Natural Phonology
Re: Man vs. Beast
« Reply #81 on: August 07, 2015, 09:55:45 PM »
Actually, I've never met Dan, but I keep thinking that I should.  We have a lot of mutual acquaintances.  Anyway, I haven't read the work you are referring to, but I have heard good things about him.  Unfortunately, I no longer have any affiliation that would give me access to academic materials.

Offline Daniel

  • Administrator
  • Experienced Linguist
  • *****
  • Posts: 1581
  • Country: us
    • English
Re: Man vs. Beast
« Reply #82 on: August 07, 2015, 09:59:57 PM »
You might find a few things on his website:
http://web.stanford.edu/~jurafsky/pubs/lrec2014_ds.pdf

I haven't looked into this recently enough to say what the best overview is, but I think there's information around if you're just curious.