Author Topic: Is language really rule-based?/ "Do you miss not having a job?"(2014)*revisited*  (Read 3341 times)

Offline Daniel

  • Administrator
  • Experienced Linguist
  • *****
  • Posts: 2036
  • Country: us
    • English
Quote
It is not that much, but I do feel it is enough for the Linguist's Lounge > Outside of the box forum branch. I do understand that the list of alternatives in a context-free grammar like...
Sure, and I'm not suggesting that you need to "solve" English just to reply here. But what I am saying is that all of that has been done, and it has consistently been met with problems-- specific problems called "data"!

Quote
I proposed a more general solution, which I believe is indeed the final approach, because the next level of grammars – unrestricted grammars – leaves nothing to study: they just describe exactly what they see. But that is indeed the whole of everything. Unrestricted grammars require a full-featured Turing machine for lexical and syntactic analysis.
There are some very interesting studies going on (or really they have been going on for decades) looking at whether context-sensitive grammars are necessary (one area in particular involves complementizers, like the word "that", which happen to agree with the subject in some languages, and things get tricky from there). The point is that this research tends to proceed bottom-up, looking at specific data, although generally in support of the need for a higher formalism of grammar, rather than proving it does not need to exist (which we can't do).

Quote
IMO, if you want to continue improving your study of languages with the formal approach, you either switch to context-sensitive grammars (and become the sixth person in the world who really understands them) or invent something revolutionarily different.
Personally, I prefer to stay pragmatic and not try to develop a universal solution (I know it is impossible) but to solve each concrete task separately. Again, it is not scientific at all, but I believe it is a kind of fun for this section of the forum.
But again, this really has been tried! These aren't new, revolutionary ideas. They're tried, and sometimes true, ideas which don't solve everything.

You're making assertions by hypothesis rather than actually showing that your theory works. Yes, it would be hard to show that. And your ideas are valid as ideas. But unless we leave it at that, you need to take the next step, because the burden of proof is on you to demonstrate that what you're suggesting actually accomplishes something new. It's fine to talk about all of this, and we've done that. But what next? Or was that all you intended?

One of the main frustrations for me in science is the disconnect between top-down theorizing and bottom-up experimentation. But I guess that's the way it needs to be, and we can continue to seek that connection.
Welcome to Linguist Forum! If you have any questions, please ask.

Offline Rock100

  • Linguist
  • ***
  • Posts: 70
> It's fine to talk about all of this, and we've done that. But what next?
> Or was that all you intended?
I am afraid yes, that was all. I am not a professional linguist. It occurred to me once that I could try to solve some of my problems with English phonetics at the expense of highly qualified professional native English speakers. I looked at several forums and chose this very one. Then this topic helped me see whether the formal grammar approach is really used in the linguistics world. I can tell you in return that it is used less and less in the commercial software industry, even for the tasks it suits really well.

> One of the main frustrations for me in science is the disconnect between top-down
> theorizing and bottom-up experimentation. But I guess that's the way it needs to be,
> and we can continue to seek that connection.
I insist that to continue development in the “formal” direction, linguists will need revolutionarily new ideas – no one will ever allow you to use half of the Earth as a computer to model a real language. You, linguists, need to find a way trick the nature and to get more with fewer resources. One of these tricks is the statistical methods (artificial intelligence (AI), neural networks, etc.). But the real breakthrough will most likely come when quantum computers become commercially available. They will incredibly speed up the AI facilities and, more importantly, will allow you to develop and use special “quantum” algorithms to process the data you are talking about (I know nothing about them). This is the near future. If you, linguists, fail even with quantum algorithms, you will have to wait for the times when string-theory-enabled computers appear. Those will be the days!
And personally, I am happy with the fact that I can handle context-free languages like XML with a finite state machine plus a stack of its states. Nobody knows about such a trick any longer.
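As a sketch of that trick (my own minimal illustration, not a full XML parser): a finite state machine that recognizes tags, plus a stack of open elements, is exactly what is needed to check the context-free nesting of an XML-like document.

```python
# A minimal sketch of "FSM plus a stack" for XML-like nesting.
# Not a real XML parser: attributes, text, comments, etc. are ignored.
import re

# One regex plays the role of the finite state machine over tags:
# group 1 = "/" for closing tags, group 2 = tag name, group 3 = "/" for self-closing.
TAG = re.compile(r"<(/?)(\w+)[^>]*?(/?)>")

def well_nested(document):
    stack = []  # the pushdown store that lets the FSM handle arbitrary nesting
    for closing, name, self_closing in TAG.findall(document):
        if self_closing:                       # <br/> does not affect the stack
            continue
        if closing:                            # </name> must match the innermost open tag
            if not stack or stack.pop() != name:
                return False
        else:                                  # <name> opens a new element
            stack.append(name)
    return not stack                           # every open tag must have been closed

print(well_nested("<a><b></b></a>"))  # True
print(well_nested("<a><b></a></b>"))  # False
```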

Offline panini

  • Linguist
  • ***
  • Posts: 197
Hans would probably know of the appropriate sutras of the Ashtadhyayi: it’s probably an interpretation from a later commentator.

It’s an interesting but kind of untestable question whether speakers can freely break literally any generalizations in grammar, but speakers are certainly capable of breaking many generalizations in grammar. Though I should back up a bit and say that grammars have rules, not generalizations. For example, it is a rule-ish of English that we don’t use SOV word order (asterisk about the correct statement of the rule…), but I am able to the verb at the end of the sentence put. And it’s not just me, pretty much everyone can. This is in contrast to the law of gravity, which is literally inviolable. The asterisk about the rule “we don’t use SOV word order” is that that isn’t really the correct statement of the rule, but tech details are not the point. We are actually able to violate fundamental rules of the grammar in phonetics when we speak. Free will allows us to override certain choices, and talking is a matter of choice. I have nothing to say about Exemplar Theory, so that’s not what I’m getting at. I’m getting at the basic competence / performance distinction, that grammar a.k.a. competence underlies language behavior, but it can be overridden by other factors, including cussedness. Again, grammar is not like physical laws.

When we only consider that which is generated by the grammar and exclude mistakes and willful overriding of the grammar, we still have to face the question of “exceptions” and what it means to be an exception. Linguists generally reject the trashy notion of “exception” promulgated by language teachers, who offer a certain heuristic that fails for some complex reason, and they say “that’s just an exception”. At least in phonology there has (had) been a clear understanding of what an exception is: it’s the case where a form should undergo a rule because it meets the structural description, but it does not. Unfortunately, the sloppy language-teacher notion “too complex for me to explain” filtered into phonology, and then there is OT which doesn’t have rules so what could “exception” even mean?

In the classic view, any segment could in principle bear an annotation “[–rule 36]”, meaning that rule 36 would not apply if a term in the substring is so marked. There was a debate over whether such specifications could be introduced by rule, and it was concluded that they should not be. Half a century later, with everything being in balkanized chaos, the arguments against exception features being introduced by rule are not persuasive, and there’s a recent UCSD dissertation by Hout, about exceptions, which I look forward to reading, to see what new light has been shed.
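A toy sketch (my own, with an invented "rule 36") of how such an exception diacritic works in the classic view: a rule applies to every segment that meets its structural description unless that segment carries the "[–rule 36]" mark.

```python
# Classic exception features, sketched. "Rule 36" here is invented for
# illustration: devoice /z/ to /s/. A segment is a (symbol, features) pair,
# and the diacritic "-rule36" blocks the rule on that segment only.

def apply_rule_36(segments):
    out = []
    for symbol, features in segments:
        # structural description: the segment is /z/
        if symbol == "z" and "-rule36" not in features:
            symbol = "s"   # the rule applies
        # a segment marked "[-rule36]" meets the description but is skipped
        out.append(symbol)
    return "".join(out)

regular  = [("d", set()), ("o", set()), ("z", set())]
excepted = [("d", set()), ("o", set()), ("z", {"-rule36"})]
print(apply_rule_36(regular))   # dos  (rule applied)
print(apply_rule_36(excepted))  # doz  (blocked by the exception feature)
```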

My scientific frustration has been with the dearth of top-down experimentation and bottom-up theorizing, but I came to realize that arbitrary stipulation is the biggest problem in science, at least as conducted by linguists. The model where you say “I define X as Y; it follows that Q” is completely wrong-headed, instead one should ask “What actual thing are we talking about, and what are its properties?”.

Unfortunately, it looks like our interlocutor has only a passing interest in the fundamental questions of mental models and grammar. I “get it” from the industrial perspective, that it costs a lot to build a principled model of any language and unfortunately contemporary theory is way too speculative and fluid to be of use in building actual rules. Besides, languages which challenge the dominant statistical approach (i.e. 99% of human languages) are too small to worry about. The real concern, I think, is figuring out some interface between pragmatics and phonetics where voice recognition can figure out that “less” doesn’t make any sense in that context, so the word must have been “less”.

Offline Rock100

  • Linguist
  • ***
  • Posts: 70
> It’s an interesting but kind of untestable question whether speakers can
> freely break literally any generalizations in grammar, but speakers are certainly
> capable of breaking many generalizations in grammar.
Well, you (and Daniel) probably may explain exceptions and errors with the “breaking the rules” approach, but I believe I have just invented (please do not take “invented” seriously) an exception/error handling mechanism for the formal grammars approach. The real grammars of more or less complex languages do not describe them very well. Usually practical grammars describe a subset or (which is more likely) a superset of the language. For example, in my message above I omitted the “to” particle of the infinitive “trick” in the phrase “a way trick the nature”. I bet you had no problem in extending the formally correct English grammar <something1><infinitive><something2> with the incorrect <something1>(<infinitive> | <verb>)<something2> and parsed the phrase successfully. And in this very approach the notion of “breaking the rule” becomes very clear and obvious: the rules are broken if the recipient cannot parse the sentence with either the formal or the extended grammar (you may require taking the semantics of the sentence into account if you will). There are probably no formal grammars at all, and many extended grammars (supersets of the language) – every person uses their own (and the ESL teachers are the champions). Let us talk about syntax only, because although you may understand a sentence that is syntactically acceptable by your extended grammar incorrectly, I believe such scrupulousness would be excessive.
This explanation/proposal is for the formal-grammar mathematical approach only. I do not claim that everything happens this very way in a human mind. It is just an adaptation of the model to handle the errors (not the exceptions; I still believe the exceptions are valid elements of a language and shall be handled by its grammar).
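For what it's worth, the superset idea can be sketched concretely (my illustration; the strict and extended grammars are collapsed into toy regular expressions over one fixed phrase rather than full context-free grammars):

```python
# Sketch of "extended grammar" error handling. The strict pattern demands
# the "to" of the infinitive; the extended pattern also accepts a bare verb,
# so it generates a superset that still parses the erroneous phrase.
import re

STRICT   = re.compile(r"a way to \w+ the nature")     # formal grammar
EXTENDED = re.compile(r"a way (to )?\w+ the nature")  # superset of the language

phrase = "a way trick the nature"   # the "to" was omitted
print(bool(STRICT.fullmatch(phrase)))    # False: the rule is "broken" for the strict grammar
print(bool(EXTENDED.fullmatch(phrase)))  # True: the extended grammar still parses it
```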
> so what could “exception” even mean?
Personally, I think of them as the rules (alternatives) that are used very rarely. Let us suppose we speak about the formal grammars mathematical approach later on. Because it is a statistical approach, it may mean something like the least probable outcome that still fits the language.
> and there’s a recent UCSD dissertation by Hout, about exceptions, which
> I look forward to reading, to see what new light has been shed.
I have set a reminder in my Outlook to somehow (does the English grammar allow such a treatment of the infinitives? BTW, personally, my English does allow it) obtain it. I do believe the exceptions are just the rare rules of the language; they are respected by the language, are understood by its speakers, and are the norm. I really do not understand such big interest in them from linguists. I do understand a programmer might want to keep the exceptions in the cheap memory of a non-unified computer system, but linguists?..
> My scientific frustration has been with the dearth of top-down experimentation
> and bottom-up theorizing,
May I read it vice versa? I do believe the experimenters think with their hands and try to come from the bottom (local uncoordinated results) to the top (the generalization of the results and the explanation of similar problems).
> Unfortunately, it looks like our interlocutor has only a passing interest
> in the fundamental questions of mental models and grammar. I “get it”
> from the industrial perspective, that it costs a lot to build a principled
> model of any language and unfortunately contemporary theory is way
> too speculative and fluid to be of use in building actual rules.
You are absolutely right. I do not have a special education, I do not understand your, linguists’, terminology, I am from a parallel (at least not a perpendicular/orthogonal) world. And I do believe you must be lucky I am with you, because you, linguists, do not have books like “A serious linguistics in 21 days for dummies and complete idiots”. You, linguists, are the champions in funding attraction -- nobody knows what you really do, but such of your applications as speech therapists, schoolteachers, and second-language teachers are so vital that you do not feel any necessity to write such books -- you do not need popularization at all. There are a lot of “couch” analysts (the ones that lie on their couches in front of their TV sets and “study” something) on almost everything in the world: viruses, finance, gas and oil, physics, except for linguistics. I am kind of the only linguistics couch analyst.
Please get me right – I do not complain. I even wish you were less polite and more informative here. I do have fun staying here. And I do my best not to irritate those of you who are here with that. At least not too much.
> Besides, languages which challenge the dominant statistical approach
> (i.e. 99% of human languages) are too small to worry about. The real concern,
> I think, is figuring out some interface between pragmatics and phonetics where
> voice recognition can figure out that “less” doesn’t make any sense in that context,
> so the word must have been “less”.
Unbelievable. It is probably too much Jagermeister, but I feel the strong desire to read it the opposite way. So, I will not do it as I have done with the top-down/bottom-up approaches.
To the point. Speech recognition (the lexical analysis, if you will), syntactic analysis (I cannot figure it out, but let it be present) and semantic analysis (sense in a context, meaning, etc.) are very different tasks nowadays. I do understand your desire to combine them all in a (the “one” does start with a consonant, does it not?) one step, but I still believe it is technically impossible nowadays. I do know about the auto-subtitles feature in YouTube videos, and it does not use (at least) the semantics at all. (Please do not tell anybody, but it is probably much better at English lexical analysis (phonetics) than I am.)

Offline Daniel

  • Administrator
  • Experienced Linguist
  • *****
  • Posts: 2036
  • Country: us
    • English
Thanks for the detailed reply, panini!
Quote
...I’m getting at the basic competence / performance distinction, that grammar a.k.a. competence underlies language behavior, but it can be overridden by other factors, including cussedness. Again, grammar is not like physical laws.
When we only consider that which is generated by the grammar and exclude mistakes and willful overriding of the grammar, we still have to face the question of “exceptions” and what it means to be an exception. ...
So then what of grammaticality? Is it flexible? Can we separate out speech errors from truly grammatical sentences? Are there things we say that are not narrowly grammatical yet still used, effectively and consistently? Grammaticality illusions, for example (yes, rarely used, more just oddly perceived, but I talk about the possibility and some possible examples of used and acceptable grammaticality illusions in my paper linked earlier).

Quote
My scientific frustration has been with the dearth of top-down experimentation and bottom-up theorizing, but I came to realize that arbitrary stipulation is the biggest problem in science, at least as conducted by linguists. The model where you say “I define X as Y; it follows that Q” is completely wrong-headed, instead one should ask “What actual thing are we talking about, and what are its properties?”.
Interesting perspective, especially the first point. I generally take that as a given, although not necessarily unquestionable. I've always enjoyed experimental work (psycholinguistics, etc.), but found it too far removed from the big questions (I'd really like to see more research about, for example, the psychological reality of constituency, something beyond the basics that we teach students in an intro to syntax class, but really getting into the big unanswered questions), and then the big-idea top-down theories always seem to fall flat, at least to my ears, when they make predictions that can quite easily be shown to be false if we just look at enough diverse data.

--
Rock100:
Quote
And I do believe you must be lucky I am with you, because you, linguists, do not have books like “A serious linguistics in 21 days for dummies and complete idiots”. You, linguists, are the champions in funding attraction -- nobody knows what you really do, but such of your applications as speech therapists, schoolteachers, and second-language teachers are so vital that you do not feel any necessity to write such books -- you do not need popularization at all. There are a lot of “couch” analysts (the ones that lie on their couches in front of their TV sets and “study” something) on almost everything in the world: viruses, finance, gas and oil, physics, except for linguistics. I am kind of the only linguistics couch analyst.
There are a few issues in this, probably the most important being that linguists aren't just of a single type. There are so many people who study language in so many ways and from so many different perspectives, including with just about every methodology imaginable. Linguistics isn't really a specific field, but more like the intersection of about 20 inter-disciplinary studies, located somewhere between psychology, sociology, biology, physics, computer science, mathematics, philosophy and humanities. My point is simply that although many linguists may approach things a certain way (and I do think that quite often there is a particular perspective linguists adopt from having looked at language in such detail, sometimes with positive consequences like being open-minded toward the many cultures around the world!), in terms of theoretical insight and new ideas there's really very little that linguists haven't considered, or that someone hasn't suggested to linguists (it's always interesting to witness or be part of a conversation between a linguist and an engineer, for example!).

Additionally, it's worth asking what different linguists (or non-linguists) actually want to answer about language, or what they want to accomplish. The last few posts here have revealed different interests in those replying, which is of course fine, but must also be considered in the points we're each making.
Welcome to Linguist Forum! If you have any questions, please ask.

Offline Rock100

  • Linguist
  • ***
  • Posts: 70
I would like to make an addition – the part I somehow lost yesterday. But this is even better, because there was at least one fault in my assumptions, which I see now (and can fix).
> For example, it is a rule-ish of English that we don’t use SOV word order
> (asterisk about the correct statement of the rule…), but I am able to the
> verb at the end of the sentence put. And it’s not just me, pretty much everyone
> can. This is in contrast to the law of gravity, which is literally inviolable.
I knew the trick with “Arocdnicg to rsceearch at Cmabrigde Uinervtisy, it deosn’t mttaer in waht oredr the ltteers in a wrod are, the olny iprmoatnt tihng is taht the frist and lsat ltteer are in the rghit pcale. The rset can be a toatl mses and you can sitll raed it wouthit pobelrm. Tihs is buseace the huamn mnid deos not raed ervey lteter by istlef, but the wrod as a wlohe.” but I believe your example is much more interesting (at least for me) because it is formal grammar related. At least I can provide the explanation from the formal grammar theory point of view.
I believe that a speaker of English can handle an SOV grammar in the two following ways:
1. He can understand the SOV grammar directly (one would need a lot of practice in it; I do not think it would be possible from scratch, especially for a native speaker).
2. He will use a translating grammar. Here is an example of such a grammar. I will place the actions in square brackets: [do something]
<sentence>:=<s>[emit s]<o>[push o]<v>[emit v, pop]
<s>:=<noun>
<o>:=<noun>
<noun>:= elephant | home
<v>:= goes

Given the sentence “elephant home goes”, we get “elephant” (push “home” onto the stack) “goes” “home” (popped from the stack).
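That translating grammar can be sketched in runnable form (my illustration; the bracketed emit/push/pop actions become stack operations, and the lexicon is exactly the toy one above):

```python
# Sketch of the "translating grammar" above: parse SOV input and emit SVO.
# <sentence> := <s>[emit s] <o>[push o] <v>[emit v, pop]

NOUNS = {"elephant", "home"}   # <noun> := elephant | home
VERBS = {"goes"}               # <v>    := goes

def translate_sov_to_svo(sentence):
    s, o, v = sentence.split()
    if s not in NOUNS or o not in NOUNS or v not in VERBS:
        raise ValueError("sentence does not match the toy grammar")
    stack = []
    output = [s]                 # [emit s]
    stack.append(o)              # [push o]
    output.append(v)             # [emit v]
    output.append(stack.pop())   # [pop]
    return " ".join(output)

print(translate_sov_to_svo("elephant home goes"))  # elephant goes home
```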
I believe that personally I started parsing your example with a kind of translating grammar myself. But since you, I believe erroneously, split the infinitive, it required several additional passes for me to parse the example. I am not sure what really happened – whether I found the alternative grammar for <infinitive><object> as to<object><verb>, or taught myself to understand the to<object><verb> grammar directly. But I bet that if you had made your example as “I am able the verb at the end of the sentence to put”, it would have been much easier for me just to make use of the translating grammar, and I would not have needed additional passes to teach myself a new grammar.
Yesterday I believed there was a third way of handling SOV: through an intermediate representation (SOV --> intermediate representation --> generation of a new sentence with the SVO grammar from the intermediate representation). But today I realized that the ability to build an intermediate representation means that the initial SOV sentence has already been parsed correctly.

> There are so many people who study language in so many ways and from
> so many different perspectives <and so on>
Your points are absolutely valid. The only thing I can argue/extend is that with their bottom-up approach and zillions of combinations of different aspects in different languages, linguists still have the luxury to remain in their own domain of research. For example, it is extremely hard to invent something new even in physics, and poor physicists have to choose adjacent areas for their dissertations (like physics/chemistry (it has become hard nowadays too), physics/sociology, physics/gender science) to minimize the risk of failure. I believe the linguists can just choose a “Multisubjective sentences in modern English” and voilà.

Offline Daniel

  • Administrator
  • Experienced Linguist
  • *****
  • Posts: 2036
  • Country: us
    • English
Quote
The only thing I can argue/extend is that with their bottom-up approach and zillions of combinations of different aspects in different languages...
But again, it's important to be clear: some linguists are top-down theorists, sometimes based on data from a variety of languages, other times just looking at a single language, maybe English. Chomsky, for example, very rarely publishes examples from languages other than English (although when he rarely does so, he is capable and competent), in part because his philosophy is that in essence all languages are basically the same thing just with different pronunciations, and because he's interested in that underlying similarity, he thinks we can study it almost exclusively by looking at English, or any other language as a substitute. Many linguists disagree with that approach, and even he looks at other languages to support his ideas (rarely, as I said, although perhaps more often in his informal study than in what makes a final draft for publication).

Quote
...bottom-up approach and zillions of combinations of different aspects in different languages, linguists still have the luxury to remain in their own domain of research.
To an extent, yes, this is a somewhat representative characterization of the field. Something else interesting is that older research is still relevant: a description of a language from 50 years ago can still be just as valuable (if written well) as a current publication. And some much older publications, like from the 1800s, can still be valuable, depending on several factors and the type of research you're working on.

Quote
For example, it is extremely hard to invent something new even in physics, and poor physicists have to choose adjacent areas for their dissertations (like physics/chemistry (it has become hard nowadays too), physics/sociology, physics/gender science) to minimize the risk of failure. I believe the linguists can just choose a “Multisubjective sentences in modern English” and voilà.
Yes, linguists can sometimes choose just about any topic to work on, but whether it makes a bigger impact in the field is a different question. A broader problem isn't this flexibility, but that there is no clearly best theory. Chomsky was interviewed and asked about the state of linguistics (a few decades ago) and described it as pre-Galilean. And he has a very strong opinion about the right way to approach the field, but there are such strongly varying opinions about theories that there's been relatively little progress in settling any differences between them, or, as I think makes more sense, finding ways to unite the theories to explain language more generally. There have been "wars" about theories over the decades, and no one theory came out as the winner. At certain universities (and to some degree countries, or parts of countries, etc.), yes, but there are other theories out there somewhere else, all still actively researched with extensive publications. Some differ a lot (Generativism vs. Construction Grammar), while others are just slightly different shades of the same color of theory, and if you read that research carefully you'll see just how passionately those theoreticians disagree with each other too.

But all of that really is a reason to recognize that linguistics isn't lacking in new ideas or possible directions, but instead lacking in agreement. What are we trying to do and why? And how? Again, that's probably why the bottom-up approach is more popular. According to some, the broadest top-down theories also don't produce very interesting answers: Chomsky says that language is basically "Merge", an operation combining (words or phrases) A+B to produce a larger phrase "AB", and that's it; or others might say language is just a bunch of statistics. If either of those answers is correct, there's not much left to study at the broad theoretical level. Of course there are still many, many connections to be drawn, like how processing relates to all of this.
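As a rough sketch (mine, not anyone's formalism from this thread) of just how little machinery "Merge" itself involves, combining two words or phrases into a larger phrase:

```python
# Toy sketch of "Merge": one operation that combines two syntactic
# objects (words or phrases) A and B into a larger phrase "AB".

def merge(a, b):
    """Combine two words/phrases into one larger phrase (a binary tree node)."""
    return (a, b)

# Building "read the book" bottom-up by repeated Merge:
dp = merge("the", "book")   # ('the', 'book')
vp = merge("read", dp)      # ('read', ('the', 'book'))
print(vp)
```

The point of the sketch is the claim itself: if this one operation plus a lexicon were really all there is, the broad theoretical level would indeed leave little left to study.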

In the end, I'm not really sure what you're suggesting.
« Last Edit: May 21, 2020, 09:34:50 PM by Daniel »
Welcome to Linguist Forum! If you have any questions, please ask.

Offline poemworld

  • Jr. Linguist
  • **
  • Posts: 38
  • Country: us
    • English
    • Poemworld
My two centavos worth, from Charles Sanders Peirce, bless his soul.
There are three general categories: case, rule, and result. Case is what is. Rule is a general tendency of change. Result is an outcome. Three logics can be made of these categories:
deduction: case + rule = result
induction: case + result = rule
abduction or retroduction: result + rule = case
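A concrete reading of the three schemas (my own mapping onto Peirce's well-known bean-bag example; the rule is "all beans from this bag are white"):

```python
# Peirce's three inferences over one toy rule: case + rule + result.
# rule: beans from this bag are white.

rule = {"from_this_bag": "white"}

def deduction(case):            # case + rule = result
    return rule[case]

def induction(case, result):    # case + result = rule
    return {case: result}

def abduction(result):          # result + rule = case
    return [case for case, r in rule.items() if r == result]

print(deduction("from_this_bag"))            # white
print(induction("from_this_bag", "white"))   # {'from_this_bag': 'white'}
print(abduction("white"))                    # ['from_this_bag']
```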

This provides a context for rule and rules, without which one is chasing one's tail as a rule.
“The basic tool for the manipulation of reality is the manipulation of words. If you can control the meaning of words, you can control the people who must use them.”
Philip K. Dick

Offline panini

  • Linguist
  • ***
  • Posts: 197
TLDR;

I think grammaticality is very rigid: a form is, or it isn’t. So we can distinguish grammatical sentences from ungrammatical ones if we “know the grammar”. Wherein lies the difficulty. I do not put any stock in ungrammatical utterances that are effective (decipherable), but “consistently” is a whole ‘nother kettle of fish. W.r.t. raw data, I attribute more of the facts about data to either performance or crypto-codeswitching (“I can get that in my dialect”) than most theoreticians do, I think. I personally know that in my original dialect, “needs washed” is word salad, but I’ve been exposed to it enough times over the decades that I don’t flinch, and occasionally utter it. When people repeatedly say things that grammatical theories don’t countenance, I am inclined to say that there is an under-reported dialect feature that needs reporting. Before engaging in a line of research to broaden what syntax should allow, I would first ask for some discussion of the methodology of deciding that a fact pattern is due to competence versus performance.

For example, templates controlling the order of prenominal adjectives are, IMO, based on stylistic acceptability judgments and not about something in grammar. There are those who would put that in grammar. Is there any empirical or logical basis for deciding between these positions, and what would that basis say about English noun-determiner order, which I would say is in grammar? There is an infamous problem, stemming from Chomsky’s “Remarks on nominalization”, involving sentences like “Bob’s difficulty to understand makes him a bad teacher”, meaning “The difficulty that one would have in understanding Bob is what makes him a bad teacher”. Many syntacticians including Chomsky declare raising out of a nominalized tough-clause to be ungrammatical, but I and myriad other linguists (usually not syntacticians) find this construction perfectly fine (there’s a subset of speakers where the raised NP is interpreted as lower subject, which I totally don’t get, but it’s a fact). Chomsky has a dismissive footnote to the effect that such people are unaware of a property of their grammar, namely that their grammar doesn’t produce this output, whereas I would say that tough-nominalization deniers are unaware of a property of the English language, namely that this is grammatical in some or all dialects, and just not acceptable to some. This obviously raises the question: how do you know what the grammar actually produces? Perhaps that is why I am a phonologist and field worker. When I do syntax, it is not English and not researcher-introspective, and it is about things with the kind of clarity of determiner-noun order in English.

But this murkiness about what sentences are actually generated by the grammar does not call into serious question the existence of grammar and the distinction between competence and performance. I occasionally propose that by around 1972 we had solved all of English syntax, and have spent the last 50 years trying to tweak the formal model.

I should clarify that I have very little interest in plug-and-chug laboratory approaches to language, because the logical connections to theory are usually too weak for the results to be palatable. The toy-language phonology experiment paradigm is an exemplar of what I consider to be wasted effort. When I advocate bottom-up theorizing, I mean, what are the most fundamental and simple existential claims about language that are best-justified, and then what propositions might we consider adding? For example, in phonology, the existence of segments is one of those well-established claims about phonology that can’t be reasonably denied. Alas, anything can technically be denied, but usually if anyone denies the segment, they are really denying a particular proposition about segments. I also think that features are a fact, that segments are a compositional function of features, so features are existentially more atomic. The bottom-up theorizing approach starts with that which is fundamental w.r.t. what we know, and discovers ontological fundamentals in the course of developing a theory of the epistemological fundamentals.

Top-down experimentation, then, consists of empirical methods that take well-established theoretical propositions, discern under-appreciated predictions, and then test them. Repeatedly. I think we’re doing reasonably well with low-level experiments that test low-level claims, like the duration of long versus short consonants in some language; we’re just not doing well at all in devising experiments that test simple claims like the one that has been lurking in phonology for over a half century: that all “phonetic detail” can be reduced to manipulating scalar values for features on segments (e.g. “[3round]”).

The proposal for a “translating grammar” needs a bit of terminological translating, I would say. Mainly, what’s needed is a change from “grammar” to “?strategy”. Linguists are generally very soft on prescriptive issues: we resolutely refuse to say that any words mean specific things, we always soft-pedal the issue by saying “it depends on how you define X”. That’s not a technical result of linguistic research, it’s a particular philosophical perspective. I hate metaphor in science, because it makes it so much harder to figure out if we’ve learned anything (since we don’t know if a person is using “feature” in an established technical sense or in some extended metaphorical sense). So I would not call that set of cognitive operations which allow English speakers to parse “elephant home goes” a grammar, but I also have a more pointed reason for not doing so, namely that I want to figure out in what way those strategies differ from what is in a grammar. In similar fashion, I hate this idea of the grammar of music, but I totally agree with the goal of finding the common features of music vs. language and also discerning in what ways they are different.

I officially propose that linguistics undertake a line of research into word-salad parsing. The basic axiom is that there is a grammar of English that does what linguists think a syntax does, and then people have a cognitive ability to perform and comprehend perversions of what an I-grammar produces. What is the theory of that cognitive ability? Obviously, we need at least two theories (I hope that’s obvious) and even more obviously, we need one theory of that ability. Rock100 might flesh out his theory so that we could see how it generally works (as a salad-to-English filter), and I might posit a competing set of computations. Then we can discuss which account is better as a model of what the mind does, provided we can agree that the goal is to model the actions of the mind, and not to find a certain kind of computer program that maps string to string. (If it’s not obvious, the reason for doing this is to better distinguish competence from performance, by fleshing out the theory of performance.)
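Just to make the division of labor concrete, here is a deliberately tiny sketch of what a "salad-to-English filter" might look like computationally. Everything in it is invented for illustration: the lexicon, the toy grammar (simple SVO clauses only), and the brute-force reordering strategy are stand-ins for whatever the real competence grammar and performance strategy turn out to be.

```python
from itertools import permutations

# A toy "competence" component: a tiny lexicon assigning each word a category.
LEXICON = {
    "the": "Det", "an": "Det",
    "elephant": "N", "home": "N", "dog": "N",
    "goes": "V", "sees": "V",
}

def parses(words):
    """True iff the string is generated by the toy grammar:
    S -> NP V (NP), NP -> (Det) N."""
    cats = [LEXICON.get(w) for w in words]
    if None in cats:
        return False
    # The toy grammar licenses only these finitely many clause shapes.
    shapes = [
        ["N", "V"], ["Det", "N", "V"],
        ["N", "V", "N"], ["Det", "N", "V", "N"],
        ["N", "V", "Det", "N"], ["Det", "N", "V", "Det", "N"],
    ]
    return cats in shapes

def salad_filter(words):
    """A crude "performance" strategy: if the string is word salad,
    search for a reordering that the grammar does generate."""
    if parses(words):
        return list(words)
    for perm in permutations(words):
        if parses(list(perm)):
            return list(perm)
    return None  # no grammatical order recoverable

# "elephant home goes" is salad under this grammar, but the strategy
# recovers a licensed order.
print(salad_filter(["elephant", "home", "goes"]))  # ['elephant', 'goes', 'home']
```

The point of even so crude a sketch is that the two components are separately stated and separately testable: one can argue about `parses` (the grammar) independently of `salad_filter` (the strategy), which is exactly the competence/performance distinction the proposal is meant to flesh out.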





Offline Daniel

  • Administrator
  • Experienced Linguist
  • *****
  • Posts: 2036
  • Country: us
    • English
Quote
There is an infamous problem, stemming from Chomsky’s “Remarks on nominalization”, with sentences like “Bob’s difficulty to understand makes him a bad teacher”, meaning “The difficulty that one would have in understanding Bob is what makes him a bad teacher”. Many syntacticians, including Chomsky, declare raising out of a nominalized tough-clause to be ungrammatical, but I and myriad other linguists (usually not syntacticians) find this construction perfectly fine (there’s a subset of speakers for whom the raised NP is interpreted as the lower subject, which I totally don’t get, but it’s a fact). Chomsky has a dismissive footnote to the effect that such people are unaware of a property of their grammar, namely that their grammar doesn’t produce this output, whereas I would say that tough-nominalization deniers are unaware of a property of the English language, namely that this is grammatical in some or all dialects, and just not acceptable to some. This obviously raises the question: how do you know what the grammar actually produces? Perhaps that is why I am a phonologist and field worker.
Funny enough, I cited that particular argument of Chomsky's for that reason in my article linked in an earlier post. That's exactly what I have in mind in asking this question (so the happenstance that you thought of exactly the same point suggests you're a bit of a syntactician after all). I actually agree with you that Chomsky's argument there didn't really make sense empirically. But I still do wonder about the possibility of an odd quirk of a grammar such that some sentence might be acceptable (=utterable, not just comprehensible), and yet not grammatical (at least in some technical sense). Why or why not?

Quote
When I do syntax, it’s not English or researcher-introspective, and it’s about things with the kind of clarity found in determiner-noun order in English.
And that applies to most theoretical syntax, of the Chomsky type. I'm drawn to these more troublesome cases, though, because they really do seem to poke holes in the generalizations and thereby the theories overall.

Quote
But this murkiness about what sentences are actually generated by the grammar does not call into serious question the existence of grammar and the distinction between competence and performance.
Indeed.
Quote
I occasionally propose that by around 1972, we had solved all of English syntax, and have spent the last 50 years trying to tweak the formal model.
Haha, that's one way of looking at it!

Quote
I officially propose that linguistics undertake a line of research into word-salad parsing. The basic axiom is that there is a grammar of English that does what linguists think a syntax does, and then people have a cognitive ability to perform and comprehend perversions of what an I-grammar produces. What is the theory of that cognitive ability? Obviously, we need at least two theories (I hope that’s obvious) and even more obviously, we need one theory of that ability.
I love that question. I don't always agree with Chomsky, but let's for the sake of argument take his word as fact: there's a strict distinction between Competence and Performance, and he has all but solved Competence. OK. So why hasn't anyone figured out Performance? I think it's a harder problem, actually. And while psycholinguists are making a lot of interesting progress from their side of things, it just so rarely connects with anything in a syntax textbook that you can't help but wonder :)
Welcome to Linguist Forum! If you have any questions, please ask.

Offline waive15

  • Linguist
  • ***
  • Posts: 71
Hi,


"Maybe language really is logical, once we get the right system of logic." (jkpate, 2014)


---

The Language of Logic

https://www.youtube.com/watch?v=EpMfZvbmO0c

---

Elephant Principle

https://www.youtube.com/watch?v=UA42ojvSJs0

---


Thank you.
« Last Edit: October 10, 2020, 06:10:40 PM by waive15 »