TL;DR:
I think grammaticality is very rigid: a form is, or it isn't. So we can distinguish grammatical sentences from ungrammatical ones if we "know the grammar". Wherein lies the difficulty. I do not put any stock in ungrammatical utterances that are effective (decipherable), but "consistently" is a whole 'nother kettle of fish. W.r.t. raw data, I attribute more of the facts about data to either performance or crypto-codeswitching ("I can get that in my dialect") than most theoreticians do, I think. I personally know that in my original dialect, "needs washed" is word salad, but I've been exposed to it enough times over the decades that I don't flinch, and occasionally utter it. When people repeatedly say things that grammatical theories don't countenance, I am inclined to say that there is an under-reported dialect feature that needs reporting. Before engaging in a line of research to broaden what syntax should allow, I would first ask for some discussion of the methodology of deciding that a fact pattern is due to competence versus performance.
For example, templates controlling the order of prenominal adjectives are, IMO, based on stylistic acceptability judgments and not about something in grammar. There are those who would put that in grammar. Is there any empirical or logical basis for deciding between these positions, and what would that basis say about English noun-determiner order, which I would say is in grammar? There is an infamous problem stemming from Chomsky's "Remarks on nominalization" concerning sentences like "Bob's difficulty to understand makes him a bad teacher", meaning "The difficulty that one would have in understanding Bob is what makes him a bad teacher". Many syntacticians, including Chomsky, declare raising out of a nominalized tough-clause to be ungrammatical, but I and myriad other linguists (usually not syntacticians) find this construction perfectly fine (there's a subset of speakers for whom the raised NP is interpreted as the lower subject, which I totally don't get, but it's a fact). Chomsky has a dismissive footnote to the effect that such people are unaware of a property of their grammar, namely that their grammar doesn't produce this output, whereas I would say that tough-nominalization deniers are unaware of a property of the English language, namely that this is grammatical in some or all dialects, and just not acceptable to some. This obviously raises the question: how do you know what the grammar actually produces? Perhaps that is why I am a phonologist and field worker. When I do syntax, it's not English, it's not researcher-introspective, and it concerns things with the kind of clarity found in determiner-noun order in English.
But this murkiness about what sentences are actually generated by the grammar does not call into serious question the existence of grammar and the distinction between competence and performance. I occasionally propose that by around 1972 we had solved all of English syntax, and have spent the last 50 years trying to tweak the formal model.
I should clarify that I have very little interest in plug-and-chug laboratory approaches to language, because the logical connections to theory are usually too weak for the results to be palatable. The toy-language phonology experiment paradigm is an exemplar of what I consider to be wasted effort. When I advocate bottom-up theorizing, I mean: what are the most fundamental and simple existential claims about language that are best justified, and then what propositions might we consider adding? For example, in phonology, the existence of segments is one of those well-established claims that can't reasonably be denied. Alas, anything can technically be denied, but usually if anyone denies the segment, they are really denying a particular proposition about segments. I also think that features are a fact, and that segments are a compositional function of features, so features are existentially more atomic. The bottom-up theorizing approach starts with that which is fundamental w.r.t. what we know, and discovers ontological fundamentals in the course of developing a theory of the epistemological fundamentals.
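The claim that segments are compositional functions of features can be made concrete with a minimal sketch. The feature names and values below are illustrative assumptions of mine, not any theory's actual inventory; the point is only what "compositional function of features" commits one to:

```python
# Toy sketch: a segment modeled as nothing but a bundle of feature/value
# pairs, so that segment identity reduces entirely to feature identity.
# Feature names (voice, labial, nasal) are illustrative assumptions.

def segment(**features):
    """A segment is a frozen bundle of its feature values."""
    return frozenset(features.items())

p = segment(voice=False, labial=True, nasal=False)
b = segment(voice=True, labial=True, nasal=False)
m = segment(voice=True, labial=True, nasal=True)

def feature_distance(s1, s2):
    """Count the features on which two segments disagree."""
    return len(dict(s1 ^ s2))  # symmetric difference of the bundles

print(feature_distance(b, m))  # /b/ and /m/ differ only in nasality -> 1
print(feature_distance(p, m))  # /p/ and /m/ differ in voice and nasality -> 2
```

On this picture there is no residue: two segments with identical bundles simply are the same segment, which is one way of cashing out "features are existentially more atomic".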
Top-down experimentation, then, consists of those empirical methods that take well-established theoretical propositions, discern under-appreciated predictions, and then test them. Repeatedly. I think we're doing reasonably well with low-level experiments to test low-level claims, like the duration of long versus short consonants in some language; we're just not doing well at all in devising experiments that test simple claims like the one lurking in phonology for over a half century that all "phonetic detail" can be reduced to manipulating scalar values for features on segments (i.e. "[3round]").
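Before one could test that half-century-old claim, it would have to be stated operationally. Here is a minimal sketch of what "all phonetic detail is a scalar value for a feature on a segment" would commit one to; the particular features and the 0-4 scale are my illustrative assumptions, not anything from the literature:

```python
# Toy operationalization of the claim that every piece of "phonetic
# detail" is expressible as an integer scalar on a segmental feature,
# as in the notation [3round]. Features and ranges are assumptions.

SCALAR_FEATURES = {"round": range(0, 5), "long": range(0, 5)}

def detail(feature, value):
    """Encode one piece of phonetic detail; reject anything non-scalar.

    The empirical bet is that this function never has to raise: any
    observed phonetic detail fits some (feature, integer) pair.
    """
    if feature not in SCALAR_FEATURES or value not in SCALAR_FEATURES[feature]:
        raise ValueError(f"not expressible as a scalar feature: [{value}{feature}]")
    return (feature, value)

print(detail("round", 3))  # the claim's rendering of [3round]
```

An experiment testing the claim would then amount to hunting for phonetic detail that provably cannot be encoded this way.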
The proposal for a "translating grammar" needs a bit of terminological translating, I would say. Mainly, what's needed is a change from "grammar" to "?strategy". Linguists are generally very soft on prescriptive issues: we resolutely refuse to say that any words mean specific things; we always soft-pedal the issue by saying "it depends on how you define X". That's not a technical result of linguistic research, it's a particular philosophical perspective. I hate metaphor in science, because it makes it so much harder to figure out whether we've learned anything (since we don't know if a person is using "feature" in an established technical sense or in some extended metaphorical sense). So I would not call the set of cognitive operations that allow English speakers to parse "elephant home goes" a grammar, but I also have a more pointed reason for not doing so, namely that I want to figure out in what way those strategies differ from what is in a grammar. In similar fashion, I hate this idea of the grammar of music, but I totally agree with the goal of finding what music and language have in common, and also discerning in what ways they differ.
I officially propose that linguistics undertake a line of research into word-salad parsing. The basic axiom is that there is a grammar of English that does what linguists think a syntax does, and then people have a cognitive ability to perform and comprehend perversions of what an I-grammar produces. What is the theory of that cognitive ability? Obviously, we need at least two theories (I hope that's obvious), and even more obviously, we need one theory of that ability. Rock100 might flesh out his theory so that we could see how it generally works (as a salad-to-English filter), and I might posit a competing set of computations. Then we can discuss which account is better as a model of what the mind does, provided we can agree that the goal is to model the actions of the mind, and not to find a certain kind of computer program that maps string to string. (If it's not obvious, the reason for doing this is to better distinguish competence from performance, by fleshing out the theory of performance.)
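For concreteness, here is roughly the crudest imaginable salad-to-English filter, sketched purely to show what kind of object the competing theories would be. The tiny lexicon and the reorder-to-SVX heuristic are hypothetical strawmen of mine, not a claim about what the mind does, and a serious proposal would have to do far better:

```python
# Strawman salad-to-English filter: tag each word with a hypothetical
# toy lexicon, then rebuild a canonical Subject-Verb-X order and supply
# a determiner. A string-to-string hack, NOT a model of the mind.

LEXICON = {"elephant": "N", "home": "N", "goes": "V",
           "dog": "N", "barks": "V"}

def unsalad(words):
    verbs = [w for w in words if LEXICON.get(w) == "V"]
    nouns = [w for w in words if LEXICON.get(w) == "N"]
    if not verbs or not nouns:
        return words  # give up: no repair strategy applies
    subject, *rest = nouns  # first noun arbitrarily taken as subject
    return ["the", subject] + verbs + rest

print(" ".join(unsalad("elephant home goes".split())))
# -> "the elephant goes home"
```

The very arbitrariness of choices like "first noun is the subject" is the point: a theory of this ability has to say which such computations the mind actually performs, not merely exhibit some program that yields grammatical output.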