Author Topic: General Categorization and Terminology

user116
General Categorization and Terminology
« on: November 21, 2016, 11:00:16 AM »
Hello,

I'm not sure whether I'm posting in the wrong section of the forum (if that's the case, please move this thread to the appropriate section; I apologize in advance) or whether this kind of question belongs on this forum at all.

I have no formal education in linguistics and am not a native speaker of English; that said, I do know what nouns, verbs, etc. are, and I've got a general idea of what subjects, predicates, attributes, clauses, etc. are (at least what we were taught in elementary/high school).


I'd like to know which category "subject", "predicate", "object", and "adjunct" belong to, and whether they fall into one specific category at all. I've read articles on these topics on Wikipedia, but I still can't quite grasp the whole thing. Here's an example to illustrate why this is so confusing to me:
At https://en.wikipedia.org/wiki/Adjunct_(grammar)#Examples, the example discussed mentions "subject argument", "predicate", "object argument", and "adjunct" on the same level of abstraction.

At https://en.wikipedia.org/wiki/Predicate_(grammar)#Predicates_in_traditional_grammar, the article says: "She met him in the park. - verb + object + adjunct predicate." In this example, it appears that "adjunct predicate", which I think is just an adjunct, is lower in the hierarchy than the predicate. That doesn't seem consistent with how these terms were leveled in the first example; it feels like mixing apples and oranges.

I'd appreciate it if someone could list these items by category so I can google them, or link me to a site where I can get a general idea of sentence analysis. From what I've gathered, syntax trees, or whatever they're called, don't label the "subject", "predicate", etc. that I want to identify, but I suppose those become apparent (by their definitions) once the tree is drawn.

If you've read this far, thanks for reading.

Daniel (Administrator)
Re: General Categorization and Terminology
« Reply #1 on: November 21, 2016, 07:45:40 PM »
First, welcome to the forum! And this is definitely a relevant discussion, and the right section for it. (Some parts of a complete answer also relate to semantics.)


Despite your lack of a formal background, your questions and discussion show some general insight into what's going on in linguistic structure. On the other hand, there are some ways that changing your perspective might help you understand the questions better (that is, thinking "like a linguist"-- there are some traditional ways we look at things that aren't always intuitive, and we may or may not be right, but at the very least it would be important to understand why if we're wrong about them-- feel free to think about things your own way too, though!). The terminology is to some degree just that-- terminology that describes things linguists observe. So some of it only makes sense from the perspective of a linguist working within a certain theory (really, a family of theories) and making certain assumptions about language.

The most abstract answer is that linguists disagree about whether there even are such things as cross-linguistic categories. For example, it is very difficult to define what a "word" is in a way that applies across all (or even most) languages. Maybe all languages have words, but if the 'definition' (or at least the identifying criteria) varies, then are they really all the "same thing"? Similar arguments have been made for the well-known word classes like noun and verb, and of course other classes like adjectives and adverbs. In many languages adjectives function like verbs (Japanese shows at least partial overlap, and there are more extreme cases), and in others like Spanish an adjective can describe a noun (though in English a noun can also describe a noun: paper cup!) or be used as a noun itself-- "I'd like the green [one]". I'd say that the majority of linguists do seem to work from the perspective that at least something is shared across languages, especially at this broad level. But how do you work that out?

The terms subject and object are not especially technical terms in linguistic research; they're just inherited from traditional grammar. They're used all the time descriptively, but when it matters a more technical term may be used instead. From the perspective of semantics, there are what are called theta roles, which relate to the arguments a verb is expected to take. A subject is often an agent, and an object is often a patient, for example, but there are other roles, and in some cases these can be reversed (in the passive "The cat was chased by the dog", the subject is the patient). This is how relationships to the verb are often explained. On the other hand, the distinction between argument and adjunct tells us whether something is an argument of the verb at all. Something a little tricky is that a lot of the time the subject isn't seen as directly being an argument of the verb, except in the semantics. The syntactic structure is a little complicated, but you'll have to read about that in a syntax textbook, from a particular theory. (In short, it seems that the subject combines with the whole predicate, rather than just the verb, so there are various ways to deal with this.)

An adjunct is a generally optional part of the structure, but unlike the other terms you mentioned, it does not refer to one particular part of a given clause, and there may be many adjuncts, at many levels. There might be an adjunct within the noun phrase (an adjective), or within the verb phrase (an adverb, etc.), or for the whole sentence (a sentence-level adverb). And since most theories recognize many levels of hierarchy, there are technically speaking infinitely many possible ways for an adjunct to attach to a sentence. Or to put it another way, in principle there could be an adjunct attached at any level of the hierarchy, and even basic sentences have a few levels, with some analyses of more complex (but still normal spoken) sentences having dozens of levels.

So that's one important perspective to start thinking about: linguists tend to look at sentences as layers of hierarchical structure, rather than as a linear sequence of words. Trying to diagram a sentence written on a page into its "parts" might approximate some analyses in some ways, but to really get into how it is structured, you can't work from the surface form as pronounced. (In fact, I take this a step further in my personal research by distinguishing form from structure entirely, at least in principle, in that the form might not even directly reflect the structure at all. Others try to explain the relationship with complex "transformations" and other devices that make the structure/derivation very complicated, while yet others try to explain things more based on the surface form, often based on "constructions" (e.g., in Construction Grammar), but they still imagine some hierarchy as those constructions are nested within each other.) Only relatively primitive theories from the middle of the last century tried to stick to really flat structures: Tagmemics is one example, which didn't really claim that sentences were entirely flat, but it described a few fixed levels like sentence, clause and phrase, rather than assuming a flexible, fully hierarchical structure. (If you read something about Tagmemics you'll see what I mean about focusing on a relatively flat surface structure. Even then, though, I don't think sentences were really thought to be quite so flat; it was a way of abstracting and trying to find patterns/similarities. Approaches like that have more or less been abandoned, though, in favor of focusing on hierarchical structure.)
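If a concrete picture helps, here is a toy sketch in Python of your Wikipedia example sentence represented as nested, labeled pieces rather than as a flat list of words. This is my own illustration, not any particular theory's analysis, and the labels and grouping are just for exposition:

# A toy illustration of hierarchical (nested) structure.
# Each node is (label, children); a leaf is just a word string.
# The grouping and labels are illustrative, not a claim about the "right" analysis.

flat = ["She", "met", "him", "in", "the", "park"]   # the surface form: a flat sequence

tree = (
    "S",
    [
        ("NP", ["She"]),                  # subject
        ("Predicate", [
            ("VP", [
                ("V", ["met"]),           # verb
                ("NP", ["him"]),          # object: an argument of the verb
            ]),
            ("PP", [                      # "in the park": an adjunct,
                ("P", ["in"]),            # attached at a higher level
                ("NP", ["the", "park"]),
            ]),
        ]),
    ],
)

def leaves(node):
    """Read the words back off the tree, left to right."""
    if isinstance(node, str):
        return [node]
    label, children = node
    return [word for child in children for word in leaves(child)]

print(leaves(tree) == flat)   # True: same words, but now with hierarchy

Notice that the object and the adjunct end up at different depths even though they sit next to each other in the flat string; exactly where the adjunct attaches is the kind of question different theories answer differently.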

This gets into recursion, which is debated/discussed from various perspectives and even controversial for some languages (Pirahã is the main case), but can't really be denied on a basic level for English and many other languages:
I said that you said that he said that she said that we said that they said that it said that I said that you said that..............

Because there are so many possible levels there, it only makes sense to talk about structural syntactic relationships from the perspective of hierarchy. In a basic sentence we don't necessarily need to, but for the idea to generalize to other sentences we do.
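If you happen to think in programming terms, that "no fixed limit" point is exactly the behavior of a recursive function: the rule for building a clause can apply to its own output. Here's a toy sketch in Python (the subjects and the depth are arbitrary, just for illustration):

# A toy illustration of recursive embedding: a clause can contain another clause.
# There is no fixed limit on the depth, only practical limits like memory and patience.

def embedded_clause(depth):
    subjects = ["I", "you", "he", "she", "we", "they"]
    if depth == 0:
        return "it rained"                # a simple clause with no further embedding
    subject = subjects[depth % len(subjects)]
    return subject + " said that " + embedded_clause(depth - 1)

print(embedded_clause(4))
# "we said that she said that he said that you said that it rained"

Each extra call adds one more layer of structure, which is why a fixed, flat list of levels eventually runs out of room.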

Adjuncts, then, are optional modifiers found at any level. (For more about adjuncts, read the part of a syntax textbook that discusses adjuncts vs. complements. These are like adjuncts vs. arguments, but in a narrower sense: a complement combines because it is an expected argument right there in the structure [a "sister" in the syntactic tree, so to speak], while an adjunct combines at effectively the same location but is not a required/expected argument.)

Then turning to predicate, the first point is clause vs. sentence: a sentence is a "whole" utterance, although utterances sometimes are not whole (like answers to questions, or just casual speech omitting some elements). All sentences are clauses. But sentences may contain more clauses within them. Actually, clauses can also contain more clauses within them (but only some can stand alone as sentences). Ignoring embedded clauses ("The man who arrived was a linguist."), clauses can be defined as the combination of a subject and a predicate. (Again these terms aren't always used in a technical sense.) Plus maybe some adjuncts, but we'll set that aside.

Defined by exclusion, then, a predicate is whatever is not the subject (and not an adjunct). Defined by inclusion, it's the verb and the things related to the verb. Linguists have reason to believe that the subject combines with the full predicate, so that there is what we can call a "verb phrase", consisting of the verb plus the object. There is no subject+verb phrase which would exclude the object. In other words:
Sentences have the structure S[VO], rather than [SV]O. (When word order varies a lot this can get a little complicated, and theories handle it in different ways, but we'll set that aside because in general the idea of a verb phrase, one way or another, is fairly standard.) See a syntax textbook for more.

"Verb phrase" can be used as a synonym for "predicate", although in more detailed structures in some theories (e.g., Generative Grammar such as in the work of Noam Chomsky) the "verb phrase" is a specific level of structure actually lower than what combines with the subject. Higher levels include things like the "tense phrase". Regardless we can call the part that combines with the subject the predicate — the verb phrase or something like it (several levels of abstract structure above).

An intuitive and easy way to identify predicates is to note that there is one predicate per clause, so we can identify either one by the presence of the other. (Again, we're setting aside embedded clauses for now, which aren't too hard to identify.) Likewise, it is very tempting to say that there is also exactly one verb in every predicate (and clause). But that's problematic. Consider a parallel: assuming there's one vowel per syllable. A generally reliable way to identify syllables is to count vowels. The number of consonants in a syllable can vary, but there's always a vowel (well, almost always; if not, something acting like one). The problem is that diphthongs are sequences of two vowels that occur in the same syllable: coin is pronounced with an /o/ ("oh") then an /i/ ("ee") as the nucleus of a single syllable. So we can say that all syllables have exactly one nucleus (a vowel or combination of vowels, or something equivalent), but we can't say that all syllables have exactly one vowel.
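To make that counting idea concrete, here's a rough sketch in Python, using English spelling as a crude stand-in for the sounds (real syllabification works on pronunciation, not letters): counting vowel letters over-counts a word like "coin", but counting runs of adjacent vowel letters (roughly, nuclei) does better.

import re

def vowel_letters(word):
    # naive count: every vowel letter counts separately
    return len(re.findall(r"[aeiou]", word.lower()))

def vowel_groups(word):
    # runs of adjacent vowel letters, a rough stand-in for syllable nuclei
    return len(re.findall(r"[aeiou]+", word.lower()))

print(vowel_letters("coin"), vowel_groups("coin"))          # 2 1 -> one syllable, one nucleus
print(vowel_letters("linguist"), vowel_groups("linguist"))  # 3 2 -> two syllables, two nuclei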

For verbs in predicates, there are two problems. First, some languages allow sentences that (apparently) have no verbs at all, as in Arabic. Some theories allow this, just like that, and others posit an unpronounced verb of some sort to keep the structure consistent. Either way, it's a little tricky. But still, you might say, there's exactly one verb or verb-like thing (maybe a noun or an adjective). That isn't true either. There can actually be two verbs in a single predicate (again, we're still setting aside embedding).

Foley & Olsen (1985) were among the first to discuss explicitly the fact that a phenomenon known as Serial Verb Constructions allowed for multiple verbs to be within the same predicate. Where we would have a single verb in English, another language (especially one from west Africa, southeast Asia, Papua New Guinea, and various other places) might have two verbs, or more (strings up to 6 or even 9 verbs have been reported). Conceptually it's something like breaking down a verb into sub-events or sub-actions. Consider something like "hit die" (=kill) or "carry go" (=take). Some have been tempted to say it can be defined semantically based on what translates as a single verb in another language like English, but looking at the data it's pretty clear to me that these languages at least sometimes have more expressive power than English (that is, compared to simple verbs in English) rather than just using more words or something like that. We can call these a type of Complex Predicate (there are also verb+noun [e.g. "take a bath"] or verb+particle ["look up"] types too), where somehow the predicate/verb, even in a narrow sense (excluding various adjuncts and even complements), seems to be phrasal rather than a single word. Complicated issues that aren't really solved yet.

Another way of looking at subject vs. predicate is from the perspective of usage or information structure: the subject is who/what the sentence is about, and the predicate is what's being said about it.


So that's a very general answer, with some tangents, to your questions. There isn't a single category that encompasses all of those terms, because they're defined differently within hierarchical structure. For a full understanding I'd recommend an introductory syntax textbook. Whether or not you agree with the theory it develops, you'll get a sense of why it's developing that theory and some of the questions/issues that need to be solved.

Two textbooks I've used to teach are:
"Grammar as Science" by Larson
"Syntax: A Generative Introduction" by Andrew Carnie

I'd recommend Carnie's book if you want to know about the mainstream "Generative" theories that are prominent in the United States (and some other places), especially the work of Noam Chomsky. This book can be used for undergraduates (but it's difficult) or for a graduate-level introductory class. It's a solid overview of what you'd want to know. It will take some work to get through, though, so only take it on if you want to devote some time to it.

Larson's book is much simpler and relatively easy for undergraduates. It has 30 chapters that are meant to be covered in about one class day each. It builds up the theory very slowly and in general terms, without focusing too much on real research. It walks you through the steps along the way and ends up not quite as far as Carnie's book, but still at something that resembles current theory. It would be easier to follow this one on your own, and it would probably also be more useful than Carnie's if you stop reading halfway through.

Both books steer you right toward the theories developed by Chomsky and others. There are alternatives, so keep an open mind, but it is good to know about that approach. The biggest objection I have to both books is that over time they contradict themselves: one chapter introduces something, then the next might remove it from the theory or use it for something else entirely. It's good to know that going into it. But it also makes sense, because it's not teaching you the theory as-is, but building it up step by step. Carnie focuses more on the historical development of the theory (1950s to now), while Larson is less about real research and more about the student building up a grammatical model from scratch (it ends up resembling current research); both are step by step.

Finally, a warning about terminology: it is often used inconsistently, either by different authors/linguists or by the same linguist with slightly different meanings in different contexts. It's sometimes used pre-theoretically ("noun" = whatever you think a noun should be) and sometimes applied within a very specific theoretical approach (noun = exactly and only those things that fit very specific theoretical criteria), where the latter is supposed to be a more technical version of the former. For the broad terms you're asking about this mostly isn't a big problem, but for something like "Serial Verb Construction" the usage can be all over the place while still reflecting roughly the same intuitive idea, or more realistically just some overlapping/similar ideas. When reading any particular author, look for a definition of a technical term if you can.

If you're just wondering about linguistic terminology for other reasons, then unfortunately the answer is that we haven't solved everything yet and there isn't a consensus about a lot of things. But from a descriptive point of view, we have terminology that categorizes most language phenomena reasonably well, so either ask a linguist or find a relevant publication and phrase things the way they do. There might be some debate about the exact technical usage, but it will still broadly capture the idea.


Feel free to ask follow-up questions, but each of those terms could easily be a major project (or at least a post) in itself.
Welcome to Linguist Forum! If you have any questions, please ask.