Recent Posts

Pages: 1 2 [3] 4 5 ... 10
21
Semantics and Pragmatics / Re: Apocope, meaning change and lexeme
« Last post by vox on October 18, 2017, 09:40:39 AM »
Intuitively I don’t agree with this analysis:
lexeme 1 manifestation ‘show, display’
lexeme 2 manifestation, manif  ‘protest’

So to me manifestation is polysemous. But you suggest that truncation reveals an unnoticed homonymy or an interim step between homonymy and polysemy. It’s true that we can’t say *Une manifestation d’étudiants et de colère without producing a zeugma (‘A protest/display of students and anger’). But the fact remains that the two meanings are obviously related: the second one (‘protest’) is just a usage-based specialization of the first one. It would seem very exaggerated to any native French speaker to consider that we have two homonymous words manifestation. I think the best equivalent word in English is demonstration.

What about considering that truncation is a morpho-phonological operation to create synonymous lexemes? It seems more plausible to me.
22
Linguist's Lounge / Politics
« Last post by FlatAssembler on October 17, 2017, 09:13:46 PM »
So, what do you guys here think about politics?
I think that anarchists are right. Government isn't actively trying to protect us. The police only come after a psychopath has already murdered someone. And then they put him not in a place where he will be rehabilitated, but in a place from which he will return with even more of the psychological problems that made him murder in the first place. For all we know, they could just be making things worse.
It's often said that the government makes people less greedy. But it seems obvious that the opposite is true. It gives people a sense of entitlement. It often makes laws such as "You have the right to be taken care of when you are sick, therefore we will force people to give you some money for the medicines." or "If you try to help other students during difficult tests, you will get punished for that." Furthermore, neither Proto-Indo-European nor Proto-Afro-Asiatic nor Proto-Uralic even had a word for "to have".
It would be interesting to hear what people educated in social sciences think.
23
Semantics and Pragmatics / Re: Apocope, meaning change and lexeme
« Last post by Daniel on October 17, 2017, 07:19:51 PM »
That's an interesting question.

In the first place, you would need to decide if the first word is actually polysemous or homonymous. Maybe it's just two lexemes that happen to have the same form? And even if it's "polysemous", how would you analyze that as a single lexeme? Maybe even then it's two linked lexemes. Only when you have two closely related meanings would that analysis definitely not work. In the end, the 'right' analysis depends on your assumptions and the particular theoretical approach you're taking.

Additionally, since truncations are not generally rule-based, they must be memorized anyway, so it doesn't seem that odd to also memorize some meaning changes along with them.

So while I can't definitively answer the question about analysis (but maybe those ideas help!), I do want to add something:

This doesn't seem unusual to me at all. In fact, in many cases I think truncation results in highlighting one meaning over others:

  • Referee can be anyone who makes judgments (for example, a referee at a journal or the usage referring to writing letters of recommendation). But the truncated form ref only refers to athletic referees.
  • Professional can be used widely to refer to anyone in a specialized (maybe even non-specialized?) job. But pro has some narrower and conventional meanings: a golf pro (a trainer/coach for golfing), a professional athlete, a prostitute (not sure if this is the same abbreviation or not!), and also someone who is "a real pro" meaning very good at their job, but not just anyone who has such a job. Doctors, lawyers, etc., would not typically be called "pros" even though you would consider them "professionals".
  • Cellular can mean many things, but cell abbreviates only 'cellular phone' (among other I think mostly unrelated meanings of 'cell').
  • Carriage has several meanings, but car is more specific (though this may be a later change, and also in parallel to usage for 'train car' etc.).

And so on.

My suspicion for an analysis (from a historical perspective at least) would be that words actually have specific usage/meanings we don't notice until something else also changes. I would imagine that the specific usage that the truncated form takes on follows from conventionalized usage of the original lexeme, which arguably has already split even though there is little evidence for that immediately.

This reminds me of something I was working on and thinking about a while ago: how the meaning of a word can change when it is applied to new contexts. The best example is "husband" or "wife" in reference to gay marriage. This has nothing to do with politics! Whoever you ask, two men who married each other are husbands, not wives. And two women who married each other are wives, not husbands. The first edition of the OED (and many other dictionaries) defined husband as roughly "a man married to a woman". The recent revision in the last couple of years changed that to "a married man", I assume because of changes in politics/laws. But what is important is that this was not a change that happened because of legalizing gay marriage. It was already what the word meant. It did not refer to a man who married a woman, but to any man who was married. It just also happened to be the case that previously men only legally married women, so the distinction in the definition was undetectable and irrelevant. But once gay marriage became a topic of discourse, it was very quickly discovered (not changed!) that "husband" actually referred to any married man-- thus husband means "male spouse". A logically equivalent possibility would have been that as the spouses of men, gay married men would be called "wives"-- "spouse of a man", while gay married women would be called "husbands"-- "spouse of a woman". But that's not what happened. And it doesn't sound right to my ears. Because we know that "husbands" are men, and that "wives" are women. It isn't due to who they marry, but to who they are. So the word originally meant that (or at least highlighted that) even before the "change" became apparent in usage.

So in short I would suggest that these truncations allow us to view evidence of a pre-existing split in the lexeme, just like words might mean something a little different from what we think based on usage-based definitions. At least that's one hypothesis to consider.
24
Semantics and Pragmatics / Apocope, meaning change and lexeme
« Last post by vox on October 17, 2017, 05:23:30 PM »
In French
-manifestation means either ‘show, display’, as in People’s manifestation of support, or ‘protest’, as in A protest in the street
-manif means only ‘protest’; the first meaning is absolutely excluded
The polysemy is reduced with the apocope.

I wonder how to analyze this case: two forms of the same lexeme, or two partially synonymous lexemes?

Thank you.
25
Linguist's Lounge / Re: What would you change in your native language?
« Last post by panini on October 17, 2017, 08:28:41 AM »
I'd like the morphology to be improved. English is kind of pathetic as far as morphology goes: we don't have much at all by way of person, number, tense, aspect, mood, polarity etc. as verbal inflection, not to mention derivation e.g. causative, reciprocal, pluractional (compare Sanskrit, most Bantu, Arabic, Klamath). I'd like a morphology where there would be a couple million inflected forms from each root. Then I'd like it to do something awesome with those riches. Like in Arabic, with glides coming and going, vowels changing; Klamath with that syncope rule; Bantu with all sorts of tone changes. Also, some better consonants, like ʕ, qʷ', and  8) (as you can tell, that's not a currently sanctioned IPA sound). If not that, at least a decent pitch-accent system.
26
Linguist's Lounge / What would you change in your native language?
« Last post by FlatAssembler on October 17, 2017, 07:15:39 AM »
So, what would you change in your native language if you could? In Croatian, I would get rid of the pitch accent and make the stress predictable, always falling on the penultimate syllable. I would also make the phonotactics a bit more restrictive, so as not to allow those complex consonant clusters which are hard to pronounce even for native speakers (as in "hrčcima" ("to the hamsters"), pronounced /xrtʃtsima/, six consonants in a row).
27
You can certainly reduce a programming language to just a few words while retaining its expressiveness. It's actually been done: see "One instruction set computer". A rough sketch of the idea is below.
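Purely as my own illustration of that concept (not something from this thread, and not any particular real OISC design), here is a minimal sketch of a "subleq" one-instruction machine in Python. The function name run_subleq, the memory layout, and the sample program are all hypothetical; the point is just that a single instruction, "subtract and branch if the result is less than or equal to zero", can still express ordinary computation such as addition.

# Minimal "subleq" one-instruction machine (illustrative sketch only).
# subleq a, b, c means: mem[b] -= mem[a]; if the result is <= 0, jump to c.

def run_subleq(mem, pc=0):
    """Run subleq triples stored in mem until the program counter goes negative."""
    while pc >= 0:
        a, b, c = mem[pc], mem[pc + 1], mem[pc + 2]
        mem[b] -= mem[a]                    # the single operation the machine has
        pc = c if mem[b] <= 0 else pc + 3   # branch, or fall through to the next triple
    return mem

# Example program: compute y = y + x using only subtraction.
# Cells 0-8 hold three instructions; cells 9-11 hold x = 7, y = 5, scratch = 0.
prog = [9, 11, 3,     # scratch -= x        (scratch becomes -x; next instruction is at 3 either way)
        11, 10, 6,    # y -= scratch        (y becomes y + x)
        11, 11, -1,   # scratch -= scratch  (zeroes scratch; result <= 0, so jump to -1 and halt)
        7, 5, 0]      # data: x, y, scratch

print(run_subleq(prog)[10])   # prints 12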
28
Linguist's Lounge / Re: So I've created a very functional language with only 32 words,
« Last post by panini on October 16, 2017, 05:43:11 PM »
So just to understand how this works, how exactly do you say the following, using those 32 words?

  • I looked at a car
  • An antelope looked at a car
  • A gerenuk looked at a car
  • A hartebeest looked at the dikdik
29
Linguist's Lounge / Re: So I've created a very functional language with only 32 words,
« Last post by Daniel on October 16, 2017, 02:22:56 PM »
This has been attempted before, and there are several limitations to the approach, along with some remarks I can make.

1. Humans are great at learning vocabulary, so there is in a cognitive sense little need to do this. Yes, it takes some time to learn words so there may be a shortcut allowed by this approach, but only if it really results in a fully expressive language. Does it? Can it?

2. A combination of limited morphemes can work in a minimally expressive language. The go-to example is Toki Pona, a minimal language (around 120 morphemes, admittedly more than yours) that does not attempt to replace spoken languages in expressiveness but rather offers an alternative form of expression-- that's the point.
https://en.wikipedia.org/wiki/Toki_Pona
http://tokipona.net/

3. Another relevant experiment is Simple English:
https://simple.wikipedia.org/wiki/Main_Page
The idea is that by using only around 800 words, English learners will more easily be able to communicate, use Wikipedia, etc. The idea is interesting and actually somewhat effective. But what is the result? The Wikipedia experiment has shown something very important: instead of just learning 800 words, a Simple English user must actually memorize many more collocations as well, phrases of two or more words, to substitute for the words beyond those 800 found in English. There are simply more than 800 ideas in English that are not easily built up from simple parts. Even if you could in theory only talk about things using circumlocutions, users would want direct and consistent ways to refer to specific concepts. An example from Toki Pona is "crazy water" used to refer to alcohol. So in reality that's another new word in Toki Pona (along with many others, and similarly so for many scientific and other terms in Simple English), which must be memorized, and in the end very little is "saved" by having few morphemes, because in fact those idiomatic collocations are themselves necessarily new morphemes even though they are built from recognizable parts. It's like saying that English "greenhouse" is really just two morphemes stuck together rather than adding a new word/morpheme to the language-- an illusion at best, and more realistically just a delusion of whoever is counting. In the end, if the language you made is practical at all, it's just a matter of time and use until there are dozens and then hundreds and then thousands of collocations like that. Give it even more time and sound change will take over, and you'll end up with unrecognizable derivations, just like in natural languages.

4. While many general ideas can be broken down into a small number of parts, there is simply no way to get to more specific concepts like "tree" or "squirrel" or "sing" (without adding idiomatic collocations). An example of how this works out, and very similar to what you have designed, is NSM (Natural Semantic Metalanguage), which attempts to find the "universal" basic meanings shared by all human languages:
https://en.wikipedia.org/wiki/Natural_semantic_metalanguage
The proposed inventories range from around 11 to over 70 primes. It's important to read carefully what the proposals really are, however. They are not attempting to reduce all vocabulary to these primes. Instead, they are finding primes within some aspects of existing vocabulary. In other words, yes, this can work (according to that theory) for some vocabulary (or parts of vocabulary), but they do not really attempt to replace words like "tree" or "squirrel" or "sing" with these primes. So the realistic goal of that methodology is not to reduce human language to the fewest possible primes (as you might think at first glance), but instead to come up with as many primes as possible that are actually found in all languages. Obviously the number of primes should be minimal (completely decomposed to their basic elements-- that's what a prime is, by definition), but the goal isn't to artificially reduce all of them but instead to actually find all of them. There is a lot of overlap between the concepts you mentioned above and those proposals, so you might want to look into them. There are also many critiques and criticisms of that approach, so you can look into that side of things too. One issue is that languages do not really seem to consistently use the primes on the surface in the structure of words. So if there is any truth to that analysis, it's not because primes are transparently how words are derived, and in that sense it wouldn't line up with your proposal literally.

5. More generally, the idea of having few morphemes building up many meanings in a language is called Oligosynthesis, coined by Benjamin Whorf around 1928. It's confusingly not the opposite of polysynthesis (combining many morphemes into a single word) but actually (possibly) correlated with it (using few morphemes total to make up all words, presumably sometimes via polysynthesis).
https://en.wikipedia.org/wiki/Oligosynthetic_language
Whorf pursued the idea of Oligosynthesis for a couple years just as he was getting into Linguistics and before his formal study of the subject, really just a brief moment in his career. After he got involved in serious study of linguistics, he moved on, and it doesn't seem that anyone else has taken up the idea very seriously. He originally proposed that some indigenous languages of Mexico had oligosynthetic structure because it seemed that just a few basic roots occurred over and over again, although that analysis does not seem to have been widely accepted. Really, the idea seems to have been more or less forgotten (or consciously ignored) in linguistics research.
However, I recently did some research on the topic, with one crucial difference: I wasn't claiming that languages as a whole are truly oligosynthetic (there are various reasons that doesn't seem practical) but that some subsystems of languages could be. So rather than looking for languages with a very small dictionary, the question is whether some languages show oligosynthetic tendencies in some areas. In fact, one that I identified is how serial verb constructions (think of "go carry" as "take" and "come carry" as "bring", sort of like compounds but actually still separate words and possibly stacked) can in some languages seem to augment the lexicon. Of course very often they do become lexicalized (and therefore no longer oligosynthetic, because they're newly memorized, if somewhat transparent, morphemes!). But some languages seem to do very well with productive and systematic usage of just a few verbal roots. So in that sense there are aspects of natural languages that work like that, and similarly the system you propose could function, but only as part of a fully expressive communication system. A completely oligosynthetic language seems inherently limited, or at the very least unlikely to be stable over continued usage.

I've written more about oligosynthesis here by the way:
https://www.quora.com/Why-don%E2%80%99t-natural-oligosynthetic-languages-exist/answer/Daniel-Ross-71
https://www.quora.com/If-you-have-to-invent-a-language-of-20-words-only-What-would-they-be-And-why/answer/Daniel-Ross-71
Also consider:
https://www.quora.com/What-are-examples-of-useful-artificial-languages/answer/Daniel-Ross-71
30
Historical Linguistics / Re: A question about Proto-Indo-European phonology
« Last post by Daniel on October 16, 2017, 01:53:10 PM »
You'll have to dig deeper than Wikipedia to find the details of the theory. It just seems unlikely that something this obvious was missed by other researchers for 200 years, for well-known roots that are thought to be related.

Note that it falls at the end of a root, which might make some unusual things happen.