Recent Posts

Pages: 1 [2] 3 4 ... 10
11
Phonetics and Phonology / A question about p and b
« Last post by nguyen dung on November 20, 2017, 01:37:33 AM »
What is a practical cue to distinguish p and b in English: voiceless vs. voiced, strong vs. weak airflow, or both?
12
Linguist's Lounge / What types of words are time referents
« Last post by josephusflav on November 20, 2017, 12:26:38 AM »
I know verbs, nouns, and adjectives can bear temporal markings.
 
"God exists" verb
"God is a existing being" adjective
"Swimming is fun" noun

Are there other types of words that indicate time?



13
Morphosyntax / What is "plus" as a part of speech?
« Last post by LinguistSkeptic on November 19, 2017, 09:32:19 PM »
So, what is "plus" (as in "two plus two equals four") as a part of speech? Is it a conjunction or a preposition? Or maybe something else?
14
Linguist's Lounge / Re: So I've created a very functional language with only 32 words,
« Last post by Daniel on November 18, 2017, 07:02:40 PM »
Indeed, as Panini pointed out, you have designed a way to (potentially) analyze everything, but not to easily refer to specific things. Your system is perhaps better for clearly showing your reasoning about certain ideas and for avoiding reliance on assumptions, but it is entirely inefficient when it comes to simply referring to a category like "tree" or "dog" or "aardvark". Natural languages typically put more emphasis on actually referring to things and less on being analytical, and it is essentially unimaginable that this would actually work for people in real life. The obvious result is that after a minimal amount of usage it would start to develop idioms and then longer words to refer to specific concepts. And it would no longer work as proposed, because that's not how human languages work.

Something you might enjoy reading about is Ithkuil, which is a constructed language proposed for some of the same reasons that might be motivating you:
https://www.newyorker.com/magazine/2012/12/24/utopian-for-beginners

I think the concept is interesting. However, personally I would actually prefer for all of those grammatical devices to be optional so that we can choose whether to express ourselves clearly or in general terms. That would be a very powerful language, allowing us to express ourselves as we want to, rather than making us be explicit about every detail. Of course your proposal goes farther than that, breaking down all concepts into your few basic words, but Ithkuil is clearly more functional because it does have as many words as needed, but modifies them grammatically to express nuance and so forth.
15
Linguist's Lounge / Re: So I've created a very functional language with only 32 words,
« Last post by panini on November 18, 2017, 10:26:19 AM »
Mind you, the names of many of these different creatures come from other languages, meaning that loanwords ("dik'd(o)iwk") would usually suffice to describe them.
So the claim is not that you have a maximum of 32 words – you can have any number of words in the language – and the claim really comes down to saying that it is possible to express any idea with just those 32 words, but for convenience you can draw on other words (borrowed from other languages). This raises the question of whether "A hartebeest looked at the dikdik" could also be classified as an utterance of your language, one that uses a lot of loanwords. You have a long expression that translates as "the four-legged animal named 'Gerenuk'", but why not simply call it "ge're'nuk'"?

Perhaps the answers would be clearer if we knew how to say a few much simpler words: "cat", "dog", "hand", "foot". Not just "what is the final word?", but "how do you reduce the output to the 32 basic words?" I can't make any sense of the notation
'00001 - r (English)/e ("egg")/z ("foxes", "pose")/-y ("boil", "mile", "eye", "kind", et cetera)'. Are you trying to also devise a spelling system free of standard phonetic conventions? That is, what is the actual IPA content of your particle 00001?
16
You can certainly reduce the programming languages to just a few words while retaining the expressiveness. It's actually been done: see "One instruction set computer".

Sounds interesting! I suppose it is quite similar.
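For reference, the "One instruction set computer" idea mentioned above can be made concrete with SUBLEQ ("subtract and branch if less than or equal to zero"), the classic single-instruction machine. The sketch below is illustrative: the function name, the halting convention (a negative jump target), and the tiny example program are my own assumptions, not from any particular implementation.

```python
def run_subleq(mem, pc=0, max_steps=1000):
    """Execute SUBLEQ instructions (triples a, b, c) until pc goes negative.

    The machine's only instruction: mem[b] -= mem[a], then jump to c
    if the result is <= 0, otherwise fall through to the next triple.
    """
    steps = 0
    while pc >= 0 and steps < max_steps:
        a, b, c = mem[pc], mem[pc + 1], mem[pc + 2]
        mem[b] -= mem[a]                     # the single operation: subtract...
        pc = c if mem[b] <= 0 else pc + 3    # ...and branch if <= 0
        steps += 1
    return mem

# Example program: clear cell 9 (which holds the data value 42),
# then halt by branching to the negative address -1.
program = [9, 9, 3,       # mem[9] -= mem[9] -> 0, branch to address 3
           0, 0, -1,      # mem[0] -= mem[0] -> 0, branch to -1 (halt)
           0, 0, 0, 42]   # cell 9 is data
result = run_subleq(program)
```

Despite having one instruction, SUBLEQ is Turing-complete; the analogy to a 32-word language is that expressiveness survives, but every nontrivial "word" becomes a long composition of the single primitive.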
17
So just to understand how this works, how exactly do you say the following, using those 32 words?

  • I looked at a car
  • An antelope looked at a car
  • A gerenuk looked at a car
  • A hartebeest looked at the dikdik

Mind you, the names of many of these different creatures come from other languages, meaning that loanwords ("dik'd(o)iwk") would usually suffice to describe them. Even so, I took on this challenge and devised the following words:

tsen'kyuwk'gzyawrk'tholk'bwaylsh'kLuz'kluk'dzyawrk'thalp'ku'byuuwlmv'thLalsh - Dikdik

Now, yes, it took twelve syllables to describe a Dikdik with Truespeak's limited vocabulary, but bear in mind that context can allow for shorter words to be used instead.

gzyawrk'tholk'bwaylsh'kLuz'kluk'dzyawrk'thalp'ku'byuuwlmv'thLalsh - Deer (or other four-legged animal with two horns)

gzyawrk'tholk'bwaylsh'kLuz - Four-legged animal

rar - Animal

However, an even better way of saying any of these animals is as follows:

ge're'nuk'guu'gzyawrk'tholk'bwaylsh'kLuz (the four-legged animal named "Gerenuk")

dik'di'kuug'gzyawrk'tholk'bwaylsh'kLuz (the four-legged animal named "Dikdik")

Et cetera.

As for "car", the description is surprisingly similar to that of a four-legged animal, and goes as follows:

gzyawrk'tholk'bwaylsh'kumth'rar

It basically means "four-legged industrial/mechanical creature" ("leg" in all these examples can also be interpreted as "wheel").

Now, the other necessary words are simpler to interpolate:

dhwo(o)lk'znv - Looked at

dhil - I (just one way of saying it, out of the dozens I've found)

A reminder that the final words can simply be said one after the other, so:

"dhil dhwo(o)lk'znv gzyawrk'tholk'bwaylsh'kumth'rar" (8 syllables)

would mean-

"I looked at a car"
So to conclude, these sentences should be relatively easy to say in Truespeak. Needless to say, there are countless other (and probably more concise) ways to say these exact sentences.
18
This has been attempted before, and there are several limitations to the approach and remarks I can make.

1. Humans are great at learning vocabulary, so there is in a cognitive sense little need to do this. Yes, it takes some time to learn words so there may be a shortcut allowed by this approach, but only if it really results in a fully expressive language. Does it? Can it?

2. A combination of limited morphemes can work in a minimally expressive language. The go-to example is Toki Pona, a minimal language (around 120 morphemes, admittedly more than yours) that does not attempt to match spoken languages in expressiveness but rather offers an alternative form of expression-- that's the point.
https://en.wikipedia.org/wiki/Toki_Pona
http://tokipona.net/

3. Another relevant experiment is Simple English:
https://simple.wikipedia.org/wiki/Main_Page
The idea is that by using only around 800 words, English learners will more easily be able to communicate, use Wikipedia, etc. The idea is interesting and actually somewhat effective. But what is the result? The Wikipedia experiment has shown something very important: instead of just learning 800 words, a Simple English user must actually memorize many more collocations as well, phrases of two or more words, to substitute for the beyond-800 words found in English. There are simply more than 800 ideas in English that are not easily built up from simple parts.

Even if you could in theory only talk about things using circumlocutions, users would want direct and consistent ways to refer to specific concepts. An example from Toki Pona is "crazy water" used to refer to alcohol. So in reality that's another new word in Toki Pona (along with many others, and similarly for many scientific and other terms in Simple English), which must be memorized, and in the end very little is "saved" by having few morphemes, because those idiomatic collocations are themselves necessarily new morphemes even though they are built from recognizable parts. It's like saying that English "greenhouse" is really just two morphemes stuck together rather than a new word/morpheme added to the language-- an illusion at best, and more realistically just a delusion of whoever is counting. In the end, if the language you made is practical at all, it's just a matter of time and use until there are dozens and then hundreds and then thousands of collocations like that. Give it even more time and sound change will take over, and you'll end up with unrecognizable derivations, just like in natural languages.

4. While many general ideas can be broken down into a small number of parts, there is simply no way to get to more specific concepts like "tree" or "squirrel" or "sing" (without adding idiomatic collocations). An example of how this works out, and very similar to what you have designed, is NSM, which attempts to find the "universal" basic meanings from all human languages:
https://en.wikipedia.org/wiki/Natural_semantic_metalanguage
The proposals range from around 11 to over 70 primes. It's important to read carefully what the proposals really are, however. They are not attempting to reduce all vocabulary to these primes. Instead, they are finding primes within some aspects of existing vocabulary. In other words, yes, this can work (according to that theory) for some vocabulary (or parts of vocabulary), but they do not really attempt to replace words like "tree" or "squirrel" or "sing" with these primes. So the realistic goal of that methodology is not to reduce human language to the fewest possible primes (as you might think at first glance), but to come up with as many primes as possible that are actually found in all languages. Obviously the number of primes should be minimal (completely decomposed to their basic elements-- that's what a prime is, by definition), but the goal isn't to artificially reduce them all but to actually find them all. There is a lot of overlap between the concepts you mentioned above and those proposals, so you might want to look into them. There are also many critiques of that approach, so you can look into that side of things too. One issue is that languages do not really seem to use the primes consistently on the surface in the structure of words. So if there is any truth to that analysis, it's not because it's transparently how words are derived, so it wouldn't line up with your proposal in a literal sense.

5. More generally, the idea of having few morphemes building up many meanings in a language is called Oligosynthesis, coined by Benjamin Whorf around 1928. It's confusingly not the opposite of polysynthesis (combining many morphemes into a single word) but actually (possibly) correlated with it (using few morphemes total to make up all words, presumably sometimes via polysynthesis).
https://en.wikipedia.org/wiki/Oligosynthetic_language
Whorf pursued the idea of Oligosynthesis for a couple years just as he was getting into Linguistics and before his formal study of the subject, really just a brief moment in his career. After he got involved in serious study of linguistics, he moved on, and it doesn't seem that anyone else has taken up the idea very seriously. He originally proposed that some indigenous languages of Mexico had oligosynthetic structure because it seemed that just a few basic roots occurred over and over again, although that analysis does not seem to have been widely accepted. Really, the idea seems to have been more or less forgotten (or consciously ignored) in linguistics research.
However, I recently did some research on the topic, with one crucial difference: I wasn't claiming that languages as a whole are truly oligosynthetic (there are various reasons that doesn't seem practical) but that some subsystems of languages could be. So rather than looking for languages with a very small dictionary, the question is whether some languages show oligosynthetic tendencies in some areas. In fact, one tendency I identified is how serial verb constructions (think of "go carry" as "take" and "come carry" as "bring", sort of like compounds but actually still separate words, possibly stacked) can in some languages seem to augment the lexicon. Of course very often they do become lexicalized (and therefore no longer oligosynthetic, because they're newly memorized, if somewhat transparent, morphemes!). But some languages seem to do very well with productive and systematic usage of just a few verbal roots. So in that sense there are aspects of natural languages that work like that, and similarly the system you propose could function, but only as part of a fully expressive communication system. A completely oligosynthetic language seems inherently limited, or at the very least unlikely to remain stable over continued usage.

I've written more about oligosynthesis here by the way:
https://www.quora.com/Why-don%E2%80%99t-natural-oligosynthetic-languages-exist/answer/Daniel-Ross-71
https://www.quora.com/If-you-have-to-invent-a-language-of-20-words-only-What-would-they-be-And-why/answer/Daniel-Ross-71
Also consider:
https://www.quora.com/What-are-examples-of-useful-artificial-languages/answer/Daniel-Ross-71

Alright, first I'd like to thank you for taking the time to answer my post in such an informative and extensive way; it means a lot to me! I'd also like to apologise that it took me so long to respond; I was busy until today.

Now, the main thing I want to say relates to the most significant difference between Truespeak and languages such as Toki Pona and Simple English, namely the way collocations function.

In the languages you have mentioned, the collocations are more or less axiomatic to the language: simply by knowing "wonder" and "dog", one will not know that "wonder dog" means "horse". This, in a sense, is an additional word to learn in the language.

However, in Truespeak the situation is different. Since all the collocations are derived directly from the basic particles and the way they interact, not only can one extrapolate what a collocation means from the particles that construct it, but one can also coin collocations of one's own, which anyone could understand, from the 32 particles.

This means that instead of saying "wonder dog", one would say "fast, four-legged creature", or something along those lines (those words themselves would be constructed from smaller structures). In general, though, animals, plants and places should be expressed using loanwords ("h(o)owrs" or "'h(o)owrs' animal"), since it's easier. This does create the problem of adding morphemes, but keep in mind that much (though not everything) in this language is meant to be derived from context, similarly to Japanese in many cases, so such words would not need to be learned explicitly.

Another point I wanted to address is that indeed, over time humans would begin to change the words in such a way that this language would become no different from many other languages of our kind. However, in the fictional universe from which this language comes, it was not originally created by or for humans (rather by godlike beings far more "perfect" than us), and that wasn't really the point of it to begin with.

Tell me what you think, in your own time!
19
Outside of the box / Re: Croatian toponyms
« Last post by Daniel on November 14, 2017, 02:15:03 PM »
Let's put it this way:

At least FlatAssembler is enrolled in the class, while you're some random stranger throwing rocks at the window.

As for contribution, what I meant was that you could do something other than complain about the ideas others suggest.
20
Outside of the box / Re: Croatian toponyms
« Last post by LinguistSkeptic on November 14, 2017, 01:56:25 PM »
What do you mean by "scientific contribution"? What does my "scientific contribution" have to do with whether his ideas make sense? And isn't making a blog about your pseudoscientific ideas worse than doing nothing?