"Verb-argument fusion"

Page history last edited by Matthew McVeagh 1 year, 7 months ago

"Verb-argument fusion"

 


 

An experimental language that expresses each combination of a verb and its arguments as a single unique morpheme, rather than as separate morphemes for each element.

 

This language idea is based on one by Paulo Eduardo Padilha. He wanted to create an alien language in which combinations of sentence elements, such as subject, verb and object, were expressed by unique morphemes/words rather than sequences of separate words/morphemes. So "John hit Jim" and "Jim hit John" would look nothing like each other; similarly "The dog bit the man" and "The man bit the dog". One single signifying phoneme sequence would express each of these sentences. He added that dependents/modifiers of the core clause elements could be expressed by separate words. So although "The bad man bit the unfortunate dog" would be partly expressed by "{bite-man(SBJ)-dog(OBJ)}", it would also require words for "bad", "unfortunate", and maybe "the"/definiteness and past tense. This would certainly reduce the complexity of the main word from what it might have been.
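The idea above can be made concrete with a toy lookup table, assuming a lexicon that maps whole (verb, subject, object) triples to arbitrary forms. The forms here are invented for illustration; the point is that reversing the arguments yields a completely unrelated word, not a reordering of parts.

```python
# Toy fused lexicon: one arbitrary, indivisible form per whole clause.
# All forms are invented examples, not from any real language sketch.
fused = {
    ("hit", "john", "jim"): "tarok",
    ("hit", "jim", "john"): "ilum",
    ("bite", "dog", "man"): "sepa",
    ("bite", "man", "dog"): "korin",
}

print(fused[("hit", "john", "jim")])   # -> "tarok"
print(fused[("hit", "jim", "john")])   # -> "ilum" (nothing like "tarok")
```

Because the mapping is arbitrary, no substring of "tarok" corresponds to "hit", "John" or "Jim"; the clause is the smallest meaningful unit.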

 

What would this require and how could it be done? Obviously there would be a huge number of such combinations of verb and core arguments, so the number of morphemes required would be similarly massive. In turn these morphemes would have to be comparatively long in order to keep them all distinct. You could shorten them somewhat by enlarging the phonological options (phoneme inventory and phonotactics), but you'd still require a heck of a lot of possible words. You could even take it further by combining the modifiers into their own fused word, e.g. "{(PST)-bad.(DEF)-unfortunate.(DEF)}" for "The bad man bit the unfortunate dog".
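The trade-off between lexicon size and morpheme length can be put in rough numbers. A sketch, with illustrative figures of my own choosing (the article gives none): if every verb-subject-object triple needs its own form, the minimum morpheme length grows logarithmically with the number of combinations and shrinks as the phoneme inventory grows.

```python
# Rough combinatorics for a fused lexicon: how long must morphemes be
# so that every verb-subject-object combination gets a unique form?
# Phonotactic restrictions are ignored, so these are lower bounds.
import math

def min_morpheme_length(n_verbs, n_nouns, n_phonemes):
    """Smallest length L such that n_phonemes**L covers every
    verb(subject, object) combination."""
    combos = n_verbs * n_nouns * n_nouns   # one morpheme per V-S-O triple
    return math.ceil(math.log(combos, n_phonemes))

# Assumed modest lexicon: 500 verbs, 2000 nouns, 20 phonemes
print(min_morpheme_length(500, 2000, 20))   # -> 8 phonemes per word
# Enlarging the inventory to 60 phonemes shortens the words:
print(min_morpheme_length(500, 2000, 60))   # -> 6
```

Even with a generous inventory, two billion triples force fairly long words, which is the pressure toward the oligolexical reduction discussed next.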

 

You could reduce still further by making it more oligolexical, like Toki Pona et al. In this case it would be oligosynthetic, but lexically fusional rather than following the agglutinative pattern oligosynthetic languages usually take. In effect more semantic content would be removed from the higher-level syntactic nodes and relocated into dependents, which would of course be expressed by separate words. So in more complex sentences you might still have quite a few words, but they would have quite a different structure from those of usual languages. It would be like drawing a picture by doing all the most basic bits first and then adding detail all over, rather than doing one corner thoroughly and gradually moving over the whole sheet.

 

Optionally, you could combine this language idea with Semitic-style non-concatenative morphology. One phoneme sequence (the template) could express the verbal aspects of the clause, the other (the fill-in) the nominal/adverbial ones. Still, that would be breaking down the fusionality a little.
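The template/fill-in split could work something like this sketch, where the verbal part of the clause supplies a template with open slots and the argument pair supplies the phonemes that fill them. Both forms below are invented examples, purely to show the mechanism.

```python
# Semitic-style non-concatenative combination: a template with '_' slots
# (here standing for the verb + tense) is interleaved with a fill-in
# sequence (here standing for the subject-object pair).

def interleave(template, fill):
    """Replace each '_' slot in the template with the next fill-in phoneme."""
    fill_iter = iter(fill)
    return "".join(next(fill_iter) if ch == "_" else ch for ch in template)

# Hypothetical template for "bite" in the past tense:
bite_past = "a_i_u_"
# Hypothetical fill-in for the argument pair (man SBJ, dog OBJ):
man_dog = "ktr"

print(interleave(bite_past, man_dog))   # -> "akitur"
```

As the article notes, this does weaken the fusion a little: the surface word now decomposes into two recoverable parts, even though neither is a contiguous string.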

 

Further thoughts:

  • Constructing the lexicon would probably require some kind of automated assignment of meanings to morphemes, given a generated series of phonologically possible morphemes and an array of combinable semantic units for each set of combined word classes.

  • It would be largely impossible to speak, and wouldn't even be useful directly as a stealthlang. But it could perhaps be a convenient code language if used via autotranslations.
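The automated assignment mentioned in the first bullet might be sketched as follows: generate phonologically legal morphemes in a fixed order (here, simple CVCV shapes) and pair them off against every verb-subject-object combination. The tiny word lists and the CVCV phonotactics are my own invented assumptions.

```python
# Minimal automated lexicon assignment: zip a stream of generated
# morpheme forms against a stream of semantic combinations.
from itertools import product

consonants = "ptkmns"
vowels = "aiu"
# All CVCV strings in a fixed order: (6*3)**2 = 324 candidate morphemes
forms = ("".join(s) for s in product(consonants, vowels, consonants, vowels))

verbs = ["bite", "hit", "see"]
nouns = ["man", "dog", "john", "jim"]
meanings = product(verbs, nouns, nouns)    # (verb, subject, object) triples

lexicon = dict(zip(meanings, forms))

print(len(lexicon))                        # -> 48 fused words (3 * 4 * 4)
print(lexicon[("bite", "man", "dog")])     # -> "papi"
```

A real build would want a larger form generator and perhaps a shuffling step so that semantically adjacent triples don't receive phonologically adjacent forms, but the zip-assignment core would be the same.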

 

 
