2026-02-19T18:45:10Z
2022-10-30
Morphological systems often reuse the same forms in different functions, creating what is known as syncretism. While syncretism varies greatly, certain cross-linguistic tendencies are apparent. Patterns where all syncretic forms share a morphological feature value (e.g., first person, or plural number) are most common cross-linguistically, and this preference is mirrored in results from learning experiments. While this suggests a general bias towards natural (featurally homogeneous) over unnatural (featurally heterogeneous) patterns, little is yet known about gradients in learnability and distributions of different kinds of unnatural patterns. In this paper we assess apparent cross-linguistic asymmetries between different types of unnatural patterns in person-number verbal agreement paradigms and test their learnability in an artificial language learning experiment. We find that the cross-linguistic recurrence of unnatural patterns of syncretism in person-number paradigms is proportional to the number of shared feature values (i.e., semantic similarity) amongst the syncretic forms. Our experimental results further suggest that the learnability of syncretic patterns also mirrors the paradigm’s degree of feature-value similarity. We propose that this gradient in learnability reflects a general bias towards similarity-based structure in morphological learning, which previous literature has shown to play a crucial role in word learning as well as in category and concept learning more generally. Rather than a dichotomous natural/unnatural distinction, our results thus support a more nuanced view of (un)naturalness in morphological paradigms and suggest that a preference for similarity-based structure during language learning might shape the worldwide transmission and typological distribution of patterns of syncretism.
Article
Published version
English
Artificial languages; Natural language processing (Computer science); Morphology (Grammar)
The MIT Press
Reproduction of the document published at: https://doi.org/10.1162/OPMI_A_00062
Open Mind: Discoveries in Cognitive Science, 2022, vol. 6, p. 183-210
https://doi.org/10.1162/OPMI_A_00062
cc-by (c) Saldana, C. et al., 2022
http://creativecommons.org/licenses/by/4.0/