Meunier, Fanny; Meurant, Laurence; LEPEUT, Alysson
2025-02-04
2021
https://dial-mem.test.bib.ucl.ac.be/handle/123456789/23848

Abstract: The human capacity for language is not only a vocal but also a visual phenomenon. Much of the information conveyed when people talk, face-to-face or on the phone, involves meaningful multimodal practices that include facial expressions, eye gaze, and gesture. Gesture can be defined as any recognizable bodily behavior that has the potential to be meaningful in context, even if it is not necessarily intentional. Across languages and cultures, gesture is ubiquitous and natural in human communication and cognition. Yet connecting gesture to spoken languages alone shows the glass as only half full, for there are natural languages in which the body bears the full weight of language expression and comprehension, namely sign languages. Since sign languages are natural languages just like spoken languages, there is no reason to assume that signers do not also gesture. But how do the gestures produced by signers compare to those produced by speakers? The current study examines one such element, known as the Palm-Up (or PU), and its interactional discourse functions in two languages: French Belgian Sign Language (LSFB) and spoken Belgian French (BF). Through multimodal, corpus-based analyses of the PU in roughly three hours of video-recorded spontaneous conversations between (1) four LSFB signers and (2) eight BF speakers, the study exemplifies how a single gesture continuously contributes to linguistic meaning and to the management of social moves within the interaction itself. The results reveal that the frequencies of the PU show no clear-cut distinction between the LSFB signers and the BF speakers under study. However, signers and speakers use the PU for different interactional purposes. Overall, the results demonstrate that observing a similar gesture in a signed and a spoken language reveals how efficiently deaf and hearing participants achieve a range of pragmatic goals over the course of their face-to-face conversations. Such a perspective opens up a new scientific (linguistic) paradigm on the multimodal, complex nature of the human language capacity, independent of the modality in which it is manifested, whether signed or spoken. Ultimately, this study shows how considering gesture in spoken language on a par with its signed counterpart, favoring commonalities between gesture and sign rather than exacerbating differences, reinforces the argument that gesture is part of linguistic activity and clarifies what it means for spoken and sign languages to be embodied languages.

Keywords: Sign Language; Gesture; Interaction; Multimodal Corpora
Title: An interactive study of Palm-Up in LSFB and BF conversations
Type: text::thesis::master thesis
thesis:29287