Abstract

Multimodality has two principal approaches: a text-based one that examines texts comprising multiple modalities simultaneously (e.g., text + image), and a discourse-based one that considers language production as an integrated linguistic, prosodic, and embodied (gestural) phenomenon. As the analysis of discourse as multimodal events and of multimodal texts becomes ubiquitous in the digital era, this is an area of great activity, and the Applied Linguistics Lab has several projects related to this subject. Dr. Attardo and Dr. Pickering have developed a research program integrating prosody, facial expressions, and gaze in discourse, spanning face-to-face communication, CMC, and visual communication; it combines acoustic analysis, gesture, and gaze (eye tracking) in an interactionist framework, applied primarily but not exclusively to humorous discourse. Dr. Li analyzes how composers interact with readers via digital genres. Her research team aims to develop the metadiscourse framework to evaluate students' digital/multimodal products (e.g., infographics), which is expected to enhance students' competence in visual design and their digital literacy skills. Dr. Cheng's teaching and research emphasize multimodal online instruction and teacher feedback via Web 2.0 tools (e.g., VoiceThread, Perusall, and Screencast-O-Matic).

  • Attardo, S., & Pickering, L. (2023). Eye Tracking in Linguistics. Bloomsbury Publishing.
  • Gironzetti, E., Attardo, S., & Pickering, L. (2019). Smiling and the Negotiation of Humor in Conversation. Discourse Processes, 56(7), 496–512. https://doi.org/10.1080/0163853X.2018.1512247
  • Simarro Vázquez, M., El Khatib, N., Hamrick, P., & Attardo, S. (2020). On the order of processing of humorous tweets with visual and verbal elements. Internet Pragmatics.