Investigating Representation of Text and Audio in Educational VR using Learning Outcomes and EEG
Research output: Contribution to conference › Paper › Research › peer-review
Standard
Investigating Representation of Text and Audio in Educational VR using Learning Outcomes and EEG. / Baceviciute, Sarune; Mottelson, Aske; Terkildsen, Thomas Schjødt; Makransky, Guido.
2020. Paper presented at CHI 2020, Honolulu, Hawaii, United States.
RIS
TY - CONF
T1 - Investigating Representation of Text and Audio in Educational VR using Learning Outcomes and EEG
AU - Baceviciute, Sarune
AU - Mottelson, Aske
AU - Terkildsen, Thomas Schjødt
AU - Makransky, Guido
PY - 2020
Y1 - 2020
N2 - This paper reports findings from a between-subjects experiment that investigates how different learning content representations in virtual environments (VE) affect the process and outcomes of learning. Seventy-eight participants were subjected to an immersive virtual reality (VR) application, where they received identical instructional information, rendered in three different formats: as text in an overlay interface, as text embedded semantically in a virtual book, or as audio. Learning outcome measures, self-reports, and an electroencephalogram (EEG) were used to compare conditions. Results show that reading was superior to listening for the learning outcomes of retention, self-efficacy, and extraneous attention. Reading text from a virtual book was reported to be less cognitively demanding, compared to reading from an overlay interface. EEG analyses show significantly lower theta and higher alpha activation in the audio condition. The findings provide important considerations for the design of educational VR environments.
KW - Faculty of Social Sciences
KW - Virtual reality
KW - Educational Technology
KW - Learning
KW - Cognitive Load
KW - EEG
U2 - 10.1145/3313831.3376872
DO - 10.1145/3313831.3376872
M3 - Paper
T2 - CHI 2020
Y2 - 25 April 2020 through 30 April 2020
ER -