Learning2gether with Karen Price on Perceptual computing in MOOC interaction and assessment

Learning2gether Episode 199


Download Audio mp3:
https://learning2getherdotnet.files.wordpress.com/2014/01/karenprice_perceptualcomputing_take2.mp3
(Note: gaps have been removed from the audio where silence occurred in the hangout, because the video being played at those points was not recorded on YouTube; we may try to repair this in time)

On Tue Jan 28 we met with Karen Price, who demonstrated some of her work on Perceptual computing in MOOC interaction and assessment. This was presented as an Electronic Village Online event for Week 3 (Networking) in the MultiMOOC EVO session.

Where? Google Hangout

A problem with this video, explained by Karen …

Since the frame rate for videos when sharing from pptx in a Hangout is pretty poor, opting to show videos via the YouTube app in Google Hangout allows users to see videos with normal FPS. However, the moderator must “share” his/her screen/YouTube app for the videos to be captured. Otherwise, only the screenshare (i.e. avatars and pptx) is captured and the videos (which the moderator and Google Hangout members would have seen) are not there. So, anyone who didn’t view the videos at that time via the YouTube app won’t have seen them and won’t have them in their YouTube channel. That’s why I was recommending “annotations” with the archived hangout to enable viewers to see the explanatory videos.

Thinking caps on!

Announcements:

Karen’s presentation at the 2013 WorldCALL conference in Glasgow

Karen Price presented a 30-minute Research and Development paper on 11 July at the 2013 WorldCALL conference in Glasgow, Scotland, entitled Multimodal interfaces: blending gaze, gesture, movement and speech to overcome the limitations of keyboard, mouse & touchscreen. This session expanded on that presentation, with many video examples brought into the Hangout.

Abstract

Human-to-human communication depends upon and integrates gaze, gesture, movement and speech. However, most user interfaces for language learning consist of keyboards, mice or touchscreen interfaces. These click-and-type / touch interfaces exclude many hints and signals, explicit or implicit, which are integral to language interactions. In contrast, multimodal interfaces process two or more combined user input modes such as speech, pen, touch, hand gestures, gaze, head and body movements in a coordinated manner with multimedia system output. For example, the processing of user-generated speech and non-speech sounds, in parallel with the user’s gaze, can generate appropriate listening behavior for a conversational virtual agent, triggering backchannel signals according to the user’s visual and acoustic behavior (Bevacqua et al., 2012). The presenter argues that multimodal interfaces do not simply offer the option of using a different mode or channel of communication (e.g. speaking vs. clicking a button vs. writing), but that their use enables the cross-modal synchronization of timing and meaning that is evident in human-human communication. Additionally, studies document significant improvement in the recognition of speech from accented L2 speakers when multimodal input is processed simultaneously (Oviatt, 2008). Motion-sensing and gesture detection eliminate controllers and remotes, enabling users to interact with computer monitors, video, games, and music through physical gestures. As TV characters speak, children can participate in on-screen activities by responding and interacting in intuitive, physical ways by jumping, moving forwards or backwards, catching and throwing balls. Moreover, user-specific feedback and error correction can be offered to users in two-way conversations by onscreen characters in response to their physical and spoken responses to questions and suggestions. This paper reviews the capabilities of a variety of multimodal interfaces and gives participants a glimpse of intriguing commercial and academic applications as well as selected, relevant research in applied linguistics, gaming, and the behavioral sciences.

http://www.worldcall2013.org/index.asp

Program: http://www.worldcall2013.org/programme.asp 

Proceedings: https://dl.dropboxusercontent.com/s/p3853ngyb94dazq/Short%20Papers.pdf (link broken)


 

Earlier this week

The official EVO BbC Elluminate Calendar

And check out http://my.calendars.net/eslhome_conferen


Sun Jan 26 1400 GMT MultiMOOC and YLTSIG EVO sessions joint event

An Electronic Village Online event for Week 2 (Declaration) in the MultiMOOC EVO session

Posted here: https://learning2gether.net/2014/01/27/vance-stevens-chaos-in-learning-and-resolving-that-chaos-through-networking/

Mon Jan 27 0200 ET Badges and Competency-based Learning

Monday, Jan. 27, 2-3:00 ET: Badges and Competency-based Learning – In the Open Badges MOOC session on “Learning Providers” last fall, Richard Culatta, Director of the Office of Educational Technology in the U.S. Department of Education, commented that “it’s really about competency-based learning… and badges are a nice way to get there.” Competency-based learning provides a framework for defining learning objectives and shifting the focus to mastery of these objectives rather than “seat time.” Badges provide clear ways of verifying learning, with portable, transparent evidence of mastery. This session will demonstrate how badges are a logical and highly motivational component in competency-based learning.

From the MOOC on badges: badges.coursesites.com

