Learning2gether webcasts live events from TESOL 2013 in Dallas, March 21-23

Learning2gether Episode 149

March 21-23, Learning2gether traveled to Dallas to help webcast live from the Electronic Village and academic sessions taking place during the TESOL Convention.

 


Above: Vicki Holmes presenting on Word Clouds in the CALL-IS EV in Dallas, with co-presenter Ellen Dougherty Skyping in via a small window at lower right of the screen, later expanded to a full screen share so she could direct the workshop from Abu Dhabi (picture by Vance Stevens)

Many of these events were held in the Webheads Bb Collaborate / Elluminate room: 

 http://learningtimesevents.org/webheads/ 

For definitive updates to these events, please see the official sites:

Thu March 21 – CALL Academic Session: Gaming and Language Learning

Recordings:

Thu March 21  – TESOL CALL-IS Open Meeting 

Recordings:

Fri March 22 – Technology Showcase Event: Mobile Apps for Education
Recordings: 

Fri March 22 – CALL-IS & Elementary Education InterSection Session: New Tools/Techniques in CALL

Recordings:

Sat March 23 – ITA, CALL, & VDM-IS InterSection Session: New Technology Horizons for International Teaching Assistants

Recordings:

Sat March 23 – ESP, CALL, & VDM-IS: Harmony in ESP Practice with Computers, Video and Digital Media

Recordings:

 

Fri March 22 19:00 to 19:50 GMT – Webcasts from the EV Fair Classics


Evelyn Izquierdo and Miguel Mendoza at EV Fair Friday (picture by Elizabeth Hanson-Smith)

Sat March 23 16:00 to 16:50 GMT – Webcasts from the EV Fair Classics

 


Vance Stevens presenting MultiMOOC at EV Fair Saturday (picture by Elizabeth Hanson-Smith)


http://wia-in-dallas.blogspot.com

Taps for Tapped In after March 18, 2013

Learning2gether Episode 148

Farewell to TappedIn

End of an era:

Theresa Almeida d’Eca, Nina Liakos, Michael Coghlan, Vance Stevens, David Weksler, and Jeff Cooper met at Tapped In one last time and set up a wiki at http://tappedin2013.pbworks.com/ 

 

You are invited to join and contribute URLs to or upload the artifacts you have managed to save from TappedIn.

Vance has saved a couple here:

Where?

Another one bites the dust:

Outcome …

We set up a wiki at http://tappedin2013.pbworks.com/, which you are invited to join and contribute URLs to or upload the artifacts you have managed to save from TappedIn.

Check back here to see what we’ve accumulated and help with the display, if you wish.

Announcements:

Also on tap for this evening …

Sun Mar 17 1500 GMT – IATEFL Webinar: Rethinking the Language Classroom – From Mobile to Learning


with Carla Arena (Brazil)

When: Sunday 17 March 2013, GMT 15:00
Find the time in your time zone by clicking this link.


Where: The event takes place in the IATEFL online conference room.
http://ltsig.org.uk/events/13-future-events/265-17313-webinar-rethinking-the-language-classroom.html

See the schedule for remaining webinars in this series: http://ht.ly/fXMsQ

Announcements:


Veillance Hangout with Alex Hayes and posse

Learning2gether Episode 147

Download:

https://learning2getherdotnet.files.wordpress.com/2013/03/2013march10alexhayeshostitchup.mp3

In which Vance Stevens hangs out with Alex Hayes and friends, talking about veillance, augmented and augmediated reality, and issues around wearable technology …

The event was being streamed at http://webheadsinaction.org/live, but the recording was compromised when the Google Hangout went off air only 15 minutes in.


 

This was sent out everywhere:

Join us in a Google Hangout this Sunday at noon GMT to discuss this fascinating topic with experts.

As usual with Hangouts, we can’t start them until right before the event, so we will post the direct link to the hangout at http://learning2gether.pbworks.com and also at http://webheadsinaction.org/live. That’s where you can go to listen in on the stream and join in the text chat, in case you can’t or don’t want to get into the hangout itself. But be aware that the stream can only be set up right at noon GMT, when we start the hangout.

If you are hanging out with us, be sure to wear a headset (otherwise the sound from your speakers back into your mic could cause echo). And if you enter the hangout be sure to mute or switch off the stream (otherwise you’ll hear two sound channels, each overlaid at a disconcerting lag from the other.)

 

Alex’s blog post on the discussion stimulating this conversation:
http://sites.ieee.org/istas-2013/

Meanwhile, here’s more from Alex:

Vance Stevens from Webheads and I are running a Google Hangout for anyone interested in joining us – http://learning2gether.pbworks.com/w/page/32206114/volunteersneeded#SunMar10noonGMTnbspVeillanceHangoutwithAlexHayesandposse

The topic you will see is Veillance – the domain with all its disciplines, such as surveillance, sousveillance, dataveillance, uberveillance and so on. We are sure to also speak of education, engineering, diffusion of innovation, privacy, personal security, and a host of other emergent themes and technologies.

We think it is a pertinent topic, given that we are on the brink of Google Glass going live soon and that Vuzix and a host of other augmented and augmediated reality hardware are set to become distributed and visible throughout our communities worldwide.

A podcast was recently distributed by IEEE (http://spectrum.ieee.org/podcast/geek-life/profiles/steve-manns-better-version-of-reality) and featured Professor Steve Mann, whom I'm working closely with on this event – http://veillance.me

Have a listen to the podcast. It is very revealing and insightful, and will serve as the base point for our discussion. You will note that I have invited Steve to join us here in this discussion. I've also invited a number of others whom I have just met virtually.

Some background

Wed Feb 27 – Coach Carole organized an “in depth look at Augmented Reality through the eye of Christopher Winter”

What if your students could stand in any room and, by looking through their phone's camera, be presented with an entirely new virtual world? What if, instead of 30 students crowding around one physical object, they could each have an almost tangible replica on their desks to study, even when at home? Augmented Reality is not a new technology and was in fact in danger of fading out before it really had a chance to make an impact. The fact was that holding a piece of paper up to a webcam in order to see a 3D object on a desktop computer was neither engaging nor fun. It was less than intuitive and quite cumbersome. Fast forward to the present day and, thanks to the mobile revolution, Augmented Reality is not just an option, but an integral part of effective mobile delivery.

Announcements:

 

Vance Stevens, Rita Zeinstejer, Nelba Quintana: Widening the audience for student writing with Writingmatrix and Paper.li

Learning2gether Episode 146

Download:

https://learning2getherdotnet.files.wordpress.com/2013/03/writingmatrix03mar2013-10-001-64k.mp3

Panelists: Vance Stevens, Rita Zeinstejer, Sasa Sirk, Nelba Quintana

What is this about? 

What is Writingmatrix?

A few years ago, Vance Stevens coordinated Nelba Quintana and Rita Zeinstejer in Argentina, Doris Molero in Venezuela, and Sasa Sirk in Slovenia in a global project to put student writers in touch with each other through blogging, by tagging their posts ‘writingmatrix’. At the time, the students were able to locate each other’s blogs by using Technorati. However, Technorati has since tightened what its searches will return in order to reduce clutter for those it perceives as the most important users of its services (not casual educators). Therefore Technorati no longer works well for this purpose.
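For anyone who wants to picture the mechanics, here is a minimal sketch of that tag-based discovery. It is not the original Technorati workflow (which no longer serves this purpose); the feed URLs below are invented placeholders, and the sketch simply scans each participating blog's own feed for posts tagged 'writingmatrix'.

```python
# A minimal sketch of the Writingmatrix idea: students tag their blog posts
# 'writingmatrix', and anyone can then discover posts across blogs by that tag.
# The feed URLs are hypothetical stand-ins for real class blogs; this is not
# the Technorati search the project originally relied on.

import feedparser  # pip install feedparser

STUDENT_BLOG_FEEDS = [
    "http://example-class-blog-argentina.blogspot.com/feeds/posts/default",
    "http://example-class-blog-slovenia.wordpress.com/feed/",
]

TAG = "writingmatrix"

def tagged_posts(feed_urls, tag):
    """Yield (blog title, post title, link) for every post carrying the shared tag."""
    for url in feed_urls:
        feed = feedparser.parse(url)
        for entry in feed.entries:
            tags = {t.get("term", "").lower() for t in entry.get("tags", [])}
            if tag in tags:
                yield (feed.feed.get("title", url),
                       entry.get("title", "(untitled)"),
                       entry.get("link", ""))

if __name__ == "__main__":
    for blog, title, link in tagged_posts(STUDENT_BLOG_FEEDS, TAG):
        print(f"{blog}: {title} -> {link}")
```

Run against real class-blog feeds, a scan like this would give each group a live list of partner posts to read and comment on.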

Meanwhile, one of the serendipitous outcomes of conducting the recently ended EVO MultiMOOC session was a greater understanding of how Paper.li works. Accordingly, we have been experimenting with Paper.li in hopes of using it to achieve the connections between student writers a world apart that worked so well when we could use Technorati effectively. Some results of these experiments were reported in these webcast URLs:

Today’s webcast seeks to bring some of the original Writingmatrix team together to talk about what made the project a success and speculate on how Paper.li might help us to revise the project.

Recording

Where: Blackboard Collaborate (Elluminate)

Announcements

From http://support.paper.li/entries/20023257-What-is-Paper-li-

So how does Paper.li work? 

Anyone with a Twitter or Facebook account can log in and create a paper. We provide you with easy-to-use tools to select your content. You choose your content streams and can create queries and searches based on Twitter users, #tags, keywords, Facebook, your own Twitter timeline, Google+ users, RSS feeds and more.

After you have chosen your sources, we go to work. Behind the scenes, it goes something like this: we extract all tweets that include URLs, based on your content selection, and then extract the content found at those URLs:

  • text, e.g. blog post, newspaper article

  • photo, e.g. Flickr, yfrog, Twitpic, …

  • video, e.g. YouTube, Vimeo, Dailymotion, …

We then:

  • analyze the extracted text for language (EN, ES, …) and for topic, e.g. Politics, Technology, …

  • surface the day’s most relevant articles (using paper.li magic)

  • construct a newspaper front page using the filtered articles, photos and videos

Please note: Currently Facebook and Twitter are seen as two separate accounts by Paper.li. This means papers created under one account will only show under paper settings for that account. If you create a Paper.li under your Facebook account, it will not be visible in your Paper Settings when you are logged in under your Twitter account. And vice versa. We are working to change this.
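To make the pipeline quoted above a bit more concrete, here is a minimal, hypothetical sketch of a Paper.li-style aggregation run. It is not Paper.li's actual code: the sample tweets, the content-type guesses, and the "most-mentioned wins" ranking are all invented stand-ins for the extraction, analysis, and "paper.li magic" steps described in the support article.

```python
# A minimal, hypothetical sketch of a Paper.li-style aggregation pipeline.
# None of this is Paper.li's actual code; the tweets and scoring are invented
# purely to illustrate the steps quoted above: collect tweets with URLs,
# classify the linked content, then surface the "best" items for a front page.

import re
from collections import Counter

# Pretend these tweets came from the chosen content streams (a hashtag, a list, ...).
TWEETS = [
    {"user": "teacher_a", "text": "Great post on student blogging http://example.com/blog-post #writingmatrix"},
    {"user": "teacher_b", "text": "Our class video is up! https://youtube.com/watch?v=abc123 #writingmatrix"},
    {"user": "teacher_c", "text": "No link here, just chatting about class today"},
    {"user": "teacher_d", "text": "Photos from the workshop https://flickr.com/photos/xyz #writingmatrix"},
]

URL_RE = re.compile(r"https?://\S+")

def extract_links(tweets):
    """Step 1: keep only tweets that contain a URL."""
    for tweet in tweets:
        for url in URL_RE.findall(tweet["text"]):
            yield {"user": tweet["user"], "url": url, "text": tweet["text"]}

def classify(item):
    """Step 2: crude guess at the content type behind the URL (text / photo / video)."""
    url = item["url"].lower()
    if any(host in url for host in ("youtube.com", "vimeo.com", "dailymotion.com")):
        return "video"
    if any(host in url for host in ("flickr.com", "twitpic.com", "yfrog.com")):
        return "photo"
    return "text"

def build_frontpage(tweets, top_n=3):
    """Steps 3-4: rank the linked items and lay out a tiny 'front page'."""
    items = list(extract_links(tweets))
    for item in items:
        item["kind"] = classify(item)
    # Stand-in for "paper.li magic": simply favour the most-mentioned links.
    counts = Counter(item["url"] for item in items)
    items.sort(key=lambda item: counts[item["url"]], reverse=True)
    return items[:top_n]

if __name__ == "__main__":
    for story in build_frontpage(TWEETS):
        print(f"[{story['kind']}] {story['url']} (via @{story['user']})")
```

The real service obviously works from live Twitter and Facebook streams and much smarter relevance scoring, but the shape of the pipeline (gather linked tweets, classify, rank, lay out a front page) is the same.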

From Kristi Hines: http://www.stayonsearch.com/how-to-use-paper-li-the-daily-twitter-newspaper

What Types of Content Does Paper.li Pull?

Paper.li selects specific types of tweet to generate content for a paper. First and foremost, it is looking for tweets with links, as the title of each “story” on the paper will link directly to the page, blog post, article, etc. If there are any images on the page, blog post, article, etc., it will sometimes pull those as a thumbnail for the news story. It also pulls tweets with links to videos from YouTube, BrightCove, and other popular video sharing sites for the media section. It doesn’t necessarily have to be a link directly to the video, however. Some videos are pulled from blog posts with an embedded video.

How Does Paper.li Choose Content?

This one is a mystery to me. I thought it *might* be based on tweet popularity until I saw that some of the tweets added to the paper had been the first tweet for a link, made within an hour of the paper’s creation. It could be based on the influence of the Twitter users in the list, but I’ve seen some users with little authority get their tweets listed as well. So essentially, it’s completely random.

Getting the Right Content for Your Audience

This means that getting content on a particular topic based on a user or a Twitter list may not be as easy as you think. Not only may some members of your following or Twitter list not stick to tweeting about one topic, but some members may tweet something that gets misinterpreted by the paper, as seen below.

So how do you ensure your papers have the right content for your audience? There really is no guarantee. I would say that out of the three options for paper creation, hashtags seem the way to go, although some tags are overly abused: #linkbuilding, for example, gets repeated by the same users over and over, and sometimes for services rather than useful content. So use Paper.li at your own risk!

From Ryan Taft: http://www.1stwebdesigner.com/design/paper-li-grow-brand-awareness-traffic/

How Does Paper.li Work?

Paper.li allows users to create their own online newspaper from links shared on Facebook and Twitter. Once set up, Paper.li automatically collects links from within Facebook or Twitter and organizes them into an easy-to-read newspaper format, complete with imagery, headlines, and article descriptions. Subscribers receive their online newspaper each day, filled with top stories around the same content topic as your website. It’s a great way for you to automatically aggregate online content relevant to your website topic and push it out to your online community … If you’re on Twitter, you can configure the system to tweet your Paper.li newspaper automatically.

You can have up to 10 content streams from which to pull article links … You can organize them in order of importance. If you choose a Single Twitter User and a Twitter Keyword, you can rank them to determine where Paper.li should pull articles from first, second, third, etc. The options include:

Single Twitter User

Your Twitter Stream and the people you follow

Twitter List

Twitter #Hashtag

Keywords on Twitter

Keywords on Facebook

RSS Feed

Single Google+ User

Keywords on Google+
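Purely as an illustration of that ranking idea (Paper.li itself is configured through its web interface, not code), the ordered list of content streams can be pictured as a small configuration like the sketch below; the stream values shown are invented examples.

```python
# Hypothetical, invented representation of ranked Paper.li content streams.
# Paper.li is configured in its web UI; this ordered list only illustrates the
# "pull from first, second, third, etc." ranking described above.

content_streams = [
    {"type": "twitter_hashtag", "value": "#writingmatrix"},        # consulted first
    {"type": "single_twitter_user", "value": "@a_class_account"},  # then a single user
    {"type": "twitter_keyword", "value": "student blogging"},      # then a keyword search
    {"type": "rss_feed", "value": "http://example.com/class-blog/feed"},  # and so on, up to 10
]

for rank, stream in enumerate(content_streams, start=1):
    print(f"{rank}. {stream['type']}: {stream['value']}")
```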

Breakthrough

As a result of our session, Rita did some further investigation, and wrote us …

Hi, Vance and all,

Quite enthusiastic at “revisiting” our project, I’m now exploring a different tool – Tweeted Times (formerly known as The Twitter Times), a real-time personalized newspaper generated from your Twitter account, which I find more reliable than Paper.li.

If you compare today’s edition in both, you’ll see many more entries in TT, which are postings I made yesterday on Twitter – some of them via Scoop.It – in fact, Paper.li does not show any!!! Which means that, for some reason, Paper.li ignores some postings, even when they come via Twitter.

Take a look at the page I opened at TT http://tweetedtimes.com/#!/search/writingmatrix/en and let me know what you think.

Here’s what I think …

Tweeted Times does indeed seem to be doing a better job than Paper.li. Not only does it get the Scoop.its that were missing from Paper.li, it also picks up the Paper.li itself.

I think you’re on to something here, Rita

Not only that, this is a great illustration of true MOOC-like behavior, where the idea is to assemble a thousand participants (webheads) on the chance that one of them might stimulate one or more of the others (in this case, Rita) to come up with a breakthrough through a collaboration that could not have happened in a much smaller grouping, which would lack the critical mass to make such a breakthrough likely.

still pondering that one …