Hello, I'm going to talk to you about an idea we've got, and we've called it Annotating Multimedia for Community Folksonomy and Ontology Building.

We're the Learning Societies Lab at the School of Electronics and Computer Science at the University of Southampton.

now, you'll be listening and/or watching, reading the text, and looking at the PowerPoint slides in your browser

and if you look at the browser display, you can move the timeline of the recording

you can click on the text and the audio recording will jump to that position in the text

there are some thumbnails of the PowerPoint slides along the bottom, and you can enlarge the frame

you
can click on any PowerPoint slide, and again the recording will jump to that position

you can also select for the system to show you where you are: if you scroll through to another position while it's still playing back the sound, and you actually want to see, and jump to, the current position, you can click that
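The click-to-seek behaviour just described can be pictured as a lookup between a word's position in the transcript and its timestamp in the recording. This is only a minimal sketch with invented names and made-up timings, not the actual player code:

```python
# Sketch of click-to-seek: each transcript word carries the time (in
# seconds) at which it is spoken, so clicking a word seeks the player.
# All names and timings here are hypothetical illustration.

word_times = [
    ("Hello", 0.0),
    ("I'm", 0.6),
    ("going", 0.9),
    ("to", 1.1),
    ("talk", 1.3),
]

def seek_time_for_word(index: int) -> float:
    """Return the playback position for the clicked word."""
    _word, start = word_times[index]
    return start

def current_word_at(playback_time: float) -> int:
    """Inverse lookup: which word is being spoken now
    (the 'show me where I am' option)."""
    current = 0
    for i, (_word, start) in enumerate(word_times):
        if start <= playback_time:
            current = i
    return current
```

The same table serves both directions: clicking text seeks the audio, and the playing audio can highlight the current word.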



OK
so what's the aim of the idea?

well, it's to make multimedia resources easier: easier to find things, to organise things, and to use multimedia

multimedia is great: you can put videos and audio up, but actually using it for learning is harder

and more and more audio and video is going on the Web

and unlike text documents on the web, which you can link to and search,

with multimedia you tend to go and find a clip and play it from start to end

you
can try and find bits in it but it's quite difficult to find the bits you want

also, if you're a deaf student and you can't hear the audio, then it's fairly useless to you

so what's the solution?



well, the solution is to put the audio, video ... any sort of images together with subtitles, or captions as they're known in North America

and the problem is, creating these text captions manually is very time-consuming, so the idea is to use speech recognition to automatically create the captions and synchronise them with any audio, video, or images
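One way to picture that output is a standard caption file built from the recogniser's timings. WebVTT is a real caption format that browsers understand, but everything else here is a hedged sketch: the phrase timings are invented, and this is not the project's actual captioning pipeline:

```python
# Sketch: turn hypothetical speech-recogniser output into a WebVTT
# caption file, one cue per phrase. Timings are made up for illustration.

def fmt(t: float) -> str:
    """Format seconds as a WebVTT timestamp HH:MM:SS.mmm."""
    h, rem = divmod(t, 3600)
    m, s = divmod(rem, 60)
    return f"{int(h):02d}:{int(m):02d}:{s:06.3f}"

def to_webvtt(phrases) -> str:
    """phrases: list of (text, start_seconds, end_seconds) triples."""
    lines = ["WEBVTT", ""]
    for text, start, end in phrases:
        lines.append(f"{fmt(start)} --> {fmt(end)}")
        lines.append(text)
        lines.append("")
    return "\n".join(lines)

print(to_webvtt([("Hello, I'm going to talk to you", 0.0, 2.4)]))
```

Because each cue carries its start and end time, the captions are automatically synchronised with the audio or video they describe.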

I haven't recorded the video for this demonstration but there would be
a
video of me talking if I had

so
that's the starting point and that means you can find wherever you are in the stream of audio or text

if
you prefer to read things you can read them,

if
you prefer to listen you can listen

and you can jump between the media as your preference dictates

but
the idea is to do more than this

the
idea is to be able to tag, or highlight or annotate the multimedia

and you can do this because you can select any word or any section and tag it with whatever words or ideas you want; you can add your own notes,
you could bookmark in ... you could link from the media, from the words, to another resource
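The tagging idea might be modelled as annotations anchored to a time span of the synchronised media, each carrying free-form folksonomy tags, a note, or a link. Again a sketch with invented names under assumed structure, not the prototype's actual data model:

```python
# Sketch of an annotation anchored to a span of the synchronised media.
# Field names and example values are hypothetical illustration.
from dataclasses import dataclass, field

@dataclass
class Annotation:
    start: float                # seconds into the recording
    end: float
    selected_text: str          # the words the user highlighted
    tags: list = field(default_factory=list)  # free folksonomy tags
    note: str = ""              # the user's own comment
    link: str = ""              # optional link to another resource

ann = Annotation(
    start=12.5, end=15.0,
    selected_text="speech recognition",
    tags=["captioning", "accessibility"],
    note="Key technique for the whole approach.",
)

# Searching the multimedia then becomes searching the annotations:
def find_by_tag(annotations, tag):
    return [a for a in annotations if tag in a.tags]
```

Because each annotation keeps its start and end times, a tag search leads straight back to the matching moment in the audio or video.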

and
we are developing a prototype of that at the moment

and that would then allow you to do a lot of things ... you'll be able to search the multimedia ... organise it ... find things ... index it

but
also be able to collaborate with others, link to their information, their media streams

be
able to annotate each other's information and media whether they're blogs,
or
presentations

and this will help with the learning by making it more inclusive ... more personal

you can re-use the media much more easily, because you don't have to copy and edit a clip: you can just link to a position, and you can have a unique URL or URI for a position in the media,

because each word could have a unique URI

and
it is very flexible
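One way such per-word URIs could be written is with the W3C Media Fragments temporal syntax (`#t=start,end`, in seconds), which is a real standard that browsers apply to audio and video. The base URL and the word timings below are placeholders, and this is a sketch of the idea rather than the system's actual addressing scheme:

```python
# Sketch: give each word (or any time span) in a recording its own URI
# using the W3C Media Fragments temporal syntax, #t=start,end.
# The base URL and timings are made-up placeholders.

BASE = "http://example.org/media/talk.mp4"

def uri_for_span(start: float, end: float) -> str:
    return f"{BASE}#t={start:g},{end:g}"

def uri_for_word(word_index: int, word_times) -> str:
    """word_times: list of (word, start, end) triples from the recogniser."""
    _word, start, end = word_times[word_index]
    return uri_for_span(start, end)

print(uri_for_span(12.5, 15.0))  # http://example.org/media/talk.mp4#t=12.5,15
```

Linking to a moment then needs no copying or editing of clips: the URI itself names the position in the media.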



and here is a display similar to the one you're using at the moment, but this one has a video showing as well



so
the idea of the selection and highlighting is ...

there
is a text window and you can see something highlighted and

you can add comments

you
can add tags and these are time synchronised with the media stream



so ...


at ECS
and in the Learning Societies Lab we've got a lot of expertise to do with
mobile,
ubiquitous computing ... information modelling ... the social Web

we've done a lot of work with hypertext, the web, and knowledge technologies for e-learning

... we have a great deal of experience in accessibility and disability in technology

and
the use of speech recognition to automatically create synchronised captions
from
live or recorded audio or video

thank you
for listening