Multimedia Understanding - Fest Group 2001

General Aims

What will happen when cameras are commonly built into palmtops, phones, laptops etc? Can they be used to guide people? Can they help people navigate information spaces? For example, someone visiting a historic city could have links pop up on their palmtop/phone depending on what the camera sees. This could be aided by some position awareness as well (eg GPS), but in some situations (in museums or caves) only vision can help. We aim to build a system which tests this hypothesis. It relies on the vision system knowing when it has a 100% accurate hit! Otherwise it would keep showing false links and annoy the user.
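The "only act on a sure hit" idea above could be sketched as a simple confidence gate: the system stays silent unless the retrieval score clears a strict threshold. The names `link_for`, `match_score` and the 0.95 cut-off are illustrative assumptions, not part of the Artiste code.

```python
# Sketch: only surface a link when the retrieval score clears a strict
# threshold, so the user is never shown a false link.
# THRESHOLD is an assumed cut-off that would need tuning on real data.

THRESHOLD = 0.95

def link_for(match_score, painting_id):
    """Return the painting ID only on a confident hit, else None."""
    if match_score >= THRESHOLD:
        return painting_id
    return None  # stay silent rather than risk an annoying false link
```

The design choice is deliberately conservative: a missed link is a non-event, but a false link actively annoys the user.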

The end demo will hopefully be:
a laptop with a webcam and WiFi card sees the Zepler portrait, takes a photo and sends it to the server Corot, which does content-based retrieval and, if it recognises one of the target paintings, returns an ID. The ID is sent to Linkey - which hopefully returns a link to the text caption of the painting!
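The demo flow above can be sketched as a three-stage pipeline - capture, CBR lookup, link resolution. The three callables are hypothetical stand-ins (the real pieces are the webcam grabber, the Artiste code on Corot, and Linkey), wired with stubs so the plumbing can be dry-run without any hardware.

```python
# Hypothetical end-to-end flow of the demo. None of these function names
# come from the real system; they just mark where each component plugs in.

def run_pipeline(grab_frame, cbr_lookup, linkey_lookup):
    frame = grab_frame()              # webcam capture on the laptop
    painting_id = cbr_lookup(frame)   # content-based retrieval on Corot
    if painting_id is None:
        return None                   # no confident match: show nothing
    return linkey_lookup(painting_id) # Linkey maps ID -> caption link

# Stub wiring for a dry run:
if __name__ == "__main__":
    link = run_pipeline(
        grab_frame=lambda: b"jpeg-bytes",
        cbr_lookup=lambda frame: "zepler-portrait",
        linkey_lookup=lambda pid: "http://linkey.example/captions/" + pid,
    )
    print(link)
```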

We'll need

  1. To get the webcam talking to the laptop under Linux
  2. Write code which grabs a frame (have that already!) and sends it to Corot
  3. Set up some code on Corot which does a couple of CBR lookups using the Artiste code
  4. A script to pass the URL of the Zepler pic to Linkey!
  5. Links to be made for Linkey to use! - say for Zepler and the Monet painting? maybe a lab area?
  6. Some rendering of the final data on the laptop (could be plain text - could be ARToolkit?! and goggles!)
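Step 4 above might look something like this: a tiny script that builds the lookup request passing the picture's URL to Linkey. The endpoint and parameter name are assumptions for illustration, not the real Linkey interface.

```python
# Sketch of step 4: construct the Linkey lookup URL for a given image URL.
# LINKEY_BASE and the "src" parameter are hypothetical.

from urllib.parse import urlencode

LINKEY_BASE = "http://linkey.example/resolve"

def linkey_query(doc_url):
    """Build the request URL that asks Linkey for links on doc_url."""
    return LINKEY_BASE + "?" + urlencode({"src": doc_url})
```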

Also possible?

Sketch 1