Research project

Image-based Conversational Search for Users with Intellectual Disability

Project overview

WSI Pilot Project
Content-based Image Retrieval (CBIR) technology was developed more than two decades ago to enable multimedia search alongside traditional web/document/text search. One well-known problem in CBIR is the Semantic Gap (SG): the gap between how computers understand images and how users understand them. One way to bridge the SG for CBIR is to bring users into the search loop by enabling communication between the user and the CBIR system, so that users can tell the system their needs and preferences. One approach to bringing users into the search loop is conversational search. Many conversational search technologies have been developed for web/document/text/voice search. For example, Apple, Google and Amazon have developed Siri, Google Assistant and Alexa as voice search tools built on natural language processing technologies, and many web search platforms use chatbots to enable text-based conversational search. However, very few conversational search technologies have been developed for CBIR, where conversational interaction is needed the most because of the Semantic Gap.
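The user-in-the-loop idea above can be illustrated with a minimal relevance-feedback round: the system ranks images by similarity to a query vector, the user indicates which results match their intent, and the query is moved toward those images (a Rocchio-style update). This is a generic sketch, not the project's system: the image "features" are toy vectors, and all function and image names are illustrative assumptions.

```python
import math

def cosine(a, b):
    # Cosine similarity between two feature vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def rank(query, images):
    # Rank image ids by similarity to the current query vector.
    return sorted(images, key=lambda i: cosine(query, images[i]), reverse=True)

def refine(query, images, relevant, alpha=1.0, beta=0.75):
    # Rocchio-style update: move the query toward the centroid of the
    # images the user marked as relevant in this conversational turn.
    centroid = [sum(images[i][d] for i in relevant) / len(relevant)
                for d in range(len(query))]
    return [alpha * q + beta * c for q, c in zip(query, centroid)]

# Toy collection: 2-dimensional "features" standing in for visual descriptors.
images = {
    "dog1": [0.9, 0.1], "dog2": [0.8, 0.2],
    "cat1": [0.1, 0.9], "cat2": [0.2, 0.8],
}
query = [0.5, 0.5]                       # ambiguous initial query
query = refine(query, images, ["dog1"])  # user feedback: "dog1 is what I meant"
top2 = rank(query, images)[:2]           # dog images now fill the top ranks
```

Each conversational turn repeats the rank/feedback/refine cycle, which is how the dialogue gradually narrows the Semantic Gap between the user's intent and the system's representation.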
In addition to the research needs above, the Global Cooperation on Assistive Technology (GATE) programme launched by the World Health Organization aims to improve access to high-quality, affordable assistive technologies for people with varying disabilities, diseases and age-related conditions. The GATE programme has identified that people with Intellectual Disability (ID) need assistive technologies more than people with other disabilities. Some technologies, such as Grid 3, Boardmaker and TouchChat, have been developed to support people with ID in communication, learning and decision making. However, no technology has been created to support independent information searching and discovery for people with ID, which would further improve their learning, communication and decision making, and ultimately their quality of life. Current research shows that people with ID prefer visual aids when they interact with technologies and access information, and that they find it easier to learn and communicate through images than through words. We are therefore motivated to design and evaluate an image-based information searching system for people with ID to assist their learning, communication and decision making. This pilot project will focus on designing and evaluating the technology for university students with ID.

Staff

Lead researcher

Dr Haiming Liu PhD, PgCAP, SFHEA

Associate Professor

Research interests

  • User-centred interactive information access
  • User preference and behaviour modelling
  • Multimedia information retrieval

Collaborating research institutes, centres and groups