The Dynamic Generation of User-Customized Multimedia Presentations by Hyun Shin

The rapid growth of online digital information over the last decade has made it difficult for a typical user to find and read information. A recent study shows that there are around 40 million Web sites, and the amount of digital media (non-textual information such as images, audio, and video) on the Web is enormous and growing at a staggering rate. In addition, users of new media now have high expectations about what they can access online and are demanding more powerful technologies. However, most current web services offer only limited ways to present multi-modal elements, and web search engines return huge lists of hyperlinks rather than a coherent story. Furthermore, no existing web search system can accommodate a user’s intention and retrieve what the user actually expects to read.
To solve these problems, the proposed system creates story structures that can be dynamically instantiated from various multi-modal elements in response to different user requests. In addition, the system focuses on the quality of results rather than their quantity. Furthermore, it tailors the information so that a user reads a story at an appropriate level of detail, depending on the user’s intention level, which ranges from general to specific.
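As an illustration only, the idea of matching content to a user's position on the general-to-specific spectrum could be sketched as a simple filter. The function and field names below are assumptions for the sketch, not part of the actual system.

```python
# Hypothetical sketch: filter candidate content objects by how well
# their specificity matches the user's intention level
# (0.0 = most general, 1.0 = most specific).

def select_by_generality(objects, user_level, tolerance=0.2):
    """Keep objects whose specificity lies within `tolerance` of the
    user's position on the general-to-specific spectrum."""
    return [obj for obj in objects
            if abs(obj["specificity"] - user_level) <= tolerance]

content = [
    {"id": "overview",  "specificity": 0.1},
    {"id": "tutorial",  "specificity": 0.5},
    {"id": "reference", "specificity": 0.9},
]

# A general request keeps only the overview-style object.
print(select_by_generality(content, user_level=0.1))
```

A real system would of course derive the specificity scores from the content itself; the point is only that the intention level acts as a filter on what is returned.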


Figure 1. Overall functional architecture

The overall functional architecture of the system is illustrated in Figure 1. The system has two key phases: story assembly and content query formulation. In the story assembly phase, a novel structured rule-based decision process determines the proper story type and invokes a primary search and, when needed, a secondary search in the content query formulation phase. There are currently four domain-independent story types: summary, text-based, non-text-based, and structured collection. First, the story assembly module receives a modified user request from a query processing procedure; this request consists of related concepts, a position on the generality spectrum, the media types the user prefers, and so on. These inputs then invoke a primary search, a constraint-based k-nearest-neighbor search that retrieves multi-modal content objects. The results are sent to the story type decision module, which determines the proper story type and then fills the chosen story type with multi-modal elements (content objects). If necessary, the decision module also invokes a secondary search to retrieve extra elements. A sample text-based story type result is shown in Figure 2.
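The constraint-based k-nearest-neighbor primary search could, in rough outline, look like the sketch below: rank content objects by distance to a query vector, but only among objects satisfying the user's constraints (here, preferred media types). All names and the distance metric are illustrative assumptions.

```python
# Hypothetical sketch of a constraint-based k-nearest-neighbor search:
# filter content objects by a user constraint, then return the k
# objects whose feature vectors are closest to the query vector.
import math

def knn_with_constraints(query_vec, objects, k, allowed_media):
    def dist(a, b):
        # Euclidean distance between two feature vectors.
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    candidates = [o for o in objects if o["media"] in allowed_media]
    candidates.sort(key=lambda o: dist(query_vec, o["vec"]))
    return candidates[:k]

objects = [
    {"id": 1, "media": "text",  "vec": (0.0, 0.0)},
    {"id": 2, "media": "image", "vec": (0.1, 0.1)},
    {"id": 3, "media": "text",  "vec": (0.9, 0.9)},
]
# Nearest two text objects to the query; the image is filtered out
# by the media-type constraint before ranking.
print([o["id"] for o in
       knn_with_constraints((0.0, 0.0), objects, 2, {"text"})])
```

Applying the constraint before the nearest-neighbor ranking, rather than after, guarantees that all k returned objects respect the user's media preferences.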

Figure 2. A sample text-based story type result
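The story assembly phase described above, with its rule-based choice among the four story types and its optional secondary search, can be sketched as follows. The specific rules and thresholds are assumptions made for the sketch; the paper does not specify them.

```python
# Illustrative sketch of the story assembly phase: a rule-based
# decision among the four domain-independent story types, driven by
# the mix of media in the retrieved content objects, followed by an
# optional secondary search for extra elements.

def decide_story_type(objects):
    """Pick a story type from the media mix (rules are assumptions)."""
    texts = [o for o in objects if o["media"] == "text"]
    non_texts = [o for o in objects if o["media"] != "text"]
    if len(objects) <= 2:
        return "summary"
    if not non_texts:
        return "text-based"
    if not texts:
        return "non-text-based"
    return "structured collection"

def assemble_story(primary_results, secondary_search=None):
    story_type = decide_story_type(primary_results)
    elements = list(primary_results)
    # Invoke a secondary search only when extra elements are needed,
    # e.g. a structured collection with too few items.
    if (story_type == "structured collection" and len(elements) < 5
            and secondary_search is not None):
        elements += secondary_search(needed=5 - len(elements))
    return {"type": story_type, "elements": elements}

results = [{"media": "text"}, {"media": "image"},
           {"media": "text"}, {"media": "video"}]
print(assemble_story(results)["type"])  # -> structured collection
```

A mixed set of text and non-text objects yields a structured collection story; a set of only text objects would yield a text-based story like the sample in Figure 2.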


© 2000-2013 Semantic Information Research Laboratory. All Rights Reserved.