Presentation: At Qualcomm Research in San Diego, CA (2008) "From Computational Photography and Video to Computational Journalism"

From Computational Photography and Video to Computational Journalism

Abstract

Digital image capture, processing, and sharing have become pervasive in our society. This has had a significant impact on how we create novel scenes, how we share our experiences, and how we interact with images and videos. In this talk, I will present an overview of a series of ongoing efforts in the analysis of images and videos for rendering novel scenes. First, I will briefly discuss our work on Video Textures, where repeating information is extracted to generate extended video sequences. I will also describe some of our extensions to this approach that allow for the controlled generation of animations of video sprites. We have developed various learning and optimization techniques that allow for video-based animations of photo-realistic characters. Using these approaches as a foundation, I will then show how new images and videos can be generated. I will show examples of Photorealistic and Non-photorealistic Renderings of Scenes (Videos and Images) and how these methods support the media reuse culture so common these days with user-generated content. I will then describe some of our newer efforts toward understanding mobile imaging and video, and also discuss issues of collaborative imaging and authoring, ad-hoc sensor networks, and peer production with images and videos, leading to new concepts of how computation has impacted journalism. Time permitting, I will also share some of our efforts on video annotation and how we have taken some of these new concepts of video analysis into classrooms.
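
The Video Textures idea mentioned in the abstract can be illustrated with a minimal sketch: measure how similar every pair of frames is, then extend playback indefinitely by occasionally jumping between frames whose successors look nearly identical. The Python below is an illustrative sketch of that general idea, not the authors' implementation; the function names and parameters (frame_distances, synthesize, sigma_scale) are assumptions for the example.

```python
# Minimal sketch of the video-texture idea: compute pairwise frame distances,
# then generate an arbitrarily long sequence by jumping between frames that
# would plausibly follow one another.
import numpy as np

def frame_distances(frames):
    """frames: array of shape (n, h, w[, c]) -> (n, n) matrix of L2 distances."""
    flat = frames.reshape(len(frames), -1).astype(np.float64)
    sq = np.sum(flat ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * flat @ flat.T
    return np.sqrt(np.maximum(d2, 0.0))

def synthesize(frames, length, sigma_scale=0.1, rng=None):
    """Return `length` frame indices. After showing frame i, jump to frame j
    with probability favoring small D[i+1, j], i.e. frame j resembles the
    frame that would naturally have come next."""
    rng = rng or np.random.default_rng()
    D = frame_distances(frames)
    n = len(frames)
    nonzero = D[D > 0]
    sigma = sigma_scale * nonzero.mean() if nonzero.size else 1.0
    out, i = [], 0
    for _ in range(length):
        out.append(i)
        # Transition probabilities from frame i to each candidate next frame j.
        probs = np.exp(-D[(i + 1) % n] / sigma)
        probs /= probs.sum()
        i = rng.choice(n, p=probs)
    return out
```

For example, loading a short clip into a NumPy array of shape (n, h, w, 3) and calling synthesize(frames, 10 * n) yields an index sequence roughly ten times the original length that loops seamlessly through visually compatible transitions.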
