Real-time multi-frame superresolution

Design team members: Calvin Chan, Michael Frankovich, Jeff Glaister, Adrian Tang

Supervisor: Dr. Alexander Wong

Background

The introduction of High-Definition (HD) displays in the market has led to the adoption of many different standards of video resolution. Content can now be encoded in resolutions ranging from Standard Definition 480i (DVD quality) to Full HD 1080p, to be consumed on displays that support a variety of different resolutions. The problem of scaling a low-resolution source to a high-resolution display is immediately evident to anyone who has attempted it. Conventional upscaling methods cause noticeable degradation in picture quality, so there is a need for a solution that minimizes this degradation.
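To illustrate why conventional upscaling adds no detail, consider the simplest single-frame method, nearest-neighbour replication: each source pixel is just repeated, so the output is larger but carries no new information. A minimal numpy sketch (the function name is our own):

```python
import numpy as np

def upscale_nearest(img, factor):
    """Conventional single-frame upscaling: repeat each pixel,
    enlarging the image without creating any new detail."""
    return np.repeat(np.repeat(img, factor, axis=0), factor, axis=1)

# a tiny 2x2 checkerboard image
img = np.array([[0, 255],
                [255, 0]], dtype=np.uint8)

big = upscale_nearest(img, 2)
# every source pixel becomes a 2x2 block: the 4x4 result is
# blockier, not more detailed
print(big)
```

Smarter interpolation (bilinear, bicubic) smooths these blocks, but the underlying limitation is the same: a single frame contains no extra detail to recover.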

Some solutions do exist on the market to tackle this problem, such as Progressive Scan DVD players and DVD upconverters. While these solutions make the picture more pleasing to the eye, the missing detail is merely hidden by applying various visual filters, not recovered.

State-of-the-art research on multi-frame algorithms is being done in academia. These methods use temporal information within the video to enhance the picture and recover missing detail. However, no viable consumer product can currently perform this task in real time.

Project description

The team's objective is to develop a real-time version of the multi-frame superresolution algorithms presented in current academic research papers. In doing so, a consumer product that takes a low-resolution input and produces a true high-resolution output with recovered detail can be achieved.

This addresses a consumer need, since many people still enjoy Standard Definition (SD) content available on DVDs and broadcast television on their new High-Definition televisions. Producing a Real-Time Multi-Frame SuperResolution (RTMFSR) solution would allow them to better enjoy their archived content on this new technology.

In addition, an RTMFSR solution can help reduce bandwidth requirements for streaming video on the Internet. Content providers could lower the quality of the source content while maintaining the video quality seen by the user, by relying more on client-side content processing (RTMFSR).

Design methodology

The methodology for developing an RTMFSR solution follows three key steps.

  • Rolling Source Analysis for Usable Frames
  • Fast and Robust Method of Super Resolution
  • Visual Filtering

The processing pipeline reads in source content and analyzes frames before and after the target frame to determine which information can be used to enhance the picture. The usable frames are then subjected to a fast and robust super-resolution method to build a new picture at the higher, desired resolution. A patch-based approach will be taken to decrease processing time, avoiding the conventional, processor-intensive requirement of motion-vector calculation. Once reconstruction of the new high-resolution frame is complete, several visual filters (deblocking, colour correction, etc.) will be applied to make the image more visually pleasing, and the final, optimized frame will be delivered to the display.
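The patch-based fusion idea can be sketched as follows, assuming grayscale numpy frames. This is a simplified illustration, not the team's implementation: the function names and the sum-of-squared-differences matching criterion are our own choices, the local search window stands in for explicit motion vectors, and a real RTMFSR system would reconstruct onto a finer pixel grid and run on dedicated hardware to reach real-time rates.

```python
import numpy as np

def extract_patches(frame, size, step):
    """Slide a window over the frame, yielding (y, x, patch) tuples."""
    h, w = frame.shape
    return [(y, x, frame[y:y + size, x:x + size])
            for y in range(0, h - size + 1, step)
            for x in range(0, w - size + 1, step)]

def best_match(patch, candidates):
    """Pick the candidate patch with the smallest
    sum-of-squared-differences to the target patch."""
    errs = [np.sum((patch.astype(np.float64) - c) ** 2) for c in candidates]
    return candidates[int(np.argmin(errs))]

def fuse_frames(target, neighbours, size=8):
    """For each patch of the target frame, average in the closest patch
    from each neighbouring frame. Searching a small window around the
    patch position replaces explicit motion-vector calculation."""
    fused = np.zeros_like(target, dtype=np.float64)
    counts = np.zeros_like(target, dtype=np.float64)
    for y, x, patch in extract_patches(target, size, size):
        acc = patch.astype(np.float64)
        n = 1
        for nb in neighbours:
            # candidate patches near (y, x) in the neighbouring frame
            cands = [p for (py, px, p) in extract_patches(nb, size, size)
                     if abs(py - y) <= size and abs(px - x) <= size]
            if cands:
                acc += best_match(patch, cands).astype(np.float64)
                n += 1
        fused[y:y + size, x:x + size] += acc / n
        counts[y:y + size, x:x + size] += 1
    return fused / np.maximum(counts, 1)
```

Averaging matched patches from several frames suppresses noise and, with sub-pixel registration onto a higher-resolution grid, is what allows genuine detail to be recovered rather than interpolated.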