An automated, reliable way to measure image quality
Researchers from Fraunhofer Heinrich Hertz Institute and BIFOLD, led by Sebastian Bosse, have received the prestigious IEEE Signal Processing Society Best Paper Award for their groundbreaking work on image quality assessment using deep neural networks. Congratulations to the team! The award-winning research introduces a novel approach to automatically assessing image quality that closely matches human perception. Their deep neural network system works in two ways:
1. With a reference image (full-reference): comparing a processed image to its original version
2. Without a reference image (no-reference): evaluating image quality standalone
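To make the full-reference setting concrete, a classical (non-learned) example is PSNR, which scores a processed image by comparing it pixel-by-pixel against the original. This is only an illustrative baseline, not the paper's neural method:

```python
import numpy as np

def psnr(reference, distorted, max_val=255.0):
    """Classical full-reference metric: peak signal-to-noise ratio (in dB).

    Higher values mean the distorted image is closer to the reference.
    """
    mse = np.mean((reference.astype(float) - distorted.astype(float)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return float(10.0 * np.log10(max_val ** 2 / mse))
```

No-reference assessment has no such simple closed-form baseline, which is precisely why learned models like the one in the awarded paper are needed there.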
This innovative technology can be used, for example, to determine how much quality is lost when streaming services compress videos for faster delivery, when social media platforms optimize uploaded photos, or when smartphone cameras process images in challenging lighting conditions.
Digital images and videos are a constant companion in our everyday lives. From pixelation in streamed movies to blurry video calls or poor-quality photos on social media, image quality directly impacts our experience. One may think of it like having a professional photographer instantly evaluate billions of images to ensure the clearest possible version reaches the screen — all happening automatically behind the scenes. The technology will improve numerous applications:
- Streaming services can deliver better picture quality while using less bandwidth
- Video conferencing platforms can prioritize the most important visual elements
- Social media services can better preserve image quality during processing
- Camera manufacturers can enhance image processing algorithms
Unlike previous approaches that relied on handcrafted features or simplified models of human vision, this system uses deep learning to discover quality-relevant patterns directly from data, employing a much deeper neural network (10 convolutional layers) than previous attempts. It processes images in small patches and, uniquely, learns both the local quality of each patch and the relative importance of each image region, so that visually salient areas contribute more to the final score. This approach builds on recent advances in deep learning to create a system that "sees" images more like humans do. It outperforms existing methods on standard test databases and generalizes well to new, unseen types of images and distortions.
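The patch-based aggregation described above can be sketched as a weighted average: each patch gets a quality score and an importance weight, and the image-level score is the weight-normalized sum. In the paper both quantities come from the trained CNN; in this minimal sketch, the score and weight functions are crude hypothetical stand-ins (local variance and mean intensity) used only to show the aggregation mechanics:

```python
import numpy as np

def extract_patches(image, patch_size=32):
    """Split a grayscale image into non-overlapping square patches."""
    h, w = image.shape
    patches = []
    for y in range(0, h - patch_size + 1, patch_size):
        for x in range(0, w - patch_size + 1, patch_size):
            patches.append(image[y:y + patch_size, x:x + patch_size])
    return patches

def patch_score(patch):
    # Stand-in for the CNN's per-patch quality estimate
    # (hypothetical: local variance as a crude proxy).
    return float(patch.var())

def patch_weight(patch):
    # Stand-in for the learned importance weight; the paper learns this
    # jointly with the score. Here: mean intensity as a dummy saliency,
    # kept strictly positive so the normalization is well defined.
    return float(patch.mean()) + 1e-6

def weighted_quality(image, patch_size=32):
    """Aggregate per-patch scores using importance weights."""
    patches = extract_patches(image, patch_size)
    scores = np.array([patch_score(p) for p in patches])
    weights = np.array([patch_weight(p) for p in patches])
    return float((weights * scores).sum() / weights.sum())
```

The key design choice this illustrates is that aggregation is not a plain average: regions the model deems important dominate the final quality score, mirroring how human attention is drawn to some parts of an image more than others.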
The Publication
S. Bosse, D. Maniry, K. -R. Müller, T. Wiegand and W. Samek, "Deep Neural Networks for No-Reference and Full-Reference Image Quality Assessment," in IEEE Transactions on Image Processing, vol. 27, no. 1, pp. 206-219, Jan. 2018, doi: 10.1109/TIP.2017.2760518.
The Award
The IEEE Signal Processing Society Best Paper Award is one of the most prestigious recognitions in the field of signal processing. Each year, thousands of papers are published across IEEE SPS journals, and even more are submitted to major conferences like ICASSP. From this vast pool, only a small number of papers receive this distinction.
The award recognizes work of exceptional merit, originality, and impact within the Society's technical scope. Recipients are selected through a rigorous process involving technical committees, editorial boards, and the Society's Awards Board. This achievement highlights not only the technical excellence of the research but also its potential to transform how we assess and improve visual quality in our increasingly digital world.