Soon you may be able to sharpen those poor-quality screenshots from the anime you just watched.
Google’s new AI upscaling tech turns low-resolution photographs into high-resolution ones. The technology comes from Google’s Brain Team, which has introduced two new diffusion models that can create high-quality images: Super-Resolution via Repeated Refinement (SR3) and the Cascaded Diffusion Model (CDM).
Super-Resolution via Repeated Refinement
SR3, as the researchers describe it, is a “diffusion model that takes as input a low-resolution image, and builds a corresponding high-resolution image from pure noise.”
To train this diffusion model, noise is progressively added to a high-resolution image until only “pure noise” is left. The model then learns to reverse this process, removing the noise step by step under the “guidance of the input low-resolution image.” That reversal is what upscales the image (as the name suggests).
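The two halves of that process can be sketched in a few lines of NumPy. This is only a toy illustration of the idea, not Google’s actual SR3 network: the function names are hypothetical, and the “denoiser” is replaced by a simple pull toward an upsampled copy of the low-resolution input, where a trained neural network would normally predict the noise.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward_diffuse(hi_res, steps=200, sigma=0.1):
    # Training-time corruption: keep adding Gaussian noise to the
    # high-resolution image until little of the original signal remains.
    noisy = hi_res.copy()
    for _ in range(steps):
        noisy += rng.normal(0.0, sigma, size=hi_res.shape)
    return noisy

def reverse_refine(noisy, low_res, steps=200, step_size=0.05):
    # Inference-time refinement: starting from noise, take many small
    # steps toward an image consistent with the low-resolution input.
    # `upsampled` stands in for the guidance a trained model would provide.
    upsampled = np.repeat(np.repeat(low_res, 2, axis=0), 2, axis=1)
    img = noisy
    for _ in range(steps):
        img = img + step_size * (upsampled - img)  # placeholder "denoiser"
    return img
```

The real model conditions a learned denoising network on the low-resolution image at every step; the loop structure, though, is the same.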
Cascaded Diffusion Model
This diffusion model was also built with machine learning. The researchers describe it as “a class-conditional diffusion model trained on ImageNet data to generate high-resolution natural images.”
As the name suggests, Google has built a cascaded set of diffusion models. In simple terms, the AI takes one low-res image and passes it through a chain of SR3 models, each of which upscales the image without destroying its quality. So a 64×64 image can be diffused into a 256×256 one, and that image can be further upscaled to 1024×1024 without ruining its quality.
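The chaining itself is straightforward to sketch. In this hypothetical snippet, each SR3 stage is replaced by a plain nearest-neighbour upsample just to show how the stages compose; the real stages are full diffusion models that invent plausible new detail at each step.

```python
import numpy as np

def sr3_stage(image, factor):
    # Stand-in for one SR3 model: a nearest-neighbour upsample.
    # The real model runs a diffusion process to fill in new detail.
    return np.repeat(np.repeat(image, factor, axis=0), factor, axis=1)

def cascade(image, factors=(4, 4)):
    # Chain the stages: 64x64 -> 256x256 -> 1024x1024.
    for f in factors:
        image = sr3_stage(image, f)
    return image
```

Keeping each stage’s jump small (4× here) is what lets the cascade reach 1024×1024 without the artefacts a single large jump would produce.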
Google’s Brain Team is still looking for ways to improve the algorithms and find practical uses for them, while acknowledging that this poses design challenges. “With SR3 and CDM, we have pushed the performance of diffusion models to the state-of-the-art on super-resolution and class-conditional ImageNet generation benchmarks,” they said in a blog post.