Ozcan Lab/UCLA
The system uses an algorithm that encodes a high-resolution image to a lower-resolution one, then translates the compressed image back to its original resolution with a decoder that unscrambles the incoming light.
A UCLA team has developed a technology for projecting high-resolution computer-generated images using one-sixteenth the number of pixels contained in their source images. The system compresses images based on an artificial intelligence algorithm, and then decodes them using an optical decoder — a thin, translucent sheet of plastic produced using a 3D printer — that is designed to interact with light in a specific way as part of the same algorithm. The decoder consumes no power, which could result in higher-resolution displays that use less power and require less data than current display technologies.
Projecting high-resolution 3D holograms requires so many pixels that the task is beyond the reach of current consumer technology. The ability to compress image data and instantly decode compressed images using a thin, transparent material that does not consume power, as demonstrated in the study, could help overcome that barrier and result in wearable technology that produces higher quality images while using less power and storage than today’s consumer technology.
The system uses an algorithm that encodes a high-resolution image to a lower-resolution one. The result is a pixelated pattern, similar to a QR code, that is unreadable to the human eye. That compressed image is then translated back to its original resolution by a decoder designed to bend and unscramble the incoming light.
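The encode-then-decode idea can be illustrated with a toy numerical sketch. The snippet below is only an analogy: the actual system trains a neural-network encoder jointly with a 3D-printed diffractive decoder that operates passively on light, none of which is modeled here. A simple block-average stands in for the encoder, and pixel replication stands in for the decoder, to show how a 4x reduction per side leaves one-sixteenth of the pixels.

```python
import numpy as np

def encode(image, factor=4):
    """Toy encoder: average each factor x factor block into one pixel,
    producing a compressed, QR-code-like pattern (stand-in for the
    system's learned neural-network encoder)."""
    h, w = image.shape
    return image.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def decode(code, factor=4):
    """Toy decoder: expand the compressed pattern back to the original
    resolution (stand-in for the passive optical decoder, which does
    this by bending incoming light rather than by computation)."""
    return np.repeat(np.repeat(code, factor, axis=0), factor, axis=1)

original = np.random.rand(64, 64)       # stand-in high-resolution image
compressed = encode(original)           # 16 x 16: 1/16 of the pixels
restored = decode(compressed)           # back to 64 x 64
print(compressed.size / original.size)  # 0.0625, i.e. about 6%
```

The ratio printed at the end (1/16 = 6.25%) matches the roughly 6% pixel count reported for the encoded images.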
Testing the system on images in black, white and shades of gray, the researchers demonstrated that the technology could effectively project high-resolution images from encoded versions containing only about 6% of the pixels in the original. The team also tested a similar system that successfully encoded and decoded color images.
The technology could eventually be used for applications like projecting high-resolution holographic images for virtual reality or augmented reality goggles. By encoding images using a fraction of the data contained in the original and decoding them without using electricity, the system could lead to holographic displays that are smaller, less expensive and have faster refresh rates.
The technology could appear in consumer electronics as soon as five years from now, according to the paper’s corresponding author, Aydogan Ozcan, Chancellor’s Professor of Electrical Engineering and Bioengineering, Volgenau Professor of Engineering Innovation at the UCLA Samueli School of Engineering and an associate director of the California NanoSystems Institute at UCLA.
Other potential applications include image encryption and medical imaging.
The co-first authors of the study are UCLA doctoral students Çağatay Işıl and Deniz Mengu. Mona Jarrahi, UCLA’s Northrop Grumman Professor of Electrical Engineering, is a co-senior author. Additional authors are Yifan Zhao, Anika Tabassum, Jingxi Li and Yi Luo, all of UCLA.
The study is published in Science Advances.