The concept of selective imaging – the idea that you keep only the parts of an image that you need – is not new. Blurring, encryption, and in-camera editing are among the methods currently available. However, they present security issues because editing takes place after data collection, which means that RAW files are still vulnerable. A recent study may present a solution. In it, a team from the University of California, Los Angeles (UCLA) has developed a privacy camera capable of erasing unwanted information through a technique called light diffraction.
Problems with current technology
Simplistic solutions, such as capturing low-resolution files, are impractical because they sacrifice quality across the entire image; some applications, such as autonomous-driving systems, cannot operate at such low resolutions. There is also the risk of information retrieval by models capable of reconstructing the original image. The UCLA team instead explored technology that edits unwanted objects out of a frame before a digital file is ever created, eliminating the problem of sensitive information persisting in RAW files.
“Passively applying privacy before images are digitized may potentially provide more desired solutions to both of these previously described challenges,” the study authors write.
How does the UCLA privacy camera work?
The UCLA privacy camera works via a concept known as diffractive computing. The process gives the camera the ability to capture what the researchers call “target types” (in their example, a handwritten “2”) with good accuracy. Any input that doesn’t match the target type is erased, so the output shows only the objects of interest. Essentially, the diffractive layers, stacked together in 3D, act as a filter through which only the necessary information can pass.
As a result, target types appear in the output image, while everything else is rendered as noise (a photographer might liken it to background blur). This method is said to be safer than others because information about non-target types is never recorded in the first place. The final image also demands less storage and transmission bandwidth from the camera.
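The filtering idea can be sketched in software. The toy NumPy script below is a conceptual analogy only, not the team’s optical design: it stands in for the optimized diffractive layers with a fixed linear projection onto a “target” subspace, so a target-like input passes through largely intact while a non-target input loses most of its energy. The image shapes, the crude digit patterns, and the subspace rank are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_target(noise=0.1):
    """A crude 8x8 '2'-like pattern plus sensor noise (illustrative stand-in)."""
    img = np.zeros((8, 8))
    img[1, 2:6] = 1.0   # top bar
    img[2:4, 5] = 1.0   # upper-right stroke
    img[4, 2:6] = 1.0   # middle bar
    img[5, 2] = 1.0     # lower-left stroke
    img[6, 2:6] = 1.0   # bottom bar
    return img + noise * rng.normal(size=(8, 8))

def make_nontarget(noise=0.1):
    """A '1'-like vertical stroke: an object the camera should erase."""
    img = np.zeros((8, 8))
    img[1:7, 4] = 1.0
    return img + noise * rng.normal(size=(8, 8))

# "Train": build a fixed linear operator that projects onto the subspace
# spanned by target examples -- a stand-in for the optimized diffractive layers.
targets = np.stack([make_target().ravel() for _ in range(20)])
U, _, _ = np.linalg.svd(targets.T, full_matrices=False)
basis = U[:, :3]                 # low-rank target subspace
P = basis @ basis.T              # the fixed linear "filter"

def camera(img):
    """Apply the fixed filter, as the optics would during light propagation."""
    return (P @ img.ravel()).reshape(8, 8)

def energy(img):
    return float(np.sum(img ** 2))

t_out = camera(make_target())
n_out = camera(make_nontarget())
print(f"target energy after filter:     {energy(t_out):.2f}")
print(f"non-target energy after filter: {energy(n_out):.2f}")
```

The key point the analogy preserves is that the filter is fixed and applied before any recording step: the non-target object is attenuated by the transform itself, not cropped out of a stored file afterwards.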
The UCLA privacy camera experiment and results
To test the idea, the researchers used 3D-printed diffractive layers capable of processing a “target type” from a group of handwritten digits – in this case, the number two. They found that the model remained accurate across handwriting variations and was able to identify multiple instances of the target type within a group.
“Unlike conventional privacy-preserving imaging methods that rely on post-processing images after they are scanned, our diffractive camera design enhances privacy protection by selectively erasing information from non-target objects during light propagation, which reduces the risk of recording sensitive raw image data,” the team concludes.
Although the technology is still in the early stages of development, it offers a potential new solution for a number of applications, from self-driving cars to monitoring tools. We’re interested to see where it ends up.