“it’s a good time to remember we shouldn’t trust everything we see”

The era of easily faked, AI-generated photos is quickly emerging—Dave Gershgorn, Quartz

Until this month, it seemed that images generated by generative adversarial networks (GANs) that could fool a human viewer were years off. But research released last week by Nvidia, a manufacturer of graphics processing units that has cornered the market on deep learning hardware, shows that the method can now generate high-resolution, believable images of celebrities, scenery, and objects. GAN-created images are already being sold as replacements for fashion photographers: a startup called Mad Street Den told Quartz earlier this month that it’s working with North American retailers to replace clothing images on websites with generated ones.
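For readers curious what “training a GAN” actually involves, the sketch below shows the core adversarial setup: two networks, a generator and a discriminator, trained against each other. It is a toy example assuming PyTorch, with random tensors standing in for a real photo dataset, and all names and sizes are placeholders. Nvidia’s actual system is a much larger convolutional model trained with a progressive-growing technique, which this sketch does not attempt to reproduce.

```python
import torch
import torch.nn as nn

# Placeholder sizes for illustration only; Nvidia's progressive GAN works
# at far higher resolution (up to 1024x1024) with convolutional networks.
LATENT_DIM = 64        # size of the random noise vector fed to the generator
IMG_DIM = 28 * 28      # flattened image size for this toy example
BATCH = 32

# Generator: maps random noise to a fake "image".
G = nn.Sequential(
    nn.Linear(LATENT_DIM, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, IMG_DIM), nn.Tanh(),
)

# Discriminator: scores how "real" an image looks (1 = real, 0 = fake).
D = nn.Sequential(
    nn.Linear(IMG_DIM, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

# One adversarial training step. Random tensors stand in for a batch of
# real photos; an actual run would load a face dataset here instead.
real = torch.randn(BATCH, IMG_DIM)

# 1) Train the discriminator to separate real images from generated ones.
noise = torch.randn(BATCH, LATENT_DIM)
fake = G(noise).detach()  # detach so this step doesn't update the generator
d_loss = loss_fn(D(real), torch.ones(BATCH, 1)) + \
         loss_fn(D(fake), torch.zeros(BATCH, 1))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# 2) Train the generator to fool the discriminator.
noise = torch.randn(BATCH, LATENT_DIM)
g_loss = loss_fn(D(G(noise)), torch.ones(BATCH, 1))  # label fakes as "real"
opt_g.zero_grad(); g_loss.backward(); opt_g.step()

print(f"d_loss={d_loss.item():.3f}  g_loss={g_loss.item():.3f}")
```

Repeated over many batches, the two networks push each other to improve: the discriminator gets better at spotting fakes, and the generator gets better at producing images that pass inspection.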

Nvidia’s results look so realistic because the company compiled a new library of 30,000 images of celebrities, which it used to teach the algorithms what people look like. Researchers found in 2012 that the amount of data a neural network is shown strongly affects its accuracy; typically, the more data the better. These 30,000 images gave each algorithm enough data not only to understand what a human face looks like, but also how details like beards and jewelry make a face “believable.”

The era of easily faked photos is quickly emerging, much as it did when Photoshop became widespread, so it’s a good time to remember we shouldn’t trust everything we see.
