Glitch

Glitch by Hugh Howey is too brief a vignette to be good, but it isn’t bad. It’s an okay short about a robot gladiator who becomes sentient, which sets off a moral dilemma and a struggle for control. It reads like an excerpt from, or a pitch for, a larger story, but by the end I didn’t really need anything more from it. I’m not sure there was much here that hasn’t been told before, more fully and quite well, in similar stories; Thomas J. Ryan’s classic The Adolescence of P-1 is just one example.

I made 5 highlights.

Originally posted on my personal blog at Glitch

“it’s a good time to remember we shouldn’t trust everything we see”

The era of easily faked, AI-generated photos is quickly emerging—Dave Gershgorn, Quartz

Until this month, it seemed that GAN-generated images that could fool a human viewer were years off. But last week, research released by Nvidia, a manufacturer of graphics processing units that has cornered the market on deep learning hardware, shows that this method can now be used to generate high-resolution, believable images of celebrities, scenery, and objects. GAN-created images are also already being sold as replacements for fashion photographers: a startup called Mad Street Den told Quartz earlier this month it’s working with North American retailers to replace clothing images on websites with generated images.

Nvidia’s results look so realistic because the company compiled a new library of 30,000 images of celebrities, which it used to train the algorithms on what people look like. Researchers found in 2012 that the amount of data a neural network is shown is important to its accuracy: typically, the more data the better. These 30,000 images gave each algorithm enough data not only to understand what a human face looks like, but also how details like beards and jewelry make a “believable” face.

The era of easily faked photos is quickly emerging, much as it did when Photoshop became widely prevalent, so it’s a good time to remember we shouldn’t trust everything we see.
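As an aside, the adversarial setup behind results like Nvidia’s is easy to sketch. Below is a minimal GAN training loop in PyTorch; the tiny fully connected networks and the MNIST dataset are illustrative stand-ins of my own, nothing like Nvidia’s progressively grown, high-resolution model.

```python
# Minimal GAN training loop (illustrative sketch, not Nvidia's model).
# Requires: torch, torchvision.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

device = "cuda" if torch.cuda.is_available() else "cpu"
latent_dim = 64

# Generator: maps random noise to a flattened 28x28 image.
G = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, 28 * 28), nn.Tanh(),
).to(device)

# Discriminator: outputs a logit scoring how "real" an image looks.
D = nn.Sequential(
    nn.Linear(28 * 28, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1),
).to(device)

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

data = DataLoader(
    datasets.MNIST("data", download=True, transform=transforms.Compose([
        transforms.ToTensor(), transforms.Normalize((0.5,), (0.5,)),
    ])),
    batch_size=128, shuffle=True,
)

for epoch in range(5):
    for real, _ in data:
        real = real.view(real.size(0), -1).to(device)
        b = real.size(0)
        ones = torch.ones(b, 1, device=device)
        zeros = torch.zeros(b, 1, device=device)

        # Discriminator step: push real images toward 1, fakes toward 0.
        fake = G(torch.randn(b, latent_dim, device=device))
        d_loss = loss_fn(D(real), ones) + loss_fn(D(fake.detach()), zeros)
        opt_d.zero_grad()
        d_loss.backward()
        opt_d.step()

        # Generator step: try to make the discriminator call fakes real.
        g_loss = loss_fn(D(fake), ones)
        opt_g.zero_grad()
        g_loss.backward()
        opt_g.step()
```

The two losses are the whole trick: the discriminator learns to separate real images from generated ones while the generator learns to produce images the discriminator accepts, and that arms race is what drives the realism described above.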

“We know that AI terrifies us in the abstract sense. But can AI scare us in the immediate, visceral sense?”

MIT researchers trained AI to write horror stories based on 140,000 Reddit posts—Thu-Huong Ha, Quartz [see also]

The team behind Shelley is hoping to learn more about how machines can evoke emotional responses in humans. “The rapid progress in the field of Artificial Intelligence (AI) has people worried about everything from mass unemployment to the annihilation of the human race at the hands of evil robots,” writes researcher Iyad Rahwan by email. “We know that AI terrifies us in the abstract sense. But can AI scare us in the immediate, visceral sense?”

Shelley, named after Frankenstein author Mary Shelley, is interactive. After the program tweets a few opening lines, it asks people on Twitter to continue the story, and if the story is popular, it responds to those responses.

Using information from 140,000 stories from Reddit’s r/nosleep, Shelley produces story beginnings that range in creepiness, and in quality. There’s some classic “scary stuff,” like a narrator who thinks she’s alone and then sees eyes in the dark, but also premises one can only imagine are Reddit-user-inspired, like family porn.
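Shelley itself is a deep learning model, but the core idea, learning from a corpus which words tend to follow which and then sampling continuations, can be shown with something far simpler. Here is a toy word-level Markov chain; the three-sentence “corpus” is a placeholder of mine, not the r/nosleep data, and the real system is much more sophisticated.

```python
# Toy word-level Markov text generator (illustrative only; Shelley
# itself is a neural network, not a Markov chain).
import random
from collections import defaultdict

# Placeholder corpus; the real model trained on 140,000 r/nosleep stories.
corpus = (
    "I thought I was alone in the house. "
    "Then I saw the eyes in the dark. "
    "I was alone until the eyes opened."
)

# Map each word to every word observed to follow it.
followers = defaultdict(list)
words = corpus.split()
for current, nxt in zip(words, words[1:]):
    followers[current].append(nxt)

def continue_story(seed: str, length: int = 12) -> str:
    """Extend `seed` by repeatedly sampling an observed next word."""
    out = seed.split()
    for _ in range(length):
        choices = followers.get(out[-1])
        if not choices:  # dead end: this word was never followed by anything
            break
        out.append(random.choice(choices))
    return " ".join(out)

print(continue_story("I thought"))
```

Swap in a large enough corpus and even this crude chain starts producing eerie fragments; a neural model like Shelley just learns far longer-range structure.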

“It’s like an alien civilization inventing its own mathematics which allows it to do things like time travel”

Stunning AI Breakthrough Takes Us One Step Closer to the Singularity—George Dvorsky, Gizmodo

In a tournament that pitted AI against AI, this juiced-up version, called AlphaGo Zero, defeated the regular AlphaGo by a whopping 100 games to 0, signifying a major advance in the field.

Now, every once in a while the field of AI experiences a “holy shit” moment, and this would appear to be one of those moments.

“What we’re seeing here is a model free from human bias and presuppositions: It can learn whatever it determines is optimal, which may indeed be more nuanced than our own conceptions of the same. It’s like an alien civilization inventing its own mathematics which allows it to do things like time travel,” to which he added: “Although we’re still far from ‘The Singularity,’ we’re definitely heading in that direction.”

“I believe that the information bottleneck idea could be very important in future deep neural network research”

New Theory Cracks Open the Black Box of Deep Neural Networks—Natalie Wolchover, Wired

Last month, a YouTube video of a conference talk in Berlin, shared widely among artificial-intelligence researchers, offered a possible answer. In the talk, Naftali Tishby, a computer scientist and neuroscientist from the Hebrew University of Jerusalem, presented evidence in support of a new theory explaining how deep learning works. Tishby argues that deep neural networks learn according to a procedure called the “information bottleneck,” which he and two collaborators first described in purely theoretical terms in 1999. The idea is that a network rids noisy input data of extraneous details as if by squeezing the information through a bottleneck, retaining only the features most relevant to general concepts. Striking new computer experiments by Tishby and his student Ravid Shwartz-Ziv reveal how this squeezing procedure happens during deep learning, at least in the cases they studied.

Tishby’s findings have the AI community buzzing. “I believe that the information bottleneck idea could be very important in future deep neural network research,” said Alex Alemi of Google Research, who has already developed new approximation methods for applying an information bottleneck analysis to large deep neural networks. The bottleneck could serve “not only as a theoretical tool for understanding why our neural networks work as well as they do currently, but also as a tool for constructing new objectives and architectures of networks,” Alemi said.
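For reference, the objective from Tishby’s 1999 paper is compact enough to quote. The goal is a compressed representation T of the input X that stays predictive of the relevant variable Y, with a multiplier β setting the trade-off; reading T as a hidden layer’s activations is my gloss, following the article’s description:

```latex
% Information bottleneck (Tishby, Pereira & Bialek, 1999):
% choose a stochastic encoding p(t|x) that minimizes
\min_{p(t \mid x)} \; I(X;T) \;-\; \beta \, I(T;Y)
% I(\cdot;\cdot) is mutual information: small I(X;T) means T has
% squeezed away most of X (compression), while large I(T;Y) means
% T still predicts the relevant variable Y; \beta >= 0 sets how much
% prediction is worth per bit of compression given up.
```

Driving the first term down while keeping the second up is exactly the “squeezing through a bottleneck” Tishby describes, and these two mutual-information terms are what Alemi’s approximation methods estimate for large networks.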

“to develop and promote the realization of a Godhead based on Artificial Intelligence”

God Is a Bot, and Anthony Levandowski Is His Messenger—Mark Harris, WIRED

Many people in Silicon Valley believe in the Singularity—the day in our near future when computers will surpass humans in intelligence and kick off a feedback loop of unfathomable change.

When that day comes, Anthony Levandowski will be firmly on the side of the machines. In September 2015, the multi-millionaire engineer at the heart of the patent and trade secrets lawsuit between Uber and Waymo, Google’s self-driving car company, founded a religious organization called Way of the Future. Its purpose, according to previously unreported state filings, is nothing less than to “develop and promote the realization of a Godhead based on Artificial Intelligence.”