
Janelle Shane’s Neural Network Keeps Producing Hilarious and Terrifying Creations

Photo: Nick Veasey/Getty Images

Let’s say you’re hungry. Right now, if you head over to the fridge, you might peek inside and find some pizza, or a leftover turkey sandwich, or maybe an old piece of birthday cake. In the future, however — thanks to machine learning and neural networks — you might open that same refrigerator to find delicacies like Completely Meat Circle, Artichoke Gelatin Dogs, and delectable Crimm Grunk Garlic Cleas.

Those recipes were generated by Janelle Shane, or more accurately, they were generated by a neural network that Shane administers, inputting large sets of data and receiving dangerous transmogrifications in return. In addition to inedible recipes, Shane’s network has produced lists of paint colors (Rose Colon, Flumfy Gray), heavy-metal bands (ChaosWorge le Plague, Squeen), and pickup lines (“You look like a thing and I love you”). Shane took a few minutes last week to explain where she gets her ideas, and what it’s like to actually eat one of those awkward recipes.

What’s your background? Are you a computer scientist?
I’m an electrical engineer, so my degrees are all in physics and electrical engineering, and I do holographic laser beams for a day job. This neural network is completely a hobby of mine, a side project.

How did you get started on it? What made you want to tinker with it?
Well, I’ve actually been interested in learning algorithms in general for a while. When I was just starting out at Michigan State, I joined a research group that was studying genetic algorithms. I’d heard an interesting talk about how genetic algorithms work, and how they come up with things people never would have thought of. My research there morphed into using genetic algorithms for lasers, so I got interested in the optics side of research, but I’d always had that interest in machine learning.

So I happened to come across a list of neural-network cookbook recipes by Tom Brewe. I was reading these neural-network recipes, and I remember one of them had shredded bourbon in it, and for some reason that one struck me. They were all funny, but that one in particular, I could barely breathe. I read through the rest of the list, and I wanted to read more, and I realized that the only way to read more was to learn how to generate more. Fortunately, the neural-network framework he used, char-rnn, is open-source and free to use, so I was able to get started right away.

Where did you find the data set?
In this case, I found them on a genealogy website, where someone had collected tens of thousands of recipes in a format that was already really easy for a neural network to digest. It was a different database than Tom Brewe used, and that was another reason I wanted to use it. I thought I might get different and better results with a separate data set.

Did you find it relatively easy to set up?
Yeah, my level of knowledge going into it was basically zero. I’ve done some programming, but not in Lua, which was the language that this was written in. There were a lot of things that I was learning how to do for the first time, and I found it a really great project for getting to learn some of this stuff — how neural networks work, and how to set up those computations on my machine.
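(For readers curious what that setup involves: Shane used the original Lua/Torch char-rnn, but the core idea of character-level text generation can be sketched in a few dozen lines. The code below is a rough illustrative stand-in in PyTorch, not her actual setup; the recipes.txt file name and the hyperparameters are placeholders.)

```python
# Rough sketch of character-level text generation in PyTorch (an illustrative
# stand-in for the Lua/Torch char-rnn Shane used, not her actual code).
# "recipes.txt" and all hyperparameters below are placeholders.
import torch
import torch.nn as nn

text = open("recipes.txt", encoding="utf-8").read()   # hypothetical training corpus
chars = sorted(set(text))
stoi = {c: i for i, c in enumerate(chars)}
itos = {i: c for c, i in stoi.items()}
data = torch.tensor([stoi[c] for c in text], dtype=torch.long)

class CharRNN(nn.Module):
    def __init__(self, vocab, hidden=256):
        super().__init__()
        self.embed = nn.Embedding(vocab, hidden)
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, vocab)

    def forward(self, x, state=None):
        h, state = self.lstm(self.embed(x), state)
        return self.head(h), state

model = CharRNN(len(chars))
opt = torch.optim.Adam(model.parameters(), lr=3e-3)
seq_len = 128

for step in range(2000):
    # Pick a random slice of text; the target is the same slice shifted by one character.
    i = torch.randint(0, len(data) - seq_len - 1, (1,)).item()
    x = data[i:i + seq_len].unsqueeze(0)
    y = data[i + 1:i + seq_len + 1].unsqueeze(0)
    logits, _ = model(x)
    loss = nn.functional.cross_entropy(logits.reshape(-1, len(chars)), y.reshape(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()

# Sampling: feed the model's own guesses back in, one character at a time.
with torch.no_grad():
    out, state = [stoi[text[0]]], None
    for _ in range(300):
        logits, state = model(torch.tensor([[out[-1]]]), state)
        probs = torch.softmax(logits[0, -1], dim=-1)
        out.append(torch.multinomial(probs, 1).item())
print("".join(itos[i] for i in out))
```

Trained on a large enough pile of recipes, a model like this learns spelling and recipe structure but nothing about cooking, which is exactly where ingredients like a cup of horseradish come from.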

When did you start noticing that your lists and experiments were picking up steam on Tumblr and elsewhere?
It was about a year after I’d done the first experiments. I’d had some fun with it and kind of gotten interested in other projects, and then I started getting notices that I had all these new followers on Tumblr, and I remember thinking, “Where did all these followers come from?” To this day, I don’t know what sparked it or who noticed it first. One post has 30,000 notes on it, and I can’t scroll back far enough to see what the timeline was, so I might never know. I guess that just goes to show that just because something isn’t popular on the internet doesn’t mean it’s not interesting.

I know some of this is for laughs, but is there a broader goal in working with this stuff?
For me, it’s pure entertainment. I’m learning useful skills; I’m having a whole lot of fun. It’s also been pretty fun to start talking to people who are in machine learning, who do this kind of stuff for a living. It’s been fun to see what the latest and greatest in the field is. I’m not trying to solve any particular problems.

What have you heard from people who work in machine learning?
I’ve heard that they’re very entertained. Some of them have been kind enough to offer me new data sets, things they’ve collected for their own purposes or that they already have the tools to collect easily. There have been a lot of really nice people who’ve contacted me.

Many of the people I’ve heard from have been just starting out as undergrad computer-science students, or even high-school students with an interest in programming. Several of them have said the blog has inspired them to start experimenting with neural networks — I love hearing that.

Where do you get your ideas for data sets?
Often, I’ll stumble across a chance posting about something else, and it’ll spark the thought: I wonder if I could find a data set of that. Quite often now, it’s people suggesting things to me. So, for example, the paint-colors data set came about because somebody on Twitter suggested a neural network might be able to name paint colors.

It seems to have worked.
Yeah, depending on your definition. I’ve seen enough people say that they would prefer to paint their rooms in these colors, some of the more froufrou colors.

A selection of new paint colors. Photo: Janelle Shane

Do you have any particular favorites? Either bigger projects or specific results?
One of my favorite paint colors that hasn’t gotten a lot of attention so far is the Peacake Bring. For some reason, that one just gets me — it’s sort of quasi-grammatical, and you can sort of imagine some kind of story behind it. What does Peacake mean?

It’s almost like you get the right string of syllables in a row, and it hits you, even if it doesn’t make sense.
The neural-network recipes have been fun, too. Recently, the network generated a recipe that was an ordinary cake recipe — chocolate, peanut-butter, gluten-free cake — all the way up to the very last ingredient, which was a cup of horseradish. After I posted it to the internet, somebody contacted me on Twitter and said they’d made it, and it was delicious. They showed me pictures of this moist cake thing, and I said, “Oh, that looks pretty good.” And they said, “Oh, yeah, the horseradish. It gives it this interesting spicy background.”

I had some friends coming over for the weekend, so I thought, I could try that. It was the most horrible chocolate thing I have ever tasted in my life. I opened the oven and my eyes just watered. It was so bad.

Was it edible?
Yeah, it was edible; it was just weird. The texture was fine; it was maybe a little dry, but perfectly passable. There are a few strange souls who really like horseradish. I think maybe 1 in 20 people who tried it thought it was delicious. Maybe the rest thought it was okay or interesting, and there was some other small proportion that was like me and could not stand to eat more than a single bite. At both of the parties I took it to, I found out that somebody had quietly taken a bite out of one of the cupcakes and abandoned it somewhere.

You didn’t tell them what it was?
I did not tell them at first. The first time, I took it to a party where people all knew me, and they had to try to guess the secret ingredient. They found it hard to do so: “Oh, is that some kind of booze in there, or vinegar, or sourdough?” And this being Colorado, people said, “Are you sure this is … legal?” And I’m like, “Yeah, it’s fine — legal in all 50 states.” And then once they guessed horseradish, everybody could taste it.

The second time, I brought the leftovers (somehow, nobody ate more than one, except for that one guy) to an Analyze Boulder event. They actually had a neural-network-themed event going on, with speakers and everything. They were serving beer, and this was after work, so I said, “Okay, I’ll just set these things out.” I set out a little sign next to them; it was two pages, and the cover said, “This is Chocolate, Baked and Served, a recipe designed by a neural network. Try to taste the secret ingredient.” The recipe itself was on the second page, if you flipped through.

I just set that up by the beer and watched people out of the corner of my eye. You’d get people who’d come up to it and say, “Hey, small, chocolate-brownie, bite-shaped thing!” Then pause … and then go back and look at the first sign … and they’re chewing a little bit slower … and they’re looking at the sheet; “Oh, neural network”… and then they’d turn it over to read the recipe, and they’re chewing slower, and then they’d just walk away.

Google and Facebook have been talking up AI a lot this past year. When you see them talking about the amazing abilities of AI and then contrast it with your own experience, what’s your reaction?
They’re definitely working at a more sophisticated level, algorithm-wise, than I am. So it’s not surprising that they’re getting more usable results. For example, IBM’s Watson actually tackled the problem of generating cookbook recipes. There’s a thing online where you can enter an ingredient and have it generate some cookbook recipes for you. They’re kind of unusual, but they’re a whole lot more doable than the recipes that my neural network is coming up with. They won’t ask you to cube the water or shred the flour; they won’t forget about the ingredients list by the time they get to the instructions.

They’re definitely putting a lot more sophistication behind their algorithms, and are therefore getting better results. Although, I guess it depends on your definition of “better,” because the IBM Watson results are definitely not as funny as the neural network’s results, and that was my goal. So in some ways, I’ve got the ideal tool for the job.

Is there anything else on your bucket list? Anything you’d like to tackle that you haven’t yet?
I’ve had some data sets on my wish list for a while, and it looks like I’m now getting my hands on them. One thing I’m working on now is seeing if the network can generate names for craft beers. It’d also be fun to do racehorse names, or show-dog names.

This interview has been condensed and edited for clarity.
