Last weekend I started playing around with Deep Dream, an artistic, surreal application of Google’s deep convolutional neural networks, and applied it to a picture from my colleague Steve Eldridge of Vixlet HQ’s inspiring view of downtown LA:
Here’s what it looks like at the deepest layer of the deep dream neural network (I’ll explain what that means shortly).
Although there is already a tutorial, I thought I’d write a blog post about it because I had to make some modifications to the code there, and people were asking how I got it working.
First of all, the part I had to modify was the image-loading code. The tutorial uses IPython notebooks, which I’m not very familiar with, and loads the image into a numpy array via PIL, the Python Imaging Library. I’m not sure whether it was due to PIL or to IPython notebooks, but this didn’t work for me. Instead, I loaded the image into a numpy array via OpenCV (called cv2 in Python). That’s an obvious thing to try if you’re familiar with computer vision in Python, but otherwise it would be an easy place to get stuck. So basically, I used the code from the tutorial but combined it into a single script and changed a couple of lines:
The core Python library that performs the neural network operations is Caffe. If the code above doesn’t work for you, it could be because I used a version of Caffe that I compiled against NVIDIA’s CUDA libraries. I don’t think that’s necessary, but I had been using it to speed up unrelated computer vision work I’ve been doing.
Now I’ll explain a little bit about the inception neural network that deep dreams is based on. The neural network is visualized below.
You may want to open the PDF (https://abewrites.files.wordpress.com/2015/07/tmp.pdf) to see it in more detail. The left side is the lower level of the network, which processes the image input, and the right side is the deeper level. The intermediate layers range from the shallow end, which identifies edges and shapes, to patterns, and then to patches and objects at the deep end. The “end” keyword argument to the deepdream function specifies which layer to output, given the input image. Below is a selection of layers from shallow to deep:
Here’s one where I ran the deep dream several times:
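Both the layer choice and the repeated runs can be sketched as follows. This fragment assumes the tutorial’s `net`, `img`, and `deepdream` are already defined, and the layer names are taken from the GoogLeNet deploy.prototxt, so it is not runnable on its own:

```python
# Shallow layers bring out edges and textures; deeper layers bring out objects.
frame = deepdream(net, img, end='inception_3b/5x5_reduce')  # shallow
frame = deepdream(net, img, end='inception_4c/output')      # deep

# Running deep dream several times means feeding each output back in,
# which amplifies whatever the chosen layer responds to:
frame = img
for _ in range(5):
    frame = deepdream(net, frame, end='inception_4c/output')
```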
Social signals generated from users tend to follow a diurnal pattern, with activity at certain hours of the day and lulls at others. This pattern is interesting, but often other patterns are more salient, like spikes in traffic. When the spikes are combined with the diurnal fluctuations, we have a signal separation problem. In this framing of the problem, the observed web traffic is modeled as an underlying trend signal multiplied by the harmonic noise of daily fluctuations. Multiplication in the time domain corresponds to convolution, not simple addition, in the frequency domain; but the recurring daily component contributes roughly the same magnitude spectrum to every day’s window. So to recover the underlying trend signal, we subtract the average daily frequency spectrum from each observed daily spectrum.
It turned out that this was an easy problem to solve in R, so I wanted to share my experience. First, to visualize the process, below is a snippet of the observed signal over about three weeks.
You can see that there are several big spikes as well as smaller daily bumps. The goal is to remove the daily bumps so that we can see the underlying trends. To do that, we use the FFT to convert the signal to the frequency domain, average the daily spectra, then subtract the average from each observed daily spectrum. One of the cool things about R is that the FFT is a built-in function. However, we had to import a library for the Hamming window. The daily spectra correspond to sliding windows one day in length; the edges of each window need to be rounded off to prevent artifacts, which is what the Hamming window is for. library(‘e1071’) (line 5) imports the Hamming window, and library(zoo) (line 2) imports ‘rollapply’, the function that applies the window across the signal (line 14).
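The spectral-subtraction idea can be sketched in plain Python (my actual code is in R; everything here, including the synthetic traffic numbers and the spike on day 10, is a made-up stand-in):

```python
import cmath
import math

DAY, DAYS = 24, 21  # hourly samples, about three weeks

# Synthetic traffic: a flat baseline times a daily harmonic,
# plus one burst of extra traffic on day 10 (the "spike" we want to keep).
signal = []
for d in range(DAYS):
    for t in range(DAY):
        v = 100.0 * (1.0 + 0.5 * math.sin(2 * math.pi * t / DAY))
        if d == 10 and 10 <= t < 14:
            v += 300.0
        signal.append(v)

def hamming(n):
    # Rounds off the window edges to reduce spectral leakage artifacts.
    return [0.54 - 0.46 * math.cos(2 * math.pi * t / (n - 1)) for t in range(n)]

def dft_mag(x):
    # Naive DFT magnitude; fine for a 24-point window.
    n = len(x)
    return [abs(sum(x[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))) for k in range(n)]

# One Hamming-windowed magnitude spectrum per day.
w = hamming(DAY)
day_specs = []
for d in range(DAYS):
    chunk = signal[d * DAY:(d + 1) * DAY]
    day_specs.append(dft_mag([w[t] * chunk[t] for t in range(DAY)]))

# The average spectrum captures the recurring daily shape; subtracting it
# from each day's spectrum leaves only what deviates from that pattern.
avg_spec = [sum(s[k] for s in day_specs) / DAYS for k in range(DAY)]
residuals = [[s[k] - avg_spec[k] for k in range(DAY)] for s in day_specs]
```

After the subtraction, an ordinary day’s residual is near zero at the once-per-day harmonic (bin 1), while the spike day keeps a large residual at bin 0; reconstructing a cleaned time-domain signal from the residual spectra is the extra work the R gist’s sliding windows handle.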
The output looks like this:
One question that comes up is how this compares to plain old median smoothing. Median smoothing is already part of the code that generates the output above, because it acts as a low-pass filter to remove high-frequency artifacts from cutting up and reassembling the signal. But what if we just apply plain old median smoothing to the original signal, without the FFT? The result of using only median smoothing looks like this.
Subjectively, it seems to me that the FFT frequency subtracted signal shows a bit more resolution and does a better job of removing the DC (constant) component of the signal.
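For reference, the median-smoothing baseline is just a rolling window (a minimal Python sketch; the actual comparison code is in R, and the function name here is illustrative):

```python
import statistics

def rolling_median(x, k):
    # Centered window of (odd) width k; the window shrinks at the edges.
    h = k // 2
    return [statistics.median(x[max(0, i - h):i + h + 1])
            for i in range(len(x))]
```

A window wide enough to span the daily bump flattens it, but an isolated spike narrower than half the window gets flattened too, which is one reason the spectral version can preserve more detail.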
My old job analyzing social media (mainly Twitter) at Adly disappeared after the company was acquired, and I started working at Vixlet. Vixlet’s aim is to connect people with what they are passionate about, which to date has mainly been music (Slipknot) and sports (baseball, football, and tennis). My role there is R&D and analytics. It’s been a productive two and a half months so far and I’ve had a lot of fun (I got to go to London for the Barclays ATP tennis tournament for work!). We started a site, labs.vixlet.com, to show demos from our work. One demo is the “Passion Bot”, a dialog agent that talks about a user’s passions. Another was a visualization of Twitter traffic about tennis during the Barclays ATP tournament. A third was a chatbot for tennis trivia.
L.A. is a very stimulating city. One way that L.A. is stimulating is that it has a lot of art galleries and exhibitions. Last weekend I went back to the Mike Kelley exhibit at MOCA and I noticed some things that I didn’t see the first time. One of the pictures was about spelunking and a friend, Richard, pointed out that it was about Plato’s allegory of the cave.
Exploring from Plato’s Cave, Rothko’s Chapel, Lincoln’s Profile
So Plato’s cave was in the back of my mind recently.
Another way L.A. is stimulating is the mix of cultures, languages, and food. I was at the Hong Kong Market mall in West Covina and saw a Korean restaurant with the Chinese characters 明洞, which sparked my curiosity. It turns out it’s Myeong-dong, a district in Seoul, written in Chinese characters. When I looked them up in a Chinese dictionary, they appeared to be the character for “bright” and the character for “cave.”
In the last link, you can see that the character for cave is used in words like “see clearly” and “insight.” This seemed somewhat related to Plato’s allegory of the cave. However, in Plato’s allegory the insight (seeing how things are rather than just shadows) comes from leaving the darkness of the cave, whereas in Chinese, insight seems to come from the sense of piercing, like how a cave pierces a mountain (the same character is used in the words for pierced ear, tunnel, etc.).
A lot of times it doesn’t make sense to take Chinese characters too literally (this applies to words and expressions in other languages in general), so I’m probably reading too much into it. But it can make the characters more interesting and easier to remember.
When I stepped down from my former position as chief technology officer at Annenberg Innovation Lab, I had many plans for how I would use my free time before I found a new job. It’s not completely a vacation free from cares. I still need to find a new job, learn some new skills, follow up on some publications, and sail to Catalina Island, all of which require some work. Since I’m not getting paid for it, I’m considering it vacation. Still, if I get too laid back I won’t do all that I had planned, so I decided to set some objectives to measure my vacation. Here are my objectives and how I can measure their successful completion.
- Get a job. This is my primary goal for my vacation and if I don’t find a job in about two months I would consider this vacation a failure. Metrics: right now I’m looking for something that is primarily interesting and fun work, that is in either LA, the Bay Area, or California in general, working with good people, that ideally has something to do with what I studied (natural language processing), and that pays well (in that order). Outcome: in progress.
- Get in touch with friends. Luckily, one of the first things I did on my vacation was go to my high school friend Scott Hagen’s wedding, so I reconnected with a lot of friends there and while I was home in Wisconsin and Minnesota. Metrics: the main metric is to get in touch with as many friends as possible, and a secondary consideration is meeting in person versus just by email or phone. I guess a bonus point would be if getting in touch with a friend helped land a job. Outcome: in progress.
- Sail to Santa Catalina Island. Since I’ve had a boat (about a year and a half), Elly, my fiancee, and I have been studying to make the 30-nautical-mile trip from Marina del Rey. Sailing on the ocean is a lot more involved than on lakes, so I’ve been cautious, but now that I have more time to read up and practice I’ve been itching to make the trip. Metrics: safely going to and returning from Catalina, time taken, and amount of fun had. Outcome: Done! Last weekend Elly and I successfully made it to Isthmus/Two Harbors. The outbound trip took 10 hours, which was a bit long because we had to divert to Redondo/King Harbor to refuel due to calm winds in the morning. In the afternoon the wind and swell picked up a lot and we got a bit wet from spray. The way back took 7 hours because we were more experienced and in a hurry to beat the mid-afternoon turbulence if it came back. It was calmer on the return and we had the company of many boats and dolphins. Overall, it was fun despite some white knuckles and whitecap waves.
- Publications. While I was working, it was hard to keep up with publishing my dissertation research, so I’m hoping to have time to do that over the vacation. Metrics: finish the one paper I have under revisions and start at least one more. Outcome: in progress.
- Getting into shape. When things get busy, for me it’s often exercise that gets put on the back burner. While I’m on vacation I want to exercise more and lose some weight. Metrics: frequency of exercise, amount of weight lost. Outcome: in progress.
- Reading. I have a bunch of books that I’ve been reading that I need to finish and others that I want to start. Metrics: how many of the books I want to read that I actually read. Outcome: in progress.
Here are some preliminary results of an analysis of Twitter data about the NBA Finals between the Miami Heat and San Antonio Spurs. First, we tracked tweets containing #spurs, @spurs, and #gospursgo for the Spurs, and #miamiheat and @miamiheat for the Heat, based on http://www.vegau.com/resources/NBA-twitter-hashtags/ .
In this game the Spurs won in a blowout, pretty much starting in the second quarter. So far the interesting thing I noted is that people tend to tweet during halftime and after the game (7:10 and 8:31). Comparing this game with the previous one, it seems the winners tended to get more tweets. But that’s not to say there is causation or even direct correlation, because we could also say that the home team got more tweets in both cases.
Here are some salient timestamps that Jon and I identified during the game.
6:15 - Duncan dunk; Spurs up by 7
6:21 - good LeBron James play; Miami coming back
6:49 - Danny Green 3-pointer; Spurs go up by 8
6:57 - Spurs go up by 10
7:01 - Neal 3-pointer; Spurs by 11
7:09 - Heat tie it up
7:10 - Neal hits at the buzzer; Spurs up at halftime
7:43 - largest Spurs lead of the night
7:58 - end of the third quarter
8:09 - Spurs slam dunk
8:10 - Spurs 3-pointer
8:16 - Spurs 3-pointer
8:18 - Spurs 3-pointer
8:22 - Heat 3-pointer
In the last post, I aggregated the counts of tweets in Python and generated a table that I used in R. This time I wanted to go from the raw input of tweets and the output of sentiment analysis to aggregated hourly counts using R, this time for movies that came out on Jan 11, 2013 (Gangster Squad, Zero Dark Thirty, and A Haunted House).
Doing this in R turned out to be harder than I expected, and I had to install some libraries, namely “zoo” (which stands for “Z’s ordered observations”) and “chron”, which can be seen in lines 1-5 in the gist below*. I also had to massage the data a little more than I was expecting. After reading it in (line 8), I had to muck around with making the tweet column character strings instead of factors, and with parsing the date info into a zoo object. Line 20 actually does the aggregation. After plotting it, I noticed a spike on Sunday evening. It turned out there were a lot of (re)tweets about Zero Dark Thirty and one about a ticket giveaway for Gangster Squad:
“RT @goldenglobes: Best Actress in a Motion Picture – Drama – Jessica Chastain – Zero Dark Thirty – #GoldenGlobes”
“RT @vuecinemas: Help stop the mob with #GangsterSquad on Jan 10th! To win one of 5 movie packs, follow us and retweet this message by 5p …”
“Woot! RT @Bad_Wobot1013: Awesome Jessica Chastain!! ”
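The hourly bucketing itself is simple to sketch in Python (my actual code is in R; the rows below are hypothetical stand-ins for the raw tweet dump, using Twitter’s created_at timestamp format):

```python
from collections import Counter
from datetime import datetime

# Hypothetical (created_at, text) rows standing in for the raw tweets.
tweets = [
    ("Sun Jan 13 19:05:12 +0000 2013", "RT @goldenglobes: ... Zero Dark Thirty ..."),
    ("Sun Jan 13 19:40:01 +0000 2013", "Zero Dark Thirty was intense"),
    ("Mon Jan 14 02:15:30 +0000 2013", "Gangster Squad tonight"),
]

# Truncate each timestamp to its hour and count tweets per bucket --
# the Python counterpart of the gist's aggregation step.
hourly = Counter()
for created_at, _text in tweets:
    ts = datetime.strptime(created_at, "%a %b %d %H:%M:%S %z %Y")
    hourly[ts.strftime("%Y-%m-%d %H:00")] += 1
```

Sorting the `hourly` keys then gives the time series to plot, with no factor-versus-string massaging needed.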
*Note that if you’re using Linux, you’ll need the R-devel packages to build these libraries during installation.