Analyzing Twitter with Excel, Part 3

By Mark Gibbs, Network World |  Open Source, CURL, Excel

For the last two weeks I've been considering how to analyze Twitter messages using Excel 2003. Having been thwarted by the deficiencies of Excel and Twitter (it turns out that Twitter Search returns malformed HTML and Excel won't tolerate that), I have a new plan: let's use cURL to retrieve the raw XML returned by Twitter Search and then haul that data into Excel for the analysis.

cURL is a free, open source tool that lets you perform data transfers specified by URLs (you can think of cURL as a sort of Swiss Army knife for Web data retrieval -- I give it 5 out of 5; it is that good). For some background on cURL, check out my Gearhead column from a couple of years ago.
I'm going to use cURL to retrieve the Twitter Search data for a specific day. I can't do this efficiently in Excel using XML Maps because once you have defined an XML source you can't easily change it. You can change the XML Map URL through Visual Basic for Applications, but messing with VBA is how people descend into madness.

That said, I'm not sure that madness isn't a natural consequence of all software engineering and, in this case, the solution is going to be particularly demented and ugly as we're forcing weakly structured content to do what it was never meant to do.

Anyway, I'm going to employ cURL to retrieve the content from Twitter Search using a batch file (I've wrapped the line below for readability -- you should combine the lines into a single line with no spaces in the text between the double quotes):

curl "http://search.twitter.com/search?q=rovio
 &since=%1&until=%1&rpp=100&page=1" --o tweets%1-1.htm

If the batch file is called, say, getrovio.bat, then the command line for this request would be of the form "getrovio 2009-04-01". This will get just the first page of matching Tweets for April 1, 2009.
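
For reference, here's a minimal sketch of what the complete getrovio.bat might look like with the URL reassembled onto a single line (this assumes curl.exe is on your PATH):

@echo off
rem getrovio.bat -- fetch the first page of matching Tweets for one day.
rem %1 is the date in YYYY-MM-DD form, e.g.: getrovio 2009-04-01
curl "http://search.twitter.com/search?q=rovio&since=%1&until=%1&rpp=100&page=1" -o tweets%1-1.htm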

Now, if the term we're searching for is really popular, we're going to need to run the search multiple times. But here's the problem: Twitter will return a Web page even when there are no matching Tweets, so we need to examine the content for the string "No results for", which is what Twitter reports when a page comes back empty. So, we're going to use grep on the output file:

grep "no results for" tweets%1-1.htm

If grep finds a match it will exit with an errorlevel of 0, while no match will result in an errorlevel of 1.
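
As a quick sanity check, here's a sketch of how you might branch on that exit code (the file name is just an example; remember that in a batch file 'if errorlevel 1' means 'errorlevel is 1 or greater'):

grep "No results for" tweets2009-04-01-1.htm
if errorlevel 1 (
echo Page contains Tweets -- keep fetching
) else (
echo Twitter reported no results -- stop
)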

Assuming we might need, say, 5,000 results for a really popular search term, we need to loop through the curl command up to 50 times (100 Tweets per page times 50 pages). Here's how we'll do it (note that I've used the environment variable 'counter' as the page index and I've again split the curl command over two lines for readability):

@echo off
rem %1 is the date (YYYY-MM-DD); 'counter' is the page index.
set counter=0
:start
set /a counter+=1
echo %counter%
curl "http://search.twitter.com/search?q=rovio&since=%1&until=%1
 &rpp=100&page=%counter%" -o tweets%1-%counter%.htm
rem grep exits with errorlevel 0 when it finds the "No results" text.
grep "No results for" tweets%1-%counter%.htm
echo %errorlevel%
if %counter% == 50 goto :next
if %errorlevel% == 0 goto :next
goto :start
:next

When we create another batch file that calls the one above with a list of dates we're interested in, we can run it and end up with a collection of files named 'tweetsYYYY-MM-DD-N.htm'.
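
Here's a minimal sketch of what that driver might look like (the three dates are just examples; use whatever range you need):

@echo off
rem Hypothetical driver: call getrovio.bat once for each date of interest.
for %%d in (2009-04-01 2009-04-02 2009-04-03) do call getrovio %%d

We'll run that and start to extract stats ... next week.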
