Drawbots and Data Visualisation – Part 3

This is the final part of a three-part post on the work I have been doing over the summer towards my final postgraduate exhibition. In the previous posts I explained how I built my drawing machine and why I decided to try my hand at some data visualisation. In this post I will explain a bit about my process of collating and visualising the data to make drawings that illustrate the amount of love and hate shared over 24 hours on Twitter in different cities around the world.

Essentially I will try to explain how I got from this:

[Image: Screen Shot 2015-09-15 at 12.37.18]

to this:

[Image: Screen Shot 2015-09-15 at 12.25.46]

to this:

[Image: IMG_4057]

If you haven’t already guessed, this post may get a bit techy, but I’ll try to keep it brief and punctuated with lots of pictures.

So it started to dawn on me that I was going to have to do some coding for this project, when my sum experience to date was a bit of HTML and copying and pasting Arduino sketches. I began tentatively looking for a platform/language that was relatively quick to learn, had good online documentation and, as a broke student, was ideally free and open source.

[Image: Screen Shot 2015-09-15 at 12.23.55]

Processing fits the bill perfectly: it’s an open source platform based on Java, designed for artists and designers. I won’t go into it in too much detail here, but I’ll definitely do a post on my favourite art projects built with it at some point. You can check out loads in the Exhibition section of their website.

One of Processing’s biggest advantages is the huge online community using it. Because the programme is open source, users are constantly contributing: sharing projects, offering support through forums and building new libraries.

Once I had decided that I would use Twitter as my data source, I needed a way to interact with its API. After reading Jer Thorp’s great tutorial on this, I decided to use the Twitter4J library.
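
To give a flavour of what this looks like, here is a minimal Processing sketch in the spirit of that tutorial (not my exact code) that asks Twitter’s REST API for recent tweets containing a term. The OAuth keys are placeholders you would replace with your own:

    import twitter4j.*;
    import twitter4j.conf.ConfigurationBuilder;

    void setup() {
      // OAuth credentials: placeholders, replace with your own app's keys
      ConfigurationBuilder cb = new ConfigurationBuilder();
      cb.setOAuthConsumerKey("CONSUMER_KEY");
      cb.setOAuthConsumerSecret("CONSUMER_SECRET");
      cb.setOAuthAccessToken("ACCESS_TOKEN");
      cb.setOAuthAccessTokenSecret("ACCESS_TOKEN_SECRET");

      Twitter twitter = new TwitterFactory(cb.build()).getInstance();
      try {
        // Ask the REST API for up to 100 recent tweets containing a term
        Query query = new Query("love");
        query.setCount(100);
        QueryResult result = twitter.search(query);
        for (Status status : result.getTweets()) {
          println(status.getUser().getScreenName() + ": " + status.getText());
        }
      }
      catch (TwitterException e) {
        println("Search failed: " + e.getMessage());
      }
    }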

[Image: Twistori]

Inspired by Twistori (above), I began playing around making programmes that searched Twitter for different terms and displayed the tweets on screen. Seeing all this data pop up was a pretty spooky experience. Of course, it was all made public by the users and could be found ‘by hand’, but seeing it pulled from the internet by an autonomous programme got me thinking again about how much info we are happy to put out there. It also made me realise how banal most of the stuff people talk about on Twitter is, and just how much people love One Direction (it’s a LOT):

WARNING: as you can probably guess, tracking all the hate on Twitter is not a particularly life-affirming pastime, and some of the language in this video is pretty unpleasant. Don’t blame me, blame society. You have been warned…

However, I was finding that the same tweets were coming up again and again, and if I wanted to create something that could keep track of the number of times a term was used over a set period I would have to use Twitter’s streaming API. This proved slightly more complicated, especially when filtering very popular terms such as ‘love’. The word was being used so often globally that the programme couldn’t keep up, which meant I was getting very similar values each time (the maximum number of tweets the programme could handle in the time frame). You can see it in the images below (‘love’ is shown at a different scale to the others, otherwise the whole box would be black):
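
For the technically curious, the bones of a streaming sketch with Twitter4J look something like the following (again a simplified sketch rather than my actual code; credentials are placeholders). Helpfully, the API’s onTrackLimitationNotice callback reports how many matching tweets were dropped, which is exactly the ‘couldn’t keep up’ problem you hit with a word as popular as ‘love’:

    import twitter4j.*;
    import twitter4j.conf.ConfigurationBuilder;

    int received = 0;  // matching tweets that reached us
    int dropped = 0;   // matching tweets Twitter couldn't deliver

    void setup() {
      size(400, 200);

      ConfigurationBuilder cb = new ConfigurationBuilder();
      cb.setOAuthConsumerKey("CONSUMER_KEY");
      cb.setOAuthConsumerSecret("CONSUMER_SECRET");
      cb.setOAuthAccessToken("ACCESS_TOKEN");
      cb.setOAuthAccessTokenSecret("ACCESS_TOKEN_SECRET");

      TwitterStream stream = new TwitterStreamFactory(cb.build()).getInstance();
      stream.addListener(new StatusAdapter() {
        public void onStatus(Status status) {
          received++;
        }
        public void onTrackLimitationNotice(int numberOfLimitedStatuses) {
          // Twitter reports how many matching tweets it had to drop;
          // for a term like 'love' this climbs very quickly
          dropped = numberOfLimitedStatuses;
        }
      });

      // Keep the connection open, filtered to one tracked term
      FilterQuery fq = new FilterQuery();
      fq.track("love");
      stream.filter(fq);
    }

    void draw() {
      background(0);
      fill(255);
      text("received: " + received + "   dropped: " + dropped, 20, 100);
    }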

To bring the numbers down to a more manageable level I implemented a location filter, which used longitude and latitude to put a bounding box over an area; the Twitter API then sent all geotagged tweets from within that area. As I couldn’t combine a keyword and a location filter, I had to make the programme break each tweet down and ‘read’ it for the keywords.
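
Roughly sketched, the bounding-box version looks like this (the coordinates are just an example, very approximately covering Greater London, and as before this is an illustration rather than my exact programme):

    import twitter4j.*;
    import twitter4j.conf.ConfigurationBuilder;

    int loveCount = 0;
    int hateCount = 0;

    void setup() {
      ConfigurationBuilder cb = new ConfigurationBuilder();
      cb.setOAuthConsumerKey("CONSUMER_KEY");
      cb.setOAuthConsumerSecret("CONSUMER_SECRET");
      cb.setOAuthAccessToken("ACCESS_TOKEN");
      cb.setOAuthAccessTokenSecret("ACCESS_TOKEN_SECRET");

      TwitterStream stream = new TwitterStreamFactory(cb.build()).getInstance();
      stream.addListener(new StatusAdapter() {
        public void onStatus(Status status) {
          // The locations filter sends ALL geotagged tweets in the box,
          // so each one has to be broken down and 'read' for the keywords
          String text = status.getText().toLowerCase();
          if (text.contains("love")) loveCount++;
          if (text.contains("hate")) hateCount++;
        }
      });

      // Bounding box as {south-west lng, lat}, {north-east lng, lat};
      // these example coordinates roughly cover Greater London
      FilterQuery fq = new FilterQuery();
      fq.locations(new double[][] { {-0.51, 51.28}, {0.33, 51.69} });
      stream.filter(fq);
    }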

This gave me the raw data I needed: a long list of how many times ‘love’ and ‘hate’ were mentioned on Twitter in different cities around the world over a day. I was feeling pretty smug; the only problem was that, on its own, the data looked pretty uninspiring:

[Image: Screen Shot 2015-09-15 at 12.25.46]

I experimented with representing the data in a number of different ways:

[Image: ‘Love’, ‘hate’ and ‘envy’ shared on Twitter in San Francisco over 24 hours]
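
To give an idea of the kind of thing these quick visual aids involved, here is a toy sketch that turns a list of counts into a simple bar chart. The numbers are made up for the example:

    // Made-up counts: one value per time interval over the day
    int[] counts = {3, 7, 12, 9, 15, 22, 18, 10, 6, 4};

    void setup() {
      size(500, 200);
      background(255);
      noStroke();
      fill(0);
      float w = width / float(counts.length);
      for (int i = 0; i < counts.length; i++) {
        // Scale each count to the sketch height and draw it as a bar
        float h = map(counts[i], 0, 25, 0, height);
        rect(i * w, height - h, w - 2, h);
      }
    }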

These were just visual aids for my benefit; it was the polargraph that would need to interpret and draw the data.

Looking at the pendulum-like way the polargraph drew, I (with help from my dad) developed the programme so that it would plot the data diagonally, following the line of the pen, rather than simply from left to right, top to bottom (see below). I was hoping this would make the images easier to ‘read’, but it also opened up interesting possibilities for displaying the finished drawings.

[Images: IMG_4046, IMG_4040]
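
Reduced to a sketch, the idea of the diagonal plotting is this (not the actual polargraph code, just the traversal): instead of stepping through the data cells row by row, you visit them one diagonal at a time, so the plotting order follows the slant of the pen line. The number drawn in each cell shows the order in which it would be visited:

    int cols = 8;
    int rows = 8;

    void setup() {
      size(400, 400);
      background(255);
      fill(0);
      textAlign(CENTER, CENTER);
      float cw = width / float(cols);
      float ch = height / float(rows);
      int n = 0;
      // Each value of d picks out one anti-diagonal of the grid
      for (int d = 0; d <= cols + rows - 2; d++) {
        for (int x = 0; x < cols; x++) {
          int y = d - x;
          if (y >= 0 && y < rows) {
            // n is the order in which this cell would be plotted
            text(n, x * cw + cw / 2, y * ch + ch / 2);
            n++;
          }
        }
      }
    }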

I decided to present the drawings in grids of four. The drawing below shows four consecutive days in New York:

[Image: IMG_4057]

and this one shows the same day in four different cities (clockwise from top right: Paris, London, San Francisco and New York):

[Image: IMG_4065]

I felt presenting them in sets allowed for an easy way to compare the images without being too prescriptive. Individually these images are difficult to interpret; presented together, they give one another meaning, providing a frame of reference that allows interpretation to be a creative rather than scientific process. Time flows from the centre of the grid out, like ripples in a pond. There is both an individuality and a commonality to each location/day that I find intriguing. I like how, as visual pieces of information, they provoke questions and challenge preconceptions. What are the concentrated areas of love in Paris? Why does San Francisco, a city with a historical reputation for free love, have such a proportionally high concentration of hate?

Of course, the whole notion of measuring our emotions digitally and distilling them into a drawing is a little tongue in cheek, but it does raise some interesting questions about the limits of interaction between computers and humans, especially given the nuanced ways we use language. The programme does not have a sarcasm filter, for example. It got me thinking about how extraordinarily complex language is; it’s a wonder we understand each other at all.

I will be showing a number of drawings and the polargraph in action in my final postgraduate show at Aberystwyth University School of Art:

[Image: POSTgradPoster2015_sept2 – exhibition poster]


For more information about this project, see my blog posts:
Drawbots and Data Visualisation – Part 1

Drawbots and Data Visualisation – Part 2

Drawbots and Data Visualisation – Part 3
