Group 3.3

Blogpost 3: Filter Bubbles, a Facebook Bot and Lots of Data

After our first fake Facebook account got deleted, we collectively decided to create a new one. Our new bot followed the same directive as the initial one, the biggest difference being that it actually got to live a life on Facebook. We changed the name from Alexander Ivanov to Thomas Ivanov and again focused his interests on Russian nationalism. We built up the profile with a different picture than before, obtained in a similar manner from Shutterstock. At first we thought the picture had caused our first bot to be deleted, but in hindsight we suspect it was the name: even though we only changed the first name, Alexander Ivanov appears to be a very common name. Still, we can only speculate about why the account was disabled. Another difference while creating the new profile was accessing the site from a different IP address. We had assumed that creating a profile from the University’s IP address would lead to issues, yet the first bot, which got deleted, was made from a personal IP address, whereas the second bot was made from the University’s IP address.

What caught our attention was the complete absence of advertisements in the bot’s timeline. All of us in the group see advertisements regularly; some are mismatched with our interests, but they are clearly present, generally after every two or three posts. It is therefore notable that our bot did not receive any advertisements at all. We think this may be because the account was so new: perhaps Facebook did not yet have a clear enough picture of who Thomas Ivanov is to personalise advertisements for him. However, there should still have been advertisements targeted at his age, sex and/or location.

Now let’s move on to the data we acquired from our fake profile. First, we exported the data as a .csv file from the fbtrex tool. Once we had that data in an Excel file, we counted the number of posts, photos, events and groups. We then put these counts into rough graphs, each value corresponding to how often that type appeared in the file. To present our data in an organized way, we debated which kind of graph to use. Our first thoughts were to choose a scatterplot, boxplot or sunburst, since these turned out to be the easiest to understand and the most popular, according to Kennedy Elliott. Eventually, we chose a circle packing graph, in which nested circles represent both hierarchies and values, showing how the elements are proportioned relative to each other. Once we had decided on the graph, we experimented with the colors and dimensions to determine which suited our approach best.
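The counting we did by hand in Excel can be sketched in a few lines of Python. This is only an illustration: the `type` column name and the sample rows below are our own assumptions, not the actual fbtrex export format.

```python
# Sketch of the counting step: tally how often each content type
# (post, photo, event, group) appears in a fbtrex-style CSV export.
# The "type" column and the sample rows are assumed for illustration.
import csv
import io
from collections import Counter

sample_csv = """type,source
post,Friend A
photo,Friend B
post,Page C
event,Page D
group,Group E
post,Friend A
"""

reader = csv.DictReader(io.StringIO(sample_csv))
counts = Counter(row["type"] for row in reader)
print(counts)  # Counter({'post': 3, 'photo': 1, 'event': 1, 'group': 1})
```

Counts like these are the values we then plotted in the circle packing graph.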

Visualisation of data types extracted from fake Facebook profile


We were planning to compare the bot account with the account of Nadia, the most active Facebook user in our group, but when we tried to put her data in a graph, it turned out that much of it was not identified as a post, photo, event or group. This was true for more than 90% of all the data gathered. We then downloaded the data of other group members and encountered the same problem: three members’ data showed this issue, whereas for the bot everything was identified, even though there was less data. We wonder whether this has anything to do with the Facebook accounts being older. It seems the only logical explanation, since the bot account was used on the laptop of a group member whose private data was not identified, so the problem could not have been a flaw in the tool’s download or the browser.
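The check behind the “more than 90% unidentified” figure can be sketched as follows. Again, the `type` column and the empty-string marker for unidentified rows are assumptions for illustration, not the real export format.

```python
# Sketch: compute what share of rows in a member's export lacks a
# recognised content type. Empty "type" values stand in for the
# unidentified entries we saw; the CSV layout itself is assumed.
import csv
import io

KNOWN_TYPES = {"post", "photo", "event", "group"}

sample_csv = """type,source
post,Friend A
,unknown
photo,Friend B
,unknown
,unknown
"""

rows = list(csv.DictReader(io.StringIO(sample_csv)))
unidentified = sum(1 for row in rows if row["type"] not in KNOWN_TYPES)
share = unidentified / len(rows)
print(f"{share:.0%} of rows unidentified")  # 60% of rows unidentified
```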

Lastly, we wanted to share our thoughts on the fbtrex tool we used for this research. We are still somewhat confused about how useful the outcome of our research is. First of all, there is no control group in this experiment, which in our opinion makes it difficult to obtain any substantial data. We are also skeptical about getting useful results when comparing the bot to older profiles (like those of our group members), especially since the data fbtrex gave us for older profiles was often not completely identified. We hope to find out soon how this tool could help in investigating filter bubbles.

Blogpost 2: The Deleted Russian Bot and Other Facebook Experiences

Our plan for our Facebook bot was to set up a profile for Alexander Ivanov, a Russian expat who moved to The Netherlands and is currently learning Dutch. He is a 35-year-old programmer who would be active on Facebook, liking and sharing posts about Russia and its army in particular. Our intention was for him to be ‘interested’ in events within The Netherlands to get to know the country better. For his background, we decided that he went to Orenburg State University in Orenburg, a region more nationalistic than other parts of Russia, which would explain his views. The bot was therefore collectively designed with the intention of researching the filter bubble of Russian nationalism.

Despite our carefully thought-out profile, we did not succeed in creating it. Our bot was disabled by Facebook while it was verifying the picture, so we only got as far as submitting general information such as his name, e-mail and phone number. Because of this, we were unable to post or follow other people as we had intended. We consistently used the same IP address while operating the bot, so the fault did not lie there. We also considered that the phone number used to verify the account might already be in use, but after checking, this turned out not to be the case: the number was not linked to any Facebook account at the time we made the bot. As a group, we discussed the cause of the unsuccessful Facebook page further and concluded that the fault lay with the picture, as it was retrieved from Shutterstock.

First Step of Verification


The Result of the Verification


If we had succeeded in actually creating the Facebook account, we would have expected to see a completely different timeline, tailored to the specific interests of our bot, especially considering how different his life and views are from our own, and considering the difference in gender.

To us, our research showed that it is now more difficult to set up a Facebook account than it used to be. Facebook’s verification process seems significantly stricter after events such as the 2016 United States presidential election.

Ethically, we initially felt that making the bot was not wrong per se, but the act of creating a fake profile suggested otherwise. For research purposes, we understand that this was a task we needed to complete. However, with terms like ‘catfishing’ in mind, we unanimously agreed that this research was not ethically justifiable, despite using a photo from Shutterstock: using someone else’s photo and attaching it to a different identity without their consent is outright wrong.

While discussing the fbtrex tool in class, we had our apprehensions about it. Now that we have used the tool for a week, we do not see it as a good fit for this kind of experiment, because it is suited to quantitative data research, which was not the case for this small group project. Our research into filter bubbles was aimed at the overall ‘experience’, meaning we focused primarily on qualitative data, such as differences in our newsfeeds or general activities.

However, our group members noted that early in January Facebook had changed its algorithm to favour content posted by friends rather than news outlets. Upon discussing and comparing our news feeds, we concluded that they still primarily feature friend activity, events around us, advertisements focused mainly on shopping, and random entertainment videos. One of our team members, Nadia, who is the most active on Facebook and has a clear-cut political inclination, had more news articles show up than the rest of us. The other team members did not notice any significant change. We expected that people who are more active on Facebook would have a more tailored and distinctive newsfeed than those who are less active, but we did not find this to be true.

As for how this research could be improved in further studies, rather than merely confirming the filter-bubble theory taught to us in lectures, this kind of tool and research must be approached from a quantitative perspective. Multiple factors were not controlled and thus do not provide reliable data for analysis and comparison between accounts.

Blogpost 1: Fake News and False Flags

Our group takes the position that the article ‘Fake News and False Flags’ by The Bureau of Investigative Journalism does not feature enough explicit quantitative data to qualify as a ‘data’ journalism project; rather, it is primarily based on citizen journalism, namely a single interview with Martin Wells.

The article investigates how the Pentagon – the US Department of Defense – paid the British PR firm Bell Pottinger half a billion US dollars to work in Iraq. The purpose was for the firm to produce fake terrorist videos and TV segments, manipulated to look as if they came from Arabic news sources.

Iraqis walk under billboards showing posters urging people to report terrorist acts in Baghdad in 2006. "For the sake of Iraq, open your eyes," says the slogan.

The article alludes to official documents, data or statistics being ‘unearthed’; however, it is not transparent enough: “A document unearthed by the Bureau shows the company was employing almost 300 British and Iraqi staff at one point.” This does not reveal which document presented this number. Further on, the journalists write: “The Bureau has identified transactions worth $540 million between the Pentagon and Bell Pottinger for information operations and psychological operations on a series of contracts issued from May 2007 to December 2011.” In these numerical statements the journalists do not refer to any data collection process or analysis, nor are any of the contracts revealed. One claim is based on data unearthed in a “similar contract”, where the journalists state that “we have been told”, demonstrating their reliance on anonymous sources or on Martin Wells.

Martin Wells inside Camp Victory

Other quantitative claims seem to be based on hearsay: “Lord Bell told the Sunday Times”. Formulations such as “the bulk of the money was for costs such as production and distribution” point towards missing data or evidence for the investigation. Other sentences gesture at numbers with quantifiable words such as “a tide of violence”. Further figures still seem to rely on numbers given by Martin Wells, such as “five suicide bomb attacks”, without reference to any specific data or evidence of such events occurring.

Another point of criticism is that the numbers in the story serve as illustration or ‘evidence’, yet the story could be understood without them; saying ‘an enormous amount of money’ instead of $500 million would have the same effect on the overall presentation and flow of the piece. Furthermore, the reader is left to assume that the numbers are factual, and they lack thorough context. Such statements are easy to follow, and since they rest on a single real-life account, that of Martin Wells, they lead us to believe it is necessary to remain critical of such a journalistic ‘investigation’.

In terms of visual proof, the article does not provide much either. It features a 10-minute interview with Martin Wells, alongside some of his personal photos from his time in Iraq. The remaining visuals, however, are general stock photos found through Getty Images.

A soldier rides on top of his vehicle past a billboard urging Iraqis to take part in the upcoming elections in Basra in January 2005

Upon further investigation into the journalists’ other work, there seems to be a strong preference for narrative-based journalism over numbers. It is possible that the journalists, Crofton Black and Abigail Fielding-Smith, prefer this style because the topics are incredibly sensitive, involving governmental institutions, the army, and other bodies that are not easily criticized and whose data is not available to the public. To improve this specific piece of investigative journalism, the journalists should have sought multiple interviewees to support Martin Wells’s account. However, one of our limitations is that our criticism is based solely on this article, which is part of a larger investigative series on Privatised War by The Bureau of Investigative Journalism.

Essentially, it can be argued that this format of investigative citizen journalism cannot easily be compared to other works of ‘data journalism’, given the lack of transparency in methodology, research and analysis.

Link to the article: Fake news and false flags