The Facebook Files (are not as damning as the Guardian claims they are)

Content Warning: This post and, in particular, the contents of the sources for this post, contain references to, and images of, graphic violence and abusive comments.

On Sunday, the Guardian released a series of excerpts from Facebook’s moderation guidelines [1]. These guidelines are used by moderators to determine which content should be taken down, censored for certain age groups, or marked as disturbing, which ensures that it does not automatically play on a user’s timeline. The guidelines have caused some controversy, which the Guardian appears to be revelling in, and Yvette Cooper has made a public statement about the inadequacy of the Facebook moderation process [2]. With good reason; Facebook Live has enabled the streaming of a host of disturbing content of late, including videos of murder and sexual assault [3].

Before we judge Facebook’s moderation rules, it is worth looking at how they are enforced and the problems Facebook faces in doing so. Facebook uses a number of tools to identify content that needs to be removed, the simplest being asking users to flag content. You can flag any content as spam, offensive or inappropriate, and that content will be removed from your timeline (you can also remove something from your timeline without reporting it) [4]. This feature alone is not sufficient to moderate content, however – it took over an hour for the Facebook Live video of the murder of Robert Godwin Sr. to be reported [5]. To tackle this problem, Facebook is exploring new AI techniques which can identify, for example, sexual imagery so that it can be automatically reported [6].

Another issue with the current system is that users may be reporting content which does not actually contravene Facebook’s community standards [7]. As such, Facebook employs around 1,000 moderators worldwide to check flagged material [8]. They have the ability to delete content, disable user accounts or pass content on so that it can be reported to the police [8]. Given that a moderator’s job essentially consists of looking at disturbing and graphic images all day, and that the position is quite poorly paid, the majority of moderators quit within 3 to 6 months of starting, and many likely suffer from PTSD as a result of their employment [8]. The task facing moderators is enormous – there are millions of content reports to sift through each week [9]. As such, Facebook is hiring 3,000 new members of staff to help with moderation [9]; however, their efficacy will always depend on their training and the internal guidelines provided by Facebook.

So, what do the guidelines actually say? The Guardian has leaked extracts from documents on sextortion and revenge porn, sexual activity, sex and nudity in art, non-sexual child abuse, credible threats of violence, graphic violence and cruelty to animals [10]. These documents outline how to determine whether or not content should be removed, marked as disturbing, escalated or ignored. Before we look at the guidelines in more detail, I think it’s worth noting that, because of the way that the Guardian has chosen to present these files, it is not possible to evaluate their completeness. Potentially, this is just a small extract of the documentation provided to moderators, or maybe this is all they get. It is also unclear from this leak whether moderators receive additional training and guidance. The Guardian has stated that they have removed some distressing content from the Facebook Files, but haven’t been clear about how much (they claim to have removed ‘many’, that well-known standard unit of measurement). As such, it’s only really possible to evaluate the material available and recognise that this is not the same as fully evaluating Facebook’s moderation process.

Having said that, the Facebook Files seem to suggest a pretty woeful moderation policy. The guidelines appear to be explained through example, and it is often not clear whether these examples form a complete set. For instance, the globally protected vulnerable groups listed in the credible violence guidelines are homeless people, foreigners, and Zionists [11]. (This means that violent threats are only automatically considered credible if they are directed at these groups and not at any others.) There are no clear rules on why some groups are considered vulnerable and some aren’t, and the documentation does not explicitly say whether these are the only vulnerable groups. Assuming that this documentation is all moderators have to go on, they are likely very unprepared for content which is not included in the documentation, or for content that cannot be clearly linked to a given example.

Moderation policies built on example, as opposed to clear definitions, are poor because they leave a lot of space for individual moderators to decide the parameters of acceptable content beyond the examples they are given. This allows the policy to be enforced unequally (a flaw in and of itself) and inadequately. If Facebook has a specific idea of who they consider to be vulnerable, for example ‘activists’, they need to state it clearly rather than giving a handful of examples (to be fair, they haven’t even done that) and letting moderators take their best guess.

Facebook’s policy appears to be centred around keeping as much content up as possible. They will allow content that calls for violence against certain groups or individuals as long as that content wouldn’t actually increase that person/group’s chance of facing violence: “we aim to allow as much speech as possible but draw the line at content that could credibly cause real world harm” [10]. I’m not going to pass judgement on the approach Facebook is taking to decide what should and should not stay up (just yet). But, I would like to point out that “real-world harm” is a pretty subjective concept and therefore a shaky foundation upon which to build your moderation policy. Do you consider someone feeling threatened or upset to be a real-world harm? If so, a non-credible threat may still contravene your policy. Once again, the guidelines allow for individual moderators to decide for themselves what content is and is not ok.

Some of the moderation policy is actually built around hard and fast rules, presumably to limit this subjectivity. For example, the guidelines on sexual content are explicit that “general threats of exposure” do not need to be removed, whereas specific ones should be, with some (more) examples to illustrate the difference between the two [12]. It seems that where clear rules can be applied, they are provided. Whilst it is frustrating that there are undefined terms in the documents, the terms left undefined are actually quite hard to define (and they may well be given greater context in other areas of the guidelines). It is also worth noting that explicit rule-based systems can cause their own problems, in the sense that they can be gamed. For example, if you know that Facebook won’t consider threats credible if they contain the phrase “when hell freezes over” instead of a real-world date or time, you might start using that phrase to threaten people. A moderation system becomes particularly vulnerable when someone leaks all of these rules to your users – see The Good Fight, episode 6 (“Social Media and its Discontents”) for an example. (As adorable as I think the show and its attempts to discuss online issues are, I don’t think its portrayals are realistic, just fyi.)

One section of the guidelines which I actually think is quite well handled is that on depicting graphic violence. Facebook wants both to allow users to share graphic images of important events, for example to raise awareness of or provoke discussion about conflict, and to stop users from posting images of violence that they condone and enjoy viewing. It is genuinely difficult to draw the line between these two groups – do you consider the photograph of Alan Kurdi’s body to be a powerful image that has helped to humanise the plight of refugees fleeing the Syrian conflict, or an image of a dead child taken without the consent of his parents and shared despite his father’s express wishes? Instead, Facebook has chosen to determine whether or not content should be removed not by looking at the content itself but by looking at the context in which it was shared. If a video or picture depicting graphic violence is shared in a way that “express[es] sadism”, it is removed [12]. And this time, sadism is actually defined (they dedicate 4 whole slides to its definition), demonstrating that employees at Facebook can actually use a dictionary.

Ultimately, it seems, Facebook is aiming for a moderation policy that allows for the most content possible, and so is only willing to remove content if it is likely to put other people off participating on the site and producing their own content. This seems fair, given the nature of their business, even if it seems distasteful or bizarre, for example because their default position is to allow images of animal cruelty (it is hard not to be judge-y) [13].

Now, I don’t want you to think that I like Facebook from this post. Some of their approaches are pretty naïve, for instance, their policy to leave up content (or, as they put it, “evidence”) of child abuse to help identify children [14]. Just take it down and pass it on to the authorities. I also think that individuals depicted in any content shared on Facebook should have the right to demand it be taken down, even if that content does not otherwise contravene the moderation policy. But, other than those two (really quite straightforward and obvious) things, I certainly think that Facebook has a better moderation policy than the Guardian headlines would lead you to believe.

  1. https://www.theguardian.com/news/2017/may/21/revealed-facebook-internal-rulebook-sex-terrorism-violence
  2. https://www.theguardian.com/news/2017/may/22/no-grey-areas-experts-urge-facebook-to-change-moderation-policies?utm_source=dlvr.it
  3. http://uk.reuters.com/article/us-chicago-violence-facebook-idUKKBN17403K
  4. https://www.facebook.com/help/408955225828742?helpref=faq_content
  5. https://newsroom.fb.com/news/h/community-standards-and-reporting/
  6. https://www.theguardian.com/news/gallery/2017/may/22/what-facebook-says-on-sextortion-and-revenge-porn
  7. https://www.facebook.com/communitystandards
  8. http://wersm.com/how-does-facebook-moderate-content-infographic/
  9. https://www.ft.com/content/400414f8-300e-11e7-9555-23ef563ecf9a
  10. https://www.theguardian.com/news/series/facebook-files
  11. https://www.theguardian.com/news/gallery/2017/may/21/facebooks-manual-on-credible-threats-of-violence
  12. https://www.theguardian.com/news/gallery/2017/may/21/facebooks-internal-guidance-on-showing-graphic-violence
  13. https://www.theguardian.com/news/gallery/2017/may/21/facebook-rules-on-showing-cruelty-to-animals
  14. https://www.theguardian.com/news/gallery/2017/may/21/facebooks-internal-manual-on-non-sexual-child-abuse-content

Changing Perspectives

What is it?

Perspective is an API (Application Programming Interface – a set of building blocks that can be put together to make software) designed to help moderators combat the negative effects of toxic comments [1]. It uses machine learning to provide a real-time evaluator of the “toxicity” of comments, where toxicity is defined as “a rude, disrespectful, or unreasonable comment that is likely to make you leave a discussion” [1]. The tool was made by Jigsaw, an “incubator” (whatever that means) of Alphabet (as in Google), a company that tries to create technology to “tackle some of the toughest global security challenges facing the world today” [2]. It was made publicly available last week (23/02/2017) [3], but the lead researchers published a paper on Perspective last year [4].

How does it work?

Perspective produces a toxicity percentage for any input – the likelihood that the comment is a personal attack. This percentage is determined by a machine learning classifier [4]. Machine learning classifiers take unseen data as input and sort it into different categories (or classes). They learn how to do this by first being trained on categorised data. Essentially, you provide your classifier with a sorted data set and the classifier tries to find attributes which are common amongst samples within the same category but differ between samples from different categories. Once fully trained, it can sort new, unclassified data, and some systems continue to learn from that new data too.

For example, suppose I had the following set of sentences:

“I am happy” – positive

“Today made me happy” – positive

“I am sad today” – negative

“I feel very sad” – negative

If I gave this information to my classifier it would associate the word “happy” with positive and “sad” with negative; however, the words “I”, “am”, and “today” are present in both categories and so the classifier would “know” not to use these as class indicators. The words “made”, “me”, “feel”, and “very” may prove problematic as they, too, only appear in one category, but it is obvious (to us) that they do not determine whether a sentence is positive or negative. If we asked the classifier to classify the sentence “I feel very cheerful” it might think it was a negative sentence, and then learn that the word “cheerful” was an indicator for negative sentences. Additionally, if we asked the classifier to classify the sentence “Hello, my name is Cerys” it would be unable to, as none of the words in that sentence have been seen by the classifier before. To counteract this problem, we could train the classifier on a much, much larger corpus of classified sentences.
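The toy example above can be sketched in a few lines of Python. This is deliberately a naïve word-counting classifier, nothing like Perspective’s actual model – it just makes the behaviour described above concrete, including the misclassification of “cheerful” and the failure on entirely unseen words:

```python
from collections import Counter

def train(labelled_sentences):
    """Count how often each word appears under each label."""
    counts = {}
    for sentence, label in labelled_sentences:
        for word in sentence.lower().split():
            counts.setdefault(label, Counter())[word] += 1
    return counts

def classify(counts, sentence):
    """Score each label by how many of the sentence's word occurrences
    it has seen; return None if no word is known at all."""
    scores = {label: sum(c[w] for w in sentence.lower().split())
              for label, c in counts.items()}
    if all(score == 0 for score in scores.values()):
        return None  # every word is unseen, like "Hello, my name is Cerys"
    return max(scores, key=scores.get)

training_data = [
    ("I am happy", "positive"),
    ("Today made me happy", "positive"),
    ("I am sad today", "negative"),
    ("I feel very sad", "negative"),
]
model = train(training_data)

print(classify(model, "I am happy today"))       # "happy" tips it positive
print(classify(model, "I feel very cheerful"))   # "feel"/"very" tip it negative
print(classify(model, "Hello my name is Cerys")) # None: no word seen before
```

Note that “I feel very cheerful” really does come out negative here, purely because “feel” and “very” happen to sit in the negative training sentences – exactly the spurious association described above.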

Perspective was trained on 100,000 hand-classified comments from Wikipedia and 63 million machine-classified comments [4]. As long as these comments were varied enough, this classifier will be much, much more accurate than the above example. In order to ensure there were enough toxic comments, 17% of the corpus came from comments that led to users of Wikipedia being blocked by moderators [4]. The initial input data was hand-labelled, and the creators of Perspective chose to do this through crowdsourcing using the platform Crowdflower [5]. Potential annotators were required to pass a test before they could participate, in which they had to annotate 10 randomly selected comments as being a personal attack or not, and had to annotate 7 out of the 10 correctly in order to be selected.

Each comment in the training data was annotated by 10 annotators, who were each asked the question “does this comment contain a personal attack or harassment?”, and then received a label based on the aggregate of their answers. It was found that the majority of comments did not receive a unanimous classification, reflecting the subjective nature of evaluating comments. The machine learning classifier was trained both on comments that were labelled as either toxic or non-toxic (based on the majority opinion) and on comments that were labelled in percentages (e.g. if 7 out of 10 annotators considered a comment toxic it would be labelled as 70%). This latter method produced a better classifier.
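The two labelling schemes are easy to see side by side. A quick sketch, with made-up votes for a single hypothetical comment:

```python
# Hypothetical annotations for one comment: True means the annotator
# answered "yes, this contains a personal attack or harassment".
votes = [True] * 7 + [False] * 3  # 7 of 10 annotators saw an attack

# Percentage-style label: the fraction of annotators who said "attack".
fraction = sum(votes) / len(votes)

# Binary-style label: the majority opinion.
majority = "toxic" if fraction > 0.5 else "non-toxic"

print(majority, fraction)  # toxic 0.7
```

Training on the fractional labels preserves the disagreement between annotators, which is presumably why it produced the better classifier.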

It is worth noting that the annotators were presented with comments in isolation, completely out of context of the conversation they came from. This makes it much harder to assess the tone and intent of a comment. Further, the classifier was developed only on text-based features, i.e. information about the commenter (e.g. their past behaviour) and the conversation context were not given to the classifier. Both of these decisions limit the accuracy of the classifier. Firstly, certain comments may have appeared more or less toxic within the full conversation, so omitting this information could affect the training data. And, secondly, information about the commenter may have helped the classifier to distinguish between comments that had similar phrasing but different meanings. For example, if one commenter consistently uses the word gay in a derogatory manner and another positively, they could both use the phrase “haha, that’s so gay” but mean entirely different things.

Further, no demographic information about the annotators was provided. Determining the toxicity of a comment may be heavily dependent on the individual, their cultural background, and their exposure to toxic comments in the past. For example, in Scotland, the word “cunt” can be used as a term of endearment, but in the U.S. it is considered a pretty awful insult, and so comments containing that word will be interpreted differently by those two groups. Similarly, words, such as gay, which are often appropriated as insults, are interpreted differently by different groups of people. If you type “I am gay” into the experiment section on Perspective’s website, it returns a toxicity percentage of 86% (so super inaccurate). It is difficult to assess the cultural bias of the classifier without demographic information; however, Perspective is using its website to expand the information it is being trained on. Anyone can test phrases to see their toxicity and provide feedback on how accurate they think Perspective is. This has the potential to counteract the underrepresentation of particular perspectives, though it also makes the classifier vulnerable to interference from trolls, who have previously relished manipulating AI [6].
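For the curious, the API itself is queried with a small JSON body. A sketch of what a request might look like, based on my reading of the public documentation at the time of writing (treat the exact endpoint and field names as assumptions to check against the current docs, and you would need your own API key):

```python
import json

def build_request(text):
    """Build the JSON body for an AnalyzeComment call, asking only for
    the TOXICITY attribute."""
    return {
        "comment": {"text": text},
        "requestedAttributes": {"TOXICITY": {}},
    }

body = json.dumps(build_request("I am gay"))
print(body)
# You would POST this body to the comments:analyze endpoint with your
# API key; the response contains a score between 0 and 1 per attribute.
```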

Will it work?

Regardless of the potential methodological issues described, could Perspective be a useful tool to combat online harassment? The answer to this question is heavily dependent on how the tool is used. There have been some concerns about the use of this tool increasing censorship, if websites decide to replace existing, human moderators with an automated system [3]. Given the subjective nature of toxicity and the cultural biases already discussed, perhaps this will not be an improvement on existing methods of comment moderation.

It is worth pointing out, however, that moderation systems designed around individuals or small teams who delete comments based on whether or not they think they are toxic, or are toxic by some pre-agreed definition, are also subject to these biases. In fact, this is one of the ways that Perspective may improve on the human model, as it can incorporate a much larger aggregate of different cultural perspectives. Further, whilst an algorithm cannot read sarcasm or tone in a comment, arguably humans often cannot either, so a shift towards AI in this area is not necessarily a compromise on free speech in the name of efficiency. Also, this is a criticism of comment moderation in general, as opposed to this particular method of moderation, but that is a whole separate blog post.

The goal of Perspective, it seems, is not to moderate comments but to be a useful tool for comment moderators. The website suggests that the classifier be used to direct moderators towards toxic comments, thus simply speeding up the current system [1]. Alternatively, the experiment interface from Perspective’s website could be built into a forum to allow commenters to see the potential harm of their comment. This may be a useful way of preventing toxic comments from being written in the first place, as opposed to focusing on deleting them after the fact. Trying to achieve this objective has been fruitful in the past; the creators of League of Legends switched their in-game chat feature from being on by default to opt-in which, alongside a number of other scientifically derived interventions, led to a dramatic reduction in bullying and harassment [7].

Perspective could also be used to gain greater understanding of the nature of toxic comments. Indeed, in [4], some analysis was conducted on the nature of toxic comments and the commenters who make them. The researchers found that, whilst anonymous commenters were more likely to make toxic comments, toxic comments weren’t more likely to come from anonymous commenters. They also found that, whilst the majority of toxic comments come from a large number of infrequent commenters, a small proportion of commenters do make a disproportionate number of the toxic comments. And, they found that, on Wikipedia, the majority of toxic comments go unmoderated.

There are many more questions that could be asked, the answers to which would be useful for moderators. For example, Perspective could be used to investigate the toxicity of conversations over time, to see if there is a point of no return beyond which the conversation is mostly toxic comments as opposed to discussion. Or it could be used to investigate the number of participants in conversations before and after they become toxic, to see if toxic comments encourage or discourage participation from certain types of commenter.

Perspective is an ongoing, ever-evolving project that seems to hold more value as a measurement tool than as an actual moderator. I think it will be interesting to see what conclusions are drawn about toxic comments from the research it enables, and their implications for online debate and discussion.

 

[1] https://www.perspectiveapi.com/

[2] https://jigsaw.google.com/vision/

[3] https://www.wired.com/2017/02/googles-troll-fighting-ai-now-belongs-world/

[4] Wulczyn, Ellery, Nithum Thain, and Lucas Dixon. “Ex Machina: Personal Attacks Seen at Scale.” arXiv preprint arXiv:1610.08914 (2016).

[5] https://www.crowdflower.com/

[6] http://www.theverge.com/2016/3/24/11297050/tay-microsoft-chatbot-racist

[7] http://www.nature.com/news/can-a-video-game-company-tame-toxic-behaviour-1.19647

What’s Occurring?

Earlier this morning (13th Jan, 2017), the Guardian published an article entitled “WhatsApp backdoor allows snooping on encrypted messages”, which has stirred up something of a controversy. The article claimed that a security researcher had found a backdoor that would allow Facebook, or potentially governments, to read the encrypted messages of WhatsApp users [1]. A “backdoor” is simply a feature (intentional or not) of a computer system which allows unauthorised access – in this case, access by anyone who doesn’t hold the keys to the accounts of whoever is communicating via WhatsApp. For this story, the Guardian asked Facebook to comment on whether or not this feature had been used to access messages on behalf of other parties, insinuating that it was a design feature Facebook was aware of, and further evidence that Facebook’s acquisition of WhatsApp was harmful to the privacy of its users.

WhatsApp protects user messages through a process called end-to-end encryption [2]. This means that, when two users (Alice and Bob) want to communicate, they will personally generate their encryption keys (or rather their devices will) and then share them only with each other, and not with WhatsApp. Because WhatsApp doesn’t have either user’s encryption keys, it cannot decrypt messages sent from Alice to Bob (they are encrypted from one end of the journey to the other). This is different from server-to-client encryption, for which the server (WhatsApp) and the sender (Alice) would generate keys; Alice would then send their message to WhatsApp, who would pass it on to Bob. The Guardian is claiming that this “backdoor” allows WhatsApp to insert itself between Alice and Bob in order to steal their messages.
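The key-sharing step can be illustrated with a toy Diffie-Hellman exchange. This is a sketch with textbook-sized numbers, not WhatsApp’s actual scheme (which builds on the Signal protocol and uses far stronger elliptic-curve primitives), but the shape is the same: each party keeps a private value on their own device and only the public halves ever cross the network.

```python
import secrets

# Toy Diffie-Hellman parameters; real systems use enormous ones.
P, G = 23, 5

def keypair():
    private = secrets.randbelow(P - 2) + 2  # never leaves the device
    public = pow(G, private, P)             # safe to send over the wire
    return private, public

alice_priv, alice_pub = keypair()
bob_priv, bob_pub = keypair()

# Alice and Bob swap only their PUBLIC halves, yet both derive the same
# shared secret. The server relays the public halves but never sees a
# private key, so it cannot compute the secret itself.
alice_secret = pow(bob_pub, alice_priv, P)
bob_secret = pow(alice_pub, bob_priv, P)
print(alice_secret == bob_secret)  # True
```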

In response to the article, however, several researchers and other publications have pointed out that the so-called backdoor is actually a standard attack called the man-in-the-middle attack (MitM attack) [3-7]. Lots of encryption tools are vulnerable to this kind of attack and, whilst measures can be put in place to protect users against them, they are not necessarily an intentional feature implemented by the creators of the system.

A MitM attack works as follows. If WhatsApp wanted to take a copy of Alice’s message to Bob, it could trick Alice’s device into thinking that it was sharing keys with Bob when, instead, it was sharing keys with WhatsApp. When Alice sends their message, WhatsApp could intercept it, decrypt it (as it now has the key), and then re-encrypt it and pass it on to Bob if it wanted to cover its tracks. This attack is only possible if Alice is unable to distinguish between Bob and WhatsApp.
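Continuing the toy exchange from above, the attack amounts to the server handing each party its own public key instead of the other’s. Again, this is an illustrative sketch, not WhatsApp’s real protocol:

```python
import secrets

P, G = 23, 5  # toy Diffie-Hellman parameters, as before

def keypair():
    priv = secrets.randbelow(P - 2) + 2
    return priv, pow(G, priv, P)

alice_priv, alice_pub = keypair()
bob_priv, bob_pub = keypair()
server_priv, server_pub = keypair()  # the "man in the middle"

# The server substitutes its OWN public key for each party's, so each
# unknowingly agrees a secret with the server, not with each other.
alice_secret = pow(server_pub, alice_priv, P)  # Alice thinks: Bob's key
bob_secret = pow(server_pub, bob_priv, P)      # Bob thinks: Alice's key

# The server can reconstruct both secrets, so it can decrypt Alice's
# message and re-encrypt it for Bob to cover its tracks.
assert pow(alice_pub, server_priv, P) == alice_secret
assert pow(bob_pub, server_priv, P) == bob_secret
```

The assertions at the end are the whole point: the attacker holds both shared secrets, and neither Alice nor Bob can tell from the ciphertext alone.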

In order to prevent the attack, WhatsApp requires users to generate unique identity keys. So, when Bob and Alice prepare to send their messages, Bob will first establish themselves with their identity key so that Alice knows they have the correct key. However, users need to be able to generate new keys when they change devices (which is why you can have WhatsApp on your phone and in your computer’s web browser). What the Guardian article is claiming is that this feature allows WhatsApp to carry out a MitM attack, because WhatsApp has the ability to generate new sets of keys if a user is offline.

So, if Alice were to set up a communication with Bob and then switch off their device, WhatsApp could generate a new key, pretend that it belongs to Alice, and trick Bob into resending their messages to WhatsApp. This is possible because of two aspects of the WhatsApp protocol. First, WhatsApp automatically resends messages that failed to send without requesting permission from the sender (which makes sense, given that the majority of people who use WhatsApp would get annoyed by their phone repeatedly asking to resend messages). Second, by default, devices receiving messages don’t check the identity key when a device they’ve been communicating with changes keys.

To conclude, the design of the WhatsApp encryption protocol does mean that Facebook would be able to intercept your messages, if it wanted to and you occasionally turn your device off. However, this is not necessarily part of a nefarious plot (or security-related government mandate, depending on which side of the fence you sit) and could just be that WhatsApp prioritised usability when dealing with a vulnerability inherent to this type of system. Also, if you go to your security settings you can change the default so that you are notified when devices you communicate with have changed their keys. Doing so will allow you to stop communicating with them until they can be verified, to avoid MitM attacks. Whilst you can’t change the fact that your unsent messages will resend automatically, I would also recommend deleting unsent messages until you are able to send them.*

*I am not claiming these actions will protect you against MitM attacks, just make it a bit harder for attackers to carry them out.
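The key-change check behind that security setting can be sketched very simply: keep a short digest (fingerprint) of the key you last verified, and warn when an incoming key doesn’t match it. Comparing fingerprints out of band is roughly what WhatsApp’s security-code verification screen amounts to; the byte strings below are placeholders, not real key material.

```python
import hashlib

def fingerprint(public_key_bytes):
    """A short digest of a public key, suitable for comparing by eye."""
    return hashlib.sha256(public_key_bytes).hexdigest()[:16]

# The key Bob originally verified for Alice (placeholder bytes).
stored = fingerprint(b"alice-key-v1")

# Later, messages arrive (or are resent) under a different key...
incoming = fingerprint(b"attacker-key-v1")

if incoming != stored:
    print("Alice's security code changed - verify it before resending")
```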

 

  1. https://www.theguardian.com/technology/2017/jan/13/whatsapp-backdoor-allows-snooping-on-encrypted-messages
  2. https://www.whatsapp.com/security/
  3. http://gizmodo.com/theres-no-security-backdoor-in-whatsapp-despite-report-1791158247
  4. https://twitter.com/kaepora/status/819893101162442752
  5. https://twitter.com/martinkl/status/819870987823042560
  6. https://twitter.com/FredericJacobs/status/819866979020443648
  7. https://twitter.com/matthew_d_green?ref_src=twsrc%5Egoogle%7Ctwcamp%5Eserp%7Ctwgr%5Eauthor

A Response to the Response to the Removal of Lord Carey of Clifton (in poster form)

I was a student at King’s College, London from 2011 until 2014. I actually chose to study at that university not because of its religious nature nor esteemed alumni but because they gave me a place on a decent maths course and I was basically rejected from everywhere else. I enjoyed my time at KCL and was provided with many opportunities, both academic and extra-curricular, for which I am grateful. Since graduating, I have followed a number of different attempts by my alma mater to make itself more inclusive and reflective of its diverse student population, many of whom, I am sure, chose the university for its religious history and many of whom did not. Some of the campaigns have been more successful than others, admittedly, but most are pushed by an active, and loud, student voice [1].

The most recent concerns the KCL wall of fame, a series of posters featuring famous alumni covering the front of the Strand campus. More specifically, it concerns one poster on the wall of fame depicting Lord Carey of Clifton, the Archbishop of Canterbury from 1991 until 2002 and a man who holds strong and vocal views about same-sex marriage [2]. This poster was the subject of a student campaign starting in 2010. Advocates such as the current SU President, Ben Hunt, called for the poster to be removed, and it now has been – nominally to make space for a new video board [3].

The removal has, unsurprisingly, been the subject of controversy. KCL academic Niall McCrae and pastor Jules Gomes published an open letter in The Conservative Woman which defended Lord Carey and criticised the removal of his poster (though not the removal of Archbishop Desmond Tutu, nor that of Sir Michael Howard, both of whom supported same-sex unions in one form or another). The letter can be read in full here [4]. I didn’t like the letter, so I have written a response.

In their brief history of the discussion surrounding the poster’s removal, the authors describe how Lord Carey was not that bothered by the publicity but was worried about growing censorship in academia [4]. He has, in fact, been quoted as describing himself as “entirely relaxed” about the discussion, if concerned about free speech [5].

Here is my first point of contention: the University’s decision to change the “wall of fame” to make it more representative is not censorship. One would expect a university wall of fame to evolve over time, not least because (one would hope) the university would be producing more alumni with achievements to feature on it, but also because the values held by the university change over time (just one example – KCL now allows women to study and obtain qualifications) and so the ideals and achievements deemed wall-of-fame-worthy would change with them.

Featuring an individual who supports conversion therapy [6] and does not believe that same-sex and different-sex relationships are the same, or should be treated as such [7], on your wall of fame does suggest that you support and celebrate his views. If the student population, over time, decides that they do not, in fact, support or celebrate those views, it makes sense to change the way in which you discuss or display that individual’s alumni status.

Which brings me to my second contention, the following sentence:

“Two years later, LGBT activists were impatient: the allegedly homophobic ex-Archbishop was still there, traumatising students.”

Mate, there’s just no need for that tone. One of the most aggravating effects of the Spectator’s Generation Snowflake article [8] has been the characterisation of any request by a university student as overly emotional and exaggerated to the extreme. It feels as if any calm and rational argument, such as “applauding a homophobic individual on our wall of fame sends the message that this university supports his views, which makes LGBT+ individuals feel less comfortable as members of this institution”, can easily be twisted into an unsupportable and ridiculous statement simply by reminding everyone that a student made it.

If you do have same-sex partnerships, or are affected by the view that sexuality is something to be “cured”, it is pretty scary to know that the institution that is supposed to protect and support you through your higher education endorses the homophobic views of Lord Carey. It is also unpleasant to consider yourself someone who supports an institution promoting those views (or the individuals who hold them). Walking past that poster every day serves as a constant reminder that you are also condoning those harmful positions. So, whilst I wouldn’t say it was traumatic, I would argue that it’s an unnecessary unpleasantness.

But, claim McCrae and Gomes, Lord Carey isn’t actually homophobic. Presumably, they also think that Lord Carey is someone KCL should be celebrating, and that it is upsetting for them to see that their institution does not consider the good they perceive Carey to have done sufficient for him to remain on the WoF. I am intrigued as to whether, if McCrae or Gomes could be convinced that Carey is homophobic, they would change their minds. So, I have written a response to their argument that he isn’t.

Their argument mostly seems to be that the ex-Archbishop would have encountered gay people during his work and is required to treat them the same as any other human being. Which is somewhat in contradiction with words he has actually said, with his mouth:

“Same sex relationships are not the same as heterosexual relationships and should not be put on the same level” [8].

However, there is evidence that Lord Carey did treat gay and straight people the same: he claimed to have ordained people he knew to be gay.

They conclude their argument with an extract from a lecture given by Lord Carey on tolerance, in which he describes true tolerance as the respect given to someone’s views despite the pain involved. If it hurts you to acknowledge that someone is attracted to (and/or in love with) a person or people who identify with the same gender as them, then you are homophobic, regardless of how tolerant you are. If you believe, and support, the idea that homosexuality is something inside you that can be changed and that, despite the harm it causes [9], individuals should be encouraged to explore “therapies” designed to suppress their sexual identity, then you are homophobic. Nor is it the case that Lord Carey does not act on his homophobic thoughts: he was a prominent campaigner against the Marriage Equality bill [10].

I think what I find most bizarre is that, during this argument, the authors point to other issues for which Carey has deviated from the “conservative evangelical” view. If he was prepared to change his, and the Church’s, stance on the ordination of women and on assisted suicide, why not same-sex marriage?

There is one final comment that should be made on the article. McCrae and Gomes chose to describe the LGBT+ activists who campaigned for the removal of the poster as the “Gaystapo”. Now, I do love a good pun, but stapo right there. During WWII the Gestapo arrested an estimated 100,000 gay men, interning as many as 15,000 in concentration camps [11]. It’s disgusting to liken activists who campaigned to make their campus more LGBT+ friendly to an organisation that systematically destroyed the German LGBT+ community through the eradication of its literature, homes, and population, and which promoted conversion “therapies” such as hard labour and castration.

If you want to have a discussion about the presence of Lord Carey on campus and what it might mean to staff and students to have his poster removed, then fine, let’s have that discussion. But it’s not going to happen in a tolerant and respectful manner when the experiences of LGBT+ students are belittled and their actions are likened to genocide.

  1. http://thetab.com/uk/kings/2016/02/25/kings-students-campaigning-diversify-campus-7720
  2. http://thetab.com/uk/kings/2016/02/25/kings-students-campaigning-diversify-campus-7720
  3. https://www.buzzfeed.com/patrickstrudwick/a-university-lecturer-has-compared-student-lgbt-activists-to?utm_term=.ijELj34W1#.bfmEk6QRD
  4. http://www.conservativewoman.co.uk/niall-mccrae-rev-jules-gomes-kings-college-wrong-erase-carey-wall-fame/
  5. http://www.christiantoday.com/article/archbishops.george.carey.and.desmond.tutu.removed.from.kings.college.london.display.in.gaystapo.row/103199.html
  6. http://www.telegraph.co.uk/news/religion/9046487/Lord-Carey-backs-Christian-psychotherapist-in-gay-conversion-row.html
  7. http://londonstudent.coop/news/2015/02/19/lord-careys-kcl-window-taken-lgbtq-campaign-win/
  8. http://www.spectator.co.uk/2016/06/generation-snowflake-how-we-train-our-kids-to-be-censorious-cry-babies/
  9. https://www.theguardian.com/world/2012/apr/13/gay-conversion-therapies-bullies-missionary
  10. http://www.bbc.co.uk/news/uk-politics-22727808
  11. https://www.ushmm.org/wlc/en/article.php?ModuleId=10005261

 

The 411 on Rule 41

In the U.S., Rule 41 is a Federal Rule of Criminal Procedure which specifies the scope of issued warrants [1]. It details when and where warrants can be issued, the burden of proof required, and the procedure which must be followed when a warrant is issued. On the 1st of December 2016, Rule 41 was changed to make it more applicable to cybercrime [2]. There were two amendments to the rule. The first allows law enforcement (L.E.) to obtain a warrant in one district and apply it to devices in other districts if their location has been hidden; it also gives L.E. the power to collect information remotely when an investigation involves many different devices. The second requires L.E. to make reasonable efforts to notify, i.e. serve the warrant to, individuals when their devices are being remotely accessed or their data is being collected.

These amendments, whilst small, grant L.E. considerably greater power in cybercrime investigations. Rule 41 has been a major blockade in investigations where suspects hide their identity through Tor or other anonymity-providing tools. Take, for instance, Operation Pacifier – because the suspects identified were residing outside the jurisdiction of the warrant issued, the FBI is struggling to secure many convictions [4]. This is the case that inspired the changes to Rule 41, as it demonstrated very clearly the problem with a system where L.E. must specify the geographical limits of their power before identifying the location of their suspect.

Whilst this change seems sensible, even startlingly reasonable for 2016, privacy advocates in the U.S. have raised several issues with it, mostly because of the way that L.E. are now able to collect information. Firstly, they can now do so on a much larger scale. Rule 41 used to severely limit the number of privacy-enhancing technology (PET) users that could be investigated at any one time, because separate warrants were needed for separate jurisdictions; this is no longer the case. Secondly, the first amendment to Rule 41 allows L.E. to collect information remotely, and many are worried that this will involve L.E. hacking computers or uploading malware onto devices. The Tor Project has specifically addressed this issue, pointing out that this power would allow L.E. to introduce vulnerabilities to the devices of individuals trying to protect their anonymity [5].

In the fight against cybercrime, these all seem like good things though, surely? When you are targeting child pornography sites with 11,000 visits a week [4] you want to be able to efficiently investigate large numbers of suspects. Further, when your suspects are using technology to make themselves anonymous, introducing malware to deanonymise them or disable that technology seems like a reasonable step in your investigation. But, one of the most concerning aspects of the new rule is that it implies all users of PETs are susceptible to these amendments. Simply by accessing the internet through Tor, you make yourself legally vulnerable to L.E. exercising their new power, regardless of your online activity.

Lots of people use Tor, or similar technology, to browse the internet without committing crimes. Many rely on PETs to avoid persecution or violence when sharing information. It is dangerous to infiltrate their computers with malware specifically designed to weaken their device’s ability to protect their identity, and wrong to do so if they do not fall within the scope of the L.E. investigation. For example, if L.E. were to infect a suspect’s device and adjust their privacy settings, they may leak information to other attackers. Or they could transfer information from a suspect’s device to a government server without encrypting it first, just as the FBI did in Operation Pacifier [8]. This would allow other parties monitoring the suspect to collect sensitive information without gaining access to their device at all.
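For contrast, encrypting data in transit is not exotic: Python’s standard-library `ssl` module, for instance, builds a client connection that encrypts traffic and verifies the server’s certificate by default. The sketch below is purely illustrative of how any tool moving data off a device could protect it from passive observers – the helper name and host are my own placeholders, not anything from the FBI’s actual tooling:

```python
import socket
import ssl

def secure_channel(host: str, port: int = 443) -> ssl.SSLSocket:
    """Open a TLS-wrapped TCP connection to `host`.

    create_default_context() turns on certificate verification and
    hostname checking, so a passive observer on the network sees only
    ciphertext, and an active one cannot trivially impersonate the
    receiving server.
    """
    ctx = ssl.create_default_context()
    raw = socket.create_connection((host, port))
    return ctx.wrap_socket(raw, server_hostname=host)

# Usage (placeholder host):
#   with secure_channel("evidence.example.gov") as s:
#       s.sendall(b"collected data")
```

The point is that the secure defaults come for free; choosing to ship data in the clear is an active decision, not a technical necessity.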

The solution, therefore, seems to be to only allow L.E. to use these extraordinary powers on the individuals who have committed crimes – only infiltrate the devices being used to download child pornography and the like. Unfortunately, the whole point of PETs is that they make it very difficult to distinguish between the group of individuals L.E. needs to investigate and the group of individuals that very much needs L.E. to leave them alone. Further, the new powers may also allow L.E. to specifically target the victims of cybercrimes, e.g. computers that are already infected with an offender’s malware, in their investigation so it is unlikely this rule will only affect cybercriminals.

Often legislation such as Rule 41, which tries to better equip L.E. in the fight against cybercrime, comes at the cost of the rights of a particularly vulnerable group within society. There are many reasons for individuals to want to hide their communications and/or their location [9]. Anonymous browsing and communications are vital tools for activists, whistleblowers, and oppressed groups. As governments across the world (including mine) grant more and more powers of online surveillance [10], these vulnerable groups, and all other citizens, are put in a position where they have to trust L.E. absolutely not to abuse their position, something which is becoming harder and harder to do [6].

The committee evaluating the proposed changes to Rule 41 before they were passed dismissed the fears of privacy advocates because they “misunder[stood] the scope of the proposal” [7]. This, I think, demonstrates a misunderstanding of how willing L.E. can be to collect information: if you can now legally infiltrate 11,000 computers, why would you settle for only 137? The wider that L.E. spreads its net, the more likely it is to make mistakes. Whilst I can imagine that a system filled with red tape and paperwork is frustrating, it is designed to be filled with checks and balances that make it hard for L.E. to investigate or harm innocent individuals.

It’s ok though; the changes to Rule 41 include a reminder that L.E. should tell people when a warrant has been issued against them. (This is really something that I would hope they were doing before these changes came to pass.) What they don’t include, however, is an increased burden of proof for L.E. to demonstrate that they deserve a warrant. Prior to the changes, investigating cybercrime was too difficult; now it is almost too easy.

 

  1. https://www.law.cornell.edu/rules/frcrmp/rule_41
  2. https://www.techdirt.com/blog/?tag=rule+41
  3. https://noglobalwarrants.org/images/proposed-amendment-rule-41.pdf
  4. https://thegreyareaweb.wordpress.com/2016/07/22/on-the-results-of-the-fbi-child-pornography-operation/
  5. https://www.deepdotweb.com/2016/09/24/tor-joined-fight-stop-upcoming-changes-rule-41/
  6. https://www.techdirt.com/articles/20160420/06055234219/fisa-court-still-uncovering-surveillance-abuses-nsa-fbi.shtml
  7. http://nationalsecuritylawbrief.com/supreme-courts-approves-change-to-rule-41-search-and-seizure-warrants-for-electronic-property/
  8. http://arstechnica.co.uk/tech-policy/2016/06/fbi-exploit-that-revealed-tor-enabled-child-porn-users-wasnt-malware/
  9. https://www.torproject.org/about/torusers.html.en#activists
  10. https://www.theguardian.com/world/2016/nov/29/snoopers-charter-bill-becomes-law-extending-uk-state-surveillance

We Need to Talk About Queer Representation in Harry Potter

Right, let me start off by laying my HP credentials right out on the table. I was introduced to J.K. Rowling’s great works by my mum, who read Harry Potter and the Philosopher’s Stone to my brother and me on an 8-and-a-half-hour journey to the South of France for a caravan holiday. We soon replaced her with the Stephen Fry audiobooks, of course, but I credit my mum with introducing me to my first ever love. I read the books until I could quote them. For my eleventh birthday I sent invitations out on yellow paper and in green ink. And, still, my proudest moment to date is the time when my dad helped me dress up as a 4ft Hagrid for a village. I won a bouncy ball in the shape of a snitch; it lit up when you bounced it. It wasn’t a completely carefree relationship, though; I did not like the first film when it came out. To be fair, none of my friends did either, because they thought it left out too much. So, we wrote our own script (by taking every word of dialogue from the book) and forced the reception students at our Primary School to watch us perform excerpts every lunchtime.

I do also feel obliged to admit that I did, technically speaking, go through a phase where I denied my fandom. In my first year of university, I pretended I had outgrown it (to my shame). I was a bit of a prat in general, though – I now put my poor form down to attempting to navigate a complicated relationship with my gender and sexuality. Which brings us nicely round to the objective of this (somewhat) self-indulgent piece.

I have read dozens of essays explaining how the Harry Potter series is a metaphor for coming out of the closet. I’d wager more than I can think of witty alternative titles for this piece. I sympathised with all of them, thought they had a point, too. That is, until I remembered that there are no queers in that universe. Oh, sorry, there are two. (No women though, you can’t be a lesbian AND a witch, stop being so ridiculous.) I’m still not exactly sure how I feel about J.K. Rowling’s announcement, from 2007, that Dumbledore had been gay the whole time. I remember seeing a news piece not long after about a guy who had just had a massive Dumbledore tattooed on his arm and how angry he was because all his friends now called him “bender” or “bender-lover” (which is somehow worse). The more I thought about Rowling’s decision, the angrier I became – I felt that she was trying to capitalise on the pink pound, after the fact, when the market research was in and showed that closeted teenagers across the country were craving merchandise. I then felt terrible for assuming the worst of someone who was so important to my childhood and convinced myself that, instead, I should be grateful for the representation. Beyond that confusion, I felt hopeful that there might be more LGBT+ characters in her writing.

How wrong I was.

When the Cursed Child came out, I was incredibly excited. My girlfriend sat online all day to get us tickets, and standing in the queue made me feel exactly as I did, aged 10, waiting for my copy of the Order of the Phoenix from Waterstones. The play is actually fantastic. The characters aligned perfectly with how I had imagined them, the magic was beautiful, and there was enough ridiculousness in the plot to fill a lifetime’s worth of pub chats. But, by the end of Part 1, I was most excited about the relationship between the two protagonists, Albus and Scorpius. It was so clear that they were gay. So. Clear. And I thought, “this is it, she’s only bloody done it”. But she hadn’t, because they’re “just friends” and one of them “gets the girl”. I was genuinely livid walking out of the theatre after Part 2. It seemed impossible to me that the portrayal of their relationship was anything but deliberately misleading. This was queerbait for the HP fans who love to write about forbidden LGBT+ romances (a practice that I do wholeheartedly support)4.

This anger was, once again, accompanied by a swathe of guilt. I had still enjoyed the play, Rowling was only a consultant, maybe I had actually imagined the whole thing. (Sorry, guys, but if they’re not in love with each other, why does Scorpius explicitly refer to Albus as his Lily?) Taking a step back, there were a lot of other things going on; I saw the play in the wake of the Orlando shooting, on the day of J.K. Rowling’s unfortunate tweet1, I was in a particularly overprotective frame of mind. I decided that I needed to distance myself a bit and start trusting her judgement again (by rereading tCoS).

Which brings us to Fantastic Beasts, the muse for this (growing ever more) self-indulgent essay. I went into that movie with high hopes and the best intentions. These were crushed. Lo and behold, our first openly queer character in a Harry Potter film is a child groomer and most likely a molester too. This means that, if you have been named as gay in the franchise, you have a 50% chance of being a sexual predator2. Not on. Whilst I recognise that it is wildly unfair of me to presume that Albus and Scorpius’ relationship in the Cursed Child was intentional queerbait by a marketing team who prioritised the fanfiction-writing fanbase over the LGBT+ community, I have no such self-restraint for whoever is behind the writing and direction of Grindelwald. I struggle to see how it was not intentional to develop the character in that way, and believe there is no excuse for not taking a step back and contemplating how upsetting it might be for LGBT+ fans to have their lack of representation flaunted in such a way.

I have heard every excuse in the book for the lack of LGBT+ representation. And, honestly, I sympathise with a lot of them – if Rowling had written openly gay characters or same-sex relationships into books 1 to 5, they likely wouldn’t have been allowed in school libraries. But it’s 2016, and Google is awash with rumours that Dumbledore and Grindelwald’s relationship will be “explored” in the movies. Our representation in this most beloved of worlds is no better than it was fifteen years ago; some could say it is worse. I, for one, have had enough of it. And by that, I don’t mean that I will stop loving the series (obvs), but I am done with the idea that Rowling is an ally3 and I no longer have any expectation for there to be LGBT+ characters in the future films.

 

  1. J.K. Rowling tweeted about one of the victims of the shooting. She tweeted about him because he worked on the Harry Potter ride at Universal Studios. Many people thought her tweet was heartfelt and moving; I felt that it was an appropriation of other people’s grief, particularly because her phrasing implied she would not have known his name had he not been tangentially related to her franchise.
  2. The probability increases if you take a more cynical view of Dumbledore and Harry’s relationship than I do.
  3. I find the woman so confusing. On the one hand she is a boss at Twitter and does use her platform to stand up for LGBT+ individuals, and she’s lost her billionaire status because of the sheer amount of money she donates to charity. But that just doesn’t give her a free pass. She is not immune from criticism because of how much respect I (or anyone else) have for her, and I think we need to start telling her that she needs to be less crap. Mostly because I think she might actually listen.
  4. A small addition to clarify some things in view of what people have said after reading this: There are lots of same-sex friendships in Harry Potter, and they are real friendships with ups and downs and trust issues and flaws and other explorations; that’s one of the main themes of the series. There are also many relationships between male and female characters given lots of space to develop differently.
    There are no same sex relationships.
    This is a big deal because it makes it hard to see oneself in the universe as a queer person (at least for me). As a result, when I came out, my relationship with the books changed in a way I really didn’t want it to. Because HP was such a big part of my childhood it felt like my sexuality had changed who I was, which was sad (I’m not saying every LGBT+ fan has had the same experience, I’m just trying to explain why representation matters so much to me).
    When watching CC, the disappointment of the protagonists’ relationship being platonic was crushing. A huge part of that was because I felt like it had been dangled in front of me, on purpose, and I felt incredibly foolish for believing there would be queer characters.
    I don’t think that I 1) would have been so desperate for their relationship to not be platonic nor 2) cared as much about it if there were other (casually) queer characters in other parts of the franchise.

Hype for Hyperion

Operation Hyperion, specifically the multinational Dark Net focused drugs operation (as opposed to the University of Portsmouth police body camera evaluation project [1] or a British army action in Lebanon in the 1980s [2]), took place this year from the 22nd to the 28th of October [3]. It was a combined action formed of law enforcement entities from Canada, the U.S., Australia, New Zealand, and Europe which resulted in the shutting down of some Tor websites, a number of arrests, and the identification of many Dark Net Market (DNM) users [3].

For the unfamiliar, DNMs are sites on the Dark Web, the part of the internet you can only access using specific privacy enhancing technologies such as Tor [4], where you can buy all manner of illegal (and occasionally legal) items, from drugs to weapons to fake identities. The first famous DNM was Silk Road, which came online in 2011 and was shut down in October 2013 when the FBI arrested Ross Ulbricht, the site administrator [5]. It has been estimated that around $200 million was spent on the site across its lifetime [6].

Unsurprisingly, DNMs are of interest to law enforcement as they provide a seemingly idyllic environment for offenders. The use of Tor, software that routes a user’s traffic through a series of relays to obscure the link between the user and the site they are visiting [7], provides pseudonymity (your activities are connected to an online pseudonym that supposedly cannot be traced to your real identity), making offenders harder to identify. Many users take extra precautions such as using gloves and masks when handling products, or using postal addresses that they can disassociate themselves from [6]. Whilst law enforcement has mounted several successful operations against DNMs in the past (for example, Operation Onymous in 2014, which resulted in at least 17 arrests and the closing down of 9 drug markets [8]), it is still unclear whether this has had a substantial impact upon the DNM population. It would appear that each time a website is shut down, more are created to fill its place [9].
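To give a concrete sense of the mechanics: applications do not connect to sites directly but route their traffic through Tor’s local SOCKS proxy (port 9050 by default), so the destination site sees only a Tor exit relay and the user’s ISP sees only encrypted Tor traffic. A minimal sketch, assuming a local Tor daemon is running; the helper name is mine:

```python
def tor_proxies(host: str = "127.0.0.1", port: int = 9050) -> dict:
    """Build a proxy configuration that sends traffic through Tor.

    The 'socks5h' scheme matters: with plain 'socks5' the application
    would resolve DNS locally, leaking every destination hostname to
    the user's ISP even though the traffic itself went through Tor.
    """
    url = f"socks5h://{host}:{port}"
    return {"http": url, "https": url}

# Usage with the requests library (needs the requests[socks] extra):
#   import requests
#   r = requests.get("https://check.torproject.org", proxies=tor_proxies())
```

The pseudonymity described above comes from this indirection: the site can profile the session (the pseudonym) but has no route back to the user’s real network address.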

The most recent effort, Hyperion, may have employed a slightly different approach and is being celebrated as quite the success [10]. There have been multiple arrests and many more users have been approached and warned by law enforcement [11]. The New Zealand police claim to have identified 300 people involved in the online distribution of drugs, made 6 arrests and issued 66 formal warnings [12]. The Canadian police have arrested one person in Quebec, and the Swedish authorities have stated that they have approached 3,000 people identified as having used DNMs [11]. During the operation, the Dutch police took over the server of a DNM (a Tor hidden service) and posted a list of known and arrested vendors on its front page [13]. Their method is somewhat reminiscent of the launch of Silk Road 2.0, whose administrators appropriated the FBI warning page for their launch campaign [14]. The new front page now bears the words

The Police and the Judicial Authorities of the Netherlands are not only active in the real world, but also in all corners of the internet. Here we trace people who are active on Dark Net Markets and who offer illicit goods or services there. Are you one of them? Then you have our attention.

presumably to act as a deterrent [13].

What is interesting about this operation is that it is not clear how it took place. None of the law enforcement agencies seem to have explicitly revealed how they obtained their information, and what has been released implies that several different tactics were used. Some sources suggest that vulnerabilities in Tor were exploited, whereas others imply that information was garnered by tracing the physical packages delivered from vendor to buyer [11]. Speculation on forums seems to convey both confidence in Tor and a fear for the future of DNM users [15]. Whilst there have so far been fewer reported arrests than in Operation Onymous, it will be interesting to see whether this effort has a larger long-term impact on the DNM community and whether simply warning or approaching users acts as a meaningful deterrent.

  1. http://www.revealmedia.com/case-studies/operation-hyperion
  2. http://www.iwm.org.uk/collections/item/object/205124378
  3. https://nakedsecurity.sophos.com/2016/11/07/tor-marketplaces-shut-down-by-operation-hyperion/
  4. http://www.dictionary.com/browse/dark-web
  5. https://en.wikipedia.org/wiki/Silk_Road_(marketplace)
  6. Lavorgna, Anita. “How the use of the internet is affecting drug trafficking practices.” The internet and drug markets (2016): 85-90.
  7. https://www.torproject.org/
  8. https://www.wired.com/2014/11/operation-onymous-dark-web-arrests/
  9. http://www.gwern.net/Black-market%20survival#data
  10. https://btcmanager.com/news/tech/law-enforcement-celebrate-darknet-busts-around-the-world/
  11. http://motherboard.vice.com/read/operation-hyperion-targets-suspected-dark-web-users-around-the-world
  12. http://www.police.govt.nz/news/release/kiwi-darknet-illegal-drug-buyers-identified-during-worldwide-operation
  13. https://nakedsecurity.sophos.com/2016/11/07/tor-marketplaces-shut-down-by-operation-hyperion/
  14. http://www.forbes.com/sites/andygreenberg/2013/11/06/silk-road-2-0-launches-promising-a-resurrected-black-market-for-the-dark-web/#649a9e8761c5
  15. https://www.shroomery.org/forums/showflat.php/Number/23799267