Twitter recently announced that it will no longer allow “the sharing of private media, such as images or videos of private individuals without their consent”. The move takes effect through an expansion of the social media platform’s private information and media policy.
In practical terms, this means photos and videos can be removed if the photographer did not obtain consent from the people pictured before sharing the item on Twitter. Individuals who find their image shared online without consent can report the post, and Twitter will then decide whether it should be taken down.
According to Twitter, this change comes in response to “growing concerns about the misuse of media and information that is not available elsewhere online as a tool to harass, intimidate, and reveal the identities of individuals”.
While the move signals a shift towards greater protection of individual privacy, there are questions around implementation and enforcement.
In contrast to some European countries – France, for example, has a strong culture of image rights protected under Article 9 of its Civil Code – the UK has no comparable tradition of image rights.
This means there is little an individual can do to prevent an image of themselves being circulated freely online, unless it’s deemed to fall within limited legal protection. For example, in relevant circumstances an individual may be protected under section 33 of the Criminal Justice and Courts Act 2015, which addresses image-based sexual abuse. Legal protection may also be available if the image is deemed to contravene copyright or data protection provisions.
On the one hand, the freedom to photograph is fiercely defended, largely by the media and photographers. On the other, private, unwanted or humiliating photographs can cause significant upset and distress, creating a clash of rights between the photographer (the legal owner of the photograph) and the person photographed (who often has no legal claim to the image).
My research
For my PhD, I surveyed 189 adults living in England and Wales on their experiences with images shared online, particularly via social media. My findings were published earlier this year.
Although some participants were not bothered or were even pleased to come across images of themselves shared online, others felt uneasy on finding images posted without their consent. As one participant said:
Fortunately I didn’t look too bad and wasn’t doing anything stupid, but I’d prefer to control images of myself appearing in public.
Another said:
I was quite angry about having my image shared on social media without my permission.
A narrow majority of respondents supported stronger legal protection of individual rights so that their image could not be used without consent (55% agreed, 27% were not sure and 18% disagreed). Meanwhile, 75% of respondents felt social media sites should play a greater role in protecting privacy.
I found people were not necessarily seeking legal protection in this regard. Many simply wanted some avenue of redress, such as it becoming the norm for photographs posted on social media without permission to be removed at the request of the person photographed.
Twitter’s policy change represents a pragmatic solution, giving individuals greater control over how their image is used. This may be helpful, particularly from a safety perspective, to the groups Twitter has identified, which include women, activists, dissidents and members of minority communities. For example, an image which reveals the location of a woman who has escaped domestic violence could put that person in significant danger if the image is seen by her abuser.
It also may be helpful to children subject to “sharenting” – having images shared online by their parents at every stage of growing up. In theory, these children can now report these images once they’re old enough to understand how. Alternatively, they can have a representative do this for them.
Teething problems
The change has understandably caused some consternation, particularly among photographers. Civil liberties group Big Brother Watch has criticised the policy for being “overly broad”, arguing it will lead to censorship online.
It’s important to note that this is not a blanket ban on images of individuals. Twitter has said images or videos that show people participating in public events (such as large protests or sporting events) generally wouldn’t violate the policy.
Twitter also draws attention to a number of exceptions, including where the image is newsworthy, in the public interest, or where the subject is a public figure. But how public interest will be interpreted would benefit from greater clarity, as would how the policy will apply to the media.
Within a week of the policy launching there were already teething problems. Co-ordinated reports by extremist groups about photos of themselves at hate rallies reportedly resulted in numerous Twitter accounts belonging to anti-extremism researchers and journalists being suspended by mistake. Twitter said it had corrected the errors and launched an internal review.
There are also concerns that minority groups may be harmed by the policy if they run into difficulties sharing images highlighting abuse or injustice, such as police brutality. Although Twitter has said that such events would be exempted from the rule on the grounds of being newsworthy, how this will be enforced is not yet clear.
There are certainly issues that need to be ironed out. But ultimately, this policy change does have the potential to protect individual privacy and facilitate a more considered approach to the sharing of images.
Holly Hancock does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.