Despite how far the UK has come in encouraging diversity and establishing new and improved equality regulations and rights, there are still many incidents of, and concerns over, LGBTQ+ discrimination, hate crime and bullying, online and otherwise. Technology and social media are a double-edged sword; alongside the ways they improve our lives comes misuse by some users who target and intentionally hurt people.
Cyberbullying isn’t always easy to stop, and the perpetrator isn’t always easy to identify. Once these comments are online, can they be taken down? Can action be taken against the users? This is where we turn to those hosting the sites, forums and social media groups. What obligations do they have to scrutinise what is being published? If there is a complaint, are they required to remove it, and are they accountable for the loss or injury these comments cause?
Platform hosts are business owners and should understand their responsibilities and the laws and rights involved. They should put in place a transparent set of processes, policies and terms to prevent discrimination, harassment, defamation or online bullying from taking place or continuing. Only with a united stance will these awful incidents cease to be topics of conversation in the future.
What is a Hate Incident, and when is it criminal?
If someone has been violent or hostile towards someone else because of their sexual orientation, this is known as a homophobic hate incident. Hate incidents can happen anywhere, including online, and the person causing the fear and upset may be someone known to the victim or a stranger.
It’s important to note that you can still be the victim of a homophobic or transphobic hate incident if the perpetrator believes you’re an LGBTQ+ person, even though you’re not. Abuse based on perceived sexuality is still a hate incident. You can also be the victim of a hate incident because of your association with, or support for, members of the LGBTQ+ community.
How can a Hate Incident take place online?
This can be in the form of emails, WhatsApp or other instant messages, as well as comments posted on forums or social media such as Twitter or Instagram. The posts can be written or pictorial, and include teasing, bullying, threatening behaviour, persistent online abuse, and inciting others to post abuse or, worse, to cause physical damage or carry out assaults. It can be a one-off incident or part of an ongoing campaign of harassment or intimidation.
When is this reportable to the police?
When a homophobic or transphobic hate incident becomes a criminal offence, it’s known as a hate crime under the Criminal Justice Act 2003. This is where someone is frightened, intimidated or made a victim of violence, or threatened with violence, because of their actual or perceived sexual orientation.
Are these forums doing enough?
Many websites and platforms expressly prohibit bullying, hate crime and harassment, and if a report is made the offending content and/or the bully should be swiftly removed from the site. However, some platforms are clearly more efficient than others.
There are very active programmes: GLAAD (the Gay & Lesbian Alliance Against Defamation), for example, teamed up with Facebook to reduce the amount of hate speech and anti-gay bullying that goes on around the internet. GLAAD was able to work with Facebook to remove content deemed to be harassment or a hate incident, but what action, if any, was subsequently taken against the instigators?
On Twitter, for example, if you violate the ‘Hateful Conduct’ policy they may ask you to remove the violating content and serve a period of time in read-only mode before you can Tweet again. Subsequent violations lead to longer read-only periods and may eventually result in permanent account suspension. If an account is engaging primarily in abusive behaviour, or is deemed to have shared a violent threat, Twitter says it will permanently suspend the account upon initial review. There is no mention of reporting this to the authorities – do they, and will they? How robust are the processes for verifying that user details are genuine, so that incidents of hate crime can be followed up? Stronger verification could deter people from hiding behind fake profiles to make such comments, because they would be more likely to be held accountable. These underlying issues still need to be addressed, and improvements like these would likely create a safer online space.
What can you do to protect yourself?
Many incidents of cyberbullying in the LGBTQ+ community take place during school age, and whilst there are support groups and new legislation and regulations, a lot of funding has stopped. Many children are too afraid to let their teachers or parents know, and the effects can be devastating. So, whilst we need to improve the processes for addressing these occurrences, this can only be achieved if people are encouraged to speak out. They need safe forums in which to do so, people who will listen and take action, and support and guidance. If we start here, hopefully the next generation will grow up to be more accepting.
In the meantime, there are a few practical things you can do:
- Familiarise yourself with the Terms of Use before joining online platforms
- Change your privacy settings and make your account private
- Block and unfriend the abuser
- Come off the account where the abuse is taking place
- Gather evidence – save any harassing comments, messages or emails (take screenshots in case they are deleted).
- Report it immediately to the site or other relevant body, and if it involves a serious threat make sure you also report it to the police.
- Tell someone that you trust
- Be supported – talk to a support group, specialist police unit, your employer or school. Don’t be alone in this.
- Don’t respond
It is clearly difficult to hold bullies accountable for their actions. We live in a society that places a high value on free speech. Unfortunately, this leads many people to believe they can say whatever they want to whomever they want. We know that is not true. There are laws to protect people, but it isn’t clear exactly where the line is, and enforcement is somewhat hit and miss.
There needs to be a culture change. It starts at school, runs through to employers and businesses supporting diversity, and extends to platform hosts and websites, who have a duty to work with everyone else to protect people and address abuse immediately, as well as to share responsibility for accountability and reporting. Those suffering silently also need to be protected. So, even if a post is not directed at you but you deem it hateful, report it. Only then can the parties address this behaviour effectively.