Policymakers must take preventative action against the recurring dangers of child predators on social media, Sheawin Leong writes.
TikTok has taken the world by storm. The Chinese app, which launched globally in 2017, became the most-downloaded app on the Apple App Store in 2018, and has remained in the global top four ever since.
The allure of TikTok is evident. While the app was initially popularised by dancing and lip-synching covers, it has evolved to include a range of other entertainment, from millennial-relatable memes to comedy videos and more. Its strict video-only interface puts a sixty-second limit on each clip, allowing users to watch a plethora of clips in short periods – perfect for its youth demographic of mostly teenagers and young adults with relatively short attention spans.
Initially a fun app for self-expression and entertainment, TikTok has become a platform for predators to prey on young children, with videos of children dancing innocently to song covers attracting sexually suggestive comments or messages from predators online.
In June 2018, the Ministry of Communications and Information Technology in Indonesia temporarily blocked the app because of “pornography, inappropriate content and blasphemy”. Earlier this year, the Madras High Court in India also banned the app, alleging that it was “encouraging pornography” and luring sexual predators.
In both Singapore and Australia, TikTok is the second-most downloaded app, surpassing social media titans like Instagram and Facebook. In Malaysia, it remains the 12th most downloaded, and in Indonesia, the 18th.
Despite the app’s prolific use in the Asia-Pacific region and its predator risks, some jurisdictions, such as Hong Kong, lack a specific government department to address children’s online safety and privacy protection.
Currently, while several countries in the Asia-Pacific have launched regional initiatives aimed at preventing child abuse on the Internet, these efforts target more explicit forms of child exploitation, such as the online sale or dissemination of child pornography.
These initiatives neglect the less obvious forms of sexual harassment on social media platforms that present themselves in lewd comments, suggestive messages, or inappropriate photos, leaving young children vulnerable to sexual predators online.
The dangers of these apps are not new. From live-streaming apps like Bigo, where users can send virtual stickers, coins and messages, to anonymous messenger apps like Tellonym and Kik Messenger, these apps all share a history of being used by sexual predators to target young users.
Just as the behaviours of predators on social media apps have remained consistent in the last few years, the methods to deal with this troubling behaviour have remained disappointingly unchanged. Given the lack of corporate social responsibility towards protecting vulnerable users on these social media platforms, policymakers need to take action to regulate social media apps to protect consumers, especially children.
In Australia, the creation of the eSafety Commissioner in 2015 has driven greater efforts to protect young people online by coordinating governments, non-governmental organisations and social media users to identify, report and remove illegal or unsafe content, including images and text.
If governments can introduce laws to tackle the spread of fake news and enact social media laws to remove “abhorrent violent material” online, surely a similar set of rules and regulations can be put in place to ensure consistent monitoring, removal, and regulation of inappropriate content on these social media platforms?
Children need better protection on social media, and this ongoing problem deserves not just more attention but concrete action if we want to call time on predators online.