As the creator economy continues to reach new heights, the challenge of ensuring online safety is growing at an equally rapid pace. New products, features, and capabilities—enabled by technology—require new, better, and faster security solutions.
The security solutions that have worked in the past—those able to handle the more traditional, asynchronous methods of communication like email, comments, and social media posts—are not sufficient for live streaming video, nor will they support the networked world of the future. We need a much more sophisticated, complex approach that combines the power of humans and AI.
Live streaming is real life. Of course, this fun, entertaining, and creator-friendly medium takes place in real time. Unlike live television broadcasts, which have a two- to three-second delay to fix glitches, insert beeps, or interrupt the feed, live streaming offers no opportunity for takebacks.
For this reason, the Oasis Consortium has developed user security standards. As a founding member of this think tank, we partner with other trust and security experts from Metaverse builders, industry organizations, academia, nonprofits, government agencies, and advertisers to “accelerate the development of a better, more sustainable internet.”
I encourage my fellow CEOs and technology founders to investigate their own industry bodies so we can build expertise across industry boundaries.
AI + HUMAN OVERVIEW
Combining artificial intelligence (AI) with human intervention is the best solution available today, as AI doesn't yet grasp nuance the way humans do.
When a live stream begins, AI is critical to monitoring screen captures and direct messages, scrolling chat box data, and analyzing the live voice track, as well as capturing, reviewing, and saving the transcript in real time. However, when moderating content, context is key, and AI isn't always attuned to it. Human oversight is therefore crucial for making real-time decisions about whether something is toxic or harmful. Keeping our creators and their live streaming audiences safe means protecting them from harassment, hate speech, fraud, or worse.
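One common way to combine AI speed with human judgment is a tiered routing scheme: the model acts alone only when it is highly confident, and hands ambiguous cases to a human moderator. The sketch below illustrates the idea; the threshold values and function name are hypothetical, not a description of any production system.

```python
def route_message(text: str, toxicity_score: float) -> str:
    """Route a chat message based on an AI toxicity score in [0.0, 1.0].

    Thresholds here are illustrative; real systems tune them per community
    and per harm category (harassment, hate speech, fraud, etc.).
    """
    if toxicity_score >= 0.95:
        return "auto_remove"    # high confidence: act immediately
    if toxicity_score >= 0.60:
        return "human_review"   # ambiguous: context matters, escalate
    return "allow"              # low risk: let it through

print(route_message("great stream!", 0.02))    # allow
print(route_message("borderline joke", 0.72))  # human_review
```

The key design choice is that the gray zone between the two thresholds is never auto-actioned: those are exactly the context-dependent cases the paragraph above says AI handles poorly.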
Algorithmic abuse detection uses predictive models that are constantly changing, usually based on so-called behavioral profiles. Data science methods categorize patterns of what constitutes “normal” (read: safe) online behavior. Anything that deviates from this “norm” is then identified as a potentially malicious entity.
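A minimal sketch of this deviation-from-baseline idea is a z-score check over behavioral features. The features, values, and threshold below are invented for illustration; real behavioral profiles use far richer signals and learned models.

```python
from statistics import mean, stdev

# Hypothetical per-user behavioral features observed during a stream:
# messages sent per minute, and the fraction of messages repeated verbatim.
baseline = [
    {"msgs_per_min": 2.1, "repeat_ratio": 0.05},
    {"msgs_per_min": 3.4, "repeat_ratio": 0.02},
    {"msgs_per_min": 1.8, "repeat_ratio": 0.04},
    {"msgs_per_min": 2.9, "repeat_ratio": 0.03},
]

def is_anomalous(user: dict, baseline: list, threshold: float = 3.0) -> bool:
    """Flag a user whose behavior deviates sharply from the 'normal' profile."""
    for feature in ("msgs_per_min", "repeat_ratio"):
        values = [u[feature] for u in baseline]
        mu, sigma = mean(values), stdev(values)
        if sigma == 0:
            continue
        z = abs(user[feature] - mu) / sigma
        if z > threshold:  # more than `threshold` standard deviations out
            return True
    return False

# A spam-like account posting 40 messages a minute, mostly duplicates,
# deviates far from the norm and would be flagged for review.
suspect = {"msgs_per_min": 40.0, "repeat_ratio": 0.9}
print(is_anomalous(suspect, baseline))  # True
```

Note that a flag here marks a *potentially* malicious entity, as the paragraph above says; the decision about what to do with it still belongs to the human-in-the-loop process.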
Because companies share threat information across industries, we can locate trolls and bots and prevent the damage they cause. If a troll escalates to hate speech, harassment, or other antisocial behavior, the AI reports it immediately and takes action in collaboration with human moderators. As an industry, we need to resolve security issues so quickly that the on-screen experience remains instant and seamless.
Technology oversight alone will not keep people safe online. Tech companies need to develop a teaching and coaching mentality for users as well. Users are not passive consumers; they are active participants with a role to play. It's up to us to give them the tools to support their personal empowerment, to know where to find resources, and to take action against both overt harassment and covert problematic behavior.
The conventional advice on online harassment may once have been, "Oh, just ignore it." But that is no longer possible as our lives become increasingly digital. As a father of three young children, I wish we could protect everyone through technology-based interventions alone. But we can't. We need people equipped with tools of engagement: the ability to protect yourself by quickly clicking "Block" or "Mute." Unfortunately, these do not always solve the problem. Sometimes victims of online hate choose to move on, but they need support to do so effectively.
After experiencing online hate firsthand, some individuals decide to turn the tables and start personal advocacy organizations and crisis hotlines. Some of the best known include the Games and Online Harassment Hotline, founded by games critic Anita Sarkeesian and game designer Christopher Vu Gandin Le, and Crash Override, founded by award-winning game developers Zoë Quinn and Alex Lifschitz.
Over the past 16 years, grassroots network HeartMob has trained more than 50,000 people in harassment interventions and published more than 15,000 personal testimonies to help others learn what to do when it happens to them. On a more formal basis, PEN International, the global writers’ advocacy network since 1921, has developed guidelines for practicing “counterspeech” to both expose and engage persistent trolls.
I'm often asked, "What is the ROI of security for us as a technology leader?" The question is an odd one. Security is like oxygen, like water: a social community cannot thrive without strong facilitation and practices, and we don't ask about the ROI of oxygen or water. Without trust and security in our live streaming spaces, we cannot build sustainable businesses. The work of protecting our communities will never end.
Geoff Cook is a serial entrepreneur, CEO of The Meet Group and Co-CEO of ParshipMeet Group.