Life&Style Writer Shania Leurs voices concerns for how digital innovation is harming women and girls

Written by Shania Leurs

Content warning: sexual harassment

Web 3.0 and AI were supposed to be the great democratisers. Everyone would have access to incredible technology, new opportunities would open up, and we would all benefit equally. Except that is not what is happening. Instead, there has been an explosion of digital abuse, surveillance, and exploitation that tech companies either can’t or won’t address. When tech moves fast, it is, unsurprisingly, women and girls who are left most vulnerable.

Generative AI has created new ways to harass women online. Image generation tools can create disturbingly realistic images from text descriptions, and they have become weapons for creating non-consensual intimate images; 99 percent of these sexual deepfakes are of women. The technology allows abusers to generate fake explicit content of real people using just a few photos scraped from social media. In fact, OpenAI has stated that “Sora can generate videos up to a minute long while maintaining visual quality and adherence to the user’s prompt.” Yes, really. Your Instagram profile pictures could be enough to create a video up to a minute long.

There has been an explosion of digital abuse

The scale of this problem is genuinely shocking. Secondary school age girls have discovered AI-generated nude images of themselves circulating among classmates. Women in professional settings find themselves targeted with deepfake pornography designed to humiliate and silence them. Public figures, journalists, and activists face campaigns of AI-generated sexual imagery intended to discredit them. The psychological impact on victims is severe. Many report feeling violated in ways similar to actual sexual assault. For teenage girls, the harm is compounded by school social dynamics where such images can destroy reputations and relationships overnight.

Whilst major AI companies claim to have safeguards in place, the reality is messier. These safety features can be bypassed with clever wording that tricks the AI. And even when platforms remove violating content, the images have often already spread beyond their control.

Smart glasses can also be used for covert surveillance

Unfortunately, that is not the only technological advancement that is causing harm. The latest generation of smart glasses is marketed as convenient technology for capturing everyday moments, but it can also be used in darker ways, such as covert surveillance. Unlike phones, which make it obvious when someone is filming, smart glasses allow recording without clear visual cues. You literally cannot tell if someone is filming you or just wearing the latest style in glasses.

For women, this technology creates new vulnerabilities in spaces where they should feel safe. Fitting rooms, gym changing areas, public toilets, even just walking down the street. There have already been documented cases of individuals using smart glasses to record women without consent, and the technology is only becoming more sophisticated and less detectable.

The problem goes beyond individual creeps with cameras. This technology ultimately shifts the power dynamic in public spaces. Women already navigate the world with a heightened awareness of their physical safety. Now they must also consider whether they are being recorded at any given moment. When tech companies release these products without adequate safeguards or even clear visual indicators that recording is happening, they are prioritising convenience and profit over women’s safety.

Technology ultimately shifts the power dynamic in public spaces

It is not only new technology where we are seeing this. Technology we have had for years is getting smarter by studying what content we engage with and how we interact with it. While those algorithms might make your explore page more convenient, they are far from harmless. We are seeing them increasingly push concerning content at young women and girls. Not that long ago, individuals like Andrew Tate were taking social media by storm, and now research is suggesting that our society is regressing, with a growing belief that feminism has done more harm than good. In fact, Gen Z men are now more likely to believe that than baby boomers.

The companies know this is happening. They have internal research proving their algorithms damage young users’ mental health, especially girls. But fixing it would mean less engagement, which means less money. So they do not. These are not separate issues. They are all symptoms of the same problem: a tech industry that does not think about how its products will be used to harm women until after the damage is done.

A tech industry that does not think about how its products will be used to harm women

Every single time, it is the same pattern. New technology launches. Women highlight how this could be dangerous and get ignored. Then surprise, it turns out to be really dangerous for women. Who could have seen that coming? (Women. Women saw it coming.)

Tech companies need to stop treating women’s safety as something they will ‘get around to eventually’. We need diverse teams building technology who can spot these problems before launch, not after millions of people are already affected. We need actual regulation with real consequences when products enable abuse. Most importantly, we need to stop accepting excuses when the price to pay is women’s privacy, safety, and dignity.

We need to stop accepting excuses when the price to pay is women’s privacy

Ultimately, technology will keep advancing. The question is, will the people behind the technology finally start considering that women exist and might not want to be under surveillance, harassed, and violated by it? Right now, the answer is no.
