- Is there cause to worry about the increase in "post a picture of yourself" tweets?
- In an era of rapid artificial intelligence (AI) innovation, some think that these tweets could be a well-planned strategy to get training data for AI.
- We take a look at previous social media trends, their links to AI, and whether there is cause for alarm.
Post a picture of yourself wearing red. Post a picture of yourself now and one from when you were 18. Is it a coincidence that social media trends like this are increasing when artificial intelligence is causing a stir globally?
Twitter user CoriAgain2 is taking no chances, and I wonder if her apprehension is justified.
"The uptick in Twitter promoting tweets asking to see images of us, along with personal information as AI continues to grow is not lost on me. I’m personally going to refrain from participating in these image-farming tactics/threads."
Is Twitter really promoting tweets asking to see our images?
A quick search for "quote tweet with a picture of" brings up many tweets ranging from those requesting a picture of you wearing glasses to one of you aged 18.
Yes, there have been a lot of those tweets lately, enough to call "quote tweet with a picture of" a trend. But is Twitter promoting it because it wants to encourage users to post more pictures of themselves?
I won't put anything past Elon Musk's Twitter, but promoting tweets with people's images to collect data on our looks is most likely not the case.
Tweets with photos generally get more engagement.
After studying four million tweets, Stone Temple Consulting discovered that tweets with images got more than double the likes and retweets of those without images.
Buffer's study also concludes that tweets with images get 150% more retweets.
Images and videos give tweets more context, and people are more likely to pay attention to them. A popular (if unverified) statistic claims that visuals are processed 60,000 times faster than text; either way, it makes sense for social media platforms to push content with images.
There's a reason to worry
Though the popularity of "quote tweet with a picture of yourself" tweets is probably the result of people's preference for images, the pictures you post on social media might still become training material for AI.
It's okay to doubt my claim, so let's hear it straight from the horse's mouth.
"Pictures on social media are commonly used to train artificial intelligence (AI) systems. Social media platforms like Instagram, Facebook, and Twitter are rich sources of user-generated visual content, and these images can be valuable for training AI models in various applications." — ChatGPT
Specifically, trends that encourage people to post themselves at different ages could help AI understand humans' age progression.
While Facebook (now Meta) denied that it had a hand in helping the 10-Year Challenge of 2019 go viral, experts like NYU Professor Amy Webb believe it was a "perfect storm for machine learning."
Although countless images are already uploaded to Facebook daily, Kate O'Neill, author of Tech Humanist: How You Can Make Technology Better for Business and Better for Humans, believes the 10-Year Challenge provided a facial recognition algorithm with "a clean, simple, helpfully labelled set of then-and-now photos."
The same could be said for the Twitter trend. A tweet like "post a picture of yourself at 31 years old" helps an algorithm sort through pictures easily: the images arrive pre-labelled with an age, so a model can learn what an 18-year-old might look like at 31.
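To see why such trends yield "helpfully labelled" data, here is a minimal sketch, in Python, of how an age label could be pulled from the trend tweet's own text. The tweets and the function are hypothetical illustrations, not any platform's actual pipeline; the point is only that each attached photo would inherit the age mentioned in the text.

```python
import re

def extract_age_label(tweet_text):
    """Pull an age out of a 'post a picture of yourself at N' style tweet.

    Returns the age as an int, or None if no age is mentioned.
    Toy illustration only, not a real data-collection pipeline.
    """
    match = re.search(r"\b(?:at|aged?)\s+(\d{1,3})\b", tweet_text, re.IGNORECASE)
    return int(match.group(1)) if match else None

# Hypothetical trend tweets: any photo attached to them would
# arrive with a ready-made age label.
tweets = [
    "quote tweet with a picture of yourself at 31 years old",
    "post a picture of yourself aged 18",
    "post a picture of yourself wearing red",  # no age, so no label
]

print([extract_age_label(t) for t in tweets])  # [31, 18, None]
```

A dataset built this way needs no human annotators, which is exactly what makes such trends attractive as training material.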
It's not just a harmless trend
Meta's defence in 2019 was "This is a user-generated meme that went viral on its own. Facebook did not start this trend." But O'Neill opines that while the Facebook meme might have gone viral on its own, there are social media games and memes designed to extract data.
While claims that the 10-Year Challenge trained AI remain speculation, another social media trend demonstrably helped teach cameras how depth works.
The Mannequin Challenge went viral in November 2016. CNN believes it was started by students at Edward H. White High School in Jacksonville, Florida, US.
Within a few days, the #MannequinChallenge had been used 60,000 times by people from different parts of the world.
The challenge required a group of people to strike different interesting poses and then freeze till the end of the video.
Interestingly, the more people in the video with interesting poses, the better the video performed.
However, a group of research scientists at Google realised that the Mannequin Challenge was a good way to help cameras learn depth in different scenarios.
Here's an uncomplicated way to think about the concept.
A monocular camera, the kind typically used in self-driving cars, captures a scene through a single lens, so how far or close an object is has to be inferred rather than measured directly. That inference gets confused when humans move around in the shot.
In 2019, the scientists created a method to predict depth in scenes where both the camera and the people are moving freely. Before that, depth prediction worked reliably mainly for static objects, and results for moving people were heavy assumptions at best.
Because everyone in a Mannequin Challenge video holds still while the camera moves around them, each clip is effectively many views of a frozen scene. Using thousands of such videos from the Internet, the researchers applied a technique called multi-view stereo reconstruction to compute accurate depth maps, which became training data for their model.
When they tested it on real videos of people doing different things, it worked better than other methods and could create cool 3D effects.
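The geometry behind multi-view stereo can be sketched with the classic two-view case: if the same still point is seen from two camera positions a known distance apart, its depth follows from how far it shifts between the two images (its disparity). A minimal Python illustration, with made-up camera numbers that are not from the Google paper:

```python
def depth_from_disparity(focal_length_px, baseline_m, disparity_px):
    """Two-view stereo: depth = focal_length * baseline / disparity.

    focal_length_px: camera focal length, in pixels
    baseline_m:      distance between the two camera positions, in metres
    disparity_px:    how many pixels the point shifts between the two views

    A frozen, mannequin-still subject is essential here: if the person
    moved between the two views, the disparity would mix camera motion
    with subject motion and the formula would break down.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_length_px * baseline_m / disparity_px

# Illustrative numbers: a 700-pixel focal length, views 0.5 m apart,
# and a point that shifts 35 pixels between the two frames.
print(depth_from_disparity(700, 0.5, 35))  # 10.0 metres
```

Full multi-view stereo extends this idea across many camera positions at once, which is what the frozen crowds in Mannequin Challenge videos made possible.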
Is it bad that your picture could be used to train AI?
A picture of your face being used to train AI isn't necessarily a bad thing. In fact, O'Neill believes it can be a good thing.
It can improve facial recognition's ability to predict age progression: if a person has been missing for years, AI could estimate how they have aged and tell searchers what to look for.
There might be some bad sides, but that's a matter of perspective.
Advertisers using camera- or sensor-based systems, for example, could target you based on your age, inferring a few things about you from your face alone and pushing ads tailored to those features.
Privacy also presents an issue that can't be overlooked. Impersonation is possible in a world where data breaches are becoming more frequent. Now that there are AI models trained to sound like you, there very well could be models trained to look like you.
AI will continue its advance, and the regulations around it will determine how concerned we should be. The question, however, is whether the regulators have our best interests at heart.