In a new trend, people are using generative AI tools like ChatGPT to reimagine themselves as dolls or action figures. And it's proven popular, as celebrities, politicians, institutions and major brands like the UK's Royal Mail have jumped in on the fun.
It's also called the Barbie trend, or the starter pack trend, and is popular with brands
Natalie Stechyson · CBC News
· Posted: Apr 15, 2025 4:00 AM EDT
If you've scrolled social media much lately, you've probably noticed a lot of ... dolls.
There are dolls all over X and Facebook feeds. Instagram? Dolls. TikTok? You guessed it: dolls, plus tutorials on how to make dolls. There are even dolls all over LinkedIn, arguably the most serious and least fun member of the gang.
You can call it the Barbie AI treatment or the Barbie box trend. Or if Barbie isn't your thing, you can go with AI action figures, the action figure starter pack, or the ChatGPT action figures trend. But however you hashtag it, the dolls are seemingly everywhere.
And while they share some similarities (boxes and packaging that mimic Mattel's Barbie, personality-driven accessories, a plastic-looking smile), they're each as different as the people posting them, except for one crucial, common feature: They're not real.
The Congresswoman MTG Starter Kit ✨ <br>If I was a doll! <br>I love all my accessories, including my Bible and gavel for DOGE Committee chair! <a href="https://t.co/2fEWYH1Ubt">pic.twitter.com/2fEWYH1Ubt</a>
—@mtgreenee

In the new trend, people are using generative AI tools like ChatGPT to reimagine themselves as dolls or action figures, complete with accessories. It's proven quite popular, and not just with influencers.
Celebrities, politicians and major brands have all jumped in. Journalists reporting on the trend have made versions of themselves holding microphones and cameras (though this writer won't make you endure that). And users have made versions of pretty much any notable figure you can think of, from billionaire Elon Musk to actor and singer Ariana Grande.
Good morning from me, <a href="https://twitter.com/BenMBoulos?ref_src=twsrc%5Etfw">@BenMBoulos</a> & our AI dolls on <a href="https://twitter.com/BBCBreakfast?ref_src=twsrc%5Etfw">@BBCBreakfast</a> (Do I really look like that?!)<br>We'll be looking at concerns about the trend, plus all your news, weather & sport! <a href="https://t.co/mP469K7IEW">pic.twitter.com/mP469K7IEW</a>
—@luxmy_g

According to tech media website The Verge, the trend actually started on professional social networking site LinkedIn, where it was popular with marketers looking for engagement. As a result, many of the dolls you'll see out there seek to promote a business or hustle. (Think "social media marketer doll" or "SEO manager doll.")
But it's since leaked over to other platforms, where everyone, it seems, is having a bit of fun finding out if life in plastic really is fantastic. That said, it isn't necessarily harmless fun, according to several AI experts who spoke to CBC News.
Some of our community members have hopped on the AI action figure trend. Meet Ingrid Thomson and Jan-Hendrik du Toit, dedicated Wikimedians helping grow the free knowledge movement, one edit and citation at a time.<a href="https://twitter.com/hashtag/wikimediaZA?src=hash&ref_src=twsrc%5Etfw">#wikimediaZA</a> <a href="https://twitter.com/hashtag/WikimediaSouthAfrica?src=hash&ref_src=twsrc%5Etfw">#WikimediaSouthAfrica</a> <a href="https://t.co/eYCr4tnBD2">pic.twitter.com/eYCr4tnBD2</a>
—@Wikimedia_ZA

"It's still very much the Wild West out there when it comes to generative AI," said Anatoliy Gruzd, a professor and director of research for the Social Media Lab at Toronto Metropolitan University.
"Most policy and legal frameworks haven't fully caught up with the innovation, leaving it up to AI companies to decide how they'll use the personal data you provide."
Privacy concerns
The popularity of the doll-generating trend isn't surprising at all from a sociological standpoint, says Matthew Guzdial, an assistant computing science professor at the University of Alberta.
"This is the kind of internet trend we've had since we've had social media. Maybe it used to be things like a forwarded email or a quiz where you'd share the results," he told CBC News.
But as with any AI trend, there are some concerns over its data use.
Generative AI in general presents significant data privacy challenges. As the Stanford University Institute for Human-Centered Artificial Intelligence (Stanford HAI) notes, data privacy issues and the internet aren't new, but AI is so "data-hungry" that it ramps up the scale of the risk.
"If you're providing an online system with very personal data about you, like your face or your job or your favourite colour, you ought to do so with the understanding that those data aren't just useful to get the immediate result, like a doll," said Wendy Wong, a political science professor at the University of British Columbia who studies AI and human rights.
That data will be fed back into the system to help it generate future answers, Wong explained.
In addition, there's concern that "bad actors" can use data scraped online to target people, Stanford HAI notes. In March, for instance, Canada's Competition Bureau warned of the rise in AI-related fraud.
About two-thirds of Canadians have tried using generative AI tools at least once, according to recent research by TMU's Social Media Lab. But about half of the 1,500 people the researchers sampled had little to no understanding of how these companies collect or store personal data, the study said.
Gruzd, with that lab, suggests caution when using these new apps. But if you do decide to experiment, he suggests looking in the settings for an option to opt out of having your data used for training or other third-party purposes.
"If no such option is available, you might want to reconsider using the app; otherwise, don't be surprised if your likeness appears in unexpected contexts, such as online ads."
The environmental and cultural impact of AI
Then there's the environmental impact. CBC's Quirks and Quarks has previously reported on how AI systems are an energy-intensive technology with the potential to consume as much energy as an entire country.
A study out of Cornell University claims that training OpenAI's GPT-3 language model in Microsoft's U.S. data centres can directly evaporate 700,000 litres of clean freshwater, for instance. Goldman Sachs has estimated that AI will drive a 160 per cent increase in data centre power demand.
WATCH | AI is power-hungry: Breaking down the climate impact of AI
The average ChatGPT query takes about 10 times more power than a Google search, according to some estimates.
Even OpenAI CEO Sam Altman has expressed concern about the popularity of generating images, writing on X last month that the company had to temporarily introduce some limits while it worked to make the feature more efficient because its graphics processing units were "melting."
it's super fun seeing people love images in chatgpt.<br><br>but our GPUs are melting.<br><br>we are going to temporarily introduce some rate limits while we work on making it more efficient. hopefully won't be long!<br><br>chatgpt free tier will get 3 generations per day soon.
—@sama

Meanwhile, as the AI-generated dolls take over our social media feeds, so too is a version being circulated by artists concerned about the devaluation of their work, using the hashtag #StarterPackNoAI.
Concerns had previously been raised about the last AI trend, in which users generated images of themselves in the style of the Tokyo animation studio Studio Ghibli, launching a debate over whether it was stealing the work of human artists.
Despite the concerns, however, Guzdial says these kinds of trends are positive, at least for the AI companies trying to grow their user bases. These models are extremely expensive to train and keep running, he said, but if enough people use them and become reliant on them, the companies can increase their subscription prices.
"This is why these sorts of trends are so good for these companies that are deeply in the red."
ABOUT THE AUTHOR
Natalie Stechyson has been a writer and editor at CBC News since 2021. She covers stories on social trends, families, gender and human interest, as well as general news. She's worked as a journalist since 2009, with stints at the Globe and Mail and Postmedia News, among others. Before joining CBC News, she was the parents editor at HuffPost Canada, where she won a silver Canadian Online Publishing Award for her work on pregnancy loss. You can reach her at [email protected].