‘Malicious actors’ use AI to make explicit content of minors, FBI, Baton Rouge authorities warn

As use of artificial intelligence, or AI, becomes widespread, the FBI is warning of an uptick in “malicious actors” using the technology to manipulate innocuous photos and videos to create sexualized content of victims, many of whom are underage.

In a public service announcement, the bureau says it “continues to receive reports from victims, including minor children and non-consenting adults” whose photos or videos have been altered into explicit content that is being “publicly circulated on social media or pornographic websites for the purpose of harassing victims or sextortion schemes.”

Since April, the FBI says it’s seen an increase in the number of such reports, particularly from people who say their likeness was taken from content posted to social media sites or captured during video chats.

The agency is now urging people to protect themselves by limiting the amount of information they post online. 

Though state and local law enforcement agencies say they have yet to see any cases in Louisiana, the rapid rise of AI leaves many worried about how the still-developing technology will inevitably complicate future investigations.

“We’re concerned with the ways that criminals will advance with [this] technology,” said Casey Rayborn Hicks, a spokeswoman with the East Baton Rouge Sheriff’s Office, which oversees local investigations into child sexual abuse materials.

Once a manipulated image is created and shared, victims can face significant challenges in preventing it from continuing to circulate, the FBI says, cautioning that many victims don’t realize their likeness has been copied, manipulated and circulated until someone else brings it to their attention.

Not just sexual content

David Maimon, a professor at Georgia State University who specializes in cyber crimes, said he frequently sees “all kinds of fake videos” popping up online with the intention of defrauding victims.

“Frankly, at this point, once your image is there, anyone can do what they want with it,” he said.

While Maimon said he hasn’t come across images that have been manipulated to show explicit content, what he has seen is a growing number of people who use the technology to pose as others online in an attempt to convince victims to send them money.

He showed multiple examples from his research files of predators who took images from other people and manipulated them with AI to talk to others in real time using video-chatting apps like FaceTime or Zoom.

In one video, posted to a platform where fraudsters share tips and tricks with one another, a phone recording of a laptop screen shows a man having what he believes is a FaceTime conversation with a woman standing in her kitchen as she persuades him to send her money for a plane ticket.

“I really do want to meet with you,” she says. “This is one of the reasons I’m coming over to Canada. When I get over to you, I will pay you instantly.”

As she moves her mouth, it becomes apparent that another man, visible in a separate box in the top left-hand corner of the screen, is the one doing the talking, the AI-generated woman mimicking every one of his movements.

While the scams can be convincingly realistic, victims do occasionally recognize that something is off but are unable to put their finger on exactly what is causing their suspicion, Maimon said.

“In some of the videos we’re seeing, the victim expresses concern … with respect to the identity of the person they’re talking to, and the offender will convince them that it’s poor connectivity on their side, or they come up with any kind of excuse,” he said.

He described the technology’s capabilities as “mind-boggling.”

“The generative AI tools will only get better, and so the quality of the videos, the quality of the conversations that these guys will be able to [have], will only get better,” Maimon said. “It will be more complicated for us fraud fighters to detect.”

Broader concerns

Cyber crime experts aren’t the only ones raising alarms. Late last month, industry leaders released a statement signed by more than 350 executives, researchers and engineers warning policymakers that “mitigating the risk of extinction from A.I. should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

Even as the technology is integrated into daily life, little regulation or oversight exists over how it can be used and by whom.

Without intervention, law enforcement agencies everywhere will likely soon feel the effects, EBRSO’s Hicks said.

“Obviously it’s something to think about in terms of making sure that the laws we have in place cover that advancing technology,” she said.

How to protect yourself

Maimon said the best way people can protect themselves from having their likeness stolen is to limit the number of images and videos of themselves and their children that they post online. People who do post should make sure their profiles are private and that their privacy settings make it difficult for strangers to access their information.

Maimon also urged people to be wary of whom they accept friend and follow requests from.

“We’re seeing a lot of incidents where fraudsters actually hack a Facebook account,” he said. “Whether you want it or not, offenders will be able to lift information about you, images of yourself, as well as your family and friends you tag in your profile.”

The FBI also recommends that people exercise caution when posting online and encourages guardians to monitor children’s online activity and discuss with them the risks associated with sharing personal content. Parents should run frequent searches of their children’s names to help identify the exposure and spread of personal information on the internet, the agency says.

It also warns that people should use caution when interacting with others they know online who appear to be acting outside their normal pattern of behavior, as hackers will sometimes manipulate social media accounts to gain trust from friends or contacts.

Anyone concerned that their images are being used for illegal purposes can also use reverse image search engines — which allow users to upload an image to search for other places where it appears online — to locate any photos or videos that have been circulated.
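For readers curious about the idea behind such tools, the sketch below (not from the article, and not how any particular search engine works) shows one common matching technique: a "perceptual hash" of a photo changes very little when the image is resized or recompressed, so copies of the same picture can be flagged by comparing hashes. It assumes the third-party Python packages Pillow and imagehash, and the file names and threshold are purely illustrative.

```python
# Minimal sketch of perceptual-hash matching, the kind of technique
# reverse image search tools can use to spot re-posted copies of a photo.
# Assumes the Pillow and imagehash packages are installed; file names below
# are hypothetical examples.
from pathlib import Path

from PIL import Image
import imagehash


def find_likely_copies(original_path: str, candidate_dir: str, max_distance: int = 8) -> list[str]:
    """Return candidate files whose perceptual hash is close to the original's."""
    original_hash = imagehash.phash(Image.open(original_path))
    matches = []
    for candidate in Path(candidate_dir).glob("*"):
        if candidate.suffix.lower() not in {".jpg", ".jpeg", ".png"}:
            continue
        distance = original_hash - imagehash.phash(Image.open(candidate))
        if distance <= max_distance:  # small Hamming distance => likely the same picture
            matches.append(str(candidate))
    return matches


if __name__ == "__main__":
    # Example: compare one personal photo against a folder of downloaded images.
    print(find_likely_copies("my_photo.jpg", "downloaded_images"))
```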

“Folks simply need to be more vigilant,” Maimon said.
