The Daily Decrypt
Deceptive Deepfake Cyber Scheme: Arup's Wake-Up Call Against North Korean IT Workers

In today’s episode, the UK engineering firm Arup was scammed of £20m through a deepfake incident in which an employee fell victim to AI-generated video calls. The incident sheds light on the increasing sophistication of cyber attackers and the need for better awareness of deepfake technology. Meanwhile, the Jumio 2024 Online Identity Study reveals consumer concerns over deepfakes, with a call for more governmental regulation of AI to combat cybercrime. And the US Justice Department exposed a scheme enabling North Korean IT workers to bypass sanctions, highlighting the risks associated with remote work and the importance of identifying potential threats.

Original URLs:

  1. https://www.theguardian.com/technology/article/2024/may/17/uk-engineering-arup-deepfake-scam-hong-kong-ai-video
  2. https://www.helpnetsecurity.com/2024/05/20/consumers-online-identity-fraud/
  3. https://www.helpnetsecurity.com/2024/05/17/north-korean-it-workers/

Arup, Engineering, Deepfake, Cyberattacks, deepfakes, generative AI, digital security, identity fraud

Search Phrases:

  1. Arup deepfake cyber-attacks
  2. How to protect companies from deepfake scams
  3. Consumer awareness about deepfakes and generative AI
  4. Collaborating to enhance digital security measures
  5. Preventing identity fraud with advanced technology
  6. North Korean IT workers evasion scheme
  7. Sanctions evasion by North Korean IT workers
  8. Identifying and protecting organizations from North Korean IT workers
  9. Deceptive employment schemes by North Korean workers
  10. US companies and North Korean IT worker sanctions

May 21

The US Justice Department has uncovered a scheme involving North Korean IT workers evading sanctions by working remotely for US companies under assumed identities, which has generated millions of dollars for the DPRK.

What signs can help companies identify North Korean IT workers posing as US freelancers?

Consumers consistently overestimate their ability to spot deepfake videos, with 60% believing they could detect one, despite rising concerns over the risks posed by generative AI.

How can businesses and consumers collaborate to enhance digital security measures and prevent identity fraud in the face of increasingly capable deepfake technology?

And in that same realm, Arup, a leading UK engineering firm, fell prey to a £20m deepfake scam in which AI-generated video calls duped a Hong Kong employee into transferring vast sums to criminals.

How can businesses protect themselves from sophisticated schemes involving deepfake videos?

You’re listening to The Daily Decrypt.

The US Justice Department has uncovered a scheme where individuals from North Korea are posing as US freelancers and getting jobs at US companies under false identities.

These individuals utilize US payment platforms, online job sites, and proxy computers within the US to deceive United States employers. They particularly target Fortune 500 companies, like major television networks and Silicon Valley tech firms, and they’ve even attempted to infiltrate US government agencies.

These individuals have been aided by a few different US citizens, including one who would create accounts on US job sites and then sell them to North Koreans, and another US woman who operated a quote “laptop farm,” where she essentially kept a bunch of laptops and let adversaries remote in so it looked like they were working from inside the United States.

This scheme ran from 2020 all the way to 2023 and amassed over $6.8 million for North Korea.

But officially, both of the individuals responsible for facilitating these fake employments have been apprehended and are awaiting extradition to the United States for trial.

So, obviously this is going to be pretty tough to spot.

Because first of all, resumes for these fraudulent applicants are going to look really good, so they’ll probably get the interview based on their resume and cover letter.

But there are a few tactics you can use to help identify these fraudulent applicants, and the FBI released a multi-page document (dozens of pages) with these recommendations.

You can look for inconsistencies in their profiles: name spelling, nationality, work location, contact information, education, et cetera.

Look for typos. Look at portfolio websites, social media profiles, and developer profiles, like the ones on GitHub.

Also note an inability to conduct the interview during regular business hours, which could mean they’re on the other side of the world. That alone isn’t disqualifying, since US companies are expanding across the world, but it’s a signal worth weighing.
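The profile-consistency check above can even be partially automated. Here’s a minimal sketch of the idea: collect the same fields (name, location, contact info) from each source an applicant provides and flag any field whose values disagree. The field names, sources, and normalization rules here are hypothetical illustrations, not part of the FBI’s actual guidance.

```python
def find_inconsistencies(sources: dict[str, dict[str, str]]) -> list[str]:
    """Compare applicant-supplied fields across sources (resume,
    job-site profile, GitHub, ...) and report fields whose values
    disagree after light normalization."""
    flags = []
    # Every field name seen in any source.
    fields = {f for profile in sources.values() for f in profile}
    for field in sorted(fields):
        # Normalize case and whitespace so trivial differences don't flag.
        values = {
            src: profile[field].strip().lower()
            for src, profile in sources.items()
            if field in profile
        }
        if len(set(values.values())) > 1:
            flags.append(f"{field}: {values}")
    return flags

# Hypothetical applicant: the claimed location on the resume and job
# site disagrees with the GitHub profile, so "location" gets flagged.
report = find_inconsistencies({
    "resume": {"name": "Alex Smith", "location": "Austin, TX"},
    "job_site": {"name": "Alex Smith", "location": "Austin, TX"},
    "github": {"name": "alex smith", "location": "Remote"},
})
```

A real screening pipeline would need fuzzier matching (nicknames, address formats), but even this naive exact-match version surfaces the kind of cross-profile contradictions the FBI document describes.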

Keep an eye on your new applicants, and on your new hires as well. You don’t necessarily have to spot this before hiring them, but hopefully there’s a period where they don’t have full access to company resources, perhaps while they’re onboarding.

And be extra stringent: work with their onboarding manager to make sure they’re in all the sessions, their camera is on, and they are a real person. Because these individuals were working at hundreds of companies, they didn’t actually have the time to do any tasks or real work. So just make sure your standards are high, and you should be able to spot them. If you’d like to read the entire document from the FBI, it’ll be linked in the show notes below.

All right. Our third story is going to discuss a recent scam that was successfully executed using deepfake AI videos.

But first, the Jumio 2024 Online Identity Study,

which encompassed over 8,000 adult consumers across the UK, US, Singapore, and Mexico, highlights that consumers tend to drastically overestimate their ability to detect deepfakes, with 60% believing they could identify them, up from 52% in 2023.

And that’s interesting, because deepfake technology has only improved. We think we’re getting better at identifying deepfakes even as the technology advances, when in fact we’re getting worse, because major companies continue to be successfully scammed by these deepfake AI videos.

And if you’re listening and you happen to be one of those people who thinks they can identify AI content, that false confidence is a dangerous mindset. Sure, you can identify some of that content, but always operate under the assumption that you can’t, the same as with identifying malicious URLs.

Just don’t trust your ability and you’ll be safe. As soon as you start to trust that you’ll identify things, you’re going to make a mistake and miss something pretty obvious.

Now, I agree that we as individuals shouldn’t be responsible for identifying AI content, and there is a continued push for more regulation and for labeling of AI content across the internet, or any sort of identifier or tool that can accurately flag AI content. But as of right now, it’s pretty much the Wild West. We can cross our fingers and hope that something’s coming down the pipeline, but for now we’ve got to be extra vigilant.

And if something smells weird, or looks kind of weird, or the cadence of someone’s voice is off, or their hands are moving in very predictable ways, go ahead and assume that it’s an AI

and proceed as such. So what can you do if you end up on a call with a fake version of your boss, a fake IT manager for your company, or a fake parent? Start asking questions that only they would know the answer to.

If they’re on video, ask them to move their camera around, ask them to stand up, sit down. Don’t be hesitant to tell them why you’re asking them to do this, because this is a serious thing that any CEO or any boss should be tracking.

But yeah, you might have to get creative with your methods of verifying that they’re actually a human.

And on the same note, this next story is about someone who thought they could identify deepfake AI videos or callers, and instead was convinced to transfer £20m to a fraudulent actor. Arup, a prominent British engineering company, fell victim to a deepfake scam in which an employee was deceived into transferring the money to criminals through an AI-generated video call in Hong Kong.

The Hong Kong police are currently investigating this incident, and the case is classified as “obtaining property by deception.”

So there’s not much else to go into about this story.

But these types of calls can have dire consequences. They tend to be either Zoom calls, or maybe your CEO gives you a call on the phone and their voice is duplicated, or they hop on a Zoom video call and it’s actually an AI persona you’re talking to, with the likeness of your CEO. I personally have never had face-to-face contact with my CEO, so I wouldn’t necessarily know if some facial feature was wrong,

or if their eyes were too open or closed, or their hairline was different. But you’ve got to be looking at the mannerisms and the smoothness of the speech. Watch their mouth and see if it looks weird, and that can be tough, because bandwidth issues, especially on calls crossing oceans, can help mask those kinds of artifacts. Like I said, you’re going to have to get creative in verifying. It’s going to be hard to ask your CEO to move their camera around, but they will probably be very impressed if you say, “Hey, AI video calls are on the rise. Can you help me verify that you’re my actual CEO?”

This has been the Daily Decrypt. If you found your key to unlocking the digital domain, show your support with a rating on Spotify or Apple Podcasts. It truly helps us stand at the frontier of cyber news. Don’t forget to connect on Instagram or catch our episodes on YouTube. Until next time, keep your data safe and your curiosity alive.
