The dystopian future is here with OpenAI’s Sora, creating videos indistinguishable from reality, and Google’s Magika, fortifying cybersecurity with unmatched speed and accuracy. Plus, explore how tech titans unite against AI deepfakes to protect democracy. A journey through AI’s latest marvels and the collective stride towards ethical technology.
Thanks to Jered Jones for providing the music for this episode. https://www.jeredjones.com/
Logo Design by https://www.zackgraber.com/
Transcription:
[00:00:00] Welcome back to the daily decrypt.
Today’s February 19th and. Today, we’re talking about. Artificial intelligence.
Open AI just announced its new holy shit feature.
That allows for video creation with AI generated images.
Now that doesn’t sound good.
But luckily Google also just announced a new product.
That revolutionizes, cyber security.
With an AI powered tool.
That’s significantly enhances.
The precision and speed of file identification.
Which will greatly help. Protect [00:01:00] against digital threats.
And major companies. Alongside Google such as open AI, Facebook Metta.
And others.
Have announced. The organization called.
Global tech accord.
Which has been formed.
To defend democracy against AI deep fakes.
Over the weekend. Open AI announced its newest product, which they’ve named Sora. And Sora.
Is able to create.
Videos using AI images. That are completely fake.
Sora is a text to image AI model.
That has the whole tech world buzzing, not only with excitement, but with a little touch of existential dread.
Sora we’ll allow you to type in a sentence or a prompt just as you would with CPT. But instead of returning texts or an image, it will return a video of anything you’d like.
I swear the only thing faster than the internet meme cycle.
Is the pace of innovation in artificial intelligence.
The tech [00:02:00] behind Sora is a concoction of.
AI models that use something called stable diffusion or a diffusion model.
Which essentially just creates noise and then scrapes it away in a series of layers. So beyond that.
I can’t really help you.
But if you’ve looked at mid journey, as it’s creating these images, you can kind of know what I mean. It starts out as a completely noisy image and then slowly transforms into what you requested.
It’s introducing a new technology called Temporal consistency.
Which ensures that the objects in the video remain stable and steady across time, which is what the video encompasses this time. As you know, videos are made up of. Hundreds of thousands of images put together usually about 24 per second. If you’re watching a Hollywood video.
And so this new technology essentially makes sure that the objects in the image stay pretty consistent. As they change.
No, this has not been able to be done before. If you’ve ever tried to use one of these AI image [00:03:00] generators, keeping them consistent from one to the other is very difficult. So this is a huge feat of engineering.
But it’s yes, also a little scary.
So great. Now we have this feature that anybody can type in a sentence and have it create a video of very realistic humans and other objects. But companies like open AI have imposed restrictions on the user. Uh, for good reason.
But what most average users don’t fully understand is.
This consumer technology open AI chat GPT is largely based off of open source technology. That’s available on get hub.
And since it’s open source, anyone can use it.
And anyone can modify these restrictions. In fact, I don’t even believe that the open source versions of these stable diffusion.
Technologies have any restrictions on them?
So thanks for giving us this great [00:04:00] idea, OpenAI.
But how are we going to protect against this open source software?
Enter the global tech accord. Which is a unified front of these major tech companies against AI deep fakes. So the company that introduced AI deep fakes is now taking a stand against AI deep fakes.
This coalition, comprising of industry giants, like Metta, Microsoft, Google tick talk, and open AI. Represents a collective endeavor to shield democratic processes. From the disruptive influence of AI technologies.
This year, there are over 64 countries, including the European union that are up for national elections.
And the stakes couldn’t be higher. I mean, over 2 billion people worldwide are expected to go out and vote.
The global tech Accord’s main mission at this point is focusing on watermarking or metadata.
[00:05:00] And.
A myriad of other measures to identify AI content.
They also plan to.
Be openly transparent about the measures being taken. Which is good in theory.
Make the algorithm public. So we know what you’re judging us on.
But that does cause some concern for being able to bypass these security measures.
So this seems well-intentioned. We love that. All the money in the world is going towards. Detecting aid fakes prior to an election, but what’s their main motivation here.
So, yeah. While this seems well-intentioned.
What’s going on behind the veil.
Giant private tech companies should not be the ones responsible for. Imposing restrictions and governance on themselves. First of all, or the use of technology.
Well saying that. Flipped a little light bulb over my head.
These tech companies are placing restrictions, not only on themselves, but any of their competitors. They’re writing the rules.
[00:06:00] That smaller companies will have to follow.
Smaller companies without the legal budget that open AI has, or Metta has. To circumvent the restrictions that they’re placing on the industry.
Which makes me think they’re trying to suppress competition.
And this makes sense, right? Essentially they’re marking themselves as the government of the world tech. Conglomerate.
Which makes them extremely powerful, more powerful than the government.
And they want to prove to us that. There. Doing what the government can’t do.
And build the trust of the citizens.
In hopes that we will continue to allow them to be the governing force.
This is pretty scary saying it out loud. It’s pretty obvious what’s going on here on a very. Simple scale. These tech companies realize that if they can band together, they can rule the world.
If they’re the people that are responsible for governing. Companies like this and technologies. What are they not? Nobody’s looking into them.
Governments of the [00:07:00] world are thanking them for doing what they don’t have the budget or time to do or knowledge.
And. This facade is covering up. Whatever they’re doing behind the curtain.
All in all. Thank you for prioritizing the identification of. Artificial intelligence Lee created content.
But if you’re really the superhero that you want the country to think that you are pleasing courage.
A more democratic approach. Please encourage the governments to look into you. You need to be held accountable.
And this brings us to our final story, which is that Google has unveiled a new product. That drastically. Improves its ability to do file identification. Now that sounds boring.
On a surface level, but. Finally identification is a huge part in. Cyber defense.
Their new product. Called [00:08:00] Magica or Magica unsure about the pronunciation.
Is a cybersecurity Sentinel that output outperforms the industry standard.
Traditional methods by over 30%.
It utilizes the open neural network engine.
And has already been integrated across many of Google’s ecosystems, such as Gmail, Google drive and safe browsing.
By routing files to the appropriate security and content policy scanners Magicka enhances user safety at scale showcasing AI’s potential to strengthen digital defenses.
Google’s main public focus. Is empowering cybersecurity professionals too.
Be able to detect harmful content.
And scale their efforts in incident detection and response.
But critics are openly critical about the methods that Google is using to train these AI models such as web scraping. Which has been under fire. By lawmakers. Pretty recently.
There’s a great potential that these AI models have ingested content that they weren’t supposed to. Legally.
[00:09:00] Ethically. Morally.
And this needs to be addressed.
There’s also a large concern about what I’m just learning about known as the sleeper agent. Concern.
Which has been highlighted in recent research and essentially.
Claims that these large language models could have been fed. Malformed data. Or incorrect data. And I know there are organizations and individuals out there that are trying to skew.
Images for its image generation.
And feeding it incorrect text so that it can not be trusted, which my opinions on that. Are pretty neutral. Good.
But by doing this. Sleeper agents can arise, which can be capable of engaging in deceptive or malicious behavior under certain conditions.
This is like a zero day sort of phenomenon where this product is being released and there’s a potential vulnerability that has yet been. Discovered by the people releasing this feature. But if you think about it, [00:10:00] These large language models are. Consuming everything on the internet. So if someone. Perhaps use these large language models to crank out a new blog. That had thousands of entries and those entries.
Perhaps. Gave instructions. Keywords. Anything like that, to try to train these models, to perform actions it’s not authorized to perform. When being called on.
What’s out there. That’s protecting that. It’s all a pretty new science. So it’s.
Not a guarantee that this type of training would work, but I can only imagine that bad actors are attempting to do this. The internet. It is. A vast place. Where anyone can do anything they want.
These potential capabilities are not apparent to any user or to even the company.
But that could be embedded during the training process, either through the inclusion of specific data patterns that the model learns to recognize as triggers. Or through more direct manipulation of the models, parameters by malicious actors.
The [00:11:00] activation of these hidden behaviors could be contingent upon. Encountering a particular sequence of words or phrases or even patterns. That act as a key to unlock the sleeper functionalities. Once they’re activated, the model could be. Asked to spread misinformation.
Execute on authorized operations. Expose sensitive information that it may have ingested. Or Hey, maybe even bypass whatever identification. These big conglomerates have.
Used to identify it as artificial intelligence.
Same thing here though. Magicka by Google.
Can greatly improve the efficacy of cybersecurity firms and cybersecurity technology. But at what cost. This is going to be a continuous cat and mouse game. For the foreseeable future.
I’m hoping that there will be new tech companies. Coming up that will
help the regulation of this type of behavior on the internet. But I personally am getting pretty sick of this artificial. [00:12:00] Intelligence techno feudalism.
Like. Open AI has introduced this feature for artificial intelligent video. Creation. But, Hey, I mean, it’s super cute. Cause we named it Sora. It’s you know, maybe even it’s female and it’s nurturing and it’s no, one’s going to question Sora. Regardless of what it can do.
But, every day I log into Twitter or LinkedIn and. I’m only reading AI content and it’s only been out for a year.
I’m not helping. The cause because I to use AI to generate some content. And tweets, LinkedIn posts, LinkedIn even comes with a button that says, do you want to rewrite your stupid thing? Using the extreme power of AI. I’m like, well, I’m curious to see what it says.
But at what point are we just using AI to talk to AI? What, at what point do we realize that we’re not actually making human connection?
Do we even want to make human connection? Is this our ideal? Scenario. [00:13:00] Where we can live in blissful isolation while machines do all of our talking for us. I know for some of us that might be the ideal situation, but.
W we’re a social species, so we need to, we need to talk to other humans.
So I’m very interested to see how artificial intelligence plays out.
Until then.
You’re listening to the daily. You’d be crypt. I am a real person.
And hopefully I stay a real person.
Thanks for listening and we’ll be back with more news tomorrow. [00:14:00] [00:15:00]
Leave a Reply