These days we are constantly surrounded by fake news and misinformation, and while deepfakes have existed for a while, they are becoming more widespread. As the technology grows more advanced and prominent, can we really trust any of the videos we see or the audio we hear?

Now, for those who do not know, deepfakes are AI-generated fake videos or audio clips. In some cases they look and sound like real people saying very real things, but the truth is they are not. Deepfakes make putting words into the mouths of others quite easy, and sure, some of them are obvious and poorly put together, but many look quite real and are very believable.

Back in May, Digital Trends wrote an article covering deepfakes and how they are going to affect society as a whole, and some of the points they made were extremely important to be aware of. The more prevalent these things become, the more aware we need to be. Not everyone knows that videos and things of this nature can be faked; many people see something and take it at face value without thinking twice about it, which makes pushing different narratives much easier than you'd expect.

Not all deepfakes are simply 'celebrity faces' placed onto porn actors and actresses. Some depict high-profile people saying and doing things they never would. The world of deepfakes not only opens up a lot of questions in regard to politics, but it also brings forth a lot of questions about what is and isn't legal. For instance, we cannot necessarily copyright our voices, and yet deepfakes can be made of singers, using their 'voices' to sing lyrics they themselves may not align with.

In regard to voices, deepfakes, and looming lawsuits, Digital Trends wrote as follows:

There are even currently untested questions about the datasets used to train these audio deepfakes. As Colin points out, a voice itself can’t be copyrighted, but a sound recording of a voice singing a song can be. Is an audio deepfake trained on hours of copyrighted Jay-Z albums a breach of copyright? If so, since the copyrights may be dispersed among multiple record labels and other entities (for instance, an interview recorded for television), there could be a whole lot of potentially aggrieved (and copyright infringed) parties.

As these A.I. tools become ever more sophisticated, these cases are going to shift from hypothetical quandaries to the subject of real legal battles, so expect to see some interesting developments. One thing’s for sure: These are the legal battles of the future. When it comes to the legality of deepfakes, even in this one specialized domain, there’s plenty of complexity to delve into. Lawyers are no doubt rubbing their hands together at the prospect.

This was in reference to a deepfake situation Jay-Z is facing with his voice, and those who want to use it to say, or rather sing/rap, things he otherwise would not. These deepfakes can, and likely will, change the way we do a lot of things and how we handle a lot of things. Chances are you've seen deepfakes online without even realizing it; there are actually quite a few out there of our president, Mr. Trump.

Some people are even going so far as to say these deepfakes could threaten the US election itself. You see, if a convincing enough deepfake comes forth, it could change everything in seemingly the blink of an eye. I know that in itself might sound like a stretch, but it's a topic being widely discussed these days.

CNET wrote the following on the matter, bringing up a concept most were not necessarily considering:

The bad news: The mere existence of deepfakes is enough to disrupt the election, even if a deepfake of a specific candidate never surfaces.

One of the first nightmarish scenarios people imagine when they learn about this new form of artificial intelligence is a disturbingly realistic video of a candidate, for example, confessing to a hot-button crime that never happened. But that’s not what experts fear most. 

“If you were to ask me what the key risk in the 2020 election is, I would say it’s not deepfakes,” said Kathryn Harrison, founder and CEO of the DeepTrust Alliance, a coalition fighting deepfakes and other kinds of digital disinformation. “It’s actually going to be a true video that will pop up in late October that we won’t be able to prove [whether] it’s true or false.”

And if somebody cunning really wants a deepfake to mess with our democracy, the attack likely won’t be on one of the candidates. It would be an assault on your faith in the election itself: a deepfake of a trusted figure warning that polling sites in, say, black neighborhoods will be unsafe on election day, or that voting machines are switching votes from one candidate to another. 

Even Forbes has gone so far as to write about this topic. Forbes noted that the amount of deepfake content online is growing at a rapid pace. Apparently, back in 2019 there were almost 8,000 deepfake videos present online, but now we're looking at over 14,000. I guess only time will tell what may come of these deepfakes and whether we can somehow slow them down, but above all else, the more we know the better.
