AI being used for nefarious purposes

A picture used to be worth a thousand words … these days, a picture is worth a thousand questions. What you can't see can hurt you: AI is being used for nefarious purposes, and many people are browsing around online unaware. We've developed a solution.

Technology So Good, It’s Bad

As you know, some companies and institutions will go to great lengths to give the impression of diversity, while others are buying fake profile pictures to inflate user numbers: padding things such as dating sites with profiles and comments from legitimate-looking "people."

We're not talking about Photoshop anymore. This is artificial intelligence being pushed to the extreme to produce increasingly undetectable deepfake content. If you think the technology is good now, it's going to be scary good within a few months.

Not only do brands need to be vigilant about the damage deepfakes are capable of, they need their social analytics software to help stop them in their tracks. But it's an uphill battle, as the technology behind them is only getting better by the day.

But is it really such a big deal? Let's explore a bit further…

Much Ado About Nothing?

Some of it seems innocent enough. Do you remember the video that comedian Jordan Peele created with Buzzfeed, which showed how easily technology can make former President Barack Obama appear to say some pretty uncharacteristic things? (It's worth keeping in mind that it was made using 2018 tech.)

[Image: deepfake of Barack Obama]

On first viewing it's amusing, but the sinister implications lurk right beneath the surface. The video was meant as a show-and-tell of AI capabilities, and a bit of a warning. Things are definitely not what they seem.

And what about the creation of millions of images of people who don't even exist? There is most certainly a market for them. It's not a stretch to imagine a dating site using them to create thousands of fake accounts fronted by "people" who will never complain, and to wonder what end those accounts will serve.

Perhaps a video is manipulated to make a CEO say something that sends shareholders running.

Or maybe a photo is leaked to show a person of prominence in an unsavory situation.

Or pretend that Company A needs to appear more diverse. It can simply enlist any unscrupulous AI company to generate thousands of computer-generated images of minorities and people of different ethnicities, and still beat the lunch rush at the takeout counter.

The possibilities are endless, and those who would exploit your brand will do so for whatever gain they wish. The tech is here and the confusion it creates is real.

This is why industry leaders trust the next-generation image recognition our AI employs, combating fraud with the insight they need to make quick PR decisions. And we've just leveled up, offering an app that goes on the offensive on your behalf.

Image Recognition for Bot-to-Bot Combat

Our industry-leading image and logo analytics keeps your brand ahead of the social media curve. Since the market never sleeps, your brand shouldn't either.

[Image: AI image recognition]

So, after our wildly successful Battlebot release last year, and in response to this evolving AI threat, we've expanded our already best-in-class image and logo recognition to give you something brands have been asking us for: the ability to send a death code to fakes.

Here's how it will work, and why brands and agencies that are already consistently monitoring emerging conversations around their brands will have an edge:

  • Baseline metrics are in place and monitored daily for strategic planning
  • Alerts are set around trigger words, sentiment spikes and unusual mention volume to ping you when you aren't in the tool
  • A trending bit of misinformation is identified, and the brand knows it is misinformation because it has been monitoring its brand and category all along
  • The brand identifies the source of the misinformation, as tracing insight back to its source is crucial in social analytics
  • Using our updated Battlebot, the brand sends the death code to the source of the misinformation, causing it to virtually disintegrate and deliver a mild electric shock to whoever tries to stop it
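
The alerting step above can be sketched in a few lines. This is a hypothetical illustration only: the trigger-word matching and mention-volume spike check shown here, along with all names and thresholds (`TRIGGER_WORDS`, `should_alert`, `spike_factor`), are assumptions for the sake of example, not our product's actual API.

```python
from collections import Counter

# Example watch list; a real deployment would be brand-specific (assumption)
TRIGGER_WORDS = {"fake", "scandal", "boycott"}

def should_alert(mentions, baseline_per_hour, spike_factor=3.0):
    """Return (alert?, reasons) for one hour's batch of mention texts."""
    reasons = []
    # 1) Trigger words: flag if any watched term appears in a mention
    hits = Counter(
        word
        for text in mentions
        for word in text.lower().split()
        if word in TRIGGER_WORDS
    )
    if hits:
        reasons.append(f"trigger words: {dict(hits)}")
    # 2) Volume spike: flag if this hour's volume far exceeds the baseline
    if len(mentions) > spike_factor * baseline_per_hour:
        reasons.append(
            f"volume spike: {len(mentions)} vs baseline {baseline_per_hour}"
        )
    return (bool(reasons), reasons)
```

The point of the sketch is the edge described above: because the baseline is already in place, a spike or trigger word stands out immediately instead of being discovered mid-crisis.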

And all of this can happen in a matter of minutes, saving you from a costly, time-intensive distraction that could erode your brand health over time.

As we get into the thick of 2020, less savvy competitors will get bogged down in crisis management when they fail to catch an issue as it arises. Don't go to sleep at night without knowing serious social media analytics will be waiting with your morning coffee. Because there are quite a few deepfake analytics contenders out there…

Identifying Inadequate AI

As the old saying goes, you have to fight fire with fire. But using inadequate AI, or none at all, is like fighting a bushfire with a match. Pretty graphs and charts aren't doing anybody any favors if they have no depth; in fact, they're a waste of your time and money.

Brands need to dig down to the bottom of their data – fast.

[Image: interest in AI and deep learning]

That's why so many major brands and their partners trust our next-generation artificial intelligence software to stay ahead of the social conversation, mitigate false information and direct their own narrative.

[Image: capturing false information with AI studio]

Unfortunately, we know that anything that can be used for bad most certainly will be. So get ahead of it! Your brand needs social media analysis that not only keeps up with the speed of social in real time but is infinitely searchable on the fly. We'd love to have you connect for a demo and see for yourself!