
Making sense of cybersecurity's contradictions

JUL 3, 2024

People are complex. Our standards vary, our beliefs can compete, and our firmly held opinions can evolve. In a 2015 New York Times opinion piece titled “The Virtue of Contradicting Ourselves,” organizational psychologist Adam Grant explains the science showing that most people resist the discomfort of holding two seemingly contradictory beliefs, values, or attitudes at the same time—a feeling social psychologist Leon Festinger coined “cognitive dissonance” back in 1957. When many people look at artificial intelligence (AI), and especially generative AI (GenAI), they balk at a level of complexity that rivals our own, craving a clear verdict that AI brings either good tidings or cause for concern. In a way, they’re trying to alleviate cognitive dissonance. Because just as we humans contradict ourselves, so too does AI—or at least our expectations for it do. On the one hand, AI promises unprecedented efficiency, innovation, and convenience. On the other, it raises profound questions that can’t be ignored.

What happens when AI simultaneously supports sellers in creating more personalized, human experiences while also powering “machine customers” acting in place of human buyers?

Is using AI to work faster than ever before a boon or a very real risk that employees and their employers should worry about?

Does AI’s power-hungry hardware diminish the positive impact of AI’s applications in sustainability initiatives?

In what ways does AI pose data security risks, and are there ways AI itself can be used to combat risk?

Will AI help organizations modernize their legacy technology, or will it contribute to an increase in technical debt?


Implementing AI responsibly requires more than just technical expertise. It requires an awareness of the nuance in the potential outcomes. We confront that nuance in the following sections, each representing one or more contradictions in the conversation about AI, with emphasis on those most relevant to today’s organizations. While we can’t neatly resolve all the gray areas in the discussion of AI so that they’re black and white, good and bad, we can—and do—help organizations navigate AI’s contradictions to both mitigate risk and capitalize on the immense economic opportunity that AI presents. And let there be no doubt that the economic opportunity is immense.


We understand that billion- and trillion-dollar promises of economic growth can be hard to process alongside present-day uncertainty about AI’s risks. But while the capabilities of AI may be new to us, the challenge of processing conflicting information is not. As Grant argued in his op-ed, the discomfort of cognitive dissonance can be a precursor to evolving and changing our minds for the better. “One person’s flip-flopping is another’s enlightenment,” he writes. So, by opening our minds to the different and sometimes opposing facets of AI and GenAI, perhaps people can become more comfortable with a technology that seems to be evolving with or without us. Let’s try it, shall we?
