
Putting the AI Genie Back in the Bottle?


Much has been said and written about the meteoric rise of AI over the past twelve months or so. That includes our own blog: in the last two quarters, we’ve covered AI governance, generative AI security questions, and even ChatGPT’s one-year anniversary.

Coverage is so thorough that AI—generative AI specifically—has now passed the peak of Gartner’s Hype Cycle and is starting to feel more than a little overplayed. That said, the metaphorical AI Genie is well and truly out of the bottle, with concerns rising almost as fast as the AI technology updates that seem to arrive nearly every day. Where does this put the data and analytics industry? Or society as a whole? Is it even possible to put the AI Genie back in the bottle, and if we could, would we want to?


Unleashing the AI Genie—responsibly 

Like many in the industry, we at Domo are focusing much of our effort on AI capabilities within our platform, ensuring that those capabilities make sense and deliver value to our customers. We’re working hard to ensure that data, as the foundational AI asset, is managed and ready to deliver on AI’s promises.

Likewise, we’re taking a pragmatic approach to the expanding portfolio of available AI models by providing agnostic management tooling and integration. This is our top priority in today’s data and analytics landscape. 

Technology and business use cases aside, regulatory bodies are also abuzz about AI readiness. I’ve had the pleasure of working with Australian universities and federal government agencies on policy and guidance around “responsible” AI.

The breakneck speed of AI development has created a strong sense of urgency among regulators to understand the potential risks of AI and develop mitigation strategies. As most will appreciate, this is somewhat of a thankless task, with regulators being “damned if they do and damned if they don’t.” It also appears the AI Genie is relishing its time out of the bottle and shows no signs of wanting to get back in.


Regulating the AI Genie—top two concerns 

Regulation can take many forms, ranging from outright prohibition to drafting recommendations and guidelines, with varying degrees of enforcement. The key concerns at present fall into two camps: 

  1. The AI technology itself, including the speed of development, data considerations, governance, infrastructure, and operating costs.
  2. The impact on and potential risks to business and society, primarily from a legal perspective: bias and ethics, human accountability, and unexplainable outcomes.

Compounding these concerns is the need to “get it right”—regulators rarely have the luxury of trial and error and are beholden to a wide range of interest groups and constituents, all of whom demand immediate responses. But regulation needs to be conservative (patient, even?) so as not to overreach or unnecessarily stifle growth and innovation. Normally this type of constraint is workable. However, the pace of AI development and adoption is driving new levels of urgency—including early calls to “pause” AI development altogether!

So where does that leave us? Clearly there’s no way (or need) to put the AI Genie back in the bottle. However, now that the initial surge of AI hype is passing, it is incumbent on the industry to develop a more nuanced response to AI’s possibilities.

While there is no shortage of innovation and commercial opportunity, we need to do everything possible to minimise risks and drive productive, sustainable use cases. If we don’t, AI risks becoming a technology underachiever, and we risk squandering its potential.
