AI for communicators: What’s new and what matters

Fresh data on CEO concerns about being replaced by automation, domestic and international regulatory news, new tools from TikTok, Google and more

AI roundup

It’s hard to believe that generative AI only exploded into the public consciousness with the broad release of ChatGPT this past November. Since then, it’s upended so many aspects of life — and threatens to change many more.

Now, we’re looking at fresh data on CEO concerns about being replaced by automation, domestic and international regulatory developments, new tools from TikTok, Google and more.

Data leaks, scared CEOs and conscious AI, oh my! 

Many communicators who are hesitant to experiment with AI tools are worried about data privacy. And those fears aren’t without merit.

This week, those fears were realized when security startup Wiz revealed that Microsoft’s AI team accidentally leaked 38 TB of private company data that included employee computer backups, passwords to Microsoft services, over 30,000 internal Teams messages and more. 

The cause of the leak is unique to Microsoft’s stake in AI, though: the AI team uploaded training data, along with open-source code for AI models, to the cloud-based software development site GitHub. The link included a shared access signature (SAS) token scoped far too broadly, so external users who visited it gained permission to view Microsoft’s entire Azure cloud storage account.

Microsoft told TechCrunch that “no customer data was exposed, and no other internal services were put at risk because of this issue.” To its credit, the company also said that Wiz’s research prompted it to expand GitHub’s secret scanning service, allowing it to monitor all public open-source code changes for exposed credentials and other proprietary information.

This incident, though specific to Microsoft’s business, highlights the risk of sharing AI models and proprietary information on cloud services. Triple-check your permissions, read the fine print and codify these steps in any internal guidelines for using AI-powered tools so your colleagues know to do the same.  
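For those curious what an overly broad SAS token actually looks like in code, here’s a minimal, hypothetical Python sketch using Azure’s azure-storage-blob library. The account, container and file names are invented for illustration, not Microsoft’s, and this is not the actual code behind the leak; it simply contrasts an account-wide token with a tightly scoped one:

```python
# Hypothetical sketch of the SAS-token failure mode -- not Microsoft's code.
# An account-level token with broad permissions and a distant expiry exposes
# everything in the storage account to anyone holding the URL; a token scoped
# to one blob with a short expiry limits the blast radius.
import base64
from datetime import datetime, timedelta

from azure.storage.blob import (
    AccountSasPermissions,
    BlobSasPermissions,
    ResourceTypes,
    generate_account_sas,
    generate_blob_sas,
)

ACCOUNT_NAME = "exampleaistorage"                        # invented name
ACCOUNT_KEY = base64.b64encode(b"placeholder").decode()  # use a real key in practice

# Risky: read/write/list over the entire account, valid for a decade.
risky_token = generate_account_sas(
    account_name=ACCOUNT_NAME,
    account_key=ACCOUNT_KEY,
    resource_types=ResourceTypes(service=True, container=True, object=True),
    permission=AccountSasPermissions(read=True, write=True, list=True),
    expiry=datetime.utcnow() + timedelta(days=3650),
)

# Safer: read-only access to a single file, expiring in one hour.
scoped_token = generate_blob_sas(
    account_name=ACCOUNT_NAME,
    container_name="training-data",    # invented name
    blob_name="model-weights.bin",     # invented name
    account_key=ACCOUNT_KEY,
    permission=BlobSasPermissions(read=True),
    expiry=datetime.utcnow() + timedelta(hours=1),
)
```

The safer token grants exactly one permission on exactly one file and expires quickly; the risky one is the kind of URL that, once shared publicly, opens up the whole account.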

Another concern that communicators have amid expanded enterprise AI applications is a fear of having their jobs replaced by automation. Turns out, CEOs are thinking the same thing.

A new report from edX that surveyed 800 C-suite executives, including over 500 CEOs, found that nearly half (49%) of CEOs believe that “most” or “all” of their roles could be replaced by AI. It’s a startling statistic: even the boss understands they’re not immune to being replaced.

Elsewhere, the report’s key findings suggest conclusions pointing back to the business goals of the online learning platform that fielded it: 87% of C-suiters said they’re struggling to find talent with AI skills, while most execs believe workers skilled at using AI should earn more (82%) and be promoted more often (74%).

The last edition of this roundup explored the gap between employees who crave AI training and organizations that are actually providing it. Wielding this data may help communicators make the case for more AI upskilling. 

In the interim, a new report offers some signs you can watch for to tell whether your AI is actually conscious.

Those signs include a measurable distinction between conscious and unconscious perception, an understanding of which parts of the brain are engaged to complete specialized tasks, and many more criteria that are not so easy to grasp.

While the chances of AI achieving sentience are low, they’re never zero. Constant vigilance! 

AI regulation discussions continue across the globe

Our last look at AI regulation explained the precedent set by the U.S. District Court for the District of Columbia that AI-generated images couldn’t be copyrighted because they lacked human authorship. Soon after, the U.S. Copyright Office opened a public comment period that it said would inform regulations moving forward.

In just two short weeks, the regulation conversation has evolved considerably. The Department of Homeland Security announced new policies aimed at promoting responsible AI use within the department, with a specific focus on facial recognition tech. 

“The Department uses AI technologies to advance its missions, including combatting fentanyl trafficking, strengthening supply chain security, countering child sexual exploitation, and protecting critical infrastructure,” the DHS wrote. “These new policies establish key principles for the responsible use of AI and specify how DHS will ensure that its use of face recognition and face capture technologies is subject to extensive testing and oversight.”

Meanwhile, Bill Gates, Elon Musk and Mark Zuckerberg met with senators last week to discuss the benefits and risks of AI. All three tech moguls expressed support for government regulation.

CNN reports:

The session organized by Senate Majority Leader Chuck Schumer brought high-profile tech CEOs, civil society leaders and more than 60 senators together. The first of nine sessions aims to develop consensus as the Senate prepares to draft legislation to regulate the fast-moving artificial intelligence industry. The group included CEOs of Meta, Google, OpenAI, Nvidia and IBM.

All the attendees raised their hands — indicating “yes” — when asked whether the federal government should oversee AI, Schumer told reporters Wednesday afternoon. But consensus on what that role should be and specifics on legislation remained elusive, according to attendees. 

Countries around the world are facing the same struggle: how to regulate AI before it grows too wildly out of control.

Reuters compiled a comprehensive list of how global governments are wrestling with these issues. Nearly all are still in the planning and investigation phases, with few rolling out concrete policies. Some, including Spain and Japan, are looking into possible data breaches from OpenAI and pondering how best to address genies that are already out of bottles.

China, meanwhile, has already implemented temporary rules while permanent ones are put into place. Since going into effect on Aug. 15, these measures require “service providers to submit security assessments and receive clearance before releasing mass-market AI products,” Reuters reported.

But according to Time, these rules aren’t being enforced very strictly, and the permanent rules may end up watered down. The stringent temporary measures were seen as hampering AI development in the tech-forward nation and are already being scaled back. Notably, rules for internal AI uses are much more lax than those for external purposes.

Some say that these relaxed rules could cause more competition with American AI companies, while others argue that China is already far behind in development and its authoritarian control of the internet will further slow development, even without the new rules. 

New AI tools from Google, TikTok and more

Bard, the generative AI product from Google, is trying to gain market share after a rocky start that saw it lagging far behind ChatGPT. Bard’s initial launch used a less sophisticated AI than ChatGPT, the New York Times reported, and early users walked away unimpressed and never came back, even after the tool was improved.

Now the team at Alphabet is hoping that integration with Google’s blockbuster products like Gmail and YouTube will give Bard a boost. 

According to the Times:

Google’s release of what it calls Bard Extensions follows OpenAI’s announcement in March of ChatGPT plug-ins that allow the chatbot to gain access to updated information and third-party services from other companies, including Expedia, Instacart and OpenTable.

With the latest updates, Google will try to replicate some of the capabilities of its search engine, by incorporating Flights, Hotels and Maps, so users can research travel and transportation. And Bard may come closer to being more of a personalized assistant for users, allowing them to ask which emails they missed and what the most important points of a document are.

The Google search engine will also offer a fact check of Bard’s answers, a safeguard against hallucinations. Answers that can’t be supported with search data will be highlighted in orange, quickly helping users identify dubious claims.

With the dominance of the Google suite in so many people’s personal and professional lives, these changes could make Bard more attractive as it seamlessly fits into day-to-day tasks. But that assumes that AI is answering questions in a way that’s helpful. 

Meanwhile, Morgan Stanley is making a huge bet on AI, going so far as to equip financial advisors with an artificial intelligence-powered “assistant.” CNBC says that the bespoke OpenAI tool, the AI @ Morgan Stanley Assistant, will allow advisors to quickly search a huge database of research. Finding answers in short order will allow for more client interaction, Morgan Stanley hopes.

“Financial advisors will always be the center of Morgan Stanley wealth management’s universe,” Morgan Stanley co-President Andy Saperstein said in a memo obtained by CNBC. “We also believe that generative AI will revolutionize client interactions, bring new efficiencies to advisor practices, and ultimately help free up time to do what you do best: serve your clients.”

In an interesting wrinkle, users must ask the AI questions in full sentences, as if talking to a human. Search engine-like keywords won’t do the job. 

If used properly, this could improve customer service. But it also carries a high risk of error or overreliance. It’s an experiment to watch for sure.

Finally, deepfakes and AI-powered spoofs are becoming commonplace on TikTok these days. It’s not uncommon to hear a celebrity speaking words their mouth never said. 

The social media giant has launched a new bid to make it easier for regular users to identify this AI-generated content. In addition to a label that allows creators to voluntarily tag their content as AI-generated or significantly edited with AI, TikTok is currently testing an AI detection tool. If the technology works, this could be a game-changer for transparency in the social space. 

But as with everything in AI right now, it’s all ‘ifs.’

What trends and news are you tracking in the AI space? What would you like to see covered in our biweekly AI roundups, which are 100% written by humans? Let us know in the comments!

Justin Joffe is the editor-in-chief at Ragan Communications. Before joining Ragan, Joffe worked as a freelance journalist and communications writer specializing in the arts and culture, media and technology, PR and ad tech beats. His writing has appeared in several publications including Vulture, Newsweek, Vice, Relix, Flaunt, and many more.

Allison Carter is executive editor of PR Daily. Follow her on Twitter or LinkedIn.
