Zoom's updated TOS prompted concerns about AI and privacy. Can the two go hand-in-hand?

It's been a bit of a rough PR week for video meeting and chat platform Zoom. The app, which became synonymous with remote working during the pandemic, received a few sideways glances when it announced plans to bring workers back into the office.

Then, thanks to a post on blog StackDiary, users on X, formerly Twitter, caught wind of what appeared to be a change to Zoom's terms of service (TOS), which would seemingly allow the platform to feed user "content" to AI programs in training almost indiscriminately.


Section 10.4 of Zoom’s TOS, specifically, raised red flags for many users, reading in part:

"You agree to grant and hereby grant Zoom a perpetual, worldwide, non-exclusive, royalty-free, sublicensable, and transferable license and all other rights required or necessary to redistribute, publish, import, access, use, store, transmit, review, disclose, preserve, extract, modify, reproduce, share, use, display, copy, distribute, translate, transcribe, create derivative works, and process Customer Content and to perform all acts with respect to the Customer Content .... for the purpose of product and service development, marketing, analytics, quality assurance, machine learning, artificial intelligence, training, testing, improvement of the Services, Software, or Zoom’s other products, services, and software, or any combination thereof."

With conversation about artificial intelligence, and the role it should or should not play, circulating everywhere lately, the words "machine learning," "artificial intelligence," "training," and "testing" rang in users' ears.

X, along with other social media platforms, quickly lit up with pledges to ban Zoom from company use and to cancel memberships.

Zoom responds to outcry over changes to its terms of service

Soon after, Zoom put out a company blog clarifying the clause, adding in bold lettering: "For AI, we do not use audio, video, or chat content for training our models without customer consent."

The post also clarified what AI services Zoom was hoping to train, which include Zoom IQ Meeting Summary and Zoom IQ Team Chat Compose, tools that generate meeting summaries and compose chats. The features are currently available on a free-trial basis and require account owners to enable the features, according to the post.

"When you choose to enable Zoom IQ Meeting Summary or Zoom IQ Team Chat Compose, you will also be presented with a transparent consent process for training our AI models using your customer content. Your content is used solely to improve the performance and accuracy of these AI services. And even if you chose to share your data, it will not be used for training of any third-party models," the blog reads.

A Zoom spokesperson told USA TODAY in a statement that: "Zoom customers decide whether to enable generative AI features, and separately whether to share customer content with Zoom for product improvement purposes. We’ve updated our terms of service to further confirm that we will not use audio, video, or chat customer content to train our artificial intelligence models without your consent.”

While Zoom was the unfortunate subject of public discourse this time around, International Association of Privacy Professionals Washington, D.C. Managing Director Cobun Zweifel-Keegan told USA TODAY that people, businesses and regulators can expect to continue grappling with the use of AI.

"Every now and then, we seem to get a lot of interest in some company's terms of service and how it's raised, often they're very similar to each other. If you look at other companies, they will have a lot of the same language that was an issue here," Zweifel-Keegan shared.

"Companies are always updating their terms of service because of new products and services that they're offering...but yes, also, I think we're going to start seeing AI being explicitly described in terms of service more often."

Zweifel-Keegan said that terms of service exist, on a large scale, to protect the company. Because anything a user "creates," including text, voice and images, automatically belongs to that user under copyright law, a platform that copies, stores or otherwise uses user-created "content" opens itself up to liability. This is why we see the long, and at times aggressive-sounding, walls of text we must click "agree" to before using a service.

At the same time, companies are still subject to privacy policy, which usually exists separately from but in tandem with TOS agreements. The purpose of a privacy policy, Zweifel-Keegan said, is to explain the commitments that the company is making around personal information and often to limit the company's ability to collect information for purposes that go beyond delivering the agreed upon service.

"Terms of service and privacy policies are both legally binding instruments that companies write and publish on their websites. They have very different purposes," he said. "So, reading one without looking at the other can sometimes lead to kind of frightening and maybe overly dramatic reads on what exactly is happening."

AI models need real-world information to improve

In the case of Zoom, concerns were focused on the potential use of video, audio and other "customer content" for training models and algorithms, said Zweifel-Keegan. This, he said, gets to the core of policy and general conversation around AI today: what data should companies be allowed to use to train new models?

"We have debates around should publicly available information, which is what was used primarily to train, to educate [AI], should that be used? Should that be allowed?" he said.

"Should you have to seek out consent and find all the pieces of information individually before you can train those types of models, should people know before they've shared their information that it's going to be used to train a model?"

He said that platforms like Zoom can be especially tricky, as the decisions in this case are up to the primary account holder. Meaning, if you're in a call and the host has consented to the use of AI, your options are to ask them to rescind that consent or to leave the call if you don't want to take part.


Zweifel-Keegan said that different companies will likely settle on different ways of using consumer content and data to train AI, and will offer different possibilities for opting out. Ultimately, these models need real-world information in order to improve.

Even after systems are launched, they require continued training in order to make them better and more useful. At the same time, they need to be monitored for privacy, fairness and bias, accuracy and other concerns.

But one thing is for sure: AI isn't going away anytime soon.

Those concerned about the use of their content can, however, take a few steps to make sure they understand what they're getting into before clicking accept.

Where to go to better understand what you're agreeing to

"I think most companies have gotten much better over the years at explaining their privacy practices in plain language," said Zweifel-Keegan.

"So, in addition to privacy policies, which are just as arduous as terms of service, usually many companies have layered privacy notices, privacy centers where you can go to better understand and to more quickly understand how they're collecting types of information, using and sharing it with people and any choices that you might have to exercise over that."

He pointed to the many blog posts Zoom has available on its website that explain, in clearer and simpler terms, what is inside its TOS and privacy policies, something many other companies offer as well. In an increasing number of states, consumers also have the right to request an overview of the information a company holds on them and to ask for its deletion.

And, he reassured, companies are legally bound by the statements made in these policies, meaning consumers are protected by authorities like the Federal Trade Commission if any claims, like the ability to opt out of certain data collection, turn out to be false.

"AI is part of every policy conversation right now," said Zweifel-Keegan. "I think one of the takeaways for companies in this is that it is really important to be straightforward with your users and provide as much clarity as possible about terms of service changes to help make sure that customers understand what you are doing and what you're not doing."
