Imagine you’re an associate at a consulting firm. You’re surprised to see a new “AI Assist” button appear in your email application one morning. Without any training or guidance from your firm’s IT department, you decide to try it out, asking the AI to draft a response to a client’s inquiry about tax implications for a proposed merger. The AI confidently generates a response that looks professional and well-written, which you quickly review and send. Three days later, your managing partner calls you into their office—the AI cited outdated tax regulations and recommended a structure that would create significant liability for the client. The incident triggers an urgent internal review, revealing that dozens of employees have been using the undisclosed AI feature for weeks, potentially exposing the firm to professional liability and damaging client relationships.
This may seem like the plot line of a modern-day Suits episode, but it is a reality that some software vendors have begun quietly embedding generative AI capabilities into their platforms, activating these features without notifying their business customers’ IT administrators. Major software providers have rolled out AI assistants and features through routine platform updates, essentially testing these capabilities on live enterprise data without explicit authorization from their clients. The practice of deploying “hidden AI” extends beyond mere feature releases—some vendors have started incorporating client data into their AI training processes through default settings, requiring organizations to actively discover and disable these features rather than obtaining advance permission. This approach flips traditional enterprise software deployment protocols, where new capabilities typically require administrative approval before activation.
Whether vendors are quietly training AI on client data, feeding that data into AI algorithms (a practice that has drawn FTC scrutiny), or deploying AI-run chatbots without disclosing their nature, the trend spans industries from banking and tech to health care and beyond. (Many users in Europe are already protected from such data harvesting by EU and UK data laws.)
Before vendors integrate AI tools into their services, obtaining customer opt-in is a fundamental step toward avoiding a host of legal and ethical minefields.
Understanding the Opt-In Principle
The deployment of AI features and training requires a deliberate two-pronged approach to client consent.
First, organizations can obtain explicit permission before enabling AI-powered features within their products or services. Even if a tool is a trial or beta feature, clients tend to be less excited about early access than concerned that the functionality creates a data leakage risk, conflicts with their commitments to their own customers or regulators, or doesn’t fit their overall AI governance. A buried “opt-out” button is very different from an “opt-in” feature, which makes an AI component obvious to users. “Most companies add the friction because they know that people aren’t going to go looking for it,” security and privacy activist Thorin Klosowski tells Wired. “Opt-in would be a purposeful action, as opposed to opting out, where you have to know it’s there.”
Second, and equally critical, organizations can secure separate authorization before using client data to train their AI models. Companies need massive amounts of data to build out new AI models. In their bid to compete, many are finding ways to consume user data to which they already have access—without client awareness. For example, some organizations have subtly added terms to their service agreements that allow them to scrape user data by default.
By making both AI feature activation and data usage for training contingent on explicit client approval, organizations demonstrate respect for customer autonomy while protecting themselves from the regulatory and reputational risks that arise from unauthorized AI implementation.
Risks of Not Opting In: Data Privacy and Regulatory Consequences
In addition to raising ethical questions, AI features enabled without explicit opt-in can violate major data protection laws such as the EU’s General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA). The EU also unanimously passed the AI Act last year, which focuses on high-risk uses of the technology and transparency requirements for the largest models.
EU regulators have stayed especially busy, issuing more than 2,000 GDPR fines as of March 2024. One recent target, the facial-recognition startup Clearview AI, was fined more than $30 million by the Dutch data protection authority after failing to stop multiple GDPR violations. On top of heavy fines, the potential reputational damage of treating data carelessly cannot be overstated.
Meanwhile, backed by the State of California’s strong enforcement apparatus, regulators applying the CCPA have been actively conducting data enforcement sweeps, targeting streaming services, mobile applications and connected vehicles, among others. In 2024, the state also introduced specific provisions targeting AI’s impact on high-risk industries. These laws require businesses to notify customers when AI-driven decisions affect critical areas such as lending, insurance or access to health care. Failure to comply can result in significant fines and reputational damage, setting a clear expectation for businesses to prioritize transparency and obtain informed consent before deploying AI solutions.
Lack of transparency around AI deployment can expose clients to significant compliance risks, especially in industries subject to regulatory oversight. When vendors enable AI features without client consent, they may inadvertently receive access to sensitive client data in ways that contravene established policies or agreements. For instance, if a financial services client unknowingly allows a vendor’s AI tool to access sensitive personal data, the client might fail an audit for improper personal data handling practices. This could lead to Matters Requiring Attention (MRAs) from regulators, signaling deficiencies that demand immediate correction, further scrutiny or fines. Similarly, clients may unintentionally misinform regulators when reporting their data management practices, as they might be unaware of AI-driven processes embedded within third-party platforms. Regulatory bodies scrutinize these omissions harshly, often imposing penalties for non-compliance or requiring additional corrective measures. In a landscape of increasing regulatory vigilance regarding AI, undocumented AI use could easily spiral into substantial financial and operational costs for businesses reliant on third-party technology solutions.
The Ethics of Undocumented AI Use
Transparency about AI use should be non-negotiable. Aside from the evolving legalities around AI, companies also hold an ethical obligation to ensure that clients are aware their data is being used.
So far, such behavior has been difficult to regulate, but UNESCO is among those offering ethical guidelines. The framework, Recommendation on the Ethics of Artificial Intelligence, was adopted by 193 countries in 2021. “In no other field is the ethical compass more relevant than in artificial intelligence,” says UNESCO Assistant Director-General for Social and Human Sciences, Gabriela Ramos. In regard to data, the recommendations emphasize “transparency; appropriate safeguards for the processing of sensitive data; an appropriate level of data protection; effective and meaningful accountability schemes and mechanisms,” among other touchpoints.
As companies increasingly embed AI into their operations, they shoulder the moral responsibility to inform clients how their data is collected, processed and potentially repurposed. Failing to document or disclose AI use undermines client trust and risks exploiting user autonomy, transforming what could be a mutually beneficial relationship into one fraught with suspicion and ethical ambiguity.
Conclusion: What Effective Opt-In Looks Like
Effective opt-in practices for AI tools go beyond a mere box to check—robust mechanisms can build trust, ensure transparency and empower clients to make informed decisions about their data and technology use. We outline below what effective opt-in looks like, with actionable steps for both vendors and their customers:
- Explicit Consent Mechanism: Vendors can ensure clients actively agree to AI-enabled features through a clear, unambiguous opt-in process available for the highest-level administrator.
- Customizable AI Controls: Vendors should offer configurable settings for AI features across the enterprise, allowing only the administrator to modify or disable specific functionalities without disrupting the overall software experience.
- Granular Consent Options: Vendors can provide the administrator, as well as individual users, with specific choices about how their data will be used, including separate opt-ins for data processing, AI-driven decision-making, and model training on proprietary or sensitive information (see the sketch after this list).
- Detailed Client Notifications: Vendors can notify clients about the activation of AI features through multiple channels, such as email alerts, in-app messages or dashboards.
- Contractual Safeguards: When negotiating software agreements, customers can push to include provisions that require vendors to explicitly disclose all AI-related functionalities, and prohibit the use of client data for model training without prior written consent.
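To make these points concrete, here is a minimal, hypothetical sketch in TypeScript of what a default-off, admin-gated consent record might look like. The type names, fields and the `updateConsent` helper are illustrative assumptions rather than any vendor's actual API; the point is simply that every AI scope starts disabled, only an administrator can change it, and model training gets its own switch.

```typescript
// Hypothetical sketch of a tenant-level AI consent record.
// Every scope defaults to false: nothing is enabled until an
// authorized administrator explicitly opts in.

type AiConsentScope =
  | "featureActivation"   // turning AI features on at all
  | "dataProcessing"      // letting AI features read tenant data
  | "aiDecisionMaking"    // allowing AI-driven decisions (e.g., scoring)
  | "modelTraining";      // using tenant data to train vendor models

interface AiConsentRecord {
  tenantId: string;
  grantedBy: string | null;   // admin who last changed consent, if any
  grantedAt: Date | null;
  scopes: Record<AiConsentScope, boolean>;
}

// Default state: all scopes off until an explicit opt-in.
function defaultConsent(tenantId: string): AiConsentRecord {
  return {
    tenantId,
    grantedBy: null,
    grantedAt: null,
    scopes: {
      featureActivation: false,
      dataProcessing: false,
      aiDecisionMaking: false,
      modelTraining: false,
    },
  };
}

// Only a caller with the admin role may change a consent scope,
// and each change records who acted and when, for audit and notification.
function updateConsent(
  record: AiConsentRecord,
  scope: AiConsentScope,
  granted: boolean,
  actor: { id: string; isAdmin: boolean }
): AiConsentRecord {
  if (!actor.isAdmin) {
    throw new Error("Only an administrator may change AI consent settings.");
  }
  return {
    ...record,
    grantedBy: actor.id,
    grantedAt: new Date(),
    scopes: { ...record.scopes, [scope]: granted },
  };
}

// Feature code checks consent before doing anything AI-related.
function canTrainOnTenantData(record: AiConsentRecord): boolean {
  return record.scopes.modelTraining;
}
```

Keeping model training as its own scope, rather than bundling it with feature activation, mirrors the separate authorization for training data discussed earlier, and recording who granted consent and when gives clients an audit trail they can show regulators.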
Companies that prioritize transparency and give clients control over their AI use demonstrate a respect for autonomy and accountability that builds lasting partnerships.