Steering emerging growth in Generative AI and Trade Secret Protection
The Trinidad and Heppner cases represent early judicial signals that information shared with public AI tools may fall outside established legal protections.
Recent federal district court rulings have underscored the risks involved in sharing confidential material with generative AI platforms.
In the Trinidad v. OpenAI case, the court dismissed the plaintiff’s trade secret claims under the Defend Trade Secrets Act (DTSA). It ruled that the plaintiff had voluntarily disclosed her allegedly proprietary frameworks to OpenAI while using ChatGPT to develop them.
In a separate matter, Judge Jed S. Rakoff in the United States v. Heppner case held that documents generated through publicly accessible AI systems were not protected by attorney-client privilege. He reasoned in part that information communicated via an AI platform could not be considered confidential where the platform had no contractual obligation to maintain secrecy.
When viewed collectively, Trinidad and Heppner represent early judicial signals that information shared with public AI tools may fall outside established legal protections.
While these outcomes align with long-standing principles of trade secret and privilege law, they highlight the need for companies to reassess their exposure when using AI technologies. Before examining these cases more closely, it is useful to outline broader considerations at the intersection of generative AI and trade secret law.
A core requirement for trade secret protection under the DTSA is that the information must not be ‘generally known’ or ‘readily ascertainable through proper means’.
Generative AI
Generative AI systems, such as ChatGPT and Claude, raise a threshold concern: they may render information that was previously confidential ‘generally known’ or ‘readily ascertainable’ within the meaning of trade secret law.
The DTSA further requires that owners take ‘reasonable measures’ to preserve secrecy, an issue central to the court’s reasoning in Trinidad.
Traditionally, information is considered readily ascertainable if it can be obtained from public sources such as patents, trade publications, or reference materials. Although courts have not yet directly addressed whether information surfaced through generative AI queries meets this standard, it is likely they will analogize such outputs to conventional public sources. If that approach is adopted, information that AI tools can reconstruct or derive from publicly available data may no longer qualify for trade secret protection.
Some observers argue that as generative AI becomes more adept at synthesizing public information, it could effectively erode trade secret protections by making such information broadly accessible. It remains uncertain whether this scenario will fully materialize or whether it overstates current technological capabilities.
At the same time, a countervailing perspective suggests that this evolution may be appropriate: if AI systems can readily derive certain information from public inputs, that may indicate the information does not warrant trade secret protection in the first place, thereby raising the threshold for what qualifies as protectable proprietary knowledge.
Trinidad v. OpenAI
Trinidad squarely examines the implications of uploading potentially protected confidential material to a generative AI platform.
The DTSA requires that a trade secret owner take reasonable steps to preserve the secrecy of the information. Observers have long suggested that disclosing trade secrets to a generative AI system could undermine protection, much like publishing such information online. Trinidad is among the first to directly engage with this question.
The pro se plaintiff in Trinidad brought multiple claims against OpenAI, including under the DTSA, alleging misappropriation of proprietary AI development frameworks she created while using ChatGPT.
The court’s analysis centered on the threshold requirement that purported trade secrets must be subject to reasonable measures to maintain their confidentiality. It observed that the plaintiff “has not alleged that she took any reasonable measures to keep these ‘protocols and frameworks’ secret.”
Importantly, the plaintiff acknowledged developing these frameworks using ChatGPT. This was an action that “would have required her to voluntarily share the information she now alleges is part of her ‘trade secrets’ with OpenAI.”
Relying on the principle articulated by the Supreme Court in Ruckelshaus v. Monsanto Co., the court reiterated that when a party “disclosed her trade secret to others who are under no obligation to protect the confidentiality of the information, her property right is extinguished.”
By agreeing to OpenAI’s Terms of Service and using ChatGPT in the development process, the plaintiff effectively consented to disclosure without instituting confidentiality safeguards.
The court reiterated that, irrespective of whether such consent would ultimately be enforceable contractually, the absence of protective measures and the voluntary nature of the disclosure defeated trade secret protection under the DTSA.
U.S. v. Heppner
Heppner arose in a distinct setting of criminal prosecution rather than a civil trade secret dispute. It considered whether communications recorded through a generative AI platform were shielded by the attorney-client privilege. Nonetheless, the court’s treatment of confidentiality bears directly on the question of whether trade secret protection could survive disclosures made to AI systems, making the case instructive for practitioners.
During the government’s investigation, the FBI recovered documents from Heppner’s residence, including some thirty-one records capturing his exchanges with Anthropic’s Claude AI.
Following the indictment, defense counsel asserted privilege over these materials, contending that they reflected information derived from counsel and were prepared for the purpose of obtaining legal advice.
Characterizing the issue as “a question of first impression nationwide,” the court held that the AI-generated records did not meet the criteria for attorney-client privilege, determining that the communications lacked “at least two, if not all three” of the doctrine’s core elements.
First, Claude was not an attorney, and exchanges between non-attorneys concerning legal matters did not attract privilege. The court added that this reasoning would extend to other forms of privilege, all of which require, “among other things, a trusting human relationship with a licensed professional who owes fiduciary duties and is subject to discipline.”
Second, and more pertinent to trade secret considerations, the communications were not confidential. This conclusion rested not only on the involvement of a third-party AI platform but also on Anthropic’s stated privacy practices, which inform users that input and output data may be collected, used for system training, and disclosed to third parties, including regulators.
Thus, the court declined to extend privilege protection to the AI-related documents.
Protecting Trade Secrets in the Generative AI Era
The important question for trade secret owners and counsel advising them is what additional steps are necessary to protect sensitive information in the era of generative AI. The growth of the internet previously compelled companies to adopt stronger safeguards, such as password protection, firewalls, encryption, monitoring of employee internet activity, and comprehensive contractual confidentiality provisions.
Organizations that were slow to implement such safeguards risked forfeiting trade secret protection. A similar dynamic is emerging now: companies that fail to respond to the distinct risks posed by generative AI may likewise expose their confidential information.
Some organizations responded by banning employee use of generative AI tools. However, this approach is impractical and counterproductive. In reality, employees are unlikely to abandon tools that significantly enhance productivity, and blanket bans tend to encourage workarounds rather than compliance. A more effective strategy is to direct usage toward platforms that provide credible confidentiality protections.
One approach is to deploy an internal generative AI system. Under this model, any information shared with the AI remains within the company’s controlled environment and is governed by employee confidentiality obligations. While this offers strong protection, it can be resource-intensive and beyond the reach of many organizations.
Another option is to secure an enterprise-level license from a commercial AI provider. For instance, certain providers offer arrangements under which user inputs and outputs are not retained, except where required for legal compliance or misuse prevention. Some also provide zero data retention configurations that limit data storage beyond initial screening processes.
Whether an enterprise license alone qualifies as a ‘reasonable measure’ remains unsettled, though there is support for that position. Commercial terms and data processing agreements typically impose explicit contractual limits on the provider’s ability to use customer data for model training or to disclose it to third parties.
These arrangements resemble traditional non-disclosure agreements or vendor confidentiality clauses, both of which have long been recognized as reasonable safeguards in trade secret law. In addition, enterprise-grade security measures are often independently audited, which may support a finding of objective reasonableness.
However, the reasonable measures inquiry is inherently fact-specific, and reliance on an enterprise license alone is unlikely to suffice. Courts are likely to consider whether additional safeguards were implemented, including access controls, internal policies governing permissible data inputs, and employee training on appropriate usage.
It is also important to note that even under strict data retention configurations, some information may still be retained for limited purposes such as abuse monitoring. This residual retention could be cited by an opposing party in challenging the adequacy of a company’s safeguards.
In parallel with adopting enterprise solutions, companies should establish and document internal AI governance frameworks. These policies should clearly define what categories of information may be entered into AI systems, and employees should formally acknowledge their obligations. A related concern, often described as ‘shadow AI’, arises when employees use personal or consumer-grade accounts for work-related tasks.
Employees who accept consumer platform terms without authorization may inadvertently permit the use of company data for training purposes, allowing proprietary information to enter external systems without oversight.
To mitigate this risk, companies should adopt and enforce policies prohibiting the use of personal AI accounts for business purposes, supported by access controls and training.
Companies should also apply a strict need-to-know principle to AI usage, limiting which employees can input trade secret information into such systems and maintaining appropriate access logs. In practical terms, organizations should review existing trade secret protection measures and determine where enhancements are needed to address AI-related risks.
Key Takeaways
Recent decisions such as Trinidad and Heppner signal the early stages of judicial engagement with the intersection of generative AI and trade secret law. Their central takeaway is clear: disclosing confidential or proprietary information to a public generative AI platform, without adequate contractual and structural protections, may be treated as equivalent to public disclosure.
Courts are likely to apply established legal principles relating to trade secrets, privilege, or confidentiality, without creating AI-specific exceptions.
For trade secret holders, this underscores that the ‘reasonable measures’ standard now extends to AI usage.
At a minimum, organizations should transition users to enterprise-grade AI services with appropriate contractual safeguards, implement clear internal policies governing permissible data inputs, provide employee training, and audit existing protection frameworks to identify and address gaps.
The legal standards are not new. The challenge lies in applying them rigorously within the evolving technological landscape.