• Developments in Digital Privacy Litigation in 2024-2025: CIPA, VPPA, and California’s SB 690

    By Leeza Arbatman and Scott C. Hall

    In the wake of an explosion in digital privacy litigation, courts and legislatures are redrawing some of the boundaries of what qualifies as unlawful data collection under decades-old statutes. Claims brought under California’s Invasion of Privacy Act (CIPA) and the federal Video Privacy Protection Act (VPPA) have tested how far traditional wiretap and video privacy laws can stretch to cover modern tracking technologies like pixels, session replay tools, and embedded analytics software. As these suits proliferate, courts are being asked to decide whether routine digital tracking amounts to interception, surveillance, or unauthorized disclosure of personal information.

    Recent developments reflect both the tightening and expansion of privacy liability. In California, courts remain split on whether modern tracking tools qualify as “pen registers” or violate CIPA’s wiretap provisions, while a pending bill—SB 690—aims to sharply curtail such claims going forward. At the federal level, VPPA decisions have moved in divergent directions, with a growing Circuit split on what makes someone a “consumer” and what counts as “personally identifiable information.” Together, these trends show a legal landscape in flux, shaped as much by statutory interpretation as by shifting expectations around digital privacy and surveillance.

    California CIPA Developments

    Recent decisions illustrate the divergent paths CIPA claims are taking in California and beyond. While some courts continue to reject CIPA suits targeting ordinary website tracking, others are permitting such claims to proceed—especially where plaintiffs allege unauthorized use of third-party tracking software or more invasive data collection. The result is a patchwork of outcomes that often turn on the specific tracking technologies and legal theories alleged.

    What Counts as a “Pen Register” or “Trap and Trace Device” Under § 638.51?

    Courts are divided on whether modern web-tracking tools fall within the scope of California Penal Code § 638.51, which prohibits unauthorized use of devices that capture dialing or routing information, but not communication content.

    In some recent decisions, courts have permitted claims to proceed where plaintiffs plausibly alleged that tools like TikTok scripts or IP trackers functioned like pen registers or trap-and-trace devices:

    • Lillian Jurdi v. MSC Cruises (USA) LLC, No. 24STCV14098 (Cal. Super. Ct. Sept. 17, 2024): TikTok tracking scripts that collected geographic information, referral tracking, and URL tracking could qualify as such devices.
    • Shah v. Fandom, Inc., No. 3:23-cv-4883 (N.D. Cal. Oct. 21, 2024): IP tracking that relayed user location data supported a pen register claim, as did the fact that users could not reasonably expect that trackers would be installed on websites and transmit their IP addresses every time they visited.
    • Heiting v. IHOP Restaurants, LLC, No. 24STCV14453 (Cal. Super. Ct. Oct. 28, 2024): TikTok scripts plausibly captured incoming user data identifying the user, akin to a trap-and-trace device.
    • Lesh v. CNN, No. 1:23-cv-7374 (S.D.N.Y. Feb. 20, 2025): Court noted in dicta that IP tracking might fit the definition, particularly where it collected location-related data associated with user communications.

    Others have rejected such claims, holding that § 638.51 targets telephone surveillance and doesn’t extend to routine online tracking:

    • Sanchez v. Cars.com, 2025 WL 487194 (Cal. Super. Ct. Jan. 27, 2025): The pen register statute does not extend to internet communications.
    • Rodriguez v. Plivo, 2024 WL 5184413 (Cal. Super. Ct. Oct. 2, 2024): Basic location data revealed by an IP address is not sensitive enough to sustain a pen register claim.
    • Palacios v. Fandom, No. 24STCV11264 (Cal. Super. Ct. Sept. 24, 2024): IP addresses are not outgoing communications, as required to plausibly allege violation of the pen register statute.
    • Aviles v. LiveRamp, No. 23STCV28190 (Cal. Super. Ct. Jan. 28, 2025): Tracking beacon collected only IP addresses and device information, and so did not qualify as a pen register.

    Session Replay and “Reading” in Transit under § 631(a)

    Courts assessing CIPA § 631(a) claims based on session replay tools have focused on whether the software “reads” communications during transmission. The statute prohibits unauthorized interception, but not all data capture qualifies—liability generally requires real-time comprehension or decoding.

    Several decisions highlight this distinction between passive recording and active interception:

    • Heerde v. Learfield Communications LLC, No. 2:23-cv-5258 (C.D. Cal. July 19, 2024): Court allowed the § 631(a) claim to proceed past the pleading stage where plaintiffs alleged that search terms were transmitted in real time to third parties, constituting interception in transit.
    • Torres v. Prudential Financial, Inc., 2025 WL 1135088 (N.D. Cal. Apr. 17, 2025): Court granted summary judgment for defendants. The session replay software recorded keystrokes and mouse movements for later viewing but did not “read” the data as it was being transmitted. The absence of real-time decoding or interpretation defeated the CIPA claim.
    • Williams v. DDR Media, LLC, 757 F. Supp. 3d 989 (N.D. Cal. 2024): After discovery, the court found that the tracking software hashed inputs and did not retain or analyze their contents. Because it neither read nor attempted to understand the meaning of the communications during transmission, no liability under § 631(a) attached and summary judgment was granted for the third-party vendor and defendant who partnered with it.

    Privacy Expectations in IP Addresses and Standing

    Defendants continue to win dismissal where courts find no reasonable expectation of privacy in IP addresses or where plaintiffs fail to allege a concrete injury.

    • Gabrielli v. Insider Inc., No. 1:23-cv-7433 (S.D.N.Y. Feb. 18, 2025): Dismissed for lack of standing; IP tracking alone didn’t show harm or privacy invasion.
    • Zhizhi Xu v. Reuters News & Media, No. 1:23-cv-7425 (S.D.N.Y. Feb. 13, 2025): Standing denied where plaintiff didn’t allege that IP tracking resulted in targeting or other harm.
    • Heiting v. FKA Distributing Co., No. 3:23-cv-5329 (N.D. Cal. Feb. 3, 2025): No standing where plaintiff failed to specify frequency of visits, data shared, or whether the tracking led to any deanonymization or harm.
    • Casillas v. Transitions Optical Inc., No. 23STCV30742 (Cal. Super. Ct. Apr. 23, 2024): Dismissed for lack of allegations about how plaintiff interacted with the site or what data was collected.
    • Ingrao v. AddShoppers, Inc., 2024 WL 4892514 (E.D. Pa. Nov. 25, 2024): Held that email addresses and general internet activity are not sensitive enough to support standing under CIPA or similar statutes.

    The Ninth Circuit Weighs in with Three Decisions

    Amidst these varying district court decisions, the Ninth Circuit weighed in on three CIPA cases, affirming dismissal of the claims in two and reversing dismissal in the third. These decisions will likely be invoked by both plaintiffs and defendants in bringing and defending against CIPA claims going forward:

    • Thomas v. Papa John’s, 2025 WL 1704437 (9th Cir. June 18, 2025): Affirmed dismissal of CIPA claims based on session replay code because plaintiff alleged that Papa John’s directly violated § 631(a) by eavesdropping, as opposed to aiding and abetting eavesdropping by a third party. The panel held that a party to a conversation cannot be liable for eavesdropping on its own conversation.
    • Mikulsky v. Bloomingdale’s, 2025 WL 1718225 (9th Cir. June 20, 2025): Reversed dismissal of CIPA claims based on session replay code because plaintiff alleged sufficient facts that defendant aided or conspired with third-party session replay providers to capture the “contents” of plaintiff’s communications on defendant’s website (including names, addresses, credit card information, and product selections), and not merely “record” information (such as mouse clicks or movements) regarding the characteristics of those communications.
    • Gutierrez v. Converse, 2025 WL 1895315 (9th Cir. July 9, 2025): Affirmed dismissal of CIPA claims based on the chat feature provided on the Converse website by Salesforce because plaintiff provided no evidence that her chats were read by Salesforce, despite evidence that Salesforce could read those chats. Note the concurrence by Judge Bybee questioning whether CIPA was intended to cover internet communications at all: “If the California legislature wanted to apply § 631(a) to the internet, it could do so by amending that provision or adding to CIPA’s statutory scheme . . . California has failed to update § 631(a) to account for advances in technology since 1967. It is not our job to do it for them.” Id. at *3.

    The VPPA Circuit Split in the Digital Age

    Background

    The VPPA prohibits video service providers from knowingly disclosing a consumer’s personally identifiable information (PII) related to video viewing without consent. Congress enacted the statute in 1988 after Judge Robert Bork’s video rental history was disclosed during his Supreme Court confirmation process. Although the titles—such as Hitchcock thrillers and family films—were unremarkable, the episode, which became known as the “Bork Tapes” incident, sparked public concern over the ease with which viewing habits could be exposed and prompted Congress to pass the VPPA to protect consumers’ video viewing information from nonconsensual disclosure.

    The Second Circuit Expands, Then Narrows, the VPPA

    In Salazar v. National Basketball Association, 118 F.4th 533 (2d Cir. 2024), the plaintiff subscribed to the NBA’s email newsletter and later viewed videos on NBA.com while logged into Facebook. He alleged that the NBA used Meta’s tracking pixel to share his video viewing history and Facebook ID with Meta for targeted advertising. The Second Circuit held that the email newsletter constituted a “good or service” under the VPPA even though it was non-video content. This holding significantly expanded the definition of a “subscriber” under the statute and led to a surge in VPPA claims.

    More recently, however, in Solomon v. Flipps Media, Inc., 2025 WL 1234567 (2d Cir. May 1, 2025), the Second Circuit held that sending a Facebook user’s ID and a URL containing a video title to Meta does not trigger VPPA liability. Applying an “ordinary person” standard, the court ruled that this data combination does not constitute PII because it doesn’t, on its own, reveal an individual’s viewing history without additional tools or expertise. Solomon is a major victory for defendants and is expected to significantly curb pixel-based VPPA claims in the Second Circuit. The decision aligns the Second Circuit with the Third and Ninth Circuits, reinforcing a narrower interpretation of the statute.

    The Sixth Circuit’s VPPA Limitation: Salazar v. Paramount Global

    In Salazar v. Paramount Global, 133 F.4th 642 (6th Cir. 2025), the Sixth Circuit rejected a VPPA claim based on Meta Pixel use, narrowing the definition of “consumer” under the statute. The plaintiff alleged that 247Sports.com disclosed his video viewing history to Facebook while he was logged into his account and subscribed to the site’s newsletter. The court held that unauthorized disclosure of viewing history to Facebook constituted a concrete injury, analogizing it to common-law privacy harms. However, it concluded that Salazar did not have a “consumer” relationship with the defendant, as required under the VPPA—Salazar’s newsletter subscription didn’t qualify as a subscription to goods or services in the nature of audiovisual materials.

    The Seventh Circuit’s Expansion of VPPA Viability: Gardner v. Me-TV National Limited Partnership

    In Gardner v. Me-TV National Limited Partnership, 132 F.4th 1022 (7th Cir. 2025), the Seventh Circuit expanded the scope of VPPA liability by holding that plaintiffs who created free MeTV accounts to access personalized video features qualified as “subscribers” under the statute. The plaintiffs alleged that MeTV embedded Meta’s tracking pixel in its videos, transmitting their viewing history and personal data to Facebook for targeted advertising. The court found that exchanging email addresses and zip codes for personalized video access made the plaintiffs “subscribers,” emphasizing that “data can be worth more than money” in the digital economy. It adopted a broad reading of “consumer,” holding that the VPPA covers anyone who subscribes to any service from a video tape service provider, regardless of whether the subscription is tied directly to video content. The court rejected MeTV’s argument that the plaintiffs merely subscribed to an “information service,” explaining that the statute focuses on who provides the subscription—not the specific type of content accessed. Gardner marks a significant expansion of VPPA exposure, particularly for ad-supported platforms that collect user data in exchange for personalized video features.

    Takeaways

    Together, Solomon v. Flipps Media, Salazar v. Paramount Global, and Gardner v. Me-TV illustrate the deepening Circuit split over how broadly the VPPA applies in the context of modern digital tracking. The Circuits have taken different positions on who qualifies as a “consumer” and what constitutes “personally identifiable information” traceable to a person. These cases underscore the uncertainty that remains around the VPPA’s reach in the age of ad-supported streaming and pixel-based analytics, with the permissibility of such claims now hinging heavily on jurisdiction.

    SB 690: California’s Legislative Response to CIPA Abuse

    Amidst the wave of CIPA litigation, the California legislature has introduced a bill to curb increasingly abusive litigation practices over website data collection that have surged over the past few years.

    What the Bill Does

    SB 690 amends CIPA to exempt from liability the use of recording or tracking technologies that serve a “commercial business purpose.” The exemption applies to Penal Code Sections 631, 632, 637.2, and 638.51, provisions that have been the focus of extensive litigation and have generated significant uncertainty for businesses attempting to navigate compliance. The bill aims to clarify the permissible use of now-ubiquitous web technologies that assist with analytics, advertising, and personalization of digital experiences. If passed, the bill will rein in what many see as an increasingly unmanageable and unpredictable wiretapping litigation landscape.

    Who’s Affected

    • Defendants Favored: Website operators, analytics providers, and ad tech firms gain protection from CIPA suits arising out of standard business activities.
    • Plaintiffs’ Bar Constrained: Routine lawsuits over standard tracking implementations lose statutory footing.
    • Businesses See Reduced Exposure and Litigation Cost: Currently, CIPA permits $5,000 per statutory violation, and litigation costs on top of that create hefty financial repercussions for CIPA violations.

    Status and Outlook

    SB 690 passed the California Senate unanimously and has found strong support in the Assembly. As amended, the bill applies prospectively only—it will not affect pending cases filed before the effective date. However, the Assembly voted to advance SB 690 as a two-year bill, meaning it can carry over into the 2026 legislative session, which will likely delay enactment. This may prompt a further surge of CIPA filings over the next few months as plaintiffs race to file before the new limitations take effect.

    Conclusion

    As courts and lawmakers confront the realities of digital tracking and data analytics, the legal contours of privacy litigation are rapidly evolving. The mixed rulings under CIPA reveal a judiciary still grappling with how to apply legacy statutes to modern technologies, while the VPPA decisions reflect growing disagreement over the statute’s scope in a data-driven economy. At the same time, SB 690 signals a legislative push to restore predictability and limit liability for businesses engaging in routine online practices. For companies operating in the digital space, this moment represents both risk and opportunity: a chance to reassess compliance strategies as privacy law realigns, and a need to stay alert as courts and legislatures continue to reshape the rules of engagement.

    If your company needs assistance with any privacy issues, Coblentz Data Privacy & Cybersecurity attorneys can help. Please contact Scott Hall at shall@coblentzlaw.com for further information or assistance.

  • U.S. State Privacy Laws: 2025 Status Update

    By Saachi S. Gorinstein and Scott C. Hall

    By the end of 2025, comprehensive privacy laws will have taken effect in eight new states: Delaware, Iowa, Maryland, Minnesota, Nebraska, New Hampshire, New Jersey, and Tennessee. With twenty states expected to have such laws effective by year’s end and more than a dozen additional states actively considering similar legislation for 2026 and beyond, businesses must continue to navigate an increasingly complex and fragmented regulatory landscape. While all state privacy laws share common core principles such as transparency in notice, data minimization, and opt-out rights for certain data usage, other aspects such as applicability thresholds, consumer rights, and enforcement mechanisms vary significantly across jurisdictions, all in the absence of a unifying federal privacy framework.

    General Principles of State Privacy Laws

    Certain baseline privacy principles remain consistent across all states. Businesses operating in any jurisdiction should provide clear notices to consumers about how their data is collected, used, and disclosed, and should limit the use of data collected to specific, disclosed purposes. Businesses should ensure they are collecting only the data necessary for legitimate business purposes and using it solely for the purposes stated in clear and conspicuous privacy notices.

    Consumer Rights

    Most states grant consumers a core set of rights that typically include the ability to access, delete, and correct personal data; request copies of their data (data portability); and opt out of targeted advertising, the sale of personal data, and certain types of profiling. However, there are notable exceptions. Iowa’s law does not provide consumers with the right to correct inaccurate data or to opt out of processing for targeted advertising and profiling, limiting individual control compared to other states. In contrast, Minnesota extends consumer protections by allowing individuals to understand the basis of profiling decisions, access the data used, and pursue alternative outcomes. Minnesota also grants a transparency right (similar to Oregon’s and Delaware’s) allowing consumers to request a list of third parties that have received their data. Maryland takes a more limited approach, allowing consumers to request a list of categories of third parties to whom their data has been disclosed.

    Opt-In Preferences and Data Protection Impact Assessments

    All state privacy laws require businesses to honor opt-out requests, and some require respect for universal opt-out preference signals through mechanisms such as Global Privacy Control (GPC), which allow consumers to communicate their preferences regarding the sale of personal data and targeted advertising across all websites without needing to opt out individually. Amidst enforcement attention on this topic from California regulators, new laws in Delaware, Nebraska, New Hampshire, and New Jersey require recognition of such signals, with Maryland and Minnesota set to align by the end of the year.
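    Mechanically, a universal opt-out signal like GPC reaches a website as a simple HTTP request header (Sec-GPC: 1, per the GPC specification) or as a browser property (navigator.globalPrivacyControl). A minimal server-side sketch of honoring the signal might look like the following; the handler and flag names are hypothetical, and this is an illustration of the mechanism rather than legal compliance advice:

```javascript
// Minimal sketch of honoring a Global Privacy Control signal server-side.
// Per the GPC specification, a participating browser sends "Sec-GPC: 1".
// Function and flag names below are hypothetical.

// Returns true when the request carries a valid GPC opt-out signal.
function hasGpcSignal(headers) {
  // HTTP header names are case-insensitive; check common casings.
  const value = headers['sec-gpc'] ?? headers['Sec-GPC'];
  return value === '1';
}

// Decide whether to suppress sale/sharing of data for this request.
function applyPrivacyPreferences(request) {
  const optedOut = hasGpcSignal(request.headers);
  return {
    allowTargetedAdvertising: !optedOut,
    allowDataSale: !optedOut,
  };
}

const prefs = applyPrivacyPreferences({ headers: { 'sec-gpc': '1' } });
console.log(prefs.allowDataSale); // false: the signal is treated as an opt-out
```

    Because the signal arrives with every request, a site that recognizes it can apply the opt-out site-wide without requiring the consumer to click through individual preference menus, which is precisely what the state laws described above contemplate.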

    Many new state laws also require businesses to conduct data protection impact assessments (“DPIAs”) and/or internal or external audits when engaging in “high-risk” processing. This typically includes activities such as selling or sharing data for targeted advertising, profiling, or processing sensitive personal information.

    Sensitive Information

    All state privacy laws, including those taking effect in 2025, impose heightened restrictions on the collection and processing of sensitive information, and several expand what qualifies as “sensitive.” New categories include national origin (Delaware, Maryland, New Jersey), transgender or non-binary status (Delaware, Maryland, New Jersey), biometric data (Maryland, Tennessee), and certain financial account information (New Jersey). Maryland’s law is particularly stringent, with a broad definition of “consumer health data” that includes information related to gender-affirming treatment and reproductive or sexual health care, and it prohibits processing or sharing sensitive information, even with consent, unless strictly necessary for a consumer-requested service. Additionally, new state laws in Delaware, Maryland, Nebraska, New Hampshire, New Jersey, and Tennessee follow several already enacted state laws in requiring businesses to conduct DPIAs when processing sensitive data or engaging in other high-risk activities.

    Applicability Thresholds of State Privacy Laws

    Determining which state privacy laws apply to your business requires careful analysis. While California, Tennessee, and Utah use revenue-based thresholds (e.g., $25 million) either alone or in combination with other factors, most states rely on volume-based criteria, typically applying to businesses that process the personal data of 100,000+ residents or derive a certain portion of revenue from selling data.

    Several states have lower or broader thresholds:

    • Montana: Applies to businesses collecting personal information of 50,000 consumers, or 25,000 if 25%+ of revenue comes from data sales.
    • Maryland, New Hampshire, Delaware, Rhode Island (2026): Thresholds begin at 35,000 residents, with Delaware and Rhode Island also using a 20% revenue qualifier.
    • Texas and Nebraska: Among the broadest, apply to nearly any business that is not a “small business” under SBA definitions, with no numerical data thresholds.
    • Florida: Applies only to large for-profit companies with $1 billion+ in global revenue and certain tech-related operations.

    Adding to the complexity, California uniquely includes employee, contractor, job applicant, and business-to-business transaction data under its CPRA, while most other states limit “consumer” to individuals acting in a personal or household context.

    As a result, businesses must be aware of their data collection and processing activities in each state with a privacy law, and must analyze those activities against the requirements of each applicable state law.

    Enforcement of State Privacy Laws

    Like most state privacy laws, the 2025 statutes do not authorize any private rights of action (California remains the exception for certain data breaches involving sensitive personal information). Enforcement authority generally lies with state Attorneys General (and, in California, the newly created California Privacy Protection Agency), who are expected to take a more active role in investigating compliance and responding to consumer complaints, especially involving sensitive personal data. Most of the new laws also include cure periods, giving businesses an opportunity to correct violations before enforcement proceeds. Notably, New Jersey’s law grants rulemaking authority to the Director of the Division of Consumer Affairs, signaling that additional implementing regulations may follow, similar to frameworks in California and Colorado. A unique provision in Tennessee’s law introduces an affirmative defense to enforcement actions, the first of its kind among U.S. privacy statutes. Businesses may invoke this defense by demonstrating that they maintain a written privacy program that “reasonably conforms” with the National Institute of Standards and Technology (NIST) privacy framework or a comparable standard. This incentivizes the adoption of widely recognized best practices and supports a more proactive approach to privacy compliance.

    Takeaways for Businesses

    With twenty comprehensive privacy laws expected to be effective by the end of 2025 and many more under consideration, privacy compliance is a national business imperative. Although discussions around a federal privacy law continue, no such law has yet materialized. As in the past, companies cannot rely on potential federal intervention to alleviate the burden of multi-jurisdictional compliance.

    It is essential for all businesses to consistently map their data collection, use, and disclosure; update privacy policies and notices; implement mechanisms for handling consumer rights requests; honor opt-out and limitation requests; and continue to monitor evolving requirements while implementing scalable, principle-based privacy programs that can adapt to a shifting—and ever-increasing—patchwork of obligations.

    See the U.S. State Privacy Laws – Applicability Thresholds chart linked here for more details.

    If your company needs assistance with any privacy issues, Coblentz Data Privacy & Cybersecurity attorneys can help. Please contact Scott Hall at shall@coblentzlaw.com for further information or assistance.


  • Effective September 1, 2025: Texas Expands Telemarketing Law to Cover Text Messages

    By Scott Hall

    Beginning September 1, 2025, Texas will broaden the scope of its telemarketing law (Chapter 302 of the Business & Commerce Code) to explicitly include text message marketing within the definition of “telephone solicitation.” Companies that use SMS for promotional purposes to message consumers in Texas should assess whether they are now subject to new registration, bonding, and compliance obligations.

    Key Requirements

    Companies engaged in sales-oriented SMS outreach to Texas consumers must:

    • Register with the Texas Secretary of State
    • Submit a $200 filing fee
    • Post a $10,000 security (via a bond, irrevocable letter of credit, or certificate of deposit)
    • File quarterly addenda listing all salespersons engaged in solicitations
    • Comply with disclosure and recordkeeping mandates

    These requirements have historically applied to voice-based telemarketing, but the amendment clarifies their application to modern communication platforms, including SMS.

    Exemptions

    A number of exemptions apply under Subchapter B of Chapter 302, most notably:

    • Outreach to current or former customers: No registration is required if messages are limited to prior customers and the business has operated under the same name for at least two years.
    • Educational Institutions and Nonprofits
    • Publicly Traded Companies and Financial Institutions regulated at the state or federal level
    • Sellers of food products, newspapers, periodicals, or cable subscriptions
    • Retailers with physical locations that have operated for two years under the same name, where a majority of business occurs at those locations rather than online
    • Isolated solicitations that are not part of a recurring pattern

    There is not a lot of guidance on the scope of these exemptions, and they may depend on specific facts and circumstances that should be discussed with legal counsel. Also note that the law only exempts sellers (not third-party platforms) unless the provider is contracting predominantly with exempt businesses and meets other criteria.

    Security Requirement

    For those subject to the law, a $10,000 security deposit must accompany the registration. This can be satisfied by:

    • A surety bond from a licensed company
    • An irrevocable letter of credit from a federally insured financial institution
    • A certificate of deposit with restricted withdrawal rights

    The purpose is to create a recovery mechanism for consumers harmed by a seller’s insolvency or contractual breach.

    Broader Compliance Considerations

    While Texas provides a customer-based exemption, companies should also keep in mind other states such as Florida, Maryland, and Oklahoma that have strict SMS marketing laws without such exemptions, as well as federal law (TCPA), which still requires prior express written consent for most automated marketing texts. Companies relying on marketing platforms like Klaviyo or Attentive should also review their vendor contracts to ensure data is being used only on the company’s behalf and in compliance with these rules.

    Recommendations

    • Assess whether your SMS campaigns involve Texas consumers (or consumers in other states with registration or other special compliance requirements).
    • Review eligibility for exemptions, especially under the “former or current customer” carve-out.
    • Confirm you are properly capturing and storing user consent, ideally in a verifiable format.
    • Evaluate whether your platform provider meets service-provider-only criteria, or whether any contract amendments are needed.
    • Prepare registration materials and financial security, if required.

    Please reach out to the Coblentz team for further information or assistance.

  • Navigating the Shifting AI Landscape: What U.S. Businesses Need to Know in 2025

    By Mari Clifford and Scott Hall

    Artificial intelligence is no longer a wild west frontier technology—it’s a regulated one. As AI systems become central to how companies operate, communicate, and compete, legal oversight is catching up. In 2025, AI governance is defined by divergence: a harmonized, risk-based regime in the EU; a fragmented, reactive framework in the U.S.; and rapid regulatory expansion at the state and global levels. Businesses deploying or developing AI must now navigate a multi-jurisdictional patchwork of laws that carry real compliance, litigation, and reputational consequences.

    This article outlines the key regulatory developments, contrasts the EU and U.S. approaches, and offers concrete recommendations for U.S. companies operating AI systems.

    EU AI Act: Global Reach with Teeth

    The EU AI Act, which entered into force in August 2024, is the world’s first comprehensive, binding legal framework for AI. It classifies systems by risk level—unacceptable, high, limited, and minimal—and imposes extensive obligations on high-risk and general-purpose AI (GPAI) models. High-risk AI systems must undergo pre-market conformity assessments, maintain technical documentation, and register in a public EU database. GPAI models face additional transparency, copyright, and cybersecurity obligations, particularly if they exceed scale thresholds (e.g., >10,000 EU business users).

    The Act’s extraterritorial reach means U.S. companies offering AI products or services in the EU—or whose outputs affect EU residents—must comply. Notably, failure to implement the EU’s “voluntary” GPAI Code of Practice could shift the burden of proof in enforcement actions.

    Timeline to Watch: The law becomes enforceable starting August 2026, with GPAI obligations phasing in from 2025.

    The U.S. Approach: Fragmentation, Tension, and State-Level Acceleration

    Executive Orders & Federal Initiatives

    U.S. federal law remains sectoral and piecemeal. President Biden’s 2023 Executive Order on “Safe, Secure, and Trustworthy AI” established guiding principles, including fairness, transparency, and privacy protections, and tasked agencies with issuing AI-specific standards. However, this was rescinded in 2025 by the Trump administration’s new EO prioritizing deregulation and “American leadership in AI,” creating a sharp policy pivot and regulatory uncertainty. In parallel, the administration also unveiled a draft AI Action Plan, emphasizing voluntary industry standards and innovation incentives over binding rules. While still in flux, this initiative further underscores the unsettled political climate around federal AI policy.

    While bills like the AI Accountability Act and the SAFE Innovation Framework have been proposed, no comprehensive federal AI law has passed. Instead, federal agencies like the FTC, EEOC, and CFPB continue to regulate AI through existing consumer protection and civil rights laws—often through enforcement actions rather than formal rulemaking.

    State Spotlight: Colorado, California, and Others Lead the Way

    Absent a comprehensive federal law, states have moved decisively. The list below highlights a representative sample of enacted state AI statutes as of July 2025; dozens of additional bills are pending and advancing every legislative cycle:

    Arizona

    • HB 2175 – requires health-insurer medical directors to personally review any claim denial or prior-authorization decision that relied on AI, exercising independent medical judgment (effective June 30, 2026).

    California

    • AB 1008 – expands the CCPA definition of “personal information” to cover data handled or output by AI.
    • AB 1836 – bars commercial use of digital replicas of deceased performers without estate consent.
    • AB 2013 – requires AI developers to post detailed training-data documentation.
    • AB 2885 – creates a uniform statutory definition of “artificial intelligence” (effective January 1, 2025).
    • AB 3030 – mandates clear gen-AI disclaimers in patient communications from health-care entities (effective January 1, 2025).
    • SB 1001 “BOT” Act – online bots used to incentivize a purchase or influence a vote in an election must disclose that they are bots.
    • SB 942 AI Transparency Act – platforms with >1M monthly users must label AI-generated content and provide a public detection tool.

    Colorado

    • SB 24-205 Colorado AI Act – first comprehensive U.S. framework for “high-risk” AI; imposes reasonable-care, impact-assessment, and notice duties on developers and deployers (effective 2026).
    • SB 21-169 – bans unfair discrimination by insurers through algorithms or predictive models.
    • HB 23-1147 – requires deep-fake disclaimers in election communications.
    • Colorado Privacy Act – consumers may opt out of AI “profiling” that produces legal or similarly significant effects; DPIAs required for such processing.

    New York

    • New York City Local Law 144 – employers using automated employment-decision tools must obtain an annual independent bias audit and post a summary.

    Tennessee

    • HB 1181 Tennessee Information Protection Act (2024) – statewide privacy law; impact assessments required for AI profiling posing significant risks.
    • “ELVIS Act” (2024) – makes voice mimicry by AI without permission a Class A misdemeanor and grants a civil cause of action.

    Texas

    • Texas Data Privacy and Security Act – lets Texans opt out of AI profiling that has significant effects and compels risk assessments for such uses.

    Utah

    • SB 149 “AI Policy Act” (amended by SB 226) – requires disclosure when consumers interact with generative-AI chat or voice systems and sets professional-licensing guardrails.
    • HB 452 “Artificial Intelligence Applications Relating to Mental Health” – regulates the use of mental health chatbots that employ artificial intelligence (AI) technology.

    Expect additional Colorado-style comprehensive AI frameworks to surface in 2025-26 as states continue to fill the federal gap.

    Global Developments & Cross-Border Tensions

    Beyond the EU and U.S., countries like Brazil, China, Canada, and the U.K. are advancing AI governance through a mix of regulation and voluntary standards. Notably:

    • China mandates registration and labeling of AI-generated content.
    • Brazil is poised to pass a GDPR- and EU AI Act-style law.
    • The U.K. continues to favor a principles-based, regulator-led approach but may pivot toward binding regulation.

    U.S.-EU divergence has triggered geopolitical friction. The EU’s upcoming GPAI Code of Practice is a flashpoint, with U.S. officials warning it could disproportionately burden American firms. Meanwhile, the U.S. may reconsider participation in multilateral frameworks like the Council of Europe’s AI Treaty.

    A Compliance Playbook for 2025

    AI legal exposure increasingly mirrors privacy law: patchwork rules, aggressive enforcement, and high reputational stakes. To mitigate risk, companies should:

    • Inventory AI Systems: Identify all AI tools in use—especially those making or influencing decisions in high-risk sectors (HR, healthcare, finance, etc.).
    • Conduct Risk Assessments: For GPAI or high-risk tools, assess training data, bias exposure, and explainability. Use frameworks like NIST’s AI RMF or the EU’s conformity checklist.
    • Build Cross-Functional Governance: Legal, compliance, technical, and product teams must coordinate. Assign AI risk ownership and create change triggers for reclassification (e.g., changes in use or scale).
    • Monitor State and Federal Law Developments.
    • Plan for EU Market Entry: Determine whether EU-facing AI systems require local representation, registration, or conformity assessment under the AI Act.
    • Audit Communications: Avoid AI-washing. Public statements about capabilities, safety, or human oversight must match internal documentation and performance.

    The message from global regulators is clear: innovation is welcome, but governance is non-negotiable. Whether operating domestically or globally, businesses must prepare for AI compliance to become a core legal discipline, akin to privacy or cybersecurity.

    For legal teams and compliance leaders, now is the time to move from principles to programs—and to see governance as a competitive advantage, not just a regulatory burden.

    If your company needs assistance with any privacy issues, Coblentz Data Privacy & Cybersecurity attorneys can help. Please contact Scott Hall at shall@coblentzlaw.com for further information or assistance.

  • 2025 Mid-Year Privacy Report

    A Comprehensive Look at New Developments in Data Privacy Laws

    By Scott Hall, Leeza Arbatman, Kat Gianelli, Saachi Gorinstein, and Hunter Moss

    Download a PDF version of this report here.

    In 2025, privacy and AI regulation have moved from the sidelines to the center of business risk and strategy. U.S. states are rapidly enacting a patchwork of privacy laws, with new AI laws emerging and expected to increase. Meanwhile, regulators are tightening oversight of automated decision making, children’s data, health metrics, and cross-border data transfers. And litigation over online data collection by companies continues to expand under various statutes, including wiretapping and pen register claims under the California Invasion of Privacy Act (CIPA), and claims under the Video Privacy Protection Act (VPPA), resulting in diverging court rulings that send mixed signals to companies regarding privacy compliance.

    Our Mid-Year Privacy Report examines the most significant developments shaping the privacy and AI landscape in 2025 and highlights practical steps businesses can take to navigate an increasingly complex, multi-jurisdictional legal landscape.

    You can download the full report here. If your company needs assistance with any privacy issues, Coblentz Data Privacy & Cybersecurity attorneys can help. Please contact Scott Hall at shall@coblentzlaw.com for further information or assistance.

  • New California Regulations Regarding AI Use in Hiring and Employment

    By Fred Alvarez and Hannah Withers 

    If your company is using AI or other automated decision-making systems to make employment or hiring decisions, a new set of California regulations going into effect on October 1, 2025 will require your attention.

    Issued by the California Civil Rights Council (CRC), these regulations define what types of AI systems are being regulated, what types of employment decisions are included, and what employers need to do to prevent discrimination claims resulting from the use of these AI tools. If you (or your agents) are using AI in any manner related to employment decisions, we suggest familiarizing yourself with the key aspects of the regulations, which are summarized below.

    The purpose of the regulations is to respond to concerns about the potentially discriminatory impact of AI tools in employment and to make clear that California’s anti-discrimination laws apply even when employers use AI tools to make decisions. An employer cannot escape liability for a discriminatory hiring or employment decision on the ground that the decision was made or facilitated by an AI tool; companies remain responsible for discriminatory outcomes even when those outcomes are produced by automated systems.

    Key Sections of the Regulations to be Familiar With:

    • An “Automated-decision system” is defined as “a computational process that makes a decision or facilitates human decision making regarding an employment benefit… [it] may be derived from and/or use artificial intelligence, machine-learning, algorithms, statistics, and/or other data processing techniques.”
    • The covered employment and hiring practices are quite broad and include decisions related to recruitment, applicant screening, background checks, hiring, promotion, transfer, pay, benefit eligibility, leave eligibility, employee placement, medical and psychological examination, training program selection, or any condition or privilege of employment.
    • An “agent” includes “any person acting on behalf of an employer, directly or indirectly, to exercise a function traditionally exercised by the employer or any other FEHA-regulated activity, which may include applicant recruitment, applicant screening, hiring, promotion, or decisions regarding pay, benefits, or leave, including when such activities and decisions are conducted in whole or in part through the use of an automated decision system. An agent of an employer is also an ‘employer’ for purposes of the Act.”
    • Under the rules, it is unlawful to use AI systems that result in employment discrimination based on protected characteristics (e.g., religion, race, gender, disability, national origin, age, etc.). The rules also specify that the use of technology that has an adverse impact is “unlawful unless job-related and consistent with business necessity and the technology includes a mechanism for requesting an accommodation.”
    • There is a four-year record retention requirement for employment records created or received (including applications, personnel records, membership records, employment referral records, selection criteria, and automated-decision system data) dealing with a covered employment practice or employment benefit.

    Here is What We Recommend Considering and Preparing For:

    • Determine if you are using any AI systems that should be evaluated for possible discriminatory impact. Consider whether you use any of the following systems with employees or applicants:
      • Computer-based assessments or tests, such as questions, puzzles, games, or other challenges.
      • Automated systems to direct job advertisements or other recruiting materials to targeted groups.
      • Automated systems to screen resumes for particular terms or patterns.
      • Automated systems to analyze facial expression, word choice, and/or voice in online interviews.
      • Automated systems to analyze employee or applicant data acquired from third parties.
    • If you use an automated decision-making system, examine what data is being collected, how decisions are being made, and any possible discriminatory impact that might result. Consider how applicants or employees in each protected group might be affected by the way the automated systems are set up and implemented. (For example, online application technology that limits, screens out, ranks, or prioritizes applicants based on their schedule may discriminate against applicants based on their religious creed, disability, or medical condition. Such a practice having an adverse impact is unlawful unless job-related and consistent with business necessity and the technology includes a mechanism for the applicant to request an accommodation.)
    • Investigate how the automated decision-making systems you are using check for bias with respect to protected classifications. Efforts to check for bias can be used as a defense to claims of discrimination. To check for bias, you should investigate whether the AI tool:
      • Monitors for unintended discriminatory effects during the ongoing use of the software.
      • Conducts live bias tracking functionality as decisions are being made.
      • Supports compliance with anti-discrimination requirements in states that require annual independent bias audits of AI tools in hiring.
      • Supports customer compliance with applicant notice requirements in states that have laws relating to the use of AI tools in hiring.
      • Maintains records of hiring decisions and AI processes conducted.
      • Has materials or a white paper regarding employment and privacy law compliance.
      • Has measures in place to prevent discriminatory outcomes in its algorithm design and outputs.
    • If you work with third parties or other agents for hiring or employment decisions, talk to them about what AI tools they are using and learn more about what information those tools gather and how they make decisions. You are responsible for how your agent is using AI tools and you should communicate with agents specifically about their compliance with these California regulations.
    • Review your accommodations policies to make sure that any automated systems being used are not operating in a way that would miss the need for an accommodation.
    • Review your record retention practices for employment and hiring decisions to make sure that you keep records of hiring or employment decisions made using AI systems for four years.
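
    The bias checks described above often begin with a simple adverse-impact comparison of selection rates across groups. The sketch below is purely illustrative and is not a method prescribed by the CRC regulations; the group names, counts, and the 0.8 screening threshold (drawn from the EEOC's informal "four-fifths rule") are assumptions for demonstration, and real audits require careful statistical and legal review.

```python
# Illustrative adverse-impact (selection-rate) check across applicant groups.
# All group names and counts below are hypothetical.

def selection_rates(outcomes):
    """outcomes: {group: (selected, total_applicants)} -> {group: rate}"""
    return {g: sel / total for g, (sel, total) in outcomes.items()}

def impact_ratios(outcomes):
    """Ratio of each group's selection rate to the highest group's rate.

    Ratios below 0.8 are commonly flagged for further review under the
    EEOC's informal "four-fifths rule" -- a screening heuristic, not a
    legal conclusion.
    """
    rates = selection_rates(outcomes)
    top = max(rates.values())
    return {g: rate / top for g, rate in rates.items()}

outcomes = {
    "group_a": (50, 100),  # 50% of applicants selected
    "group_b": (30, 100),  # 30% of applicants selected
}
ratios = impact_ratios(outcomes)
flagged = [g for g, r in ratios.items() if r < 0.8]
print(ratios)   # group_a: 1.0, group_b: 0.6
print(flagged)  # ['group_b']
```

    A ratio below 0.8 does not itself establish unlawful discrimination; it is simply a common threshold for deciding which outcomes warrant closer statistical and legal scrutiny.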

     

    The Coblentz Employment team is available to answer any questions you may have about the impact of these regulations and how to prepare logistically ahead of their effective date on October 1, 2025. For additional information, you may also refer to the CRC’s press release.

  • Monkey Business No More: Ninth Circuit Rules NFTs Are Protected by Trademark Law, Confirms the Limits of Expressive Speech Protection, but Overturns Judgment of Likely Confusion

    By Sabrina Larson and Kat Gianelli

    Key Takeaways

    • The Ninth Circuit confirmed that non-fungible tokens (NFTs) are ‘goods’ under the Lanham Act and can be protected by trademark law.
    • Even if a defendant uses a trademark owner’s mark with the goal of commentary and criticism, the fair use doctrine will not protect that use where the defendant uses the mark to designate its own goods.
    • The First Amendment does not protect a defendant’s unlicensed use of a trademark when the use of the mark is at least partially acting as a source identifier, even if the defendant intended such use as satire and expressive speech.
    • The decision is a win for brand owners promoting digital assets.

    The Ninth Circuit ruled in Yuga Labs, Inc. v. Ryder Ripps on July 23, 2025 that non-fungible tokens (NFTs) are eligible for trademark protection under the Lanham Act, a significant development for creators of digital tokens. The Court also confirmed the limits of protection for satirical, expressive speech protection, where the defendant nonetheless uses the plaintiff’s trademarks as source identifiers.

    The Court, however, overturned the lower court’s $8.8 million judgment for Yuga, finding that Yuga had not proven at summary judgment that the defendants’ tokens are likely to confuse NFT buyers.

    Background

    Yuga Labs, who created the NFT “Bored Ape Yacht Club,” sued artists Ryder Ripps and Jeremy Cahen for creating a nearly identical NFT titled “Ryder Ripps Bored Ape Yacht Club,” which was tied to the same ape images as Yuga’s NFTs. Yuga alleged trademark infringement and unlawful cybersquatting.

    Examples of Yuga’s Bored Ape NFTs[1]

    The defendants claimed their project was a satirical protest, and countersued alleging violation of the Digital Millennium Copyright Act (DMCA) and sought declaratory relief that Yuga had no copyright protections over Bored Apes.

    The district court granted summary judgment for Yuga on its trademark infringement claim and anti-cybersquatting claim, and also granted summary judgment for Yuga with regards to the defendants’ DMCA counterclaim, resulting in an $8.8 million judgment for Yuga, which the artists appealed.

    Ninth Circuit Analysis and Decision

    One of the defendants’ defenses was to argue that NFTs are not ‘goods’ under the Lanham Act, but the Ninth Circuit disagreed, holding that NFTs are protectable as ‘goods’ under the Lanham Act and affirming that Yuga’s “Bored Ape Yacht Club” trademarks are enforceable despite their digital nature. This conclusion aligns with the U.S. Patent & Trademark Office, which has also concluded that NFTs are ‘goods.’ The Court reasoned that NFTs are “more than a digital deed to or authentication of artwork” because they “also function as membership passes, providing ‘Ape holders’ with exclusive access to online and offline social clubs, branded merchandise, interactive digital spaces, and celebrity events.” The Court concluded, “Yuga’s NFTs are not merely monkey business and can be trademarked.”

    The defendants also argued that they made nominative fair use of the Yuga marks. A common example of fair use is where one “‘deliberately uses another’s trademark or trade dress for the purposes of comparison, criticism, or point of reference.’”[2] The Court disagreed because the defendants used the Yuga marks not merely to reference Yuga’s NFTs, but as trademarks – that is, to create, promote, and sell their own NFTs. In that case, “[i]t does not matter that Defendants’ ultimate goal may have been criticism and commentary.”[3]

    The Court also rejected the defendants’ argument under the First Amendment that their NFTs were part of an expressive art project and that the “expressive nature” of their use of the Yuga marks entitled them to an exception to trademark infringement for expressive speech. Again, the Court disagreed because this exception does not apply where the defendant uses the marks as source identifiers. “[W]hen a use of the plaintiff’s mark is ‘at least in part for source identification,’ the First Amendment exception to trademark enforcement is foreclosed.”[4]

    Ultimately, the Court reversed the district court’s grant of summary judgment on trademark infringement and cybersquatting claims against the defendants, finding that the likelihood of consumer confusion, which is central to both claims, presents factual disputes that must be resolved at trial. Although the defendants’ satirical use did not establish nominative fair use or protect the use of the marks under the First Amendment, the Court noted that this purpose created “significant questions about whether the likelihood-of-consumer-confusion requirement was satisfied.”

    The panel affirmed the dismissal of the defendants’ counterclaims under the DMCA and for declaratory relief, concluding there was no evidence of knowing misrepresentation or an active copyright dispute.

    Conclusion and Takeaways

    The Ninth Circuit emphasized that “when we apply ‘established legal rules to the totally new problems’ of emerging technologies, our task is ‘not to embarrass the future.’”[5] This decision marks a significant step in adapting traditional intellectual property law to the evolving digital economy. It is a win for brand owners operating in the digital economy, opening the door for them to bring claims against infringing digital goods as they traditionally have against counterfeit products.

    While the Court remanded for a determination of whether the defendants infringed Yuga’s marks, it clarified that NFTs are not exempt from the protections and tenets of trademark law in the Ninth Circuit – NFTs are ‘goods’ under trademark law, and trademark infringement analysis must be applied when those marks are used at least in part as source identifiers by the defendant even with the intention of criticism and satire.

     

    [1] Yuga Labs Inc v. Ryder Ripps, 9th U.S. Circuit Court of Appeals, No. 24-879, Opinion (“Op.”) at 10.

    [2] Op. at 34, quoting E.S.S. Ent. 2000, Inc. v. Rock Star Videos, Inc., 547 F.3d 1095, 1098 (9th Cir. 2008).

    [3] Op. at 36. See Jack Daniel’s Props., Inc. v. VIP Prods. LLC, 599 U.S. 140, 148 (2023) (explaining a defendant does not get the benefit of fair use “even if engaging in parody, criticism, or commentary – when using the similar-looking mark ‘as a designation of source for the [defendant’s] own goods’” (alteration in original) (citation omitted)). See our analysis of the Jack Daniel’s decision here.

    [4] Op. at 41, quoting Jack Daniel’s, 599 U.S. at 156. See our analysis of the Jack Daniel’s decision here.

    [5] Op. at 6, quoting TikTok Inc. v. Garland, 604 U.S. –, 145 S. Ct. 57, 62 (2025) (cleaned up and internal quotations omitted).


  • 2025 CEQA Reforms: What Developers Need to Know

    By Miles Imwalle, Megan Jennings, Elena Neigher, Alyssa Netto, and Craig Spencer

    Governor Gavin Newsom signed two budget trailer bills on June 30, 2025, enacting the most substantial reforms to the California Environmental Quality Act (CEQA) in over five decades. To help you navigate these important changes, we have prepared a three-part summary of budget trailer bills Assembly Bill 130 and Senate Bill 131:

    New CEQA Exemption for Infill Housing Development Projects: What it Means for Developers 

    AB 130 and SB 131 were adopted on the last day of the 2024-25 fiscal year after the Governor made it clear he would not approve the budget without meaningful CEQA reforms. While not the sweeping “rollback” of environmental review that some sources have claimed, the legislation will undoubtedly smooth the path to approval for many infill housing projects. In this post, we focus on the criteria for using the new exemption for housing development projects in AB 130. Read more here.

    “Near-Miss” CEQA Streamlining: New Option to Reduce Scope of Review for Housing Development Projects 

    SB 131 includes a new CEQA process that limits the environmental review required for “near-miss” housing development projects—those projects that meet all criteria for a CEQA exemption, except for a single disqualifying condition. Specifically, the environmental review in these instances is restricted to analyzing impacts stemming exclusively from the single condition that disqualifies the housing project from receiving a statutory or categorical exemption. Read more here.

    CEQA Transportation Mitigation Fees and Other Key Reforms in AB 130 and SB 131 

    In our third update on the important changes in budget trailer bills AB 130 and SB 131, we cover changes to the mitigation options for vehicle miles traveled (VMT), additional focused CEQA exemptions, and other amendments to land use processes. Read more here.

    The Coblentz Real Estate Team has extensive experience with the state’s latest land use laws and can help to navigate their complexities and opportunities. Please contact us for additional information and any questions related to the impact of this legislation on land use and real estate development.

  • California Releases Final Employee Notice on Victim Leave Rights

    By Fred W. Alvarez, Hannah Jones, Dan Bruggebrew, Allison Moser, Paige Pulley, Hannah Withers, and Stacey Zartler

    The California Civil Rights Department (CRD) just released its long-awaited model employee notice triggering a new compliance obligation for all California employers regarding the rights of employees who are victims of qualifying acts of violence. This is a good time to review your policies and onboarding materials to ensure you’re providing this notice to employees now and going forward.

    What’s New?

    Effective immediately, employers must provide notice to employees about their rights to take protected leave and request workplace accommodations if they or their family members are victims of certain crimes. This requirement is tied to Assembly Bill 2499 (codified as Government Code §12945.8), which expanded existing protections and made notice mandatory now that the CRD model notice is available. The model notice is located here: CRD Model Notice

    Who Needs to Comply?

    All California employers, regardless of size, are required to provide this notice.

    If you have 25 or more employees, additional protections apply to employees whose family members are victims of a qualifying act of violence. “Family member” is broadly defined to cover a child, parent, grandparent, grandchild, sibling, spouse, domestic partner, or “designated person,” who can be someone related by blood, such as an aunt or uncle, or someone equivalent to a family member, such as a best friend. Employers may limit an employee to one “designated person” per 12-month period.

    When and How to Provide the Notice

    The new law requires you to give this notice in four scenarios:

    • At hire – Include it in your onboarding packet effective immediately.
    • Annually – Distribute it to all employees once per year.
    • Upon request – Provide it to any employee who asks.
    • When notified – Provide it when an employee tells you that they or a family member is a victim of a qualifying crime.

    You can use the CRD’s model notice or create your own version, as long as it’s substantially similar in both content and clarity. If 10% or more of your workforce at a location speaks a language other than English, you’ll need to provide the notice in that language. The CRD has made translated versions available on its website.

    What the Notice Covers

    The notice explains an employee’s rights, including:

    • Job-protected leave for medical care, counseling, safety planning, or legal help related to the incident.
    • Workplace safety accommodations, like schedule changes, reassignment, or security assistance—subject to an interactive process and undue hardship standard.
    • Protection from retaliation for using these rights.
    • Confidentiality of any information shared regarding the incident or related requests.

    It also reminds employees they may be eligible for wage replacement under State Disability Insurance or Paid Family Leave, and may qualify for bereavement leave and other forms of crime victim leave under separate Labor Code provisions and applicable law.

    What You Should Do Now

    Here’s a practical checklist to help you meet your new obligations:

    • Download and review the CRD’s model notice.
    • Add the notice to your onboarding documents and distribute it to current employees annually.
    • Train HR and managers to respond appropriately when employees raise concerns or request time off or accommodations under this law.
    • Be prepared to provide the notice to current employees if they make a request.

    Want More Details? Read the CRD’s FAQ

    The CRD has also published an FAQ document that answers common employer questions about the law and the notice requirement. You can view it here: CRD FAQs

    Here are a few highlights:

    • What is a “qualifying act of violence”?
      It’s broader than domestic violence or sexual assault—it includes any crime that causes physical or mental injury, or the death of a family member.
    • Can we create our own notice instead of using the CRD version?
      Yes, but it must be substantially similar in both content and clarity.
    • Do we have to provide this notice to existing employees immediately?
      While there isn’t a specific requirement that notice be provided to existing employees immediately, employers need to provide it annually, and we recommend rolling this out as soon as practical.
    • What happens if we don’t comply?
      Non-compliance can lead to enforcement action by the CRD, including penalties for failing to provide the notice or interfering with protected leave rights.

    If you’d like support reviewing your materials, preparing communications, or training your team, we’re here to help. Let us know if you’d like the notice translated into your preferred language(s), or if you’d like assistance adapting it into your onboarding materials.

  • Federal Reserve to Implement New ISO 20022 Funds Wiring System

    By Kyle J. Recker and Max Martinez

    The Federal Reserve is imminently shifting to a new funds wiring system (known as ISO 20022). If you have upcoming plans to transfer any amount of funds via wire transfer, confirm with your bank and anyone else handling your funds that they are prepared for the shift to ISO 20022 and can accommodate your wire on the planned date of transfer.

     

    On July 14, 2025, the Federal Reserve plans to implement a new funds wiring and messaging format, ISO 20022, to modernize both domestic and cross-border wire transfers. After three years of development and trials, the Federal Reserve will sunset its existing wiring system, the Fedwire Application Interface Manual (FAIM), which is currently used nationwide by banks, escrow services, and other funds exchange operations to facilitate wiring of funds from one party to another. “ISO” refers to the International Organization for Standardization, and the change to ISO 20022 will align the Federal Reserve wire transfer system with those used in other payments markets, including those of key U.S. trading partners. The ISO 20022 system allows for more detailed information to be included with a wire transfer, which is expected to improve efficiencies in related wire transfer processes and result in faster and more reliable payments. The upgrade should be welcome news to anyone regularly involved in closing transactions that involve the wiring of funds, as the existing FAIM system has been known to cause some consternation due to its lack of transparency and predictability (e.g., the anxiety-ridden waiting period from funds wiring to receipt by the escrow service for a same-day transaction closing).

    The ISO 20022 system underwent customer testing from March to June 2025 (in which ISO 20022 was actually used for certain planned wire transfers in commercial settings), while testing of the system’s online portal interface has been ongoing since March 2023. However, each FAIM user is responsible for developing its own preparedness and contingency plans in connection with the phase-out of FAIM and the implementation of ISO 20022, so there may be some variance among institutions in the smoothness and efficiency of the transition. If you have upcoming plans to transfer any amount of funds via wire transfer, particularly if a large sum, you should confirm with your bank and anyone else handling your funds as to whether they are prepared for the shift to the ISO 20022 system and can accommodate your wire on the planned date of transfer. You should also be prepared for the possibility of wires being delayed due to transitional complications. If possible, it may be prudent to wire funds in advance of any upcoming closings or be prepared to extend or delay a closing date for a few days.
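
    For readers curious what the richer ISO 20022 format looks like in practice, below is a heavily simplified, purely illustrative fragment of a customer credit transfer message (the pacs.008 message type). All party names, identifiers, and amounts are hypothetical, and the actual Fedwire ISO 20022 usage guidelines define which elements are required and how they must be populated:

```xml
<!-- Simplified, illustrative pacs.008 (customer credit transfer) fragment.
     All parties, identifiers, and amounts are hypothetical. -->
<Document xmlns="urn:iso:std:iso:20022:tech:xsd:pacs.008.001.08">
  <FIToFICstmrCdtTrf>
    <GrpHdr>
      <MsgId>EXAMPLE-20250714-0001</MsgId>
      <CreDtTm>2025-07-14T09:30:00</CreDtTm>
      <NbOfTxs>1</NbOfTxs>
    </GrpHdr>
    <CdtTrfTxInf>
      <IntrBkSttlmAmt Ccy="USD">250000.00</IntrBkSttlmAmt>
      <Dbtr><Nm>Example Buyer LLC</Nm></Dbtr>
      <Cdtr><Nm>Example Escrow Services Inc.</Nm></Cdtr>
      <RmtInf>
        <Ustrd>Closing funds, escrow no. 12345 (illustrative)</Ustrd>
      </RmtInf>
    </CdtTrfTxInf>
  </FIToFICstmrCdtTrf>
</Document>
```

    The remittance information block (RmtInf) illustrates the kind of structured, machine-readable payment detail that the legacy FAIM format could not carry, which is one reason the transition is expected to improve transparency around wire status and reconciliation.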

    As always, we encourage you to reach out to us with any questions on this topic or as may be needed in connection with any specific projects.

    Sources:

    https://www.frbservices.org/resources/financial-services/wires/iso-20022-implementation-center
    https://www.frbservices.org/news/communications/061825-fedwire-iso-go
    https://www.frbservices.org/resources/financial-services/wires/iso-20022-implementation-center/fedwire-iso-20022-testing-requirements-key-milestones
    https://www.frbservices.org/resources/financial-services/wires/faq/iso-20022
    https://www.jpmorgan.com/insights/payments/payments-optimization/iso-20022-migration