• California Finalizes CCPA Regulations on Automated Decision-Making Technology, Risk Assessments, and Cybersecurity Audits

    By Mari S. Clifford and Scott C. Hall

    In July 2025, the California Privacy Protection Agency (CPPA) adopted final regulations governing automated decision-making technology (ADMT), privacy risk assessments, and cybersecurity audits under the California Consumer Privacy Act (CCPA). The final vote by the CPPA Board took place on July 24, following over a year of drafting and public comment.

    The regulations now await approval by California’s Office of Administrative Law (OAL). If the rules are filed with the OAL by the August 2025 deadline, they may become operative as early as December 1, 2025; otherwise, the effective date will default to January 1, 2026. Businesses should not mistake this recalibration for retreat. The rules establish a practical but enforceable compliance regime—particularly for companies leveraging algorithmic tools, engaging in high-risk processing, or navigating overlapping state and global privacy frameworks.

    Recalibrating the Definition and Scope of ADMT

    The CPPA’s final rules significantly narrow the scope of ADMT obligations to cases where technology “replaces or substantially replaces human decision making,” removing explicit references to artificial intelligence and behavioral advertising use cases. This is a meaningful departure from the earlier, more expansive draft, which included tools that merely “facilitated” decisions or referenced artificial intelligence more broadly.

    Under the revised rules, businesses are only subject to ADMT obligations when the technology is used to make “significant decisions,” defined as those affecting financial services, employment, housing, education, or healthcare, or when they engage in certain types of profiling or train models for such use cases. Many previously covered scenarios, such as first-party advertising or public observation, have been removed entirely from the rule’s opt-out and notice requirements.

    Additionally, businesses no longer need to issue standalone “pre-use” notices. Instead, the revised rules allow them to integrate ADMT disclosures into existing notices at collection, easing administrative overhead while preserving transparency obligations.

    Narrowing Consumer Rights and Expanding Business Flexibility

    In line with its refined scope, the CPPA has pared back many of the consumer rights included in the original ADMT draft. Opt-out rights no longer apply to workplace or education profiling, public surveillance, or ADMT training activities. Instead, the rules focus on scenarios where ADMT is used to make determinations about core life opportunities—such as being hired, admitted to a school, or approved for a loan.

    For these remaining “significant decision” use cases, businesses must provide a mechanism for consumers to opt out, or, in some cases, provide an appeal process reviewed by a qualified human decisionmaker. The rules also introduce specific safeguards for biometric profiling and emotion-recognition systems, including accuracy evaluations and nondiscrimination audits.

    Importantly, the final version of the rules appears likely to retain access rights to ADMT outputs, logic summaries, and decision-making factors—but businesses will not be required to disclose trade secrets or details that could compromise fraud or safety defenses.

    Cybersecurity Audits: Scaled by Revenue, Governed by Independence

    The CPPA has also finalized a more risk-based and scalable cybersecurity audit framework. Under the final rules, businesses that (i) meet the data broker threshold or (ii) process personal information of 250,000 consumers (or sensitive data of 50,000 consumers) must conduct an annual cybersecurity audit starting between 2028 and 2030, depending on revenue tier.

    Audits must follow recognized professional standards and be certified by an executive responsible for cybersecurity. Auditors may be internal or external, but must be structurally independent. Key updates include:

    • Businesses are no longer required to justify omitted safeguards (e.g., zero-trust architecture) or assess controls deemed inapplicable.
    • Reports now require detailed explanations of any security gaps, plus a remediation plan, and must be retained for five years.
    • A certification of audit completion must be submitted annually to the CPPA, beginning April 1, 2028, for larger entities.
    • Internal auditors may now report directly to senior management, rather than the board, so long as they remain structurally independent from the cybersecurity function.

    This flexible structure is intended to support scalability across organizations while preserving the CPPA’s ability to scrutinize audit content and governance rigor.

    Risk Assessments: From Intrusive to Interoperable

    In a move praised by industry stakeholders, the CPPA has also walked back some of the more onerous elements of its proposed risk assessment requirements. Most notably:

    • Full submissions are no longer required. Instead, businesses must retain the assessment and file only a certification and brief summary of key facts with the CPPA starting in 2028.
    • Risk assessments are now required before a business (1) sells or shares personal information, (2) processes sensitive personal information, (3) uses ADMT for a significant decision concerning a consumer, (4) uses automated processing to infer attributes about an educational or job applicant, student, employee, or independent contractor, (5) uses automated processing to infer attributes based on a person’s presence in a sensitive location, such as a medical facility, shelter, or place of worship, or (6) trains ADMT for any of those uses.
    • Assessments must address detailed elements (purpose, types of personal information, specific processing operations, safeguards, stakeholder contributors, approver identity, and risks/benefits) and be approved by the business decision maker responsible for that activity. They must be reviewed at least every three years, or within 45 days of a material change to the processing activity. Starting April 1, 2028, businesses must annually report to the CPPA the number of risk assessments conducted, the types of processing activities and personal information involved, and submit an executive attestation, under penalty of perjury, that the assessments were completed.

    Strategic Implications and Compliance Planning

    These revised rules offer clearer paths for operationalization but shorten the lead time for implementation. Businesses that rely on ADMT or engage in high-volume or sensitive data processing should prioritize the following steps in the months ahead:

    • ADMT Mapping: Inventory current ADMT uses and assessments, and incorporate the new CCPA triggers by year end.
    • Privacy Risk Framework Integration: Evaluate whether existing DPIAs or AI assessments can be adapted to meet CCPA criteria. This is particularly critical for training use cases.
    • Audit Preparation: Assign ownership for cybersecurity compliance and begin gap-mapping against the CPPA’s control expectations, especially if audit certification deadlines fall in 2028 or 2029.
    • Executive Readiness: Socialize the upcoming CPPA attestation requirement with your executive team and secure resources for the 2026-27 assessment cycle.

    Takeaways for Businesses

    The CPPA’s latest rulemaking reflects a maturation of the CCPA framework, shifting the regulatory emphasis from consumer self-help to enterprise accountability. While the approved rules are more targeted and feasible than earlier drafts, they still demand robust documentation, governance, and strategic alignment across legal, privacy, and security teams.

    The CPPA Board signaled it may revisit these rules as technology and market practices evolve, so anticipate further iterative adjustments.

    If your company needs assistance with any privacy issues, Coblentz Data Privacy & Cybersecurity attorneys can help. Please contact Scott Hall at shall@coblentzlaw.com or Mari Clifford at mclifford@coblentzlaw.com for further information or assistance.

  • Developments in Digital Privacy Litigation in 2024-2025: CIPA, VPPA, and California’s SB 690

    By Leeza Arbatman and Scott C. Hall

    In the wake of an explosion in digital privacy litigation, courts and legislatures are redrawing some of the boundaries of what qualifies as unlawful data collection under decades-old statutes. Claims brought under California’s Invasion of Privacy Act (CIPA) and the federal Video Privacy Protection Act (VPPA) have tested how far traditional wiretap and video privacy laws can stretch to cover modern tracking technologies like pixels, session replay tools, and embedded analytics software. As these suits proliferate, courts are being asked to decide whether routine digital tracking amounts to interception, surveillance, or unauthorized disclosure of personal information.

    Recent developments reflect both the tightening and expansion of privacy liability. In California, courts remain split on whether modern tracking tools qualify as “pen registers” or violate CIPA’s wiretap provisions, while a pending bill—SB 690—aims to sharply curtail such claims going forward. At the federal level, VPPA decisions have moved in divergent directions, with a growing Circuit split on what makes someone a “consumer” and what counts as “personally identifiable information.” Together, these trends show a legal landscape in flux, shaped as much by statutory interpretation as by shifting expectations around digital privacy and surveillance.

    California CIPA Developments

    Recent decisions illustrate the divergent paths CIPA claims are taking in California and beyond. While some courts continue to reject CIPA suits targeting ordinary website tracking, others are permitting such claims to proceed—especially where plaintiffs allege unauthorized use of third-party tracking software or more invasive data collection. The result is a patchwork of outcomes that often turn on the specific tracking technologies and legal theories alleged.

    What Counts as a “Pen Register” or “Trap and Trace Device” Under § 638.51?

    Courts are divided on whether modern web-tracking tools fall within the scope of California Penal Code § 638.51, which prohibits unauthorized use of devices that capture dialing or routing information, but not communication content.

    In some recent decisions, courts have permitted claims to proceed where plaintiffs plausibly alleged that tools like TikTok scripts or IP trackers functioned like pen registers or trap-and-trace devices:

    • Lillian Jurdi v. MSC Cruises (USA) LLC, No. 24STCV14098 (Cal. Super. Ct. Sept. 17, 2024): TikTok tracking scripts that collected geographic information, referral tracking, and URL tracking could qualify as such devices.
    • Shah v. Fandom, Inc., No. 3:23-cv-4883 (N.D. Cal. Oct. 21, 2024): IP tracking that relayed user location data supported a pen register claim, as did the fact that users could not reasonably expect that trackers would be installed on websites and transmit their IP addresses every time they visited.
    • Heiting v. IHOP Restaurants, LLC, No. 24STCV14453 (Cal. Super. Ct. Oct. 28, 2024): TikTok scripts plausibly captured incoming user data that identified that user like a trap-and-trace device.
    • Lesh v. CNN, No. 1:23-cv-7374 (S.D.N.Y. Feb. 20, 2025): Court noted in dicta that IP tracking might fit the definition, particularly where it collected location-related data associated with user communications.

    Others have rejected such claims, holding that § 638.51 targets telephone surveillance and doesn’t extend to routine online tracking:

    • Sanchez v. Cars.com, 2025 WL 487194 (Cal. Super. Ct. Jan. 27, 2025): The pen register statute does not extend to internet communications.
    • Rodriguez v. Plivo, 2024 WL 5184413 (Cal. Super. Ct. Oct. 2, 2024): Basic location data revealed by an IP address is not sensitive enough to sustain a pen register claim.
    • Palacios v. Fandom, No. 24STCV11264 (Cal. Super. Ct. Sept. 24, 2024): IP addresses are not outgoing communications, as required to plausibly allege violation of the pen register statute.
    • Aviles v. LiveRamp, No. 23STCV28190 (Cal. Super. Ct. Jan. 28, 2025): Tracking beacon collected only IP addresses and device information so did not qualify as a pen register.

    Session Replay and “Reading” in Transit under § 631(a)

    Courts assessing CIPA § 631(a) claims based on session replay tools have focused on whether the software “reads” communications during transmission. The statute prohibits unauthorized interception, but not all data capture qualifies—liability generally requires real-time comprehension or decoding.

    Several decisions highlight this distinction between passive recording and active interception:

    • Heerde v. Learfield Communications LLC, No. 2:23-cv-5258 (C.D. Cal. July 19, 2024): Court allowed the § 631(a) claim to proceed past the pleading stage where plaintiffs alleged that search terms were transmitted in real time to third parties, constituting interception in transit.
    • Torres v. Prudential Financial, Inc., 2025 WL 1135088 (N.D. Cal. Apr. 17, 2025): Court granted summary judgment for defendants. The session replay software recorded keystrokes and mouse movements for later viewing but did not “read” the data as it was being transmitted. The absence of real-time decoding or interpretation defeated the CIPA claim.
    • Williams v. DDR Media, LLC, 757 F. Supp. 3d 989 (N.D. Cal. 2024): After discovery, the court found that the tracking software hashed inputs and did not retain or analyze their contents. Because it neither read nor attempted to understand the meaning of the communications during transmission, no liability under § 631(a) attached and summary judgment was granted for the third-party vendor and defendant who partnered with it.

    Privacy Expectations in IP Addresses and Standing

    Defendants continue to win dismissal where courts find no reasonable expectation of privacy in IP addresses or where plaintiffs fail to allege a concrete injury.

    • Gabrielli v. Insider Inc., No. 1:23-cv-7433 (S.D.N.Y. Feb. 18, 2025): Dismissed for lack of standing; IP tracking alone didn’t show harm or privacy invasion.
    • Zhizhi Xu v. Reuters News & Media, No. 1:23-cv-7425 (S.D.N.Y. Feb. 13, 2025): Standing denied where plaintiff didn’t allege that IP tracking resulted in targeting or other harm.
    • Heiting v. FKA Distributing Co., No. 3:23-cv-5329 (N.D. Cal. Feb. 3, 2025): No standing where plaintiff failed to specify frequency of visits, data shared, or whether the tracking led to any deanonymization or harm.
    • Casillas v. Transitions Optical Inc., No. 23STCV30742 (Cal. Super. Ct. Apr. 23, 2024): Dismissed for lack of allegations about how plaintiff interacted with the site or what data was collected.
    • Ingrao v. AddShoppers, Inc., 2024 WL 4892514 (E.D. Pa. Nov. 25, 2024): Held that email addresses and general internet activity are not sensitive enough to support standing under CIPA or similar statutes.

    The Ninth Circuit Weighs in with Three Decisions

    Amidst these varying district court decisions, the Ninth Circuit weighed in on three CIPA cases, affirming dismissal of the claims in two and reversing dismissal in the third. These decisions will likely be relied on by both plaintiffs and defendants in bringing and defending against CIPA claims going forward:

    • Thomas v. Papa John’s, 2025 WL 1704437 (9th Cir. June 18, 2025): Affirmed dismissal of CIPA claims based on session replay code because plaintiff alleged that Papa John’s directly violated § 631(a) by eavesdropping, as opposed to aiding and abetting eavesdropping by a third party. The panel held that a party to a conversation cannot be liable for eavesdropping on its own conversation.
    • Mikulsky v. Bloomingdale’s, 2025 WL 1718225 (9th Cir. June 20, 2025): Reversed dismissal of CIPA claims based on session replay code because plaintiff alleged sufficient facts that defendant aided or conspired with third-party session replay providers to capture the “contents” of plaintiff’s communications on defendant’s website (including names, addresses, credit card information, and product selections), rather than merely “record” information about the characteristics of those communications (such as mouse clicks or movements).
    • Gutierrez v. Converse, 2025 WL 1895315 (9th Cir. July 9, 2025): Affirmed dismissal of CIPA claims based on the chat feature provided on Converse’s website by Salesforce because plaintiff provided no evidence that her chats were read by Salesforce, despite evidence that Salesforce could read those chats. Note the concurrence by Judge Bybee questioning whether CIPA was intended to cover internet communications at all: “If the California legislature wanted to apply § 631(a) to the internet, it could do so by amending that provision or adding to CIPA’s statutory scheme . . . California has failed to update § 631(a) to account for advances in technology since 1967. It is not our job to do it for them.” Id. at *3.

    The VPPA Circuit Split in the Digital Age

    Background

    The VPPA prohibits video service providers from knowingly disclosing a consumer’s personally identifiable information (PII) related to video viewing without consent. Congress enacted the statute in 1988 after Judge Robert Bork’s video rental history was disclosed during his Supreme Court confirmation process. Although the titles—such as Hitchcock thrillers and family films—were unremarkable, the episode, which became known as the “Bork Tapes,” sparked public concern over the ease with which viewing habits could be exposed and prompted Congress to bar disclosure of consumers’ video viewing information without their consent.

    The Second Circuit Expands, Then Narrows, the VPPA

    In Salazar v. National Basketball Association, 118 F.4th 533 (2d Cir. 2024), the plaintiff subscribed to the NBA’s email newsletter and later viewed videos on NBA.com while logged into Facebook. He alleged that the NBA used Meta’s tracking pixel to share his viewing history and Facebook ID with Meta for targeted advertising. The Second Circuit held that the email newsletter constituted a “good or service” under the VPPA even though it was non-video content. This holding significantly expanded the definition of a “subscriber” under the statute and led to a surge in VPPA claims.

    More recently, however, in Solomon v. Flipps Media, Inc., 2025 WL 1234567 (2d Cir. May 1, 2025), the Second Circuit held that sending a Facebook user’s ID and a URL containing a video title to Meta does not trigger VPPA liability. Applying an “ordinary person” standard, the court ruled that this data combination does not constitute PII because it doesn’t, on its own, reveal an individual’s viewing history without additional tools or expertise. Solomon is a major victory for defendants and is expected to significantly curb pixel-based VPPA claims in the Second Circuit. The decision aligns the Second Circuit with the Third and Ninth Circuits, reinforcing a narrower interpretation of the statute.

    The Sixth Circuit’s VPPA Limitation: Salazar v. Paramount Global

    In Salazar v. Paramount Global, 133 F.4th 642 (6th Cir. 2025), the Sixth Circuit rejected a VPPA claim based on Meta Pixel use, narrowing the definition of “consumer” under the statute. The plaintiff alleged that 247Sports.com disclosed his video viewing history to Facebook while he was logged into his account and subscribed to the site’s newsletter. The court held that unauthorized disclosure of viewing history to Facebook constituted a concrete injury, analogizing it to common-law privacy harms. However, it concluded that Salazar did not have a “consumer” relationship with the defendant, as required under the VPPA—Salazar’s newsletter subscription didn’t qualify as a subscription to goods or services in the nature of audiovisual materials.

    The Seventh Circuit’s Expansion of VPPA Viability: Gardner v. Me-TV National Limited Partnership

    In Gardner v. Me-TV National Limited Partnership, 132 F.4th 1022 (7th Cir. 2025), the Seventh Circuit expanded the scope of VPPA liability by holding that plaintiffs who created free MeTV accounts to access personalized video features qualified as “subscribers” under the statute. The plaintiffs alleged that MeTV embedded Meta’s tracking pixel in its videos, transmitting their viewing history and personal data to Facebook for targeted advertising. The court found that exchanging email addresses and zip codes for personalized video access made the plaintiffs “subscribers,” emphasizing that “data can be worth more than money” in the digital economy. It adopted a broad reading of “consumer,” holding that the VPPA covers anyone who subscribes to any service from a video tape service provider, regardless of whether the subscription is tied directly to video content. The court rejected MeTV’s argument that the plaintiffs merely subscribed to an “information service,” explaining that the statute focuses on who provides the subscription—not the specific type of content accessed. Gardner marks a significant expansion of VPPA exposure, particularly for ad-supported platforms that collect user data in exchange for personalized video features.

    Takeaways

    Together, Solomon v. Flipps Media, Salazar v. Paramount Global, and Gardner v. Me-TV illustrate the deepening Circuit split over how broadly the VPPA applies in the context of modern digital tracking. The Circuits have taken different positions on who qualifies as a “consumer” and what constitutes “personally identifiable information” traceable to a person. These cases underscore the uncertainty that remains around the VPPA’s reach in the age of ad-supported streaming and pixel-based analytics, with the permissibility of such claims now hinging heavily on jurisdiction.

    SB 690: California’s Legislative Response to CIPA Abuse

    Amidst the wave of CIPA litigation, the California legislature has introduced a bill to curb increasingly abusive litigation practices over website data collection that have surged over the past few years.

    What the Bill Does

    SB 690 amends CIPA to exempt from liability the use of recording or tracking technologies that serve a “commercial business purpose.” The exemption applies to Penal Code Sections 631, 632, 637.2, and 638.51, provisions that have been the focus of extensive litigation and have generated significant uncertainty for businesses attempting to navigate compliance. The bill aims to clarify the permissible use of common and now universally used web technologies that assist with analytics, advertising, and personalization of digital experiences. If passed, the bill will rein in what many see as an increasingly unmanageable and unpredictable wiretapping litigation landscape.

    Who’s Affected

    • Defendants Favored: Website operators, analytics providers, and ad tech firms gain protection from CIPA suits arising out of standard business activities.
    • Plaintiffs’ Bar Constrained: Routine lawsuits over standard tracking implementations lose statutory footing.
    • Businesses See Reduced Exposure and Litigation Cost: Currently, CIPA permits statutory damages of $5,000 per violation, and litigation costs on top of that create hefty financial repercussions for CIPA violations.

    Status and Outlook

    SB 690 passed the California Senate unanimously and has found strong support in the Assembly. As amended, the bill applies prospectively only—it will not affect cases filed before the effective date. However, the Assembly voted to advance SB 690 as a two-year bill, meaning it can carry over into the 2026 legislative session, which will likely delay enactment. This may prompt a further surge of CIPA filings over the next few months as plaintiffs race to file before the new limitations take effect.

    Conclusion

    As courts and lawmakers confront the realities of digital tracking and data analytics, the legal contours of privacy litigation are rapidly evolving. The mixed rulings under CIPA reveal a judiciary still grappling with how to apply legacy statutes to modern technologies, while the VPPA decisions reflect growing disagreement over the statute’s scope in a data-driven economy. At the same time, SB 690 signals a legislative push to restore predictability and limit liability for businesses engaging in routine online practices. For companies operating in the digital space, this moment represents both risk and opportunity: a chance to reassess compliance strategies as privacy law realigns, and a need to stay alert as courts and legislatures continue to reshape the rules of engagement.

    If your company needs assistance with any privacy issues, Coblentz Data Privacy & Cybersecurity attorneys can help. Please contact Scott Hall at shall@coblentzlaw.com or Mari Clifford at mclifford@coblentzlaw.com for further information or assistance.

  • U.S. State Privacy Laws: 2025 Status Update

    By Saachi S. Gorinstein and Scott C. Hall

    By the end of 2025, comprehensive privacy laws will have taken effect in eight additional states: Delaware, Iowa, Maryland, Minnesota, Nebraska, New Hampshire, New Jersey, and Tennessee. With twenty states expected to have such laws effective by year’s end and more than a dozen additional states actively considering similar legislation for 2026 and beyond, businesses must continue to navigate an increasingly complex and fragmented regulatory landscape. While all state privacy laws share common core principles such as transparency in notice, data minimization, and opt-out rights for certain data usage, other aspects such as applicability thresholds, consumer rights, and enforcement mechanisms vary significantly across jurisdictions, all in the absence of a unifying federal privacy framework.

    General Principles of State Privacy Laws

    Certain baseline privacy principles remain consistent across all states. Businesses operating in any jurisdiction should provide clear notices to consumers about how their data is collected, used, and disclosed, and should limit the use of data collected to specific, disclosed purposes. Businesses should ensure they are collecting only the data necessary for legitimate business purposes and using it solely for the purposes stated in clear and conspicuous privacy notices.

    Consumer Rights

    Most states grant consumers a core set of rights that typically include the ability to access, delete, and correct personal data; request copies of their data (data portability); and opt out of targeted advertising, the sale of personal data, and certain types of profiling. However, there are notable exceptions. Iowa’s law does not provide consumers with the right to correct inaccurate data or to opt out of processing for targeted advertising and profiling, limiting individual control compared to other states. In contrast, Minnesota extends consumer protections by allowing individuals to understand the basis of profiling decisions, access the data used, and pursue alternative outcomes. Minnesota also grants a transparency right (similar to Oregon’s and Delaware’s) allowing consumers to request a list of third parties that have received their data. Maryland takes a more limited approach, allowing consumers to request a list of categories of third parties to whom their data has been disclosed.

    Opt-In Preferences and Data Protection Impact Assessments

    All state privacy laws require businesses to honor opt-out requests, and some require respect for universal opt-out preference signals through mechanisms such as Global Privacy Control (GPC), which allow consumers to communicate their preferences regarding the sale of personal data and targeted advertising across all websites without needing to opt out individually. Amidst enforcement attention on this topic from California regulators, new laws in Delaware, Nebraska, New Hampshire, and New Jersey require recognition of such signals, with Maryland and Minnesota set to align by the end of the year.
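    In practice, the GPC signal is simple to detect: per the Global Privacy Control specification, a participating browser sends the HTTP request header `Sec-GPC: 1` with each request (and exposes a corresponding `navigator.globalPrivacyControl` property to page scripts). The following is a minimal, illustrative sketch of server-side detection in Python; the function name is ours, not drawn from any statute or regulation, and a production implementation would tie the result into the site's actual opt-out processing.

```python
def honors_gpc_opt_out(headers: dict) -> bool:
    """Return True if the request carries a Global Privacy Control signal.

    Per the GPC specification, a user agent expresses the opt-out
    preference by sending the HTTP request header `Sec-GPC: 1`.
    HTTP header names are case-insensitive, so normalize before lookup.
    """
    normalized = {name.lower(): value.strip() for name, value in headers.items()}
    return normalized.get("sec-gpc") == "1"


# A site honoring the signal would treat such a request as an opt-out of
# the sale/sharing of personal data and of targeted advertising.
print(honors_gpc_opt_out({"Sec-GPC": "1"}))        # True
print(honors_gpc_opt_out({"User-Agent": "test"}))  # False
```

    Because the header arrives on every request, sites that must honor universal opt-out signals typically check it in shared middleware rather than on individual pages.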

    Many new state laws also require businesses to conduct data protection impact assessments (“DPIAs”) and/or internal or external audits when engaging in “high-risk” processing. This typically includes activities such as selling or sharing data for targeted advertising, profiling, or processing sensitive personal information.

    Sensitive Information

    All state privacy laws, including those taking effect in 2025, impose heightened restrictions on the collection and processing of sensitive information, and several expand what qualifies as “sensitive.” New categories include national origin (Delaware, Maryland, New Jersey), transgender or non-binary status (Delaware, Maryland, New Jersey), biometric data (Maryland, Tennessee), and certain financial account information (New Jersey). Maryland’s law is particularly stringent, with a broad definition of “consumer health data” that includes information related to gender-affirming treatment and reproductive or sexual health care, and it prohibits the processing or sharing of sensitive information, even with consent, unless strictly necessary to provide a consumer-requested service. Additionally, new state laws in Delaware, Maryland, Nebraska, New Hampshire, New Jersey, and Tennessee follow several already enacted state laws in requiring businesses to conduct DPIAs when processing sensitive data or engaging in other high-risk activities.

    Applicability Thresholds of State Privacy Laws

    Determining which state privacy laws apply to your business requires careful analysis. While California, Tennessee, and Utah use revenue-based thresholds (e.g., $25 million) either alone or in combination with other factors, most states rely on volume-based criteria, typically applying to businesses that process the personal data of 100,000+ residents or derive a certain portion of revenue from selling data.

    Several states have lower or broader thresholds:

    • Montana: Applies to businesses collecting personal information of 50,000 consumers, or 25,000 if 25%+ of revenue comes from data sales.
    • Maryland, New Hampshire, Delaware, Rhode Island (2026): Thresholds begin at 35,000 residents, with Delaware and Rhode Island also using a 20% revenue qualifier.
    • Texas and Nebraska: Among the broadest, apply to nearly any business that is not a “small business” under SBA definitions, with no numerical data thresholds.
    • Florida: Applies only to large for-profit companies with $1 billion+ in global revenue and certain tech-related operations.

    Adding to the complexity, California uniquely includes employee, contractor, job applicant, and business-to-business transaction data under its CPRA, while most other states limit “consumer” to individuals acting in a personal or household context.

    As a result, businesses must be aware of their data collection and processing activities in each state with a privacy law, and must analyze those activities against the requirements of each applicable state law.

    Enforcement of State Privacy Laws

    Like most state privacy laws, the 2025 statutes do not authorize a private right of action (California remains the exception for certain data breaches involving sensitive personal information). Enforcement authority generally lies with each state’s Attorney General (or, in California, the California Privacy Protection Agency), and these regulators are expected to take a more active role in investigating compliance and responding to consumer complaints, especially those involving sensitive personal data. Most of the new laws also include cure periods, giving businesses an opportunity to correct violations before enforcement proceeds. Notably, New Jersey’s law grants rulemaking authority to the Director of the Division of Consumer Affairs, signaling that additional implementing regulations may follow, similar to the frameworks in California and Colorado. A unique provision in Tennessee’s law introduces an affirmative defense to enforcement actions, the first of its kind among U.S. privacy statutes: businesses may invoke this defense by demonstrating that they maintain a written privacy program that “reasonably conforms” with the National Institute of Standards and Technology (NIST) privacy framework or a comparable standard. This incentivizes the adoption of widely recognized best practices and supports a more proactive approach to privacy compliance.

    Takeaways for Businesses

    With twenty comprehensive privacy laws expected to be effective by the end of 2025 and many more under consideration, privacy compliance is a national business imperative. Although discussions around a federal privacy law continue, no such law has yet materialized. As in the past, companies cannot rely on potential federal intervention to alleviate the burden of multi-jurisdictional compliance.

    It is essential for all businesses to consistently map their data collection, use, and disclosure practices; update privacy policies and notices; implement mechanisms for handling consumer rights requests; honor opt-out and limitation requests; and monitor evolving requirements while implementing scalable, principle-based privacy programs that can adapt to a shifting—and ever-increasing—patchwork of obligations.

    See the U.S. State Privacy Laws – Applicability Thresholds chart linked here for more details.

    If your company needs assistance with any privacy issues, Coblentz Data Privacy & Cybersecurity attorneys can help. Please contact Scott Hall at shall@coblentzlaw.com or Mari Clifford at mclifford@coblentzlaw.com for further information or assistance.


  • Effective September 1, 2025: Texas Expands Telemarketing Law to Cover Text Messages

    By Scott Hall

    Beginning September 1, 2025, Texas will broaden the scope of its telemarketing law (Chapter 302 of the Business & Commerce Code) to explicitly include text message marketing within the definition of “telephone solicitation.” Companies that send promotional SMS messages to consumers in Texas should assess whether they are now subject to new registration, bonding, and compliance obligations.

    Key Requirements

    Companies engaged in sales-oriented SMS outreach to Texas consumers must:

    • Register with the Texas Secretary of State
    • Submit a $200 filing fee
    • Post a $10,000 security (via a bond, irrevocable letter of credit, or certificate of deposit)
    • File quarterly addenda listing all salespersons engaged in solicitations
    • Comply with disclosure and recordkeeping mandates

    These requirements have historically applied to voice-based telemarketing, but the amendment clarifies their application to modern communication platforms, including SMS.

    Exemptions

    A number of exemptions apply under Subchapter B of Chapter 302, most notably:

    • Outreach to current or former customers: No registration is required if messages are limited to prior customers and the business has operated under the same name for at least two years.
    • Educational Institutions and Nonprofits
    • Publicly Traded Companies and Financial Institutions regulated at the state or federal level
    • Sellers of food products, newspapers, periodicals, or cable subscriptions
    • Retailers with physical locations that have operated under the same name for at least two years, where a majority of business occurs at those locations rather than online
    • Isolated solicitations that are not part of a recurring pattern

    There is little guidance on the scope of these exemptions, and their availability may depend on specific facts and circumstances that should be discussed with legal counsel. Note also that the law exempts only sellers (not third-party platforms) unless the provider contracts predominantly with exempt businesses and meets other criteria.

    Security Requirement

    For those subject to the law, a $10,000 security must accompany the registration. It can be satisfied by:

    • A surety bond from a licensed company
    • An irrevocable letter of credit from a federally insured financial institution
    • A certificate of deposit with restricted withdrawal rights

    The purpose is to create a recovery mechanism for consumers harmed by a seller’s insolvency or contractual breach.

    Broader Compliance Considerations

    While Texas provides a customer-based exemption, companies should also keep in mind other states such as Florida, Maryland, and Oklahoma that have strict SMS marketing laws without such exemptions, as well as federal law (TCPA), which still requires prior express written consent for most automated marketing texts. Companies relying on marketing platforms like Klaviyo or Attentive should also review their vendor contracts to ensure data is being used only on the company’s behalf and in compliance with these rules.

    Recommendations

    • Assess whether your SMS campaigns involve Texas consumers (or consumers in other states with registration or other special compliance requirements).
    • Review eligibility for exemptions, especially under the “former or current customer” carve-out.
    • Confirm you are properly capturing and storing user consent, ideally in a verifiable format.
    • Evaluate whether your platform provider meets service-provider-only criteria, or whether any contract amendments are needed.
    • Prepare registration materials and financial security, if required.

    Please reach out to the Coblentz team for further information or assistance.

  • Richard Patch Named to BTI Client Service All-Stars 2025

    Richard Patch has been recognized as one of the best attorneys for client service in BTI’s 2025 Client Service All-Stars report.

    The BTI Client Service All-Stars report is based solely on unprompted, confidential feedback from more than 350 independent, individual interviews with CLOs and General Counsel at Fortune 1000 companies and large organizations. Client Service All-Stars are recognized by top legal decision makers for being embedded in client decision-making, understanding business and operating context, bringing new ideas clients have not seen before, operating in compressed time frames, and offering practicality in time to matter.

    Richard’s trial and appellate practice focuses on complex civil litigation, and he is widely regarded as one of the leading cable television and telecommunications lawyers in the country. “My client service philosophy is built on clear communications, responsiveness, and unwavering integrity,” says Richard. “I prioritize understanding each client’s unique goals in order to deliver result-driven legal solutions. My commitment is to handle every case with the utmost diligence and respect, ensuring that my clients are always informed and receive legal support that consistently exceeds their expectations.” To view Richard’s BTI Client Service All-Star profile, please click here.

    Earlier this year, Coblentz was recognized in BTI’s Client Service A-Team 2025, where corporate counsel ranked Coblentz in the top firms nationally for client service excellence. Coblentz was also named a strongly recommended midsize firm in BTI’s Most Recommended Law Firms 2025 report.

    BTI is an industry leader in conducting independent research on how clients acquire, manage, and evaluate their professional services providers. Read more about the BTI Client Service All-Stars 2025 here.

    Categories: News
  • 36 Coblentz Attorneys Recognized by The Best Lawyers in America® and Best Lawyers: Ones to Watch® in America 2026

    In its annual listing of the nation’s top legal talent, the Best Lawyers® in America guide recognized 36 Coblentz attorneys, including 9 as Ones to Watch®. Inclusion in The Best Lawyers in America® and Best Lawyers: Ones to Watch® in America is determined solely through an exhaustive peer-review evaluation; the 2026 edition of The Best Lawyers in America® is based on more than 31 million evaluations.

    Categories of recognition are listed below.

    2026 Best Lawyers in America Recognitions

    2026 Best Lawyers: Ones to Watch in America Recognitions

    To view the full rankings, please visit the Best Lawyers website.


    Categories: News
  • Probate Litigation Bootcamp

    Join Coblentz partners Frank Busch and Fred Crombie on Friday, August 22, 2025 during the Bar Association of San Francisco’s Probate Litigation Bootcamp. The all-day bootcamp will cover probate litigation from petition to trial.

    Frank Busch will co-present “Discovery after Death: Planning and Executing Discovery that Actually Helps” and “Tools for Protecting the Vulnerable: Probate Elder Abuse Restraining Orders, Financial Elder Abuse Actions and Attachments.” Fred Crombie will co-present “Attacking the Pleadings: Demurrer, Anti-SLAPP and MSJ.”

    For more details and to register, please click here.

    Categories: Events
  • Navigating the Shifting AI Landscape: What U.S. Businesses Need to Know in 2025

    By Mari Clifford and Scott Hall

    Artificial intelligence is no longer a wild west frontier technology—it’s a regulated one. As AI systems become central to how companies operate, communicate, and compete, legal oversight is catching up. In 2025, AI governance is defined by divergence: a harmonized, risk-based regime in the EU; a fragmented, reactive framework in the U.S.; and rapid regulatory expansion at the state and global levels. Businesses deploying or developing AI must now navigate a multi-jurisdictional patchwork of laws that carry real compliance, litigation, and reputational consequences.

    This article outlines the key regulatory developments, contrasts the EU and U.S. approaches, and offers concrete recommendations for U.S. companies operating AI systems.

    EU AI Act: Global Reach with Teeth

    The EU AI Act, which entered into force in August 2024, is the world’s first comprehensive, binding legal framework for AI. It classifies systems by risk level—unacceptable, high, limited, and minimal—and imposes extensive obligations on high-risk and general-purpose AI (GPAI) models. High-risk AI systems must undergo pre-market conformity assessments, maintain technical documentation, and register in a public EU database. GPAI models face additional transparency, copyright, and cybersecurity obligations, particularly if they exceed scale thresholds (e.g., >10,000 EU business users).

    The Act’s extraterritorial reach means U.S. companies offering AI products or services in the EU—or whose outputs affect EU residents—must comply. Notably, failure to implement the EU’s “voluntary” GPAI Code of Practice could shift the burden of proof in enforcement actions.

    Timeline to Watch: The law becomes enforceable starting August 2026, with GPAI obligations phasing in from 2025.

    The U.S. Approach: Fragmentation, Tension, and State-Level Acceleration

    Executive Orders & Federal Initiatives

    U.S. federal law remains sectoral and piecemeal. President Biden’s 2023 Executive Order on “Safe, Secure, and Trustworthy AI” established guiding principles, including fairness, transparency, and privacy protections, and tasked agencies with issuing AI-specific standards. However, this was rescinded in 2025 by the Trump administration’s new EO prioritizing deregulation and “American leadership in AI,” creating a sharp policy pivot and regulatory uncertainty. In parallel, the administration also unveiled a draft AI Action Plan, emphasizing voluntary industry standards and innovation incentives over binding rules. While still in flux, this initiative further underscores the unsettled political climate around federal AI policy.

    While bills like the AI Accountability Act and the SAFE Innovation Framework have been proposed, no comprehensive federal AI law has passed. Instead, federal agencies like the FTC, EEOC, and CFPB continue to regulate AI through existing consumer protection and civil rights laws—often through enforcement actions rather than formal rulemaking.

    State Spotlight: Colorado, California, and Others Lead the Way

    Absent a comprehensive federal law, states have moved decisively. The list below highlights a representative sample of enacted state AI statutes as of July 2025; dozens of additional bills are pending and advancing every legislative cycle:

    Arizona

    • HB 2175 – requires health-insurer medical directors to personally review any claim denial or prior-authorization decision that relied on AI, exercising independent medical judgment (in force on June 30, 2026).

    California

    • AB 1008 – expands the CCPA definition of “personal information” to cover data handled or output by AI.
    • AB 1836 – bars commercial use of digital replicas of deceased performers without estate consent.
    • AB 2013 – requires AI developers to post detailed training-data documentation.
    • AB 2885 – creates a uniform statutory definition of “artificial intelligence” (effective January 1, 2025).
    • AB 3030 – mandates clear gen-AI disclaimers in patient communications from health-care entities (effective January 1, 2025).
    • SB 1001 “BOT” Act – online bots that try to sell or influence votes must self-identify.
    • SB 942 AI Transparency Act – platforms with >1M monthly users must label AI-generated content and provide a public detection tool.

    Colorado

    • SB 24-205 Colorado AI Act – first comprehensive U.S. framework for “high-risk” AI; imposes reasonable-care, impact-assessment, and notice duties on developers and deployers (effective 2026).
    • SB 21-169 – bans unfair discrimination by insurers through algorithms or predictive models.
    • HB 23-1147 – requires deep-fake disclaimers in election communications.
    • Colorado Privacy Act – consumers may opt out of AI “profiling” that produces legal or similarly significant effects; DPIAs required for such processing.

    New York

    • New York City Local Law 144 – employers using automated employment-decision tools must obtain an annual independent bias audit and post a summary.

    Tennessee

    • HB 1181 Tennessee Information Protection Act (2024) – statewide privacy law; impact assessments required for AI profiling posing significant risks.
    • “ELVIS Act” (2024) – makes voice mimicry by AI without permission a Class A misdemeanor and grants a civil cause of action.

    Texas

    • Texas Data Privacy and Security Act – lets Texans opt out of AI profiling that has significant effects and compels risk assessments for such uses.

    Utah

    • SB 149 “AI Policy Act” (amended by SB 226) – requires disclosure when consumers interact with generative-AI chat or voice systems and sets professional-licensing guardrails.
    • HB 452 “Artificial Intelligence Applications Relating to Mental Health” – regulates the use of mental health chatbots that employ artificial intelligence (AI) technology.

    Expect additional Colorado-style comprehensive AI frameworks to surface in 2025-26 as states continue to fill the federal gap.

    Global Developments & Cross-Border Tensions

    Beyond the EU and U.S., countries like Brazil, China, Canada, and the U.K. are advancing AI governance through a mix of regulation and voluntary standards. Notably:

    • China mandates registration and labeling of AI-generated content.
    • Brazil is poised to pass a GDPR- and EU AI Act-style law.
    • The U.K. continues to favor a principles-based, regulator-led approach but may pivot toward binding regulation.

    U.S.-EU divergence has triggered geopolitical friction. The EU’s upcoming GPAI Code of Practice is a flashpoint, with U.S. officials warning it could disproportionately burden American firms. Meanwhile, the U.S. may reconsider participation in multilateral frameworks like the Council of Europe’s AI Treaty.

    A Compliance Playbook for 2025

    AI legal exposure increasingly mirrors privacy law: patchwork rules, aggressive enforcement, and high reputational stakes. To mitigate risk, companies should:

    • Inventory AI Systems: Identify all AI tools in use—especially those making or influencing decisions in high-risk sectors (HR, healthcare, finance, etc.).
    • Conduct Risk Assessments: For GPAI or high-risk tools, assess training data, bias exposure, and explainability. Use frameworks like NIST’s AI RMF or the EU’s conformity checklist.
    • Build Cross-Functional Governance: Legal, compliance, technical, and product teams must coordinate. Assign AI risk ownership and create change triggers for reclassification (e.g., changes in use or scale).
    • Monitor State and Federal Law Developments.
    • Plan for EU Market Entry: Determine whether EU-facing AI systems require local representation, registration, or conformity assessment under the AI Act.
    • Audit Communications: Avoid AI-washing. Public statements about capabilities, safety, or human oversight must match internal documentation and performance.

    The message from global regulators is clear: innovation is welcome, but governance is non-negotiable. Whether operating domestically or globally, businesses must prepare for AI compliance to become a core legal discipline, akin to privacy or cybersecurity.

    For legal teams and compliance leaders, now is the time to move from principles to programs—and to see governance as a competitive advantage, not just a regulatory burden.

    If your company needs assistance with any privacy issues, Coblentz Data Privacy & Cybersecurity attorneys can help. Please contact Scott Hall at shall@coblentzlaw.com or Mari Clifford at mclifford@coblentzlaw.com for further information or assistance.

  • 2025 Mid-Year Privacy Report

    A Comprehensive Look at New Developments in Data Privacy Laws

    By Scott Hall, Mari Clifford, Leeza Arbatman, Kat Gianelli, Saachi Gorinstein, and Hunter Moss

    Download a PDF version of this report here.

    In 2025, privacy and AI regulation have moved from the sidelines to the center of business risk and strategy. U.S. states are rapidly enacting a patchwork of privacy laws, with new AI laws emerging and expected to increase. Meanwhile, regulators are tightening oversight of automated decision making, children’s data, health metrics, and cross-border data transfers. And litigation over online data collection by companies continues to expand under various statutes, including wiretapping and pen register claims under the California Invasion of Privacy Act (CIPA), and claims under the Video Privacy Protection Act (VPPA), resulting in diverging court rulings that send mixed signals to companies regarding privacy compliance.

    Our Mid-Year Privacy Report examines the most significant developments shaping the privacy and AI landscape in 2025 and highlights practical steps businesses can take to navigate an increasingly complex, multi-jurisdictional legal landscape.

    You can download the full report here. If your company needs assistance with any privacy issues, Coblentz Data Privacy & Cybersecurity attorneys can help. Please contact Scott Hall at shall@coblentzlaw.com for further information or assistance.

    Categories: News
  • New California Regulations Regarding AI Use in Hiring and Employment

    By Fred Alvarez and Hannah Withers 

    If your company uses AI or other automated decision-making systems to make employment or hiring decisions, a new set of California regulations taking effect on October 1, 2025 will require your attention.

    Issued by the California Civil Rights Council (CRC), these regulations define what types of AI systems are being regulated, what types of employment decisions are included, and what employers need to do to prevent discrimination claims resulting from the use of these AI tools. If you (or your agents) are using AI in any manner related to employment decisions, we suggest familiarizing yourself with the key aspects of the regulations, which are summarized below.

    The regulations respond to concerns about the potentially discriminatory impact of AI tools in employment and make clear that California’s anti-discrimination laws apply even when employers use AI tools to make decisions. An employer cannot escape liability for a discriminatory hiring or employment decision simply because the decision was made by an AI tool.

    Key Sections of the Regulations to be Familiar With:

    • An “Automated-decision system” is defined as “a computational process that makes a decision or facilitates human decision making regarding an employment benefit… [it] may be derived from and/or use artificial intelligence, machine-learning, algorithms, statistics, and/or other data processing techniques.”
    • The covered employment and hiring practices are quite broad and include decisions related to recruitment, applicant screening, background checks, hiring, promotion, transfer, pay, benefit eligibility, leave eligibility, employee placement, medical and psychological examination, training program selection, or any condition or privilege of employment.
    • An “agent” includes “any person acting on behalf of an employer, directly or indirectly, to exercise a function traditionally exercised by the employer or any other FEHA-regulated activity, which may include applicant recruitment, applicant screening, hiring, promotion, or decisions regarding pay, benefits, or leave, including when such activities and decisions are conducted in whole or in part through the use of an automated decision system. An agent of an employer is also an ‘employer’ for purposes of the Act.”
    • Under the rules, it is unlawful to use AI systems that result in employment discrimination based on protected characteristics (e.g., religion, race, gender, disability, national origin, age). The rules also specify that the use of technology that has an adverse impact is “unlawful unless job-related and consistent with business necessity and the technology includes a mechanism for requesting an accommodation.”
    • There is a four-year record retention requirement for employment records created or received (including applications, personnel records, membership records, employment referral records, selection criteria, and automated-decision system data) dealing with a covered employment practice or employment benefit.

    Here is What We Recommend Considering and Preparing For:

    • Determine if you are using any AI systems that should be evaluated for possible discriminatory impact. Consider whether you use any of the following systems with employees or applicants:
      • Computer-based assessments or tests, such as questions, puzzles, games, or other challenges.
      • Automated systems to direct job advertisements or other recruiting materials to targeted groups.
      • Automated systems to screen resumes for particular terms or patterns.
      • Automated systems to analyze facial expression, word choice, and/or voice in online interviews.
      • Automated systems to analyze employee or applicant data acquired from third parties.
    • If you use an automated decision-making system, examine what data is being collected, how decisions are being made, and any discriminatory impact that might result. Consider how applicants or employees in each protected group might be affected by the way the automated systems are set up and implemented. (For example, online application technology that limits, screens out, ranks, or prioritizes applicants based on their schedule may discriminate against applicants based on their religious creed, disability, or medical condition. Such a practice having an adverse impact is unlawful unless it is job-related and consistent with business necessity and the technology includes a mechanism for the applicant to request an accommodation.)
    • Investigate how the automated decision-making systems you are using check for bias with respect to protected classifications. Efforts to check for bias can be used as a defense to claims of discrimination. To check for bias, investigate whether the AI tool:
      • Monitors for unintended discriminatory effects during the ongoing use of the software.
      • Conducts live bias tracking functionality as decisions are being made.
      • Supports compliance with anti-discrimination requirements in states that require annual independent bias audits of AI tools in hiring.
      • Supports customer compliance with applicant notice requirements in states that have laws relating to the use of AI tools in hiring.
      • Maintains records of hiring decisions and AI processes conducted.
      • Has materials or a white paper regarding employment and privacy law compliance.
      • Has measures in place to prevent discriminatory outcomes in its algorithm design and outputs.
    • If you work with third parties or other agents for hiring or employment decisions, talk to them about what AI tools they are using and learn more about what information those tools gather and how they make decisions. You are responsible for how your agent is using AI tools and you should communicate with agents specifically about their compliance with these California regulations.
    • Review your accommodations policies to make sure that any automated systems being used are not operating in a way that would miss the need for an accommodation.
    • Review your record retention practices for employment and hiring decisions to make sure that you keep records of hiring or employment decisions made using AI systems for four years.
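    One widely used screen for the kind of adverse impact discussed above is the “four-fifths rule” from the federal Uniform Guidelines on Employee Selection Procedures: a selection rate for any group that is less than 80% of the highest group’s rate is generally regarded as evidence of adverse impact. The sketch below is illustrative only, with hypothetical group labels and counts, and is no substitute for a formal bias audit.

```python
# Illustrative adverse-impact screen using the "four-fifths" rule.
# Group labels and applicant counts are hypothetical; real audits
# require validated data and statistical analysis beyond this ratio.

def selection_rates(outcomes):
    """outcomes: {group: (selected, total_applicants)} -> {group: rate}.
    Assumes every group has at least one applicant."""
    return {g: sel / total for g, (sel, total) in outcomes.items()}

def four_fifths_flags(outcomes, threshold=0.8):
    """Flag any group whose selection rate falls below `threshold`
    (80% by default) of the highest group's rate."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: (rate / best) < threshold for g, rate in rates.items()}

outcomes = {
    "group_a": (48, 100),  # 48% selected
    "group_b": (30, 100),  # 30% selected
}
flags = four_fifths_flags(outcomes)
# group_b: 0.30 / 0.48 = 0.625, below 0.8, so it is flagged for review
```

    A flag from a screen like this does not establish unlawful discrimination by itself, but it identifies where the job-relatedness and business-necessity analysis described above should begin.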


    The Coblentz Employment team is available to answer any questions you may have about the impact of these regulations and how to prepare logistically ahead of their effective date on October 1, 2025. For additional information, you may also refer to the CRC’s press release.