Last week, Massachusetts’ Supreme Judicial Court delved into a case with potentially national implications: should Meta Platforms face a lawsuit alleging that Instagram’s design illegally hooks kids with addictive features?

The justices appeared divided as they questioned whether Meta’s practices are protected by the First Amendment and by Section 230 of the Communications Decency Act, the law that shields online platforms from liability for third-party content. The oral arguments shed light on how difficult it is to hold social media giants accountable when their tools affect children’s mental health.

A central issue is whether Meta, as Instagram’s parent company, is functioning more as a publisher (protected by Section 230) or as an advertiser pushing engagement for profit (potentially unprotected).

Justice Gabrielle R. Wolohojian drew a parallel between Instagram’s design choices (autoplay, ephemeral postings, and incessant notifications) and traditional advertising. She suggested that features crafted to keep teens scrolling resemble direct marketing more than publishing. “These notifications seem to me to be much closer to that [advertising],” she stated.

Meta’s attorney, Mark Mosier, countered that content, not notifications, drives user engagement, and that Meta cannot be held liable for third-party posts. He offered an analogy: a notification about a math lecture won’t drive clicks, but one about a celebrity liking your post will.

On the other hand, Justice Scott L. Kafker suggested that Meta’s algorithmic encouragement of engagement might itself be a core publishing function. Justice Kafker said, “They are basically taking the content, they don’t care what it is, and they are making everyone look at it. They are like the greatest publisher on earth.” He compared Instagram’s promotional tactics to a book publisher creating an enticing cover to spark interest in an otherwise old or controversial novel.

However, the state attorney general’s office drew a distinction: holding a platform liable as a publisher means targeting the harm caused by third-party content, whereas the state’s claims target the platform’s own conduct aimed at maximizing attention and ad revenue.

Meta’s defense leans heavily on Section 230 and the First Amendment. The state, meanwhile, has crafted a “clever” legal strategy that targets Meta’s own conduct (Instagram’s addictive design) rather than only the user-generated content, creating a potential crack in the federal immunity shield. The case comes amid mounting concerns about social media’s effects on minors. School districts have accused Meta of concealing internal research showing negative mental health impacts on young users from compulsive Instagram use, and the allegations include claims that staff members themselves compared their work to “drug pushing.”

Justice Dalila A. Wendlandt questioned whether algorithmic notifications crafted by Instagram, rather than by third parties, should enjoy Section 230 protection. Justice Wendlandt said, “I am wondering how the notifications themselves are covered by 230… given that the third party isn’t creating the content that you’re pushing to the kids,” referencing the so-called “algorithm of incessant notification” that fuels fear of missing out (or “FOMO”) among teens.

For now, the court has taken the issue under advisement. The outcome could set a precedent for how states can regulate the ways social media platforms attract and retain young users, and for where the limits of federal protections like Section 230 truly lie.

Why this case matters:

  • Children’s Mental Health: Evidence is mounting that compulsive social media use can negatively affect teens.
  • Section 230’s Limits: The case explores whether platform design, separate from content, can trigger liability.
  • Tech Accountability: States and schools are increasingly challenging social media companies’ influence over young people.

As Massachusetts leads the legal charge, the country is watching: will courts redefine the rules of tech responsibility, or will platform immunity prevail? Stay tuned.



Kathryn Rattigan is a member of the Business Litigation Group and the Data Privacy+ Cybersecurity Team. She concentrates her practice on privacy and security compliance under both state and federal regulations and advising clients on website and mobile app privacy and security compliance. Kathryn helps clients review, revise and implement necessary policies and procedures under the Health Insurance Portability and Accountability Act (HIPAA). She also provides clients with the information needed to effectively and efficiently handle potential and confirmed data breaches while providing insight into federal regulations and requirements for notification and an assessment under state breach notification laws. Prior to joining the firm, Kathryn was an associate at Nixon Peabody. She earned her J.D., cum laude, from Roger Williams University School of Law and her B.A., magna cum laude, from Stonehill College. She is admitted to practice law in Massachusetts and Rhode Island. Read her full rc.com bio here.