VOSS Posted February 18, 2023

Quote

Section 230 of the Communications Decency Act immunizes internet companies from liability over content posted by third parties and allows platforms to remove content considered obscene or objectionable. The dispute before the Supreme Court marks the first time the court will consider the scope of the law, and the question before the justices is whether Section 230's protections for platforms extend to targeted recommendations of information.

The court fight arose after terrorist attacks in Paris in November 2015, when 129 people were murdered by ISIS members. Among the victims was 23-year-old Nohemi Gonzalez, an American college student studying abroad who was killed at a bistro in the city. Gonzalez's parents and other family members filed a civil lawsuit in 2016 against Google, which owns YouTube, alleging that the tech company aided and abetted ISIS in violation of a federal anti-terrorism statute by recommending videos posted by the terror group to users.

Google moved to dismiss the complaint, claiming it was immune from the claims under Section 230. A federal district court in California agreed and, regarding YouTube's recommendations, found that Google was protected under the law because the videos at issue were produced by ISIS. The U.S. Court of Appeals for the 9th Circuit affirmed the district court's ruling, and Gonzalez's family asked the Supreme Court to weigh in. The high court said in October it would take up the dispute.

The court fight has elicited input from a range of parties, many of which are backing Google in the case. "Given the sheer volume of content on the internet, efforts to organize, rank, and display content in ways that are useful and attractive to users are indispensable," lawyers for Meta, the parent company of Facebook and Instagram, told the court.
The case has presented the justices with a rare opportunity to hear directly from the co-authors of the legislation at issue. Ron Wyden, now a Democratic senator from Oregon, and Chris Cox, a former GOP congressman from California, crafted Section 230 in the House in 1996. The bipartisan pair filed a friend-of-the-court brief explaining the plain meaning of their law and the policy balance they sought to strike.

"Section 230 protects targeted recommendations to the same extent that it protects other forms of content curation and presentation," they wrote. "Any other interpretation would subvert Section 230's purpose of encouraging innovation in content moderation and presentation. The real-time transmission of user-generated content that Section 230 fosters has become a backbone of online activity, relied upon by innumerable internet users and platforms alike." Google, they argued, is entitled to liability protection under Section 230, since the platform's recommendation algorithm is merely responding to user preferences by pairing users with the types of content they seek.

The battle also highlights competing views about the internet today and how Section 230 has shaped it. For tech companies, the law has laid the groundwork for new platforms to come online, an industry of online creators to form, and free expression to flourish. For Gonzalez's family and others, the algorithmic recommendations have proven deadly and harmful.

The Supreme Court has given little indication of how it may approach Section 230. Only Justice Clarence Thomas has written about lower courts' interpretations of the legal shield. "Courts have long emphasized non-textual arguments when interpreting [Section] 230, leaving questionable precedent in their wake," Thomas wrote in a 2020 statement urging the court to consider whether the law's text "aligns with the current state of immunity enjoyed by internet platforms."
The Supreme Court could issue a ruling that affirms how Section 230 has been interpreted by lower courts, or narrow the law's immunity. But internet companies warned the court that if it limits the scope of Section 230, it could drastically change how they approach content posted to their sites. With fewer protections and a greater risk of costly litigation, companies may become more cautious about letting potentially problematic content appear on their sites, allowing only content that has been vetted and poses little legal risk.

A decision from the Supreme Court is expected by the summer.

Source
ATRL Moderator Bloo Posted February 19, 2023

This is a complex issue. Don’t wanna dive into it. But recommendation algorithms are not manually programmed by humans to promote a type of content. Recommendation algorithms are data-driven and based on correlations like “people who like video A seem to really like video B, so since you like video A, lemme recommend video B.” It’s not a person writing code explicitly to promote any particular type of content, which is how many people talk about the way content is recommended, and that framing isn’t helpful to this conversation.
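To make the "people who like video A seem to really like video B" point concrete, here is a minimal sketch of item-to-item collaborative filtering, the kind of correlation-driven approach the post describes. All data, user IDs, and video names below are hypothetical, and real systems are vastly more sophisticated; this only illustrates that recommendations fall out of co-occurrence statistics rather than a human hand-picking content.

```python
# Minimal, hypothetical sketch of item-to-item collaborative filtering.
# No rule ever names a specific video to promote; scores emerge from
# which videos tend to be watched by the same users.
from collections import defaultdict
from itertools import combinations

# Hypothetical watch histories: user -> set of videos watched
histories = {
    "u1": {"video_A", "video_B"},
    "u2": {"video_A", "video_B", "video_C"},
    "u3": {"video_A", "video_C"},
    "u4": {"video_B"},
}

# Count how often each ordered pair of videos co-occurs in a user's history.
co_counts = defaultdict(int)
for watched in histories.values():
    for a, b in combinations(sorted(watched), 2):
        co_counts[(a, b)] += 1
        co_counts[(b, a)] += 1

def recommend(user, k=2):
    """Rank unwatched videos by how often they co-occur with the user's videos."""
    watched = histories[user]
    scores = defaultdict(int)
    for v in watched:
        for (a, b), n in co_counts.items():
            if a == v and b not in watched:
                scores[b] += n
    return sorted(scores, key=scores.get, reverse=True)[:k]

# u4 has only watched video_B; users who watched video_B most often also
# watched video_A, so video_A ranks first.
print(recommend("u4"))  # -> ['video_A', 'video_C']
```

The point of the sketch: nobody wrote "promote video_A." The ranking is a byproduct of aggregate user behavior, which is what makes the legal question of whether such recommendations are the platform's own speech or mere organization of third-party content so contested.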
GreatestLoveofAll Posted February 23, 2023

It definitely is a complex issue, and despite me being one to lean more towards judicial activism, the court showed wise restraint today. The call to pass the issue onto Congress is politically and logically the safest thing to do, because this is an issue that really affects a whole bunch of people and how the internet as a whole works. And the Court has done enough shenanigans in the last year; the last thing they need is to add canceling the internet to people's many reasons to dislike them.