ASML Fellow Launches CLR:SKY

Introducing A New AI-Powered Civility Overlay for Bluesky

I am grateful to have been part of the first cohort of fellows at the Applied Social Media Lab at Harvard's Berkman Klein Center for Internet & Society. Over the past six months, my fellowship has focused on finding practical ways to reduce online polarization and promote healthier, more constructive digital conversations. For more than twenty years as an entrepreneur and product manager, I have helped design and build consumer products that integrate behavioral science to encourage positive behaviors. This work has included senior roles at major tech companies such as Meta, Twitter, and Nextdoor, where I specialized in trust and safety, content moderation, ethical artificial intelligence, and behavior change strategies. Witnessing firsthand the challenges and significant impact of online communication motivated my commitment to exploring solutions that encourage more respectful and productive interactions.

During this fellowship I investigated methods to decrease toxicity, hate speech, and online polarization. Specifically, my research examined how showing users the classifications that moderation tools assign to their content could influence their online behavior. Social media platforms typically rely on sophisticated content moderation systems, including artificial intelligence algorithms and human reviewers, to identify and manage problematic content. However, these moderation efforts usually happen behind the scenes, leaving users unaware of how their posts are categorized or perceived. My hypothesis was that making these classifications visible could encourage users to proactively self-moderate their content, reducing harmful interactions before they occur.
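To make this concrete, here is a minimal sketch of how a client could fetch such a classification for a draft post. This post does not detail CLR:SKY's internal pipeline, so the example assumes Jigsaw's Perspective API (discussed below) as the classifier; the API key is a placeholder.

```python
import requests

# A minimal sketch, assuming Jigsaw's Perspective API as the toxicity
# classifier; CLR:SKY's actual pipeline is not specified in this post.
PERSPECTIVE_URL = "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze"
API_KEY = "YOUR_API_KEY"  # placeholder; issued via Google Cloud

def toxicity_score(text: str) -> float:
    """Return the TOXICITY summary score (0.0 to 1.0) for a draft post."""
    body = {
        "comment": {"text": text},
        "languages": ["en"],
        "requestedAttributes": {"TOXICITY": {}},
    }
    resp = requests.post(PERSPECTIVE_URL, params={"key": API_KEY}, json=body)
    resp.raise_for_status()
    return resp.json()["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

print(toxicity_score("I see your point, but here is where I disagree."))
```

Surfacing this score to the author, rather than using it only for back-end enforcement, is the core idea the research tested.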

This research laid the foundation for the creation of CLR:SKY, an innovative AI-driven tool designed specifically for the Bluesky social media platform. CLR:SKY offers three main civility-focused features aimed at improving online dialogue: the real-time "Toxicity Weather Report," a Generative AI (GenAI) Editor, and a Perspective Assistant. The Toxicity Weather Report uses dynamic weather icons to instantly communicate the tone and potential impact of a user’s content, transitioning from sunny to stormy as toxicity levels rise. This immediate visual feedback encourages users to reconsider and potentially revise their posts before sharing. The GenAI Editor provides optional rewritten content suggestions that enhance clarity, empathy, and positivity, helping users express their intended message more constructively. The Perspective Assistant further enriches interactions by analyzing conversation context and suggesting ways to acknowledge and integrate alternative viewpoints into responses, promoting thoughtful and balanced communication.
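As an illustration of how the Toxicity Weather Report might translate a classifier score into its weather metaphor, consider the sketch below. The thresholds and icon labels are illustrative assumptions, not CLR:SKY's actual values.

```python
# Hypothetical mapping from a toxicity score (0.0 to 1.0) to a weather icon.
# The bands below are illustrative assumptions, not CLR:SKY's real cutoffs.
WEATHER_BANDS = [
    (0.2, "sunny"),          # reads as civil
    (0.4, "partly cloudy"),  # mildly charged wording
    (0.6, "overcast"),       # noticeably negative tone
    (0.8, "rainy"),          # likely to land as hostile
    (1.0, "stormy"),         # strong nudge to revise before posting
]

def weather_report(score: float) -> str:
    """Translate a toxicity score into the weather icon shown to the user."""
    for upper_bound, icon in WEATHER_BANDS:
        if score <= upper_bound:
            return icon
    return WEATHER_BANDS[-1][1]

print(weather_report(0.72))  # -> "rainy"
```

Paired with a score like the one returned in the earlier sketch, this closes the feedback loop: score the draft, show the forecast, and offer the GenAI Editor's rewrite if the skies look rough.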

CLR:SKY’s approach is grounded in extensive research on behavioral psychology, social norms, and the concept of "nudging." Studies have consistently demonstrated that making individuals explicitly aware of how their behavior may be perceived by others significantly influences their actions. For example, research conducted using Jigsaw’s Perspective API shows that providing visible toxicity feedback can reduce harmful language in posts by as much as 34%. Behavioral economist Richard Thaler’s work further supports this concept, illustrating how subtle cues, or "nudges," can guide individuals toward more socially beneficial decisions. Thaler’s research highlights how small adjustments in the way information is presented can drive meaningful behavior change.

Additionally, the psychological concept of cognitive dissonance, introduced by Leon Festinger, explains why explicit feedback mechanisms such as those in CLR:SKY can be particularly effective. Cognitive dissonance theory posits that individuals naturally seek consistency between their actions and their self-concept as empathetic and socially responsible people. When users receive explicit feedback that a post may be perceived as negative or harmful, the resulting dissonance between that signal and their self-image motivates them to adjust their behavior. Supporting this approach, research from the Pew Research Center and NYU's Center for Social Media and Politics indicates broad public support (over 70%) for greater transparency and user involvement in social media moderation, suggesting strong receptivity to tools that enable proactive self-regulation.

CLR:SKY also integrates insights from research on digital empathy. Studies from Stanford University have demonstrated that interventions designed to build empathy, such as prompts encouraging users to consider others' perspectives, effectively reduce conflict and foster more positive online interactions. Complementing these findings, research from the University of Cambridge underscores how visible AI-generated feedback helps users reconsider aggressive or divisive language, resulting in significantly improved civility in online communications.

We have three goals with this project. First, we aim to empower users by offering immediate, actionable insight into the potential social impact of their words, along with an easy-to-use GenAI tool to help with reframing. Second, we hope the bridging feature, built into the GenAI rewrite for replies, gives people a new frame for presenting their ideas when engaging with people with whom they may disagree. Finally, we hope to contribute to the broader conversation about how social media platforms can be intentionally redesigned to promote civil, empathetic, and inclusive interactions.

I warmly invite you to explore CLR:SKY firsthand and send us your thoughts (sign up to receive a user-feedback survey here: www.clrsky.ai/feedback). Your feedback and participation will be crucial as we work together to cultivate digital spaces that foster greater civility, empathy, and meaningful connection.
