
BKC x AISST Event Series - Spring 2025
Student-led event series on AI
BKC and the AI Student Safety Team (AISST) are collaborating to present a speaker series featuring a lineup of experts in AI governance. Through one-hour talks with interactive Q&A sessions, you’ll have the opportunity to engage in meaningful dialogue with peers and explore exciting career options in AI governance. Don’t miss out on this student-led initiative and the unique opportunity to connect with AI leaders.
These events are for Harvard ID holders only.
Regular Sessions: March 14th - April 25th
Consumer Agents with Prof. Rory van Loo
March 14th at BKC
The technology has long existed for automated tools that filter out toxic social media content, or for virtual shopping assistants that find and even purchase the best deals online without the user having to visit many different websites and product pages. Yet incumbent businesses have used lawsuits and data control to stifle such digital help. This talk will explore the law’s role in supporting third-party automated tools—and in limiting their potential to become avenues for new harms. Friday, March 14th, from 12:15-1:15 pm, in the BKC Multipurpose Room.
Note: this event is in-person attendance only and will be conducted under the Chatham House Rule. Video recording will not be permitted during the event.
AI and Personhood with Prof. James Boyle
March 25th at BKC
Join us on Tuesday, March 25th, from 12:15-1:15 pm in the BKC Multipurpose Room to hear from Duke Law Professor James Boyle, author of The Line: AI and the Future of Personhood.
Chatbots like ChatGPT have challenged human exceptionalism: we are no longer the only beings capable of generating language and ideas fluently. Chatbots are not conscious. But what happens in the future if claims to consciousness become more credible?
In The Line, Boyle explores what these changes might do to our concept of personhood and to “the line” we believe separates our species from the rest of the world, a line that also separates “persons” with legal rights from objects.
The event will include experimental AI bots developed by the Applied Social Media Lab.
AI Outputs are Not Protected Speech with Peter Salib
April 4th at BKC
Law may soon regulate the outputs of generative AI systems, forbidding, for example, false or dangerous outputs. Some scholars have argued that such regulations would raise dire First Amendment issues, because the outputs of generative AI systems are someone’s First Amendment protected speech—AI creators, users, or AIs themselves. Professor Salib argues that AI outputs are not anyone’s protected speech, and thus that AI regulations should face lower constitutional hurdles than widely assumed. Friday, April 4th, from 12:15-1:15 pm.
State Capacity and AI Diffusion: Exploring the Links Between Government and Economic Growth Through Estonia's Modernization with Joel Burke
April 4th at BKC
In the early 1990s, following the fall of the Soviet Union, Estonia was a newly re-independent nation with no major industry, let alone a tech industry. Today, the country is well known as a startup and e-government leader and has been an early mover on government adoption of AI. This talk will explore how building technical state capacity within the Estonian government was foundational to the development and growth of Estonia's tech sector, and will consider what lessons Estonia's journey may hold for AI diffusion in the U.S. Friday, April 4th, from 1:30-2:30 pm.
The Stakes and Prospects of Sino-American AI Diplomacy with Bill Drexel
April 25th at WCC