From Platform Governance to Generative AI:
Concepts, Methods, and Data for Studying Tech Governance
Date: 3-4 June 2025
Location: Bremen, Germany
Host: Lab Platform Governance, Media, and Technology, Centre for Media, Communication and Information Research (ZeMKI), University of Bremen, and DFG Research Unit 5656 "Communicative AI"
CALL FOR PAPERS
Theme and Objective
The study of platform governance and generative AI has become increasingly critical as these technologies significantly (re)shape public discourse, societal norms, and policy-making processes. By determining how information is created, shared, and consumed, these technological systems raise complex questions of accountability, fairness, transparency, contestability, and ethics. The AoIR Flashpoint Symposium 2025 in Bremen aims to contribute to these discussions on governance and normative questions. We are seeking papers and panel proposals that focus on analytical concepts, empirical methods, and robust data approaches that are instrumental for studying tech governance.
The advent of generative AI tools such as ChatGPT has resurfaced challenges long associated with social media platforms, including misinformation, bias, hate speech, discrimination, and questions of power and agency. In this context, we have learned that how platforms recommend and regulate content and interactions constitutes an essential part of what they are, and strongly defines their role and responsibility in contemporary societies. What can we learn from these debates for generative AI? We must assume that generative AI, with its technologically advanced forms of automation embedded in complex digital infrastructures, will exacerbate rather than relieve these challenges. Understanding these dynamics will be vital for informing public debate and developing regulatory frameworks that uphold human rights and democratic values.
The 2025 AoIR Flashpoint Symposium seeks to gather researchers and experts on this topic at a critical time. While public attention and daily routines partly shift from social media platforms to generative AI, meaningful regulatory activity will be crucial in this space in 2025. The EU is advancing governance frameworks based on risk assessments and fundamental human rights through the Digital Services Act (DSA) and the AI Act. At the same time, the future of tech regulation in the U.S. remains uncertain at the onset of a second Trump presidency. Meanwhile, around the world, governments aim to rein in both the excesses and the freedoms of social media, while global and regional bodies such as UNESCO develop normative frameworks for the future of social media and generative AI. In addition to these public initiatives, companies' product policies, ethical principles, and standardization efforts give rise to highly complex governance regimes and normative frameworks for technological systems.
In this context, researchers face various challenges when they seek to collect, store, analyze, or share data on platform and AI governance. Platforms regularly change data access requirements and options for researchers, putting established data collection processes at risk. While corporations employ skilled research teams, their findings are typically kept private or focused on advancing corporate interests rather than serving the public good. Government policies have the potential to enable academic and independent research, but poorly designed or enforced regulations may inadvertently obstruct it. Some recent statutory data-sharing rules (e.g., under the DSA) promise new opportunities for robust research but are still in their infancy. As a result of various initiatives, a growing number of large- and small-scale datasets are now available, such as the EU Transparency Database, the Zuckerberg Files, the Platform Governance Archive, or the FBarchive. However, systematic research and sustained collaborative approaches in this area remain limited. To change this, academics, policymakers, and other stakeholders must work together to build the expertise needed to develop and sustain much-needed research infrastructures.
The AoIR Flashpoint Symposium 2025 seeks to address these critical challenges by bringing together researchers from various disciplines, practitioners, and policymakers for a two-day symposium. The aim of the symposium is to foster collaboration and critical discourse concerning generative AI and social media platforms, and to enable the research community to more effectively produce robust and independent knowledge about these socio-technical systems and their effects.
Topics for Submissions
Submissions should contribute to the overall topic and theme of the symposium. Within that general area of research, we value conceptual innovations, empirical explorations, research and policy papers, as well as data repository and research infrastructure initiatives for the field.
Questions to be discussed at the symposium include, but are not limited to:
- What are adequate concepts and analytical frameworks to understand platform governance and generative AI? How do we understand key concepts such as power and institutions, responsibility and accountability, as social media platforms and generative AI become increasingly entangled? What are the conceptual and epistemological differences between studying generative AI systems and studying platform governance?
- What methodological approaches can contribute to meaningful empirical evidence while data access is challenging? What can we learn from two decades of empirical social media research for studying generative AI?
- What kinds of data do we already collect as a community, and where are the gaps? What are the main challenges that researchers, policymakers, and civil society groups face when attempting to access and archive data in this space?
- How can we enable better cooperation between public institutions, researchers, civil society groups and the tech industry in mandating, creating, curating and maintaining data archives?
- How can we build research infrastructures and routines for studying tech governance that are sustainable in terms of long-term maintenance and ecological footprint?
Submissions and Timeline
We are seeking submissions on the topics and the theme of this call in the form of abstracts (400-600 words) for three formats:
- Research papers that address concepts, methods, and data for studying tech governance from platform governance to generative AI;
- Policy and position papers that seek to move forward the discussion on the regulation of platforms and AI, or on research infrastructures and collaborations in that space;
- Pre-curated panels that comprehensively discuss a key aspect of the topics and themes developed in the call.
Please submit your proposals via this form no later than February 15, 2025. We will notify accepted authors on March 7, 2025.
We plan to publish a special issue with a selection of accepted papers in a refereed journal after the symposium.
A limited number of travel scholarships will be available for early-career researchers. Please indicate your interest in a scholarship once your paper is accepted to the symposium.
Conference Organisation
Prof. Dr. Christian Katzenbach, Dr. Daria Dergacheva, Dr. Dennis Redeker
Lab Platform Governance, Media, and Technology (PGMT)
Centre for Media, Communication and Information Research (ZeMKI), University of Bremen