For researchers, social networks have always represented greater access to data, more democratic participation in the production of knowledge, and greater transparency about social behavior. Getting a sense of what was happening (especially during political crises, major media events, or natural disasters) was as easy as looking at a platform like Twitter or Facebook. In 2024, however, that will no longer be possible.
In 2024, we will face a bleak digital age, as social media platforms move away from the logic of Web 2.0 and toward one dictated by AI-generated content. Companies have rushed to incorporate large language models (LLMs) into online services, complete with hallucinations (inaccurate, unjustified responses) and errors that have further fractured our trust in online information.
Another aspect of this new digital dark age comes from not being able to see what others are doing. Twitter once pulsed with the publicly readable sentiment of its users. Social researchers loved and trusted Twitter data because it provided an easy and reasonable approximation of how a significant portion of Internet users behaved. However, Elon Musk has now deprived researchers of Twitter data after announcing that the platform would end free access to its API. This has made it difficult, if not impossible, to obtain the data needed for research on topics such as public health, natural disaster response, political campaigns, and economic activity. It was a stark reminder that the modern Internet has never been free or democratic, but rather walled and controlled.
Closer cooperation with platform companies is not the answer. X, for example, has filed a lawsuit against independent researchers who pointed out the increase in hate speech on the platform. It has also recently been revealed that researchers who used Facebook and Instagram data to study the platforms’ role in the 2020 US election had been granted “independence with permission” by Meta. This means the company chooses which projects to share its data with, and while the research may be independent, Meta also controls what types of questions are asked and by whom.
With elections taking place in the United States, India, Mexico, Indonesia, the United Kingdom, and the EU in 2024, the stakes are high. Until now, independent online “observatories” have monitored social media platforms, looking for evidence of manipulation, inauthentic behavior, and harmful content. However, changes to the platforms’ data access policies, combined with the explosion of AI-generated misinformation, mean that the tools researchers and journalists developed during previous national elections to monitor online activity will no longer work. One of my collaborations, AI4TRUST, is developing new tools to combat misinformation, but our effort has stalled because of these changes.
We need to clean up our online platforms. The Center for Countering Digital Hate, a research, advocacy, and policy organization working to stop the spread of hate and misinformation online, has called for the adoption of its STAR Framework (Safety by Design, Transparency, Accountability, and Responsibility). This would ensure that digital products and services are safe before they are launched; increase transparency around algorithms, enforcement, and advertising; hold companies accountable to democratic and independent bodies; and make them responsible for omissions and actions that cause harm. The EU’s Digital Services Act is a step in the right direction for regulation, including provisions to ensure that independent researchers can monitor social media platforms. However, these provisions will take years to implement. The UK’s Online Safety Bill, which is moving slowly through the political process, could also help, but again, its provisions will take time to come into force. Until then, social media’s transition to AI-mediated information means that a new digital dark age will likely begin in 2024.