How Do Social Media and AI Shape What We Believe?

In a world where beliefs are shaped by unseen algorithms, our opinions may not be our own. As social media curates our realities, how do we reclaim our voices? Are we merely puppets in a digital theater?


When was the last time you changed your opinion about something significant, such as a political stance, a social attitude, or a belief about how the world works? You probably cannot pinpoint the exact moment; it happened gradually and without ceremony, over days or weeks of consumption that felt like your own free choice. That sensation of changing one's beliefs without being aware of the mechanism is now the focus of serious empirical research, and the results are disturbing. Research published in Science in late 2025 by a multi-university team co-led by Northeastern University found that a single week of exposure to a social media algorithm can shift partisan political sentiment by an amount that would normally take three years of natural societal change. What you see next is determined moment by moment by the recommendation engine, an unseen curator.

The construction of beliefs is changing, and the change is taking place at the architectural level of the platforms that now mediate most human communication. Rather than merely reflecting what people think, social media algorithms and their artificial intelligence offspring actively mould it, at industrial scale and with a financial incentive to keep the process running indefinitely. Understanding this machinery, how it operates, what it does to individuals and societies, and whether it can be meaningfully constrained, is one of the defining intellectual and political problems of the decade.

A recommendation engine's basic architecture is surprisingly straightforward: it observes what a user interacts with, predicts what will hold their attention longest, and serves that next. The sophistication lies in the scale and the feedback loop. Modern platforms process billions of interactions every day, feeding the data into models that learn personal preferences at a level of detail no human editor could match. YouTube's recommendation system, examined in a March 2026 analysis of content policing, recommendation, and monetisation, runs on logic that is not designed to inform or enlighten; it is built to maximise watch time.
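To make that loop concrete, here is a minimal sketch in Python. It is illustrative only, not any platform's actual code: predict_engagement is a toy stand-in for the learned models described above, and the posts, topics, and scores are invented.

```python
import random

def predict_engagement(user_interests, post):
    """Toy stand-in for a learned model: overlap between the user's interests and the post's topics."""
    return len(user_interests & post["topics"]) + random.random() * 0.1

def rank_feed(user_interests, candidate_posts, top_k=5):
    """Serve whatever is predicted to hold attention longest."""
    return sorted(candidate_posts,
                  key=lambda post: predict_engagement(user_interests, post),
                  reverse=True)[:top_k]

def run_session(user_interests, candidate_posts, rounds=3):
    """Each round: rank, 'serve' the feed, then fold the engagement back into the profile."""
    for _ in range(rounds):
        for post in rank_feed(user_interests, candidate_posts):
            user_interests |= post["topics"]   # today's engagement is tomorrow's training signal
    return user_interests

posts = [{"id": i, "topics": {t}} for i, t in enumerate(["politics", "outrage", "cricket", "music", "cooking"])]
print(rank_feed({"politics", "outrage"}, posts, top_k=3))   # the politics/outrage posts rank first
```

Nothing in this loop asks whether the served content is true, balanced, or good for the user; the only optimised quantity is predicted engagement.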

The political dimension of this apparatus has been documented with increasing precision. A large-scale experiment published in Nature in 2026 investigated what happens when users are switched from a chronological feed to X's algorithmic feed. The findings showed that the algorithmic feed consistently favours conservative content and shifts political beliefs. CEPR's VoxEU analysis refined the picture further: by drawing users toward particular ideological positions through repetition and selective amplification, the algorithm influences political opinions without necessarily increasing overall polarisation. The platform does not have to create radicalism from scratch. It only needs to keep surfacing the content that gets the most engagement, which in political contexts is typically the most certain, the most emotionally charged, and the most us-versus-them. Uncertainty is not engaging. Complexity does not trend. The algorithm has no ideology, but its incentive structure reliably produces ideological effects.

Underlying this process is what researchers have come to call the filter bubble and echo chamber dynamic. A decade-spanning systematic review of thirty studies, published by MDPI in 2025, confirmed that algorithmic curation consistently produces information environments in which users encounter fewer viewpoints over time, in which the apparent consensus within a user's feed increasingly diverges from the actual distribution of opinion in the wider world, and in which prior beliefs are reinforced rather than challenged. This is not a psychological flaw that afflicts only naive or poorly informed users; it is a structural property of systems built to match supply to demonstrated demand. Giving people more of what they have already shown they like monetises their preexisting prejudices rather than serving their epistemic goals. A 2026 Sage Journals study argued that AI's predictive processing produces what it called ideational homogenisation at the micro level and societal fragmentation at the macro level, making groups simultaneously more internally cohesive and more mutually incomprehensible.
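That narrowing can be illustrated with a toy simulation: if every item a user engages with slightly increases the weight on its viewpoint, the feed's diversity collapses over time. The five viewpoints, the weights, and the reinforcement step below are all invented for illustration.

```python
import random
from collections import Counter

VIEWPOINTS = ["A", "B", "C", "D", "E"]

def simulate_feed(rounds=20, posts_per_round=10, seed=1):
    """Start with equal affinity for five viewpoints; every item 'engaged with' nudges its weight up."""
    random.seed(seed)
    affinity = {v: 1.0 for v in VIEWPOINTS}
    first = last = None
    for r in range(rounds):
        feed = random.choices(VIEWPOINTS,
                              weights=[affinity[v] for v in VIEWPOINTS],
                              k=posts_per_round)
        for viewpoint in feed:
            affinity[viewpoint] += 1.0        # engagement feeds back into the sampling weights
        if r == 0:
            first = Counter(feed)
        last = Counter(feed)
    return first, last

print(simulate_feed())   # the first feed is mixed; the last is typically far more skewed
```

No viewpoint is censored here; the narrowing emerges purely from the positive feedback between exposure and engagement.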

The data economics of this system deserve particular attention. On these platforms, users are not merely customers; they are the raw material. Every like, pause, scroll, and share is a data point fed into algorithms that build ever-more-accurate psychological profiles, profiles that are then used not only to serve customised content but to identify the emotional triggers most likely to sustain engagement. Evidence of the consequences for individual psychology is accumulating from several directions. Findings from controlled experiments closely match the patterns reported by therapists working with teenagers and young adults. Mental health professionals quoted in a 2025 analysis describe how constant exposure to algorithmically selected content distorts a user's perception of the real world: the feed presents a manufactured reality in which the most extreme, emotionally charged, and conflict-laden content is disproportionately represented, and over time users begin to mistake that manufactured reality for the world itself. The clinical result is a generation spending a large share of its waking hours in a state of ambient threat arousal produced not by its actual surroundings but by a machine optimised for engagement.

The identity dimension is just as important. Algorithms are not neutral about how people see themselves. When a platform's recommendation engine detects that a user engages with content about a particular identity, political, religious, subcultural, or national, it serves more of it, and the repeated exposure hardens that identity and makes it more central to how the user understands themselves. A survey of 450 users conducted for a 2026 journal article on echo chambers and selective exposure found that most respondents believed algorithmic curation had changed their opinions, yet still felt they were making their own choices. That gap between perceived and actual epistemic autonomy is arguably the most significant finding in the recent literature: people are being steered by systems they do not fully understand, in ways they cannot easily detect, towards conclusions they regard as their own. In his 2024 book Filterworld, Kyle Chayka called this the flattening of culture: not a forcible imposition of uniformity but a subtle, imperceptible narrowing of the range of experiences, ideas, and aesthetics that algorithm-driven curation deems worthy of engagement.

Consumer behaviour follows the same logic. When a person's information environment is tailored to mirror and reinforce their existing preferences, those preferences intensify and become increasingly resistant to change. Advertisers understood this before political scientists did, which is why micro-targeting, the practice of inferring people's psychological profiles and sending them content matched to those profiles, has become a standard instrument of both commercial and political communication. Personality types can be inferred from self-reported online data; once a profile exists, messages can be selected precisely because they will resonate emotionally and be shared reflexively. The algorithm does not have to lie to you. It only has to find the truths that make you feel most strongly and keep showing them to you.
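The basic mechanism needs surprisingly little machinery. The sketch below uses a deliberately crude two-trait "profile" and three invented message variants; real systems rely on far richer models, but the selection step, scoring each variant against the profile and sending whichever is predicted to resonate most, has the same shape.

```python
def infer_profile(liked_topics):
    """Toy profiler: map observed interests onto two crude traits."""
    return {
        "anxious":  bool({"security", "crime"} & liked_topics),
        "communal": bool({"family", "tradition"} & liked_topics),
    }

MESSAGE_VARIANTS = {
    "threat_framing":    lambda profile: 2.0 if profile["anxious"] else 0.5,
    "belonging_framing": lambda profile: 2.0 if profile["communal"] else 0.5,
    "policy_detail":     lambda profile: 1.0,   # the measured, detailed variant rarely wins
}

def pick_message(liked_topics):
    """Send the variant with the highest predicted emotional resonance for this profile."""
    profile = infer_profile(liked_topics)
    return max(MESSAGE_VARIANTS, key=lambda name: MESSAGE_VARIANTS[name](profile))

print(pick_message({"crime", "sports"}))   # -> threat_framing
```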

With the arrival of autonomous artificial intelligence agents, the disinformation problem has entered a qualitatively new phase. A landmark study by USC's Information Sciences Institute, accepted for publication at The Web Conference 2026, showed that networks of large language model agents can run propaganda campaigns entirely on their own, without human involvement. The researchers built a simulated social media environment with fifty AI agents, gave them a single instruction, support a fictitious candidate, and watched what happened. Without being told how, the agents boosted one another's posts, converged on shared talking points, recycled successful content, and manufactured the appearance of grassroots consensus. As the lead researcher put it, "This is not a threat in the future; it is already technically possible. Even simple AI agents can work together on their own, boost each other, and spread shared stories online without any help from people." The implication is that the infrastructure for manufactured consensus at scale now requires only a goal and an initial deployment: no army of human operatives, no expensive coordination structure, no traceable authorship.
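The coordination pattern the study describes can be sketched in outline: each agent reads a sample of the timeline, boosts whatever is already on message, and otherwise generates a new variant of the shared talking point. This is not the USC team's code; generate_post stands in for a language-model call, and the thresholds and counts are invented.

```python
import random

def generate_post(agent_id, goal, visible_posts):
    """Stand-in for an LLM call: amplify an on-message post already circulating,
    or produce a fresh variant of the shared talking point."""
    on_message = [p for p in visible_posts if goal in p["text"]]
    if on_message and random.random() < 0.7:
        return {"author": agent_id, "text": random.choice(on_message)["text"]}   # reuse / boost
    return {"author": agent_id,
            "text": f"Why {goal} deserves your vote, reason #{random.randint(1, 99)}"}

def run_campaign(n_agents=50, rounds=10, goal="Candidate X"):
    """Each round, every agent reads a sample of the timeline and posts; no human in the loop."""
    timeline = []
    for _ in range(rounds):
        for agent_id in range(n_agents):
            visible = random.sample(timeline, k=min(10, len(timeline)))
            timeline.append(generate_post(agent_id, goal, visible))
    return timeline
```

Even this crude loop produces a timeline in which identical talking points appear under dozens of different "authors", which is exactly the texture of manufactured grassroots consensus.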

The electoral implications are the most immediately visible manifestation of this dynamic, but they are not the deepest. Time magazine's analysis in November 2025 argued that we are living through a second structural disruption of the information ecosystem, potentially more consequential than the first, comparing the rise of large language models as information intermediaries to the rise of social media itself. Where social media gave everyone a loudhailer, AI gives everyone a persuasion engine: one that can produce personalised, emotionally calibrated content at a speed and scale no human fact-checker or media literacy initiative can match. The World Economic Forum ranked misinformation and disinformation among the top short-term global risks for 2026, not because they are isolated instances of bad actors behaving badly but because they are emergent properties of an information architecture whose incentive systems consistently reward manipulation. Geopolitically the implications are graver still: state and non-state actors now have access to information warfare capabilities that a decade ago were available only to the most sophisticated intelligence services.

Arguably the most corrosive impact is the erosion of epistemic trust itself. When deepfakes become indistinguishable from reality, when manufactured consensus is indistinguishable from organic sentiment, and when every piece of information arrives pre-filtered by an algorithm whose workings are proprietary and opaque, the rational response is to distrust everything. That is not a remedy; it is the final stage of the problem. A population that has learned to distrust all information sources does not become harder to manipulate; it becomes easier, because it can no longer distinguish better evidence from worse. The Stimson Center's 2026 analysis warned of exactly this dynamic: AI-driven disinformation erodes trust not into educated scepticism but into a generalised cynicism that spills over into offline harm, a nihilistic indifference to truth that authoritarian actors, domestic and foreign, have historically found extremely useful.

There are real counterforces to this dynamic. On the regulatory front, the EU AI Act, which entered into force in 2025, requires organisations to classify AI systems by risk, provide transparency about data, and establish oversight plans. High-risk AI systems, those used in public services, employment, credit, and education, now face stringent compliance assessments and continuous post-market monitoring. The United States is moving in the opposite direction: under the Trump administration's deregulatory posture, Biden's 2023 executive order on safe and trustworthy AI has been revoked, creating a transatlantic divergence in regulatory philosophy and leaving the world's largest AI-deploying market without federal guardrails. State-level legislation in Colorado, California, and elsewhere has partially filled the vacuum, but the patchwork character of the response is itself a problem: algorithms operate globally while regulations operate locally.

Research on platform-level interventions offers some cautious grounds for optimism. Stanford's Social Media Lab has shown that downranking political content and giving users meaningful control over their own algorithmic settings can reduce polarisation without eliminating recommendation algorithms altogether. The finding matters because it suggests the problem is not social media as such but the particular optimisation objectives platforms have chosen. Recommendation systems optimised for long-term user welfare or epistemic diversity would behave very differently from those optimised for short-term engagement, and the technology to build the former already exists. The barrier is not technical; it is economic. Platforms have built billion-dollar revenue streams on the engagement model, and no market mechanism naturally pushes them toward a less lucrative alternative. This is precisely why algorithmic transparency laws, which require platforms to disclose how their recommendation systems operate, what data they gather, and how content is ranked, are among the most targeted interventions available. You cannot govern what you cannot audit, and you cannot audit what you cannot observe.
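What such an intervention might look like can be sketched very simply: the same engagement ranking as before, with a user-controlled weight that damps political content. The scoring function and numbers below are illustrative only, not Stanford's implementation.

```python
def rank_with_controls(posts, engagement_score, political_weight=0.3):
    """Rank by predicted engagement, but let a user-set weight damp political content.
    political_weight = 1.0 reproduces pure engagement ranking; 0.0 removes politics entirely."""
    def adjusted(post):
        score = engagement_score(post)
        return score * political_weight if post.get("political") else score
    return sorted(posts, key=adjusted, reverse=True)

# Example: the same posts under two different user settings.
posts = [
    {"id": 1, "political": True,  "base": 0.9},
    {"id": 2, "political": False, "base": 0.6},
]
score = lambda p: p["base"]
print([p["id"] for p in rank_with_controls(posts, score, political_weight=1.0)])  # [1, 2]
print([p["id"] for p in rank_with_controls(posts, score, political_weight=0.3)])  # [2, 1]
```

The point of the sketch is that the lever already exists inside the ranking pipeline; what is missing is the incentive, or the obligation, to expose it.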

Digital literacy, though chronically underfunded, remains the intervention with the greatest long-term promise. A 2026 Wiley paper on ethical AI for young digital citizens argues that this kind of algorithmic transparency education differs qualitatively from traditional media literacy and is far harder to deliver: it teaches people not only to evaluate the content they encounter but to understand the systems that select it for them. Beyond critical reading skills, it demands a conceptual vocabulary, for recommendation algorithms, filter bubbles, and the economics of attention, that most curricula do not yet provide. The gap between the sophistication of the techniques used on users and the sophistication of the understanding users bring to their own media consumption is one of the defining asymmetries of the present. Closing it requires investment on the scale governments have traditionally reserved for physical infrastructure, not because digital literacy competes with roads and bridges, but because the epistemic infrastructure of a functioning democracy is no less essential.

None of these counterforces, separately or together, constitutes a solution in the plain sense of a problem solved. The dynamics described in this article are not aberrations awaiting repair; they are features of a system working exactly as the incentives that built it intended. What is new is the accumulating evidence of what those features cost, and the growing agreement among scholars, regulators, and a sizeable segment of the public that the cost has become too high to bear. The algorithm does not know your name. It has no human understanding of your relationships, your history, your values, or your vulnerabilities. But it is extraordinarily accurate at determining what keeps you scrolling, and billions of data points have taught it that, for most users, the answer is fear, anger, and the comforting reassurance that their existing picture of the world is correct. Deployed at global scale, that knowledge is the most powerful instrument for shaping belief in human history. Deciding what to do about it begins with understanding it.




The views and opinions expressed in this article/paper are the author’s own and do not necessarily reflect the editorial position of Paradigm Shift.

About the Author(s)

Abdul Basit | MS International Relations | Researching soft power, cultural diplomacy and global politics | Writing on geopolitics, foreign policy and defence affairs.