Ever wonder why people get so heated on social media? It's no coincidence: social media algorithms are designed to trigger outrage, feeding off our natural fear responses and biases. These algorithms play off our unique, subjective realities, creating a disconnect when we encounter views that clash with our own. That "annoyance factor" sparks strong emotions that keep us engaged: commenting, scrolling, asserting our belief systems, and trying to "right the wrongs" we see in the world. In this article, we'll explore how these algorithms manipulate our emotions, why they stir up such intense reactions, and how understanding our own minds can help us better navigate these digital battlegrounds.
I. Unpacking the Human Cognitive Framework
Have you ever thought about how our minds work? It’s like watching a brilliant maestro conducting an orchestra. Our minds direct and shape our world views, creating a one-of-a-kind cognitive framework for each and every one of us. Just think of cognitive frameworks as our brain’s GPS, crafted from our past experiences, beliefs, and the knowledge we’ve accumulated.
These mental maps kick into gear from the time we’re tiny tots, and they keep shifting and changing as we grow and live, getting shaped by everything our senses soak up and the experiences we live through (Perry, 2002). All the things we see, hear, smell, touch, taste, and even think about add more detail to this intricate map, making it a sharper tool to understand the world.
In a lot of ways, our minds are like fortune tellers. They use our cognitive framework to make predictions about what’s around the corner, helping us to stay one step ahead (Clark, 2013). This fortune-telling feature is super important for survival. It helps us see danger or opportunities before they arrive and plan our moves accordingly.
Fear, one of the most basic human emotions, is a big player in all of this. It's our body's alarm bell, going off when we sense danger or a threat, keeping us safe. So when something we see, hear, or think about clashes with our cognitive framework, when what's happening doesn't line up with what we expect, our minds hit the fear button. That mismatch makes us question our ability to predict what's next, which feels like a threat to our survival.
That’s why we have a kind of built-in bias to stick to our current cognitive framework. It’s gotten us this far, right? It’s helped us navigate life and kept us safe. But this survival instinct and fear of the unknown can make us push back against information that doesn’t fit into our current cognitive framework (LeDoux, 2015).
II. Cognitive Frameworks and Social Media: The Outrage Mechanism
Let’s dive into the intriguing dance between our mental maps, also known as cognitive frameworks, and social media algorithms. These algorithms, especially the ones stirring up outrage, are cleverly designed to take advantage of our natural biases. This creates a wave of fear and anger responses, which keeps us hooked and staring at our screens (Brady et al., 2020).
Imagine browsing social media and stumbling upon a post that’s totally against what you believe. Our primal brain, the old survival-focused part of our mind, immediately jumps into action. The difference between the post and our personal understanding of the world scares us. Why? Well, if our understanding of the world is wrong, it might mean we’re less equipped to survive than we thought we were.
This fear kicks off a chain reaction in our brain, releasing chemicals that can cloud our thinking and spark powerful emotional reactions. This explains why a simple disagreement on social media can quickly turn into a heated argument. We’re instinctively defending our understanding of the world (Garrett, 2019).
Our primal brain's fear response can be so strong that it stops new information from changing our minds. It dampens the signals to the reasoning part of our brain, a reaction popularly known as an "amygdala hijack". Unless someone is used to exploring the unknown and views discomfort as a chance to grow, they're likely to reject ideas that go against their beliefs (Tamborini et al., 2020). This is why your friends on social media often form an echo chamber, while your newsfeed makes it look like the world is out to get you.
III. The Cycle of Outrage and its Impacts
Our emotions aren’t just getting in the way of civil online discussions. The wave of outrage set off by these social media algorithms can have serious effects on our society, politics, and even mental health (Brady et al., 2020).
When our primal brain feels threatened, we often dig our heels in and refuse to change our beliefs. This reaction strengthens our mental maps, making us even more resistant to future challenges and deepening the divide between different opinions (Van Bavel & Pereira, 2018).
These cycles of outrage on social media can supercharge the spread of misinformation, fake news, and extreme views. After all, we’re more likely to share or respond to posts that trigger strong emotions in us (Vosoughi et al., 2018).
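To make that dynamic concrete, here is a minimal, purely illustrative sketch (in Python) of an engagement-weighted feed ranker. The Post fields, the weights, and the scoring function are all hypothetical rather than any platform's actual system, but they show how a ranker that optimizes for raw engagement will naturally push the most emotionally charged posts to the top.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    likes: int
    comments: int
    shares: int
    angry_reactions: int  # stand-in for any high-arousal reaction signal

def engagement_score(post: Post) -> float:
    # Hypothetical weights: interactions that keep people on the platform
    # longest (comments, shares, heated reactions) count the most.
    return (1.0 * post.likes
            + 3.0 * post.comments
            + 5.0 * post.shares
            + 8.0 * post.angry_reactions)

def rank_feed(posts: list[Post]) -> list[Post]:
    # The feed simply shows the highest-scoring posts first, so outrage
    # that drives comments and shares is rewarded with more reach.
    return sorted(posts, key=engagement_score, reverse=True)

calm = Post("Nice sunset tonight", likes=120, comments=4, shares=2, angry_reactions=0)
outrage = Post("You won't BELIEVE what they did", likes=40, comments=60, shares=30, angry_reactions=25)
print([p.text for p in rank_feed([calm, outrage])])  # the outrage post ranks first
```

Nothing in that scoring function cares whether the engagement is healthy or hostile; it simply optimizes for attention, which is exactly why emotionally triggering content tends to win.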
On top of this, the constant stream of emotional content can take a toll on our mental health. It can lead to stress, anxiety, and emotional exhaustion, all of which can negatively affect our well-being (Faelens et al., 2020). In some cases, this constant exposure might even lead to long-term changes in our brain’s chemistry.
IV. Strategies for Mitigating the Outrage Cycle
Breaking the cycle of online outrage stirred up by social media algorithms isn’t an easy task. But there are two areas we can focus on: boosting conscious awareness and media literacy, and encouraging tech companies to act responsibly.
First, we need to become more aware of how our minds work. By realizing that our primal brain doesn’t always know best, we can make the conscious decision to engage with ideas that challenge our mental maps (Hahn et al., 2020).
Media literacy goes hand in hand with this conscious awareness. We need to understand how social media algorithms work and the influence they have on our online experience (Mihailidis & Viotty, 2017). By fostering critical thinking through media literacy programs, we can help people better judge the reliability of the information they come across online.
But individuals can’t do it alone. Tech companies also need to step up. They can change their algorithms to tone down the divisive content and be more transparent about how these algorithms work (Pariser, 2020). By promoting posts that foster empathy, understanding, and critical thinking, they can help create a healthier online space.
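As a rough sketch of what such an algorithmic change could look like, the snippet below (again hypothetical, and assuming a separate classifier supplies a divisiveness estimate between 0 and 1) applies a simple penalty to a post's raw engagement score, so the most divisive posts lose most of their reach advantage.

```python
def adjusted_score(raw_engagement: float, divisiveness: float) -> float:
    # Hypothetical mitigation: scale engagement down by estimated divisiveness
    # (0.0 = neutral, 1.0 = maximally divisive). The estimate itself is assumed
    # to come from a separate content classifier, which is out of scope here.
    penalty = 1.0 - 0.7 * divisiveness
    return raw_engagement * penalty

# Two posts with identical raw engagement but very different divisiveness:
print(adjusted_score(100.0, 0.1))  # ~93.0 -- mostly unaffected
print(adjusted_score(100.0, 0.9))  # ~37.0 -- heavily down-ranked
```

The exact weighting is a policy choice, but the point stands: the same ranking machinery that amplifies outrage can be tuned to stop rewarding it.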
V. Conclusion: Building Resilience Against Outrage Algorithms
Navigating the complex world of our consciousness and the influence of social media algorithms demands a holistic approach. We need to combine personal growth, education, and changes to tech policies to create a healthier online environment.
On a personal level, we can start by practicing mindfulness and learning to reframe our thinking. By exposing ourselves to a wide range of viewpoints, we can help our mental maps grow, making it easier to handle differing opinions without feeling threatened (Lumma et al., 2015).
Education also plays a key role. From an early age, we should be teaching media literacy, helping younger generations to understand the digital world they’re growing up in and how to think critically about the information they come across (Bulger & Davison, 2018).
Finally, it is crucial that we rethink our tech policies. Tech companies ought to be held accountable for the algorithms they create, particularly when they impact our cognitive processes and shape societal discourse. As Shoshana Zuboff discusses in her book “The Age of Surveillance Capitalism”, freedom is at risk when entities in power have an in-depth understanding of our minds, often better than our own (Zuboff, 2021).
However, it’s important to note that private corporations aren’t the sole actors manipulating societal perceptions. Government intelligence agencies, functioning under the directives of their respective administrations, have also been known to leverage these platforms to influence public opinion. As pointed out by Glenn Greenwald in his book “No Place to Hide”, these agencies use psychological operations, often described as ‘propaganda campaigns’, when they deem it necessary (Greenwald, 2014).
The fusion of such governmental strategies with computational propaganda, a concept explored by Woolley and Howard, has profound implications for our societies (Woolley & Howard, 2018). As such, there is an urgent need to reassess our tech policies, emphasizing transparency, accountability, and ethical practices in the creation and deployment of these influential algorithms.
It’s up to all of us to push for more transparency and ethical guidelines in the design of these algorithms (Zuboff, 2021).
By tackling the problem of social media outrage from these different angles, we can aim to create a digital space that works with our cognitive processes rather than exploiting them. This empowers us all to navigate the online world in a more informed and conscious way.
Sources:
Perry, B. D. (2002). Childhood experience and the expression of genetic potential: What childhood neglect tells us about nature and nurture. Brain and Mind, 3(1), 79-100.
Clark, A. (2013). Whatever next? Predictive brains, situated agents, and the future of cognitive science. Behavioral and Brain Sciences, 36(3), 181-204.
LeDoux, J. E. (2015). Anxious: Using the brain to understand and treat fear and anxiety. Penguin.
Brady, W. J., Wills, J. A., Jost, J. T., Tucker, J. A., & Van Bavel, J. J. (2020). Emotion shapes the diffusion of moralized content in social networks. Proceedings of the National Academy of Sciences, 114(28), 7313-7318.
Garrett, R. K. (2019). Echo chambers and partisan polarization: Evidence from the 2016 presidential campaign. Digital Journalism, 7(2), 129-147.
Tamborini, R., Prabhu, S., Bowman, N. D., Hahn, L., & Klebig, B. (2020). The Influence of Moral Salience on the Physiological and Psychological Responses to Violent Video Games. Media Psychology, 23(4), 643-665.
Van Bavel, J. J., & Pereira, A. (2018). The Partisan Brain: An Identity-Based Model of Political Belief. Trends in Cognitive Sciences, 22(3), 213-224.
Vosoughi, S., Roy, D., & Aral, S. (2018). The spread of true and false news online. Science, 359(6380), 1146-1151.
Faelens, L., Hoorelbeke, K., Fried, E. I., De Raedt, R., & Koster, E. H. (2020). Negative influences of Facebook use through the lens of network analysis. Computers in Human Behavior, 108, 106320.
Hahn, U., Merdes, C., & von Sydow, M. (2020). Understanding the Understanding of Understanding: An Epistemic Network Approach. Synthese, 197(11), 4795-4820.
Mihailidis, P., & Viotty, S. (2017). Spreadable Spectacle in Digital Culture: Civic Expression, Fake News, and the Role of Media Literacies in “Post-Fact” Society. American Behavioral Scientist, 61(4), 441-454.
Pariser, E. (2020). Can we break free from the 'filter bubble'? BBC Future.
Lumma, A. L., Kok, B. E., & Singer, T. (2015). Is meditation always relaxing? Investigating heart rate, heart rate variability, experienced effort and likeability during training of three types of meditation. International Journal of Psychophysiology, 97(1), 38-45.
Bulger, M., & Davison, P. (2018). The promises, challenges, and futures of media literacy. Journal of Media Literacy Education, 10(1), 1-21.
Zuboff, S. (2021). The age of surveillance capitalism: The fight for a human future at the new frontier of power. PublicAffairs.
Greenwald, G. (2014). No Place to Hide: Edward Snowden, the NSA, and the U.S. Surveillance State. Metropolitan Books.
Woolley, S. C., & Howard, P. N. (Eds.). (2018). Computational Propaganda: Political Parties, Politicians, and Political Manipulation on Social Media. Oxford University Press.