According to Pew Research, 68% of U.S. adults use Facebook, 47% use Instagram, and 22% use X (Twitter). [1] By some estimates, 40% of the world's population uses social media. [2] That the appeal of social media varies by age, ethnicity, social status, etc. is widely recognized. What is less recognized is the difference in appeal by individual characteristics such as personality type, biases, agendas, dispositions, and political affiliation. Further, the perceived value of social media
varies by audience. For some it is a pleasant pastime, while for others it is a distracting time-sink. Some perceive social media as a reliable source of information, while others see it as a tool for online manipulation and mischief. Some view it as a mechanism for social bonding, while others see it as an online partisan weapon. In short, social media means different things to different people and serves many different interests. Social scientists have been attempting to measure the various effects of social media use for decades, but so far without general agreement. So we must conclude that the ultimate effect of social media on society remains an open question. Perhaps this is a good time to take a step back from attempting to assess the effects of social media and study it instead from the perspective of an online technology. We ask the question: is there something significant about this particular online technology that separates it from such peers as email, the World Wide Web, ecommerce, Voice over IP, digital streaming services, and the like?
We begin by comparing social media with earlier online communication paradigms. The social media communication model moves beyond earlier rectified, asymmetrical email/text messaging by being designed to be (1) a media-rich, mixed-reality environment, (2) trivially scalable from individuals and memberships to dynamically created, autonomous groups of arbitrary size, and (3) an infrastructure that replicates – or at least approximates – in-person group dynamics. As a communication model, social media is transformative. It doesn't just extend earlier communication protocols; it moves communication into the realm of immersive experience. This transformation should have been perceived by society as an omen and an immediate cause for concern, for society had no prior experience with online immersion. As a result, society was once again caught off-guard by the velocity of a new technology. Of course, society has always had to deal with disruptive technologies: the firearm, printing press, telephony, horseless carriages, television, transistors, digital imagery, and the cloud, to name but a few. But our experience with social media is a unique fusion of ubiquity and velocity. Prior to the current millennium, the question of whether an online, immersive technology would be a social good wasn't even understood, much less asked. There were no “netiquette” standards for online immersion, and there still aren't, for that matter. Few technologists anticipated that social media would enable or exacerbate cyberbullying, dopamine loops, micro-targeting, diminished message reliability, or the rapid acceleration of disinformation. Social science was providing clues, but they were ignored.
Finally, we note that the scalability of social media is baked into the design of the platforms. While this impressive capability is widely appreciated by users, it has been under-appreciated by social scientists and scholars. Broadcasting is an inherent feature of social media, just as it is of television and radio, but so is narrowcasting (e.g., micro-targeting). From the messenger's point of view, this scalability is completely transparent: the message initiator can use the same social media tools to spam the world that it uses to micro-target potential terrorists, sovereign citizens, and cyber-bullying victims. This versatility is unique in human experience, and largely overlooked.
Thus, the versatility of social media allows it to naturally support the dynamic maintenance of associations, tailor messaging to appeal to special-interest groups of all sizes with ease, and work effortlessly with media from any and all digital content providers. Whether the goal is to reunite with old friends, satisfy personal vanity, seek immediate gratification, settle old scores, vent hostility, harass, doxx, terrorize, cultivate in-group outrage, promote conspiracy theories, or organize insurrections, social media has proven to be an ideal platform. And social media can carry minimal or no social stigma if identity is obscured. Virtual exchanges, unlike their veridical counterparts, do not provide the social buffering that discourages intemperate or toxic exchanges; social and cultural norms are relaxed, and normal inter-personal and social filters are held in suspension. Social media was by design an ideal outlet for psychopathy and anti-social behavior. How could anyone anticipate a problem?
Perhaps the most famous exemplar of mock reality psychology studies is the Milgram experiment conducted at Yale in the early 1960s. [3] This study was an attempt to measure the willingness of participants to blindly follow the instructions of authority figures, even when the instructions involved potential acts of harm to human subjects. It was a social scientific study that seemed to quantify what Hannah Arendt described as the banality of evil. [4] The Milgram experiment confirmed that exceedingly hostile, inhumane, and anti-social behavior lies in the shadows of a great deal of social interaction and may surface through rather mundane personal characteristics such as willful obedience to authority. Both Arendt and Milgram felt that the assumption that horrific acts are the result of psychological pathologies like psychopathy and sociopathy may well be misguided; horrific acts may follow from far more mundane characteristics. Although simplistic, this overview recognizes the situational dimension of moral agency in human action: that is, the contribution of social environment and structure in circumscribing what individuals consider acceptable behavior.
The Stanford Prison Experiment (SPE) also sought to measure the effect of situational and contextual variables on human behavior. [5][6] The SPE used university students to study the effects of situational variables on participants in a simulated prison environment. Some students were assigned the role of guards, others prisoners. The hostile and sometimes brutal behavior that resulted caused the experiment to be terminated early and led to increased scrutiny of ethical guidelines for experiments involving human subjects. There is no shortage of commentary about this study, [7][8] including a film made in 2015. [9]
My thesis is that there is much to be learned from such mock reality psychology experiments in relation to our experience with social media.
The Milgram and SPE studies reveal a close connection between social context and abusive behavior. Hannah Arendt said as much in her book on Adolf Eichmann, [4] where she observed that when the Nazi Holocaust is understood in a broader, societal perspective, it suggests that situational contexts are sometimes powerful enough to induce apparently normal, stable individuals to engage in abnormal, immoral, or criminal conduct. An interesting spin on this was provided in an experiment by Carnahan and McFarland, [10] which showed that self-selection played a critical role in the resulting behaviors. To quote the authors:
Volunteers for the prison study scored significantly higher on measures of the abuse-related dispositions of aggressiveness, authoritarianism, Machiavellianism, narcissism, and social dominance and lower on empathy and altruism, two qualities inversely related to aggressive abuse. Although implications for the SPE remain a matter of conjecture, an interpretation in terms of person-situation interactionism rather than a strict situationist account is indicated by these findings.
Their argument is cogent and, it seems to me, well-grounded. In both the Milgram study and the SPE, the participants volunteered for the study. While the impact of self-selection on human experiments may be difficult to quantify, it is real nonetheless. That participants in experiments bring with them their personal tendencies, dispositions, attitudes, beliefs, and the like is beyond doubt. However, the degree to which these personality characteristics influence the outcomes of experiments is at best incompletely known. Carnahan and McFarland's study demonstrates that the absence of random selection exaggerates potential negative effects. Their claim that individuals would be unlikely to volunteer for experimental environments likely to produce situations discordant with their personalities seems incontrovertible.
Carnahan and McFarland set up an experimental replication of the SPE that enabled them to estimate the effects of psychological traits such as agreeableness, openness to experience, dispositional sympathy and empathy, sensation seeking, codependence, and altruism, and even of monetary incentives, on a subject's propensity to volunteer for two different environments. They then tested the volunteers for these traits in order to determine whether there were statistically significant differences between the “prison life” and control volunteer groups. The authors concluded that:
...volunteers who responded to a newspaper ad to participate in a psychological study of prison life … were significantly higher on measures of aggressiveness, authoritarianism, Machiavellianism, narcissism, and social dominance than those who responded to a parallel ad that omitted the words “of prison life,” and they were significantly lower in dispositional empathy and altruism.
This research strongly suggests that self-selection may well have significantly influenced the outcomes of experiments like Milgram's and the SPE. This certainly accords with our intuitions and is consistent with Hannah Arendt's belief that alleged “true believers” like Adolf Eichmann may have been motivated as much by psychological tendencies and moral disengagement as by Nazi ideology, racism, and antisemitism. Although the proposition that random selection of participants would minimize the effects of social bias was not a working hypothesis of the Carnahan and McFarland study, it certainly seems plausible. While scholarship like that mentioned above does not speak directly to the pitfalls of zealous attachment to social media, the study of the effects of self-selection and voluntary participation is directly relevant. We emphasize that self-selection is at the core of the design of social media. There is no random selection in choosing Facebook friends, following Instagram hashtags, or lining up for tweets. Self-selection exposes social media to attendant risks similar to those it created in the Milgram and Stanford Prison Experiment studies. Self-selection amplifies the risk of anti-social, abnormal behavior by isolating groups from social and cultural norms of acceptable behavior.
Lanier alludes to the effects of self-selection discussed above when he suggests that social media platforms tend to bring out the worst behavior in some people: e.g., cyber-bullying and shaming, online harassment and character assassination, doxing, trolling, not to mention deadening personal interaction and perverting politics. [12] That criticism certainly seems to fit. But it would be a mistake not to identify the parallels with 911 swatting, spamming, ransomware, digital fraud, online hate groups, conspiracy theory websites, and the use of Internet resources to promulgate disinformation. [13][14] Regrettably, legitimate criticisms of social media are largely ignored by large segments of the public because of the perceived appeal of social media. But we ignore these criticisms at our own peril because of the potential existential threats that social media platforms may pose for society. In addition to being a mock psychology testbed, it must be admitted that social media is a global, unsupervised experiment in naïve crowd psychology. [15]
Here are some specific causes of social media stress.
A. Social media is distinctively Pavlovian.
Even modest reflection reveals the Pavlovian nature of our social media experience. As Lanier put it, “…everyone who is on social media is getting individualized, continuously adjusted stimuli…” [11] He likens social media to an online Skinner box, but one controlled by corporate interests rather than scientific oversight. Think of this as behavioral modification, where users are the guinea pigs in some mad scientist's online experimental cage. Algorithm-driven, adaptive social media relies on positive and negative reinforcement in the same way that Skinner used them in his namesake box: the way you manipulate online subjects into doing what developers want is by feeding them positive stimuli, and vice versa. And we cannot overestimate the effects of negative reinforcement: cyber-bullying, pretexting and catfishing, belittlement, harassment, and the like have become staples of social media that play upon the social anxieties of defenseless victims. In Lanier's words, “…social media amplifies negative emotions more than positive ones, so it's more efficient at harming society than at improving it: creepier customers get more bang for their buck.”
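To make the reinforcement loop concrete, here is a minimal, hypothetical sketch in Python. It is not any platform's actual ranking code; the topic names, reinforcement constants (1.25, 0.9), and exploration rate (0.2) are illustrative assumptions. It simply encodes the mechanism described above: stimuli that provoke engagement are strengthened, and stimuli that are ignored decay.

```python
import random
from collections import defaultdict

# Hypothetical, Skinner-box-style feed: per-user stimulus weights are
# continuously adjusted by positive and negative reinforcement.
class AdaptiveFeed:
    def __init__(self, topics):
        # Every user starts with neutral weights over the available topics.
        self.weights = defaultdict(lambda: {t: 1.0 for t in topics})

    def next_item(self, user):
        w = self.weights[user]
        if random.random() < 0.2:            # occasional exploration
            return random.choice(list(w))
        return max(w, key=w.get)             # otherwise, serve the strongest stimulus

    def record_reaction(self, user, topic, engaged):
        # Positive reinforcement boosts a topic's weight; being ignored decays it.
        self.weights[user][topic] *= 1.25 if engaged else 0.9

feed = AdaptiveFeed(["outrage", "cute pets", "sports", "gardening"])
for _ in range(50):
    topic = feed.next_item("alice")
    # Simulate a user who reliably engages with outrage content and little else.
    feed.record_reaction("alice", topic, engaged=(topic == "outrage"))
print(feed.weights["alice"])   # the "outrage" weight dominates after a few dozen rounds
```

Nothing more sophisticated than this is needed for the loop to converge on whatever provokes a reaction from the user, which is precisely the behavior-modification point Lanier is making.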
B. The “lock-in” network effect is the pandemic of social media
“Lock-in” is the term used in the networking community to denote an environment in which there are strong disincentives to stop or switch services. Lock-ins have an effect similar to frequent-traveler programs, but they rely on a reverse psychology: instead of offering premiums, lock-ins deliver disincentives subtly, through the ‘fear of missing out’ (FOMO). Experience with lock-ins confirms the potential efficacy of disincentives. In this case, FOMO imitates addiction.
FOMO-based enticement leads to disincentive-based monopolies – another nearly unique characteristic of social media. Platform design specifically excludes wiggle room for shared loyalties – you're either in or out. Because of the considerable peer pressure to remain locked in to the platform shared by the “in crowd,” continued association results from a kind of cognitive blackmail that subverts individuality and free will in some people. Lanier observed that “there isn't a real choice to move to different social media accounts. Quitting entirely is the only option for change.” Well put.
C. Social media is behavior modification on steroids
Three time-honored adages apply to the understanding of commercial social media platforms: there is no such thing as a free lunch; if you're not paying for the product, you are the product; and follow the money.
Social media is neither “free” nor beneficent. Platform owners and executives are beholden to their customers – and their customers are the organizations that pay them. Social media platforms receive money from advertisers who want to modify the purchasing behavior of users. They also receive money from marketers that re-sell or re-purpose user information to other organizations for second-order manipulation (e.g., micro-targeting). But, and this is the important point, they do NOT receive money from users. The users offer up changes in their behavior in exchange for the opportunity to use the service. Behavior modification is the commodity, and users are in a very real sense the product. This is the business model of social media.
But there is also a more subtle, non-commercial form of behavior modification: the modification of user behavior by other users – through reactions to posts, attracting new followers/subscribers, changing attitudes, supporting causes, stimulating others to action, and so on. So, while the primary behavior modification market is the economic part of the business plan, there is also a secondary, non-economic behavior modification “aftermarket” that enhances the primary market and creates additional perverse incentives for participants. Some platform users are willing to participate in the primary market in order to have the opportunity to influence the behavior of other platform users in the secondary market. Perhaps a better way to think of the social media model is as a multi-tiered behavior modification environment, where the higher tiers are based on economic manipulation and the lower tiers on psycho-social manipulation. In any event, social media is all about behavior modification in one form or another.
D. The social media ecosystem seems ideal for galvanizing social outliers
Let's suppose that you want to organize some significant social disruption: e.g., insurrection, terrorist attack, assassination, coup, etc. You estimate that there are hundreds of potentially willing co-conspirators, but you don't know who they are, where they are located, or how to reach them. How would you design a system to communicate with them and get them to join your cause? Specifically, what characteristics should the system have?
First, we have to rule out mass media, for there is no reason to assume that its reach will extend to many members of the miniscule target audience, and mass media is certain to draw unwanted attention to the illegal/anti-social cause. The optimal approach should be both granular (narrowcasting rather than broadcasting) and tribal (a high probability of reaching individuals with shared objectives and dispositions). But how are we to individuate members of this imagined tribe? The solution is to rely on the same self-selection that we observed in the mock psychology experiments.
Our observations call for the use of a communication platform that has characteristics like these:
(1) granular reach, so that a message can be narrowcast to audiences of arbitrary size rather than broadcast indiscriminately;
(2) tribal, affinity-based group formation driven by the self-selection of the participants themselves;
(3) anonymity, or at least easy obfuscation of identity, to minimize social stigma and accountability;
(4) trivial scalability, so that the same tools serve one recruit or thousands; and
(5) negligible transaction friction and cost for creating, joining, and messaging groups.
(1)-(5) seem worthy characteristics for our ideal communication platform. What comes to mind?
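To underscore how little machinery characteristics (1)-(5) actually require, here is a minimal, hypothetical Python sketch in which the same send() primitive serves both a global broadcast and a self-selected narrowcast. The handles, interest tags, and function names are fabricated for illustration and correspond to no real platform's API.

```python
from dataclasses import dataclass

@dataclass
class User:
    handle: str
    follows: set     # self-selected interests (hashtags, groups, accounts)

# Fabricated user base for illustration only.
USERS = [
    User("alice", {"gardening", "astronomy"}),
    User("bob",   {"prepping", "sovereign_citizen"}),
    User("carol", {"sovereign_citizen", "astronomy"}),
]

def audience(users, tags=None):
    # No tags: broadcast to everyone. With tags: narrowcast to users whose
    # self-selected follows intersect the target profile.
    if not tags:
        return list(users)
    return [u for u in users if u.follows & tags]

def send(message, users, tags=None):
    for u in audience(users, tags):
        print(f"@{u.handle} <- {message}")

send("Public announcement", USERS)                             # reaches all three users
send("Meeting Thursday.", USERS, tags={"sovereign_citizen"})   # reaches only bob and carol
```

The message initiator does nothing to individuate the tribe; the recipients have already done that work by self-selecting what to follow.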
E. Social media and the 5-Ds of sociopathy
Social media is also an ideal communication environment for what I'll call the 5-Ds of sociopathy: disinformation, deception, dishonesty, delusion, and duplicity. The transaction friction for cultivating and distributing deception, lies, disinformation, fake news, post-truths, etc. is virtually nil. Goals of unbridled self-promotion, dissemination of fake news (in the journalist's, not the politician's, sense of the term), conspiracy theories, scams and misrepresentations, libel, slander, and solicitations to participate in illegal acts are easily accommodated by social media – especially when the perpetrator hides behind a cloak of anonymity through rogue accounts. Social media, along with other un-vetted online media sources, is epistemically vacuous: there is no fact-checking, vetting, or counter-balancing of communication because there is no accountability.
The precursors of modern social media were largely envisioned in the last quarter of the twentieth century as enabling technologies for online engagement that might be used for dynamic, interactive, and participatory environments, independent of bias, social stigma, and class distinctions. It was hoped that this would be a great leap forward for equal opportunity – certainly an admirable goal. But technologists were focused only on the positive potential of enriching online experiences – through such things as video conferencing, idea-sharing, collaboratories, computer-supported cooperative work, and anonymous engagement – and not on potential misuse. Even then, some scholars recognized potential downsides such as intellectual distraction, loss of privacy, subversion of intellectual property regulation, and a loss of quality in inter-personal interaction, but they were largely drowned out by the enthusiasm. History has shown that while many of these fears were justified, they were also too narrow. Ironically, history has also shown that many of these social ills were anticipated by George Orwell [17] and Aldous Huxley [18] nearly a century ago. But that's another story.
More to the point, absent any significant experience with the use of online technology to facilitate social engagement and democratize social organization, technologists were caught up in the euphoria of innovation and forged ahead at full speed. To put this online innovation in context, it should be remembered that in 1990 the email protocols SMTP, POP, and IMAP were less than a decade old; the World Wide Web was still in development; Amazon.com had yet to be created; and Gmail and Facebook were nearly 15 years in the future. Hindsight may be accurate, but it isn't always that informative when de-contextualized. It simply never occurred to most technologists that social media platforms would be weaponized to subvert democracy, spread disinformation, and foment hate. This is just another corollary to Langdon Winner's observation that technologies may take on unethical and anti-social qualities that go unnoticed by their developers. Winner's thesis that society's concern should not be limited to the ethical intent of a technology, but should extend to the full range of its potential effects, is unassailable. [19] The fact that very few could anticipate that social media would become an ideal platform for nurturing motivated reasoning and reinforcing cognitive dissonance speaks volumes about deficiencies in research methods.
In this analysis of social media as a technology platform, we noted that there were many indicators of attendant societal risks of computing and networking technologies that were left unattended. Society was (and remains) ill-prepared for the velocity and revolutionary nature of some of the most cutting-edge technologies. It is not widely acknowledged that a technology does not have to be developed in bad faith to have negative consequences for society. All too often we allow cognitive bias to distort our assessment of technological innovation, withholding criticism until doing so becomes demonstrably irrational. Consider that perceived useful innovations like leaded gasoline, chlorofluorocarbons, DDT, asbestos, styrofoam, nuclear fission, hydrogenated oils, and tobacco products all continued to be widely endorsed long after negative consequences were scientifically documented. [20] Of course, this innovation tenacity derives both from an attachment to perceived practical advantage and from a refusal to accept reality. It is the latter cognitive bias that is hardest to predict and explain, and it is related to the earlier rush to innovation without much reflection.
So the existential threat that social media has produced is not without precedent, but it has few technological rivals in terms of rapidity and ubiquity – the one possible exception being another epistemic bridge to nowhere, generative AI. Over the past half century we seem to have been racing toward an age of un-enlightenment – not dissimilar from the dystopias predicted by Orwell and Huxley, who both foresaw the consequences of catering to the most base, artless, and unsophisticated of human drives: e.g., self-importance, instant gratification, unconditional belief reinforcement, revenge, epistemological relativism, acceptance of anti-social ideas, anti-science, and, most of all, the defense of willful ignorance as an unalienable right. In the end, the cause of our crisis is hydra-headed along the lines discussed above.
We conclude with a quote from Jaron Lanier: “Social media is biased, not to the Left or the Right, but downward.” [11]
[1] J. Gottfried, Americans' Social Media Use, Pew Research Center Report, January 31, 2024. (available online: https://www.pewresearch.org/internet/2024/01/31/americans-social-media-use/ )
[2] J. Brown, Is social media bad for you? The evidence and the unknowns, BBC online, 4 January, 2018. (available online: https://www.bbc.com/future/article/20180104-is-social-media-bad-for-you-the-evidence-and-the-unknowns )
[3] S. Milgram, Behavioral Study of Obedience, Journal of Abnormal and Social Psychology, 67:4, pp. 371-378, 1963. (available online: https://psycnet.apa.org/fulltext/1964-03472-001.pdf?auth_token=1d253fd7013d912325038b7020c28db39fe79560&returnUrl=https%3A%2F%2Fpsycnet.apa.org%2FdoiLanding%3Fdoi%3D10.1037%252Fh0040525 )
[4] H. Arendt, Eichmann in Jerusalem: A Report on the Banality of Evil, Penguin Classics, New York, 2006.
[5] P. Zimbardo, The Lucifer Effect: Understanding How Good People Turn Evil, Random House reprint edition, New York, 2008.
[6] R. Sword and P. Zimbardo, 50 Years On: What We've Learned From the Stanford Prison Experiment, Psychology Today, August 16, 2021. (available online: https://www.psychologytoday.com/us/blog/the-time-cure/202108/50-years-what-weve-learned-the-stanford-prison-experiment )
[7] The Stanford Prison Experiment official website https://www.prisonexp.org/ .
[8] E. Dolan, Intro to psychology textbooks gloss over criticisms of Zimbardo's Stanford Prison Experiment, Social Psychology, September 7, 2014. (available online: https://www.psypost.org/2014/09/intro-psychology-textbooks-gloss-criticisms-zimbardos-stanford-prison-experiment-27970 )
[9] S. Highfill, The Stanford Prison Experiment: Billy Crudup talks his new film, Entertainment Weekly, July 17, 2015. [available online: https://ew.com/article/2015/07/17/stanford-prison-experiment-billy-crudup/ ]
[10] T. Carnahan and S. McFarland, Revisiting the Stanford Prison Experiment: Could Participant Self-Selection Have Led to the Cruelty?, Personality and Social Psychology Bulletin, 33:5, 2007, pp. 603-14. (available online: https://journals.sagepub.com/doi/abs/10.1177/0146167206292689 )
[11] J. Lanier, Ten Arguments for Deleting Your Social Media Accounts Right Now, Picador, New York, 2018.
[12] S. Hattenstone, Tech guru Jaron Lanier: ‘The danger isn't that AI destroys us. It's that it drives us insane', The Guardian, 23 Mar, 2023. (available online: https://www.theguardian.com/technology/2023/mar/23/tech-guru-jaron-lanier-the-danger-isnt-that-ai-destroys-us-its-that-it-drives-us-insane )
[13] H. Berghel, The QAnon Phenomenon: The Storm Has Always Been Among Us, Computer, 55:5, pp. 930199, May, 2022. (available online: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=9771124 )
[14] H. Berghel, 911 Swatting, VoIP, and Doxxing, Computer, 56:3, pp. 135-139, 2023. (available online: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10058781 )
[15] H. Berghel, Social Media and the Banality of (Online) Crowds, Computer, 55:11, pp. 100-105, Nov. 2022. (available online: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=9928241 )
[16] H. Berghel, Malice Domestic: The Cambridge Analytica Dystopia, Computer, 51:5, pp. 84-89, May, 2018. (available online: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=8364652 )
[17] G. Orwell, 1984, Signet Classic, New York, 1961.
[18] A. Huxley, Brave New World Revisited, Harper Perennial Modern Classics reprint, New York, 2006.
[19] L. Winner, Do Artifacts Have Politics?, Daedalus, 109:1, pp. 121-136, Winter, 1980. (available online: https://www.jstor.org/stable/20024652 )
[20] N. Oreskes and E. Conway, Merchants of Doubt: How a Handful of Scientists Obscured the Truth on Issues from Tobacco Smoke to Climate Change, Bloomsbury Reprint, New York, 2011.