Leaving Facebook (part 1)

September 2020

Over the past few years I’ve become increasingly uneasy with Facebook and social media in general. I’ve had an account for over a decade now (since 2008) and it has become such an integrated presence in my life that it’s difficult for me to imagine it otherwise. But year after year, new revelations become public, new scandals break, another terrible event gets exacerbated, more misinformation spreads… My insight into the technological, psychological and political workings of social media, on us as individuals and as a society, has grown. And the negative conclusion is inescapable: it’s not good.

I know that I should quit - I’ve known that for some years now. I’ve already made quite a few changes to my online behaviour: I cut out a whole bunch of Google-owned apps, got rid of WhatsApp and switched to Signal, and removed the Facebook app and Facebook Messenger from my phone. But in the end, I still have and use Facebook on a daily basis. There are friends I keep in contact with, events to keep track of, updates about things I like (literally, in the case of Facebook). The idea of leaving it all behind gives me nothing short of anxiety: I’ll lose so much.

Therefore I have resolved to quit Facebook by the end of this annus horribilis.

Reasons

There is a plethora of good reasons to have issues and doubts about the use of Facebook, and I think they point towards the obvious conclusion: don’t be on Facebook. The problem is that once you’re in, it’s difficult to get out. The easiest thing would have been never to join in the first place. There are a few people I know who had the foresight (even if their reasons were not necessarily those that I have today) to never join, and I guess that some sort of “hats off” to them would be appropriate. “Better never to have been… on Facebook.” But since I’m past that point, the threshold to overcome is quite a bit higher.

Ownership of data: you are the product

While Facebook is ostensibly a social media service, that’s not completely true. It’s an advertising agency first (and one of the largest in the world at that) that uses its social media services to gather its input data and as a digital billboard projected to its billions of ‘users’. The service it renders to us is not profitable in itself but is still provided for free, creating a need to monetize it in another way. Selling its users’ time spent on the platform is how this works: advertisers pay Facebook for a number of views that can be very specifically targeted and tailored to the intended audience. They pay Facebook, and they are the customers, not the users. Or as the adage goes: “If you’re not paying for it, you’re the product.”

We - willingly - input huge amounts of data into our profiles, so as to make better use of the platforms. But legally, the data is stored on the platform’s servers, over which they retain ownership - even if they don’t require you to sign over your copyright to them, which they often do. But who reads those user agreements anyway? This is exactly how they can keep operating legally, and why (their) ownership of (your) data is so important. Solutions for this exist, and we need them, but that means we have to start accepting the cost of data ownership.1

Privacy: you are not the client

The data you generate is being used against you. As I said before, it is used to tailor ads specifically for us and to deliver the tools for very precise targeting. Based not only on what we input ourselves (mainly text and images), but also on how we interact with the platform, what we do outside of it (via those tracking cookies every website keeps bugging us about) and what we do in the outside world (GPS tracking). This is very invasive, to say the least. Many of us are at least vaguely aware of it, but seem able to shrug it off as not important to us personally (as do I) - but it most definitely is.

We’re under some sort of collective delusion that our data is relatively safe, that the ads are benign, that Facebook and Zuckerberg genuinely regret the bad things that have happened. My guess is that this is because of how we use the platform: as something intimate, between friends, family and acquaintances. We rarely see the other side of Facebook, only our connections on our timeline. This feels personal and somewhat limited, like some sort of online village mentality where you cross paths with people you (at least) recognize and with ads that are relevant to your own life. In the past your local bakery sponsored the football team, displaying their name on t-shirts; now you see their ads on your timeline - what’s the difference?

But we have to face the facts: financial logic dictates that these companies turn an (ever-increasing) profit, so they have every incentive to use and abuse the data we supply them with to maximize the efficiency of their algorithms, making their service all the more valuable to their real customers. Privacy concerns stand in the way of this goal and are thus important to them in name only (and barely even that).

Mental health: sadness by design

We’ve known this for longer than social media have been around: displaying unrealistic and often unattainable imagery has a severe impact on people’s self-perception and sense of self-worth. For decades, criticism has been leveled against the fashion industry on these points, with still only token rectification to show for it.

But what happens when it’s not a handful of designers, journalists and editors, limited to catwalks and magazines, who have a say in this, but hugely powerful algorithms that force upon us a logic demanding the impossible from ordinary users? Enter influencer culture: in order to gain views and clicks (and the sponsorships that depend on those numbers), influencers present a certain (mostly positive) image of their lives. In essence it’s a form of exhibitionism: they open their meticulously curated lives up to us, the viewers. We’re all very aware of this, but in this constant mental battle over our attention, we’re the ones who are losing.

And it’s not just the influencers, whose livelihoods derive from this, but also the ‘common user’. We too succumb to the algorithm by way of ‘likes’: the system demands that we show positive things, that we show how good our lives are (and better than someone else’s, maybe). A constant pressure to prove to our network and the world at large that our embellished (or even imaginary) lives are just as desirable as those of our supposed peers, if not more so. And if not, if we think we’ve seen through the charade, we let the voyeurs in to feast upon our misery. We’ve all entered the competition and it’s wearing people out. And weary people become less critical… It’s a win-win, but not for us.

The new puritanism: big tech loves conservatism

At the end of 2018, Tumblr (where I had two blogs that I actively used) decided to ban pornography. This was a huge shock for the community: until then it had been one of the last big places that allowed creative content of this kind and was therefore a safe haven, not just for artists, photographers and bloggers, but also for minority groups (especially LGBTQ+) and sex workers. That made it a vibrant community with lots of content and an excellent platform for people wanting to share their niche art. The new ban put a stop to all that, and even though my own blogs didn’t violate any of the new standards2, I decided to quit the platform in protest.

This is but one minor example, and Tumblr was never as omnipresent as some other companies are. But it shows how much say they have in deciding for us what is acceptable and what is not. And most, if not all, of these tech giants are, weirdly, very puritanical and sexist. Nudity is only allowed if it’s deemed tasteful (i.e. not deviating from patriarchal beauty standards) or if the nipples are not “female-presenting” (in the case of Tumblr). There has been a disproportionate crackdown on non-heteronormative imagery (in a society where images are central to our discourse) and an enforcement of the dominant, American mores on everything. Because of this, there is less artistic freedom, less space for minorities and less room for discussion of these topics.

It’s a fine line for a social network to straddle, for sure: how do you keep these spaces family-friendly? But there are better ways to deal with it than cracking down on material just because it’s outside the norm. Active, human moderation and federation (see part 2) are possible answers to this phenomenon.

The algorithm: echo-chambers and manipulation

A simple psychological mechanism explains quite a bit: as we’ve seen, the goal is to maximize data input from users, so there is an incentive to choose whatever maximizes user interaction. This applies both to Facebook itself and to the corporations, people and organizations on it, creating a feedback loop. And as it turns out, posts that generate indignation, anger and division produce huge amounts of interaction. This not only creates political echo-chambers, but also breeds a certain extremism.3

While some tech companies have said they would intervene, the major issue remains that the fundamental workings of social media encourage these trends, and this will only be reversed by forceful, radical intervention in the sector. Self-regulation will never remedy the underlying, systemic problems.

Democracy: gradual decline

Ever since the rise of social media and manipulative tech giants, we’ve seen the steady erosion of our (admittedly flawed) democracies. Faith in the media as trustworthy has declined, and so-called ‘fake news’ and conspiracy theories are on the rise. The value of truth and the meaning of words have become fuzzy. And now, not for the first time, the right has expertly used this, in tandem with the decline of the economic system under ‘late-stage’ capitalism, to further their cause. Targeted ads make it possible to influence the fractured (working) classes, while the left specifically needs those classes to be united in order to exercise their power. I fear that, at a minimum, we’ll see a continued hollowing-out of our democracies and the rule of law. That means regression on all fronts, increasing repression and an overall grim future.

Where, at certain times during the 20th century, it could be argued that capitalism and democracy went hand in hand, the two have since parted ways. It is not, and never has been, in capitalism’s intrinsic interest to support democratic principles. It is, in my opinion, folly to hope for a solution to these issues within a capitalist framework. Any solution will have to run contrary to this profit-oriented logic and will need to include a return to democratic participation in shaping the fundamentals of our society, including the digital.

I should leave

All this leads to the conclusion that has been looming over my thoughts these last few years: the best thing to do is to leave the corporate social media platforms. The companies themselves won’t change, as they have no incentive to. The only thing that could arguably create that incentive is for them to lose users, making their platforms less valuable to advertisers.

But if I (and other like-minded people) leave the platform, won’t I be abandoning online discourse to the very elements I object to? Do I have an obligation to stay and try to counteract them? To try and have some sort of positive influence? I think the positive influence I can have, if any, is negligible. It’s a delusion to think that my efforts would outweigh the algorithms. And even then, would it be worth it for me? It costs a ton of mental energy to combat the 24/7 propaganda and advertising machine. These are things that shouldn’t be asked of any one individual to endure.

Is this a condemnation of all social media? No, I think there are other, better ways to harness the potential of networks and the internet. More on that in a future blogpost. Is it certain that I won’t return? No, I can’t say for sure that I’ll be able to keep my resolution. Maybe I’ll return eventually, with a clean slate or under a pseudonym. But at least I will have tried.


  1. I prefer to think about the cost the users (as individuals or, preferably, as a community) have to bear instead of the price the tech giants should have to pay (by way of a data-tax for example), because it’s more empowering to the users. See also this article by the EFF on the subject. [return]
  2. Some of my pictures were falsely flagged as inappropriate, and although I could and did appeal, those appeals were eventually rejected for no good reason, making it impossible to reinstate some of my personal photography. [return]
  3. I differentiate extremism, which is irrational and accelerationist, from radicalism, which is principled and restrained. [return]