Eskenzi Cyber Book & Film Club

Eskenzi Club – Tweet Chat: The Social Dilemma

April 13, 2021

Released in 2020, the documentary-drama ‘The Social Dilemma’ offers a thought-provoking and alarming depiction of our reality today. The film exposes the ruthlessness of tech giants refining their marketing algorithms for monetary gain, and the consequences that have emerged as a result: from eliciting mental health issues and nurturing addiction to promoting the spread of fake news and threatening democracy.

Whilst it certainly offers a somewhat one-sided take on the social media phenomenon, the film nevertheless raises a number of important concerns that are worth addressing.

As part of Eskenzi PR’s latest initiative, the Eskenzi Cyber Book & Film Club, cybersecurity and cyberpsychology experts were invited to take part in a Tweet Chat to discuss some of these very issues. Specifically, we were joined by Brian Higgins, Director at ARCO Cyber Security and Security Specialist at Comparitech; Anete Poriete, UX Researcher and Cyber Psychologist at CyberSmart; Madeline Howard, Director at Cyber Cheltenham (CyNam); and Neil Stinchcombe, co-founder of Eskenzi PR.

To read all of their insights, check out the Eskenzi Twitter account or search the hashtag #EskenziClubSD!

What is the biggest problem with social media?

In the same way the documentary began, the event kicked off with a rather broad question:

“What do you think is the biggest problem with social media today? Is there a problem?”

A general consensus suggested that a lack of regulation and ownership of responsibility has played a central role in the failings of social media.

For Brian Higgins, part of the problem can be attributed to our ignorance. Indeed, if we are unaware that we are in the matrix, how can we hope to solve the issue, let alone recognise the problem in the first place?

Social Media: Tool or Manipulation Instrument?

During the film, Tristan Harris, former design ethicist at Google and co-founder of the Center for Humane Technology, suggested that we had “moved away from a tools based technology environment, to an addiction and manipulation based technology environment. Social media isn’t a tool waiting to be used. It has its own goals, and it has its own means of pursuing them by using your psychology against you.”

The argument suggests that algorithms and artificial intelligence are increasingly adept at understanding who we are, and are leveraging this knowledge to curate our reality as well as influence our thoughts and decisions.

In addition to algorithms, however, is the platform offered to ‘influencers’.

Unfortunately, it seems our habit of consuming bite-size information has also made us more susceptible to manipulation, as both our attention spans and our critical thinking are negatively impacted.

To Intervene or Not to Intervene

Recognising the imperfect nature of social media design, then, we wondered whether intervention by tech giants is required, particularly with regard to disinformation and misinformation.

Yet the issue of misinformation is not always clear cut. In fact, a recent study conducted by Facebook suggests that it is not necessarily false information that creates problems, but content that doesn’t “outright break the rules”.

The study sought to understand how ideas spread on social media and how this was contributing to Covid-19 vaccine hesitancy. Despite the platform banning false and misleading statements about the vaccine, many posts, including expressions of concern or doubt, are too ambiguous to be removed yet have been found to play a harmful, contributing role in hesitancy. This is especially true when the message is promoted by influencers and concentrated within like-minded communities that act as echo chambers.

Anete Poriete explains this further:


To address the issue, Madeline Howard believes proactive engagement is necessary.

This then led us to question whether it is ever okay to amplify a message.

The Privacy Paradox

The news is full of concern about privacy, and we all claim to think it very important, yet the way we act often contradicts this. There appears to be a cognitive dissonance: we say we value our privacy, and yet we continue to use services such as Facebook that undermine it. Moreover, we often choose to overshare details of ourselves and our lives on these very platforms.

Interestingly, our offline behaviours also make us susceptible to cybercrime.

Recommendations and Solutions

To conclude the Tweet Chat, we asked the experts what they thought about the use of verified ID in helping to make us safer online and the concept of ethical-by-design.

In response to verified ID, the verdict was clear: it would encourage accountability. Nevertheless, as Anete points out, anonymity can also serve as a safety measure, so ID verification should be a matter of choice. Neil added that the security of one’s identification data should also be considered before ID verification is implemented on a wider scale.

With respect to the concept of ‘ethical-by-design’, it was agreed that ethics is ever-evolving and subjective, and should therefore be regularly re-evaluated. The key is ensuring that technological design works in the user’s best interest and operates with transparency.

A Concluding Note

While the Tweet Chat mainly focused on the negative consequences of social media, it is important to recognise that it has also brought us many benefits, which cannot and should not be neglected. We hope this discussion provided you with some food for thought.