Journalists as heroes in the horror of a disinformed society
- Brianna Warrant
- Oct 27, 2023
- 3 min read
Have you ever fallen victim to a clickbait headline, something so unbelievable your jaw dropped, making you click and read the article? I know I have.
Clickbait is an example of fake news. Something to rile an audience up. Something so outlandish that you would never imagine someone making it up, and therefore you believe it to be true.
According to Chapter 9 of "Mobile and Social Media Journalism" by Anthony Adornato, fake news is defined as “the deliberate fabrication of information with the intent to deceive.” The Cambridge Dictionary’s definition also states, “False stories that appear to be news, spread on the internet or using other media, usually created to influence political views or as a joke.”
Adornato said that because the term fake news is often used to dismiss whatever doesn't align with "your side," it is more useful to speak of misinformation and disinformation, two types of "information pollution choking public discourse." I agree with Adornato on this point, although the two often go hand in hand.
Although these definitions are broad, I believe there is more to each person's motivations for posting disinformation. When people genuinely believe something is true, the intent to deceive may not be the motivator; they may spread the information precisely because it is so unbelievable that they want more people to know about it. Or someone simply wants the clicks, chasing fame or money.
The spread of disinformation appeals to our emotions. If it lines up with our viewpoint, we can use that emotional response to discredit the other side and make our own seem stronger.
It’s a game of emotion and addiction.
Adornato wrote that the rampant spread of disinformation is due in large part to the technology companies and social media platforms on which news is shared. These companies have been criticized for failing to prioritize factually accurate content.
In response, platforms have developed features that allow users to flag questionable content, and Facebook also partners with third-party fact-checking organizations to identify and review false content.
Is this enough? The answer is a hard no.
Disinformation is designed to take advantage of how the system works, almost like a virus. People find ways around these protective features through algorithms and SEO tactics. Technology companies also profit from these false posts, so do they really want to stop the spread of disinformation entirely?