How Opinions Are Influenced

We are in the clutches of a new left-leaning news revolution!

It may be controlled by a small number of people and a little information technology. Forget the neo-Nazis or some hard-right fifth column. This is not their game. You think ISIS uses the Internet in sophisticated ways? They are useful idiots compared to the really serious players, those who not only recruit but also mass-message and try to control the agenda.

Let us suppose a quite reasonable scenario. You have a small group of associates and object to the way the world is developing. You are all willing to protest against a particular event, but there are not many of you, certainly not enough to make a viable difference. What to do?

The old way: Write letters to everyone and every news agency to protest or complain. Small numbers are usually ignored, and the protest is dismissed as the rants of a few malcontents. Eventually the group goes back to their jobs and gets on with life.

Today: In the digital age, the age of mass communication, you have programs that generate unique letters, comments, and opinions based on preselected keywords. Those keywords find the relevant conversations on numerous forums (because that is what algorithms are designed to do), and the program then sends the messages using thousands of valid email addresses (databanks of addresses are on sale just about anywhere). Ditto for Twitter accounts. This multiplies the small group by hundreds, thousands, or even millions.
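A minimal sketch, in Python, of the kind of template-driven generator described above. Every phrase, name, and topic here is invented for illustration; the point is only that mixing a few fragments yields an endless supply of letters that each read as a personal complaint.

```python
import random

# Every phrase and name below is invented for illustration only.
OPENERS = ["I am writing to express my deep concern about",
           "As a long-time resident, I must object to",
           "I cannot stay silent about"]
TOPICS = ["the proposed development", "the recent policy change", "this decision"]
CLOSERS = ["I expect a prompt response.", "Please reconsider.", "This cannot stand."]
SENDERS = ["A. Smith", "B. Jones", "C. Patel"]  # a real campaign would draw on bought address lists

def unique_letter() -> str:
    """Assemble one 'unique' protest letter by mixing template fragments."""
    return (f"{random.choice(OPENERS)} {random.choice(TOPICS)}. "
            f"{random.choice(CLOSERS)}\n-- {random.choice(SENDERS)}")

# A handful of people can produce thousands of distinct-looking letters in seconds.
for _ in range(3):
    print(unique_letter(), "\n")
```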

Such a program could generate and send a protest letter from me (or you) on some issue I don’t care about or even disagree with. I may never know. News agencies do not have the resources to check legitimacy. They just want a story that sells, so they just count!

Of course, news agencies could ask senders to confirm the emails they send. That is easy to do, yes, but also easy to counter. Google already uses software that scans for multiplicity and duplicity to make its ad targeting more effective. See https://mitpress.mit.edu/books/obfuscation for more on this. But I digress.

Large service providers can create millions of false email accounts, and each of these can open accounts on Facebook, Twitter, LinkedIn, and so on. (Wells Fargo and its fake bank accounts spring to mind, as does Twitter being sanctioned for using bots to inflate its number of ‘users’; CNN was also caught out doing this about six months ago.) These accounts can then send thousands or millions of messages supporting or rejecting this situation or that. A large organisation receiving many thousands or millions of messages will use deep learning to read each message for favourability, perhaps picking out keywords and themes, and then produce a count. In this scheme, machines create the messages and machines read them.
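On the receiving side, the “reading” can in practice be as crude as keyword matching plus a tally. A rough sketch, with invented keyword lists standing in for whatever model an organisation might actually train, of how an inbox is reduced to the count that gets acted on:

```python
from collections import Counter

# Invented keyword lists standing in for a trained model.
SUPPORT_WORDS = {"support", "agree", "welcome", "approve"}
OPPOSE_WORDS = {"object", "oppose", "reject", "disgrace"}

def classify(message: str) -> str:
    """Crude favourability check: which keyword set appears more often?"""
    words = [w.strip(".,!?").lower() for w in message.split()]
    support = sum(w in SUPPORT_WORDS for w in words)
    oppose = sum(w in OPPOSE_WORDS for w in words)
    if support > oppose:
        return "favourable"
    if oppose > support:
        return "unfavourable"
    return "unclear"

inbox = [
    "I fully support and welcome this proposal.",
    "We object to this disgrace and reject it outright.",
    "Please send more information.",
]
print(Counter(classify(m) for m in inbox))  # the count the organisation actually acts on
```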

The organisation could be the BBC, Channel 4, or some random MP or council department. The key point is the volume: too large for the messages to be read in full, but not so large that the sheer count looks suspicious, and not so small that the message’s theme is simply ignored.

Completely fake websites can be created to recruit cadres of useful participants (useful idiots again) to attend rallies or protests of any sort. The Remain protests and the Women’s marches of 2016/2017 spring to mind. These programs could also respond to blog entries in any desired fashion, perhaps with a short confirmation message thanking the user for participating.

In this way, just a handful of people can shape the news or government – or ignite a revolution.

Indeed, the so-called Fake News may not be fake at all; rather, the organisations may be faked out, reacting to a blast of fake messages. These blasts shape what they cover and what they do. It even explains how they can suddenly jump from one topic to a completely different one, like a shoal of fish changing direction simultaneously and almost instantly, as if guided by a hidden hand, which they are. The technique simulates public opinion, and eventually drives it.

Turing Test Redux

The classic Turing Test asks a person to decide, through a series of questions, whether they are talking to a machine or another person. The new test is this: can any organisation, by sending a single return message, decide whether a particular message was sent by a machine-driven network or by an actual person?

Doubtful.
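To make the doubt concrete, here is a hypothetical sketch of about the only single-return-message check an organisation could automate: send back a trivial challenge and see whether the reply is plausible. Both the challenge and the scripted answer are invented; the point is that the sending script passes it as easily as a person would.

```python
import random

def make_challenge():
    """One return message: a trivial question any person could answer."""
    a, b = random.randint(1, 9), random.randint(1, 9)
    return f"To confirm you are a person, reply with the sum of {a} and {b}.", a + b

def bot_reply(challenge: str) -> str:
    """The sending script parses the same challenge and answers it automatically."""
    numbers = [int(tok.strip(".,")) for tok in challenge.split() if tok.strip(".,").isdigit()]
    return str(sum(numbers))

question, expected = make_challenge()
answer = bot_reply(question)
print(question)
print("Reply:", answer, "(accepted)" if answer == str(expected) else "(rejected)")
```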

The great unknown in all this is knowledge: knowledge of the sheer power of machine-driven action, for which language is not a problem, for which numbers are not a problem, for which verification is difficult, and for which the public’s presumed ignorance of how it is done is paramount. The massive IT firms know this all too well.

This scenario is not only technically possible but has probably happened. Is it happening now? Who’s going to admit they did it, much less publish it?
