The New York Times Magazine recently profiled video editor Josh Owens, who offered a revisionist take on Infowars.com’s conspiracy theory celebrity Alex Jones. Slate‘s Aymann Ismail was more sceptical of Owens’ subjugation claims. When I worked for the former Disinformation website, its publisher sought (unsuccessfully) to negotiate a book deal with Jones early in his career. This poses an intriguing counterfactual: what if the ‘meme warfare’ subculture surrounding Jones and conspiracy theories had been more liberal than alt-right in nature? How would that have affected the outcome of the 2016 United States election?
Australian MP Andrew Hastie – Chair of the Parliamentary Joint Committee on Intelligence and Security – has called, in a new paper published by the Henry Jackson Society and the Konrad Adenauer Stiftung think tanks, for Australia to develop political warfare capabilities. Hastie contends that this is necessary for Australia to counter great and rising power competition (also a focus of ‘fourth generation’ scholarship on strategic culture) and its potential threats to Australian liberal democracy. In particular, Hastie focuses on the impact that sophisticated disinformation, misinformation, and hybrid warfare strategies can have. Hastie’s stance illustrates a military-informed, realist approach for Australia – one that will prompt further academic, policymaker, and public debate.
The New Yorker‘s Joshua Yaffa has profiled Russian television producer Konstantin Ernst who, as Channel One’s general director, is an important media and propaganda expert for the Putin Administration:
“Baldly false stories, in the right doses, are not disastrous for Channel One; in fact, they are an integral part of the Putin system’s postmodern approach to propaganda. In the Soviet era, the state pushed a coherent, if occasionally clumsy, narrative to convince the public of the official version of events. But private media ownership and widespread Internet access have made this impossible. Today, state outlets tell viewers what they are already inclined to believe, rather than try to convince them of what they can plainly see is untrue. At the same time, they release a cacophony of theories with the aim of nudging viewers toward believing nothing at all, or of making them so overwhelmed that they simply throw up their hands. Trying to ascertain the truth becomes a matter of guessing who benefits from a given narrative.”
On my reading list is historian Robert Service’s recent book Kremlin Winter: Russia and the Second Coming of Vladimir Putin (New York: Pan Macmillan, 2019). It’s important to situate the current debates on disinformation, misinformation, hybrid war, and political war in the context of Russia’s renewed power projection, and of a Russian strategic culture that uses asymmetric tactics. Service’s analysis of current history will inform my ongoing research agenda.
The Guardian Australia’s Transparency Project has published research on how clandestine actors use the social network Facebook to run disinformation campaigns (podcast). Facebook has also monetised Islamophobic pages. The Australian Labor Party suggests that Facebook and other social media giants could be broken up as an antitrust measure to halt or to prevent misinformation.
Richard Stengel was the Obama Administration’s Under Secretary for Public Diplomacy and Public Affairs. Stengel’s new book Information Wars: How We Lost The Global Battle Against Disinformation and What We Can Do About It (New York: Grove, 2019) is on my reading list for future research projects – in order to understand the United States Government response to disinformation and information warfare.
The Rand Corporation has recently released a directory of web tools to combat online disinformation:
The rise of the internet and the advent of social media have fundamentally changed the information ecosystem, giving the public direct access to more information than ever before. But it’s often nearly impossible to distinguish accurate information from low-quality or false content. This means that disinformation—false or intentionally misleading information that aims to achieve an economic or political goal—can become rampant, spreading further and faster online than it ever could in another format.
As part of its Countering Truth Decay initiative, and with support from the Hewlett Foundation, RAND is responding to this urgent problem. Our researchers identified and characterized the universe of online tools developed by nonprofits and civil society organizations to target online disinformation. These tools were created to help information consumers, researchers, and journalists navigate today’s challenging information environment.
I’ll make several observations on this useful collection of resources. Fighting disinformation has moved from the cultic milieu of the former Disinformation subculture search engine to policy think tanks. The election of United States President Donald Trump and the United Kingdom’s Brexit vote – both in 2016 – have created a secondary market in tools to counter online disinformation. Also evident from RAND’s list is the emphasis on gamification as a cognitive strategy to engage the public.
In his new book Mindf*ck: Inside Cambridge Analytica’s Plot to Break The World (London: Profile Trade, 2019), Cambridge Analytica’s former director of research Christopher Wylie poses an urgent dilemma for researchers:
Fresh out of university, I had taken a job at a London firm called SCL Group, which was supplying the UK Ministry of Defence and NATO armies with expertise in information operations. After western militaries were grappling with how to tackle radicalisation online, the firm wanted me to help build a team of data scientists to create new tools to identify and combat extremism online. It was fascinating, challenging and exciting all at once. We were about to break new ground for the cyber defences of Britain, America and their allies and confront bubbling insurgencies of radical extremism with data, algorithms and targeted narratives online. But through a chain of events that unfolded in 2014, a billionaire acquired our project in order to build his own radicalised insurgency in America. Cambridge Analytica, a company few had ever heard of, a company that weaponised research in psychological profiling, managed to turn the world upside down. (Loc 77-83, Kindle edition, emphasis added.)
At the outset of his tell-all memoir, Wylie identifies a significant problem for researchers in industry-led projects: a change in funder can mean a new direction for an established research program. This is especially problematic with ‘dual use’ research that is initially created for defensive purposes yet may also have offensive applications. This is the Cambridge Analytica dilemma.
Many university research offices deal with this dilemma through a range of means. Ethics committees and institutional review boards oversee and ‘greenlight’ individual projects. Discipline and field-based norms can also act as powerful group-based socialisation mechanisms for new researchers. Export controls can prevent the proliferation of sensitive intellectual property and technologies. Other mechanisms also exist such as compartmentalisation and access-restricted budgets that are on a ‘need to know’ basis.
These institutional safeguards may all collectively fail when your funder is Renaissance Technologies’ hedge fund maven Robert Mercer. Cambridge Analytica and Mercer used Wylie’s research on combatting violent extremism to instead weaponise political subcultures, and to mobilise voter blocs in the 2016 United States campaign that elected President Donald Trump. In doing so, they highlighted how private funders can change the direction, the scope, and the norms of use for industry-led research.
The Cambridge Analytica dilemma points to a further, urgent challenge. The United States-based John F. Kennedy Special Warfare Center originally conceptualised psychological operations (PSYOP) – now called military information support operations (MISO) – for counterinsurgency and unconventional warfare. What happens when this PSYOP or MISO expertise migrates online to be mobilised by oligarchical wealth to covertly influence, shape or experiment with mass behaviour? How does this change the relationship between the elite few and the populace many? What possible counter-measures may exist? What new safeguards may be needed?
We’ll find out more in the United Kingdom in December, and in the United States in 2020. I’ll be watching developments closely.
Welcome to my new research program blog, Vega Theory.
My research program is at the nexus of the strategic studies, terrorism studies, and political economy sub-fields. My in-progress doctoral thesis at Australia’s Monash University advances a new analytical theory of strategic subcultures in terrorist organisations, and uses process tracing to examine Japan’s Aum Shinrikyo as a case study.
This blog will advance the new research agenda outlined in my doctoral thesis. In particular, I am interested in developing a deeper understanding of causal mechanism-based analysis, and in exploring the possible existence of strategic subcultures in a range of areas, from other terrorist cells, groups, and organisations to asset management firms and hedge funds. A common theme in all of these examples is how to harness volatility (vega) for strategic advantage.
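An aside for readers unfamiliar with the options-pricing term: vega measures how sensitive an option’s price is to changes in the volatility of the underlying asset. A minimal sketch of the standard Black-Scholes vega formula (illustrative only; the parameter values below are hypothetical, not drawn from my research):

```python
from math import exp, log, pi, sqrt

def bs_vega(spot, strike, rate, vol, t):
    """Black-Scholes vega: the option price's sensitivity to volatility.

    A one-percentage-point rise in implied volatility moves the option
    price by roughly vega / 100. Vega peaks for at-the-money options
    with plenty of time to expiry.
    """
    d1 = (log(spot / strike) + (rate + 0.5 * vol ** 2) * t) / (vol * sqrt(t))
    phi_d1 = exp(-0.5 * d1 ** 2) / sqrt(2 * pi)  # standard normal density at d1
    return spot * phi_d1 * sqrt(t)

# Hypothetical at-the-money option: spot = strike = 100, 2% rate,
# 25% implied volatility, one year to expiry.
print(round(bs_vega(spot=100, strike=100, rate=0.02, vol=0.25, t=1.0), 2))
```

The metaphor in my research agenda is that some actors, like options traders who are ‘long vega’, are positioned to profit from turbulence itself rather than from any particular direction of events.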
I also have an interest in developing counter-coercion and counter-deception capabilities to deal with fraud, white-collar crime, misinformation, and information warfare. This interest draws on my past experience in editing the former subculture search engine Disinformation and in the cultic milieu. In particular, I am looking at insights from interpersonal neurobiology and social neuroscience, and their applicability to identifying causal mechanisms for countering socio-political deception.