The State of Technological Journalism

This article was originally published on Wonk Bridge

This is certainly not Mozambican artist Ngwenya Malangatana’s impression of the vulnerable public interest being devoured by the febrile forces of misinformation. Malangatana, “Untitled, 1967”

There is a new and perverse honour in the drive to be a journalist in our century. The grand narrative trope of the wilderness newly civilised by the hardy and intrepid has been reversed by the virtual geo-blanching effect of net economics, as a region of employ once flush and fertile has become the home of the brave, the desperate, and the obsessive pugilist against the tide. To write for a crust now is, quite literally, to write for a crust, and to paraphrase Dr. Johnson while bastardising his truism with a speculation of Aristotle’s, any man or woman who writes for a crust these days must be either a Blockhead or a God.

Modern journalism is also home to a curious collective psychology; in the face of the massive upheaval of social media, in which the conspiracy of civilian voices has enervated discourse, hopelessly saturated the market of information, and shaken the credibility of once totemic outlets now forced to bring in ad revenue however they can, most journalists continue to write on as if nothing has happened. There’s been little pronounced change or adaptation in journalistic approach, if you discount the dispiriting and dispiritingly uninteresting development of Outbrain-style writing as the hack’s new haven. Journos have a mightier-than-ever task ahead of them — vast seas of fierce contest in opinion to navigate, the assimilated informational wealth of infinite sources at their inbox-tips, the breaking down of the walls separating disciplinary interest — but seem unwilling really to countenance it. To extend the earlier metaphor, there’s precious little seeming awareness of the new territory the brave journo now roams, no raising of the voice to deal with its huge expanse, no adaptive change in clothing, no awareness of its dangers.

And it is therefore little surprise that we cannot altogether trust what is written about that strange new world.

It’s all a roundabout way of saying that, of all the journalism written about the vast technological questions that face us, a lot is not fit for purpose — that which doesn’t sell us a fiction often serves to manipulate our perception of the size of those questions, or at least illuminates them in such a way as to turn them into ghoulish shadowplays for an audience’s entertainment, as opposed to its enlightenment. Just as the discussion and debate of ideas has grown largely absent from the courts and houses of political debate, our vogue for processional, non-empurpled journalism, combined with the humanities graduate’s natural aversion to topics that often require at least a vague familiarity with underpinning scientific, engineering, or mathematical concepts, has left us in a position of sore misinformation.

To labour as we have in our opening paragraphs, only then to neglect the capital component in all this, would be ridiculous. In one of the net’s bizarre ironies, the overwhelming plenty of new sources has made the environment arid for the professional, who still needs to eat. This has allowed capital to invade journalism to a deleterious degree. Large percentages of the article content published by industry leaders in tech journalism, the likes of WIRED for instance, are written with the collusion of vested interest — i.e. involve companies borrowing the magazine’s credibility as subterfuge to, in effect, sell you something — and more still operate from a research base that is basically reducible to the press releases of companies hungry to sell a product, and perfectly willing to exploit the market’s general ignorance by orientating their ‘content’ — a word that in itself removes the dignity of journalistic labour by reducing it to a consumable — around buzzword technological terms.

At the time of writing, WIRED had published nine articles to their site’s ‘Tech’ section since the start of 2019 — of those nine, three were declared as having been written by the respective CEOs of BuffaloGrid, Resi, and Primer, and a fourth by Google vice-president Vint Cerf. More were written by those with other types of declared interest, like academic postings or elected office. Enlarging our sample size: of everything WIRED published between the beginning of July 2018 and the time of writing, a shade over 51% of the site’s content beneath the tech umbrella had some form of declared affiliation with a commercial entity — articles that either were written by a company stakeholder or executive, were created with the participation of a company, or simply focused on a particular product or set of them in a generally uncritical way.

It is not right to be sensationalistic about this: properly deployed, there is great value in going, as it were, to the fountain instead of sipping from the water jug, and expert opinion’s value is self-evident, especially in heavily technical fields. WIRED, like any outlet, is also perfectly within its rights to write reviews of products that might interest its readers, reviews that are just reviews and nothing more; this is an important way in which the press smooths the path of discernment and mediates the degree to which advertising need constantly intrude on readers’ lives.

Nevertheless, the constant looming presence of ulterior interest creates an unsustainable environment for the most fundamental aspect of journalism (all the more fundamental today): analysis, which cannot exist freely in so corporatised a set of surrounds, even where there are journalists with sufficient expertise to perform it. And it is an absolutely fundamental Digital Right that we have interrogative journalism on technology to avail ourselves of — not merely to show us how vast certain issues are, and how considerable our role within them is, but also to show us how tiny other ones are that might be commanding an excess of our attention.

AI is a prime example; the thesis of John Naughton’s article on AI from the Guardian has an acuity of vision, summarising an essential truth: that, yes, the public’s perception of AI, its current status, and its immediate frontier of possibility is wildly inaccurate. A term that attracts significant stakeholder interest, and that to an indeterminate extent has accrued capital investment for companies with little to no real interest in it, AI has chosen the tech-oriented media as its primary commercial battleground. It’s a field that invites more direct ethical scrutiny than any other area in tech (probably because, unlike such phenomena as CRISPR or Blockchain, the public feels it has a better anecdotal understanding of it), and the often completely fatuous theories (hypotheses would be a truer word) peddled about AI contribute to a type of ‘virtue profiteering’, an ethical theatre in which companies peacock about, sometimes curating their own ethics boards, trying to shore up perceptions of their own trustworthiness (and thereby their suitability for investment) relative to advances in the field. This has resulted in a billowing cloud of misunderstanding, one that the media has directly enabled.

To put it briefly, what most tech companies sell as AI now are optimisation solutions — archly pseudonymised machine learning programs*, running millions of scenarios through, for example, a genetic algorithm. This is neither all that glamorous nor all that dangerous; if your fascination with AI rests on its potential to be apocalyptic and superhuman, you will almost certainly have been bored stiff at a company meeting by technological processes far more interesting than what actually constitutes functional AI in 2019. A computer capable of calculating with unforeseen variables, of originating motivation as even the basest human mind can, the heir of true intelligence, is not within even the faintest reach of present AI development; recreation of even basic mammalian intelligence may yet be decades away. The sublime possibility of a human-approximate AI almost certainly lies within the field of Natural Language Processing, which has progressed only modestly since Karen Spärck Jones’s and Joseph Weizenbaum’s seminal work within it in the 1960s.
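
To make the unglamorousness concrete, here is a minimal, purely illustrative sketch of the sort of optimisation routine described above — a toy genetic algorithm solving the textbook “one-max” problem (evolve a bitstring towards all 1s). Every name and parameter here is invented for the example; it stands in for, and does not reproduce, any particular company’s product.

```python
import random

random.seed(42)  # fixed seed so the toy run is repeatable

# One-max: fitness is simply the number of set bits in the genome.
GENOME_LEN, POP_SIZE, GENERATIONS, MUTATION_RATE = 32, 50, 60, 0.02

def fitness(genome):
    return sum(genome)

def mutate(genome):
    # Flip each bit independently with a small probability.
    return [1 - g if random.random() < MUTATION_RATE else g for g in genome]

def crossover(a, b):
    # Single-point crossover: splice a prefix of one parent onto a suffix of the other.
    cut = random.randrange(1, GENOME_LEN)
    return a[:cut] + b[cut:]

# Start from a random population, then repeat: rank by fitness,
# keep the fitter half, refill the rest with mutated offspring.
population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    parents = population[: POP_SIZE // 2]
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children

best = max(population, key=fitness)
print(fitness(best))
```

Fifty lines of selection, crossover, and mutation: a brute statistical search through candidate solutions, scaled up to millions of scenarios in the commercial case. Nothing here originates motivation; it only climbs a gradient of scores we define for it.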

But, just as Naughton’s article successfully challenges a prevailing dogma of journalistic practice over tech, it also betrays some of the worst tendencies of practice in the field; it is written by someone whose contextual immersion in the field, and especially its surrounds, that gesture which really fuels effective analysis, appears tenuous. It’s hopeful, though unfortunately not altogether probable, that when Naughton wrote that “quantum phenomena are not likely to have much of a direct impact on the lives of most people, one particular manifestation of AI — machine-learning — is already having a measurable impact on most of us...”, he actually was referring to quantum computing [1]. That would be a less egregious mistake than meaning what he wrote. The harnessing of quantum phenomena is responsible, to take one instance, for the functioning of transistors; if you are reading this on an iPhone X (not that you necessarily should be), you hold in your hand over 4.3 billion little transistors whose operation rests on quantum mechanics, down to effects like quantum tunneling. It is in a different way hopeful, and this time undeniable, that Naughton could be and was held to account for his mistake by his own readership, below that very article.

But that is just an instance of a set of informed tech-enthusiasts prevailing over a less-informed tech-enthusiast; despite Naughton’s presence, and the genuineness otherwise of his credentials as an academic [2], a journalist was not truly present in this particular exchange. A journalist bears a unique toolkit necessary for plumbing the recesses of issues, for burrowing deep, and this requires dispassion to go along with interest. So many tech-oriented journalists, in order to stay close to the tech they love, are happy generally to operate with the same degree of rigour shown by a customer, demonstrating in their work all the maniacal enthusiasm of technochauvinism [3]. This is anathema to the journalist’s professional disinterest, a disinterest which very much serves the public’s interest.

It is the reason, certainly in anecdotal experience, that one can go into a work of technojournalism based on an intriguing premise and yet come out feeling not only unfulfilled (fulfilment is important) but un- or misinformed (informedness is more important still). More insidiously, seduced by narrative preference in reporting [4], we can be ignorant of our own misinformation.

And ignorance of our own misinformation is one of the Early Digital’s foremost moral crises, one that the media should be trying like hell to stay the march of, and yet one that the media is instead actively, sometimes cynically, exacerbating. It’s not merely important to know about the tech (flying cars! holograms! orange-flavoured smart-homing killer sex robots!) that could yet tower over us; it’s important to have access to disinterested analysis of the tech that already underpins us.

What, for instance, a given article on artificial intelligence should be more minded to explore is something like the relationship between the widespread adoption of “AI”, the degree to which it is being given priority in subsidy as part of a wider push towards automation, and the bargaining power of labour. Thoroughgoing inter-comparison of technological concepts with their corresponding political affiliates tends to be rare; the two knowledge-bases appear to exist uneasily within a single constitution, respectively offering types of experience too divergent to catch many of the same minds. Nevertheless, by focusing on the public’s oblique, immature dread of prospective T-1000 models crawling out of the metalloid slime of a Google campus, as opposed to the widespread development of ‘AI’ for applications of marginal utility, our discourse is entertaining a key failure of focus, one that prevents the citizen from understanding their real position relative to these issues, their possibilities and their risks. In this case: that the most high-impact potentiality of “AI” as we know it is that it will reduce the bargaining power of labour.

Like any other technology, “AI” as we know it is being pressed into the service of the area of greatest demand. Demand, at present, is base and ephemeral, negligent in such a way as to let cynical and exploitative opportunism in. Until our pattern of demand shifts, “AI” as we know it will not be purposed towards worthier targets, such as immunotherapy. But by refusing to focus on the actual, any relative discernment to this effect becomes manifestly impossible. It is the responsibility of a competent, responsible press trade to make these things evident and to situate its readership’s position relative to them.

Our trade has suffered for its lack of agility in adaptation, procedurally and intellectually, to the new climate; the dire forecast is that those we are intending to serve will suffer even more for it than we will, if we do not begin to reconsider the way in which we turn our pondering eyes to technology.

*Wonk Bridge bears no affiliation whatsoever to IBM Analytics; this link was selected purely for demonstrative purposes and should not be construed as an endorsement.

[1] The statement would nevertheless still be somewhat suspect. Yale University scientists, in 2018, managed for the first time to observe quantum information while preserving its integrity; this holds out hope for further progress in the field, but nevertheless establishes that quantum computing is as yet a fairly distant prospect, and one whose potential impact cannot at this moment be adequately calculated.

[2] I must stress that I bear neither the enmity towards Naughton to wish to make a particular example out of him, nor the platform or pedigree as yet to be able to do so inadvertently.

[3] This term, coined by scholar and software developer Meredith Broussard, describes the prejudicial idea that all forms of technological development are, by default, both positive ethically and incontrovertible procedurally, and must be embraced off-hand.

[4] It’s admittedly sexier to have the prospect of AI-motivated genocide dangled in your tableted lap than to be told that yet another company has managed to maximise its capital efficiency with a crack AI-program, or that another company still has succeeded in creating a camera of such sophistication in facial recognition that it can tell you, in precisely measured cubic litres, how much prettier or uglier you look compared to yesterday.