From cloudy logic to logistical system: Algorimages, black boxes, and the socio-technical infrastructure of platforms
by Leo Hansson Nilson
On the eve of his acquisition of Twitter, Elon Musk tweeted a video of himself entering the social media company’s headquarters carrying a sink basin, captioned ‘let that sink in!’ The clip aimed at once to announce Twitter’s change in ownership and to create so-called ‘viral’ content calling attention to this event. It proved successful: the post garnered significant engagement metrics through its circulation and inspired several meme versions and parodies. Moreover, the video was itself a version of an extant ‘meme format’ wherein image-macros literalise the idiom by depicting a sink being let into a space.
Musk’s video is a prime example of what I call the algorimage. The algorimage appears on our myriad mobile devices as meme image-macros, TikTok dance videos, YouTube tutorials, targeted advertisements, reaction GIFs, screenshots, Instagram stories, or clips excerpted from superhero films. Despite being radically disparate in content, they are all equally web ‘content’ propagating on social media platforms through an algorithmic mode of circulation. The gist of the algorimage as a concept is that it gathers disparate content under the same structural form. This contingency is crucial. We could simply call algorimages ‘digital images’, which undoubtedly require algorithms and other computational processes to appear in various image formats. However, the moment these become entangled within larger infrastructures – encompassing connections between networks, protocols, software applications, interfaces, databases, data centers, user data inputs, cloud servers, and ‘cold storage’ disks, which are their conditions of motion – they function and appear as algorimages.
It is my contention that the algorimage is the dominant image form in contemporary society. While there is certainly not a dearth of research attempting to address the intensified proliferation of audiovisual images engendered by digitisation’s dislocation of their production, distribution, and consumption from the domains of cinema and television, the same cannot really be said of the analysis of these ubiquitous images themselves. Instead, there is a tendency to examine this ‘post-cinematic’ media landscape,[1] or ‘post-photographic iconosphere’,[2] and the means of its ‘post-production’,[3] through examples drawn from representative works of, precisely, cinema, television, or art installations. In this article, I will attempt to shift into view the vast circulation of images in formats and forms not necessarily discernible as discrete ‘works’, whose ambient accumulation and banality belie their hegemony.
The privileged vantage point from which to begin such an undertaking is the algorimage’s functionality rather than its representations. In this article, I argue that the algorimage and its operations are primarily logistical. As the shipping container is the emblem of the just-in-time production and circulation of commodities, so the algorimage is the emblem of the continual production and circulation of data to be realised as revenue in the so-called ‘platform economy’. The analysis of the algorimage can thus be positioned within a turn in media studies toward logistical media and infrastructures. Such an approach examines the material conditions and capacities of media to coordinate and control the circulation of commodities and information, of people, property, and things.[4] Infrastructure refers here to the transportation and telecommunications networks, shipping containers, intermodal terminals, and warehouses that comprise the logistics of global supply chains, but also to the hardware servers and software applications, protocols, and networks that facilitate the flow of data. Infrastructure is thus both physical and virtual. However, the latter is by no means ‘immaterial’. A database, in order to serve as an instance of storage, requires data centers and hard drives. These, in turn, are housed in property, standing atop the earth from which the minerals needed to power them, and the devices with which they communicate, are extracted by a waged or unremunerated class. This is all propped up to provide access to proprietary technologies in exchange for user data. What needs to be stressed is that infrastructure is irreducible to the physical or technological, but is inexorably bound to the social relations and conditions of the capitalist mode of production. Media is material all the way down, and we should understand infrastructure as that which, in John Durham Peters’ succinct pun, ‘stands under’.[5]
Analysing the algorimage is thus only possible if we set our sights on that which is not directly represented in its content. Returning to Musk’s video discussed at the outset: it shows him sinking in to his new CEO position, but if we sink below this surface, we can also see that it is an MP4 file encoded with the MPEG-4 Part 10 (H.264) codec, the standardised format that allows it to be uploaded and shared on Twitter in the first place. There, it is also stored and processed as data so that it can be requested and returned in real-time by and for users whose engagement, through viewing, liking, or sharing it, forms the basis of Twitter’s business model. Diving into the bitstream and seeing the otherwise invisible code that operationalises your interface, however, is not as easy as Neo plugging into the proverbial ‘matrix’, as seen in the film of the same name. For the matrix, or grid of intelligibility, established by a graphical user interface hinges on the very issue of visibility. Here the problem of the ‘black box’ hits us head on, as the operations of the algorimage would appear to be obscured by the opacity of their algorithmic base.
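This layer, at least, is not hidden in the way proprietary ranking algorithms are: a file’s container and codec can be read directly off the bitstream with standard tools. A minimal sketch of such an inspection, assuming the FFmpeg suite’s ffprobe is installed and using a hypothetical filename:

```python
import json
import subprocess

def probe_video(path: str) -> dict:
    """Return the container format and video codec of a media file via ffprobe."""
    result = subprocess.run(
        ["ffprobe", "-v", "error", "-show_format", "-show_streams", "-of", "json", path],
        capture_output=True, text=True, check=True,
    )
    info = json.loads(result.stdout)
    video = next(s for s in info["streams"] if s["codec_type"] == "video")
    return {
        "container": info["format"]["format_name"],  # e.g. 'mov,mp4,m4a,3gp,3g2,mj2'
        "codec": video["codec_name"],                 # e.g. 'h264' (MPEG-4 Part 10)
    }

# Hypothetical filename, for illustration only:
# probe_video("let_that_sink_in.mp4") -> {'container': 'mov,mp4,...', 'codec': 'h264'}
```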
This article attempts to provide a materialist analysis of the logistical circulation of algorimages through socio-technical infrastructure. It is divided into three main parts. First, I address this problem of the black box and the algorimage, not as a matter of revealing how an algorithm resolves input into output, but of the structures that pose it as a problem in the first place. In the second section, I enact this shift in perspective through the case study of Twitter’s platform infrastructure by excavating the interoperable technical substrates that store, process, and circulate algorimages, from database solutions and ad servicing to trending timeline algorithms. Finally, I situate and analyse these technical processes in relation to their social conditions of emergence within transformations of the capitalist mode of production – specifically, the turn since the 1970s to the logistical infrastructure of global supply chains in the interest of speeding up the turnover time between the valorisation and realisation of capital.
Disassembling the black box
Algorithmic processes are not visible to us in the manner that the algorimages they instantiate are. This inaccessibility is not just an issue for those of us illiterate in code. The most crucial algorithms for maintaining social media platform operations and revenue streams are not only semi-autonomous, such that even their programmers cannot access their exact inner workings, but also proprietary secrets. This amounts to a fundamental lacuna evincing the centrality of that which, apparently, has no image. Hence the extraordinary explanatory power afforded to what is colloquially referred to as ‘the algorithm’ belonging to a platform. The secrets harboured in these black boxes supposedly hold the answers to everything from why the video you posted failed to receive user engagement, to why your uncle may have been ‘radicalised’ by your pick of the ‘alt-right’, ‘libtards’, or the ‘woke left’.
This is not something that one finds only in popular discourse; it is equally expressed in much of the critical academic literature surrounding machine learning (ML) algorithms. Of the primary points of contention in such works, the problem of bias with regard to algorithmic systems is perhaps the most central. Accompanying the widespread implementation of ML algorithms is the necessary scrutiny toward the results they generate. This has produced vital insights regarding how algorithms, rather than being ‘neutral’ processes, actively feed into the reproduction of biases and norms, particularly those pertaining to race and gender.[6] Often, these findings are supplemented with calls for algorithmic accountability and greater transparency, both in the sense of assigning responsibility to the corporations or people that implement biased algorithms, and of trying to get inside their black boxes so as to audit and regulate the results of data processing.[7] Yet, as Kate Crawford elucidates, overemphasis on making an algorithm transparent discloses only the issue of the individual case while obfuscating how the training logics that undergird ML systems are not merely technical but social problems. To neglect this is to conflate seeing a system with knowing it, and puts the onus on the faulty knowledge produced by a particular dataset, as opposed to such a mode of knowledge production itself.[8] Outputs generated by algorithms are generated nonetheless, and remain problematic even if they have not been programmed into the ‘secret’ algorithms themselves, which, it must continually be insisted, are part of larger socio-technical infrastructures. Simply stating that biases exist does not circumvent the need to interrogate the structures in which they are born and bred.
The transparency critique of algorithms, as we might call it, undoubtedly highlights important issues, but treating algorithms as monolithic black boxes misses, as Nick Seaver states, how algorithms can fundamentally be understood as socio-technical systems. Seaver offers an alternate approach by arguing that algorithms should not primarily be defined as specific computational procedures but ‘as culture’ enacted by the diffuse practices of people, transcending boundaries between technical and non-technical definitions.[9] Seaver has a point with regard to being wary of the mythmaking involved in reducing algorithms to inaccessible secrets; yet it remains unclear exactly how treating them as practices in this manner would not warrant scrutiny of black boxes as cultural actors. Moreover, the socio-technical system he describes appears simply to be a matter of the heterogeneous ways that algorithms are talked about and perceived by ‘diverse and ambivalent characters’, ranging from programmers and researchers to corporate messaging. The appeal to systematicity is, however, left curiously unexamined. By giving precedence to the way people discursively ‘do’ algorithms over what algorithms technically do, this position renders the black box a moot point. In doing so, it elides the social character of these technical procedures and instead naturalises them as the product of discrete individuals.
While the internal logic of an algorithmic process may be closed off even to its programmers, the possibility of critically studying it is not. Simply put, the information hidden in the black box is not the only information being processed; nor is it the case that we cannot know how algorithms work. It is because there is knowledge of how certain algorithms function (for instance, suggestion algorithms like K-nearest neighbours) that the specificity of certain proprietary algorithms becomes such a vexing problem. Control over the most optimised versions of the algorithms used to perform particular tasks is integral to market capitalisation, as evidenced by Google’s streamlining of the search engine. The matter of ‘search’ is illustrative of another aspect of the black box fixation, which is, as Matthew Fuller emphasises, that it is two-sided. For an algorithm, the users it encounters are also unknown quantities. Data collection and caching are attempts to get inside the black box of the user in order to more accurately predict their actions through pattern recognition.[10] While a black box might hinder us from knowing the detailed technical specifications by which the ML algorithms of a certain platform’s recommender system have decided that these results are most relevant to you, we do know that they have been programmed to perform this function for the purposes of data production and circulation. The challenge in examining a proprietary algorithm as an ‘excluded middle’ lies in resisting the urge to exclude the larger media ecology of which such algorithms comprise one part. For such a reduction results in a mystification of the processes of the algorithm’s black box, clamouring for an ‘open sesame’ command protocol that reveals only a black hole. The programmers have overcome their obsession with the black box – it is time we did too.
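To make this concrete: the textbook mechanics of a nearest-neighbour suggestion algorithm can be sketched in a few lines. This is the generic, public form of the technique, with invented data, not any platform’s tuned, proprietary variant:

```python
import numpy as np

def k_nearest_neighbours(target: np.ndarray, others: np.ndarray, k: int = 3) -> np.ndarray:
    """Indices of the k users whose engagement vectors lie closest to the target's."""
    distances = np.linalg.norm(others - target, axis=1)  # Euclidean distance per user
    return np.argsort(distances)[:k]

def suggest(target: np.ndarray, others: np.ndarray, k: int = 3) -> np.ndarray:
    """Rank items by how heavily the target's nearest neighbours engaged with them."""
    neighbours = k_nearest_neighbours(target, others, k)
    scores = others[neighbours].sum(axis=0)  # pool the neighbours' engagements
    scores[target > 0] = -1                  # suppress items already engaged with
    return np.argsort(scores)[::-1]          # highest-scoring items first

# Rows are users, columns are items; entries count engagements (illustrative data).
users = np.array([[3, 0, 1, 0], [2, 0, 2, 1], [0, 4, 0, 3], [0, 3, 1, 2]])
print(suggest(users[0], users[1:], k=2))  # items ordered by predicted relevance to user 0
```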
It is, I argue, in the analysis of the algorimage that we can begin to disassemble the black box and throw into stark relief ‘[a]ll that surrounds images’, to speak with Jane Birkin and Jussi Parikka, as opposed to ‘merely seeing them’.[11] The surroundings of the algorimage, as stated above, are the technical networks between algorithms and data centers, but these are themselves surrounded by a social network – that is, the logistics of capital. An algorimage is irreducible to any individual, visual image but is fundamentally a social relation. What appears in it is secondary to its function as a form of appearance. This is a concept drawn from Marx, who uses it to describe, for instance, how the wage form seems to express payment for the value of labour performed when it is actually payment for the value of labour-power, thus obscuring the unpaid labour that creates surplus value. To give another example, the ‘value’ of a commodity comes to appear in the form of exchange-value, and can thus be related to all other commodities as exchangeable crystallisations of abstract labour. As a result, commodities are fetishised and the social characteristics of labour are reflected as objective characteristics of the products of labour themselves. Forms of appearance thus make their conditions and relations of possibility invisible by presenting themselves to the eye.[12] To account for the algorimage we must go beyond seeing what appears in it toward that which constitutes the form of its appearance. In order to do this, as suggested above, we must investigate the material conditions, at once social and technical, that make this mystification of algorithms and their black boxes possible. In the next section we will attempt such an excavation through the case study of Twitter’s platform infrastructure.
Algorimage inventory
Algorimages are logistical media, which are not the black box of logistics but carry the ‘instructions for its assembly’.[13] Yet it is only by putting these parts together that the algorimage itself is assembled. In this sense, the algorimage’s fundamental imbrication with the infrastructure facilitating its circulation allows us to analyse it as a kind of technical object. Conceptualised by Gilbert Simondon, a technical object is a structural unit that arises out of the ‘internal resonance’ between a series of convergent functions, a process Simondon calls concretisation. That is to say that each of the parts of the technical object operates reciprocally through their integration and interconnection, as in an air-cooled engine where the engine’s functioning is inseparable from its cooling component. The coherent technical object becomes a ‘mode of existence’ akin to a ‘natural object’ through the incorporation of its surroundings as intervening conditions of its functioning. This establishes what Simondon calls an associated milieu, meaning the mediating relation between ‘technical’ and ‘natural’ elements, at once creating and conditioning the technical object from which it is itself created.[14] To assess the algorimage in this manner we must consider the computational technics that make it, as Yuk Hui argues building on Simondon, a specifically digital object, now mediated by the associated milieux of its databases, algorithms, and protocols. By incorporating these as regulative mechanisms, the digital object becomes individualised through an ‘objectification of data’ into visual and repetitive formats – specifically, image or video files.[15] The networked media ecology surrounding the algorimage thus becomes integral to its functioning.
When an algorimage appears on Twitter it activates the full scale of the platform’s storage system, which is divided into seven main services: Hadoop, Manhattan, Blobstore, Graph, Cache, Messaging, and SQL Relational stores.[16] Each of these is required for content to be transmitted, and each is thus essential to understanding the conditions of algorimage circulation on the platform. Here, I will focus on the first three databases, as they are the primary means for storing and processing algorimages as image-data files.
Manhattan is Twitter’s native database solution developed to serve millions of queries per second in real-time. It is a distributed database which stores tweets, accounts, direct messages, and advertising. What is particular about Manhattan is that it is ‘eventually consistent’, meaning that requested data is returned as rapidly as possible, i.e. low latency, at the expense of the guarantee that data is retrieved in its most recently updated form. This is done in order to secure the system’s high availability, which means that even if one part of the system fails, such as an individual data node, it is still able to handle the massive traffic of requests.[17] To achieve this, data is replicated onto several different servers, each of which may not have the updated data yet, but eventually will.[18]
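The trade-off can be modelled schematically. The following toy sketch – my own illustration, not Manhattan’s implementation – shows a write acknowledged by one replica and propagated to the others after a delay, so that a read served by a lagging replica is fast but possibly stale:

```python
import time

class EventuallyConsistentStore:
    """Toy model of eventual consistency: reads are low-latency but may return
    stale data, since writes reach the other replicas only after a delay."""

    def __init__(self, n_replicas: int = 3, lag: float = 0.5):
        self.replicas = [{} for _ in range(n_replicas)]
        self.lag = lag     # seconds before a write reaches the other replicas
        self.pending = []  # (apply_at, key, value) updates still in flight

    def write(self, key, value):
        self.replicas[0][key] = value  # one node acknowledges immediately...
        self.pending.append((time.monotonic() + self.lag, key, value))

    def read(self, key, replica: int):
        self._propagate()
        return self.replicas[replica].get(key)  # fast, but possibly stale

    def _propagate(self):
        now = time.monotonic()
        remaining = []
        for apply_at, key, value in self.pending:
            if apply_at <= now:
                for r in self.replicas[1:]:
                    r[key] = value  # ...the rest converge eventually
            else:
                remaining.append((apply_at, key, value))
        self.pending = remaining

store = EventuallyConsistentStore()
store.write("tweet:1", "let that sink in")
print(store.read("tweet:1", replica=2))  # likely None: replica not yet updated
time.sleep(0.6)
print(store.read("tweet:1", replica=2))  # 'let that sink in' after convergence
```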
The visual image files themselves are stored in Twitter’s own photo storage system, Blobstore – so named because it harbours binary large objects, or ‘blobs’, such as photos and videos. It was designed with three primary goals: low cost in terms of money and time for storing tweets with visual images; high performance to serve hundreds of thousands of image requests in milliseconds; and scalability of operations in tandem with the expanding infrastructure of the platform. When a user uploads an image, it is sent to the Blobstore frontend servers, which forward it to a storage node, that is, the server where it is to be stored. Subsequently, it is written onto a disk along with instructions for recording the image’s metadata. Once recorded, these image-data are replicated via Twitter’s own Kestrel application, which is tasked with ensuring data integrity across the multiple data centers that power the above software systems. Data is then placed through libcrunch, a library that allocates this data cluster – the original image-blob and its replicas – attempting to minimise the risk of data loss, maximise data recovery, and fully map network topology. Here, a principle similar to eventual consistency is employed, as the replicated metadata is smaller than the original ‘blob’ data and can thus be returned faster. If a request requires access to a blob that has not yet been updated on its nearest data center, it is rerouted to the next-nearest data center that has a replica of the blob data available, at a slightly higher latency.[19] The degradation of the returned image relative to the speed at which the image appears on the interface highlights that what is at stake in latency is first and foremost quantitative rather than qualitative. In other words, it is the absence of the image, its failure to appear on an interface, that evinces fissures in data flow, whereas the mostly imperceptible difference between the retrieved data and its most recent update maintains operational continuity.
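The rerouting logic can likewise be sketched schematically. Data center names, latencies, and blob identifiers below are invented; the point is only the order of operations, trying the nearest center first and falling back to the next-nearest replica at a higher latency:

```python
# Illustrative topology, not Twitter's actual configuration.
DATACENTERS = [
    {"name": "dc-near", "latency_ms": 5,  "blobs": {}},                      # not yet replicated
    {"name": "dc-mid",  "latency_ms": 20, "blobs": {"img42": b"\x89PNG..."}},
    {"name": "dc-far",  "latency_ms": 60, "blobs": {"img42": b"\x89PNG..."}},
]

def fetch_blob(blob_id: str):
    """Try data centers in order of proximity; return the blob plus incurred latency."""
    for dc in sorted(DATACENTERS, key=lambda d: d["latency_ms"]):
        if blob_id in dc["blobs"]:
            return dc["blobs"][blob_id], dc["latency_ms"], dc["name"]
    raise KeyError(f"404: blob {blob_id!r} not found in any data center")

blob, latency, served_from = fetch_blob("img42")
print(f"served from {served_from} at {latency} ms")  # dc-mid at 20 ms: rerouted
```

The failure branch is the ‘404’ case: only when every replica misses does the image fail to appear at all.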
Upholding the steady circulation of data on Twitter through the continuity of operations at every level, from storage to transmission, is of utmost priority and is a result of the concretisation of the algorimage. While the functions of Hadoop and Manhattan are seemingly extrinsic to the algorimage, they are on the contrary wholly intrinsic to its circulation. These two services store user analytics that can be connected and congealed into the relations of an algorimage, such as which accounts engaged with it, or alongside which other algorimages and tweets it can be recommended. An algorimage appears through the convergence rather than the compromise of their requirements, a feature that Simondon stresses is fundamental for a technical object.[20] Their compatibility maintains a rapid rate of return between request and retrieval by increasing the flow of algorimages while decreasing what is needed to store them in terms of cost and hardware/software space. This is essential for an infrastructure such as Twitter’s, designed as it is to accommodate fluctuations in the supply and demand of billions of data points per second and to ensure their efficient transmission. It is from the vantage of this efficiency, and of the system established to secure it, that the logistics of Twitter’s platform seeps into view. For efficiency, Matthew Hockenberry argues, is the logistical logic par excellence, as it concerns itself with an ‘organizational consistency’ through the ‘delivery of a more regular and reliable speed of connection, of communication, and (…) anticipation’.[21] Delivering an algorimage through high volumes of user traffic, without losing speed, is paramount for preventing the loss of potential revenue.
Ad (s)pace
Twitter, like its competitors in the social networking sector, primarily generates its revenue from advertising sales. Twitter’s backend infrastructure stores the data it collects in order to supply ad clients with potential customers in exchange for payment. Advertisements appear in the shape of still or moving algorimages, such as short videos posted by brands and companies, or simply a picture tied to a promoted profile. In order to compete with the billions of tweets deposited into Twitter’s storage infrastructure each day, advertisers pay to promote their content throughout the informational flow of the platform: on the timeline, on user profiles, and in the pre-rolls of other media publishers’ videos.[22]
To serve this demand of supplying clients with users through the circulation of algorimages, Twitter has developed its own advertising platform that processes ‘ad requests’ through an ‘ad serving pipeline’. This pipeline marks its logistical function – a pipeline in the logistics industry refers to the network of flows between infrastructures, information, and goods[23] – by erecting and securing a supply chain managing the movement of data and metadata into monetisable information. An ad serviced through this pipeline moves through the spend cache, which stores the remaining budget for each ad campaign and which the ad server consults before serving an ad. Each event of user engagement with a served ad is sent to the ads callback instance, whose engagement metrics are aggregated by the live spend counter, which in turn immediately updates the spend cache. This reciprocal accounting for and calculation of engagement spend, in lockstep with the servicing of an ad, is key. Twitter runs a simultaneous ‘pacing service’ regulating the spending of an ad campaign’s budget, pacing it so as to maximise ad performance and making sure not to spend the entire budget before it has been optimised. These pacing algorithms keep the ad servicing in check and, effectively, balance the budget by slowing down the deployment of resources if they are being depleted too quickly. This is to prevent both squandered investments for the client and ‘overspending’, where clients receive more clicks than their money covers and cost Twitter income for that campaign.[24]
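A toy version of such pacing logic – my own simplification, with invented numbers and an assumption of even budget delivery over the campaign – might look as follows: the probability of entering an ad into the next auction falls as actual spend outruns scheduled spend.

```python
def pacing_throttle(spent: float, budget: float, elapsed: float, duration: float) -> float:
    """Probability of entering the ad into the next auction. Overspending
    relative to the schedule slows serving down; underspending restores it."""
    if spent >= budget:
        return 0.0                                   # budget exhausted: stop serving
    target_spend = budget * (elapsed / duration)     # even spend over the campaign
    if target_spend <= 0:
        return 1.0
    ratio = spent / target_spend                     # >1 overspending, <1 underspending
    return min(1.0, 1.0 / ratio) if ratio > 1 else 1.0

# A campaign halfway through its run that has spent 80% of its budget is
# throttled to ~62% of auctions; one that has spent 30% serves at full rate.
print(pacing_throttle(spent=800, budget=1000, elapsed=12, duration=24))  # 0.625
print(pacing_throttle(spent=300, budget=1000, elapsed=12, duration=24))  # 1.0
```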
The properly concrete technical object is one whose degrees of internal resonance and relation turn it into a ‘system of the necessary’ that is ‘entirely coherent within itself and entirely unified’.[25] In the ad service, each part of the data pipeline works together for it to fulfil its function. However, it is also dependent upon the reliability and stability of the previously discussed database instances, which secure the viability of the advertising platform. The algorimage is constituted through, and itself constitutes, the functioning of these interoperable levels in order for them to be effective. Concreteness such as this, Simondon elaborates, is not exhausted by the intentions of its fabrication, ‘but part of a system where a multitude of forces’ can produce excess effects. It is for this reason that the technical object can never be truly concrete nor fully knowable.[26] To some extent, then, the technical object seems to remain locked in the black box. However, it is in this very scission that its operations and levers of causality become clearer. Simondon stresses that there is always a difference between an object’s technical scheme and what he calls the ‘scientific picture’ of the reciprocal causalities ‘for which it is the base’.[27] The reference to a ‘base’ presumably appropriates the Marxist sense of the word to denote how technics, despite determining the function of the individualised object, cannot be separated from its associated milieu, or what amounts to the ‘superstructure’ of its effects. These are the very conditions from which the technics of the object may be concretised and realised.
As a digital object composed of data and metadata, the algorimage’s concreteness becomes increasingly calculable and relational through its digital milieux – networks, protocols, and standards – which Hui argues have the ‘power to converge and integrate’ social and economic systems into their operations.[28] This fully reticulated regime of digital objects, wherein the mediating function of the associated milieux becomes part and parcel of the internal coherence of digital objects themselves, entails that concretisation cannot be completely borne by the technical. Whatever absolute separation might once have been conceivable, the technical and digital objects that we use, and are used by, today are wholly woven into the worldwide socio-technical systems of the capitalist mode of production. The drive to optimise the speed, latency, and availability of algorimages saturating the operational logic of Twitter’s technical infrastructure is imperative to its social infrastructure and, in the final instance, its economic performance as a prominent social media platform. While user experience is accentuated in Twitter’s own account of these processes, efficiency and optimisation’s bottom line is exactly that – the bottom line.
Speaking of Twitter’s platform infrastructure as a supply chain is thus more than metaphorical. Data pipelines, databases, and data warehouses effectively comprise parts of their own supply chain. Hockenberry conceives of this constellation of software applications and data structures in terms of a ‘digital supply chain’ analogous to the supply chains of capital, constituting containers of digital objects rather than cargo.[29] Just as commodity supply chains are increasingly coordinated by computational technology, data supply chains such as Twitter’s are inextricable from the infrastructures that coordinate the global circulation of capital. A principle like the Manhattan database’s ‘eventual consistency’, for instance, mirrors the ‘just-in-time’ aims of lean manufacturing and ‘pull’ production to eliminate standing inventory by restocking supplies only when necessary.[30] Data back-ups and replications of an algorimage across data centers, as in Blobstore, serve as risk mitigations stabilising the totality of Twitter’s infrastructure rather than the integrity of its individual content. Prioritising this smooth sailing of the system over the content of a shipping container is an essential feature of logistics.[31] Lossless data becomes secondary to the ceaseless, lossy stream of data from front- to backend. There is no loss less desired than returning a ‘404 not found’ error.
For the algorimage object, concretisation connects and congeals users into relations of metadata that can be quantified and computed through an interoperable mode of circulation between software applications, formats, data centers, and collaborative filtering algorithms. The data invested within an algorimage is processed and recorded in a database such that it can be retrieved upon another user’s request; its engagements and impressions are in turn recorded and calculated within a database that keeps a running tally; this provides fodder for an ad service fitting high-bidding advertisers with high-performance content; all of which is contained within the circulation of the algorimage as metadata that is continually counted and collected. This adds up to a kind of built-in obsolescence imaging the present, in the sense that it is happening now, as requiring immediate interaction in order to keep pace with the fact of its passing. Such is the just-in-and-out-of-time supply chain of Twitter algorimages.
Recycling algorimages
On a social media platform like Twitter the surest way of securing such efficiency is through the circulation of algorimages. It is the inter-user engagement with an algorimage that enhances its potential circulation, attracting the traffic of user attention, and thus advertising and data licensing clients. This images Twitter as the place where you can see, to paraphrase its own slogan, what is happening. These happenings, or ‘trends’, are continuously updated on the home screen to the right of the timeline. By default, the appearance of your timeline depends upon ‘ranking’ algorithms run on third-party ML software called TensorFlow. These aim to predict what content will engage you the most, not simply by measuring your stored interests, but specifically what has interested you enough to interact with.[32] These deep learning models are constantly in use, such that even if an account is inactive for long stretches, the algorithms continue to update the node of that user profile through the activity of its neighbours.[33] These ‘preferences’ are anything but personal; they are inherently collective by virtue of collaborative filtering algorithms whose logic of decreasing data pool diversity suggests that ‘your’ preferences are similar to other users’ because they have already been identified as being similar to you.[34]
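The generic logic of such engagement prediction can be sketched. The following is not Twitter’s model but a minimal, hypothetical example of the approach: a small TensorFlow network trained on a user’s past interactions (with invented features and data) that then orders candidate tweets by predicted probability of engagement.

```python
import numpy as np
import tensorflow as tf

# Hypothetical features per candidate tweet: [follows author, past likes of
# author, hours since posting, contains image]. Labels: did the user engage?
X = np.array([[1, 12, 0.5, 1], [0, 0, 6.0, 0], [1, 3, 1.0, 1], [0, 1, 24.0, 0]],
             dtype=np.float32)
y = np.array([1, 0, 1, 0], dtype=np.float32)

model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # probability of engagement
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.fit(X, y, epochs=50, verbose=0)  # learn only from what the user interacted with

# Rank unseen candidates: the timeline orders tweets by predicted engagement.
candidates = np.array([[1, 8, 2.0, 1], [0, 0, 12.0, 1]], dtype=np.float32)
scores = model.predict(candidates, verbose=0).ravel()
timeline = np.argsort(scores)[::-1]  # most 'engaging' first
```

Note that the training data consists solely of prior interactions: the model can only amplify what already resembles past engagement, which is the narrowing logic described above.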
Twitter’s TensorFlow algorithms are continually updated to adapt to and anticipate the breaking waves of real-time events, which corporate and personal users alike attempt to engage and catch up with. And while all content racing against the clock is eventually laid to waste, some things mutate to outlast the daily refreshment of Twitter’s timeline prediction models. Certain ‘trends’ may persist for days, a week, and sometimes long enough to become a recurrent mode of engaging with what is currently ‘trending’ – becoming, in a word, memes. More often than not, such content takes the form of appearance, or perhaps reappearance, of an algorimage. Certain algorimages become concentrated – doubly so, in the sense of gathering high volumes of engagement as well as of focusing attention – such that their circulation exceeds the increasingly short circuits of the ‘content cycle’.
The ‘Distracted Boyfriend’ meme is illustrative in this regard. This algorimage is drawn from a stock photo in which a man is seen looking back at a woman who has passed him by, while the woman with whom he is walking hand in hand scowls at him. It is an example of a meme format known as ‘object labeling’, named for the act of adding text labels to an underlying image so as to alter its interpretation. In August 2017 the meme format went viral and, in an ironic mirroring of the content of the algorimage itself, spawned an abundance of versions over the next few months by user accounts of all kinds and sizes vying for one another’s attention. Technically, quite literally, ‘object labeling’ is more like a template than a pure format. The latter term would be reserved for the file formats that compress and encode images, which on Twitter can only be JPEG or PNG for photos, and GIF or the H.264 codec (MPEG-4 Part 10) for videos. Cementing standard formats and codecs enables a generic mode of image circulation, which, as Adrian Mackenzie has illustrated, connects various scales of technological infrastructure, from network to application to user convention.[35] The ‘object labeling’ template nevertheless doubles the role of formats in the reciprocity between digital milieux and algorimage object, and overtly demonstrates the importance of format standardisation in constraining both container and content.
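As an operation, ‘object labeling’ is trivially reproducible, which is part of what makes the template so generative. A minimal sketch using the Pillow imaging library, with hypothetical file names, label texts, and coordinates:

```python
from PIL import Image, ImageDraw, ImageFont

def label_objects(template_path, labels):
    """Overlay text labels at fixed positions on an underlying template image:
    the image-data beneath stays the same; only its interpretation is altered."""
    img = Image.open(template_path).convert("RGB")
    draw = ImageDraw.Draw(img)
    font = ImageFont.load_default()
    for text, (x, y) in labels.items():
        draw.text((x, y), text, fill="white", font=font)
    return img

# One instance of the template (file name, texts, and coordinates are invented).
meme = label_objects("distracted_boyfriend.jpg", {
    "new meme format": (250, 80),    # the passing woman
    "users": (420, 200),             # the distracted boyfriend
    "last week's meme": (620, 220),  # the scowling girlfriend
})
meme.save("distracted_boyfriend_labeled.jpg", format="JPEG")  # re-encoded for circulation
```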
Formats are necessarily codified, Jonathan Sterne states, by specifying the protocological operations of a medium. However, these specificities are often obscured from users and/or public discussion, taking on ‘a sheen of ontology when they are more precisely the product of contingency’.[36] In the case of the ‘distracted boyfriend’, its popularity led to users finding other stock photos shot by the same photographer from the same series, showing the same people in variations of the same initial scenario.[37] These derivatives, while amassing significant engagement, never superseded the ‘original’, in part because of the strength of the latter’s ‘format’. This helps us see that the circulation of an algorimage functions as a mode of formatting, of forming patterns of engagement in its own algorimage, constituting a veritable mise-en-abyme, or mise-en-abmeme.
Realisation-time
The increased prevalence of constantly refreshed, predictive algorithms in databases is essentially the condition of the digital archive, which as Wolfgang Ernst argues is characterised by the subsumption of storage and transfer into streaming. Digital media thus become increasingly time-critical.[38] They, as Wendy Chun puts it, ‘live and die by the update’.[39] Within the micro-temporalities of social media platforms like Twitter, the time between the data produced by uploading an image and its circulation is to become as instantaneous as possible. As in the logistics industry, timing is of the essence. Real-time aims to become realisation-time. Realisation refers here to the moment in the circuit of capital when a commodity is exchanged within the sphere of circulation at a profit, thus ‘realising’ the value and surplus value in it.[40] The circuit of capital as a whole, M-C-M’, begins with money advanced in order to purchase labour-power and means of production, which are expended in the production of commodities that are subsequently sold on the market in exchange for more money than was initially advanced. The duration of this circuit – the total time spent in the spheres of production and circulation – is capital’s turnover time. Circulation time acts as a negative limit on that of production, since it is time not spent producing commodities or surplus value. Cutting this time down is a motor for the perpetual renewal and expansion of valorisation that drives capital’s ‘life-process’.[41] This is one of the key factors behind the logistics revolution, which emerged in the wake of what Robert Brenner calls the ‘long downturn’ of the 1970s, characterised by declining rates of profit primarily stemming from overproduction and overcapacity in the manufacturing sector.[42] Faced with this crisis, capital pivoted from industrial production to the liquid lanes of circulation in the interest of speeding up turnover, seeking its solution in the ‘FIRE sector’ (finance, insurance, and real estate), information technology, and global supply chains for commodity circulation.[43]
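The arithmetic that makes circulation time such a target can be stated schematically (in notation of my own, not Marx’s): if each turnover yields a given mass of surplus value, then shrinking circulation time increases the number of turnovers per period and thus the surplus value realised in it.

```latex
\begin{align*}
  t_{\text{turnover}} &= t_{\text{production}} + t_{\text{circulation}}\\
  n &= \frac{T}{t_{\text{turnover}}} \qquad \text{(turnovers per period } T\text{)}\\
  S_{T} &= s \cdot n = \frac{s\,T}{t_{\text{production}} + t_{\text{circulation}}}
\end{align*}
```

Holding production time constant, every reduction of $t_{\text{circulation}}$ raises $S_T$ – hence the premium logistics places on speed.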
This shift finds its theoretical complement in the turn toward re-evaluations of Marx’s value theory, most notably the claim that it is now ‘immaterial labour’ – encompassing everything from the production of information and knowledge to affect and attention – that has become the dominant source of value within what has been called contemporary ‘cognitive capitalism’.[44] It would appear that algorimages fit this bill, as they move through data pipelines and backend infrastructures onto frontend interfaces in pursuit of the valorisation and realisation of information as both the means of ‘shipping’ and as the ‘shipped’. More activity and more engagement equal more potential profit to be realised, whether through advertising and data licensing, venture capital investment, or potentially higher market capitalisation. Just as capital not in motion is not capital, without the circulation of the algorimage as its fundamental unit the so-called ‘platform economy’ does not compute. In fact, it is this very computation that the algorimage has been unable to execute. Realisation-time remains unrealised. The realisation of capital is not a given but precisely a potential, the actualisation of which is the explicit goal of the valorisation process. However, it is in the sphere of capitalist production that human labour comes to be counted as abstract labour that forms value. Circulation, on the other hand, is where value undergoes a metamorphosis, a change in form from money to commodity and back. Circulation may realise value and effectively validate it through exchange, but as the exchange of equivalents it cannot on its own generate any new surplus value, no matter its absolute necessity for capital accumulation.[45]
In light of this, claims of a transformation of value on account of informatisation, globalisation, or financialisation must be problematised. Despite the apparent booms of the logistics industry, financial speculation, outsized market capitalisations of ‘tech’ companies, and short-term profits reaped by individual firms, these ‘innovations’ have proven to be a bust when it comes to systemic accumulation. The global economy never really recovered from the crisis of the 1970s, for productivity and profitability have continued to decline. The rise of automation and the diffuse ‘service industry’ are exemplary here, as the recent analyses by Jason E. Smith and Aaron Benanav elucidate. Contrary to the discourse claiming that computational, labour-saving innovations are spearheading productivity and growth, their results have been stagnation and technological inertia. This holds for the platform economy, whose primary revenue streams are derived from advertising, which cannot on its own carry the weight of the global economy.[46] The notion that the business models of platforms and tech firms are now the dominant form of capitalism, given recent purchase by Shoshana Zuboff’s best-selling account of how they capture our ‘behavioural data surplus’ as the basis of ‘prediction products’,[47] mistakes this possibility of predicting output from input for an inevitability. It takes the fact that the processing of behavioural data produces results in the form of an array of circulating, personalised algorimages as proof that they also produce results in terms of massive financial returns.
We can elucidate this further by considering how Twitter’s platform infrastructure, as a logistics space traversed by the container technology of the algorimage, simulates what Jasper Bernes identifies as the logistical urge to ‘transmute all fixed capital into circulating capital’ so as to ‘imitate and conform to the purest and most liquid of forms capital takes: money’.[48] Here we can draw parallels to the reification of ‘the algorithm’ alluded to above. In the general formula for capital, M-C-M’, money assumes a form of appearance that, by being the formula’s beginning and endpoint, makes it seem as if capital increases itself automatically. In Deleuze and Guattari’s words, this is the ‘miraculating-machine’ of capital, appearing as a ‘divine presupposition’.[49] Seeing the algorimage in this way would seem to suggest a reiteration of the problem and determinism of the algorithmic black box. Yet it is algorimage circulation itself that produces the appearance of the algorimage as the inevitable output of an apparently objective algorithmic process. The general formula for capital is recast as an algorithm. The problem of transforming money input into more money output seems to be solved by the churn of algorimages, presupposing the influx of user activity and, it would follow, increased revenue. This effect can be likened to the one Alexander Galloway ascribes to software, which, by virtue of being simply instructions to be executed by a computer, emerges as a socially significant process because it, effectively, does what it says.[50] Thus, the issue is not the opacity of ‘the algorithm’ and its black box. Open-sourcing the codebase of Twitter’s timeline would not, for instance, remove the racial biases of search any more than reading Marx lifts the veil and rids us of the commodity fetish. What is rendered opaque by such a view is the socio-technical machinery wherein the algorimage circulates as a form of appearance that projects the productivity and power of the platform economy while obfuscating its very insufficiency.
Sinking ship
To conclude where we began, it should be noted that the research for this article was undertaken and completed prior to the finalisation of Elon Musk’s purchase of Twitter. Some remarks on how this might affect the findings seem warranted – not least because Musk accompanied his acquisition with promises to improve Twitter by turning it into a space for ‘free speech’, specifically by publishing the technical specifications of Twitter’s core algorithms. Instead, he fired roughly half of Twitter’s staff, many of whom were the software engineers maintaining the operations of Twitter’s code base (and whose work provides much of the material analysed above). As a result, its technical infrastructure has begun to fail more regularly and visibly. Rather than opening up Twitter’s black box by making its algorithmic secrets public, what has been revealed is precisely that these individual specifications, or those of any other reified ‘algorithm’, matter less than the system of interdependent parts put in place to facilitate the flow of information and ensure the stability and scale of platform operations. When these begin to falter, the power of the platform as a whole does too. Such is the situation on Twitter now, as Musk’s layoffs amass significant amounts of this kind of ‘technical debt’ – almost as much as the financial debt Musk has taken on to make the buyout possible – both of which are steadily accruing in the face of falling revenue.
This is the most significant effect of Twitter’s change of ownership. Regarding ownership, it should also be noted that Musk has taken Twitter private again. So, while the arguments about Twitter’s hardware and software systems being efficient for the sake of its financial viability may no longer apply to its valuation as a publicly-traded stock, they certainly continue to have bearing on the company’s turnover and profitability. For as this drama continues to unfold, the advertising revenue and speculative investment on which Twitter depends are drying up. Moreover, the fallout of Musk’s decisions has delivered blows to his main businesses, SpaceX and Tesla, which have seen notable drops in their respective share prices and investments.[51] This provides further fuel for the case being made here: that the technics of platforms are fundamentally embroiled with social infrastructure, and that the circulation of the algorimage is not merely a relation between blocks of code but one of political economy. What builds and transports our little black boxes and the secrets they contain are vast logistical infrastructures, whether those are the supply chains of global commodity production and trade or the data pipelines of technical platforms, the circulation of a shipping container or an algorimage. To map the movements of algorimages is to enact a shift in vision from the cloudy logic of the algorithm to the logistics of the system, which turn out to be one and the same.
Author
Leo Hansson Nilson is a PhD student at the section for Cinema Studies in the Department of Media Studies at Stockholm University. Currently he is working on his thesis project, tentatively titled ‘Terminal Circulation: Algorimages and the Logistical Media of Capital’, which is a theorisation of the algorithmically processed digital images that materialise on social media platforms and of their circulation across the logistical infrastructures of contemporary capital and computation. His research interests include film and image theory, Marxism, media archaeology, and philosophy of technology.
References
Alton, L. ‘7 tips for creating engaging content every day’, Twitter Business, https://business.twitter.com/en/blog/7-tips-creating-engaging-content-every-day.html (accessed 14 April 2023).
Benanav, A. Automation and the future of work. London: Verso Books, 2021.
Bernes, J. ‘Logistics, Counterlogistics, and the Communist Prospect’, Endnotes 3, September 2013.
Birkin, J. and Parikka, J. ‘Conversation: On Practices of Images and Measures of Practice: A Conversation around Two Books on Photography’, Media Theory, 22 December 2021: https://mediatheoryjournal.org/jane-birkin-jussi-parikka-in-conversation-on-practices-of-images-andmeasures-of-practice/ (accessed 14 April 2023).
Brenner, R. The economics of global turbulence: The advanced capitalist economies from long boom to long downturn, 1945-2005. London: Verso Books, 2006.
Caplan, R., Donovan, J., Hanson, L., and Matthews, J. ‘Algorithmic Accountability: A Primer’, Data & Society, 18 April 2018: https://datasociety.net/library/algorithmic-accountability-a-primer/ (accessed 14 April 2023).
Chun, W.H.K. Updating to remain the same: Habitual new media. Cambridge: The MIT Press, 2016.
Cowen, D. The deadly life of logistics: Mapping violence in global trade. Minneapolis: University of Minnesota Press, 2014.
Crawford, K. Atlas of AI. New Haven-London: Yale University Press, 2021.
Deleuze, G. and Guattari, F. Anti-Oedipus: Capitalism and schizophrenia, translated by R. Hurley, M. Seem, and H. Lane. Minneapolis: University of Minnesota Press, 2000.
Denson, S. and Leyda, J. (eds) Post-cinema: Theorizing 21st century film. Falmer: REFRAME Books, 2016.
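Durham Peters, J. The marvelous clouds: Toward a philosophy of elemental media. Chicago: University of Chicago Press, 2015.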
Ernst, W. Digital memory and the archive, translated and edited by J. Parikka. Minneapolis: University of Minnesota Press, 2013.
Fontcuberta, J. The postphotographic condition. Montréal: Le mois de la photo à Montréal, 2015.
Fuller, M. Behind the blip: Essays on software culture. New York: Autonomedia, 2003.
Gallagher, S. ‘Announcing Twitter’s rebranded advertising product suite’, Twitter Business, https://business.twitter.com/en/blog/announcing-rebranded-ad-suite.html (accessed 14 April 2023).
Galloway, A.R. Protocol: How control exists after decentralization. Cambridge: The MIT Press, 2006.
_____. The interface effect. Cambridge-Malden: Polity, 2012.
Giles, C. ‘Digitisation failing to lift global productivity, study shows’, Financial Times, 14 April 2019: https://www.ft.com/content/3b300edc-5e51-11e9-a27a-fdd51850994c (accessed 14 April 2023).
Hashemi, M. ‘The Infrastructure Behind Twitter: Scale’, Twitter Engineering, https://blog.twitter.com/engineering/en_us/topics/infrastructure/2017/the-infrastructure-behind-twitter-scale (accessed 14 April 2023).
Hockenberry, M., Starosielski, N., and Zieger, S. ‘Introduction’ in Assembly codes: The logistics of media, edited by M. Hockenberry, N. Starosielski, and S. Zieger. Durham: Duke University Press, 2021: 1-22.
Hockenberry, M. ‘“Every Man Within Earshot”: Auditory Efficiency in the Time of the Telephone’ in Assembly codes: The logistics of media, edited by M. Hockenberry, N. Starosielski, and S. Zieger. Durham: Duke University Press, 2021: 113-129.
_____. ‘Redirected Entanglements in the Digital Supply Chain’, Cultural Studies, 35, no. 4-5, 3 September 2021: 641-662; https://doi.org/10.1080/09502386.2021.1895242
Hui, Y. On the existence of digital objects. Minneapolis: University of Minnesota Press, 2016.
Know Your Meme, ‘Distracted Boyfriend’, https://knowyourmeme.com/memes/distracted-boyfriend (accessed 14 April 2023).
Lucas, R. ‘The Surveillance Business’, New Left Review, 121, January-February 2020: 132-141.
Mackenzie, A. ‘Codecs’ in Software studies: A lexicon, edited by M. Fuller. Cambridge-London: The MIT Press, 2006.
Marx, K. Capital volume I: A critique of political economy, translated by B. Fowkes. London: Penguin Books, 2004.
_____. Capital volume II: A critique of political economy, translated by D. Fernbach. London: Penguin Books, 1992.
Milmo, D. ‘Slumping revenue, Tesla woes and a “resignation”: Musk’s wild reign at Twitter so far’, The Guardian, https://www.theguardian.com/technology/2023/jan/01/revenue-tesla-elon-musks-twitter-staff-investors (accessed 14 April 2023).
Noble, S. Algorithms of oppression: How search engines reinforce racism. New York: NYU Press, 2018.
Parks, L. and Starosielski, N. (eds) Signal traffic: Critical studies of media infrastructures. Champaign: University of Illinois Press, 2015.
Pasquale, F. The black box society: Behind the secret algorithms that control money and information. Cambridge: Harvard University Press, 2015.
Rossi, E. and Bronstein, M. ‘Deep Learning on Dynamic Graphs’, Twitter Engineering, https://blog.twitter.com/engineering/en_us/topics/insights/2021/temporal-graph-networks (accessed 14 April 2023).
Rossiter, N. Software, infrastructure, labor: A media theory of logistical nightmares. London-New York: Routledge, 2016.
Seaver, N. ‘Algorithms as culture: Some tactics for the ethnography of algorithmic systems’, Big Data & Society, December 2017: https://doi.org/10.1177/205395171773
Shapiro, M. and Kemme, B. ‘Eventual Consistency’ in Encyclopedia of database systems, edited by L. Liu and M. Tamer Özsu. Boston: Springer, 2009: https://doi.org/10.1007/978-0-387-39940-9_1366
Simondon, G. On the mode of existence of technical objects. Minneapolis: University of Minnesota Press, 2017.
Smith, J.E. Smart machines and service work: Automation in an age of stagnation. London: Reaktion Books, 2020.
Sterne, J. MP3: The meaning of a format. Durham: Duke University Press, 2012.
Steyerl, H. ‘Too Much World: Is the Internet Dead’, e-flux, 49, November 2013: https://www.e-flux.com/journal/49/60004/too-much-world-is-the-internet-dead/ (accessed 14 April 2023).
Twitter Blog, ‘New video products make it easier to watch what’s happening on Twitter’: https://blog.twitter.com/en_us/topics/product/2022/new-video-products-make-easier-watch-what-happening-twitter (accessed 14 April 2023).
Twitter Engineering, ‘Blobstore: Twitter’s in-house photo storage system’: https://blog.twitter.com/engineering/en_us/a/2012/blobstore-twitter-s-in-house-photo-storage-system (accessed 14 April 2023).
_____. ‘Hadoop filesystem at Twitter’: https://blog.twitter.com/engineering/en_us/a/2015/hadoop-filesystem-at-twitter (accessed 14 April 2023).
_____. ‘How we fortified Twitter’s real time ad spend architecture’: https://blog.twitter.com/engineering/en_us/topics/infrastructure/2020/how_we_fortified_twitters_real_time_ad_spend_architecture (accessed 14 April 2023).
_____. ‘Manhattan, our real-time, multi-tenant distributed database for Twitter scale’: https://blog.twitter.com/engineering/en_us/a/2014/manhattan-our-real-time-multi-tenant-distributed-database-for-twitter-scale (accessed 14 April 2023).
Young, L.C. ‘Colonization’s Logistical Media: The Ship and the Document’ in Assembly codes: The logistics of media, edited by M. Hockenberry, N. Starosielski, and S. Zieger. Durham: Duke University Press, 2021: 94-110.
Zhuang, Y., Thiagarajan, A., and Sweeney, T. ‘Ranking Tweets with TensorFlow’, TensorFlow Blog: https://blog.tensorflow.org/2019/03/ranking-tweets-with-tensorflow.html (accessed 14 April 2023).
Zuboff, S. The age of surveillance capitalism: The fight for a human future at the new frontier of power. New York: Hachette, 2019.
[1] For instance, see Denson & Leyda 2016.
[2] Fontcuberta 2015.
[3] Steyerl 2013 (accessed 14 April 2023).
[4] To name but a few: Durham Peters 2015; Hockenberry & Starosielski & Zieger 2021; Parks & Starosielski 2015; Rossiter 2016.
[5] Durham Peters 2015, p. 33.
[6] For notable examples see Noble 2018 and Pasquale 2015.
[7] Caplan & Donovan & Hanson & Matthews 2018, pp. 15-25.
[8] Crawford 2021, p. 12; 128-131.
[9] Seaver 2017.
[10] Fuller 2003, p. 70.
[11] Birkin & Parikka 2021.
[12] Marx 2004, pp. 163-164; 677-680.
[13] Hockenberry & Starosielski & Zieger 2021, p. 3.
[14] Simondon 2017, pp. 28-29; 49-59.
[15] Hui 2016, p. 56.
[16] Twitter Engineering 2017 (accessed 14 April 2023).
[17] Twitter Engineering 2014 (accessed 14 April 2023).
[18] Shapiro & Kemme 2009, p. 46.
[19] Twitter Engineering 2012 (accessed 14 April 2023).
[20] Simondon 2017, p. 28.
[21] Hockenberry 2021, ‘“Every Man Within Earshot”: Auditory Efficiency in the Time of the Telephone’, pp. 115-116.
[22] Gallagher (accessed 14 April 2023).
[23] Cowen 2014, p. 8.
[24] Twitter Engineering 2020 (accessed 14 April 2023).
[25] Simondon 2017, pp. 28-32.
[26] Simondon 2017, p. 39.
[27] Simondon 2017, pp. 39-40.
[28] Hui 2016, pp. 26-27.
[29] Hockenberry 2021, ‘Redirected Entanglements in the Digital Supply Chain’, pp. 643-646.
[30] Bernes 2013.
[31] Young 2021, p. 95.
[32] Zhuang & Thiagarajan & Sweeney 2019 (accessed 14 April 2023).
[33] Rossi & Bronstein 2021 (accessed 14 April 2023).
[34] Galloway 2006, pp. 113-114.
[35] Mackenzie 2006, ‘Codecs’.
[36] Sterne 2012, p. 8.
[37] Know Your Meme (accessed 14 April 2023).
[38] Ernst 2013, pp. 99-100.
[39] Chun 2016, p. 2.
[40] Marx 2004, pp. 953-954.
[41] Marx 1992, pp. 200-204; 235-236.
[42] Brenner 2006.
[43] Bernes 2013.
[44] These concepts have been so influential on contemporary critical theory that to cite the works that develop them would take up the entire word count of this article. Two notable authors who specifically address this in relation to digital media and images are Tiziana Terranova and Jonathan Beller.
[45] Marx 1992, pp. 225-226.
[46] Benanav 2021 and Smith 2020. That information technology more broadly has been unable to bolster global productivity and profitability is a position not only evident in explicitly Marxist accounts but one that can also be gleaned from the pages of the more traditional economic trade press. See for instance Giles 2019.
[47] Zuboff 2019. For a sustained critique of Zuboff’s claim, which further problematises the power afforded to advertising revenue, see Lucas 2020.
[48] Bernes 2013.
[49] Deleuze & Guattari 2000, pp. 10-18.
[50] Galloway 2012, pp. 69-74.
[51] Milmo 2023 (accessed 14 April 2023).