| Title | Machine Habitus |
|---|---|
| Author | Massimo Airoldi |
| Genre | Sociology |
| ISBN | 9781509543298 |
Analogue Era (–1945)
Taking analogue to mean ‘not-digital’ (Sterne 2016), this first historical phase ranges in principle from the invention and manual application of algorithms by ancient mathematicians to the realization of the first digital computers right after the Second World War. Within this period, algorithms were applied either by human-supervised mechanical devices or by humans themselves (Pasquinelli 2017). In fact, up until the early twentieth century, the word ‘computer’ indicated a person employed to make calculations by hand. Mechanical computers started to be conceptualized at the beginning of the nineteenth century, following Leibniz’s early intuitions about the mechanization of calculus (Chabert 1999), as well as a rising demand for faster and more reliable calculations from companies and governments (Wilson 2018; Campbell-Kelly et al. 2013). Aiming to automate the compilation of tables for navigation at sea, particularly strategic for the British Empire, in the 1820s the mathematician Charles Babbage designed the first mechanical computer, the Difference Engine, which was then followed by the more ambitious Analytical Engine – ideally capable of performing ‘any calculation that a human could specify for it’ (Campbell-Kelly et al. 2013: 8). Babbage’s proto-computers were pioneering scientific projects that remained largely on paper, but more concrete applications of simpler electro-mechanical ‘algorithm machines’ (Gillespie 2014) came to light by the end of the century. In 1890, Hollerith’s electric tabulating system was successfully employed to process US census data, paving the way for the foundation of IBM. Thanks to the punched-card machines designed by Hollerith, information on over 62 million American citizens was processed within ‘only’ two and a half years, compared with the seven years taken by the previous census, with an estimated saving of 5 million dollars (Campbell-Kelly et al. 2013: 17–18). The mass production of desk calculators and business accounting machines brought algorithms closer to ordinary people’s everyday routines. Still, information was computationally transformed and processed solely through analogue means (e.g. punched cards, paper tapes) and under human supervision.
Digital Era (1946–1998)
Through the 1930s and the 1940s, a number of theoretical and technological advances in the computation of information took place, accelerated by the war and its scientific needs (Wiener 1989). In 1943, the Harvard Mark I became the ‘first fully automatic machine to be completed’. However, it was still ‘programmed by a length of paper tape some three inches wide on which “operation codes” were punched’ (Campbell-Kelly et al. 2013: 57). The pathbreaking conceptual work of the British mathematician Alan Turing was crucial to the development of the first modern electronic computer, known as ENIAC, in 1946. It was a thousand times faster than the Harvard Mark I, and finally capable of holding ‘both the instructions of a program and the numbers on which it operated’ (Campbell-Kelly et al. 2013: 76). For the first time, it was possible to design algorithmic models, run them, read input data and write output results all in digital form, as combinations of binary numbers stored as bits. This digital shift produced a significant jump in data processing speed and power, previously limited by physical constraints. Algorithms became inextricably linked to a novel discipline called computer science (Chabert 1999).
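To make the stored-program idea concrete, the following is a minimal, purely illustrative sketch in Python: an instruction and the number it operates on sit in the same memory as combinations of bits. The 4-bit opcodes and the 12-bit word format are invented for the example and do not correspond to ENIAC or to any historical machine.

```python
# Illustrative sketch only: in a stored-program machine, instructions and data
# share the same memory as bit patterns. The opcodes below are hypothetical,
# not taken from any historical computer.

OPCODES = {"LOAD": 0b0001, "ADD": 0b0010, "STORE": 0b0011}  # invented 4-bit codes

def encode_instruction(op: str, operand: int) -> int:
    """Pack a hypothetical 4-bit opcode and an 8-bit operand into one 12-bit word."""
    return (OPCODES[op] << 8) | (operand & 0xFF)

memory = [
    encode_instruction("LOAD", 42),  # an instruction, stored as bits
    encode_instruction("ADD", 7),    # another instruction
    42,                              # a plain number, stored in exactly the same way
]

for word in memory:
    print(f"{word:012b}")  # every memory word is just a combination of binary digits
```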
With supercomputers making their appearance in companies and universities, the automated processing of information became increasingly embedded into the mechanisms of post-war capitalism. Finance was one of the first civilian industries to systematically exploit technological innovations in computing and telecommunications, as in the case of the London Stock Exchange described by Pardo-Guerra (2010). From 1955 onwards, the introduction of mechanical and digital technologies transformed financial trading into a mainly automated practice, sharply different from ‘face-to-face dealings on the floor’, which had been the norm up to that point.
In these years, the ancient dream of creating ‘thinking machines’ spread among a new generation of scientists, often affiliated with the MIT lab led by professor Marvin Minsky, known as the ‘father’ of AI research (Natale and Ballatore 2020). Since the 1940s, the cross-disciplinary field of cybernetics had been working on the revolutionary idea that machines could autonomously interact with their environment and learn from it through feedback mechanisms (Wiener 1989). In 1957, the cognitive scientist Frank Rosenblatt designed and built a cybernetic machine called Perceptron, the first operative artificial neural network, assembled as an analogue algorithmic system made of input sensors and resolved into one single dichotomous output – a light bulb that could be on or off, depending on the computational result (Pasquinelli 2017). Rosenblatt’s bottom-up approach to artificial cognition did not catch on in AI research. An alternative top-down approach, now known as ‘symbolic AI’ or ‘GOFAI’ (Good Old-Fashioned Artificial Intelligence), dominated the field in the following decades, up until the boom of machine learning. The ‘intelligence’ of GOFAI systems was formulated as a set of predetermined instructions capable of ‘simulating’ human cognitive performance – for instance by effectively playing chess (Fjelland 2020). Such a deductive, rule-based logic (Pasquinelli 2017) lies at the core of software programming, as exemplified by the conditional IF–THEN commands running in the back end of any computer application.
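The perceptron’s mechanism can be sketched in a few lines of present-day code. This is a minimal illustration of the decision and learning rule as it is usually formalized today (a weighted sum of inputs thresholded into a single on/off output, adjusted through error feedback), not a reconstruction of Rosenblatt’s analogue hardware; the logical-AND task and the learning-rate value are invented for the example.

```python
def perceptron_train(samples, labels, epochs=20, lr=0.1):
    """Minimal perceptron sketch: a weighted sum of inputs is thresholded into
    one binary output (the 'light bulb' on or off), and the weights are nudged
    whenever that output disagrees with the target (error-driven feedback)."""
    n = len(samples[0])
    weights = [0.0] * n
    bias = 0.0
    for _ in range(epochs):
        for x, target in zip(samples, labels):
            output = 1 if sum(w * xi for w, xi in zip(weights, x)) + bias > 0 else 0
            error = target - output  # feedback signal: -1, 0 or +1
            weights = [w + lr * error * xi for w, xi in zip(weights, x)]
            bias += lr * error
    return weights, bias

# Toy usage: learn the logical AND of two binary 'sensor' inputs.
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
y = [0, 0, 0, 1]
w, b = perceptron_train(X, y)
for x in X:
    print(x, 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0)
```

The contrast with GOFAI is visible in the code itself: nothing here states the AND rule as an explicit IF–THEN instruction; the behaviour emerges from repeated feedback on examples.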
From the late 1970s, the development of microprocessors and the subsequent commercialization of personal computers fostered the popularization of computer programming. By entering people’s lives at work and at home – e.g. with videogames, word processors and statistical software – computer algorithms were no longer the preserve of a few scientists working for governments, large companies and universities (Campbell-Kelly et al. 2013). The digital storage of information, as well as its grassroots creation and circulation through novel Internet-based channels (e.g. emails, Internet Relay Chats, discussion forums), translated into the availability of novel data sources. The automated processing of large volumes of such ‘user-generated data’ for commercial purposes, inaugurated by the development of the Google search engine in the late 1990s, marked the transition toward a third era of algorithmic applications.
Platform Era (1998–)
The global Internet-based information system known as the World Wide Web was invented in 1989, and the first browser for web navigation was released to the general public two years later. Soon, the rapid multiplication of web content led to a pressing need for indexing solutions capable of overcoming the growing ‘information overload’ experienced by Internet users (Benkler 2006; Konstan and Riedl 2012). In 1998, Larry Page and Sergey Brin designed an algorithm able to ‘find needles in haystacks’, which then became the famous PageRank of Google Search (MacCormick 2012: 25). Building on graph theory and citation analysis, this algorithm measured the hierarchical relations among web pages based on hyperlinks. ‘Bringing order to the web’ through the data-driven identification of ‘important’ search results was the main goal of Page and colleagues (1999). With the implementation of PageRank, ‘the web is no longer treated exclusively as a document repository, but additionally as a social system’ (Rieder 2020: 285). Unsupervised algorithms, embedded in the increasingly modular and dynamic infrastructure of web services, started to be developed by computer scientists to automatically process, quantify and classify the social web (Beer 2009). As it became possible to extract and organize in large databases the data produced in real time by millions of consumers, new forms of Internet-based surveillance appeared (Arvidsson 2004; Zwick and Denegri Knott 2009). The development of the first automated recommender systems in the early 1990s led a few years later to a revolution in marketing and e-commerce (Konstan and Riedl 2012). Personalized recommendations aimed to predict consumer desires and assist purchasing choices (Ansari, Essegaier and Kohli 2000), with businesses being offered the promise of keeping
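The intuition behind PageRank can be sketched in the power-iteration form described by Page and colleagues (1999): a page’s score reflects the likelihood that a ‘random surfer’ following hyperlinks ends up on it, so pages linked to by many important pages rank higher. The toy four-page link graph, the damping factor of 0.85 and the fixed iteration count below are illustrative assumptions, not Google’s production settings.

```python
def pagerank(links, damping=0.85, iterations=50):
    """Minimal power-iteration PageRank sketch: repeatedly redistribute each
    page's score along its outgoing hyperlinks, mixed with a small uniform
    'teleport' term, until the scores stabilize."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1 - damping) / n for p in pages}
        for page, outgoing in links.items():
            targets = outgoing or pages  # pages with no outlinks spread evenly
            share = rank[page] / len(targets)
            for target in targets:
                new_rank[target] += damping * share
        rank = new_rank
    return rank

# Hypothetical four-page web: the heavily linked-to page C ranks highest.
toy_web = {"A": ["B", "C"], "B": ["C"], "C": ["A"], "D": ["C"]}
for page, score in sorted(pagerank(toy_web).items(), key=lambda kv: -kv[1]):
    print(page, round(score, 3))
```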