SIGCIS 2015 Workshop Abstracts

Session Details for SIGCIS 2015 Workshop: "Infrastructures"

9:00 - 10:30 am: Plenary Session
 
Workshop and Speaker introductions by Andrew Russell (Stevens Institute of Technology)
 
Keynote lecture by Prof. Nathan Ensmenger (Indiana University), The Materiality of the Virtual: An Environmental History of Computing

Abstract: In recent years the scope of the history of computing has become bigger, broader, and more global. In an attempt to expand my own historiographical horizons, I have been experimenting with what I am calling an environmental history of computing. The goal is to ground the history of the digital economy in the material world by focusing on the relationship between “computing power” and more traditional processes of resource extraction, exchange, management, and consumption. When we look beyond the consumption of digital products, and focus on the physical infrastructure that makes our online interactions possible — the actual vast web of wires, cables, towers, generators, and other physical equipment that underlies the apparently virtual realm of Cyberspace — the digital present does not seem quite so discontinuous with our industrial past. In fact, many of the most significant social and economic nodes of the Information Society sit at the intersection of traditional, material infrastructures like railroads, power grids, and river systems. In this paper I will suggest several ways in which humans, the environment, and information technology have interacted over the past several centuries. More specifically, I will explore the global life-cycle of a digital commodity — in this case a unit of the virtual currency Bitcoin — from lithium mines in post-colonial South America to the factory city-compounds of southern China to a "server farm" in the Pacific Northwest to the "computer graveyards" outside of Agbogbloshie, Ghana.
 
11:00 am - 12:30 pm: Parallel Sessions I
 
1A. Labor, Workforce, Maintenance
  1. Arvid Nelsen (University of Minnesota), Concern for the 'Disadvantaged': Computer Training Programs for Communities of Color in the Late 1960s
    Abstract: This paper examines the factors that influenced the content and goals of computer training programs created in the late 1960s for communities of color by computer professionals searching within their own sphere for solutions to problems experienced by African-American and Spanish-speaking communities. Course content and pedagogical philosophies were shaped by perceptions of race and class – perceptions that were sometimes conflated as predominantly white individuals focused on the “disadvantaged” – a term that for a period of time would be understood to refer specifically to a subset of communities of color: urban, adult/young adult, predominantly male, undereducated, and un- or under-employed. Local chapters of the Association for Computing Machinery and other organizations initiated at least a dozen training programs in 1968, with others following shortly. In 1969 the ACM announced a new Committee on Computing and the Disadvantaged. By 1972 discussion of such initiatives had all but disappeared and in 1974 the Committee itself was disbanded.

    Material for this presentation comes from a book I am writing on the history and evolution of these and later programs. Computer education for persons of color has a history that is generally unknown, up to and including significant activities today. Knowledge of earlier programs is almost entirely lost, to the history community and to educators who could benefit from the lessons of the past.  The fact that community needs have persisted as programs have come and gone leads to the questions: What promotes student success? What promotes program sustainability? Factors to consider include: the contemporary state of the technology, the locus of computing, governmental initiatives, societal concerns, and business trends. Program details explored include the identity of program providers, their relationship to the communities served, and the genders and ages of students, all in respect to stated goals and intended outcomes, curriculum content and methodologies, and funding sources.

    Scholars in disciplines from cultural and media studies to education and policy have written on race and computing. (Everett 2007)  Historians of technology have recognized the importance of race (Green 2001, Lakwete 2003, Sinclair 2006, Pursell 2006, De La Pena 2010) but have undertaken little research. This has been attributed to a genuine dearth of sources, but they can be found. My work is based on journal and newspaper articles, archival sources, and oral histories.
     
  2. Amy Sue Bix (Iowa State University), Technical Work and Gendered Professionalization in the 1970s and 1980s: The Association for Women in Computing
    Abstract: Women's access to technical knowledge in modern U.S. society reflects a contested history. After decades of gradual shifts, support, backlash, and pressure for change, more women won opportunities to join the American engineering community during the early 1900s, in World War II, and even in the conservative postwar years. In my recent book, Girls Coming to Tech! A History of American Engineering Education for Women (MIT Press, 2014), I documented the story of how college engineering became coeducational, starting in the late 1800s. Shifting direction, my new research concentrates on just the last fifty years, the period of most rapid change. While compressing the time-frame, it expands the focus beyond changes in engineering education, to investigate a broader evolution of women’s history in American technical work, amidst intensified discussions of gender and diversity in engineering professionalization, career paths, and public outreach.

    This paper addresses this history of gender as a growing center of attention in technical education and professionalization, through focused case-study comparisons of the AWC (Association for Women in Computing) and ACM-W (Association for Computing Machinery’s Council on Women in Computing). Established in 1978, the AWC not coincidentally emerged at a time of key transitions in three areas: first, a growing public visibility of computer/technical work; second, an expanded discussion of women’s position in American engineering and technical fields; and third, new debates over issues confronting women who sought success in business and the professional world, following years of second-wave feminist activism. In its early years, the AWC’s agenda, aims, and activities reflected, negotiated, and worked to advance all three of these sets of issues. The group fostered discussions of feminism, professionalization, and technology, and portrayed these issues as connected in essential ways. This paper then assesses the approach taken to gender, professionalization, and technology two to three decades later, by examining the philosophy, approach, and activities of ACM-W. The more recent history of ACM-W conferences, writings, outreach, and analysis illustrates what had changed and what had not, regarding gender and technology, from the 1970s to the early twenty-first century.

    Evaluating these case-studies of AWC and ACM-W underlines broader patterns in the history of organizing, advocating, and outreach that engaged a growing number of female STEM professionals from the late 1960s to today. Early networking of the 1970s reflected faith that creating a critical mass of women in technical fields could ultimately change the climate for the better, that individual triumphs ultimately would redound to create more positive conditions for women in traditionally-masculinized fields. ACM-W’s later work grew international and continued to emphasize themes such as mentoring, support, and education, with more financing, visibility, and effect than earlier decades of discussion. Yet ACM-W itself and other observers also concluded that the task of promoting more female success in engineering and technical work remains a deeper challenge than many advocates had earlier recognized, for complex psychological, educational, and social reasons. Today, the aim of promoting diversity in STEM commands more attention than ever, from the White House and groups such as the wider body of ACM, the National Academy of Engineering, the AAUW, Girl Scouts, local schools, corporations, and hundreds of other actors. Such high-profile interest and the ongoing relevance of this topic make it particularly valuable to assess how this conversation has evolved over the last fifty years, partly through the history of groups such as the AWC and ACM-W.
     
  3. William Aspray (University of Texas at Austin), The History of NSF Programs to Broaden Participation in Computing
    Abstract: This paper will provide an overview of the history of NSF’s programmatic efforts to broaden participation in computing. It will focus on three programs within CISE (NSF's computer science directorate): IT Workforce, Broadening Participation in Computing, and Computing Education for the 21st Century. For the last of these I will give particular attention to NSF’s collaboration with the College Board, ACM, the Computing in the Core Coalition, and Code.org to introduce substantive computer science education into 10,000 US schools. The talk will discuss how the programs in computing were influenced by two programs, the Program on Women and Girls and the ADVANCE program, which covered all of the STEM disciplines, not just computing. Even more generally, I will talk about how the Civil Rights and Women's Rights movements, the Vietnam War, the Reagan Administration’s hostility to federal education programs, and the reverse discrimination lawsuits of the 1990s all shaped these efforts. Two of the important topics to be covered involve infrastructure: the MII and MIE programs for information infrastructure at minority-serving institutions and the EOT PACI programs providing wide educational and outreach efforts related to national advanced networking and high-performance computing efforts.
     
  4. Lee Vinsel (Stevens Institute of Technology), ICTs, Auto Safety, and System Maintenance: The Toyota Unintended Acceleration Recalls, 2009–2011
    Abstract: Over the last forty years, automotive engineers and designers have increasingly relied on information and communication technologies (ICTs) to control a host of functions in cars. While many of these technological changes have been a blessing to auto consumers, they have also interrupted some traditional ways of dealing with cars, including regulations aimed at fostering auto safety. Instead of examining automotive computers as a case of innovation, whereby new technologies are diffused, this paper will focus on how new technological designs have affected forms of system maintenance. Here, maintenance is a perspective as much as it is a set list of topics, such as Kevin Borg’s account of how auto mechanics have had to wrestle with computers in their efforts to maintain cars. The point is to view the overall automotive system—including cars, drivers, and attendant infrastructures and technologies (gas stations, traffic systems, roadways, etc.)—as a system, particularly as a system heavily based in ICTs, and then to ask how a wide variety of groups and individuals maintain it. In the United States, federal regulatory enforcement has worked to ensure that automobiles live up to basic standards since the 1960s. These activities have led to the inclusion of many artifacts and systems, including, just in automobiles themselves, energy-absorbing dashboards, airbags, crumple zones, catalytic converters, and over two dozen other design features. In this way, regulatory enforcement tries to guarantee that the “infrastructures” of daily, auto-dependent life in the United States meet social expectations, as expressed in legislation and federal codes.

    To some degree, computers have aided this work of system maintenance. Indeed, as Kevin Borg and Ann Johnson have shown, engineers put the first computers into cars as a way of meeting federal automotive pollution control standards. Moreover, the federally-mandated “check engine light” became a means for pushing automotive consumers to properly maintain their vehicles, particularly when it came to emission control systems. Yet, the US auto safety agency, the National Highway Traffic Safety Administration (NHTSA), has struggled to address the entrance of computers into cars. The agency did not have organizational capabilities or expertise around computing, and staff members found themselves scrambling when automotive computers came under scrutiny as a possible safety hazard. To examine this dynamic, this paper uses the Toyota safety recalls of 2009 to 2011—in which some individuals and groups argued that onboard computers may have been causing “unintended acceleration.” The paper relies on published sources and potentially interviews to examine how NHTSA worked to understand automotive computers, to render these technologies mundane, and to include them in the agency’s routine practices of system maintenance.

1B. Works in Progress I

  1. Eric Hintz (Smithsonian Institution), Susan Kare: Design Icon
    Abstract: Susan Kare designed most of the distinctive icons, typefaces and other graphic elements that gave the Apple Macintosh its characteristic—and widely emulated—look and feel.  In this work-in-progress, I present a biography of Kare, explore her pioneering work on the Apple Macintosh, and trace her continuing influence on user interface design.

    In 1983, Kare, a graphic designer, was recruited onto the Macintosh team by software engineer Andy Hertzfeld, a former high school classmate.    Unlike the textual, command-line displays of earlier mainframes and mini-computers, the Macintosh featured a bit-mapped display in which each on-screen pixel was individually controlled by a single "bit" of data.  Creating graphics was simply a matter of deciding which bits to turn on and off. Using a simple graph paper sketchbook (and later, Hertzfeld’s Mac icon editor), Kare drew pictorial metaphors for various commands and operating statuses; these became the Mac’s signature icons—the trash can, the paintbrush, and the ticking bomb. She also designed several proportional typefaces (e.g. Geneva, New York, and Monaco) that improved upon the mono-spaced characters found on typewriters and earlier computers.
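
    As a minimal sketch of this bit-to-pixel idea (illustrative only, not Kare's or Hertzfeld's actual tools, and using a hypothetical 8x8 pattern), a monochrome icon can be stored one row per byte, with each bit switching a single pixel on or off:

      # Sketch in Python: a monochrome 8x8 icon, one byte per row, one bit per pixel.
      # The pattern is hypothetical, not an actual Macintosh icon.
      ICON_ROWS = [
          0b00111100,
          0b01000010,
          0b10100101,
          0b10000001,
          0b10100101,
          0b10011001,
          0b01000010,
          0b00111100,
      ]

      def render(rows):
          """Print the icon: '#' where a bit is on, '.' where it is off."""
          for row in rows:
              line = ""
              for bit in range(7, -1, -1):   # most significant bit = leftmost pixel
                  line += "#" if (row >> bit) & 1 else "."
              print(line)

      render(ICON_ROWS)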

    Kare’s work has been described in various Steve Jobs biographies, Apple corporate histories, and the technical press; for example, the New York Times has called her “the Betsy Ross of the personal computer."   Kare’s iconography has also been featured in exhibitions at the Smithsonian’s National Museum of American History, New York’s Museum of Modern Art, and the New Mexico Museum of Natural History and Science in Albuquerque. However, to my knowledge, historians of computing have published little on Kare’s design work, an oversight which this article hopes to redress.

    The article situates Kare’s GUI design work in the context of earlier efforts by Douglas Engelbart (Stanford Research Institute) and Alan Kay and Adele Goldberg (Xerox PARC).  It describes Kare’s academic training, artistic influences, and design process at Apple and other clients. Drawing on contemporary press coverage and archival film footage from the 1980s, plus several more recent interviews, the article documents Kare’s crucial role in shaping the Apple Macintosh GUI and her continuing influence on user interface design.
     
  2. Jacob Ward (University College London), Research Transplanted and Privatised: Post Office/British Telecom R&D in the Digital and Information Era
    Abstract: “Research is the door to tomorrow” was the rallying cry engraved above the entrance to the Post Office Research Station, Dollis Hill, home of Colossus, popularly known as the world’s first programmable, electronic, digital computer. In 1975, Dollis Hill was vacated, replaced by the new Post Office Research Centre in Martlesham, Suffolk, and less than a decade later, in one of the most portentous acts of the Thatcher government, Post Office Telecommunications was renamed British Telecommunications (BT) and privatised. Martlesham was no longer a government laboratory, but a private research and development establishment.

    The relocation of Dollis Hill and the privatisation of BT mark an essential period in the history of British telecommunications, yet remarkably little has been written about telecommunications in post-war Britain. Drawing on a broad range of Post Office and other government documents, this dissertation proposes to address that deficit by exploring this history in four contexts: local, national, international, and “future-mindedness”.

    The local context would be explored by investigating the relocation of the research station and the new village of Martlesham Heath, built around the station to provide housing. The village was laid out according to traditional English ideals, and enquiry would focus on the relationships between science, technology, and English tradition.

    The national context concerns the dramatic organisational changes which the Post Office (GPO) experienced, as it was disestablished from the Civil Service in 1969, and then renamed BT and privatised in 1984. Research would explore the history of the political, organisational, and technological aspects of this liberalisation of British telecommunications.
    The international context would be investigated through comparison with the break-up of AT&T’s monopoly. There were clear political influences from the USA on the Thatcher government’s programme of privatisation, and research would focus on this transatlantic political shift and its influence on science, technology, and infrastructure.

    Finally, the impact of the “future-mindedness” of the Post Office’s Long Range Planning Division would also be studied. This division envisioned technological futures to guide telecommunications development, and research would explore the rhetorical power of the future on Britain’s highly politicised telecommunications infrastructure, and how such futures make and are made by the state.

    From these research topics, I hope to explain the patterns of invention, innovation, and infrastructural development in British telecommunications in the 1970s and 1980s, and contextualise this within the liberalisation of the GPO/BT as it moved from public to private.
     
  3. Christine Mitchell (New York University), Bright Side of a Dark Age: Developments in Machine Translation, 1966-1992
    Abstract: This work-in-progress forms part of a book project that pursues a media theory of translation.  In an effort to push past debates and distinctions between human-natural and machine-artificial language, the book examines machine translation (MT) developments alongside (and incorporating) translator labor, as well as techniques and technologies intended to optimize foreign language acquisition. Sites investigated include Warren Weaver’s collection of Alice in Wonderland translations and Luis von Ahn’s Duolingo app, which trades language lessons for translation work. The chapter to be workshopped explores MT research between 1966 and 1992 – typically viewed as an MT dark age – with a focus on significant events in Canada.

    Marking the lower date is the 1966 ALPAC report that famously brought an end to MT funding in the US. In Canada, however, a growing commitment to bilingualism across government operations, and the ever-pressing demand to furnish parliamentary publications instantaneously in both French and English, encouraged Canada’s National Research Council to inaugurate its own MT research program, despite waning interest south of the border.  One result of this initiative was TAUM-MÉTÉO, a rule-based system used for two decades to translate daily Environment Canada weather reports without human intervention.

    Marking the upper date is the publication of a watershed paper by American IBM researchers that applied statistical approaches to MT, soon followed by a contentious international conference focused on (and titled) “Empiricist vs. Rationalist Methods in MT.” The so-called rationalists remained committed to formal linguistic theory and followed rule-based approaches, while the empiricists, led by IBM, were in ascendancy and breaking ground on a corpus-based approach. Curiously enough, IBM’s breakthrough corpus consisted of 13 years of machine-readable bilingual text provided by the Canadian Government Translation Bureau, rendered thus to facilitate speedier printing of versions of parliamentary proceedings in French and English.
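
    As a minimal sketch of the corpus-based idea (illustrative only, and far simpler than the IBM group's actual models), word-translation probabilities can be estimated from aligned sentence pairs by expectation-maximization over a toy parallel corpus:

      # Sketch in Python, in the spirit of the early IBM word-based models.
      # The three sentence pairs are a hypothetical toy corpus, not the Hansard data.
      from collections import defaultdict

      corpus = [
          ("la maison".split(),  "the house".split()),
          ("la fleur".split(),   "the flower".split()),
          ("une maison".split(), "a house".split()),
      ]

      english_vocab = {e for _, es in corpus for e in es}
      t = defaultdict(lambda: 1.0 / len(english_vocab))   # t(e|f), uniform start

      for _ in range(10):                                 # a few EM iterations
          count = defaultdict(float)                      # expected counts of (e, f)
          total = defaultdict(float)                      # expected counts of f
          for fs, es in corpus:
              for e in es:
                  z = sum(t[(e, f)] for f in fs)          # normalize over source words
                  for f in fs:
                      c = t[(e, f)] / z
                      count[(e, f)] += c
                      total[f] += c
          for (e, f), c in count.items():                 # re-estimate t(e|f)
              t[(e, f)] = c / total[f]

      print(round(t[("house", "maison")], 2))             # grows toward 1.0 with iterations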

    The chapter posits language as the original “big data” challenge, and follows developers as they grappled with the material and technical implications. What kind of “thing” was English? was French? and how should it be harnessed, programmed and processed by computers? What did translation look like when it was engineered to meet institutional objectives, and when it was conceived as a general problem? How did conceptual, methodological and technological shifts affect the way MT was situated – as a task for computer engineers, for AI, for linguistics, for translators and for readers?
     

2:00 - 3:30 pm: Parallel Sessions II

2A. The Infrastructure of Digital Archives

Session Organizer: Giuditta Parolini (Technische Universität Berlin and Berliner Zentrum für Wissensgeschichte)
Session Description: Digital archives deserve a prominent place among the infrastructures of the information age. While records on paper and other physical media are considered fragile, digital repositories are deemed able to guarantee durability for the data they store. Moreover digital archives are adaptable to a multiplicity of purposes in both the sciences and the humanities and they offer opportunities not only for the safe storage of information, but also for its efficient retrieval and dissemination.

Yet the transfer of records from traditional media to electronic databases is not devoid of consequences. The development of digital archives forces researchers to adopt new strategies and formats for collecting data. It also presents novel issues, such as data curation and quality control, in relation to data management, and provides a wider set of opportunities, ranging from open access to password restrictions, for data sharing. The constitution of electronic archives, therefore, requires a contextual restructuring of research practices and professional roles.

The session will address the many challenges posed by digital archives, considering case studies taken from medicine, agriculture, and the management of historical business records. Joseph November will examine an early attempt at digitising the medical record undertaken during the 1960s and the criticism that this digitisation process provoked. Giuditta Parolini will investigate the creation of a digital archive for long-term experimental records in agriculture and the problems connected to producing a digital version of the original paper records. Moving from the sciences to the humanities, Erik Rau will present the strategies currently developed by the Hagley Library to archive electronic business records.

The session’s talks will contribute to the ongoing debate about digital archives by examining three crucial issues: the labour required to transform traditional paper records into digital format; the qualitative differences between archiving physical media and archiving electronic records; and the new skills which researchers need for dealing with electronic repositories.

  1. Joseph November (University of South Carolina), The Medical Record and the 50-Year Challenge to Computing
    Abstract: Since the 1960s, the medical record, the documentation of a patient’s medical care over time at a particular institution, has proven a particularly vexing challenge to anyone hoping to computerize the day-to-day operations of hospitals and medical clinics.

    Fifty years and many millions of dollars after physician Octo Barnett and his colleagues at Bolt, Beranek and Newman attempted to convert the paper-bound medical record at Massachusetts General Hospital into an electronic one, the “Era of Electronic Medical Records” still tends to be viewed as something that is yet to come. Further, as debates (especially in the USA and Western Europe) grow over the rising cost of healthcare, the advent of the electronic medical record has come to be hailed as a panacea for medicine’s financial woes.

    To date, there have been a few efforts by historians to examine the earliest (1960s) attempts to implement EMR, but these efforts leave off in the early 1970s, when the activities that laid the intellectual and institutional groundwork for much of the current EMR work began to intensify.

    To clarify these activities, this paper will examine the continued work of Dr. Barnett in the 1970s, which culminated in the development of the Massachusetts General Hospital Utility Multi-Programming System (MUMPS), or M as it is now known. The language met with only limited success in the medical context but was widely adopted by financial institutions.

    The paper will also examine the critique Lawrence Weed, MD offered to both computing and medical professionals hoping to computerize the medical record. Weed’s perspective is particularly valuable because it spans several decades as well as several disciplines – his first articulations on the matter appeared in the New England Journal of Medicine in 1968 and he remained engaged in related discussions through 2011, when he published his book, Medicine in Denial.
     

  2. Giuditta Parolini (Technische Universität Berlin and Berliner Zentrum für Wissensgeschichte), From paper to bit: a digital life for the records of long-term experiments in agriculture
    Abstract: Since the nineteenth century the field experiments at Rothamsted Experimental Station (now Rothamsted Research) have been a source of precious data for agricultural science. In the early 1990s the institution created a permanent managed database, the Electronic Rothamsted Archive (e-RA) http://www.era.rothamsted.ac.uk, for the secure storage and dissemination of the data from its classical experiments, and other long-term trials and surveys.

    The talk will address the challenge, posed by the creation of e-RA, of converting the original paper records into electronically readable data. The results of the long-term investigations at Rothamsted existed in several formats: some had been printed in the station’s reports or in other scientific publications of the Rothamsted staff, but many were scattered in notebooks, archival material, and printed and handwritten data sheets. The first task of the statistician in charge of the project was to collect and cross-reference this data. In addition, qualitative information from present and past staff members of the station and from archival and unpublished sources was added to e-RA to provide the metadata necessary for the interpretation of the records. In collaboration with the computer scientist who developed the software for the archive, the statistician designed the ‘sheet description’, which determines the layout of the data, their type and format, and the meanings of the different variables within e-RA. Moreover, the statistician had to investigate and resolve inconsistencies between data gathered from different sources.

    I will argue that converting the Rothamsted paper records into bits entailed a restructuring of the meaning attributed to data in agriculture. In particular, it became necessary to rethink the representation of agricultural data, integrating quantitative and qualitative information, and improving the reliability of the figures provided by the original published and unpublished paper sources. The talk will address the history of e-RA using scientific publications and oral histories collected with statisticians and computer scientists, who took part in the project.

    The case study examined in the talk will help extend the literature on digital archives by addressing the case of agricultural science, so far neglected despite the value that long-term datasets from field experiments have in agronomy and climate science.
     
  3. Erik Rau (Hagley Museum and Library), A Future for History (of Technology, Science, Medicine, and the Environment): Understanding the Challenges of Preserving Corporate Records in the Digital Era
    Abstract: For over a decade, David Kirsch has called for preserving the corporate records of internet-era ventures, particularly electronic records. For while business records generally raise the perennial tension between private and public interests in access and historical interpretation, the rapid rise and fall of startups in places like Silicon Valley in California during the 1990s and 2000s—the rate of churn—made the records of internet businesses particularly volatile. Haphazard record keeping by young and overstressed entrepreneurs, combined with a lack of clear standards for electronic records retention and preservation at that time, have made historians like Kirsch anxious about the prospects for a history of the “dot-com” era beyond media interviews and oral histories.

    As rarified as Silicon Valley in the dot-com era may seem, it presents us with a portent of a future for historical research more generally. Writing the history of science, technology, medicine and the environment from 2000 onward, to pick a somewhat arbitrary date, will necessarily require access to corporations’ electronic records. Government archives and court records will not by themselves provide an adequately comprehensive record. Most activities in modern society interact in some way with corporate interests and operations. What President Calvin Coolidge told the American Society of Newspaper Editors 90 years ago, “the chief business of the American people is business,” is at least as true today as then, and true for more than just Americans.

    I will provide an overview of strategies employed at Hagley Library to make available the electronic primary sources of tomorrow’s historical research. As the most prominent collection of sources in the history of American business, technology, and industrial design, Hagley Library has had to meet both the researcher’s need for access and businesses’ need for confidentiality concerning current operations. Electronic records are already being preserved. How will the training of researchers need to adapt to working with these sources? Will access mediated by machine (computer) qualitatively alter the nature of historical research?

    The presentation is intended to promote a dialog between both sides—supply and demand—of historical data, and how electronic documentation may affect the future of history.

2B. Peripheral Play & Connectivity

Session Organizer: Brent Strang (Stony Brook University)
Session Description: Among those who imagined possibilities for the television beyond watching programs was Ralph Baer, whose Magnavox Odyssey helped reconfigure the living room as a training ground for digital apperception. The interconnecting of videogame consoles, talking toys, VCRs, disc players, computers, video cameras, Teletext, stereo receivers, etc. changed the electronic hearth into a media octopus, with each arm requiring a new set of technical competencies for users to learn. Peripheral controllers served as the user’s primary point of contact, providing a tactile channel for inputting and receiving coded signals, translated for our eyes and ears. Users growing up in front of the television, with all of its connected components, peripherals, and playthings, have developed skills, habits, desires, and expectations that are necessary to the culture of ubiquitous computing. And yet, these technologies did not enter the home fully formed, but found their place within a domestic space of work and leisure alongside existing media practices and habits.

Our panel brings this historical perspective to bear on various peripherals that have come and gone over the past thirty years. Their disappearance has not made them irrelevant; rather, they remained virtual until certain socio-cultural conditions and technological infrastructure could support their reappearance in later forms. The ‘virtual’ is here conceived as Deleuze’s ‘real-but-abstract’ – what is joined to an object in its passing, not in its fixed corporeal form (Massumi, 2002). The virtual thus acts as a force of continuity, infusing each failed and successful instance with the vision of Baer and others for an increasingly inter-connected, digitally interactive domestic environment. For the materialization of this vision, connectivity is key—not only for the peripherals themselves, but also for the wider infrastructure supporting wireless technological innovations, standards and protocols, and lines of feedback between users and product research & development.

  1. Reem Hilu (Northwestern University), “The Ultimate Doll”: Microprocessor Controlled Talking Dolls and Girls’ Play Practices in the Home
    Abstract:
    Topic: As personal computing was being adopted as a domestic technology, one site where microprocessors found popularity in the home was children’s talking dolls. In the 1980s, a number of dolls used microprocessors to enhance their interactive capability – enabling them to react to user inputs and to interface with other domestic media technologies in the home.

    Argument: Even before microprocessors made it possible for dolls to react to their environment, talking dolls were imagined as toys that were aware of and monitored the home and the behavior of the girls that played with them. I argue that the existing play practices and discourses around talking dolls were an important influence in determining how microprocessors were ultimately incorporated into these toys. These dolls were designed with sensors and voice control capabilities that augmented their existing disciplinary relationship to girls’ play. At the same time, the increasing volubility of these microprocessor dolls also disrupted and altered patterns of doll play and of girls’ audibility in the home.

    Evidence: My paper will analyze the design and promotion of popular microprocessor based talking dolls, focusing on Galoob’s Baby Talk and Smarty Bear (1985), developed in part by Ralph Baer. These toys were sold with a suite of products that allowed them to connect to the television to create the impression that they were conversing with characters on the screen. I will also discuss Baer’s notes, sketches, scripts, and design concepts for these toys in which he outlines his plan for a computer-mediated interactive television experience facilitated through the figure of an animated and vociferous doll companion. These examples demonstrate that in the case of talking dolls, computing technology was used to incorporate the doll into a network of media technologies that communicated with each other to more fully monitor the child in her domestic environment while at the same time opening up new possibilities for play.

    Contribution: Thus far histories of domestic computing and play have focused on masculine hobbies such as toy trains (Levy 1984) and backyard boy culture (Jenkins 1998) as influences, but have rarely taken seriously girls’ leisure practices as an important lineage that helped determine how computers were incorporated into the home. My paper intervenes in this history by presenting doll play as a set of practices and techniques on which one form of domestic computing, and more specifically girls’ interaction with computer technology, was modeled.
     
  2. Brent Strang (Stony Brook University), Peripheral Convergence Through User-Centered Design: A Case-Study of Logitech
    Abstract:
    Topic: Television watching, home videogaming, and personal computing are distinct domestic practices, and yet the devices we use for each have converged over the past two decades. Logitech’s practice of User-Centered Design and its interdepartmental organization have been instrumental in the wider infrastructure supporting convergence of remote controls, gamepads, and mice, and by extension, their respective media formats.

    Argument: There is a pervasive tendency in media studies to follow Jenkins (2006) in understanding convergence as the merging of old and new media formats which results from the digitization of content. Media studies can learn, from the disciplines of HCI and design history, that media consumption is more than a matter of content, whether industry- or user-generated, but also a matter of competency that develops from users’ habituated practices with the tools they use for consumption. Convergence therefore concerns more than economic and socio-cultural effects: the physical environments in which we consume media are intensively designed to facilitate an intuitive set of gestures and digital articulations that become standardized across media formats. These leisure activities are habit-forming and fostered through consistency, which is a key principle in interaction design. Thus there is another level of infrastructure, beyond Jenkins’s purview and beyond the technological innovations in RF transceiver chips and MEMS, that has made possible the convergence of remote controls, gamepads, and mice. Companies that have a well-established practice of User-Centered Design, such as Logitech, have built effective infrastructures of communication and coordination across the various departments involved in their product development.

    Evidence: I survey failed instances of convergence in air mouse technology, such as Nokia’s IR ‘TV Mouse’ RCD (1991) and SONY’s RF ‘air egg’ RCD (1995), along with other gamepad/RCD hybrids from Philips CDi (1991), Sega (1995), and the Samsung NUON (2000). I then proceed to Logitech’s entree into the digital home in 2004, when they announced the reinvention of the RCD as the ‘mouse of the digital house’. As evidenced in their corporate archives, employee interviews, and newspaper and magazine articles, Logitech’s culture of communication facilitated user feedback through multiple nodes of contact between project managers, customer experience representatives, interaction designers, engineers, and industrial designers.

    Contribution: This presentation contributes to the disciplines of media studies and history of computing by drawing upon and further developing the work in HCI and UCD by Grudin (2006, 2008), Gulliksen & Lantz (2003), and Venturi, Troost, and Jokela (2006).
     
  3. Laine Nooney (Georgia Institute of Technology), The Infrastructure of Expertise, or What Game Engines Allow
    Abstract:
    Topic: The development, platform effect, and labor consequences of Sierra On-Line's proprietary in-house 1980s game engine, the Adventure Game Interpreter [AGI]. The presentation focuses on the ludic, economic, and labor effects that followed from what has largely been documented only as a technologically-progressivist transition in game development.

    Argument: The AGI engine was developed from 1982 to 1984, and quickly implemented as Sierra's standard game development platform during the 1980s. Accounts of this transition (journalistic or academic) document the engine's significance as largely graphical and intrinsically progressive, in that it catalyzed the shift from static image adventure games to an exploratory, interactive “2.5-D” game space. However, as I will argue, the effect of the AGI engine did not end at simply making Sierra's games more “realistic.” Rather, the engine was a critical business asset with three specific long-term consequences: the introduction of spatial puzzle design; a company business plan that took the proprietary system as its financial backbone; and, most importantly, in the context of this conference, the emergence of a formalized distinction between systems coders and game coders. Within the history of games, code is regarded as king; programmers are lauded as “wizards” and “geniuses” for their presumed coding mastery. However, Sierra presents a case in which an elegant technological solution (the AGI engine) also unshackled the company from its reliance on expert programmers, and further formalized the distinction between the game designer and the game programmer. “Mediocre” programmers, sound designers and animators were suddenly valuable commodities, permitting this specifically regionalist company (isolated as it was in Oakhurst, California) to hire more from the local population and train up, with the assistance of a relatively small number of highly-skilled programmers.

    Evidence: This presentation is based on oral histories and personal communications conducted with Sierra programmers and designers, including Al Lowe, Chris Iden, Bob Heitman, Corey Cole, and Dale Carlson. Supporting material is drawn from game software, computing and gaming magazines like Computer Gaming World, Questbusters, and Creative Computing, and Sierra's own publicity materials.

    Contribution: While Henry Lowood's emerging work on the game engine is a lone standout in game studies, his analysis is largely focused on how the engine impacts qualities of games themselves (as Lowood's primary case is Doom). Thus, while engines are examined within game studies to better understand shifts in play or advancements in tech, they are rarely understood from their infrastructure end—how they affect the labor practices of those who use them. In this sense, this paper not only reframes the discussion of the engine within video game history toward themes of labor and regional development, but also seeks to bring the game engine into conversation with computer and software history, where there is a more robust tradition of considering how internal software development affects company growth.

2C. Roundtable: Digital Humanities, SIGCIS, and SHOT

Session Organizer and Chair: Kimon Keramidas (New York University)
Session Description: Over the past decade the term “digital humanities” has been greeted with both enthusiasm and suspicion in higher education. Some see digital technologies as a way of providing humanities work with a revolutionary new toolset for data-driven research and information visualization, while others are concerned with the reduction of the messy, critical work of the humanities to positivist and theoretically shallow analysis. As the digital humanities has spread from institution to institution and taken different shapes, one particular concern that has developed is whether the scholars who are rapidly adapting digital technologies and media to their work are considering the ways in which the nature of these technologies intrinsically impacts digital humanities studies. Critics in fields such as media studies have wondered if the specific characteristics of these technologies are being considered thoughtfully and whether their expediency is being utilized with a considered understanding of the way that the design of these tools may predetermine the results they can create. In a similar way, it seems rare that a deeply-informed understanding of the history of digital technologies is a fundamental part of digital humanities projects, with tools and technologies often accepted at face value without an understanding of how historical and theoretical precedents and predecessors define the role a technology might play in such projects.

To ensure that the history of digital technologies remains an important part of the development of the digital humanities across the academy, we would like to propose a roundtable discussion that would allow the SIGCIS constituency to discuss what role it can play in informing a more educated understanding of the technologies that are making the digital humanities possible in the first place. Along with the session organizer, three participants will give their own definitions of the digital humanities and discuss how the work in their respective fields can better facilitate the conversation between SIGCIS and DH. We will also consider the challenges that scholars of different academic standings–the panel includes two faculty members, a postdoctoral fellow, and a graduate student–face in bringing the study of technology and digital humanities together. After brief 5-10 minute presentations by each panel member and a 15-20 minute period of discussion, the session will be opened to the audience to expand the conversation further and perhaps move towards considering platforms for organized communications after the conference.

Session Participants:

4:00 - 5:30 pm: Parallel Sessions III

3A. Networks and Politics

  1. Andrew Schrock (University of Southern California), From Black Hats to White Hats: Constructing the “Ethical Hacker”
    Abstract: The term “hacker” has experienced a recent renaissance in popular culture and has become intimately bound up with an entrepreneurial occupation. Facebook, for example, brands itself as “The Hacker Company” and lays claim to the term “hackathon.” These appropriations are associated with a lifestyle for software engineers that promises high pay and individual freedom. How did it become plausible for a stigmatized term to become associated with free market enterprise? To address this question this article draws on articles in popular media, online records, and corporate documents from the mid-1990s to mid-2000s. I argue that the role of the “ethical hacker” emerged to fulfill a need for knowledge about network infrastructures to flow from underground grassroots collectives to corporate boardrooms.

    Classic definitions of hackers emphasize computer skills, particularly with information security (“infosec”). The “golden era” mythology originating from universities defines hackers as creative problem-solvers. However, by the mid-1990s the public became increasingly aware, through media portrayals, of hackers as wily computer experts. Many youth were attracted to the subcultural and resistant connotations of “hacker” promoted in ‘zines such as 2600, and subsequently entered the workforce. The growth of computing infrastructure drove a demand for infosec experts. Yet the association of hackers with criminality presented a problem for businesses and federal agencies such as the Department of Defense (DOD). In response, IBM coined the term “ethical hacker” in 1995, mostly as a marketing tactic. Rather than being motivated by the pursuit of knowledge, ethical hackers were defined as gleaning financial benefit while not having to acclimate to the corporate environment. These hackers occupied a vital but liminal role in the rapidly growing worldwide computing infrastructure.

    The “certified ethical hacker” (CEH) designation formally emerged in 2001 through a company known as the EC-Council, which operates as a for-profit educational service. In 2010 the CEH designation became accepted by the US Federal Government, and tens of thousands have taken the certification to date. The “ethical hacker” came to denote a more formal occupation necessary to ensure the smooth operation and economic vitality of corporations. In the process, the “white hat” made plausible successive corporate identities that migrated increasingly far away from classic hacker origin stories. Even today the designation remains hotly disputed among the computing underground; ironically, the “ethical hacker” identity stripped hackers of the very sense of ethics that gave them power to be extra-institutional actors.
     
  2. Bradley Fidler (UCLA), The Emergence of Border Router Protocols and Autonomous Systems on the Internet, c. 1968-1989
    Abstract: Modern Internet infrastructure is a linked collection of “autonomous systems” – networks of networks that are typically governed by single organizations, and connected with other such systems by border routers. In this sense, the modern Internet is less a network of networks than a network of networks of networks. The routers that inter-network these autonomous systems (networks of networks) use the Border Gateway Protocol (BGP) standard (Rekhter et al., 2005). Crucially, BGP allows organizations flexibility in the design and management of the routing and other protocols within their own autonomous system, thereby enabling the much-trumpeted decentralized control of the global Internet. This flexibility was a necessary condition for the rapid global spread of the US Department of Defense's (DoD) inter-network standards, and the emergence of a multi-billion-dollar router industry with leaders such as Cisco.
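
    As a minimal sketch of the path-based routing this flexibility rests on (a simplification, not the BGP-4 decision process, and using reserved documentation AS numbers and a documentation prefix), a border router needs only the list of autonomous systems each advertisement has crossed, not any knowledge of their internal protocols, to select loop-free routes:

      # Sketch in Python: BGP-style inter-domain route selection by AS path.
      def best_route(advertisements, own_asn):
          """Drop any path already containing our own AS number (loop prevention),
          then prefer the shortest remaining AS path."""
          loop_free = [ad for ad in advertisements if own_asn not in ad["as_path"]]
          return min(loop_free, key=lambda ad: len(ad["as_path"])) if loop_free else None

      def readvertise(route, own_asn):
          """When passing a route on to a neighboring AS, prepend our own AS number."""
          return {"prefix": route["prefix"], "as_path": [own_asn] + route["as_path"]}

      ads = [
          {"prefix": "192.0.2.0/24", "as_path": [64501, 64502]},
          {"prefix": "192.0.2.0/24", "as_path": [64503, 64504, 64502]},
          {"prefix": "192.0.2.0/24", "as_path": [64500, 64505, 64502]},  # would loop back to us
      ]
      chosen = best_route(ads, own_asn=64500)
      print(chosen["as_path"])                      # [64501, 64502]: shortest loop-free path
      print(readvertise(chosen, 64500)["as_path"])  # [64500, 64501, 64502]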

    Here I study innovation and development of border router protocols during US DoD internetworking experiments (1974-83), and their subsequent, initial standardization in operational Internet infrastructure (1983-1986).  I argue that the priorities in the design and standardization of now-dominant router protocols can be linked to requirements of the US DoD in its efforts to develop a new generation of military computer networks throughout the 1970s and into the 1980s.  In particular, I analyze the roles of the Defense Communications Agency (now DISA), the Defense Advanced Research Projects Agency (DARPA), the defense contractor Bolt Beranek and Newman (BBN), and allied civilian organizations such as the Internet Engineering Task Force (IETF).  For this study I draw on my oral histories with performers and participants, as well as new archival research.

    BGP and router protocols more generally are important yet neglected components in the history of computer networks and the Internet, and are a subset of a broader and largely unwritten history, dating from 1972, of the gateways that have connected dissimilar networks (Fidler & Currie, 2015).  While the histories of the TCP/IP protocol suite and the end-to-end principle are well documented (Gillespie, 2006; Russell, 2015), this research should be expanded to include the development and consequences of the routers that help create the “inter-” of the Internet.
     
  3. Gerardo Con Diaz (Yale University), IBM and Patent Reform in the United States, 1965-1968
    Abstract: In 1965, President Lyndon B. Johnson created a group called the President’s Commission on the Patent System. This group included several government officials and high-ranking officers from firms such as IBM and Monsanto, and it was instructed to study the needs of the country’s inventors, firms, and patent system. Among the commissioners’ primary goals was the identification of any issues that could overburden the American patent infrastructure—that is, phenomena that could strain its facilities, personnel, or bureaucracy. Prominent among these issues in the commissioners’ eyes was the possibility of granting patents for computer programs, which in their view required procedures and skilled personnel that the Patent Office lacked.

    This paper studies the legal, philosophical, and bureaucratic considerations that informed the commissioners’ views on software patenting. I argue that the Commission gave special consideration to IBM’s views on the needs of the patent system and the nature of software as a technology and a commodity. In 1967, the Commission recommended that computer programs should be banned from receiving patent protection. By then, IBM’s lawyers had established a tradition of collaboration with the Commission; the firm’s legal team and managers regularly commented on the Commission’s drafts, and they even provided suggestions on how to revise the patent statutes in order to prohibit software patents altogether.

    This argument invites the introduction of patent law into our study of IBM’s immense market and political power in the 1960s. Instead of focusing on familiar issues such as the firm’s bundling practices and antitrust woes, this talk highlights IBM’s efforts to revise the American patent infrastructure. Its sources include trade publications, Congressional documents, and rare archival materials found in the Patent Office’s records. The talk opens with the formation of the Commission, details the group’s relationships with IBM, and studies the Johnson administration’s failed efforts to turn the Commission’s proposed ban into law. It ends by showing how dozens of firms, trade groups, and law associations appeared before Congress to oppose this ban and to show that the software industry was far too young to be legislated.
     
  4. Camille Paloque-Berges (Conservatoire National des Arts et Métiers), Unix networks cooperation as a shadow infrastructure for an early French Internet experience (1983-1993)
    Abstract: The French “battle” of computer communication networks famously resulted in the victory of the national, closed, service-oriented model put forward by France Télécom (Transpac and Minitel) over the model supported by researchers at INRIA (France’s National Institute for Research in Computer Science and Control) (Schafer, 2012; Schafer and Thierry, 2012). However, French researchers participated in the international development of computer networks, while the Internet was spreading as an open and appropriable technology (Russell, 2014; Abbate, 2001). INRIA was effectively connected to the US via TCP/IP in 1988, and in 1993 supported the launch of RENATER, the official French academic computer network (Schafer and Tuy, 2013). What happened in between? French Internet early adopters had previously experimented with the Unix-based UUCP-Usenet network, an unofficial French research network in the 1980s. Part of an international “matrix” of computer networks, UUCPnet joined in 1986 the family of Internet-compatible protocols (switching from UUCP to NNTP) (Quarterman, 1990). The French branch of this network, FNET, is an interesting case that prompts us to ask: how did an experimental infrastructure for an international communication computer network actually run in the context of institutional and ideological hostility?

    I intend to show, with this case study, that FNET was in fact a “shadow infrastructure”: an informal, unacknowledged network of machines, annexed to the existing French telecom network but experimenting with heterogeneous and decentralized network technologies; but also a network of peers, both administrators and users, mostly employed in French universities and research centers for other goals than running this network. Although the network side of Unix’s history is documented (Kelty, 2012; Salus, 1994), its local role in paving the way for the Internet in France has only been alluded to, though it remains strong in the protagonists’ memory (Griset and Schafer, 2012; Huitema, 1995). My study unravels the story of FNET from a science and technology studies (STS) perspective, using a socio-technical methodology for analyzing how this infrastructure was negotiated in the shadow of France’s national preference for the telecom model. I particularly looked into how FNET was managed in terms of technical and administrative infrastructure, an interesting issue given that the network had no official existence. My main sources are interviews with protagonists, archives from the Conservatoire national des arts et métiers (Cnam)’s IT department where FNET started, and emails from the protagonists found in Google’s Usenet archives or Cnam’s archives.

3B. Pushing the Limits

  1. Eileen Clancy (City University of New York), Abacus Computing in the Age of Electronics: Sekiko Yoshida and the Early U.S. Space Program
    Abstract: My paper explores the submerged contributions of geophysicist Sekiko Yoshida, a Japanese scientist working in the early U.S. space program. Her analysis of the cosmic ray experiment on Explorer, the first U.S. satellite, provided the scientific underpinning for James Van Allen’s discovery of the vast radiation belts surrounding the Earth. In the absence of a suitable electronic computer, Yoshida used models and techniques that she had learned in Japan to determine the motion of radiation particles in orbit around the Earth, performing the calculations on a Japanese abacus.

    The announcement of the discovery of the Van Allen belts in May 1958 was based on only a schematic understanding of the data. Satellites offered a brand-new vantage point from which to understand space, but also necessitated new techniques to grapple with the data they acquired. As Carl McIlwain, a graduate student on Van Allen’s team, said later, “We had no idea what was up there.”

    When Van Allen’s group stumbled in the analysis of Explorer’s data, Yoshida’s expertise and knowledge proved invaluable. Working at the University of Iowa, she used an abacus to calculate a spherical harmonic expansion (a complex mathematical function) that generated a more accurate representation of Earth’s magnetic field, and to compute cosmic ray values recorded onto miles of paper tape, mapping the distribution of charged particles in orbit.
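
    For readers unfamiliar with the mathematics, the geomagnetic potential is conventionally written as a spherical harmonic (Gauss) expansion of the general form below; the truncation and coefficient values Yoshida actually worked with are not specified here. In LaTeX notation:

      V(r,\theta,\phi) = a \sum_{n=1}^{N} \sum_{m=0}^{n} \left(\frac{a}{r}\right)^{n+1}
          \left[ g_n^m \cos m\phi + h_n^m \sin m\phi \right] P_n^m(\cos\theta),
      \qquad \mathbf{B} = -\nabla V,

    where a is the Earth's reference radius, the P_n^m are associated Legendre functions, and the g_n^m and h_n^m are coefficients fitted to observations. Every term of the double sum must be evaluated point by point, which conveys the scale of the hand computation involved.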

    Aiming to complicate our usual historical narratives of nation-states, Asif Siddiqi calls for new frameworks for a “global history of space exploration.” The method he suggests, looking at the “migration of people and knowledge across borders,” is equally applicable to understanding Yoshida’s contributions. Similarly, Corinna Schlombs has written about the necessity of bringing an international perspective to histories of computing to counter its “overgeneralization” and “American-centeredness.” Yoshida's experience as an expatriate woman also makes her story vital as an example of the intersectionality of gender, race, and national identity in the history of computing.

    Using published accounts of Yoshida’s life, scientific papers, and correspondence and interviews with former colleagues in Japan and the U.S., my paper illuminates the significance of Yoshida’s pioneering contributions and places them within the social and cultural circumstances that attached her to a moment in American history often portrayed in a spirit of triumphant nationalism. It also demonstrates how an Asian computing system thousands of years old contributed to a quintessentially "American" scientific accomplishment during the Cold War.
     
  2. Nicholas Lewis (University of Minnesota), Increasing the Yield: Nuclear Testing, Weapons Strategy, and Supercomputer Selection at Los Alamos in the 1960s
    Abstract: Since the summer of 2014, the collaborative research project between the High-Performance Computing Division at Los Alamos National Laboratory (LANL) and the Charles Babbage Institute has gained unprecedented access to LANL archival collections relating to the Lab’s long but largely unknown history of supercomputing.  This paper draws on materials from those archives to reveal previously unexamined aspects of the Lab’s supercomputing history as they relate to Cold War policy, competing visions of its computing future, and the place of Los Alamos in the IBM/CDC supercomputing rivalry of the 1960s.

    Advanced scientific computing has been critical to the Lab’s evolving mission since the Manhattan Project.  As a consequence, the supercomputers employed at Los Alamos bore directly on its ability to fulfill its responsibilities as a weapons laboratory.  This paper examines how the supercomputer selection process at the Lab in the mid-1960s provides a window onto the pressures, drives, and influences that shaped Lab computing, highlighting how those who selected the Lab’s most powerful computers perceived and imagined the role and future directions of Los Alamos during the Cold War.  

    Charged with finding a new supercomputer to relieve the Lab’s overburdened computing facilities, the computer selection committee envisioned vastly different computing demands than would exist only a few years later.  Although the committee initially predicted a gradual increase in demand, the late 1960s instead brought a wholesale expansion in computing power and use at Los Alamos.  This paper argues that changes in nuclear testing policy, the maturation of weapons design, and changes in weapons strategy greatly altered the computing demands at Los Alamos, spurring a rapid escalation in computing capacity that even Lab computing insiders could not have anticipated.

    While the works of Metropolis (1982), Voorhees (1983), MacKenzie (1991), and very few others have touched upon aspects of computing at Los Alamos and in the laboratory system, none has offered the level of access this paper provides to the inner workings and role of computing in a weapons laboratory during the Cold War.
     
  3. Devin Kennedy (Harvard University), What was "Real" about "Real-Time"?: Time and Responsiveness in Early Post-War Computing
    Abstract: Comparing the computer under development at the Institute for Advanced Study (IAS) to his own Whirlwind project in 1948, Jay Forrester contrasted IAS’s goal of “scientific calculation” with his computer’s “control applications.” Control applications, he noted, demanded reliable and continuously tested components, high-speed memory access, and strict adherence to specification to ensure integration with the military devices, industrial processes, and communications systems to which the machine would eventually be wedded. Electronic computers for scientific calculation, by contrast, could be pursued in a more “leisurely” manner, through “exploration” and with “nebulous” specifications. In short, IAS’s process was “step-by-step, in computer language, serial.”

    Unlike the “serial” computing research of IAS, Whirlwind was in design and application engaged with “real-time” problems. These problems were “real” in a twofold sense: first, that the computer would have to monitor, make predictions, and react to dynamic processes in a way that allowed it to participate in the world (by recommending tactical actions, or controlling industrial processes); and second, that the machine was not an experiment on the possibilities of general computing (as Forrester described IAS’s project) but a prototype of a “real” device that would be used in the defense of the nation.

    The development of real-time systems is a pivotal yet under-researched chapter in the history of computing. Drawing on published and archival sources, this paper discusses the origins and qualities of “real-time” computing from Project Whirlwind to the establishment of the IBM real-time computing program for satellite tracking in Project Vanguard. A “real-time system” referred all at once to hardware, software, and infrastructural components, as well as to an overall design practice. Real-time computing required reliable and high-speed components (such as memory and I/O devices, including modems and terminals), novel program forms (such as executive and supervisory systems and control priority tables), and new mechanisms for managing traffic on the phone and radio infrastructures connecting computers to their sites of action (radar stations, terminals). The need for speed and reliability also demanded practices of system and component testing that would be called upon as computing was extended to further mission-critical tasks, including the manned spaceflight program.
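
    The “executive and supervisory systems” and “control priority tables” mentioned above can be hard to picture in the abstract. The following minimal sketch, in modern Python, is offered purely as an illustration of the general pattern of a priority-driven dispatch loop; the class and task names are invented for this example and are not drawn from Whirlwind, SAGE, or Vanguard source material:

    import heapq
    import itertools

    class Executive:
        """Toy priority-table executive: pending tasks are dispatched most-urgent first."""

        def __init__(self):
            self._counter = itertools.count()  # tie-breaker so equal priorities stay in FIFO order
            self._queue = []                   # heap of (priority, sequence, task)

        def schedule(self, priority, task):
            # Lower numbers are more urgent, mimicking a control priority table.
            heapq.heappush(self._queue, (priority, next(self._counter), task))

        def run_cycle(self):
            # One pass of the dispatch loop: drain pending work in priority order.
            while self._queue:
                _, _, task = heapq.heappop(self._queue)
                task()

    exec_loop = Executive()
    exec_loop.schedule(0, lambda: print("process incoming radar return"))  # time-critical external input
    exec_loop.schedule(5, lambda: print("refresh operator display"))       # routine housekeeping
    exec_loop.run_cycle()

    The point of the sketch is only the ordering discipline: time-critical external events are serviced ahead of routine work, which is the “responsiveness” this paper takes as its subject.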

    This paper, a component of my dissertation on the history of conceptions of time in computing, contextualizes the quality of “responsiveness” and the temporality of “real-time” that these systems embodied, and tracks the spread of the idea of real-time from radar defense and process control to a more general engineering conception of “on-line” and “open systems” computing in the late 1950s.

3C. Works in Progress II

  1. Megan Finn (University of Washington), “I am so anxious to hear”: improvising information infrastructure
    [contact megfinn@uw.edu for a copy of the paper]
    Abstract: Information infrastructure researchers argue that infrastructure functions invisibly, as “enabling resources” for users, until it breaks down, whereupon it becomes visible. In this sense, disasters, to the extent that they are related to infrastructural breakdown, offer a compelling site for understanding infrastructure. Historians of disaster have suggested that disasters are particularly revealing research sites for understanding the everyday, or the “normal,” a line of thinking that dovetails with parallel insights from theorists of information infrastructure. Disaster researchers have debated whether disasters are products of the everyday workings of society and sites where understandings of the “normal” are rebuilt, or whether they are instead opportunities to witness the exceptional and to understand basic aspects of human nature. My work adds to recent disaster and information infrastructure research by demonstrating how disasters are opportunities to understand the “normal” qualities of information infrastructure.

    In this paper, I examine post-earthquake infrastructural practices at a moment when the large systems for circulating documents were broken. Specifically, I look at the 1906 San Francisco Earthquake and Fire, when people and institutions attempted to locate those who had been displaced after much of the city was destroyed. After the earthquake and fire, San Franciscans were scattered throughout the Bay Area. Key infrastructural technologies that facilitated the circulation of personal news, such as telegraph lines, printing presses, and post offices, were burned. Innovative infrastructures such as registration bureaus helped people reconnect with those they had lost in the post-earthquake evacuation of the city. However innovative, these workarounds to broken infrastructure relied on well-resourced organizations and well-honed information-related practices. Where physical telegraph infrastructure was destroyed, the bureaucratic work practices of the post office continued. Despite the destruction of all of the newspaper presses, newspapers remained the best way to quickly broadcast personal news. I show that when physical infrastructure broke, infrastructural practices endured, and that institutions with ample resources could most easily improvise information infrastructure and rebuild. From this perspective, I argue that we need to consider workarounds to, and repair of, information infrastructure from a political-economic perspective. Despite major disruption to the physical infrastructure, people were able to reconstitute aspects of the information infrastructure, lending support to a narrative of continuity.
     
  2. Rebecca Miller (Science & Technology Policy Institute), Communication of Disaster-Related Information
    [link to paper]
    Abstract: Americans have dramatically shifted their reliance on different information technologies over the past century. While radio and newspapers were the dominant forms of information distribution in the early 1900s, television became the primary source of news by the middle of the 20th century. Although television remains the most common medium for following the news, computers and cell phones have become major sources of information across the country.

    Modern disaster forecasting involves the use of Earth observations and geographic information systems to provide early warning that can reduce damage to homes and businesses and save lives and livelihoods. The United States’ ability to forecast and track natural disasters such as hurricanes and wildfires has improved, but communicating the importance of evacuation or preparation to residents remains a challenge.

    In this paper, I describe how the United States has historically sought to communicate disaster-related information to people affected by natural disasters. Drawing on written and digital communications, I track changes in the government’s use of different communication technologies, and I consider those changes in the context of America’s changing reliance on different forms of technology.
     
  3. Quinn Dupont (University of Toronto), Plaintext, Encryption, Ciphertext: A History of Cryptography and its Influence on Contemporary Society
    [link to paper]
    Abstract: I motivate the question of plaintext by situating Alberti’s fifteenth-century cryptography manual De Cifris within the new technology of movable type. According to Mario Carpo (2001), Alberti was the first “typographic” thinker, developing the conceptual properties of the printed alphabet (discreteness, indexicality) for his architectural and cryptographic work. Alberti stood at a crossroads: he revived ancient theories of mimetic representation but looked beyond them to a new mode of representation. In his day the ancient theory of mimesis was theoretically powerful but increasingly unable to fully explain the advances in cryptography and proto-computing devices. I trace the shifts in representation in cryptography from the Renaissance through to Modernity, focusing on the way cryptography related to language, especially the universal and perfect language planning that was popular in the seventeenth and eighteenth centuries. These new technological advances ushered in a “notational discourse network.” Drawing on the philosopher Nelson Goodman’s work, I call notation a special kind of writing that focuses attention on the identity requirements of sets of inscriptions. So-called “allographic” works are constitutive of their performance, in that plaintext is a set of marks that is “discrete.” As a “discourse network,” plaintext is a special kind of writing within the cryptographic system, designed to “select, store, and process relevant data” (Kittler, 1990). Plaintext is thus “nothing more” than typical alphabetic natural language, but when situated within the cryptographic system it takes on powerful new valences due to these conceptual advances. I suggest that we can, and should, study plaintext as codeworks, in the way that software studies has recently turned to socio-political and theoretical investigations of code.