2013 Workshop Abstracts

Session Details for 2013 SIGCIS Workshop
"Old Ideas: Recomputing the History of Technology"

9:00-10:30. Opening Plenary. Kennebec Room.

Introduction to the Workshop and the Keynote Speaker by Thomas Haigh, University of Wisconsin--Milwaukee (SIGCIS Chair and workshop organizer). thaigh@computer.org

Keynote Address, William Aspray, University of Texas at Austin, “In Search of the Many Histories of Information.” bill@ischool.utexas.edu

KEYNOTE ABSTRACT: Michael Mahoney has argued that there is no one master narrative of computing but instead there are many histories of computing. This talk is about my personal search to identify some of the many histories of information. For the past five years, I have worked as an historian in a school of information, and for half of that time I have served as editor of Information & Culture: A Journal of History. This talk will present a collection of fragmentary, sometimes autobiographical probes into the nature of the many histories of information. One topic to be covered concerns the relations between the histories of computing, the histories of information technologies, and the histories of information. The purpose of the talk is more to stimulate discussion than to give ready answers.

11:00-12:30. New Wine in Old Bottles? Tensions Between Computer Science and Traditional Disciplines. Kennebec Room.

Organizer: Janet Abbate, Virginia Tech. abbate@vt.edu
Chair: Chuck House, InnovaScapes Institute. housec1839@gmail.com
Commentator: Joseph November, University of South Carolina. NOVEMBER@mailbox.sc.edu

PANEL ABSTRACT: Computer science has been extremely influential both as an academic discipline and as the foundation for world-changing information technologies, yet historical scholarship explaining the tensions accompanying the development of this discipline remains scarce. The dual vision of the nascent science as either computer engineering or information science has been largely forgotten, yet it continues to rupture the field. This session brings together case studies from diverse national contexts to revisit competing notions of computer science as an academic discipline. Some of the “big questions” we address include: How has computer science been framed in terms of continuity or rupture with older disciplines? How have pre-established institutional structures and funding mechanisms affected the development and intellectual focus of computer science in different contexts? What has been the role of transnational scientific networks in establishing computer science as a global scientific endeavor, and how have old political tensions been embedded in this new discipline?

Janet Abbate describes debates over the nature of computer science in 1960s America, where the new field was constrained by older academic infrastructures that channeled research funding, faculty appointments, and other resources along established disciplinary lines. She argues that while computer scientists were successful in creating new infrastructural supports such as independent departments, standard curricula, and targeted funding streams, they found it much harder to redraw the boundaries of disciplinary identity. Pierre Mounier-Kuhn describes tensions with established disciplines, particularly mathematics, that shaped the construction of computer science in France. These were combined with tensions between competing agendas among computer scientists, often determined by these pioneers’ different links with traditional disciplines. Moreover, tensions arose between computing as a technical service provided to university scientists, and an emerging "computing science" with theoretical aspirations.

Finally, Irina Nikiforova compares the development of computer science journals in the United States and in Russia in the Cold War 1960s. Academic journals, being products of national systems of knowledge production, are also influenced by developments abroad through transnational scientific networks. However, these influences did not prevent the development of two alternative notions of computer science in the local institutional contexts.

Janet Abbate, Virginia Tech, “Old Disciplines and New Infrastructures: Constructing Computer Science in the 1960s.” abbate@vt.edu

PAPER ABSTRACT: “After growing wildly for years, the field of computing now appears to be approaching its infancy.” These opening words from the 1967 Pierce Report by the US President’s Science Advisory Committee paradoxically describe computer science as both old and new. Though the field had roots in the older disciplines of mathematics and electrical engineering, by the early 1960s computer scientists were establishing their own university departments and asserting a new—but highly contested—academic identity. While the Pierce Report and the 1966 Rosser Report of the National Academies championed computer science as an independent research area, others preferred to see computing remain a subset of mathematics or a tool for engineering. These distinctions were of more than rhetorical import, for the new field was constrained by older academic infrastructures that channeled research money, faculty appointments, and other resources along established disciplinary lines.

This paper provides a new perspective on early computer science by linking rhetorical discipline building—the boundary-drawing debates over the nature and scope of computer science—to infrastructural efforts. Computer scientists realized that in order to be recognized as an autonomous discipline, they needed to establish structural supports such as university departments and degree programs, specialty journals and conferences, and targeted funding programs. This paper describes two such infrastructural efforts of the 1960s: the ACM’s effort to create a standard curriculum (Curriculum 68) as a basis for computer science degree programs, and the NSF’s establishment of the Office of Computing Activities to fund research and education in computer science. I demonstrate how these efforts were based on, and helped to institutionalize, particular views of computer science that challenged existing disciplinary boundaries and status hierarchies. I argue that while computer scientists were successful in creating new infrastructures such as independent departments, standard curricula, and dedicated research funding, they found it much harder to redraw the boundaries of disciplinary identity. In particular, computer science was never accepted as a true science, despite its tremendous growth as an academic field. Sources for this paper include US government reports, curriculum documents, articles and editorials in the professional literature, and archival records of ACM and NSF.

Pierre Mounier-Kuhn, CNRS & Université Paris-Sorbonne, “‘Une Science Encore Incertaine’: The Emergence of Computer Science in France (1955-2000).” mounier@msh-paris.fr

PAPER ABSTRACT: How was computing constructed as a science? In the case of France, its emergence may be sketched out as a movement of divergence from mathematics, combined with a movement of convergence with other dynamics, and shaped by a set of tensions.

While no computer was completed in French academic or public laboratories during the 1950s, computing pioneers struggled to develop numerical analysis and assert its legitimacy in universities, supported by non-academic allies. Computers were acquired as tools for this low-status sub-discipline from the mid-1950s.

Toward 1960, new, non-numerical applications of computers were investigated, such as language translation or information retrieval, while the quest for better programming methods led to R&D on languages, then on operating systems. These various research programs broadened the computing field, as they called for diverse branches of mathematics, particularly algebra and logic. In the same move, computers and information structures became attractive topics for diverse intellectual agendas, from formal linguistics to graph or information theory. Computing thus began to gain autonomy from “applied mathematics”, as a few militants endeavored to promote it as a new discipline.

The institutionalization of computer science was a controversial effort, shaped by tensions:

• Tensions with established disciplines, particularly mathematics;
• Tensions between computing as a service provided to scientists and informatique as a research field of its own;
• Tensions between competing agendas among computer scientists.

Decisive steps were the creation of specific laboratories, curricula and diplomas from the mid-1960s, then the establishment of informatics committees at the national level in the academic system in the mid-1970s. Yet it was not until 2000 that computer scientists, properly speaking, were elected as full members of the Académie des Sciences, or that the CNRS created a full-fledged Informatique department.

This paper builds on previous research and presents the results of two investigations conducted in 2012. One describes the academic computing facilities and their separation from computer science; the other reveals the slow process of convergence between mathematical logic and computer science—far from the linear filiation often asserted in computing literature. My main sources are the archives of the CNRS and of a dozen universities, as well as the records of the Ministry of Education, along with a series of oral history interviews.

Irina Nikiforova, Higher School of Economics (Saint-Petersburg), "Competing Visions of New Science: Computer Science Journals in the US and Russia, 1945-1970." Irina.Nikiforova@gatech.edu

PAPER ABSTRACT: Topic: The creation of new journals is an important event for an emerging discipline, signaling its maturity and intellectual development. New journals dedicated exclusively to computer science appeared in both countries, the United States and the Soviet Union, in the 1950s. The paper explores the development of academic literature in computer science in the United States and in Russia during the emergent years of computing, the 1940s through the 1970s (also called the “Golden Age” of computing in Russia). The comparative perspective helps to juxtapose the development of computer science in two very different cultural contexts.

Argument: Ideas about the development of information and computer research could not have been more different in these two countries from the late 1940s to the 1970s. While in the United States the cybernetics fever passed rather quickly, in the Soviet Union these ideas remained popular for a number of years and an alternative vision of computing prevailed. The differences in the vision of computer science relate to differences in the organization of science, education, and industry, and in the mode of knowledge production.

Evidence: The paper uses bibliometric data on US journals published under the “computer science” category, retrieved from the Web of Science, and on journals covering computer science topics published in Russia, retrieved from eLibrary.ru, to compare the organization, impact, and particularities of publications in computer science.

Contribution to Existing Literature: The paper reports on the early stages of a project that retraces the development of computer science in Russia through the analysis of academic journals. The development of computer science in Russia is largely an unstudied area of research. Although a few sources (Trogemann & Ernst, 2001) help to reconstruct the timeline and main actors, going beyond those accounts remains a desirable direction. Retracing the development of computer science in countries beyond the United States is an important step toward an international history of computer science (Edwards, 2001; Schlombs, 2006).

11:00-12:30. Old Ideas on Control and Communication. Lincoln Room.

Chair: James Gallo, Science & Technology Policy Institute. jgallo@ida.org

Commentator: John Laprise, Northwestern University in Qatar. j-laprise@northwestern.edu

Julie Cohn, University of Houston, “‘The old was analog. The new was digital’: transitions from the analog to the digital domain in electric power systems.” cohnconnor@comcast.net

PAPER ABSTRACT: While electric utilities adopted the use of analog computing apparatus for power system control before World War II, many made a very slow transition to digital computing after the war.  Some continued to rely on analog machines into the 1980s, despite the obvious advantages of speed, sophistication, and power offered by newer digital technologies.  This raises a number of questions about technical innovation, computers, and industry adoption.  For example, did the industry choose to fully amortize invested capital before switching to new apparatus?  Were 1950s decision-makers less visionary than their forebears?  Was the newer technology simply more expensive? The evidence suggests a different explanation:  the older analog machines conferred benefits unique to this industry that the newer digital computers simply could not match.

In the late 1930s, power industry professionals eagerly embraced the network analyzer, an analog computer, not only because it accomplished rapid calculations, but also because it provided an exact model of electric system behavior.  During World War II, this machine was a valuable tool for modeling the effects of new interconnections. By the 1950s, many utilities used a variety of analog computers to analyze power flow and manage load distribution. Electrical manufacturers began to produce digital computers for the power market, emulating the analog machines then in use, but additionally able to calculate a wider array of data with greater speed.  Oddly, the utility industry resisted a rapid transition to digital computing. According to industry engineers, digital computers fell short of network analyzers in displaying system behavior.  Later in the century, digital models became sophisticated enough to satisfy the needs of the utilities. But even after the last analog computer was discarded, system engineers reminisced about the good old days of the network analyzer.

This paper will address the apparently fickle behavior of the utility industry toward computers during the mid-century. Contemporaneous trade journals, conference proceedings, and professional publications document a heightened interest in the high-tech, big data challenges of increasingly complex power networks. In addition, corporate papers and engineers’ private collections lend insight into utility concerns. These materials reflect a growing controversy regarding analog and digital computing in the 1950s and 1960s. This discussion will explore the differences between analog and digital computing machines, and the ways in which the new could not supplant the benefits of the old at that time.

Christopher Leslie, New York University, “A Missing Link: Placing International Teleprinter Networks into the Prehistory of the Internet.” cleslie@poly.edu

PAPER ABSTRACT: One of the frequently repeated anecdotes about the history of the Internet is told at the start of Where Wizards Stay Up Late, where Bob Taylor was irritated by the three proprietary terminals in his office. Hafner and Lyon indicate that this was a motivation for Taylor to direct the IPTO to develop ARPANet, which would create a common interface that could be used to access different services. These were not computer terminals as we might think of them today, however, and two of them were Teletypes. The Teletype, the easiest way to turn text into electronic signals at the time, proliferates in the prehistory of the Internet. Intelligence operators listening for Teletype signals during World War II heard something they could not decode, leading to the realization that the Nazis were using Enigma machines. Later, Michael Hart typed the Declaration of Independence into a Teletype machine in 1971, creating the first document of what would become Project Gutenberg. Dreams and demonstrations of the power of international communication enabled by Teletype, which had been fairly well stabilized before the first electronic computers were operational, lay in the background of the desire to create a web of information services.

Given the proliferation of these machines, not to mention the way that the "store and forward" technique helped to gain support for the packet switching concept, the lowly Teletype deserves more attention than it has received in the histories of the Internet. In many texts, such as Abbate’s Inventing the Internet, teletypes are briefly mentioned, but the extent of the network and the role it played in international data exchange are not often appreciated. The Teletype network specifically, and teleprinters generically, provide a hidden continuity between the idea of global communication at the start of the 20th century and the development of the worldwide Internet toward its end. In this paper, which is based on manuals, news reports, patents, and other documents, I describe how Teletype represents the cultural imperative for rapid and simple worldwide communication between individuals. This dream would be more effectively realized by the Internet, but it was present in the hopes and dreams of individuals who worked on the Teletype network.

Joy Rankin, Yale University, “The Time-Sharing Movement: Building Educational Computing Networks in Minnesota, 1965-75.” joy.rankin@yale.edu

PAPER ABSTRACT: During the 1960s and 1970s, Minnesota led the United States in its implementation of interactive computing at its public schools, colleges and universities through the creative deployment of timesharing systems – networks of teletype terminals connected to computers via telephone lines. One such network was Total Information for Educational Systems (TIES), launched in 1967 as the cooperative venture of 18 Twin Cities area school districts. By 1975, TIES had grown to serve over 240,000 students in 44 school districts.  This paper examines TIES as a visionary project that cultivated people as the crucial component of a vibrant information network, and it explores how the students and educators of TIES experienced interactive computing. I argue that the people who built TIES acted together as a social movement. I have studied this organization’s archival records and conducted oral history interviews to emphasize the techniques that TIES used to nurture its network, including local coordinators, member newsletters, and attention to geography in those newsletters. TIES personnel capitalized on the physical connections provided by timesharing telephone lines to foster a sense of togetherness – a virtual community – for their users.

This paper emphasizes the classroom as a rich site of inquiry, thereby drawing attention to the important but little studied area of the history of technology in education. The historian of technology Steven Lubar cogently declares, “We have downplayed the skill and knowledge required by users of technology, looking at the machine and not the task, looking for complex systems on the production side, not on the consumption side.”*  In the case of interactive computing, students and educators were some of the earliest groups of users, and they developed complex systems around time-sharing. I explain how teachers regulated computing access, how students became knowledge producers, and how school system and TIES administrators supported their grand computing experiment. Altogether, these Minnesotans created personal computing before personal computers.

*Steven Lubar, “Men/Women/Production/Consumption,” in His and Hers: Gender, Consumption, and Technology, ed. Roger Horowitz and Arwen Mohun (Charlottesville: University Press of Virginia, 1998), 20.

2:00-4:00. Work in Progress. Kennebec Room.

  • Session Leader: Andrew Russell, Stevens Institute of Technology

Bernadette Longo, New Jersey Institute of Technology, “Giant Brains, or Machines That Think.” (Draft book chapter)

CHAPTER FULLTEXT: online here for discussion during the session.

CHAPTER ABSTRACT: Edmund Berkeley (1909–1988) was a mathematician, insurance actuary, and a founder of the Association for Computing Machinery. In the 1940s, he envisioned a world where computers would lead to improved social systems, helping people make better decisions about questions involving large groups of people. To realize this vision, Berkeley decided that ordinary people needed access to electronic computers and the know-how to use them themselves, in the do-it-yourself tradition of amateur radio operators or science hobbyists with personal workshops. Berkeley’s initial contribution to this popularizing effort was authoring what some call the first computer book for general readers, Giant Brains, or Machines That Think (Wiley, 1949). He also edited and published Computers and Automation, the first journal for computer professionals (1953-1973).

In an era of room-sized mainframe computers, Berkeley envisioned a personal computer that was “closer to being a brain that thinks than any machine ever did before 1940.” By October 1946, Berkeley had a contract with Wiley for Giant Brains, or Machines That Think, which, for the first time, would explain the workings of electronic computers to people who were not “computer people.” Berkeley began the book by explaining how computers and human brains function in similar manners. He then described the design of a “mechanical brain” and reviewed characteristics of machines that existed in the late 1940s. The last chapters of the book looked forward to future developments in the field, ending with “Chapter 12 – Social Control: Machines that Think and How Society May Control Them.” In this chapter, Berkeley argued that in order to be “of true benefit to all of humanity,” people needed to implement systems of control over these thinking machines. Because these mechanical brains had potential both to help humans and also to become robotic weapons, people needed to establish social systems to ensure that these devices were used for peaceful purposes.

I am currently completing a biography of Edmund Berkeley, which covers his work both as a computer developer and as a social activist. I am now drafting Chapter 7 of eight (or nine) chapters and will have a draft of this chapter on Giant Brains prepared for review and discussion at the SIGCIS workshop in the fall. I would like to know whether readers find the connections between computer development and social responsibility to be clear, convincing, interesting, and relevant.

Trevor Croker, Virginia Tech, “Cloud Computing and the Physicality of the Internet.” (Dissertation in Progress)

DISSERTATION PROPOSAL FULLTEXT: online here for discussion during the session.

DISSERTATION ABSTRACT: For my dissertation, I intend to investigate the history and current state of cloud computing, with attention to the importance of physical geography in discussions of modern communications networks, primarily the Internet.

Distributed and ubiquitous computing increasingly relies upon “cloud” services, platforms, and infrastructures in order to function. My dissertation will examine how the language of the cloud was introduced and what the implications of this metaphor are for contemporary computer systems.

My proposal highlights the physical as an important site for consideration when interpreting and developing digital networks. My core argument, that the physical is an important site of investigation for modern communication networks, arises from the notion that cloud computing systems exist in real, concrete places that are subject to the constraints of physical geography. Secondarily, I argue that the metaphor of the cloud has unique implications for understanding how different individuals and groups manage, create, and use different communication systems.

The methodological and theoretical backings of my project are interdisciplinary. In addition to computer historians’ scholarship, much of my theoretical framing will be drawn from the field of science and technology studies, as well as the history of technology.

My dissertation is still in an early form, so my hope is that the SIGCIS workshop would give me the opportunity to refine my research questions and methodological approaches. I have included a rough sketch of my proposed dissertation chapters.

Chapters

I.          Introduction: Cloud Computing and Literature
a.         Literature Review
II.         Historical Origins of Cloud Computing
a.         Creation of the term
b.         Development of the term

III.        Cloud Meets The Ground: Geography Case Studies
a.         Case Study #1: 2008 Submarine Cable Outage
b.         Case Study #2: TBA
c.         Case Study #3: TBA

IV.        Conclusions

Jacob Gaboury, New York University, “Image Objects: Computer Graphics at the University of Utah.” jacob.gaboury@nyu.edu

PAPER FULLTEXT: online here for discussion during the session. Also supporting graphics in low (2MB) and high (13MB) quality versions.

PAPER ABSTRACT: In the early 1960s a transformation was taking place in the emerging field of computer science. It was a shift away from procedural mechanization, and toward a dynamic field of interactive objects. This transformation took place at multiple levels, from the design of programming languages to new forms of modeling and representation through computer graphics and visualization. At the center of these transformations were a handful of corporate and university research centers funded largely by the Information Processing Techniques Office of the Advanced Research Projects Agency. Principal among these was the recently founded computer science division in the University of Utah's school of engineering, whose focus was the field of computer graphics research and design.

From 1965 to 1979, almost all fundamental principles of computer graphics were conceived and developed at Utah, including raster graphics, frame buffers, graphical databases, hidden surface removal, texture mapping, object shading, and more. Many graduates went on to become industry leaders in the field of computing. The founders of Pixar, Adobe, Silicon Graphics, Atari, Netscape, and WordPerfect were all graduate students at Utah during this period. Still others would go on to found influential research institutions and production houses at Xerox PARC, the New York Institute of Technology, LucasArts, and Industrial Light and Magic. The influence of the Utah program on the contemporary field of computing is massive, yet almost no historical research has been devoted to its innovations.

This project traces Utah's influence through the early history of the department and its role in a broad shift that begins with graphics, but comes to transform the whole of computer science. Through original archival research and oral history I trace the emergence of the early computing industry and the transition toward object-oriented design paradigms modeled on graphical interaction. I argue that the simulation paradigm first utilized by early graphics reaches far beyond this field, and that its lineage can be traced through a genealogy of influence that includes many key figures in the modern history of computing. Ultimately I show that it is through graphics that computing transforms from a technical process for procedural description and execution into a medium with a unique ontology, one that is oriented toward objects both real and virtual.

Thomas Haigh, University of Wisconsin--Milwaukee, "Actually, Turing Didn't Invent the Computer" (draft Historical Reflections column for Communications of the ACM).

COLUMN FULLTEXT: online here for discussion during the session.

COLUMN ABSTRACT: This is one of a series of “Historical Reflections” columns I’ve been contributing to Communications of the ACM. CACM goes to all 100,000 or so members of the Association for Computing Machinery. After getting very dull it remade itself as a glossy Scientific American style publication, with more focus on viewpoints, computing practice, and reviews and less on the presentation of technical research. I focused on Turing here because computer scientists and the public (particularly in the UK) have developed a wildly overblown sense of his influence on the invention of the computer. Addressing this in 3,000-4,000 words is challenging. Like my other columns, this tries to give people a sense of what historians actually do and how we approach the past, rather than just telling historical anecdotes. So as well as the content of the column itself (which should appear in the Jan 2014 issue), this also gives us a chance to talk about the contribution the SIGCIS community can make by engaging with broader publics, including the computer science community.

2:00-4:00. Old Ideas and New Technologies. Lincoln Room.

Chair: Lars Heide, Copenhagen Business School
Commentator: Steven W. Usselman, Georgia Tech

Barbara Hahn, Texas Tech University, “Punch Cards and Industrial Control: Old Devices with New Relevance.”

PAPER ABSTRACT: The links between the Jacquard loom (ca. 1801-1804) and the history of computing are well established. In the first case, sequentially ordered punched cards directed a loom to weave patterned cloth—damasks and brocades, for example. Later, the Hollerith machine (and then the punched cards used for data entry and programming) mirrored that innovation. This paper re-examines the connection between the punched cards of Jacquard looms and those of tabulating systems. It compares the structures that shaped labor and capital provisions in the textile and insurance industries, and the impact of new devices in both periods. It also addresses the changing meaning of the word “information” as it related to changes in artifacts.

Mary E. Hopper, Digital Den Inc., “Wisdom from Athena: A Paradigm for Precognition.” mehopper@mehopper.net

PAPER ABSTRACT: Academic computing organizations at the Massachusetts Institute of Technology, Brown University, and Carnegie Mellon University invested in projects to develop advanced distributed computing systems in the early 1980s. Each of the projects resulted in important technical developments, but MIT’s Project Athena was the largest and most influential. X-Windows and Kerberos were among the many successes. There were also a number of less well known impacts, such as a direct influence on Steve Jobs’s vision for NeXT as well as on Tim Berners-Lee’s creation of the World Wide Web. However, Project Athena was more than simply a successful technical feat that played a key role in the development of the distributed computing paradigm. Its success also stands as evidence for the value of using a concurrent, application-driven software development model for improving and shortening development cycles within advanced computing projects. More importantly, case studies of Project Athena show that each phase of its evolution foreshadowed specific social and economic developments during the spread of distributed computing in the form of the Internet and the World Wide Web ten years later. This demonstrates that case studies of advanced technology projects that use a concurrent development model can be invaluable for predicting the broader impacts on society before the technologies are disseminated.

Rebecca Elizabeth Skinner, “The Impasse and the Breakthrough: The Pregnant Pause of the Early 1950s, and the Birth of Artificial Intelligence Computing.”

PAPER ABSTRACT: The 1950s were an odd decade during which the computational metaphor of intelligence as information processing, on which 20th-century cognitive psychology and Artificial Intelligence computing rest, did not yet exist.

Metaphors of intelligence and understandings of the computer were both variegated and highly fluid. Intelligence was seen in wildly different ways according to information theory, cognitive psychology, and Cybernetics; none of these contributed to the problem-solving view of intelligence which would benefit early AI and computer science. Information theory focused on the transmission of information, as a sort of railway station interchange, rather than on the qualitative semantic nature of the data being processed. Cognitive psychology was in its infancy in the early 1950s, and could not inform computer researchers as to the nature of problem-solving, memory, or the structure of creativity. Typically, psychologists understood learning through Behaviorist models of conditioning or the Cybernetic concept of feedback. These were intrinsically indirect approaches: human problem-solving would soon be understood through clinical psychological testing, using information processing as a model. Finally, Cybernetics lacked a research agenda for basic work in departmentalized disciplines. This had never been its goal, as it was a meta-science intended to provide a high-level terminology for discussing traits of intelligence in both machinery and humans.

In the realm of computing, metaphors such as the computer as a ‘giant brain’ (proffered by ACM founder Edmund Berkeley) were singularly unhelpful. The idea of programs that used computer languages for problem-solving was not yet consolidated. This was a crucial roadblock, as this concept was essential to the development of standardized computer languages used in increasingly uniform hardware platforms. Thus, early ideas about computing’s potential, in the form of ‘thinking machines’ (Turing’s term), complex information processing (Newell and Simon’s term), or AI as we know it, could not yet be aided by psychology or by computing science such as it was.

The author seeks to describe this fluid period, and to show how the development of computer languages, von Neumann’s and Turing’s contributions to the formation of AI, and Newell, Shaw, and Simon’s project at the Systems Research Laboratory all helped to lead to the Dartmouth Conference and the foundation of AI.

Ulf Hashagen, Deutsches Museum, “The Computation of Nature, Or: Does the Computer Drive Science and Technology?” u.hashagen@deutsches-museum.de

PAPER ABSTRACT: It has often been claimed that the computer has not only revolutionized everyday life but has also affected the sciences in a fundamental manner. Even in national systems of innovation which had initially reacted with a fair amount of reserve to the computer as a new scientific instrument, it is today a commonplace to speak about the “computer revolution” in the sciences. In his path-breaking book Revolution in Science, Cohen diagnoses that a general revolutionary change in the sciences followed from the invention of the computer. While he asserts that the scientific revolution in astronomy in the 17th century was not based on the newly invented telescope but on the intellect of Galileo Galilei, he maintains in contrast that the “case is different for the computer, which [...] has affected the thinking of scientists and the formulation of theories in a fundamental way, as in the case of the new computer models for world meteorology”. Although the history of computing has been established as a sub-discipline of the history of technology during the last decades and has contributed to a better understanding of the development of hardware and software as well as of the advent of the information age, there are still large gaps in our knowledge of the history of “scientific computing”. There are only a few studies that have contributed to our understanding of the use of computers in the many fields of science and/or research institutions. The paper aims to outline a research programme that will hopefully help historians of science and technology as well as scientists and engineers in their understanding of Cohen’s assertion: By what means and to what extent does the computer change science and technology?

4:20-5:50. An Ancient Continent as a New Frontier: Discovering that Computing has a History in Asia (Closing Plenary). Kennebec Room.

  • Chair: Jeffrey Yost, University of Minnesota (Charles Babbage Institute)
  • Commentator: James W. Cortada, University of Minnesota (Charles Babbage Institute)

Ross Bassett, North Carolina State University, “Rethinking the Victorian Internet: The Mahratta and the Rise of Technological Nationalism in Poona, India, 1881-1901.” ross@ncsu.edu

PAPER ABSTRACT: Information technologies have been implicated in revolutionary social movements from the Reformation to Tahrir Square.  This paper uses an analysis of the early Indian nationalist newspaper, the Poona Mahratta, to argue that one component of the Mahratta’s nationalism was a technological nationalism that owed a great debt to a Victorian global informational environment that was in some ways analogous to today’s World Wide Web.

This paper uses Richard John’s concept of an informational environment to argue that at a time when few Indian Brahmins traveled overseas, the Victorian informational environment allowed the Mahratta’s editors to describe global events and trends to their readers.  One key element of this informational environment was a system that produced a great volume of print information, particularly in Europe and America:  newspapers, journals, books and reports.  But as important was a system of distributing information consisting of newspaper exchanges and mail rapidly delivered by steamship that allowed the Mahratta to access information produced in Britain and the United States.

This highly networked system enabled the Mahratta to connect its readers with an informational web, which took them a great distance from Poona.  Mahratta readers learned about the latest works of Thomas Edison and Alexander Graham Bell.  They read about the organization of the Massachusetts Institute of Technology.  They read about advice to American businessmen published in the Confectioner’s Journal.  They read about British fears of being overtaken by American and German industry.

The Mahratta’s editors then used this information to create a technological nationalist argument for India, claiming that Germany and the United States showed that nations rose and fell and that a system of technical education along with a proper entrepreneurial spirit could transform India.

Ramesh Subramanian, Quinnipiac University, “Old Ideas: BBSs and the Emergence of Online Communities in India.” ramesh.subramanian@quinnipiac.edu

PAPER ABSTRACT: The Internet arrived in India in 1988 through a joint Government of India-UNDP project named ‘Education and Research Network’ (ERNET). Access was limited to researchers at ten elite academic and research institutions and a few ERNET staff. The Indian public, especially computer enthusiasts, was left starved of access to network technology. It was under these limiting circumstances that Bulletin Board Systems (BBSs) first made their appearance in India in 1989, thanks to a few pioneers.

Computer enthusiasts whetted their appetites by flocking to these new BBSs. In doing so they created an online ecosystem that would mirror current social networks in sheer enthusiasm, participation, variety, and range of topics. There was even a women-only BBS.

These early BBSs in India are interesting and important for several reasons. First, unlike in the U.S. and other advanced countries, India’s BBSs followed the arrival of the Internet rather than preceded it. Second, BBS pioneers had to overcome restrictive government technology policies, under which even 1200-baud modems were unavailable, a victim of import restrictions (in some cases, users had to smuggle modems into the country using ingenious methods). Third, telephone connections were extremely restricted (typical wait times for a connection being six years), often forcing BBS ‘sysops’ to use a single phone line for residential as well as BBS use. Fourth, even with a telephone connection and a modem, it was illegal to connect the two without cumbersome permissions from the Department of Electronics. Fifth, the sysops had to run their BBSs on old, indigenous PCs with little memory and non-standard versions of DOS.

However, the Indian BBS pioneers found ingenious workarounds to these impediments, and their BBSs played an important role in fostering vibrant online communities which remained unrivaled until 1995, when public Internet access finally became available. The BBSs were a platform for online communication, collaboration, discussion, and learning. They were repositories for hundreds of downloadable software files and utilities. Interestingly, even those with ERNET connections used these BBSs to learn about computer networks, write shell programs, and use UNIX commands.

The thriving BBSs of the 1990s demonstrate a deep historical continuity with today’s flourishing social networks in India. This paper extends Jason Scott’s history of BBSs to document India’s early experience with them. It is a compelling story of political economy, technology policy, access to basic technologies such as telephones and modems, and intrepid self-styled “geeks” driven to create networked communities.

Ling-Fei Lin, Cornell University, “The Origins of Laptop Contract Manufacturing in Taiwan and the Transnational Learning Years, 1988-2001.” ll289@cornell.edu

PAPER ABSTRACT: Facing low growth rates and low profit margins, the once glorious computer industry in Taiwan has become a target of attack in recent years. Its firms are blamed for sticking to the “thinking of contract manufacturing” and for not engaging in higher-value activities such as innovation and brand marketing.

This “enormous condescension of posterity” reflects a paradoxical history in Taiwan’s computer industry: it was one of the major growth engines of Taiwan’s economy between the late 1970s and early 2000s, but since most of its firms have been contract manufacturers (CMs) for brand companies, they have been invisible and devalued to customers, outsiders, and even later generations. My project aims to explore how Taiwanese CMs produced not only machines but also innovations and unique knowledge that helped consolidate laptop design and manufacturing in Taiwan’s firms.

Specifically, I will explore the development of the early laptops and the early encounters and knowledge exchanges between engineers of Taiwan and those of the United States and Japan, mainly through the observation of Taiwanese producers. This exploration shows the encounter between the globalization of goods and the localization of knowledge production in laptop history.

In 1988, Quanta managed to design and make its first laptops by using existing parts and components from desktops, and successfully gained orders based on this trial-and-error model. In 1991, Acer (now Wistron) learned how to design and manufacture laptops by cooperating with Japanese engineers. I argue that in the process of contract manufacturing they were not passive followers, but active actors whose engineering innovations helped shape today’s information society full of relatively cheap products.

This project will engage with two bodies of literature. First, it will contribute to the history of computing by emphasizing the role of CMs. In the literature (Edwards 1996; Ceruzzi 1998; Campbell-Kelly & Aspray 2004), there is little attention to the roles of manufacturers, not to mention those of CMs. Second, it will contribute to the themes of tacit knowledge (Polanyi 1958; Collins 1992; MacKenzie 1996), local knowledge (Wynne 1996; Epstein 1996), and how knowledge travels (Latour 1987).

This research is based on interviews and archival research. Two of the three largest laptop CMs in Taiwan today (also in the world), Quanta Computer and Wistron Corporation, are the research subjects.

By making visible the role of CMs as an important intermediary between ideas and the material world, I believe the project will deepen the understanding of both the relations of production in the form of contract manufacturing and the important transnational exchange of knowledge among computer engineers from different countries and different cultures.