Like any well-trained Ph.D. student, I have come to see my own discipline as the master discipline, upon which all other forms of knowledge are based. For instance, I have repeatedly pestered my fiancée, who works in math education, with the idea of teaching mathematics historically. What better way (I enthuse) to teach, say, imaginary numbers than to understand why they were invented in the first place: the historical context that led to their emergence? Still convinced (perhaps quite foolishly) that this is a brilliant idea, I have recently begun to think about how the same concept might apply in computing--how, that is, the history of computing might be used to teach computer science.
Back in graduate school I read feverishly in labor history, business history, the history of technology, social history, organizational sociology, and more in preparation for my oral examinations. My classes covered still more eclectic topics, ranging from a "greatest hits" of literary theory to nonparametric methods. Over the ten years since I physically left Penn, I've focused on an ever more specialized set of literatures, primarily the burgeoning field of the history of computing, which I know in ever greater depth. In general I've also been doing more writing and less reading.
A couple of weeks back I discussed Matthew Lasar's Ars Technica article on the invention of the PC. Lasar has done it again this week with an excellent piece on the surprising persistence of old technologies. Tech pundits, Lasar notes, are quick to declare a technology dead or obsolescent when the latest hot thing comes along:
WNYC's Radiolab is a show dedicated to making difficult scientific issues accessible and interesting for a popular audience. I sometimes assign segments of episodes to my history of technology students to reward them after particularly dry or difficult readings, so the recent episode on AI, called "Talking to Machines," caught my eye.
Not only does Sherry Turkle weigh in on Furbies, but the episode also hearkens back, again and again, to Turing's early questions about which human capacities computers can replace (or improve)--particularly how imitation can function as effectively as the "real" thing in cases we might not expect, as when an AI researcher unknowingly falls in love with a chatbot... twice. Over and over we see Turing's "polite convention that everyone thinks" being cautiously extended to hardware and software, often with surprising results.
There was an article in the New York Times recently summarizing the findings of scientists studying the effects of light on sleep/wake cycles. One of the most interesting findings, for historians of computing, was that the bright, bluish light put out by modern computer screens very effectively suppresses the body's ability to generate melatonin, and therefore to sleep well and regularly. Disturbed sleep, however, was not the only effect observed. In fact, the studies go on to describe how the suppression of melatonin production can lead to everything from mood disorders to obesity.
This raises the perennial question of how we live through our technologies, and how we are molded by them, not just socially and economically, but even at very basic biological levels.