
The Whorf Hypothesis

Lieutenant Worf in 2366

No, no, we’re not talking about the big tough guy from Star Trek. Instead, we’re taking a little detour from what we usually talk about to consider the applicability of linguistic relativity, also referred to as the (Sapir) Whorf Hypothesis. (I put Sapir’s name in parentheses because he was an influential teacher and mentor of Whorf, but they never collaborated on the hypothesis.)

Benjamin Lee Whorf’s Hypothesis “is the idea that differences in the way languages encode cultural and cognitive categories affect the way people think, so that speakers of different languages will tend to think and behave differently depending on the language they use.”

Benjamin Whorf

A minor example of this: the Spanish translation of the English word “parent” is “padre,” which is the same word used for “father.” How could a teacher send home a gender-neutral letter to the parent(s) of one of her students if the Spanish word for parent is also the word for father? Another example would be trying to describe or explain snow in the native language of someone from New Guinea, Vanuatu, or another equatorial country where the temperature never drops below 75 degrees.

Benjamin Whorf graduated from MIT in 1918 with a degree in chemical engineering and had a long, successful career in fire prevention. In addition to his work preventing fires, in 1925 he began studying the Nahuatl (Aztec) language and Mayan hieroglyphics. By 1930 he was considered a leading name in Middle American linguistics, which prompted him to travel to Mexico and to publish more papers on Nahuatl and Mayan. After 1931, Whorf took classes from Sapir, who had a significant influence on his thinking about language. Whorf’s health began to decline in 1938 because of cancer, and he passed away in 1941.

The Wikipedia biography of Whorf includes this anecdote:

“Another famous anecdote from his job was used by Whorf to argue that language use affects habitual behavior. Whorf described a workplace in which full gasoline drums were stored in one room and empty ones in another; he said that because of flammable vapor the “empty” drums were more dangerous than those that were full, although workers handled them less carefully to the point that they smoked in the room with “empty” drums, but not in the room with full ones. Whorf explained that by habitually speaking of the vapor-filled drums as empty and by extension as inert, the workers were oblivious to the risk posed by smoking near the “empty drums”.”

So what does this have to do with parallel programming? The point here is that when we talk about parallel programs and what is going on under the hood, we have to use programming words (fork, join, lock, synchronize, etc.) to describe the parallel activities, which constrains or limits how we think about the actual parallel activities that we want to happen. Our thinking is limited to just the artificial constructs provided by the programming languages.
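To make that concrete, here is a minimal sketch of my own (in C with POSIX threads; it is not from the original post, and the task, worker function, and variable names are invented for illustration). Even the trivially parallel idea “have four workers each add their piece to a running total” has to be phrased in the language’s own vocabulary of creating (forking) threads, locking, unlocking, and joining:

/* Illustrative sketch only: a trivial parallel sum expressed in
 * fork/join/lock vocabulary rather than in terms of the activity itself. */
#include <pthread.h>
#include <stdio.h>

#define NUM_WORKERS 4

static long total = 0;                                  /* shared result */
static pthread_mutex_t total_lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg)
{
    long id = (long)arg;
    long partial = (id + 1) * 100;                      /* stand-in for real work */

    pthread_mutex_lock(&total_lock);                    /* "lock" */
    total += partial;                                   /* critical section */
    pthread_mutex_unlock(&total_lock);                  /* "synchronize" */
    return NULL;
}

int main(void)
{
    pthread_t workers[NUM_WORKERS];

    for (long i = 0; i < NUM_WORKERS; i++)              /* "fork" the workers */
        pthread_create(&workers[i], NULL, worker, (void *)i);

    for (int i = 0; i < NUM_WORKERS; i++)               /* "join" them back */
        pthread_join(workers[i], NULL);

    printf("total = %ld\n", total);                     /* 100+200+300+400 = 1000 */
    return 0;
}

Nothing in that code says “four workers each contribute a piece”; it says create, lock, unlock, join, and the intended parallel activity has to be inferred from the mechanics.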

Much like Whorf’s Hypothesis, the Avian Hypothesis is that the programming language determines the actual program behavior, what can be done, and even how one can think about the program’s actions. By changing our thinking about parallel activities to more natural scenarios, such as flocks of birds, it becomes easier to think about what we want done without the constraints of what the programming language allows us to think about.

Review of Multiprocessors and Parallel Processing (1974)

I recently came across a book that I had rescued a long time ago from the discard pile of the technical library at work. Titled Multiprocessors and Parallel Processing, it was published in 1974 by John Wiley & Sons, Inc. and edited by Philip H. Enslow. What a trip down memory lane that was! One of the subtle points of the book was how minicomputers (DEC, etc.) were encroaching on mainframe computers (Burroughs, IBM, etc.). The whole microcomputer revolution that started in the late 1970s and 1980s was completely invisible to all but the most radical thinkers and future forecasters.

From a historical perspective, it’s interesting to see what subjects were covered. The book begins with an overview of computer systems, including a four-page section describing “The Basic Five-Unit Computer”, with four block diagrams that illustrate the various configurations of the “Input Unit”, the “Arithmetic Logic Unit”, the “Memory Unit”, the “Control Unit”, and the “Output Unit.” I guess it was revolutionary stuff in ’74, but it seems pretty simple compared to what they pack into a dual-core processor these days.

One interesting emphasis in the book was the potential for improved reliability that multiprocessor computers could offer. It’s hard to remember that computer failures were a major source of worry in the ’60s and ’70s. There were many discussions during the Apollo missions about whether the guidance computers should be redundant systems, but the added weight, complexity, and fragility of the computers of that era made redundant systems unattractive, especially since redundant computers wouldn’t have added significantly to the overall reliability. An MIT document on the Apollo Guidance system put the estimated failure rate at 0.01% per thousand hours, so the mission time of 4000 hours produced a “low” probability of success of 0.966, which meant that spare components had to be included in every flight, along with the tools and procedures to repair the computers during the mission. Our current computers use components that have less than 1 failure per million hours, making them a thousand times more reliable (or more).
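As a rough aside (my own arithmetic, not a figure from the book or the MIT document), the usual way to relate a failure rate to a mission success probability is the constant-failure-rate model, under which the reliability over a mission of length t is:

\[
R(t) = e^{-\lambda t}, \qquad 0.966 = e^{-\lambda \cdot 4000\ \mathrm{hr}} \;\Rightarrow\; \lambda = -\frac{\ln 0.966}{4000\ \mathrm{hr}} \approx 8.6 \times 10^{-6}\ \mathrm{per\ hour}
\]

In other words, under that model the quoted 0.966 corresponds to an aggregate failure rate of very roughly 0.9% per thousand hours for the machine as a whole over the mission.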

One thing that hasn’t changed is the motivation for Multiprocessor and Parallel Processing Systems – improving system performance. Improving performance and reliability are the subjects covered in Chapter 1, along with the basic components and structure of computer systems. Chapter 1 takes 25 pages to cover its material.

Chapter 2, Systems Hardware, spends 55 pages describing memory systems and how to share memory between multiple processors. Also covered were fault tolerance, I/O, interfacing, and designing for reliability and availability. Nearly half of the pages in the first four chapters are in Chapter 2.

Chapter 3 covers Operating Systems and Other System Software in 27 pages: the organization of multiprocessor systems (Master-Slave, Symmetric, or Anonymous processors), resource allocation and management, and special problems for multiprocessor software, such as memory sharing and error recovery.

Chapter 4, Today and the Future, is the last chapter and it summarizes the future of multiprocessor systems in 10 pages or less. The final section in the chapter, The Future of Multiprocessor, includes the statement, “. . . the driving function for multiprocessors would be applications where. . . availability and reliability are principal requirements; however the limited market for special systems of that type would not be enough incentive to develope [sic] a standard product specifically for those applications.” Sounds like they didn’t expect multiprocessor and parallel processing to be a big market in the future.

 

Which brings us to a total of about 130 pages of a 330-page book. The remaining 200 pages are appendices that describe commercially available multiprocessor systems from Burroughs (D 825 and B 6700), Control Data Corporation (CDC 6500, CYBER-70), Digital Equipment Corporation (DEC 1055 and 1077), Goodyear Aerospace STARAN, Honeywell Information Systems (6180 MULTICS System and 6000 series), Hughes Aircraft H4400, IBM (System 360 and 370), RCA Model 215, Sanders Associates OMEN-60, Texas Instruments Advanced Scientific Computer System, Sperry Rand (UNIVAC 1108, 1110, and AN/UYK-7), and Xerox Data Systems SIGMA 9 computers.

For those of you who actually read through the list, you probably noticed that it includes the MULTICS system that inspired Thompson and Ritchie to develop Unix. MULTICS may not have been a commercial success, but one of those systems continued to run until the year 2000, when the Canadian Department of National Defence in Halifax, Nova Scotia finally shut it down. Must have been quite a system if it inspired a couple of people to write a completely different operating system.

What I think is most interesting is that the majority of the book is about the hardware, and that system software (and the joys of parallel programming) is discussed on only 27 of the 330 total pages, most of which discuss how the software interacted with the hardware. At the time, everyone saw the major hurdle as being the hardware, something that integrated circuits and the massive shrinking of components have almost rendered irrelevant. The amazing efforts of Intel, AMD, TI, and other chip makers have made high-performance, high-availability, high-reliability hardware available at reasonable prices, components that hardware designers could only (just barely) dream of in the 1970s.

Now it’s time for software developers to start thinking outside the box and come up with new ways to design software that are faster than the tried-and-true-slow-and-expensive-and-behind-schedule methods currently in use.