When we think of programming a computer, we automatically think of using a programming language, such as Java, C++, or (shudder) COBOL. And it’s not surprising: programming languages were considered great advancements when they were first rolled out. Unfortunately, our program development technology hasn’t kept pace with our computer technology. Our computers are millions of times faster than they were when the first “human readable” programming languages were developed, but our ability to develop programs hasn’t sped up nearly as much.
A Short History of Programming Languages
Ada Lovelace is arguably the first computer programmer, working with Charles Babbage in the 1840s on his Analytical Engine. There is some discussion about how closely they worked together, but in her Notes to an article about the Analytical Engine, she describes the algorithm to calculate Bernoulli numbers. They also worked and corresponded on the “Table & Diagram” that represented the punch card flow to be used for calculating Bernoulli numbers. The concept of punch cards was borrowed from the weaving industry, where the steam-powered mills required control mechanisms that worked faster than humans could.
Flash forward approximately 100 years to ENIAC (Electronic Numerical Integrator and Computer), the first general-purpose digital computer, whose initial primary goal was to calculate artillery firing tables for the United States Army’s Ballistic Research Laboratory. Data input to and output from the 30-ton beast was done using punched cards, while programs were set up by plugging cables and setting switches that selected the machine-level operations ENIAC was to perform; e.g., move the contents of Accumulator A to Accumulator B, etc.
Because the numerical codes weren’t particularly meaningful to humans, assembly language was created, with abbreviations that represented the desired activity, such as MOV, ADD, and JMP. Each assembly language instruction was translated one-to-one into the appropriate numerical value for the machine language instruction to be performed.
Grace Hopper is credited with developing the first compiler for a programming language and conceptualizing the machine-independent programming language. Her contributions led to the development of COBOL (Common Business Oriented Language), one of the first modern programming languages. She is also credited with popularizing the term “debugging”. She was awarded the rank of Rear Admiral in the United States Navy for her many contributions to computer science.
It’s A Generation Thing
First generation languages were typically machine specific and frequently had to be entered as binary numeric values using the switches on the front panel of the computer.
Assembly languages are now referred to as second-generation languages. COBOL, Fortran, and ALGOL are examples of what are now referred to as third-generation, or “high-level,” programming languages. They were hailed as breakthroughs because each high-level instruction could actually represent multiple machine-level instructions, which enhanced our ability to develop more sophisticated computer programs. Third-generation languages also brought more structure to programming.
Later third-generation languages, such as BASIC, Pascal, and C, added more refinements to make the languages more “programmer friendly.” Pascal was created to implement the concepts of “structured programming”, which tried to do away with evil “Go To” instructions and spaghetti code by forcing program modules to have a single entry point and a single exit point.
Fourth-generation languages were all the rage in discussions in the late 1980s and 1990s, but they failed to provide any significant improvements. More accurately, consensus was never reached on which language provided enough improvements to make it worth the trade-offs needed to switch to it.
So third+ generation languages, such as C++, Java, and Modula-3, were developed, implementing object-oriented programming (OOP). OOP was intended to encapsulate all of the functions and data associated with some conceptual chunk into one single object. By bolting object extensions onto the familiar third-generation languages, the dream was that programmers wouldn’t lose too much productivity while they learned to use the new objects.
No Silver Bullet
OOP has failed to significantly improve the productivity of developers or reduce the number of bugs in computer programs. Yes, the sophistication of programs has increased significantly, but so have the training prerequisites.
For example, consider the “hello world” program, traditionally the first program we wrote in the C language. It was just a few lines long, depending on how you placed the curly braces.
However, to write this simple hello world in a window necessitated using the standard Windows framework, which required calling a “create new window” function with roughly a dozen parameters. It’s a much more sophisticated interface, but also a lot more complicated, with more opportunities for mistakes.
The advances that we see now in program capabilities are based more on the availability of standard libraries and less on programming language improvements. Our more sophisticated programs leverage the capabilities built into the libraries; our languages are essentially 30 years old and unlikely to change.
This series of blogs builds on earlier posts about brain limitations and programming languages, specifically examining how programming languages fail to overcome human limitations. Until we address those human limitations, we won’t be able to improve developer productivity or reduce the number of bugs in programs. This series will also keep an eye on parallel programming and why current programming languages cannot be the foundation for the efficient development of multi-threaded and/or multi-core programs.