An Essay on the Need for a New Computing Paradigm
Department of Computer Science
Computer science is a soft science. Like mathematics, it is based on accepting the truth of a set of basic axioms. The mathematical laws on which the conventional CS paradigm rests have served to guide the field through its history of incredible improvements in CPU speed and memory size, but, like the relationship between the arrangement of the fingers on the hand and the layout of the QWERTY keyboard, the relationship between these laws and the real world of electrons, light beams and digital circuits is arbitrary. The absolute physical benchmarks used in the physical sciences such as physics and chemistry are ignored in the conventional CS paradigm. Instead, algorithm analysis depends on the behavior of an algorithm in the limit. That is, even though computers are of finite size and the speed of electricity is bounded, algorithm analysis assumes unlimited data sizes and concentrates on efficiently utilizing increasingly insignificant CPU and memory costs, while ignoring the impact of algorithmic complexity on the increasingly dominant cost of programming.
Unfortunately, since the laws of the current paradigm are taught as absolute truths to every CS freshman, they are very difficult to challenge. Indeed, the perception is that they are infallible. The traditional relationship between mathematics and the sciences has been for mathematicians to develop abstract constructs without regard to engineering or science and then, later, perhaps centuries later, to discover that they have a useful application. In computer science, however, the abstract laws and theorems of mathematics have become the very basis of the discipline, with the consequence that any alternative paradigm can be PROVEN to be inferior. The result is that new computing paradigms, architectures and languages, which could address many of today's issues, such as the inefficiency of multitasking operating systems and hardware, have been overlooked, even suppressed. The wealth of alternative computer architectures, programming languages and paradigms of earlier years, such as systolic arrays, data flow, data parallelism and associative computing, has been abandoned.
This point is illustrated by the Connection Machine fiasco of the late 1980s and early 1990s. The CM, designed at MIT and promoted by DARPA, was an attempt to develop an alternative machine architecture. It failed because of inappropriate algorithm paradigms and programming techniques. That is, because of the mismatch between the accepted CS paradigm, the available programming languages and the reality of the machine, programmers could not develop "cost effective" algorithms. The most obvious example of this mismatch is sorting versus searching. The need to sort data for efficient retrieval is accepted without question in the conventional paradigm. To suggest that data can be retrieved efficiently without sorting is not only heresy, but can be "proved" wrong by any freshman computer scientist using the conventional paradigm's laws. Yet the CM, using data parallelism, could and did find any datum in memory in one step, in O(1) time. This would be an existence proof in any other discipline, but it was not and still is not accepted by the CS community, because the standard paradigm "proves" that it takes at least O(log n) time to find something (while conveniently ignoring the cost of sorting the data). As a result, the massive-searching approach to data selection and retrieval was not pursued. One consequence of the law that data must be sorted to be found efficiently is the family of tree organizations. Under a data-parallel searching paradigm, however, there is no need for nested menu trees or any other form of sorted data. Data-parallel searching, by eliminating the need for sorting, dispenses with linked lists and allows all data organizations (chronological, by recipient, by topical category) to be used simultaneously via associativity. That is, data parallelism and associative computing are one possible answer to simpler computing.
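The one-step associative select described above can be sketched in software. The Python sketch below is illustrative only (the keys and function name are invented here, not taken from the CM's actual programming interface): it simulates a data-parallel machine in which every memory cell compares its contents against the query simultaneously, so retrieval needs no sorted order, tree, or binary search.

```python
# A flat, unsorted table: one "memory cell" per record's key.  On a
# data-parallel machine every cell compares itself against the query at
# the same time; the comprehension below stands in for that single
# broadcast comparison step.
keys = [42, 7, 99, 13, 56, 7, 81]

def associative_select(keys, query):
    """Return the indices of every cell whose key matches the query.

    Conceptually this is one parallel step: no sorting, no tree
    traversal, no binary search -- all matching cells respond at once.
    """
    return [i for i, k in enumerate(keys) if k == query]

print(associative_select(keys, 7))   # -> [1, 5]: both copies, unsorted data
```

On a serial machine the comprehension still scans the list, of course; the point of the sketch is the programming model, in which one comparison is logically applied to all cells at once and duplicates cost nothing extra.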
This approach requires a return to the flat tables of yesteryear, but with gigabytes of memory and gigahertz of clock speed, flat tables are manageable, even preferable.
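A single flat table can then serve every access pattern at once. The sketch below is a hypothetical illustration (the record fields and data are invented for the example): the same unsorted table of message records is queried by recipient, by topic, and chronologically, each through the same parallel-select step, with no per-organization index, sort order, or linked list.

```python
# One unsorted flat table of message records; the fields are hypothetical.
messages = [
    {"date": "2003-01-15", "recipient": "alice", "topic": "budget"},
    {"date": "2003-01-02", "recipient": "bob",   "topic": "schedule"},
    {"date": "2003-02-10", "recipient": "alice", "topic": "schedule"},
]

def select(table, **criteria):
    """Data-parallel select: conceptually, every record tests the
    criteria at once; no field needs a separate sorted index."""
    return [r for r in table
            if all(r[field] == value for field, value in criteria.items())]

# The same table supports every organization simultaneously:
to_alice   = select(messages, recipient="alice")             # by recipient
scheduling = select(messages, topic="schedule")              # by topic
january    = [r for r in messages if r["date"] < "2003-02"]  # chronological
```

Criteria also compose freely (e.g. `select(messages, recipient="alice", topic="schedule")`), which is exactly the simultaneous associativity the essay claims sorted, tree-structured organizations cannot provide.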
Moreover, the data-parallel, search-based paradigm is well suited to high-school-level tabular data structures and to natural language communication with the computer. Both features significantly reduce programming costs. Unfortunately, like the internal combustion engine, once a technology has progressed to a certain point, it is practically impossible to displace. But just as the eventual depletion of our natural energy resources may force changes in the design of car engines, the eventual depletion of "free" improvements in speed and memory may force a change in the conventional CS paradigm. Even today, supercomputers spend from 70 to 95% of their computing power overcoming the limitations of the current paradigm, so that only 5 to 30% of the actual speed improvements reach the user. A change of paradigm, however, is most likely to occur in the laptop, PDA and cell phone environment, not the supercomputer environment, because there the driving issue is not a lack of power but the lack of physical space for keyboards, display screens, mice, etc., and the demand for easier modes of computing. Eventually, hopefully, computer scientists will come to realize that CS has been trapped by the von Neumann model's success, and that just as the laws of plane geometry work well over a small portion of the Earth's surface but fail when applied to a much larger portion, so too the laws of the current CS paradigm work well for the old world of "slow", few, expensive computers and cheap programmers, but are not well suited to the new world of fast, ubiquitous, cheap computers and expensive programmers.