Advanced Computing
High Performance Computing
- Note the widespread availability and mushrooming popularity of low-cost multi-processor computers. Multi-processor computers are becoming common in the medium- to high-end server markets and will migrate downstream into the department server and workstation markets within the next five years. Chip densities will continue to double every 24 months; increased densities yield more chips per wafer of silicon, which lowers the price of individual chips and makes multi-processors more affordable.
- By the year 2005, supercomputers will be able to perform one trillion floating point operations per second (1 teraflop).
- Neurocomputer sales and performance will continue to increase. Sales will reach $1 billion by the year 2010. Wafer density of neurocomputers sold in the year 2010 will routinely exceed 1 billion neurons and interconnections. By the year 2020, large neurocomputers built from optical components will contain as many neurons as the human cerebral cortex. As a benchmark, in 1990 Intel’s most advanced neural network computer contained only 10,000 neurons, or gates.
- By 2005, the price of multi-processor workstations will decline significantly. As a result, they will be deployed in large numbers, and as their numbers grow, “clustering” will become increasingly popular.
- By 2005, chip manufacturers will begin shipping multiple CPUs on a single chip.
- Supercomputers are designed to split a single computational problem into a number of smaller problems that can be solved simultaneously by an army of small computers working in parallel (a minimal sketch of this split-and-combine pattern follows this list). The key measure of a supercomputer is the number of calculations it can perform in a given period of time. About five years ago, designers began linking several supercomputers together into clusters in an effort to increase computational throughput. Clustering is now common practice for most high performance computing. Recent advances in software design have made it possible to link geographically distant computers into “global clusters”, and we should expect this capability to become widely used.
- The earliest supercomputers and supercomputer clusters were built with proprietary hardware and software. As time went on, manufacturers continued to use proprietary hardware but began using industry-standard software (i.e. operating systems), which expanded the entire market for high performance computers. Today, many computer manufacturers still sell proprietary hardware but have begun offering free (i.e. open source) operating systems with their products. Of equal importance is the emergence of a do-it-yourself market for supercomputers. Today, it is common to see supercomputer clusters that were assembled from many “commodity” (i.e. mass produced) computers and run on open source operating systems. We can expect the do-it-yourself segment of the market to grow.
- The age of stand-alone supercomputers is rapidly coming to an end. Individual, highly specialized machines are no longer capable of matching the throughput of two or more machines that have been tied together into a “cluster”. As the price of computer clusters continues to decline, it will become increasingly uneconomical to maintain highly specialized, state-of-the-art, stand-alone devices like Cray computers.
- As clusters proliferate, they will be joined together via the Internet to form a “grid”. Grids will take on the appearance of a single, massively parallel computer, albeit on a very large scale.
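As a rough illustration of the split-and-combine approach described above, the following sketch divides one large computation into chunks and hands each chunk to a separate worker process. The problem (summing a million squares), the chunk count, and the worker count are arbitrary choices made for illustration; they do not describe any particular supercomputer or cluster.

```python
# A minimal sketch of parallel decomposition: one big problem is
# split into smaller sub-problems that independent workers solve
# simultaneously, after which the partial results are combined.
from multiprocessing import Pool

def partial_sum(chunk):
    """Solve one small sub-problem: sum the squares of a slice of the data."""
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    data = list(range(1_000_000))      # the single computational problem
    n_workers = 4                      # the "army" of small computers
    size = len(data) // n_workers
    chunks = [data[i * size:(i + 1) * size] for i in range(n_workers)]
    chunks[-1].extend(data[n_workers * size:])   # pick up any remainder

    with Pool(n_workers) as pool:
        partials = pool.map(partial_sum, chunks)  # solved in parallel

    print("combined result:", sum(partials))
```

The same divide-distribute-combine pattern scales from the processes on one workstation shown here to the nodes of a cluster or grid; only the transport between workers changes.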
Disposable Computers
- Current US tax law allows computers to be fully depreciated after 3 years. In effect, computers have only a 3-year economic lifespan: a machine loses one third of its value each year (a worked schedule follows this list).
- Low-end clone makers are continuing to reduce prices. Computers costing less than $1000 are now commonplace. With prices falling so low, it is often cheaper to buy a new computer than it is to repair a broken one. Note the birth of disposable computers.
- Hardware and software companies will increasingly rely on planned obsolescence and short product life spans as a way to ensure continuous cash flow. We can expect product and upgrade release cycles to become more regular.
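To make the three-year write-off above concrete, here is a small sketch of the straight-line schedule it implies. The $3,000 purchase price is an arbitrary example, and real tax treatment varies.

```python
# A minimal sketch of 3-year straight-line depreciation: the machine
# loses one third of its purchase price each year and is fully
# written off after year three.
def book_value(purchase_price, age_years, lifespan_years=3):
    """Remaining book value under straight-line depreciation."""
    remaining = purchase_price * (1 - age_years / lifespan_years)
    return max(remaining, 0.0)   # book value never goes below zero

for year in range(4):
    print(f"year {year}: ${book_value(3000, year):,.2f}")
# year 0: $3,000.00
# year 1: $2,000.00
# year 2: $1,000.00
# year 3: $0.00
```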
Standard Product Families and Price Ranges
- About 5 years ago, computer product categories and price ranges began to stabilize. Today, standard product categories and price ranges are the norm (e.g. inexpensive, moderate and expensive price ranges exist for desktop computers, laptops and servers).
- New categories may emerge over time (e.g. the PDA market), but when they do, the same price stratification phenomenon can be expected to take place.
- If Moore’s Law holds true, the technical capabilities of a given type of product (like a laptop) will improve 100% every 18 months while its price remains the same. At that rate, a product at a fixed price point improves roughly tenfold every five years.
Improved Computer Performance
The price-performance ratio for all types of computers will continue to improve. By way of comparison, today’s average desktop computer has the same capabilities that a $1 million computer had 10 years ago. If Moore’s Law continues to hold true, the same will be said of computers 10 years from now.
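As a back-of-the-envelope check on that ten-year claim, the sketch below simply compounds one doubling every 18 months; the 18-month doubling period comes from the Moore’s Law figure cited above, and the time spans are illustrative.

```python
# Back-of-the-envelope Moore's Law arithmetic: capability doubles
# every 18 months, so over t months it grows by a factor of 2**(t/18).
def improvement_factor(months, doubling_period_months=18):
    return 2 ** (months / doubling_period_months)

print(f"5 years:  {improvement_factor(60):.0f}x")   # ~10x
print(f"10 years: {improvement_factor(120):.0f}x")  # ~100x
```

A roughly 100-fold improvement per decade is what lets an average desktop machine match the million-dollar computer of ten years earlier.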
Standardized Form Factors
Within 5 years, all personal computers will be packaged into one of three categories. The first category will contain one-handed, push-button devices like cell phones or pagers. The second category will contain two-handed or pen-based devices like PDAs or smart phones. Category three will contain knee, lap or desktop devices.
Reference: Anne Knowles, “A New Kind of Client,” PC Week, January 3, 2000.
Standard Hardware Configurations
- Standardization will help the IT staff diagnose and solve software and equipment problems in a more timely manner. This will result in fewer disruptions to a user’s workday. Standardization will thus improve customer service.
- Standardized hardware is easier to deploy, repair and maintain.