
parallel chip architecture can be extended by using multiple processors or multiple computers, which execute parallelised applications.

PARALLELISH UNIVERSE

Uniprocessor parallelism is the simplest form of parallel computing. Combined with the benefits of the RISC (Reduced Instruction Set Computing) processor architecture developed during the late 1980s, it delivers increased performance. Because all the parallelism is achieved within the chip, the system design is relatively simple and the parallelism is exploited by the application software compilers. Modern high-performance desktop workstations use uniprocessor parallelism.

Multiprocessor parallelism was the next logical step beyond uniprocessor systems: a single computer contains more than one processor. Again, compilers take care of the parallelism in a semi-automatic fashion and, because more than one processor must be accommodated, the operating system software is more complex. Multiprocessor systems have shared memory and run a single copy of the operating system. The most common form of multiprocessing is Symmetric Multi-Processing, or SMP, where the processors have equal rank; there is no master processor. With this arrangement you can have up to 128 processors in a single box running under a single operating system. The programming model for multiprocessor systems is relatively easy and system administration is straightforward, because only a single machine is involved. From the user's point of view, multiprocessor systems offer the most cost-effective form of parallel computing, and SMP systems serve as general-purpose compute servers covering a large range of application areas. However, multiprocessor systems have limited scalability because the processors share system resources within the machine. This means each additional processor yields a smaller incremental performance gain, an effect that is compounded with faster processors.

Multicomputer parallelism is where the parallel computing environment is provided by two or more self-contained computers linked by a network. This type of environment is often referred to as ‘clustering’. The characteristic of this environment is that nothing is shared: each machine has its own processor(s), memory, input/output architecture and operating system, and the memory is dedicated to the processor(s) within that machine.

The advantage of this environment is its flexibility; individual machines can be switched from parallel tasks to specific dedicated tasks. The individual machines can be uniprocessor machines or SMP machines. In the latter case there are two levels of parallelism and therefore a double performance benefit, because SMP machines increase the throughput per node within the cluster. Application software must be parallelised manually because of the complexity of this environment. The price/performance of these systems is excellent, as they offer supercomputing processing power at multiple-workstation prices. They are highly scalable because nothing is shared: each time a machine is added, the performance of the cluster increases almost linearly, although this clearly depends on the software being executed in a cluster environment. However, a cluster cannot yet be treated as a single system with global shared memory that performs like a ‘virtual’ supercomputer.
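To make the shared-memory SMP model above concrete, the following is a minimal sketch in C using OpenMP; the array size, variable names and the -fopenmp compile flag are assumptions for the example rather than anything prescribed by the article. Every processor works on the same array held in shared memory, and the compiler directive, rather than the programmer, spreads the loop iterations across the available processors.

/* Minimal shared-memory (SMP) sketch using OpenMP.
   Compile with something like: cc -fopenmp sum.c   (flag varies by compiler) */
#include <stdio.h>
#include <omp.h>

#define N 1000000

int main(void)
{
    static double data[N];
    double total = 0.0;

    for (int i = 0; i < N; i++)
        data[i] = 1.0;                        /* fill shared array with sample values */

    /* Each thread sums a slice of the same shared array; the reduction
       clause combines the per-thread partial sums safely. */
    #pragma omp parallel for reduction(+:total)
    for (int i = 0; i < N; i++)
        total += data[i];

    printf("total = %f using %d processors\n", total, omp_get_num_procs());
    return 0;
}

Because the memory is shared, no data has to be copied between processors; the same sharing is also why contention for system resources limits scalability as more processors are added, as noted above.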
Parallel hardware produces performance gains, which can be maximised by appropriately parallelising the application software, either to utilise the parallel architectures, or to use the system resources in clusters more effectively.
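By contrast with the shared-memory sketch above, parallelising application software for a cluster is a manual job, typically done with a message-passing library. The sketch below uses MPI in C purely as an illustration (the problem size, process count and mpirun command line are assumptions, not part of the article): each process holds its partial result in its own local memory, and the results are combined by passing messages over the network, reflecting the ‘nothing is shared’ character of multicomputer parallelism.

/* Minimal multicomputer ("cluster") sketch using MPI.
   Run with something like: mpirun -np 4 ./partial_sum */
#include <stdio.h>
#include <mpi.h>

#define N 1000000

int main(int argc, char **argv)
{
    int rank, size;
    double local = 0.0, total = 0.0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's id within the cluster */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* number of cooperating processes */

    /* Each node works on its own slice of the problem in its own local memory. */
    for (long i = rank; i < N; i += size)
        local += 1.0;

    /* Partial results are combined by exchanging messages over the network. */
    MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("total = %f computed across %d processes\n", total, size);

    MPI_Finalize();
    return 0;
}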

