|MadSci Network: Computer Science|
In comparison to a supercomputer, how many FLOPS can a Pentium III process per second? FLOPS stands for 'Floating-point Operations Per Second'.

A raw FLOPS number doesn't mean anything. If you want meaningless numbers, you can go to the companies' web sites and ask their marketing departments. I'm not going to bother - you can find that out yourself. To get a meaningful comparison, you have to time a standardized, representative task (called a benchmark) on both computers. Marketing departments go to absurd lengths to avoid such apples-to-apples comparisons. Never trust a benchmark invented by the same company that is trying to sell the computer hardware - it's certain to be distorted somehow.

Supercomputers are used for floating-point calculations almost to the exclusion of everything else, and are consequently optimized to do floating-point operations very quickly. Every supercomputer I've ever heard of had really slow integer operations in comparison to its floating-point operations. x86 computers are typically used for integer operations and only uncommonly require floating point. I'm presently using a (nearly obsolete) NexGen-90 computer without a floating-point coprocessor. We use a software floating-point emulator for the NexGen and have hardly noticed, except for a couple of ill-behaved applications that refuse to work unless they detect a real hardware floating-point coprocessor. Software emulation is considerably slower than dedicated hardware, but that's hard to notice when you rarely need the operation you are emulating. In real life, our 90 MHz NexGen performed like a 100 MHz Pentium. Raw megahertz comparisons are apples to oranges and can be misleading, because they give no indication of how much actual computing gets accomplished. All sorts of tricks can be used to speed things along.
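To make the benchmarking point concrete, here is a minimal sketch of the "time a representative task on both machines" idea - the function names and workloads are my own invention for illustration, not any standard benchmark suite:

```python
import time

def benchmark(task, repeats=5):
    """Time a task several times and report the best run.

    Taking the minimum over several runs reduces interference from
    other processes sharing the machine.
    """
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        task()
        best = min(best, time.perf_counter() - start)
    return best

def float_task(n=100_000):
    # A crude floating-point workload: the multiply-accumulate
    # pattern that dominates many scientific inner loops.
    total = 0.0
    for i in range(n):
        total += 0.5 * i * 1.000001
    return total

def int_task(n=100_000):
    # An integer workload of similar size, for contrast.
    total = 0
    for i in range(n):
        total += i * 3
    return total

if __name__ == "__main__":
    print("float loop:", benchmark(float_task), "seconds")
    print("int loop:  ", benchmark(int_task), "seconds")
```

Run the same script on two machines and compare the times for the workload that resembles what you actually do - that comparison is meaningful in a way a quoted FLOPS figure is not.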
One of the more interesting (and difficult to accomplish) tricks that the NexGen and K6 CPUs perform is called 'speculative execution': when they encounter a branch (an intersection in the road) and aren't yet sure which way to go, they take BOTH paths in parallel and decide later which one was correct.

One of my acquaintances used to work as a programmer for Convex (a supercomputer manufacturer). The following comes from a story he once told me. Their competition in those days was IBM. IBM's hardware was theoretically noticeably faster (I'm guessing 20% faster), and they of course claimed a higher number of FLOPS. Yet in real life the two companies' computers performed at almost exactly the same speed. Programs running on these computers usually depended heavily on a standardized library of floating-point matrix functions. Convex had very optimized libraries, closely following the research of some university professors. IBM's libraries were not as optimized, which lowered overall performance.
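The Convex story - identical hardware-level claims, very different delivered speed because of library quality - can be illustrated on any machine. The sketch below (my own hypothetical example, not the actual Convex or IBM code) multiplies the same matrices two ways: a textbook triple loop, and a version that transposes one operand first so the inner loop walks both matrices row-wise, a classic library-level optimization. Same arithmetic, same answer, different speed:

```python
import random
import time

def matmul_naive(a, b):
    """Textbook triple loop; indexes into b column-wise on every step."""
    n, m, p = len(a), len(b), len(b[0])
    c = [[0.0] * p for _ in range(n)]
    for i in range(n):
        for j in range(p):
            s = 0.0
            for k in range(m):
                s += a[i][k] * b[k][j]
            c[i][j] = s
    return c

def matmul_tuned(a, b):
    """Same arithmetic, but b is transposed first so both operands are
    traversed row-wise - the kind of tuning a good library does for you."""
    bt = list(zip(*b))
    return [[sum(x * y for x, y in zip(row, col)) for col in bt]
            for row in a]

if __name__ == "__main__":
    n = 60
    a = [[random.random() for _ in range(n)] for _ in range(n)]
    b = [[random.random() for _ in range(n)] for _ in range(n)]

    t0 = time.perf_counter()
    matmul_naive(a, b)
    t_naive = time.perf_counter() - t0

    t0 = time.perf_counter()
    matmul_tuned(a, b)
    t_tuned = time.perf_counter() - t0

    # Both produce the same answer; only the speed differs.
    print(f"naive: {t_naive:.4f}s  tuned: {t_tuned:.4f}s")
```

If most of your program's time goes into such library routines, the library's quality - not the peak FLOPS of the hardware underneath - decides how fast your program actually runs.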
Try the links in the MadSci Library for more information on Computer Science.