Newsgroups: comp.arch
Path: utzoo!utgpu!news-server.csri.toronto.edu!rpi!zaphod.mps.ohio-state.edu!wuarchive!rice!ariel.rice.edu!preston
From: preston@ariel.rice.edu (Preston Briggs)
Subject: Re: Compilers and efficiency
Message-ID: <1991May13.211555.28824@rice.edu>
Sender: news@rice.edu (News)
Organization: Rice University, Houston
References: <28297C23.6984@tct.com> <rwa.674151323@aupair.cs.athabascau.ca> <12268@mentor.cc.purdue.edu>
Date: Mon, 13 May 91 21:15:55 GMT

hrubin@pop.stat.purdue.edu (Herman Rubin) writes:

>But the CRAYs, for example, take about 20 instructions
>to perform a double precision multiplication of two single precision
>numbers, while the CYBER 205 takes exactly 2.  The earliest computers
>multiplied two integers of more than 32 bits and obtained a double length
>result in one instruction.

Sure, but each instruction was very slow, in modern terms.

>Now the early computers were fast on memory and transfer, but the newer
>ones are very definitely not so, unless trickery is used.

New computers have much faster memory access.
They also have much, much faster FP and integer instructions.
So the balance has changed.  Everyone knows this.

The idea is that the overall throughput goes up.
Certainly my programs run faster.  I expect yours do too.

If you want to take advantage of new architectures and implementations,
you're probably going to have to rethink some of your old assumptions.
For example, in the old days, FP was much more expensive than integer
arithmetic.  Now they are about the same.  In the old days, FP was
more expensive than memory accesses.  Now FP is generally cheaper.

It's easy to get the old instructions (say, int x int -> long int) in 1 cycle:
just slow the cycle time back down to the old rates.

