[HN Gopher] The Apple M1 compiles Linux 30% faster than my Intel i9
___________________________________________________________________
The Apple M1 compiles Linux 30% faster than my Intel i9
Author : geerlingguy
Score : 122 points
Date : 2021-06-01 19:12 UTC (3 hours ago)
(HTM) web link (www.jeffgeerling.com)
(TXT) w3m dump (www.jeffgeerling.com)
| dathinab wrote:
| Funny thing is that the newest i7 from Intel (10nm) might also
| compile it noticeably faster than the i9 MacBook.
|
| There are very few laptops, if any, that handle an i9 well. And
| Apple is fairly well known for not doing a great job of moving
| heat out of its laptops. This is often blamed on Intel for CPUs
| that produce too much heat, but there are other laptops which
| handle these CPUs just fine...
|
| Anyway, that doesn't change the fact that Apple's M1 is really
| good.
| bhouston wrote:
| Regarding the displays: I use a dual-screen setup on the mini
| (HDMI and DisplayPort) and it is perfect. So it is either a
| hardware issue in the cables, the mini, or the monitor. I
| currently use Monoprice 32-inch HDR monitors.
| bhouston wrote:
| I would suggest swapping the monitor and the cables separately.
| I do think there is an issue with one of them.
| geerlingguy wrote:
| I've tried two different (known-good) HDMI cables and only have
| the one DisplayPort cable (which works fine on the i9 MBP and
| even on my 13" MacBook Air)... it seems to be something funky
| with the mini only.
|
| At least with the DisplayPort cable, the dropouts don't happen,
| it's just annoying to have to manually turn off my monitor
| every time I walk away lest it go into the on/off/on/off cycle
| while the Mac is asleep.
|
| I did order a CableMatters USB-C to DisplayPort cable today
| to see if going direct from USB4 to the monitor will work
| better than TB3->CalDigit->DisplayPort->monitor.
| secondcoming wrote:
| Man, I have huge DisplayPort issues when using my Dell 7750
| with an external monitor. It can take a couple of reboots
| before it'll send a signal to it. The OS can see the monitor,
| but it just won't use it. It's incredibly annoying.
| rsynnott wrote:
| Their DP problem sounds vaguely familiar; I'm almost certain I
| had the same thing years ago with a 2014 MBP. Can't remember
| what the fix was, though...
| 1_player wrote:
| Note: 30% faster than a thermally challenged i9 on a MacBook Pro,
| not a desktop one. Given the comments on similar threads, I feel
| this needs to be mentioned.
| systemvoltage wrote:
| Why? It's all the more apples-to-apples as a comparison because
| the form factor remains the same and the thermal limitations
| are similar between the two systems.
|
| Why would you want to compare a desktop class i9 with a 10 watt
| M1 chip?
| weatherlight wrote:
| Not sure why you are being downvoted.
| programmdude wrote:
| The title implies the M1 is always better than every Intel
| chip, given that the i9 is Intel's best consumer chip.
|
| It's been known for years that Apple has been limiting the
| Intel chips by providing insufficient cooling. I don't overly
| care about how fast an M1 chip in a MacBook is compared to
| an Intel chip in a MacBook. I want to know how fast an M1 is
| compared to a desktop i9 (given Mac minis have M1 chips now),
| or compared to a properly cooled latest-gen laptop i9.
|
| All this experiment shows is that insufficiently cooled
| processors perform worse than sufficiently cooled ones. It's
| a classic example of cherry-picking data. Admittedly, my
| solution would be different from the article author's. Instead
| of using a badly cooled laptop to compile stuff, I'd set up a
| build server running Linux.
| monocasa wrote:
| Because there are a lot of issues with i9s in those form
| factors that lead to less perf than even an i7 from the same
| generation.
|
| There was a Linus Tech Tips video the other day about how even
| current-gen Intel laptops can see better perf from an i7 than
| an i9. It looks like the i9s simply don't make sense in this
| thermal form factor and are essentially just overpriced chips
| for people who want to pay the most to have the biggest
| number.
| jayd16 wrote:
| It's also a 9th-gen i9, a two-and-a-half-year-old chip.
| Bancakes wrote:
| You know what's interesting: China and Russia have been
| struggling for years to get something on the level of Intel's
| Westmere. And here comes Apple out of the blue with a
| proprietary arch and hardware emulator; Cinebench shows it
| to be around a Xeon X5650 (Westmere). Easy.
|
| M1X and M2X in the making, too?!
| nguyenkien wrote:
| Apple has at least 13 years of experience.
| GloriousKoji wrote:
| Not to dismiss the hard work the engineers at Apple put in,
| but China and Russia haven't poached as many engineers over
| the years as Apple has.
| nosequel wrote:
| That's correct, China just poaches the tech when it lands
| on their soil without paying a thing. At least Apple pays
| those they've poached a salary.
| strangemonad wrote:
| Definitely not out of the blue. This has been a long and
| steady march.
| mhh__ wrote:
| Not only does Apple have decades of experience, both as
| itself and as PA Semi, it can also probably outspend any
| effort these countries could mount politically (Russia
| yes, China probably not, but you get the idea), _especially_
| when weighed against its ease of acquiring information.
| ethbr0 wrote:
| Also Intrinsity.
| bayindirh wrote:
| > And here comes Apple out of the blue with a proprietary
| arch and hardware emulator...
|
| Apple has been designing processors and GPUs at least since
| the iPad 2's tri-core GPU. They're neither coming out of the
| blue nor newbies in this game.
| dathinab wrote:
| It's not out of the blue and not even surprising.
|
| Look at this from this POV:
|
| - Apple started on custom ARM many years ago.
|
| - Apple isn't really smaller than AMD, and AMD also rewrote
| and restructured its architecture some years ago.
|
| - Apple hired many highly skilled people with experience
| (e.g. people who previously worked at Intel).
|
| - Apple uses state-of-the-art TSMC production methods. Intel
| doesn't, and the "slow" custom chips from China and Russia
| don't use them either, as those countries want chips they
| control, produced with methods they control. (TSMC's
| production methods are based on tech not controlled by
| Taiwan.)
|
| - Apple had a well-controlled, "clean" use case, to which
| they added more support bit by bit. For example, they could
| drop hardware 32-bit support, and they carry no extensions
| they don't need for their own products; this can make things
| _a lot_ easier. On the other hand, x86 has a lot of old
| "stuff" still needing support, and its use cases are much
| less clear cut (with respect to things like how many PCIe
| lanes need to be supported, how much RAM, etc.). This, btw,
| is not limited to their CPUs but also (especially!) applies
| to their GPUs and GPU drivers.
|
| So while Apple did a great job, it isn't really that
| surprising.
| dylan604 wrote:
| >China and Russia have been struggling for years
|
| Sometimes it's hard to figure out all of the things left
| out of the plans that were stolen. Some engineer saw
| something not working, looked at the plans, and then
| noticed where the plans were wrong. The change gets
| implemented, but the plans don't get updated. Anyone
| receiving the plans will not have those changes. Be careful
| of plans that fall off the back of trucks.
| [deleted]
| PragmaticPulp wrote:
| I will never understand why Intel stuck with the i3, i5, i7,
| and later i9 branding across so many generations.
|
| I've lost track of how many times I've heard people wonder why
| their 10-year old computer is slow. "But I have an i7"
| lostgame wrote:
| _Thank you_ for saying this.
|
| Honestly - it's not even the i3, i5, i7, i9 thing. It's the
| fact that two i5s, etc., can be _ludicrously_ different in
| terms of performance from one another because of the sub-
| generations within the named generations.
|
| Yes - it's ridiculous that I could buy an i7 ten years ago,
| buy an i7 today, and yet - of course - they are absolutely
| nothing close to each other in terms of performance.
|
| IIRC the Pentium line did not make this mistake. (Though the
| Celeron line could be very confusing, if I recall correctly.)
| caspper69 wrote:
| To play devil's advocate, I can buy a Corvette today that
| is nothing like the one from ten years ago too.
|
| In fact, lots of things are like this.
| Dylan16807 wrote:
| How much does the top speed differ?
| Invictus0 wrote:
| Corvettes, like all cars, are identified by their model
| year. Hence there is no confusion that a 2021 Corvette is
| "better" than a 2011 Corvette.
| dylan604 wrote:
| Are the '21 models better than an '11? I don't think
| anyone would say they'd rather have a '21 than a '69.
| NoSorryCannot wrote:
| Almost everyone knows that the model year is part of the,
| idk, "minimal tuple" for identifying vehicles, though,
| and you can count on it always appearing in e.g.
| advertisements.
|
| In CPU land, the architecture codename or process node
| might be part of such a "minimal tuple" but these are
| frequently omitted by OEMs and retailers in their
| advertising materials.
| throwaway894345 wrote:
| The point is that people think the numeral in the brand
| is something like a version number in which larger
| numerals are better. I.e., an i7 is always better than an
| i5 when in fact a new i5 might exceed the performance of
| a dated i7 for some particular metric.
| the_arun wrote:
| Instead they could have just called them i2017, i2018...
| going by year of manufacture. That way you could make some
| sense of the performance, with the understanding that iN
| is always better than i(N-1).
| Dylan16807 wrote:
| That gives you the opposite problem, where someone gets a
| brand new dual core and is confused by it being slower.
| londons_explore wrote:
| Best is to give them a number that approximately maps to
| performance.
|
| The "pro" version might be an i8, while the budget
| version is i3. In a few years time, the pro version will
| be up to i12 while the budget version is now i8.
|
| You have model numbers for when someone needs to look up
| some super specific detail.
| vetinari wrote:
| Year of manufacturing says nothing; you can have two
| different gens manufactured in the same year, one for
| lower price tier and the other for the higher one. Just
| like Apple still produces older iPhones, same thing.
|
| Instead, you have designations like "Intel Core i7 8565U
| Whiskey Lake" or "Intel Core i7 10510U Comet Lake". The
| first one is 8th generation (=8xxx), the second one is
| 10th generation (10xxx, but the 14nm one, not the 10nm
| "Ice Lake"), and most OEMs do put these into their
| marketing materials and they are on their respective web
| shops (these two specifically were copied from two
| different ThinkPad X1 Carbon models).
| bluejekyll wrote:
| Except this is an article about the Apple MacBook Pro 16",
| which came out approx. one year ago (edit: one and a half
| years ago).
| utopcell wrote:
| still.
| danpalmer wrote:
| To go another layer deeper in analysis though, it is still >20%
| faster on the thermally throttled M1 MacBook Air. That's a
| laptop without a fan, and it's still faster than an i9 with a
| fan.
| sillysaurusx wrote:
| This right here. I was so skeptical of getting an Air. And
| yes, during heavy compile sessions with make -j8, it can hang
| after a half hour or so. But (a) you can make -j7 instead,
| and (b) it's impressive how long it lasts without hitting
| that point.
|
| I've been thinking of doing the cooling mod too, where you
| pop open the back and add a thermal pad to the CPU. It
| increases conductivity with the back of the case, letting you
| roast your legs while you work, aka cool the CPU. :)
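|
| (For reference, a minimal sketch of capping build parallelism
| at, or one below, the hardware thread count; assumes GNU make
| and coreutils' nproc -- on macOS, `sysctl -n hw.ncpu` reports
| the same number:)
|
|     # one make job per hardware thread
|     make -j"$(nproc)"
|     # back off one job to ease sustained load on a fanless Air
|     make -j"$(($(nproc) - 1))"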
| dylan604 wrote:
| Do any of the laptop cooler systems with fans help the M1
| Air thermal issues? I used one on an older 2011 MBP, and it
| definitely helped that laptop. It might have just been the
| placebo of getting the laptop off of a flat surface to
| allow air to circulate around it, but the fans could only
| help with that.
| Bancakes wrote:
| Being thermally challenged is part of the design, huh...
| ajross wrote:
| This was a famously bad CPU/cooling design, actually. LOTS of
| people complained about it at the time. You can place blame
| on either party according to your personal affiliation, but
| similar Coffee Lake chips from other vendors with more robust
| (and, yes, louder) cooling designs were running rings around
| this particular MacBook.
| Traster wrote:
| It's a problem of Intel's own making - marketing vastly
| different capabilities under the same brand in order to
| segment the market.
| frozenport wrote:
| or Apple doing a bad job with the previous generation
| patmorgan23 wrote:
| Or both
| tedivm wrote:
| The 2019 MacBook ironically had better heat dissipation
| than the previous generations, but it's still pretty bad.
|
| We can blame Apple for using chips that are too intense for
| their laptops, and we can blame Intel for making garbage
| chips that can't really perform in real world cases while
| spending a decade not bothering to innovate. Apple at least
| is moving away from Intel as a result of all of this, and
| I'm really impressed with how well the M1 transition has
| been going.
| chippiewill wrote:
| Ehh. I take the view that Apple has been intentionally
| sandbagging their laptops for a while to facilitate an
| ARM transition.
|
| Not to say that M1 isn't amazing, but I think Apple has
| been preparing for this for a while and needed to make
| sure it would succeed even if their ARM CPUs weren't
| quite as groundbreaking as they turned out to be.
| ethbr0 wrote:
| Possibility 1: Apple was making do with what Intel gave
| them, because their profit margins didn't care and they
| were busy navel-gazing into their post-Jobs soul
|
| Possibility 2: Apple had a master plan to intentionally
| torpedo performance in order to make their future first-
| party chips appear more competitive
| Dylan16807 wrote:
| What Intel supplied was the bigger problem, but Apple was
| definitely not trying to make the chips perform well.
| They were hitting thermal limits constantly, and, more
| directly toward "sandbagging", the recent MacBook Airs
| have a CPU heat sink that _isn't connected to anything
| and has no direct airflow_. They could easily have fit a
| small array of fins for the fan to blow over, but chose
| not to.
| reader_mode wrote:
| Compared to other amazing Intel laptops of similar form
| factor? All Intel laptops are insanely loud and generate
| tons of heat for any reasonable performance level. Intel
| is just a generation or more behind in process, and it
| starts from an architecture designed for servers and
| desktops and cuts it down. Apple went the other way, so
| it's reasonable they do better on thermals and power
| consumption.
| rsynnott wrote:
| For _five years_?
|
| Or longer, really; while everyone, of course, loves the
| 2015 MBP, they're mostly thinking of the integrated-GPU
| one; the discrete-GPU version was pretty thermally
| challenged. Arguably Apple's real problem with the
| post-2015 ones was that Intel stopped making chips with
| fast integrated GPUs, so Apple put discrete GPUs in all
| SKUs.
| bluejekyll wrote:
| Are there any benchmarks you can point to that have a
| similarly spec'd laptop (ideally similar size & weight
| too) that would show that Apple is sandbagging?
| rvz wrote:
| Good! The M2 will be even faster. Can't wait to skip the M1
| then.
| rowanG077 wrote:
| I really hope Apple can control themselves with these CPUs.
| The M1 has the perfect thermal envelope for the MacBook Pro:
| no thermal throttling, ever. I greatly fear a future where
| Apple starts going down Intel's path, where you buy a CPU
| that looks sick on paper, but once you actually try to do
| anything with it, it throttles itself into the ground.
| klodolph wrote:
| Historically, this is one of the reasons Apple went with
| Intel CPUs to begin with. The PowerPC G5 was a nice processor
| but never ended up with a thermal envelope acceptable for a
| laptop. So from 2003 to 2006, you could buy a Mac G5 desktop,
| but if you wanted a laptop, it was a G4. 2006 was the
| beginning of the transition to Intel, who made better
| processors that Apple could put in laptops.
|
| It's not the only reason Apple switched to x86, but it's
| perhaps the most commonly cited factor.
| geerlingguy wrote:
| I complain about how hot the i9 gets... but then I remember
| the period where Apple was transitioning from G4/G5 to
| Intel Core 2 Duo chips... in both cases they were _searing_
| hot in laptops, and Apple's always fought the battle to
| keep their laptops thin while sometimes sacrificing
| anything touching the skin of the laptop (unlike most PC
| vendors, who are happier adding 10+mm of height to fit in
| more heat sinks and fans!).
| selectodude wrote:
| Heck, even before that with the 867MHz 12" Powerbook G4.
| Pretty sure that thing is why I don't have children.
| secondcoming wrote:
| My i7-10875H pretty much stopped thermal throttling when
| running CineBench R23 when I changed the thermal paste to
| Kryonaut Extreme.
| rowanG077 wrote:
| How about running Cinebench R23 and a GPU workload
| continuously for an hour? I'm willing to bet it will
| throttle. That little chip you have there is not only a
| CPU; it's also a GPU. Utilizing half of its functions and
| then saying it doesn't throttle is not that impressive.
| Still, there are many Intel laptops that throttle even at
| half power.
|
| What laptop do you have? If it's a gaming or workstation
| laptop, those are generally much better cooled than thin-
| and-lights like MacBook Pros.
| geerlingguy wrote:
| With my luck, Apple's going to release some new devices that
| will blow the M1 Macs I just bought last week out of the
| water... that is the way of things, with tech!
|
| I'm still trying to unlearn my fear and trepidation surrounding
| the use of my laptop while not plugged into the wall. I was
| always nervous taking the laptop anywhere without the power
| adapter in hand, because 2-3 hours of video editing or
| compilation work would kill the battery.
|
| The Air can go a full day!
| cpr wrote:
| If they announce the rumored M2 Macs next week, you might be
| within the 15 days to return the M1's and order (with plenty
| of waiting) the M2's.
| dogma1138 wrote:
| The M1 is great indeed, but one thing holds true for Apple:
| never buy a first-gen device. 3rd gen onwards is usually
| where you get to see them become viable for long-term
| support.
|
| While the M1 is great there are clearly issues to be ironed
| out even if it's just the limited bandwidth available for
| peripherals.
|
| I'm also betting on major GPU upgrades over the next 2
| generations.
| kstrauser wrote:
| I mean, _kind of_, but it seems that the main issue here
| with the M1 is that it's only 30% faster than an i9. If I
| were buying a new Mac today, I would only consider an M1
| system. It seems to be better at literally everything _I_
| want to do with it than the Intel equivalent.
|
| While M2 will undoubtedly be better yet, I see no downside
| to jumping aboard M1 today _for most people who aren't
| running specialized software_.
| paxys wrote:
| Plus, I don't think Apple has really released a "Pro" M1
| laptop yet. The current M1 MacBook Pro has at most a 13-inch
| screen, 16 GB RAM, and 2 TB storage, with only 2
| Thunderbolt/USB ports, only a single external display
| supported, and no external GPU support.
|
| If I had to guess I'd say they meant to call this just
| MacBook but tacked on the Pro since they discontinued the
| non-Pro line entirely.
| eyelidlessness wrote:
| I think it's pretty likely M2 (or M1X, or whatever they brand
| it) MacBook Pros will be announced next week at WWDC, given
| the recent rumors generally coalescing. They may not be
| released right away but most rumors have suggested a summer
| release. Not that you should regret your purchase, but for
| (future) reference it's a really good idea to check the
| rumors when considering a non-urgent Apple purchase.
| vmception wrote:
| I really can't wait for my fleeting happiness of seeing
| their next processor!
|
| The rumors really describe the perfect machine for me and
| many people's use cases.
| flatiron wrote:
| macOS though? I don't feel very productive using their
| OS. I would rather have a slightly slower laptop and feel
| more productive. But I don't compile anything locally or
| anything. It's all in the cloud and stuff.
| selectodude wrote:
| Sounds like you're not the target market then. Apple
| generally tries to sell computers to people who feel
| productive using their OS.
| geodel wrote:
| This is surprising. Are we really running out of people
| who would try to run a datacenter and an Electron app on an
| Apple laptop and then tell us here how these machines are
| not for professional users?
| vmception wrote:
| I will absolutely try to do that with an M2 processor and
| 64 GB of RAM per device
| nodesocket wrote:
| You can return it (within 30 days) and get the new
| generation. I am gonna upgrade my 16" MBP Intel i9 as soon as
| I can buy the Apple Silicon in 16".
| geerlingguy wrote:
| Never thought of that... but I'll cross my fingers then and
| see what Apple releases.
|
| These Macs may still be perfect for my needs though. 10G on
| the mini means I skip the giant external adapter, and the
| Air doesn't have the dumb Touch Bar.
| OldTimeCoffee wrote:
| Having substantially more L1 and L2 cache per core but no L3 has
| to be a massive part of why the M1 performance is so good. I
| wonder if Intel/AMD have plans to increase the L1/L2 size on
| their next generations.
| amackera wrote:
| Can you help me understand why removing L3 cache would speed
| things up? Genuinely curious!
|
| Increasing L1 and L2 makes intuitive sense.
| jessermeyer wrote:
| I think the idea is that removing L3 allowed for an
| increase in both L1/L2.
| lanna wrote:
| I think he meant "despite not having L3"
| cogman10 wrote:
| Removing L3 frees up transistors to be spent on L1/L2. On a
| modern processor the vast majority of transistors are spent
| on caches.
|
| Why might this help? Ultimately, because the latency for
| getting something from L1 or L2 is a lot lower than the
| latency from L3 or main memory.
|
| That said, this could hurt multithreaded performance. L1/L2
| serve a single core; L3 is shared by all the cores in a
| package. So if you have a bunch of threads working on the
| same set of data, having no L3 would mean doing more main
| memory fetches.
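|
| (You can read the cache sizes off both kinds of machine
| directly -- a sketch, assuming macOS exposes the usual hw.*
| sysctl keys and Linux has getconf; hw.l3cachesize is simply
| absent on an M1:)
|
|     # macOS (Apple silicon or Intel)
|     sysctl hw.l1dcachesize hw.l2cachesize hw.l3cachesize
|     # Linux equivalent
|     getconf -a | grep -i cache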
| vbezhenar wrote:
| Apple will invent L3 for their workstation-level CPU.
| duskwuff wrote:
| Wild theory: for workstation-class systems, the 8/16 GB of
| on-package memory becomes "L3", and main memory can be
| expanded with standard DIMMs.
| hajile wrote:
| AMD went from 64 KB in Zen 1 down to 32 KB in Zen 2/3.
| Bigger isn't always better. It only matters if the
| architecture can actually use the cache effectively.
|
| The M1 has a massive reorder buffer, so it both needs and
| can use more L1 cache. It's pretty much that simple.
| monocasa wrote:
| It's more complicated on x86 because of the 4 KB page size.
| The L1 design gets very complicated if it is larger than the
| number of cache ways times the page size, since the
| virtual-to-physical TLB lookup happens in parallel with the
| cache index lookup. 8 ways * 4 KB = 32 KB.
| out_of_protocol wrote:
| Compiling stuff is not a correct benchmark, since the end
| result is different: a binary for ARM vs a binary for x86.
|
| Cross-compiling is not good either, because one platform has
| the disadvantage of compiling non-native code.
| eyesee wrote:
| Is there an advantage to compiling native vs non-native code?
| Certainly during execution I would expect that, but I'm not
| clear why that would be true for compilation.
|
| Agreed that a better benchmark would be compiling for the same
| target architecture on both.
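|
| (A sketch of that: run the identical cross build on both
| machines, assuming an arm64 kernel tree and the
| aarch64-linux-gnu cross toolchain installed on each:)
|
|     # same target on both hosts, so only compile speed differs
|     make ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- defconfig
|     time make ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- \
|         -j8 Image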
| postalrat wrote:
| Maybe you could cross compile on both systems and see if it
| actually does make a difference. I'm doubting it but don't
| have much to base that on.
| Someone wrote:
| Non-native can be a bit harder for constant folding (you have
| to emulate the target's behavior for floating point, for
| example), but I think that mostly is a thing of the past
| because most architectures use the same types.
|
| What can make a difference is the architecture. Examples:
|
| - Register assignment is easier on orthogonal architectures.
|
| - A compiler doesn't need to spend time looking for auto-
| vectorization opportunities if the target architecture
| doesn't have vector instructions.
|
| Probably more importantly, there can be a difference in how
| much effort the compiler makes for finding good code.
| Typically, newer compilers start out with worse code
| generation that is faster to generate (make it work first,
| then make it produce good code)
|
| I wouldn't know whether any of these are an issue in this
| case.
| sp332 wrote:
| True, but this is addressed in the article. If what you need
| is code that runs on an RPi, this is a meaningful comparison.
| [deleted]
| aduitsis wrote:
| Regarding the point in the article about the fans starting
| to spin at the drop of a hat: the MacBook Pro 16" i9, albeit
| a fabulous device in almost every aspect, has a bug.
| Connecting an external monitor at QHD will send the discrete
| graphics card into >20W of power draw, whereas usually it's
| about 5W. At 20W for the graphics card alone, it's not
| difficult to see that the fans will be spinning perpetually.
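|
| (You can watch the draw yourself -- a sketch; powermetrics
| ships with macOS and gpu_power is one of its samplers:)
|
|     # sample GPU power once a second, five samples
|     sudo powermetrics --samplers gpu_power -i 1000 -n 5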
| jscheel wrote:
| This problem is so infuriating. There was a thread the other
| day about it. It's clearly a bug, but it seems to be one that
| nobody wants to take responsibility for.
| RicoElectrico wrote:
| Huh. I have Dell Latitude 5501 and it's almost always in
| hairdryer mode when connected to the dock (on which there's
| 1920x1200 HP Z24i and 2560x1440 Dell U2515H). Your description
| seems suspiciously similar.
|
| Different graphics, though - MX150.
| asdff wrote:
| I wonder if anyone who prefers to dock their laptop has thought
| about sticking it in a mini freezer under their desk
| geerlingguy wrote:
| It gets worse--if you are charging the battery, you can
| immediately see the left or right side Thunderbolt ports get a
| lot hotter, fast. Probably because piping 96W through those
| ports heats things up a bit.
|
| The thermal performance on the 2019 16" MacBook Pro is not
| wonderful.
| fpgaminer wrote:
| > I know that cross-compiling Linux on an Intel X86 CPU isn't
| necessarily going to be as fast as compiling on an ARM64-native
| M1 to begin with
|
| Is that true? If so, why? (I don't cross compile much, so it
| isn't something I've paid attention to).
|
| The architecture the compiler is running on doesn't change what
| the compiler is doing. It's not like the fact that it's running
| on ARM64 gives it some special powers to suddenly compile ARM64
| instructions better. It's the same compiler code doing the same
| things and giving the same exact output.
| znpy wrote:
| There's no reason for a cross-compiler to be slower than a
| native compiler.
|
| If your compiler binary is built for architecture A and
| emits code for architecture B, it's going to perform the
| same as a compiler built for architecture A and emitting
| code for architecture A.
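|
| (Clang illustrates this nicely, since a single binary
| carries all of its backends -- a sketch, where hello.c is
| any C file:)
|
|     # same compiler binary, same passes, different target;
|     # only the emitted instructions differ
|     clang -O2 --target=x86_64-linux-gnu -c hello.c -o x86.o
|     clang -O2 --target=aarch64-linux-gnu -c hello.c -o arm.o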
| cle wrote:
| Well to get a little nuanced, it depends on if the backend
| for B is doing roughly the same stuff as for A (e.g. same
| optimizations?). I have no idea if that's generally true or
| not.
| karmakaze wrote:
| Well, there's one reason: if people tend to compile natively
| much more often than they cross-compile, then it would make
| sense to spend optimization effort on what benefits users.
| mlyle wrote:
| There are some small nits, where representation of constants
| etc can be different and require more work for a cross-
| compiler.
| [deleted]
| tedunangst wrote:
| In theory, yeah. In practice, a native compiler may have
| slightly different target configuration than cross. For
| example, a cross compiler may default to soft float but native
| compiler would use hard float if the system it's built on
| supports it. Basically, ./configure --cross=arm doesn't always
| produce the same compiler that you get running ./configure on
| an arm system. As a measurable difference, probably pretty far
| into the weeds, but benchmarks can be oddly sensitive to such
| differences.
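|
| (A sketch of how you'd compare two builds' defaults --
| assuming an arm-linux-gnueabi gcc; -Q --help=target prints
| the target options a given compiler build assumes:)
|
|     # how the cross toolchain was configured
|     arm-linux-gnueabi-gcc -v 2>&1 | grep 'Configured with'
|     # effective target defaults, e.g. -mfloat-abi / -mfpu
|     arm-linux-gnueabi-gcc -Q --help=target | grep -i float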
| herpderperator wrote:
| No, it's not true. Just a common misconception because people
| believe it's some sort of emulation.
| dan-robertson wrote:
| Some cross-compilation may need some emulation to fold
| constant expressions. For example if you want to write code
| using 80 bit floats for x86 and cross-compile on a platform
| that doesn't have them, they must be emulated in software.
| The cost of this feels small but one way to make it more
| expensive would be also emulating regular double precision
| floating point arithmetic when cross compiling. Obviously
| some programs have more constant folding to do during
| compilation than others.
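|
| (To see folding happen -- a sketch; at -O2 the compiler
| precomputes the quotient and emits it as constant data:)
|
|     printf 'long double f(void){return 1.0L/3.0L;}\n' > fold.c
|     # look for a stored constant rather than a runtime divide
|     gcc -O2 -S fold.c -o -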
| messe wrote:
| Is constant folding going to be a bottleneck? In this
| particular instance, in the kernel, floating point is going
| to be fairly rare anyway, and integer constant folding is
| going to be more or less identical on 64-bit x86 and ARM.
| rubyist5eva wrote:
| How does a virtualized ARM build, of Ubuntu for example, run in
| Parallels vs. the same workload on an x86 virtual machine in the
| same range?
|
| If my day to day development workflow lives in linux virtual
| machines 90% of the time, is it worth it to get an M1 for
| virtualization performance? I realize I'm hijacking but I haven't
| found any good resources for this kind of information...
| xxpor wrote:
| This is very dependent on setup. If your IO is mostly done to
| SR-IOV devices, your perf will be very close to native anyway.
| The difference would be about the IOMMU (I have no idea if
| there's a significant difference between the two here). If
| devices are being emulated, the perf probably has more to do
| with the implementation of the devices than the platform
| itself.
___________________________________________________________________
(page generated 2021-06-01 23:00 UTC)