[HN Gopher] Apple M1 Ultra Meanings and Consequences
       ___________________________________________________________________
        
       Apple M1 Ultra Meanings and Consequences
        
       Author : carlycue
       Score  : 180 points
       Date   : 2022-03-14 13:56 UTC (9 hours ago)
        
 (HTM) web link (mondaynote.com)
 (TXT) w3m dump (mondaynote.com)
        
       | planb wrote:
       | > Surprise, Apple announced not an M2 device but a new member of
       | the M1 family, the M1 Ultra.
       | 
        | Really, this did not come as a surprise to anyone interested in
        | Apple's chips; it was rumored as "Jade-2C" for a long time. Nor
        | were they expected to release a "pro" version of M2 before the
        | standard version for the new MacBook Air.
        
       | ramesh31 wrote:
       | I'm not sure how Intel can ever compete in the consumer laptop
       | market again. Apple could easily produce a knocked down M1
       | Macbook base model for ~$599, at which point all but the low end
       | Chromebook market goes to them.
        
         | wmf wrote:
         | Intel Arrow Lake may be competitive with M2 Pro, maybe M2 Max.
        
         | kllrnohj wrote:
          | Alder Lake is no slouch and already out. M1, and really MacOS
          | itself, both need to be substantially better than they already
          | are to get people to move off of Windows or Linux & x86 in
          | general.
        
           | klelatti wrote:
           | 2021 Mac market share gains prove this to be incorrect.
        
             | kllrnohj wrote:
             | Source & analysis needed.
             | 
              | Poking around, Apple's 2021 market share wasn't anything
              | particularly special, and
              | https://www.statista.com/statistics/576473/united-states-
              | qua... isn't showing any sort of "M1-driven spike" either.
              | Apple's y/y growth was larger than the PC market's (annual
              | growth for Mac was 28.3%, against 14.6% for the global PC
              | market), but note that both are still growing, meaning M1
              | didn't suddenly convert the industry.
              |
              | Regardless, a one-year data point certainly doesn't prove a
              | trend, certainly not of the "Apple is going to destroy
              | Intel/AMD/Nvidia" variety.
        
               | klelatti wrote:
                | So M1 and MacOS aren't good enough to get people to
                | switch, yet Mac sales are increasing at almost double the
                | rate of the PC market. Hmmm.
                |
                | And fair enough, many people won't switch because of
                | inertia / software lock-in, which is true - but that's
                | not a reflection of the quality of M1 or MacOS.
        
         | hajile wrote:
         | With inflation and chip supply issues, Apple is much more
         | likely to keep their prices and wait for everything else to
         | catch up.
        
         | kaladin_1 wrote:
          | Well, given the depth of Intel's pockets, I would assume some
          | sort of serious research is going on at the moment to maintain
          | their dominance in that market.
          |
          | They would fight Apple on this. They also have AMD to fight for
          | the non-M1 chip market... For a company used to enjoying
          | dominance, they have a lot of work to do to remain at the top.
        
           | faeriechangling wrote:
           | Intel at this point is famous for blowing tons of R&D budget
           | with little to show for it. I think the better question is if
           | Intel can successfully eject its incompetent managers from
           | the company.
        
         | kayoone wrote:
         | i don't think Apple is particularly interested in the low end
         | Chromebook market. It has also been largely replaced by tablets
         | and phablets and is pretty niche nowadays.
        
           | ramesh31 wrote:
           | >i don't think Apple is particularly interested in the low
           | end Chromebook market. It has also been largely replaced by
           | tablets and phablets and is pretty niche nowadays.
           | 
           | Kind of my point though; the low end isn't even worth it for
           | PC manufacturers anymore. And by the time you get to a ~$600
           | laptop, you can have a cheap M1 device with 10x the
           | performance/watt. The new M1 iPad Air starts at $700, and
           | absolutely blows any existing Intel laptop out of the water,
           | short of a desktop replacement gaming rig with discrete GPU.
        
         | simonh wrote:
         | It'll never happen, far too many people that would otherwise
         | buy a $1000 laptop would buy a $600 one instead. Why give up
         | that $400?
        
       | MichaelRazum wrote:
        | Would love it if you could run Linux on it. That would be really
        | amazing.
        
         | WhyNotHugo wrote:
         | Progress on this front has been going pretty well.
         | 
          | It's still not ready for everyday usage (though the people
          | working on porting it might already be using it), but it's
          | moving a lot faster than one would guess. I'm considering a Mac
          | Mini build machine sometime during 2022; it might be feasible
          | for that.
          |
          | See https://asahilinux.org/2021/12/progress-report-oct-nov-2021/
          | or follow https://twitter.com/marcan42
        
       | sebastianconcpt wrote:
       | Crayne's Law: _All computers wait at the same speed_
        
       | nickcw wrote:
        | Amdahl's Law says hello.
        | https://en.wikipedia.org/wiki/Amdahl%27s_law
        | 
        | If we keep scaling up the number of processors rather than clock
        | speed, what is going to be the maximum number of useful cores in
        | a laptop or desktop? 20? 100? 1000? At some point adding more
        | cores is going to make no difference to the user experience, but
        | the way we are going we'll be at 1000 cores in about a decade, so
        | we'd better start thinking about it now.
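        |
        | As a rough illustration (a back-of-the-envelope sketch with an
        | assumed parallel fraction, not a claim about any real workload):
        |
        |     // Amdahl's law: speedup from n cores when a fraction p of
        |     // the work parallelises perfectly and the rest is serial.
        |     function amdahlSpeedup(p: number, n: number): number {
        |       return 1 / ((1 - p) + p / n);
        |     }
        |
        |     // Even at 95% parallel the ceiling is 1 / 0.05 = 20x:
        |     console.log(amdahlSpeedup(0.95, 20));   // ~10.3
        |     console.log(amdahlSpeedup(0.95, 1000)); // ~19.6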
       | 
       | Or to put it another way, what normal workloads will load up all
       | the cores in the new M1 chip?
       | 
       | Being a software developer, compiling things is the obvious
       | choice, except when you come to that rather serial linking phase
       | at the end of the compile job. Already my incremental Go compiles
       | are completely dominated by the linking phase.
       | 
        | There are a few easy-to-parallelise tasks, mostly to do with
        | media (as it says in the article). However, a lot of stuff isn't
        | like that. Will 20 cores speed up my web browser? How about
        | Excel?
       | 
        | Your average user is going to prefer doubling the clock rate of
        | their processor to doubling the number of cores.
       | 
       | Anyway, I don't want to rain on Apple's parade with these musings
       | - the M1 Ultra is an amazing achievement and it certainly isn't
       | for your average user. I wish I had one in a computer of my
       | choice running Linux!
        
         | tomjen3 wrote:
          | Amdahl's law is sometimes brought out as a whammy (not saying
          | that is what you're doing), but economically we would expect
          | there to be uses for this extra compute: we found them server-
          | side by hosting SaaS and thereby adding more users than anybody
          | could have imagined.
         | 
         | I suspect that rather than 1000 cores we might start to see
         | more levels of cores, and hardware for more things. Already
         | Apple has video encoding support. AI seems an obvious idea and
         | it scales much better than most classical computing.
         | 
          | If I may bring up something that is more of a wish: I wish we
          | could give up the idea of shared memory and instead have many
          | more cores that communicate by message passing. We are already
          | seeing this spread with web workers - if it became cheap to
          | create a new thread and communication weren't a bottleneck,
          | then maybe more games would use it too.
        
           | samwillis wrote:
           | Message passing without shared memory is slow, having to copy
           | data and maybe even serialise/deserialise it in the process.
           | Message passing where you are just passing a pointer is fast.
           | 
            | Web workers are basically the worst case: data passed to and
            | from a worker is copied via the structured-clone algorithm
            | (or hand-serialised to JSON), so they're not built for
            | performance. There have been many cases where people tried
            | to improve the performance of their apps by offloading work
            | to a web worker, but the added cost of copying and
            | serialisation ultimately made it slower than running on the
            | main thread.
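            |
            | A minimal sketch of the difference, using the browser Worker
            | API (the "worker.js" file is assumed to exist):
            |
            |     const worker = new Worker("worker.js");
            |
            |     // Structured clone: the 8 MB buffer is copied, and the
            |     // cost grows with the payload size.
            |     const copied = new Float64Array(1_000_000);
            |     worker.postMessage({ kind: "copy", data: copied });
            |
            |     // Transfer: ownership of the underlying ArrayBuffer
            |     // moves to the worker with no copy, but the buffer is
            |     // detached (unusable) on the main thread afterwards.
            |     const moved = new Float64Array(1_000_000);
            |     worker.postMessage({ kind: "move", data: moved.buffer },
            |                        [moved.buffer]);
            |
            | Transferables and SharedArrayBuffer are about as close as the
            | web platform gets to "just passing a pointer".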
        
         | minhazm wrote:
         | I'm guessing you mean double the single core performance and
         | not clock speed. Even Intel Pentium 4's from 2000 had higher
         | clock speeds than Apple's M1 chips. The clock speed matters far
         | less than what you can actually get done in a single clock
         | cycle. Just looking at Apple's own products and their single
         | core performance from Geekbench you can see they achieved
         | around a 40% improvement in just single core performance [1].
          | So it's not just adding more cores. Apple has usually been the
          | one to hold out on core counts, resisting adding more cores to
          | the iPhone / iPad chips in the past and focusing on better
          | single-core performance.
          |
          | > Your average user is going to prefer doubling the clock rate
          | of their processor to doubling the number of cores.
          |
          | I disagree. The reality is that these days people are running
          | multi-threaded workloads even if they don't know it. Running a
          | dozen Chrome tabs, Slack, Teams, Zoom, and professional tools
          | like IDEs or the Adobe Creative Suite adds up very quickly to
          | a lot of processes that can use a lot of cores.
         | 
         | [1] https://browser.geekbench.com/mac-benchmarks
        
           | rayiner wrote:
           | > Even Intel Pentium 4's from 2000 had higher clock speeds
           | than Apple's M1 chips.
           | 
           | It's wild to see that in print.
        
             | ac29 wrote:
              | Isn't true though - the Pentium 4 was released in 2000, but
              | the fastest version available that year was only 1.5GHz. It
              | wouldn't hit 3.2GHz (~M1) until 2003.
        
           | pantulis wrote:
            | Isn't running multiple single-threaded workloads at the same
            | time also a case for better performance with more cores? So
            | even for basic tasks having more cores is better (although
            | maybe not that cost-efficient).
        
             | katbyte wrote:
             | only if those workloads are very CPU intensive, and the
             | ones that are should probably be parallelized as much as
             | possible anyway.
        
               | BolexNOLA wrote:
                | Sort of related: Adobe Creative Cloud is the most
                | bloated, CPU-inefficient system I've ever had the
                | displeasure of working with as a professional editor. I
                | get that FCPX is Apple's NLE, but its render times smoke
                | Premiere in comparison, and CC in general is just way too
                | busy in the background.
        
         | zitterbewegung wrote:
          | If I were to sketch a best-case scenario for where
          | multiprocessing, GPU acceleration and the Neural Engine all get
          | used, I would say iOS app development: a simulator, a container
          | running your backend services, and the iOS app doing ML
          | inference. Training on the M1 Ultra isn't really worth it.
        
         | paulmd wrote:
          | Apple's advantage is actually that it provides much higher
          | single-thread (really, per-thread) performance than
          | "comparable" x86 processors. An M1 Pro can match a 12900HK in
          | perfectly-threaded high-code-density scenarios like Cinebench,
          | with half the thread count. Real-world IPC is something like 3x
          | (!) that of Intel right now - obviously it also clocks lower,
          | but Apple hits very hard in lower-thread-count scenarios.
          |
          | If your code is bottlenecked on a single thread, or if it
          | doesn't scale well to higher thread counts, Apple is actually
          | great right now. The downside is that _you can't get higher
          | core counts_, but that's where the Pro and Ultra SKUs come in.
         | 
         | (The real, real downside is that right now you can't get higher
         | core counts on M1 without being tied to a giant GPU you may not
         | even use. What would be really nice is an M1 Ultra-sized chip
         | with 20C or 30C and the same iGPU size as A14 or M1, or a
         | server chip full of e-cores like Denverton or Sierra Forest,
         | but that's very much not Apple's wheelhouse in terms of
         | products unfortunately.)
        
           | __init wrote:
           | > Real-world IPC is something like 3x (!) that of Intel right
           | now - obviously it also clocks lower
           | 
           | That's the problem, though -- if you clock yourself much
           | lower, of course you can get higher IPC; you can pack more
           | into your critical paths.
           | 
           | Now, certainly Apple has some interesting and significant
           | innovations over Intel here, but quoting IPC figures like
           | that is highly misleading.
        
             | paulmd wrote:
             | Of course IPC needs to be contextualized, but it's still a
             | very important metric. And Intel's processors aren't
             | clocked 3x higher than Apple either - that would be 9 GHz,
             | and you can't even sustain 5 GHz all-core at 35W let alone
             | 9 GHz which just isn't physically possible even on LN2.
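              |
              | To put rough numbers on the clocks-vs-IPC trade-off
              | (illustrative figures only, not measurements):
              |
              |     // Per-thread throughput is roughly IPC x clock.
              |     const perThread = (ipc: number, ghz: number) =>
              |       ipc * ghz;
              |
              |     // ~3x the IPC at ~3.2 GHz vs baseline IPC at 5 GHz
              |     // still leaves roughly a 2x per-thread gap.
              |     console.log(perThread(3, 3.2) / perThread(1, 5)); // ~1.9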
             | 
             | https://images.anandtech.com/graphs/graph17024/117496.png
             | 
              | That's an absolutely damning chart for x86: at iso-power,
              | the M1 Max scores 2.5x as high as a 5980HS in FP and 1.43x
              | as high in integer workloads, despite having just over half
              | the cores and ~0.8x the transistor budget per core. So it's
              | a lot closer to the ~2.5-3x IPC figure than you'd think
              | just from "but x86 clocks higher!". And these results do
              | hold up across a broad spectrum of workloads:
             | 
             | https://images.anandtech.com/graphs/graph17024/117494.png
             | 
              | Yes, Alder Lake does better (although people always insist
              | to me that Alder still somehow "scales worse at lower power
              | levels than AMD"? That's not what the chart shows...), but
              | even in the best-case scenario you have Intel basically
              | matching (slightly underperforming) AMD while using twice
              | the thread count. And that's a single, cherry-picked
              | benchmark that is known for favoring raw computation and
              | disregarding the performance of the front-end; if you are
              | concerned about the x86 front-end, this is basically a
              | best-case scenario for it... high code compactness and
              | extremely high threadability. And it still needs twice the
              | threads to do it.
             | 
             | https://i.imgur.com/vaYTmDF.png
             | 
              | Like your "but x86 uses higher clock rates", you can also
              | say "but x86 uses SMT", so maybe "performance per thread"
              | is an unfair metric in some sense, but there is practical
              | merit to it. If you _have_ to use twice the threads to
              | achieve equal performance on x86 then that's a downside,
              | whereas Apple gives you high performance on tasks that
              | don't scale to higher thread counts. And if Apple put out a
              | processor with a high P-core count and without the giant
              | GPU, it would easily be the best hardware on the market.
             | 
             | I just strongly doubt that "it's all node" like everyone
             | insists. Apple is running fewer transistors per core
             | already, and AMD/Intel are not going to double or triple
             | their performance-per-thread within the next generation
             | regardless of how many transistors they might use to do it
             | (AMD will be on N5P this year, which will be node parity
             | with Apple A15). x86 vendors can put out a product that
             | will be competitive in one of several areas, but they can't
             | win all of them at once like Apple can.
             | 
             | And going forward - it's hard to see how x86 fixes that IPC
             | gap. You can't scale the decoder as wide, Golden Cove
             | already has a pretty big decoder in fact. A lot of the
             | "tricks" have already been used. How do you triple IPC in
             | that scenario, without blowing up transistor budgets
             | hugely? Even if you get rid of SMT, you're not going to
             | triple IPC. Or, how do you triple clockrate in an era when
             | things are actually winding backwards?
             | 
             | Others are very, very confident this lead will disappear
             | when AMD moves to N5P. I just don't see it imo. The gap is
             | too big.
        
         | kayoone wrote:
         | Well, chrome could certainly use more threads for all your tabs
         | which might make it even more of a resource hog but probably
         | also run faster. Software Engineering will evolve too, like we
         | have seen in game development over the last decade. While 10
         | years ago most game engines were hardly using let alone
         | optimized for multicore systems, nowadays the modern engines
         | very much benefit from more cores and I guess the modern OSes
         | also benefit from having more threads to put processes on.
        
         | simonh wrote:
         | These pro machines are squarely aimed at creative professionals
         | doing video work and image processing. Of course there are used
         | for a lot of other stuff too, but that's the biggest single use
         | case by long way. The studio name for the new systems was well
         | advised.
        
         | xmodem wrote:
          | Derbauer pointed out that, for Intel's 12 series, the e-cores
          | dominate the p-cores not just in performance per watt, but also
          | in performance per die area. I haven't run the numbers for the
          | M1, but I'd be shocked if it wasn't similar.
          |
          | It seems unavoidable that you can get more total performance
          | from larger numbers of slower cores than from smaller numbers
          | of faster cores. The silicon industry has spent the entire
          | multi-core era - the last 15 years - fighting this reality, but
          | it finally seems to have caught up with us, so hopefully in the
          | next few years we will see software actually start to adapt.
        
           | hajile wrote:
           | It's pretty well established that M1 E-cores are something
           | like 50% the performance, but 1/10 the power consumption.
           | 
            | A55 is probably 1/8 the performance, but something like 1/100
            | of the power consumption and a minuscule die area. I wouldn't
            | want to have all A55 cores on my phone though.
           | 
            | Performance per die area is also relative. For example, Apple
            | clocks their chips around 3GHz. If they redesigned them so
            | they could ramp them up to 5GHz like Intel or AMD, they would
            | stomp those companies, but they would also use several times
            | more power.
           | 
           | What is really relevant is something like the ratio of a
           | given core's performance per area per watt to the same value
           | for the fastest known core.
           | 
            | The only interesting area for ultimate low power in general-
            | purpose computing is some kind of A55 with a massive SIMD
            | unit, going with a Larrabee-style approach for a system that
            | can both do massive compute AND not have performance plummet
            | if you need branchy code too.
        
             | paulmd wrote:
              | Apple's e-cores are actually (relatively speaking) much
              | better than Intel's. From memory, Apple gets about the same
              | performance (actually higher in single-threaded, but the
              | same MT performance) out of about 2/3rds the transistor
              | count of Intel's. Note that this is invariant of node -
              | obviously Blizzard cores are physically much smaller than
              | Gracemont, but they are smaller by _more_ than a node
              | shrink alone would give you; Apple is doing more with
              | _actually fewer transistors_ than Intel.
             | 
             | Since the Intel e-cores still have a relatively wide
             | decoder, e-core designs may be the part where the bill
             | comes due for x86 in terms of decoder complexity. Sure it's
             | only 3% of a performance core, but if you cut the
             | performance core in half then now they're 6%. And the
             | decoder doesn't shrink _that_ much, Gracemont has a 3-wide
             | decoder vs 4-wide on Golden Cove, and you still have to
             | have the same amount of instruction cache (instruction
              | cache hit rate depends on the amount of "hot code", and
             | programs don't get smaller just because you run them on
             | e-cores). A lot of the x86 "tricks" to keep the cores fed
             | don't scale down much/any.
             | 
             | edit:
             | 
             | Intel Golden Cove: 7.04mm^2 with L2, 5.55mm^2 w/o L2
             | 
             | Intel Gracemont: 2.2mm^2 with L2, 1.7 mm^2 w/o L2
             | 
             | Apple Avalanche: 2.55mm^2. (I believe these are both w/o
             | cache)
             | 
             | Apple Blizzard: 0.69mm^2 (nice)
             | 
              | Note that N7 to N5 gives roughly 1.8x logic density scaling
              | - so in N7-equivalent area a Blizzard core would be about
              | 1.24mm^2, or roughly 73% of the transistor count of
              | Gracemont for equivalent performance! For the p-cores the
              | number is roughly 83%.
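              |
              | The arithmetic, spelled out (this assumes TSMC's quoted
              | ~1.8x N7-to-N5 logic density; real scaling differs between
              | logic and SRAM):
              |
              |     // Normalise an N5 core area to its N7-equivalent area.
              |     const toN7 = (areaN5: number) => areaN5 * 1.8;
              |
              |     // e-cores (no L2): Blizzard vs Gracemont
              |     console.log(toN7(0.69) / 1.7);   // ~0.73
              |     // p-cores (no L2): Avalanche vs Golden Cove
              |     console.log(toN7(2.55) / 5.55);  // ~0.83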
             | 
              | This is one of the reasons I feel Apple is so far ahead.
              | It's not about raw performance, or even efficiency; it's
              | the fact that Apple is winning on _both those metrics_
              | while using 2/3rds the transistors. It's not just "Apple
              | throwing transistors at the problem", which of course they
              | are; it's that they're starting from a much better baseline
              | such that they can afford to throw those transistors
              | around. The higher total transistor count is coming from
              | the GPU; in the cores themselves Apple is actually much
              | more efficient (perf-per-transistor) than its x86
              | competitors.
        
               | hajile wrote:
               | Golden Cove has 6-wide decoders (that should be using
               | massive amounts of resources).
               | 
                | Of course, it doesn't help that Intel lists laptop chip
                | turbo power at either 95W or 115W, and Anandtech's review
                | of one such laptop had the 12900H hitting those numbers,
                | with sustained power draw at an eye-raising 85W. That's
                | 2-3x the power of the M1 Pro for only 20-30% more
                | performance.
                |
                | That laptop also showed that cutting power from 85W to
                | 30W roughly halved the performance. On the plus side,
                | this means their power scaling is doing pretty well. On
                | the negative side, it means their system gets worse
                | multithreaded performance than the M1 Pro at 30W despite
                | having 40% more cores.
        
           | BackBlast wrote:
           | That's not the only factor at play. Many to most applications
           | are dominated by single thread performance. JavaScript's
           | interpreters are single threaded, for example.
           | 
            | Something I don't often see mentioned, but it does come up
            | here and there: one nice thing about the M1 is that its
            | performance is consistent, as it doesn't rely on a massive
            | auto-scaling boost. An Intel or AMD chip might start off at
            | top speed on a single thread, but then something else spins
            | up on another core and you take a MHz hit on your primary
            | thread to keep your TDP in spec. The background task goes
            | away, and the MHz goes back up. Lots of performance jitter in
            | practical use.
           | 
           | Interconnects and IO also consume power. You can't just scale
           | small e-core counts without also hitting power walls there
           | too.
           | 
           | All that said, I'd love to see some E-core only chips come
           | out of intel targeted at thin clients and long battery life
           | notebooks.
        
             | paulmd wrote:
             | > All that said, I'd love to see some E-core only chips
             | come out of intel targeted at thin clients and long battery
             | life notebooks.
             | 
             | they exist, that's called Atom. "e-core" is just a
             | rebranding of Atom because the Atom brand is toxic with a
             | huge segment of the tech public at this point, but
             | Gracemont is an Atom core.
             | 
              | There are no Gracemont-based SKUs yet, but Tremont-based
              | Atoms - the generation before - exist (they're in one of
              | the models of NUC iirc). The generation before that is
              | Goldmont/Goldmont Plus, which are in numerous devices -
              | laptops, thin clients, and NUCs.
             | 
             | Keep an eye on the Dell Wyse thin-client series, there are
             | a lot of Goldmont-based units available if you can settle
             | for a (low-priced surplus) predecessor.
        
         | bcrosby95 wrote:
         | It's a bit of a chicken-and-egg problem. In general, devs don't
         | target machines that don't exist.
         | 
          | But from some quick searching, Excel will split independent
          | calculations out into their own threads. So for that, the
          | answer seems to be: it depends. If you're using 20 cores to
          | calculate a single thing, the answer seems to be "no". But if
          | you're using 20 cores to calculate 20 different things, it
          | seems to be "yes".
        
         | gameswithgo wrote:
        | It can accelerate compilation quite a bit at times, which could
        | allow us to use more compile-time safety checks and more compile-
        | time optimizations, indirectly making programs faster even on
        | lower-core-count computers.
        
       | GeekyBear wrote:
       | The most interesting new thing going on in the M1 Ultra:
       | 
       | >combining multiple GPUs in a transparent fashion [is] something
       | of a holy grail of multi-GPU design. It's a problem that multiple
       | companies have been working on for over a decade, and it would
       | seem that Apple is charting new ground by being the first company
       | to pull it off.
       | 
       | https://www.anandtech.com/show/17306/apple-announces-m1-ultr...
        
         | jayd16 wrote:
         | Not that it isn't good for the M1 but is such a setup really
         | something other companies are attempting to pull off? It seems
         | like GPU makers just put more cores on a single die.
         | 
         | Does this tech apply to discrete graphics? You can't really
         | connect separate cards with this, right?
         | 
         | Are you saying this is a blow to Intel's graphics? Or maybe
         | you're implying it's a way for integrated graphics to become
         | dominant?
        
       | sanguy wrote:
       | They have an 8 tile version in internal testing for the next
       | generation Mac Pro workstations.
       | 
       | It won't launch until 3nm is ramped up.
       | 
        | But that is when it is completely over for Intel, AMD, and
        | Nvidia.
        
         | kllrnohj wrote:
          | > But that is when it is completely over for Intel, AMD, and
          | Nvidia.
         | 
          | AMD is already shipping 9-tile SoCs, and Intel is doing tile
          | stuff, too.
         | 
         | Unless Apple gets back into the server game, and gets a lot
         | more serious about MacOS, pretty much nothing Apple does makes
         | it "game over" for Intel, AMD, or Nvidia. Especially not Nvidia
         | who is still walking all over every GPU coming out of Apple so
          | far, and is so _ridiculously_ far ahead in the HPC & AI compute
          | games it's not even funny.
        
         | CamelRocketFish wrote:
         | Source?
        
         | faeriechangling wrote:
         | Having impressive or even superior hardware does not mean that
         | the ecosystems built up around Intel/AMD/Nvidia disappear over
         | night. Wake me up when Metal has the same libraries that Cuda
         | does, etc. etc.
        
         | neogodless wrote:
         | When you say 8 tile, you mean 8 M1 Max CPUs glued together?
         | 
         | So presumably $16,000 (or greater) systems?
         | 
         | In what way does this eliminate competitors from the market?
         | Also is Apple doing something that is literally impossible for
         | anyone else to do?
         | 
         | And will the use cases that Apple currently does not cover
         | cease to exist?
        
       | pcurve wrote:
        | The author, Jean-Louis Gassee, was the head of Be, maker of BeOS,
        | a modern multithreaded OS that Apple almost chose over NeXTSTEP.
        
         | jeffbee wrote:
          | Pardon me, but even as a person who developed a few toy
          | applications for BeOS back in the 90s, what about it could have
          | been described as "modern", then or now? Certainly today, 25
          | years after the fact, an OS written partially in C++, with a
          | few SMP tricks but barely any working networking, doesn't feel
          | modern. Even at the time, its modernity was in question even
          | compared to Windows NT.
        
           | rayiner wrote:
           | It still holds up pretty well. I'll take a multi-threaded C++
           | GUI toolkit over Electron any day.
        
             | [deleted]
        
             | Guillaume86 wrote:
             | Electron has become the Godwin point of software
             | performance discussions.
        
               | xyzzyz wrote:
               | Considering how everyone called everyone a Nazi over past
               | 5 years, I think it's to safe to say that the Godwin's
               | Law belongs to an era of internet that has expired.
        
           | ianai wrote:
           | I liked the GUI. It also booted incredibly fast compared to
           | windows at the time.
           | 
           | I remember the GUI being responsive and laid out in a way I
           | wished Windows was at the time. I remember reading about the
           | prospects for BeOS 5 menus later. They were going to have a
           | ribbon of color follow your menu selections through drop
           | downs. I forget the look since it's been so long, but it was
           | a cool idea. Would have made drill downs easier to follow.
           | Notably, modern OSes can be pretty finicky about menu drill
           | downs and outright user hostile. It's pretty easy to lose an
           | entire drill down by moving the mouse a couple pixels one or
           | another way too far, for instance.
           | 
            | Mobile UI, of course, is amongst the most limited interfaces.
            | We've gone backwards a lot in some ways on mobile. It also
            | seems mobile may be steering people away from certain careers
            | by simply being good enough that people skip learning things
            | like touch typing, Linux/FOSS, or hobbies that lead to tech
            | careers. (Not sure how much sense this last point makes -
            | just speaking to general trends I've heard or seen.)
           | 
            | Edit: maybe I'd say BeOS had a certain polish that seems
            | lacking even in today's FOSS GUIs/OSes, but especially back
            | then.
        
             | KerrAvon wrote:
             | > Notably, modern OSes can be pretty finicky about menu
             | drill downs and outright user hostile. It's pretty easy to
             | lose an entire drill down by moving the mouse a couple
             | pixels one or another way too far, for instance.
             | 
             | The Mac basically solved this problem in 1986 when Apple
             | first introduced hierarchical menus. To make it work, the
             | UI layer has to be able to avoid strict hit testing of the
             | mouse cursor during menu tracking, which I would conjecture
             | is probably difficult in some environments.
        
             | User23 wrote:
             | Being able to play 8 quicktime movies smoothly at once on a
             | PPC 603 while the UI remained responsive was pretty
             | impressive back in 1997.
        
           | Synaesthesia wrote:
           | The indexed and searchable file system was pretty cool
        
           | wmf wrote:
           | At the time there was an idea that normal PCs could never
           | afford to run "professional" OSes like NT, Unix, or NeXTSTEP
           | and thus BeOS was only competing with "PC" OSes like classic
           | MacOS, Windows 95, or OS/2. In retrospect this was wrong,
           | although it did take until 2001-2003 for real OSes to make
           | their way to the mainstream.
        
             | elzbardico wrote:
              | Exactly. Around '98 I ran Windows NT 4.0 at work, on a very
              | expensive Pentium II with plenty of memory (I guess 64MB at
              | the time). At home, on my personal machine with an older
              | AMD K-5 and only 16MB (another guess), Windows NT was not
              | very usable, although it was way more stable than Windows
              | 95. And never mind the price difference.
        
           | KerrAvon wrote:
           | You've got to be kidding. The state of the art for the vast
           | majority of desktop computer users was classic Mac OS and
           | Windows 9x. Did you never see the BeOS demos where they
           | yanked the plug out of the wall while doing heavy filesystem
           | operations and booted straight back up without significant
           | data loss? I don't remember if NTFS existed back then, but
           | most people wouldn't use NT and successors until the turn of
           | the century.
           | 
           | There were gaping holes in functionality, but BeOS was a
           | revelation at the time.
        
             | jeffbee wrote:
             | NTFS came out years before BeFS and had journaling from day
             | 1. At the time that people were experimenting with BeOS
             | there were already millions of users of Windows NT. Even
             | Windows NT 3.1 for workstations sold hundreds of thousands
             | of copies.
             | 
             | Be's _all time_ cumulative net revenues were less than $5
             | million.
        
         | Kyro38 wrote:
         | He also was the head of Apple France.
        
         | Hayvok wrote:
         | JLG also worked at Apple through the 80s. Ran the Mac team and
         | a few other groups.
        
       | rs_rs_rs_rs_rs wrote:
        | >A slightly orthogonal thought: because every word, every image
        | of Apple Events is carefully vetted, one had to notice all six
        | developers brought up to discuss the Mac Studio and Mac Ultra
        | benefits in their work were women.
       | 
       | Well yes, the event was on March 8, International Women's Day.
        
         | [deleted]
        
         | jws wrote:
          | Also, the style of that segment was a continuous monologue with
          | the speaker changing on each sentence. The speakers all had
          | very similar voices, so it sounded very much like a single
          | speaker.
         | It was an interesting effect, though we lost the message
         | because we started discussing whether they were processing the
         | voices to be more similar or if Apple is just big enough to say
         | to their customers "We need a female developer, camera ready,
         | and her voice needs to sound like this." and get 6 hits.
        
         | throwaway284534 wrote:
        
           | schleck8 wrote:
            | What a weird comment; you are talking about there being women
            | at all, but the observation was that there were exclusively
            | women.
        
             | throwaway284534 wrote:
             | I don't believe the author would've put the same emphasis
             | on the gender ratio if it was all male speakers.
             | 
              | That they consider an all-female group a noteworthy
              | observation is exactly what's antiquated about this kind of
              | thinking. All-male group? Business as usual. But fill the
              | stage with women and suddenly it's "saying something" --
              | the implication being that Apple is putting them on stage
              | to virtue-signal that they're a female-led company.
        
           | flumpcakes wrote:
           | Where did the author say women are needed to take care of the
           | housework? Even with the sarcasm this is a pretty thoughtless
           | and offensive post.
        
         | notreallyserio wrote:
         | That was a truly bizarre segue. I wonder what emotions or
         | thoughts he was hoping to inspire with that sentence.
        
           | katbyte wrote:
        
             | nanoservices wrote:
             | Claiming misogyny at every turn shuts down discourse and
             | actually hurts progress as people will just avoid all
             | discussion even when it is legitimately misogyny.
        
               | dragonwriter wrote:
               | Avoiding discussion just because misogyny is mentioned
               | when misogyny is, in fact, at every turn is what actually
               | hurts progress, as is avoiding mentioning misogyny
               | because of people doing that. (But, of course, that's the
               | motivation for the avoidance in the first place.)
        
               | nanoservices wrote:
               | > Avoiding discussion just because misogyny is mentioned
               | 
                | It's human nature: people will avoid it by not engaging
                | if it just gets thrown around and used to berate without
                | cause.
                |
                | > when misogyny is, in fact, at every turn
                |
                | Yes, it's pervasive but not at every turn. Case in point,
                | this thread.
               | 
               | > as is avoiding mentioning misogyny because of people
               | doing that.
               | 
               | No one is saying to avoid it. Just saying that it is not
               | helping when you indiscriminately label everything as
               | misogyny.
        
             | avazhi wrote:
             | This sort of comment isn't helpful, and you should stick to
             | Reddit.
             | 
             | On a more personal level, you should query why your
             | response to a question about why every single presenter was
             | a woman (in the computer industry it's statistically
             | impossible that it was a random assortment) is to accuse
             | the questioner of being a misogynist. You're probably the
             | same kind of person who would label a person who asks why
             | blacks make up 14% of the population but commit 52% of the
             | crime in the United States a racist. In neither case have
             | you contributed anything, nor have you done anything to
             | further the enquiries that are clearly being hinted at: Why
             | did Apple think it was appropriate to have nothing but
             | women presenting a keynote? Corporate virtue signalling? If
             | blacks commit so much crime, why, and if they don't, why
             | are the numbers inaccurate?
             | 
             | But of course, labelling somebody as a misogynist or a
             | racist takes 2 brain cells and 5 seconds.
             | 
             | Do better.
        
               | katbyte wrote:
                | It was on International Women's Day, so it is no surprise
                | that every dev was a woman. What was strange was having a
                | weird, out-of-place callout about it in a blog
                | post/article that was otherwise entirely dedicated to the
                | tech/CPU.
        
               | notreallyserio wrote:
               | > If blacks commit so much crime, why, and if they don't,
               | why are the numbers inaccurate?
               | 
               | We don't know this because the figures cited are biased
               | towards successful convictions. If white folks are able
               | to afford better, more competent lawyers, you can expect
               | them to mount successful defenses. I've never, ever, seen
               | anyone post the 13/50 stat without acknowledging this
               | fact in their comments or replies. And I've seen it a
               | lot.
        
               | avazhi wrote:
               | Obviously my use of this stat was to make a point apropos
               | of the commenter I was replying to - but your response is
               | exactly what I meant in the sense that, assuming we are
               | both interested in talking/learning about the subject,
               | there's a dialogue to be had. Calling me a racist would
               | obviously do nothing except convince me that I must be
               | right because namecalling in isolation is the ultimate
               | white flag in internet discourse (well, so I say).
               | 
               | At any rate - my response is that not only are blacks
               | exponentially more likely to be convicted, but they are
               | exponentially more likely to be arrested in the first
               | place (talking about per capita population here). This
               | obviously has nothing to do with lawyers, because it's
               | data from the stage preceding the lawyers showing up.
               | You'll probably respond by saying that in fact the system
               | is rigged against blacks (cops are racists, you'll say),
               | and maybe you'll point out how NYC's stop and frisk
               | policy disproportionately targeted young black males (the
               | same ones who are exponentially more likely to be both
               | arrested and convicted in the first place). You'll point
               | out correctly that my reasoning on this point is
               | circular, and then I might bring up the stats from both
               | before stop and frisk, during it, and then after it was
               | no longer city policy, to suggest some causality. I might
               | also draw the link between IQs and crime rates (the
               | causality of which has been demonstrated across racial
               | groups), and I'd point out how black adults are basically
               | a full standard deviation below the average white or
               | Asian American. I'd also probably point out that IQ isn't
               | something that can be changed very quickly, whether by
               | lawyers, or nutritionists, or water filters, and that
               | there's nothing to suggest that the IQ gap is likely to
               | improve very quickly (and that's assuming that it's the
               | result of environmental and not genetic factors in the
               | first place, which isn't clear). On that note I'd also
               | probably bring up how intelligence and personality seem
               | to be 70-90% genetic, and the problems that fact alone
               | ostensibly presents. You might, again (and not
               | incorrectly), point out how 'the system' has basically
               | been fucking blacks for the past 200 years, and that it's
               | therefore impossible to say with certainty what they
               | would look like in an environment without such horrendous
               | baggage, and I'd respond that notwithstanding it being
               | true that they've been mistreated, enslaved, and in some
               | US states subjected to what we'd today call genocide, it
               | doesn't change the fact that they are (apparently) both
               | dumber than non-blacks (on average) and, as the plausible
               | result of being significantly less intelligent, much more
               | likely to commit violent crimes. Where would we go from
               | there?
        
             | faeriechangling wrote:
             | Varying from the corporate norm of ensuring every single
             | bit of promotional material you have correlates with a
             | representative sample of the population sticks out like a
             | sore thumb. I don't know why people are feigning shock that
             | people found it remarkable.
        
               | katbyte wrote:
               | the presentation was on international women's day.
        
               | faeriechangling wrote:
               | Yup, that does seem to be the most likely reason they
               | cast women for the presentation instead of men.
        
             | throw-8462682 wrote:
             | > Misogyny?
             | 
             | What an extremely unkind thing to say. There was nothing in
             | the GP to suggest this. In fact it's entirely plausible
             | that Apple decided to honor women's day this way.
             | 
              | You must have known that accusing people of misogyny can
              | have an impact on their livelihood in today's hyper-PC
              | world. That makes your comment even more unkind. I think
              | you should apologize.
        
               | katbyte wrote:
                | I was agreeing with the comment I replied to - that it
                | was strange and out of place for a blog post that was
                | entirely dedicated to the tech/CPU to call out something
                | that had nothing to do with it (and it was a pretty
                | obvious nod to International Women's Day).
        
             | flumpcakes wrote:
             | That's an extremely unfriendly thing to jump to. There was
             | nothing in the text to suggest that.
             | 
             | I also noticed that every developer was from one gender and
             | even commented to my partner at the time, at which point I
             | remembered it was international women's day and perhaps
             | this was Apple showing their support in an unvoiced/non-
             | bombastic way.
             | 
             | Does this make me a misogynist? For noticing something?
        
               | katbyte wrote:
               | there is quite a difference between noticing something
               | (and then correctly noting it is international women's
               | day) and making a strange note of it in an article that
               | is entirely about the CPU. It really doesn't have a place
               | there.
        
               | dymk wrote:
               | At minimum, it gives you a taste of what it feels like to
               | be a woman in tech, and everybody around you is male.
        
               | csunbird wrote:
                | I haven't watched the event, but I would notice an all-
                | female cast just like I would notice an all-male cast. It
                | is simply an anomaly for every presenter to be male or
                | female.
        
               | flumpcakes wrote:
               | I don't need a "taste". I have basic empathy for other
               | human beings, regardless of gender.
        
               | User23 wrote:
               | I'm sure you're writing in good faith, but you may be
               | interested to know that in internet slang "noticing" is a
               | racist and possibly sexist dog-whistle[1].
               | 
               | [1]
               | https://www.urbandictionary.com/define.php?term=Noticing
        
               | tomca32 wrote:
               | Interesting. Could you suggest how to rewrite that
               | comment and express that a person noticed something (in
               | the literal sense of the word) without using a dog
               | whistle?
        
               | dylan604 wrote:
               | It occurred to me that...
               | 
               | I realized that...
               | 
               | I became aware...
               | 
               | That's without even looking for a thesaurus. Not that I'm
               | supporting this alt-definition of noticing, but your
               | question seemed pretty trivial to answer.
        
               | hajile wrote:
               | Then it's 5 seconds before all those are "bad words" too.
               | 
               | Maybe it's better to assume that most people in the world
               | aren't evil degenerates unless there's hard evidence to
               | the contrary.
        
               | dylan604 wrote:
               | Welcome to the "woke" world. You'll be learning a lot
               | about what people totally unfamiliar to you do that
               | you've also done at some point without knowing it. We're
               | all sinners in a woke world
        
               | tomca32 wrote:
               | Thanks, that does answer the question I wrote, however my
               | point was that I'm very uncomfortable with this
               | environment where any innocent word could be interpreted
               | as a dog-whistle, and was curious if any "notice"
               | synonyms are also considered dog-whistles.
        
               | nanoservices wrote:
               | Interested to know this as well. I am not sure if
               | pointing to a definition on Urban Dictionary with ~100
               | thumbs is enough to redefine a word let alone assume that
               | the average person is using it as a dog whistle.
        
               | flumpcakes wrote:
                | Are you implying I'm racist or sexist for using a normal
                | word in its usual context?
               | 
               | I don't particularly care if a normal word, which is in
               | plenty of dictionaries, is co-opted by a small group of
               | people. My friends won't know this secret meaning, my
               | family won't know that, whatever is on the TV, Radio, or
               | Cinema won't know that.
               | 
               | And now this conversation is completely off topic.
               | 
               | The internet is so hyper partisan that even normal,
               | boring usage of the English language is now weaponised as
               | shibboleths.
        
               | nicoburns wrote:
               | I find the concept of calling people out for using terms
               | that are dog whistles quite problematic. By their very
               | nature, dog whistles are terms in widespread innocent
                | use, and as such the majority of people using the term
                | will be using it in its ordinary sense, in good faith.
                | We shouldn't assume people are using these terms as dog
               | whistles unless we have some other evidence to suggest
               | that.
        
               | karaterobot wrote:
               | What a strange culture we've trapped ourselves in.
        
               | fleddr wrote:
               | No, not at all. Clearly, in recent years Apple's keynotes
               | and adverts have been making a clear diversity statement,
               | in particular an optical one, that is pretty much
               | opposite to reality.
               | 
               | They've taken it so far that in a way, they've swung to
               | the other end of the extreme. Less diversity by
               | emphasizing diversity too much, if that makes sense.
               | 
               | Since discussing gender and skin color always gets people
               | worked up, let me pick a less controversial one: wealth.
               | 
               | When you check Apple's adverts and see the actors using
               | their products, they're clearly living the Valley
               | lifestyle. They're all young, fit, filthy rich, have
               | fantastic work spaces and living rooms, carefully
               | designed by fashionable architects, lead an inner city
               | lifestyle where they hop on Ubers to go to Starbucks, you
               | get the idea.
               | 
               | ...none of which represents even a fraction of the actual
               | customer base. The working class is not featured, rural
               | people are not featured, other countries are not
               | featured, which leads me to conclude that Apple's
               | signaling is not diverse. It's fashion for the elite,
               | with a vast distance to their actual users.
               | 
               | Apple's products are desirable enough for it to not
               | matter, but it's fine to spot it. It's hard to miss.
               | 
               | Just don't misread it as anything but optics. Have a look
               | at leaked tech memos to understand how Apple really
               | thinks. They're predatory towards competitors. They'll do
               | anything to dodge taxes and bypass consumer protection
               | laws. They sabotage open standards. They rely on
               | exploitation for their manufacturing. They make secret
               | deals so that employees can't switch. They treat their
               | store staff as garbage.
               | 
               | Apple is a deeply unethical, predatory, neoconservative
               | company. Yet if you feature lots of black women in your
               | ads, you optically look progressive and good.
               | 
               | Woke capitalism from the world center of hypocrisy:
               | California.
        
               | flumpcakes wrote:
               | > fashion for the elite
               | 
               | I agree completely. I also believe that hackernews is the
               | exact target audience for Apple products. People who can
               | argue that $250,000 for a non-manual, 9-5 job is somehow
               | not a good salary.
               | 
               | Apple like to show people (all people from all
               | backgrounds) as if they are normal or average rather than
               | the hyper-elite that they are. If Apple had an advert
               | with the average family in an average home it would
               | probably be distressing to most people.
               | 
               | I'm lucky enough to be in the top 20% of earners in my
               | country and I am in my 30s, but I still would never be
               | able to afford a house as depicted in Apple's
               | advertising. Or have the budget to spend $5,000 on a
               | computer for my work.
        
           | JKCalhoun wrote:
           | I'll give Gassee the benefit of the doubt. Even if you were
           | ignorant of the fact that it was Women's Day (guilty here!),
           | he might have thought it disingenuous to _not_ mention it, at
           | least in passing, as he did.
           | 
           | I'm glad Apple are showcasing the diversity of their
           | workforce (I raised three daughters of my own, wish they
           | would have shown an interest in programming, not really) but
           | I worry that there is a danger of backlash for going too far.
        
           | redox99 wrote:
           | The author detected an "anomaly" (see proof below) and
           | pointed it out. I think it's pretty hostile to call such
           | comments bizarre, or to question the author's intentions.
           | 
           | It seems that around 90% of developers are men[1]. Therefore,
           | using a binomial distribution, if 6 developers were picked at
           | random, the chance that all of them would be women (or other
           | non-men gender) is 0.0001%. (Interestingly, there is a 53%
           | chance that all 6 of them would be men.)
           | 
           | The reason for such an unlikely occurrence is most likely
           | that, as the parent mentioned, Apple wanted to feature women
           | developers for International Women's Day.
           | 
           | [1] https://www.statista.com/statistics/1126823/worldwide-
           | develo...
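           | 
           | A minimal sketch of that arithmetic (assuming the ~90%
           | figure from [1] and six independent random picks):
           | 
           |     p_man = 0.9                     # ~90% of devs are men
           |     n = 6                           # developers on stage
           |     p_all_women = (1 - p_man) ** n  # 1e-06, i.e. 0.0001%
           |     p_all_men = p_man ** n          # ~0.53, i.e. ~53%
           |     print(p_all_women, p_all_men)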
        
         | 323 wrote:
        
       | starwind wrote:
       | Am I the only one bothered that the M1 "Max" was apparently not
       | the best Apple could do? In what world is the "Ultra" version of
       | something better than the "Max" version? MAXIMUM MEANS AS GREAT
       | AS POSSIBLE!!!
       | 
       | Anyone? Just me? ...ok
        
         | jmull wrote:
         | You're never going to make it in marketing, my friend.
        
         | r00fus wrote:
         | I take it you never played Street Fighter in the 90s.
        
         | jayd16 wrote:
         | fine print was local* maximum, I guess.
        
         | layer8 wrote:
         | > MAXIMUM MEANS AS GREAT AS POSSIBLE!!!
         | 
         | And "ultra" means "beyond" [great]. It enters the next realm.
         | :)
        
         | wmf wrote:
         | Then you won't want to know that USB "full speed" is slower
         | than "superspeed".
        
           | layer8 wrote:
           | Also, Full HD is lesser than Ultra HD. They should have
           | called it Overflowing HD.
        
             | jayd16 wrote:
             | I think Full HD was really just to separate from 720p. So
             | if you consider "HD" to mean 1080p and UltraHD to mean 4k,
             | it's like saying a "Full v6" and a "v8" or something. For
             | whatever reason it made just enough sense that it never
             | bothered me too much.
        
       | shantara wrote:
       | >Here 5 nm and 3 nm refer to the size in nanometers, billionths
       | of meter, of circuit elements used to build chips
       | 
       | Not this again. They don't mean anything except for being purely
       | marketing designations
        
         | monocasa wrote:
         | Sort of. Foundries can normally point to some real-world
         | metric for their Nnm node name. But because they pick
         | different metrics, it's not useful for comparing between
         | nodes/foundries, which is why people say it doesn't mean
         | anything practical.
        
         | zamalek wrote:
         | Within the same foundry, they are comparable. It's just a
         | decrementing version number, though.
        
       | [deleted]
        
       | bklyn11201 wrote:
       | > "which points to the problem I'd have if I wanted to update my
       | own five-year old iMac, I'd need to jettison a perfectly good
       | display in order to move to an Apple Silicon CPU."
       | 
       | Is it impossible to use the 27" iMac as a display monitor for the
       | new Mac Studio?
        
         | jmull wrote:
         | There are workarounds, like "Luna Display" (I haven't tried it
         | and I'm not affiliated).
         | 
         | I also wonder how well running something like a VNC client
         | fullscreen on the old iMac with universal control might work.
         | (I've also used Jump Desktop's "Fluid" protocol which is the
         | same general idea as VNC, though it provided a higher-quality,
         | lower-lag connection in my case.)
         | 
         | I think there are some decent solutions for a secondary
         | display, but I kind of doubt any of these would be good enough
         | for a primary display for most use cases.
         | 
         | I would guess all of these have tradeoffs in terms of lag,
         | frame rate, quality, reliability, etc. though I'd love to hear
         | different.
        
         | auggierose wrote:
         | It's impossible to use it as a display for anything other
         | than itself.
        
         | newsclues wrote:
         | Target display mode exists, but its compatibility support
         | could be improved.
        
         | tl wrote:
         | Target Display Mode, which turns iMacs into monitors, is not
         | supported on newer iMacs (anything post-2015):
         | 
         | https://support.apple.com/en-us/HT204592
        
           | SketchySeaBeast wrote:
           | "You can use more than one iMac as a display, if each iMac is
           | using a Thunderbolt cable to connect directly to a
           | Thunderbolt port on the other Mac (not the other iMac)."
           | 
           | Do Macs not have USB ports? It's truly bizarre that you need
           | 1 thunderbolt port per monitor. That's an incredible amount
           | of wasted bandwidth.
        
             | stephenr wrote:
             | How is a USB port from pre-2015 going to help you transmit
             | a DisplayPort signal?
        
               | SketchySeaBeast wrote:
               | Good point.
        
           | ChuckNorris89 wrote:
           | This sounds terrible for consumers and for the environment.
           | Once the computer inside dies or becomes obsolete, you have
           | to throw away a perfectly good display that could have had a
           | second life as an external monitor plugged into a more
           | modern system, since display technology doesn't go outdated
           | as fast as computing.
        
             | jkestner wrote:
             | It's terrible. The original reason Apple dropped this
             | feature was that there wasn't an external bus that could
             | push that many pixels. But Thunderbolt can do it now. This
             | reuse would be more meaningful than preconsumer recycling
             | of aluminum.
             | 
             | I use https://astropad.com/product/lunadisplay/ to use my
             | iPad or an old Mac's screen as a secondary screen and it's
             | good.
        
             | jibbers wrote:
             | Wouldn't an obsolete computer still be perfectly fine for
             | less demanding users? Average Joes, poor people, a kid's
             | first PC, etc. -- give it away, I say.
        
             | BolexNOLA wrote:
             | As someone who finally bit the bullet and scrapped/curbed 2
             | old iMacs, yeah. It hurt a lot. Perfectly good, well-
             | calibrated 21.5" and 27" monitors just reduced to paper
             | weights. Plucked the RAM/HDDs and sent them on their way.
             | I looked into _every_ possible option for them. Such a
             | waste.
        
               | kllrnohj wrote:
               | Especially a waste since it's even the same panel as the
               | one you're throwing away. Apple hasn't upgraded the 27"
               | 5k for 8 years now, this new one included (lots of non-
               | display upgrades like the camera and audio, sure, but the
               | display itself hasn't changed)
        
         | sylens wrote:
         | I believe they got rid of Target Display Mode support a number
         | of years ago, in terms of both hardware and software (High
         | Sierra or earlier only).
        
         | jeffbee wrote:
         | Target display mode has been dead and gone since 2015.
        
         | 1123581321 wrote:
         | This used to be possible on older iMacs and was called Target
         | Display Mode. I believe a five year old iMac is too recent to
         | be able to do it.
         | 
         | A DIY display from the hardware should be possible with a
         | display driver board from AliExpress, though.
        
           | matthewfcarlson wrote:
           | I looked around for something like this but finding something
           | that can drive the panel is tricky. In my case it was an old
           | surface studio with a 3000x2000 resolution. I think it's
           | actually two panels fused together and calibrated at the
           | factory and it's all custom. Finding a board seemed nigh
           | impossible
        
         | pdpi wrote:
         | They've never supported target display mode for any of the 5k
         | iMacs. 5k displays, in general, seem to be few and far between,
         | and poorly supported (haven't seen a single one for PC, only
         | Apple's offerings), so I imagine trying to support target
         | display mode wouldn't be great.
        
           | simonh wrote:
           | I remember reading at the time that they couldn't reasonably
           | get the connector throughput working. The original 5K iMacs
           | used custom high bandwidth internal interconnects because
           | there weren't any standard spec interconnects that could do
           | the job, therefore no commercial external cables or
           | connectors up to it either.
           | 
           | It might be possible in theory now, but I suppose that ship
           | has sailed.
        
         | soci wrote:
         | My late 2009 27" iMac can be used as a monitor by connecting a
         | laptop to the minidp port. I doubt this is still possible in
         | newer Macs after 2015.
        
         | toqy wrote:
         | Yeah, I was very disappointed to find out I couldn't use my
         | wife's iMac as a display for my MBP.
        
       | chrisoverzero wrote:
       | >In passing, we'll note there is Mac Studio version sporting a
       | single M1 Max chip that happens to weigh 2 pounds less, most
       | likely the result of a smaller power supply.
       | 
       | This article was published on 13 March. It's been known for 5
       | days (as of the time of this comment) that the difference in
       | weight is due to the Ultra variant's using a copper heat sink, as
       | opposed to an aluminum one. The whole article has this kind of
       | feeling of off-the-cuff, underinformed pontification, and I don't
       | think it's a very good one.
        
         | [deleted]
        
         | yborg wrote:
         | I kind of had a similar impression, but Jean-Louis is kind of
         | an elder statesman in the industry from his time at Apple. I
         | actually find it heartening that a guy in his mid-70s and long
         | out of the industry still follows it from a technical
         | standpoint even at this level.
        
           | wmf wrote:
           | I love JLG but it doesn't excuse putting out wrong
           | information. He should fully retire if he's not going to do
           | the work properly.
        
             | elzbardico wrote:
             | Geez man! I think you forgot the /s
        
         | datavirtue wrote:
         | Typical apple blog.
        
           | dang wrote:
           | Could you please stop posting unsubstantive comments? We ban
           | accounts that do that repeatedly. You've unfortunately been
           | doing it repeatedly, and I don't want to ban you again.
           | 
           | https://news.ycombinator.com/newsguidelines.html
        
       | rwmj wrote:
       | How do you connect 10,000 signals between two microscopically
       | tiny dies reliably?
        
         | colejohnson66 wrote:
         | Assuming they're on different dies, it would be the same way
         | they connect the thousand or so pins (of an FPGA or CPU) from
         | the die to the BGA/PGA/LGA package: interposers with wires
         | whose widths are in the nanometer range
        
         | hajile wrote:
         | Same way AMD did it with their HBM GPUs. There were ~4K wires,
         | but once you've moved to etching the wires with lithography,
         | even millions of wires wouldn't be impossible.
        
         | cjensen wrote:
         | The two dies and interconnect are all part of the same silicon.
         | In the chiplets used by AMD and many others, they use separate
         | CPU dies with literal wires between them. This replaced the
         | wires with hard silicon.
         | 
         | Disadvantage of this technique is yields will be worse because
         | the individual component is bigger, and this is limited to
         | combining exactly two chips because it's not a ring-bus design.
        
           | systemvoltage wrote:
           | This is false. Apple's M1 Ultra uses an interposer to connect
           | two chips.
        
           | u320 wrote:
           | No, this is a multichip design, Apple was clear about that.
           | And you don't make wires out of silicon.
        
         | Synaesthesia wrote:
         | With a custom chip interconnect which has its wires made with
         | lithography
        
       | ossusermivami wrote:
       | > A slightly orthogonal thought: because every word, every image
       | of Apple Events is carefully vetted, one had to notice all six
       | developers brought up to discuss the Mac Studio and Mac Ultra
       | benefits in their work were women.
       | 
       | It was International Women's Day that day; I think it was a
       | nice touch from Apple.
        
       | Maursault wrote:
       | So, surprise, the M1 Ultra is 2x M1 Max chips. They've been
       | secretly planning since inception to attach 2 chips together. Why
       | am I so unimpressed? Because connecting 2 chips together is so
       | dang obvious. Because I would have expected Apple to connect 10
       | of them together in a semi-circle and not only own the desktop
       | and mobile market, but in a truly shocking surprise, release a
       | new Apple server that has 10x the processing power of the next
       | most powerful server running at 1/10th the Wattage, and a new
       | macOS server version with a footprint smaller than iOS that is
       | binary compatible with the whole of linux development.
        
         | Melatonic wrote:
         | ARM is already being heavily looked at for datacenters
         | regardless of Apple - I do not see them entering the server
         | market anytime soon. Their bread and butter has always been to
         | market "Pro" devices but more at the consumer level.
        
         | kllrnohj wrote:
         | > release a new Apple server that has 10x the processing power
         | of the next most powerful server running at 1/10th the Wattage
         | 
         | The power efficiency of M1 is _vastly_ overstated. Reminder
         | here that the M1 Ultra is almost certainly a 200W TDP SoC
         | (since the M1 Max was ~100W, and this is 2x of those...)
         | 
         | So 10x M1 Max would be ~1000W. That's possible to put in a
         | server chassis, but it's of course also not remotely 1/10th
         | the wattage of existing server CPUs, either, which tend to be
         | in the 250-350W range.
         | 
         | And interconnects aren't free, either (or necessarily even
         | cheap). The infinity fabric connecting the dies together on
         | Epyc is like 70w by itself, give or take.
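         | 
         | A rough back-of-the-envelope sketch of those numbers (every
         | figure here is an estimate, not a published spec):
         | 
         |     m1_max_w = 100        # rough per-die power estimate
         |     dies = 10
         |     interconnect_w = 70   # ballpark, per the Epyc figure
         |     total_w = dies * m1_max_w + interconnect_w
         |     print(total_w)        # ~1070W vs 250-350W server CPUs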
        
           | jeffbee wrote:
           | The M1 Pro is slightly faster than other CPUs at that ~35W
           | design point, but it doesn't scale up beyond that, or Apple
           | just doesn't care if users can experience its latent scaling
           | abilities, if they exist. I like this chart from "Hardware
           | Unboxed" that puts current Apple, AMD, and Intel CPUs in
           | context: https://imgur.com/PKHelVY
        
             | wtallis wrote:
             | It's suspicious how the bottom point on each of those
             | curves is exactly 35W. Real-world power measurements don't
             | line up that cleanly, so I wouldn't be surprised if what
             | they're graphing on that axis (for the x86 processors) is
             | merely the power limit they set the systems to throttle to,
             | rather than a real measurement. That or they interpolated
             | to produce those points.
        
           | GeekyBear wrote:
           | >The power efficiency of M1 is vastly overstated. Reminder
           | here that the M1 Ultra is almost certainly a 200W TDP SoC
           | 
           | 114 Billion transistors are going to draw some power.
           | 
           | An RTX 3090 is 28.3 Billion transistors drawing 350W or so
           | for just the GPU portion of the system.
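           | 
           | Very roughly, in watts per billion transistors (not an
           | apples-to-apples comparison: different nodes, clocks and
           | workloads, and the 200W Ultra figure is itself an estimate):
           | 
           |     rtx3090_w = 350 / 28.3   # ~12.4 W per B transistors
           |     m1ultra_w = 200 / 114    # ~1.75 W per B transistors
           |     print(rtx3090_w, m1ultra_w)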
        
         | sbierwagen wrote:
         | Sounds like you're looking more for the upcoming M2 refresh of
         | the Mac Pro.
        
       | formerly_proven wrote:
       | > Second, the recourse to two M1 Max chips fused into a M1 Ultra
       | means TSMC's 5 nm process has reached its upper limit.
       | 
       | M1 Max is ~20x22 mm (~430 mm2); doubling this, even without
       | some of the interconnect die space, doesn't fit into the
       | reticle anyway.
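       | 
       | A quick sanity check, assuming the usual ~26 x 33 mm maximum
       | reticle field (my assumption, not a figure from the article):
       | 
       |     m1_max_mm2 = 430           # ~20 x 22 mm, per above
       |     reticle_mm2 = 26 * 33      # ~858 mm2 single-exposure limit
       |     print(2 * m1_max_mm2, reticle_mm2)   # 860 vs 858: too big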
        
         | MikusR wrote:
         | Cerebras is 46225 mm2
        
           | _ph_ wrote:
           | That is made in a multi-step process, as far as I understand
           | its design, they made the point of designing it from a
           | completely repetitive structure which is the size of one
           | step.
        
           | jecel wrote:
           | Among the issues that Cerebras had to overcome was how to
           | connect tiles on a wafer where each tile has to be smaller
           | than a reticle. In normal chips you have a gap on all sides
           | where the diamond saw will cut the wafer into individual
           | dies. Having wires cross that gap requires non standard
           | processing. And the tiles themselves would still be limited
           | to a single reticle (under 35mm on a side), so a multi-
           | reticle M1 would not be easy to design.
        
       | allie1 wrote:
       | Means MacBook Pro M1 Max orders will get even more delayed
        
         | kaladin_1 wrote:
         | The delays are bad enough at the moment; it should not get
         | worse, please.
         | 
         | Some are yet to receive orders placed since January for the
         | 2021 MBP, especially for anything beyond the base model. I
         | wonder if they have the capacity to serve the current one
         | going on sale this week.
        
         | meepmorp wrote:
         | Or replaced with orders for Studios. I'm not in the market for
         | a laptop, but if I were getting a computer before last week, I
         | probably would've gotten an m1max MBP because I always want
         | more memory. Now, I've got a desktop option.
        
       | sharikous wrote:
       | > the M1 Ultra isn't faster than an entry-level M1 chip [...] the
       | clock speed associated with the 5nm process common to all M1 chip
       | hasn't changed for the M1 Ultra
       | 
       | It's telling that almost the only "bad" thing that you can say
       | about the M1 Ultra is that its single threaded performance is on
       | par with the M1, whose performance is great anyway. Apple pumped
       | up the integration, cache size, pipeline length, branch
       | prediction, power efficiency and what not.
       | 
       | I think that in terms of clock frequency increase that road is
       | closed, and has been for 15 years already.
       | 
       | Realistically the only disadvantage I heard about Apple Silicon
       | is that the GPU performance is not quite as earth-shattering as
       | they claim.
        
         | kllrnohj wrote:
         | > I think that in terms of clock frequency increase that road
         | is closed, and has been for 15 years already.
         | 
         | Possibly, except M1 runs at relatively low clock speeds of
         | around 3.2ghz. This is in no small part how it achieves good
         | power efficiency. It's a bit surprising that a wall powered
         | unit is still capped at this clock speed, although whether
         | that's intentional or just something Apple hasn't gotten around
         | to fixing is TBD. That is, the M1 largely lacks the load-based
         | turbo'ing that modern Intel & AMD CPUs have. So it's "stuck" at
         | whatever it can do on all-cores & max load. This could be
         | intentional, that is Apple may just not ever want to venture
         | into the significantly reduced perf/watt territory of higher
         | clock speeds & turbo complications. Or it could just be an
         | artifact of the mobile heritage, and might be something Apple
         | addresses with the M2, M3, etc...
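         | 
         | A rough illustration of why higher clocks hurt perf/watt,
         | using the first-order dynamic power relation P ~ C * V^2 * f
         | (the boost voltage below is invented purely for illustration):
         | 
         |     def rel_power(f_ghz, v):
         |         return v ** 2 * f_ghz     # capacitance folded in
         | 
         |     base = rel_power(3.2, 1.0)
         |     boost = rel_power(4.5, 1.25)  # made-up boost voltage
         |     print(boost / base)           # ~2.2x power, ~1.4x clock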
        
           | r00fus wrote:
           | > It's a bit surprising that a wall powered unit is still
           | capped at this clock speed
           | 
           | The Intel/x86 mantra that desktops should be allowed to be
           | massively inefficient just to pump up the clock speed
           | fractionally is what's changing.
           | 
           | I for one, agree with Apple - 450W beasts aren't really
           | needed. Most workflows can be (or already are) parallelized
           | so multiple cores can demolish what a fast single-thread can
           | tackle.
        
             | kllrnohj wrote:
             | > The Intel/x86 mantra that desktops should be allowed to
             | be massively inefficient just to pump up the clock speed
             | fractionally is what's changing.
             | 
             | Nonsense. Nobody is going to just give up "free"
             | performance.
             | 
             | > I for one, agree with Apple - 450W beasts aren't really
             | needed.
             | 
             | Nobody makes a 450W consumer CPU, so this is a strawman.
             | The M1 Ultra's 200W already puts it quite far beyond the
             | typical consumer setup of 65-125W anyway.
             | 
             | Regardless if 450W lets your work finish faster than 200W,
             | that's a tradeoff ~everyone makes. Nothing about that is
             | changing.
             | 
             | > Most workflows can be (or already are) parallelized so
             | multiple cores can demolish what a fast single-thread can
             | tackle
             | 
             | If this is truly the case for you then you'd already be on
             | the Threadripper/Epyc train and the M1 Ultra would be kinda
             | boring.
        
               | wtallis wrote:
               | Please try to be consistent or at least explicit about
               | whether you're including a GPU in the power numbers you
               | are referencing; changing that context mid-sentence makes
               | it quite hard to tell whether you really have a solid
               | position.
        
               | r00fus wrote:
               | > Nonsense. Nobody is going to just give up "free"
               | performance.
               | 
                | No, they're done with "free" waste. Apple chips idle
                | far lower and, thanks to efficiency cores, handle
                | moderate workloads with a low TDP.
               | 
               | > Nobody makes a 450W consumer CPU
               | 
               | CPU, yes. But the M1 (along with all Apple chips) is a
               | SoC so if you include graphics, storage, memory and
               | motherboard - you can easily eclipse 450W for many
               | enthusiast consumer (gaming) builds. Most gaming builds
               | are 300-500W.
        
           | ls612 wrote:
           | But for a desktop chip like the M1 Ultra which will always be
           | on wall power I don't see why Apple would be uncomfortable
           | pushing the thermal envelope of the M1 architecture.
        
         | zamalek wrote:
         | > It's telling that almost the only "bad" thing that you can
         | say about the M1 Ultra
         | 
         | The M1 is truly a great thing. It beats the pants off the Intel
         | 2019 MBP that work gave me while I fixed some M1 problems.
         | 
         | That is, however, comparing Apple Intel to Apple Silicon. The
         | 2019 Intel MBP is, on an absolute scale (vs. my own AMD laptop
         | of the same year), completely and utterly incompetent.
         | 
         | Comparing Apple Silicon to Intel and AMD isn't as
         | straightforward, and there's a lot of good and bad for all
         | three. Apple is now merely competitive.
        
         | protomyth wrote:
         | I never understand that criticism from the user point of view.
         | A modern OS runs a lot of threads and the M1 isn't exactly a
         | slow poke, and the M1 Ultra has a lot more cores to run all
         | those threads.
        
       | genmon wrote:
       | I don't get this:
       | 
       | > Second, the recourse to two M1 Max chips fused into a M1 Ultra
       | means TSMC's 5 nm process has reached its upper limit. It also
       | means TSMC's 3 nm process isn't ready, probably not shipping
       | until late 2022. Apple, by virtue of their tight partnership with
       | TSMC has known about and taken precautions against the 3 nm
       | schedule, hence the initially undisclosed M1 Max UltraFusion
       | design wrinkle, likely an early 2021 decision.
       | 
       | "recourse"... "design wrinkle"... wouldn't something like
       | UltraFusion be an architectural goal at the outset, rather than
       | something grafted on later? Feels pretty fundamental.
       | 
       | I have a vague memory that AMD has/had something similar -- the
       | idea was that their entire range would be the same basic core,
       | fused together into larger and larger configurations. Seems like
       | a smart move to concentrate engineering effort. But chip design
       | is not even slightly my area.
        
         | infinityio wrote:
         | > I have a vague memory that AMD has/had something similar
         | 
         | You are correct - AMD CPUs from 2016 onwards make use of a
         | collection of up-to-8-core chiplets linked by what they call
         | "Infinity Fabric"
        
         | rayiner wrote:
         | Depends on what the UltraFusion interconnect is. Ganging up
         | chips as an unplanned stop-gap isn't unheard of (e.g. the ATI
         | Rage Fury MAXX). But it's much harder to do when you're talking
         | about grafting on a cache coherent interconnect. If they're
         | using something off the shelf like CXL maybe it wasn't planned
         | from the outset.
        
           | monocasa wrote:
           | It wasn't unplanned. The work by marcan on Asahi Linux
           | revealed support for multi-die configurations baked into the
           | M1 Max.
           | 
           | > While working on AIC2 we discovered an interesting
           | feature... while macOS only uses one set of IRQ control
           | registers, there was indeed a full second set, unused and
           | apparently unconnected to any hardware. Poking around, we
           | found that it was indeed a fully working second half of the
           | interrupt controller, and that interrupts delivered from it
           | popped up with a magic "1" in a field of the event number,
           | which had always been "0" previously. Yes, this is the much-
           | rumored multi-die support. The M1 Max SoC has, by all
           | appearances, been designed to support products with two of
           | them in a multi-die module. While no such products exist yet,
           | we're introducing multi-die support to our AIC2 driver ahead
           | of time. If we get lucky and there are no critical bugs, that
           | should mean that Linux just works on those new 2-die
           | machines, once they are released!
           | 
           | https://asahilinux.org/2021/12/progress-report-oct-nov-2021/
        
       | nicoburns wrote:
       | > It also means TSMC's 3 nm process isn't ready, probably not
       | shipping until late 2022. Apple, by virtue of their tight
       | partnership with TSMC has known about and taken precautions
       | against the 3 nm schedule, hence the initially undisclosed M1 Max
       | UltraFusion design wrinkle, likely an early 2021 decision.
       | 
       | I find it hard to believe that this was a last-minute decision.
       | Rather, I think this pattern, where a new core design (using a
       | new process if there is one) releases first in the smallest
       | devices (iPhones) and then gradually moves its way up the
       | lineup, all the way to the Ultra, before the cycle repeats with
       | a new generation, is likely Apple's new strategy going forward.
       | 
       | My understanding is that this is pretty much what Intel and AMD
       | do too (releasing their smaller dies on new processes first) and
       | that this is a general strategy for dealing with poorer yield
       | numbers on new process nodes. The idea that Apple would ever have
       | considered releasing their biggest chip as the first chip on a
       | new node seems far-fetched to me.
        
         | PaulKeeble wrote:
         | Oddly, for GPUs it's the other way around, and has been for a
         | while. Nvidia and AMD seem to start with the mid and big dies
         | first before filling out the small ones later. Intel seems to
         | be going from small to big, however, so they may reverse the
         | trend, at least for themselves. But it could still be driven
         | by yield issues: the margins are less good for smaller dies,
         | and they sell in much larger volume, so things need to be
         | working well to hit the volume of the market and still be
         | highly profitable.
        
           | mschuster91 wrote:
           | > Nvidia and AMD seem to start with the mid and big dies
           | first before filling out the small ones later.
           | 
           | For GPUs, yield is less of an issue... the chips are
           | manufactured with the expectation that some of the many
           | thousands of small (in terms of silicon area) cores will be
           | defective - overprovisioning, basically. That allows them to
           | simply bin the sliced chips according to how many functional
           | core units each individual chip has.
           | 
           | In contrast, even the Threadripper AMD CPUs have only 64
           | large cores which means the impact of defects is vastly
           | bigger, and overprovisioning is not feasible.
        
             | chickenimprint wrote:
             | Current AMD chiplets come with a maximum of 8 cores.
        
               | Macha wrote:
               | Right, that's a different strategy again. If you're
               | making a monolithic 64 core die and one of the cores is
               | defective, and the next one down in your product lineup
               | is the 48 core, that's going to make the 64 core model
               | harder to stock (and maybe not worth aiming for at all,
               | if your yields are bad enough that this happens often).
               | 
               | Meanwhile if you're making 6 x 8 core chiplets and one of
               | those cores is defective, well that chiplet can go into a
               | 48 core or be a midrange consumer cpu or something, and
               | you'll just pick one of your many many other 8 core
               | chiplets to go with the rest for the 64 core.
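               | 
               | A toy yield model of that argument (the per-core
               | yield below is made up purely for illustration):
               | 
               |     p_core = 0.98            # invented per-core yield
               |     mono_64 = p_core ** 64   # ~0.27 chance all good
               |     chiplet_8 = p_core ** 8  # ~0.85 per 8-core chiplet
               |     # chiplets can be tested and cherry-picked, so the
               |     # 64-core SKU isn't gated on one big perfect die
               |     print(mono_64, chiplet_8)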
        
               | agloeregrets wrote:
               | A la the 14/16-inch MacBook Pro making all full-die
               | function models built to order. I do find it interesting
               | that they did a 7 core GPU version of the M1 much like
               | the A12X/A12Z binning. I wonder if getting full function
               | M1 was ever the intent.
        
           | snek_case wrote:
           | Probably because the beefier GPUs for the server or compute
           | market are way more profitable for Nvidia than high-end
           | gaming GPUS, which are also way more profitable than chips
           | Nvidia might make for laptops or low end gaming.
        
         | wlesieutre wrote:
         | Incidentally, there's a rumor that the iPhone 14 is going to
         | use a new A16 processor in the iPhone Pro, but stick to last
         | year's A15 for the non-pro version.
         | 
         | That's a big change from Apple where they've historically put
         | their newest processor in every single phone they launch (even
         | the $430 iPhone SE announced last week has the A15 now).
         | 
         | I wonder if it's purely a cost cutting measure, or if they're
         | not expecting good enough yields to supply them for every
         | iPhone, or if they're holding some fab capacity back to have
         | room for the higher end chips alongside the A16.
        
           | josh2600 wrote:
           | Honestly, my guess is that the fab production issues impact
           | everyone. I wonder if they get the same number of runs per $
           | as they used to pre-covid, my guess would be no. If that's
           | the case, that drives BoM up and Apple is hyper margin
           | conscious.
        
           | nicoburns wrote:
           | I'd imagine if anything it's a capacity issue now that
           | they're producing mac chips in addition to iPhone chips. I
           | suspect the fact the current chip is already so fast may also
           | be a factor. Pretty much nobody using an iPhone is clamouring
           | for a faster CPU. I have an iPhone 6s, and even the A9 in
           | that is fast enough that my phone never feels slow.
        
             | gumby wrote:
             | > I have an iPhone 6s, and even the A9 in that is fast
             | enough that my phone never feels slow.
             | 
             | Last year I upgraded from a 6s+ to a 12 and I can confirm
             | this the other way around: my old phone _did_ feel slow for
             | a couple of key apps; turns out they are still slow on the
             | new phone. They are just poorly written, and one is
             | basically just a CRUD app with no excuse.
             | 
             | So my lesson is to likely keep this phone for a decade.
             | It's not like I couldn't afford to upgrade but why bother?
        
               | nonamenoslogan wrote:
               | I will echo this, I was on a 7 and it broke, so I got a
               | regular model 12. The phone itself feels much faster, it
               | loads quicker, the batteries last longer and the screen
               | is much nicer to look at--but some of the apps I use are
               | about the same for opening/loading time. I blame the apps
               | not the phone; some are much better--Pokemon Go for
               | example loads in just a second or two on the 12 but takes
               | 30 or so on the 7.
        
               | lolive wrote:
               | Isn't the camera better? And also the battery life?
               | 
               | [happy owner of a 6s here]
        
               | lynguist wrote:
               | I found the biggest difference to be the display. I
               | changed from iPhone 7 Plus to iPhone 13 mini. From 7 on
               | the display is wide color and the newer phones are all
               | OLED with true blacks. They're impressive.
               | 
               | Another big difference is 5G. I can easily get 500+
               | MBit/s downstream while outside.
        
               | tblt wrote:
               | How did you find the change in display size?
        
               | gumby wrote:
               | The camera is better at low light photos but I don't
               | really care about that. Camera plays no role in my
               | selection of phone and I wish the lenses didn't protrude
               | (which they could address by putting more battery in!).
               | 
               | Yes, the battery was kinda shot but I could have replaced
               | it.
        
           | hajile wrote:
           | I'd guess that they are having product differentiation issues
           | between the iPhone and the iPhone Pro. Most people just don't
           | see the value in a faster screen, a stainless band, and a
           | not-so-great telephoto camera (I own a Pro, but I'm hardly
           | their primary market). An entire processor generation
           | difference would give average consumers a big selling point
           | for paying the extra few hundred dollars.
           | 
           | There have been rumors that they'll be skipping A15 cores
           | for the upcoming M2 processors.
           | 
           | If they skipped over the best-selling iPhone, that would give
           | them a TON of extra space for M2 chips. This would allow them
           | to put a little more ground between the new Air with the M1
           | and the pro iPads. It would also allow them to drop a new
           | version of the macbook air and drive a lot of sales there. I
           | know I'd gladly upgrade to a M2 model -- especially with a
           | decent CPU bump and a 24/32GB RAM option.
           | 
           | Then again, they could just stick with what people expect. I
           | wouldn't be surprised either way.
        
             | dangus wrote:
             | I feel like I can see increased efforts at differentiation
             | on the 13 Pro compared to the 12 Pro.
             | 
             | The iPhone 12 Pro was perhaps the least differentiated
             | high-end model Apple has ever put out.
             | 
             | I think the 13 Pro has a few features that make it a bit
             | more of a compelling buy:
             | 
             | - The new telephoto lens is a massive improvement (I wonder
             | if your last experience was with the 12 or older? The new
             | camera is actually worth something while the old one had
             | mediocre quality compared to the main lens).
             | 
             | - ProMotion has no tangible benefit, but it makes every
             | interaction with the screen look smoother. When you go back
             | to old phones that don't have it, it's jarring. I can see
             | why some of the Android-using tech enthusiasts have
             | criticized Apple for not delivering high refresh rate for
             | so long.
             | 
             | - The previous iPhone 12 had identical main/wide cameras
             | with the Pro model unless you got the Max variant, which is
             | no longer the case. The iPhone 13 Pro has different/better
             | cameras all around over the 13.
             | 
             | - The GPU of the 13 Pro has an extra core over the 13,
             | which was not the case for the iPhone 12 lineup. Anyone who
             | does mobile gaming on graphically intense games should
             | probably choose the Pro model over the regular one.
             | 
             | - Significantly better battery life over the non-Pro
             | version, which was not the case for the 12 models, which
             | had identical ratings.
        
               | hajile wrote:
               | For an average consumer, are those things worth hundreds
               | of dollars?
               | 
               | I know the value, but I work in tech and spend tons of
               | time digging into hardware as a hobby. Camera matters to
               | some, but most of the rest are pretty bare features
               | compared to the $200 (20%) increase in price.
               | 
               | When I list all the things I can buy with $200, where do
               | these features rank in comparison to those other things?
               | I'm blessed with a good job, so I can afford the luxury.
               | I was poor when I was younger and I definitely wouldn't
               | be spending that for those features. $220 out the door
               | would be almost 20 hours of work at $15/hr (after taxes).
        
               | dangus wrote:
               | I think that's a valid question. Objectively, no, those
               | features are not necessarily worth the literal dollar
               | value the price segmentation is commanding.
               | 
               | But, there are some other points to consider:
               | 
               | - It seems like most people in the USA who buy mid to
               | high-end phones finance their phones from carriers, and
               | pay 0% interest for it. So, what the consumer is really
               | considering is "is the Pro model worth $5-8/month more to
               | me?" or "Would I pay $200 extra over 2-3 years?" and I
               | think that's an easier justification for many people.
               | 
               | - Carriers offer a number of financial incentives and
               | discounts in exchange for loyalty (there aren't any
               | contracts anymore, but there are "bill credits" that
               | function the same way).
               | 
               | - You did use $15/hour as an example, which is around
               | the median US salary, but Pro models are not intended to
               | be the top selling model for the median earner in the US.
               | They're marketed at, I would guess, the top 20% of
               | earners, which lines up with the Pro/Pro Max models only
               | making up 20% of iPhone sales in 2020 [1]. That would
               | mean that Apple would expect individuals buying the
               | iPhone Pro models to make about $75,000/year or greater.
               | About 10% of the population makes a 6-figure salary. [2]
               | 
               | - Smartphones are the primary communication and computing
               | device for many if not most people. I think that there
               | are many people who see the smartphone as the most
               | valuable possession they own.
               | 
               | [1] https://www.knowyourmobile.com/phones/most-popular-
               | iphone-mo...
               | 
               | [2] https://en.wikipedia.org/wiki/Personal_income_in_the_
               | United_...
        
               | nicoburns wrote:
               | > So, what the consumer is really considering is "is the
               | Pro model worth $5-8/month more to me?" or "Would I pay
               | $200 extra over 2-3 years?" and I think that's an easier
               | justification for many people.
               | 
               | I find this attitude bizarre. It's not any cheaper! I
               | guess it can make a difference if you have cash flow
               | issues. But an iPhone Pro is decidedly a luxury, so if
               | you have cash flow issues then you probably just
               | shouldn't buy one?
        
               | [deleted]
        
         | paulmd wrote:
         | It absolutely was not. The A15/"M2" architecture isn't even
         | going to be on 3nm, it will be N5P, so only a "plus" of the
         | current node. There was absolutely no scenario where Apple was
         | on 3nm this year.
         | 
         | Incidentally this means that Apple will no longer have a node
         | advantage once Zen4 launches - both Zen4 and A15 will be on the
         | same node, so we can make direct comparisons without people
         | insisting that Apple's performance is solely due to node
         | advantage/etc.
         | 
         | But yeah, that does go to show that 3nm is slow to launch in
         | general - Apple would not willingly give up their node lead
         | like this if there were anything ready for an upgrade. I don't
         | think it's actually _falling behind_ in the sense that it was
         | delayed, but it seems even TSMC is feeling the heat and slowing
         | down their node cadence a bit.
         | 
         | Also, as far as this:
         | 
         | > Second, the recourse to two M1 Max chips fused into a M1
         | Ultra means TSMC's 5 nm process has reached its upper limit.
         | 
         | There is still Mac Pro to come, and presumably Apple would want
         | an actual Pro product to offer something over the Studio
         | besides expansion.
         | 
         | marcan42 thinks it's not likely that quad-die Mac Pros are
         | coming based on the internal architecture (there are only IRQ
         | facilities for connecting 2 dies), but that still doesn't rule
         | out the possibility of a larger die that is then connected in
         | pairs.
         | 
         | Also bigger/better 5nm stuff will almost certainly be coming
         | with A15 on N5P later this year, so this isn't even "the best
         | TSMC 5nm has to offer" in that light either.
        
           | caycep wrote:
           | What would be interesting in the far future is if, say, Zen 4
           | (or 5+) and M2(+) claim some of Intel's new foundry
           | capacity...the comparisons would be very interesting...
        
           | marcan_42 wrote:
           | > quad-die Mac Pros
           | 
           | I said quad-_Jade_ Mac anythings aren't coming because
           | _that_ die is only designed to go in pairs (that's the M1
           | Max die). Everyone keeps rambling on about that idea because
           | that Bloomberg reporter said it was coming and got it wrong.
           | It won't happen.
           | 
           | Apple certainly can and probably will do quad dies at some
           | point, it'll just be with a new die. The IRQ controller in
           | _Jade_ is only synthesized for two dies, but the
           | _architecture_ scales up to 8 with existing drivers (in our
           | Linux driver too). I fully expect them to be planning a
           | crazier design to use for Mac Pros.
        
         | BolexNOLA wrote:
         | This article is a little nauseating in its low key drooling
         | over Apple, but I think it articulates what you're saying
         | somewhat.
         | Basically build a product around the chip and move it down the
         | product line, double capacity at regular intervals.
         | 
         | https://www.theverge.com/22972996/apple-silicon-arm-double-s...
        
           | gumby wrote:
           | > This article is a little nauseating in its low key drooling
           | over Apple
           | 
           | The author ran Apple Europe and then moved to the US and was
           | an Apple VP for a long time. If anyone is allowed to have
           | this kind of attitude then it's reasonable in Gassee.
           | 
           | In people in general, it's...weird.
        
             | BolexNOLA wrote:
             | Wasn't aware of that - thanks for the context!
        
         | dom96 wrote:
         | Side question but I'm curious if anyone knows, where are we
         | heading with this constant decrease in nm for chip
         | manufacturing processes? When will we hit a wall and where will
         | gains in performance come from then?
        
           | barbacoa wrote:
           | >When will we hit a wall and where will gains in performance
           | come from then?
           | 
           | nm notation used to mean the width of the smallest feature
           | that could be made. Even today there are processes, such as
           | atomic layer deposition (ALD), that allow single-atom-thick
           | features. The differences between nodes now are in shrinking
           | macro features; you don't necessarily make the features
           | themselves smaller - what matters more is density. This is
           | currently done with 3D transistors (FinFET) and perhaps, in
           | the future, by going fully vertical. When all other
           | optimizations have been exhausted, we are likely to see
           | multiple layers of stacked transistors, similar to what they
           | are doing with NAND memory chips. Eventually even that will
           | hit a wall due to thermal limitations. Beyond that, people
           | have proposed carbon nanotube transistors. That tech is very
           | early but has been proven to function in labs. If we ever
           | figure out how to manufacture carbon nanotube chips, it will
           | be truly revolutionary; you could expect at least another 50
           | years of semiconductor innovation.
        
             | sharikous wrote:
             | > If we ever figure out how to manufacture carbon nanotubes
             | chips
             | 
             | That's the problem. We can't. All those technologies are in
             | such a primordial state, if they exist at all, that we
             | don't even know whether we will ever be able to use them
             | efficiently 20 years from now.
        
             | whazor wrote:
             | Besides shrinking transistors and increasing chip size,
             | there is another big problem that might cause us to hit a
             | wall: when the transistor count increases, so does the
             | design effort. This is a big problem because chips need to
             | be profitable.
             | 
             | Although if you just shrink the chips and keep the
             | transistor count the same, then you have a more energy
             | efficient chip. Which is especially useful for portable
             | devices.
        
           | sharikous wrote:
           | We have already hit the wall. Decreases have been minimal,
           | and they came at a high cost. The numbers you hear about
           | "7 nm", "5 nm", etc. are just false. They do not represent
           | anything real.
           | 
           | The real numbers have been around 20 nm for a decade. They
           | decreased a bit as Intel's competitors achieved better
           | lithography before Intel did. And we are in the realm of
           | tons of little tricks that improve density and performance -
           | nothing really dramatic, but there are still improvements
           | here and there. The tens of billions of dollars thrown at
           | research achieved them, but it is not comparable to the good
           | old days of the '80s, '90s and the '00s.
        
             | kllrnohj wrote:
             | > And we are in the realm of tons of little tricks that
             | improve density and performance - nothing really dramatic
             | but there are still improvements here and there.
             | 
             | I don't think that's fair. Density is still increasing
             | fairly substantially. Just going off of TSMC's own numbers
             | here:
             | 
             |     16nm:  28   MTr/mm2
             |     10nm:  52.5 MTr/mm2
             |      7nm:  96.5 MTr/mm2
             |      5nm: 173   MTr/mm2
             | 
             | Performance (read: clock speeds; but for transistors those
             | are one & the same) is not really increasing, though;
             | it has pretty much plateaued. And the density achieved
             | in practice doesn't necessarily keep up, as the density
             | numbers tend to be for the simplest layouts.
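             | 
             | The node-to-node scaling implied by those figures (same
             | numbers as above, just divided out):
             | 
             |     density = {"16nm": 28, "10nm": 52.5,
             |                "7nm": 96.5, "5nm": 173}
             |     nodes = list(density)
             |     for a, b in zip(nodes, nodes[1:]):
             |         print(a, "->", b, density[b] / density[a])
             |     # roughly 1.8-1.9x per full node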
        
             | paulmd wrote:
             | Yes, to emphasize: "nm" marketing is just marketing. There
             | is no dimension on a 5nm chip that is actually 5nm. It used
             | to represent gate length and half-pitch but that stopped
             | being true about 20 years ago and officially became
             | nonsense about 10 years ago; now it's "what size planar
             | node do we think these features would perform like".
             | 
             | Because it's all subjective now, companies went wild with
             | marketing, because consumers know "lower nm => better".
             | But, say, GF 14nm is much more comparable to Intel 22nm,
             | and GF 12nm is still solidly behind late-gen 14++, probably
             | more comparable to TSMC 16nm. Generally Intel has been the
             | most faithful to the "original" ratings, while TSMC has
             | stretched it a little, and GF/IBM and Samsung have been
             | pretty deceptive with their namings. Intel finally threw in
             | the towel a year or so ago and moved to align their names
             | with TSMC, "10nm ESF" is now "Intel 7" (note: no nm) and is
             | roughly comparable with TSMC 7nm (seems like higher clocks
             | at the top/worse efficiency at the bottom but broadly
             | similar), and they will maintain TSMC-comparable node names
             | going forward.
             | 
             | Anyway, to answer OP's question directly though, "what
             | comes after 1nm" is angstroms. You'll see node names like
             | *90A or whatever, even though that continues to be
             | completely ridiculous in terms of the actual node
             | measurements.
        
               | corey_moncure wrote:
               | 900 angstroms = 90 nanometers
        
               | akmarinov wrote:
               | Good bot
        
           | querulous wrote:
           | nm is purely a marketing term and has been for 10 years or 25
           | years depending on what you think it measures
           | 
           | future improvement is going to come from the same place it
           | mostly comes from now: better design that unlocks better
           | density and a revolutionary new litho process that as of yet
           | doesn't exist
        
           | jasonwatkinspdx wrote:
           | No one has a crystal ball, but here's the industry road map:
           | https://irds.ieee.org/editions/2021 (start with executive
           | summary).
           | 
           | TL;DR: things get really murky after a notional 2.1nm
           | generation. Past that we'll need a new generation of EUV
           | sources, advancements in materials, etc, that AFAIK are still
           | quite far from certain (but I am not an expert on this stuff
           | by any means).
           | 
           | I personally think we're headed for a stall for a while,
           | where innovation will focus mostly on larger packaging and
           | aggregation structures. Chiplets and related techniques are
           | definitely here to stay. DRAM is moving in-package. Startups
           | are playing around with ideas like wafer-scale
           | multiprocessors or SSDs. I think clever combinations of
           | engineering at this level will keep the momentum going for a
           | while.
        
           | monocasa wrote:
           | Like most curves that look exponential initially, Moore's
           | law turned out to be an S-curve. We're already on the top
           | half of that curve, where gains are increasingly difficult
           | and increasingly spread out over time. There's still a more
           | or less direct road map for another five or so full nodes,
           | and we'll probably come up with some cute ways to increase
           | density by other means.
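           | 
           | As a toy illustration of that shape (not a fit to any real
           | transistor data), a logistic curve grows near-exponentially
           | at first and then flattens toward a ceiling:
           | 
           |     # Toy S-curve: near-exponential early, flat later
           |     import math
           | 
           |     def logistic(t, K=100.0, r=0.5, t0=0.0):
           |         return K / (1 + math.exp(-r * (t - t0)))
           | 
           |     for t in (-10, -5, 0, 5, 10):
           |         print(t, round(logistic(t), 2))
           |     # Early values grow by roughly e^(r*dt) per step;
           |     # later ones level off just below K.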
        
         | GeekyBear wrote:
         | It looks like Qualcomm is bailing out of Samsung's leading edge
         | node, so competition for TSMC's leading edge node is higher
         | than ever with Intel, Apple and Qualcomm all in the running.
         | 
         | >Qualcomm has decided to switch back to TSMC for the Snapdragon
         | 8 Gen2 Mobile Platform. Samsung's 4nm process node is plagued
         | by a yield rate of as low as 35 percent.
         | 
         | https://www.techspot.com/news/93520-low-yield-samsung-4nm-pr...
        
         | auggierose wrote:
         | I was thinking the same. Once UltraFusion has been designed,
         | why not use it for 3nm later on as well?
        
       | AltruisticGapHN wrote:
       | I wonder how usable those new Apple displays will be with PCs or
       | Linux - as they add more and more builtin chips and software.
       | 
       | I have an old 27" LED Cinema which I used with a PC for many many
       | years, and then with Ubuntu native... and now back to the mac on
       | a Mac Mini.
       | 
       | I'm itching to replace it eventually with its "double pixel
       | density" big brother, which is essentially what this new Studio
       | Display is (exactly double of 2560x1440). Personally I love the
       | glass pane, and I really dislike those "anti-glare"
       | bubbly/grainy coatings I've seen on PC displays.
        
         | fuzzy2 wrote:
         | There's an article[1] on The Verge where an Apple spokesperson
         | talks about this. Apparently, you can use the camera (without
         | Center Stage) and the display. That's it. No 3D audio, no True
         | Tone, no updates.
         | 
         | Of course, the PC would need an appropriate USB-C connector
         | with support for 5K resolution.
         | 
         | [1]: https://www.theverge.com/2022/3/9/22969789/apple-studio-
         | disp...
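          | 
          | As a rough back-of-the-envelope check on why the connector
          | matters (assuming 8-bit colour and ignoring blanking,
          | protocol overhead and DSC compression):
          | 
          |     # Raw pixel bandwidth for 5K (5120x2880) at 60 Hz
          |     w, h, hz, bpp = 5120, 2880, 60, 24
          |     gbps = w * h * hz * bpp / 1e9
          |     print(round(gbps, 1))   # ~21.2 Gbit/s
          |     # DP 1.2 HBR2 payload is ~17.3 Gbit/s (too little);
          |     # HBR3 (~25.9 Gbit/s) over USB-C alt mode covers it.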
        
           | monitron wrote:
           | Nice, that's good enough for me.
           | 
           | Now if only I could find a box with a button on it to switch
           | the monitor between two or three computers, at full
           | resolution, retaining Power Delivery and attached USB
           | devices. I'd buy one right now.
        
             | kllrnohj wrote:
             | Why not just pick up the LG 5K or any number of 4K
             | displays that are cheaper (and better) than the Apple 27"
             | in that case, though?
             | 
             | This new Studio 27" isn't really a good display on its
             | own. It's an 8-year-old panel and is missing a host of
             | modern display upgrades like higher refresh rates,
             | variable refresh rates, HDR, or local dimming.
        
               | holmium wrote:
               | There is only one other monitor on the market with pixel
               | density and screen size equal to or greater than the
               | Studio 27", eight-year-old panel or not.[0] If you want
               | high PPI right now, you get this, or you buy the 16:9,
               | 60Hz 8K 32" Dell monitor and pray you can get the dual
               | DP cables to work with your setup.
               | 
               | Unfortunately, Apple no longer sells the LG Ultrafine 5K
               | [1], and no one knows if LG is even going to restock
               | them.[2] So, you'll have to find one used, and you'll
               | have to hope that LG continues to service this incredibly
               | flaky series of monitors when you inevitably run into an
               | issue.
               | 
               | On the flip side, if you don't care about the pixel
               | density, you could have bought any of the low res gaming
               | monitors, or 4k 28" monitors, or whatever other
               | ultrawide, low PPI monstrosity the market has coughed up
               | in the past eight years. They've been waiting this long
               | for a reason.
               | 
               | You are stuck choosing between those modern features you
               | listed and a >200ppi display. That is the state of the
               | market right now. Until Apple solves this issue and
               | charges you like $3,000 for the privilege later this
               | year.[3]
               | 
               | ----------
               | 
               | [0] https://pixensity.com/list/desktop/
               | 
               | [1] https://www.macrumors.com/2022/03/12/apple-lg-
               | ultrafine-5k-d...
               | 
               | [2] https://www.lg.com/us/monitors/lg-27md5kl-b-5k-uhd-
               | led-monit...
               | 
               | [3] https://www.macrumors.com/2022/03/10/studio-display-
               | pro-laun...
        
           | sbr464 wrote:
           | A USB-C to DisplayPort cable works well/reliably, with full
           | resolution support for the 6K Pro Display XDR (Windows).
        
           | cruano wrote:
           | I think some of the Nvidia cards do come with a USB-C port,
           | which should work [1]
           | 
           | > We also tried connecting a 4K monitor with a USB-C to
           | DisplayPort adapter, and that worked well - as expected.
           | 
           | [1] https://www.eurogamer.net/articles/digitalfoundry-2019-02
           | -28...
        
             | howinteresting wrote:
             | The USB-C connector was present with the 20xx series, but
             | was sadly abandoned for the 30xx series.
        
       | codeflo wrote:
       | Multiple dies integrated with an interconnect in a single
       | package is how you build big processors these days. AMD has
       | been doing the same thing for years, and I'm sure the largest
       | M2 will as well.
       | 
       | What I find interesting is that there's no desktop Mac with an
       | M1 Pro, leaving a gap between the entry-level M1 in the Mac mini
       | and the Mac Studio with its M1 Max.
       | 
       | For those who might not remember the full lineup: the M1 Pro
       | and M1 Max have the same CPU part; the main difference is the
       | number of GPU cores. For many CPU-bound applications, the Pro
       | is all you need.
       | 
       | I wonder if this is an intentional strategy to sell the more
       | expensive product or if it's supply related.
        
         | NhanH wrote:
         | There will probably be an updated Mac mini with M1 Pro and M1
         | Max soon, alongside the Mac Pro.
        
           | danieldk wrote:
           | Why would anyone buy the baseline Mac Studio if there was a
           | Mac Mini with the M1 Max (or even M1 Pro)?
        
             | klausa wrote:
             | I don't think there'll be a Mini with a Max, for exactly
             | the reason you mentioned, but going from Pro to Max gets
             | you more RAM, more I/O, more GPU, and more display
             | controllers, if any of those are things you care about.
        
         | hajile wrote:
         | I'd guess that there's a market for an iMac with an M1/M2 Pro
         | chip.
        
       ___________________________________________________________________
       (page generated 2022-03-14 23:01 UTC)