[HN Gopher] Apple's M4 has reportedly adopted the ARMv9 architecture
___________________________________________________________________
Apple's M4 has reportedly adopted the ARMv9 architecture
Author : rbanffy
Score : 138 points
Date : 2024-05-24 11:37 UTC (11 hours ago)
(HTM) web link (wccftech.com)
(TXT) w3m dump (wccftech.com)
| capl wrote:
| Hmm, I was thinking of buying an M3 Pro 16" this summer, but
| maybe I should wait then
| Waterluvian wrote:
| Apple's tempo is so regular that this has been a problem my
| entire adult life.
|
| I'll buy a little later. I'll buy a little later!
| capl wrote:
| Hehe yeah, same when I think about it... I guess the best
| thing to do is buy on launch and update every 2-5 years.
| spacebanana7 wrote:
| If staying on the latest model isn't important to you,
| there's significant cost savings to be made from just
| buying the previous version.
| asddubs wrote:
| so wait for M5 to buy M4
| MediumOwl wrote:
| It entirely depends on how long you keep your devices. I try
| to keep my iPhones until release year + 6, so I would
| need the price of a previous version to be reduced by
| more than 1/6th on a new version release, which is
| usually not the case.
| spacebanana7 wrote:
| Similar to cars, most depreciation happens in the first
| year.
|
| So owning a device for 6 years between age 1 and 7 will
| generally have a lower cost than owning a device between
| age 0 and 6.
|
| For Apple products it's generally feasible to effectively
| buy first hand devices aged 1+ because they're still
| available for sale (at least in some retailers) after a
| new edition is released.
| nothercastle wrote:
| That's a good strategy with most things that aren't prone
| to mfg variability. For cars, launch versions have a lot of
| initial manufacturing defects that need to be worked out.
| wombat-man wrote:
| Yeah, feels like if you don't buy in the first few months you
| might as well hang on for the next one.
| a-french-anon wrote:
| The way to exit that loop is to convince yourself that the
| next one will bring a truly lasting difference. Which is why
| I'm still waiting for GDDR7 GPUs with my 4GB RX 480.
| uptown wrote:
| Get the math co-processor! It'll rip.
| 2OEH8eoCRo0 wrote:
| That's why my desktop CPU is 11 years old.
| MrFantastic wrote:
| I've had several devices fail right before the new model was
| released.
|
| So frustrating.
| vundercind wrote:
| I just only buy every fourth or fifth _whatever it is_.
| Usually the previous model, too, when I do, sometimes used or
| official-refurb. Works great.
| yardie wrote:
| I'm still on an i7 MBP because every time I think I'm ready
| to update the next one is announced.
| 0_____0 wrote:
| These machines are great. I still use my 2015 rMBP as a
| secondary. It's a little slow now but a couple years ago I
| was still running Solidworks (in Bootcamp) on it with minimal
| issues.
| tmalsburg2 wrote:
| My wife is still using her 2012 MBP. We maxed out RAM and
| gave it an SSD in 2016. She uses it for video editing and
| music production. The thing looks like new. Completely
| ridiculous. Only downside: no OSX updates since I don't
| know when.
| jasomill wrote:
| You might find OpenCore Legacy Patcher[1] worth a look.
| In many cases, it allows later-than-supported macOS
| versions to be installed on older Macs.
|
| As a data point, I still use a 2013 Mac Pro as my primary
| desktop, and I've been using Sonoma on it for several
| months, have been able to install all Sonoma patches
| over-the-air on release without incident, and have only
| experienced a single, trivial problem: the right side of
| the menu bar occasionally appears shaded red, in a way
| that doesn't affect usability; switching applications
| immediately resolves the problem (the problem appears to
| be correlated with video playback).
|
| [1] https://dortania.github.io/OpenCore-Legacy-Patcher/
| manquer wrote:
| Video encoder/decoder support and performance have improved
| by an order of magnitude in the M series; I am surprised
| that didn't sway you.
|
| Not just that: for high-res footage, modern codecs like AV1
| or H.265 are probably not supported at all on a 2012 device
| that's gone without updates for so long.
|
| Even if support were possible it would be software
| encoding, and even a short clip can take hours to render.
|
| I would happily use an older device for development: for a
| lot of dev work, especially if it's not frontend or UI, I
| can use any laptop just as a terminal. But for UI or video
| editing I wouldn't be able to.
| lproven wrote:
| OCLP is your friend.
|
| https://dortania.github.io/OpenCore-Legacy-Patcher/
| jkestner wrote:
| I can't help but reply every time this thread comes up. I'd
| still probably be using my 2010 if it wasn't for a series of
| mechanical failures. Paid to replace the keyboard once (85
| screws, didn't need to do that to myself), but third
| battery crapping out, trackpad not clicking (probably due
| to swollen battery) and the MagSafe connector getting loose
| and glitchy was the end of it. Though I did just boot it up
| because my phone is somehow still supposed to sync music
| from it.
| mschuster91 wrote:
| If the battery is swollen, get rid of it as soon as
| possible. Swollen battery == ticking time bomb, and I'm
| not joking about the bomb part. These things can, do and
| _will_ explode randomly.
| gtirloni wrote:
| Besides maybe battery life (which is a huge win), is there
| anything you'd benefit from with the M's? I only had a 2008
| MacBook so I'm curious.
| benterix wrote:
| I can run uncensored models locally - slow but useable.
| OBFUSCATED wrote:
| Noise level, M series are completely silent in my
| experience.
| astrange wrote:
| llama.cpp somehow causes the fans to spin up pretty hard
| even if you just leave it at the prompt, but I assume
| that's performance bugs on their part.
| danielbln wrote:
| Overall, my 2020 M1 MBP is infinitely better than the 2015
| MBP I had before, it's not even close. Battery life,
| thermal output, speed, noise, neural engine (for ML
| workloads). It's an utter workhorse that just marches on,
| no matter what I throw at it. I haven't even considered
| upgrading to another more current Mx version because this
| one just... works. Best laptop I ever owned.
| danslinky wrote:
| I just want to echo this experience and sentiment. I
| absolutely adore my 2020 13" M1 mbp, for all the reasons
| you list. I do ML workloads and Linux builds and I'm
| starting to think they forgot to put fans in mine because
| I've never heard them! Despite the annoying limitation of
| 1 external screen, it's up there with my 2007 13" mb
| (rest in peace) as being the best laptop I've ever owned.
| gbear605 wrote:
| +1
|
| I recently upgraded from a 2019 Intel Mac to a similarly-
| specced M3 Mac, and it really is night and day. My
| battery life is more than doubled - I can run IntelliJ
| and multiple Docker containers on battery for more than
| my whole work day, when before it would barely last a
| couple hours with that load and be slow while doing so.
| The fan hardly ever runs while on my Intel Mac it would
| run constantly.
| hnrodey wrote:
| I upgraded from a 2011 MacBook Pro to a M1 MacBook Air and
| never looked back.
|
| Battery life, portability, COOL. Like, my actual lap is no
| longer burning.
| thfuran wrote:
| I mean, it won't do your laundry, but it'll be much better
| in every way a laptop plausibly could be.
| etempleton wrote:
| Battery life is insanely better. If you have not used one
| of the M series laptops it cannot be overstated how much
| better the battery life is. It is worth it for battery
| alone.
|
| But beyond that they are also incredibly fast and run cool.
| In the MacBook Air there is no fan and on the Pros they
| barely ever spin up in an audible way.
| philjohn wrote:
| The fans literally never come on for my personal M2 MBP
| 14" or on my work 16" M1 (it helps that the heavy lifting
| of running stuff and compiling happens on a dev server)
|
| While working from home during Covid I was still using an
| Intel MBP and video conferences invariably caused the
| fans to kick up to the point where using noise cancelling
| headphones and not the built in speakers was necessary
| for sanity.
| rbanffy wrote:
| I NEVER heard the fans on my 16" M2. Not even when
| building Docker images for six different platforms at the
| same time.
| ben7799 wrote:
| I went from the last Intel i9 16" MBP to an M3 Pro in the
| last month at work.
|
| I think it's saving me an hour a day: the fan has never
| come on, the laptop has never felt warm, and the
| battery life is just mind blowing.
|
| I run docker & compilers all day. The i9 would run the fan
| 75% of the time and had to throttle down any time it was on
| battery power and it was lucky to last 3 hours on battery.
| nomel wrote:
| I can definitely say there's a downside. I sometimes take
| the bus home, but it can get chilly at night. Previously, I
| would fire up a little python script that saturates all the
| cores, to warm my lap. My old Intel was _plenty_ warm to
| keep me from getting too uncomfortable. I can't even feel
| my M2 through my pants, and sticking it into my shirt makes
| me look like an idiot.
| astrange wrote:
| They make battery powered hand warmers for that, but it
| could make you infertile, or I guess set your pants on
| fire.
| opan wrote:
| If you care about Asahi support, an M1 or M2 would probably
| be better short-term anyway.
| lowbloodsugar wrote:
| I've got a top spec i9 MBP and my same-price M1 Max blows it
| out of the water, while being vastly cooler and lasting
| forever on battery.
| grecy wrote:
| I just upgraded from a mid 2014 MBP to a used M1 air.
|
| It is much, much faster, silent, and I use it for days
| without power. Editing 4K video is not just possible, it is a
| non event.
| tedivm wrote:
| I wait until the new release, and then look at the refurbished
| store to get a discount on the last generation model. I do this
| every four years or so.
| a13o wrote:
| I do this technique too, and it's a great time for it. The
| OLED screen on the new iPad signals that Apple devices are
| moving to a better panel. If you've been waiting for the
| right time to move off an Intel Mac and onto a SoC Mac, it's
| now. Pick up a refurbished M2 MacBook. They're in the sweet
| spot for support, power, and cost.
|
| The next one will probably have an OLED screen; so if you
| wait til then, your refurb M1/2/3 will be on Apple's short
| list of devices they don't want to support. (And you might
| have panel FOMO.) Or you'll have to pay the premium price for
| the latest model.
| rkuska wrote:
| That's actually a nice side effect of all the *rumors*
| pages. The rumors of future products keep me from buying
| the current products. I keep on using my previous products
| while saving money and the planet and being excited about
| what the future holds.
| repelsteeltje wrote:
| > The rumors of future products keep me from buying the
| current products.
|
| Spot on!
|
| Back in the nineties, Intel managed to push competing RISC
| architectures (UltraSparc, MIPS, DEC Alpha, PowerPC) out of
| the market using _nothing but_ promises that Itanium was
| going to blow them all out of the water.
|
| And apparently Apple is okay with procrastinating and
| cannibalizing current sales of M1, 2, 3 if it helps prevent
| some Snapdragon (or Ampere) sales.
| thfuran wrote:
| There may be a world in which Apple is procrastinating in
| chip design, but it's not this one.
| gumby wrote:
| > And apparently Apple is okay with procrastinating and
| cannibalizing current sales of M1, 2, 3 if it helps prevent
| some Snapdragon (or Ampere) sales.
|
| Not sure where "procrastinating" fits in (a typo?), but as
| Scott McNealy once said, "If someone shows up and eats our
| lunch, it might as well be us."
| ruined wrote:
| >And apparently Apple is okay with procrastinating and
| cannibalizing current sales of M1, 2, 3 if it helps prevent
| some Snapdragon (or Ampere) sales.
|
| sales of what
|
| i actually can't think of a single competing product.
| admittedly i don't keep up with laptop news but still, i
| haven't heard of anything yet that can meaningfully compete
| with the m1 from four years ago
| nolongerthere wrote:
| Microsoft just announced some lackluster arm laptops that
| they claim can compete with M-series chips. The question
| is what windows programs are gonna run on them...
| fl0ki wrote:
| Some folks have looked into it, and it doesn't sound too
| bad.
|
| https://www.youtube.com/watch?v=uY-tMBk9Vx4
|
| For me at least, the best possible outcome of this is
| that Windows handheld gaming devices become more power-
| efficient. That might be an advantage over Linux-based
| handhelds for a while, unless Valve decide that Proton
| needs to also be an architecture emulator. The chip
| efficiency wins must surely be tempting in this form
| factor.
| Tagbert wrote:
| Some people have been running Windows 11 for Arm in a VM
| on Apple Silicon. It has an automatic translation layer
| that translates most x86 code at launch. It seems to run
| many apps well. Microsoft claims these new machines have a
| better translator. This might work.
| rbanffy wrote:
| Your question answers itself. "What Windows programs" is
| the key part.
|
| I don't have any need for any Windows-only program.
| sitkack wrote:
| https://en.wikipedia.org/wiki/Osborne_effect
| Kon-Peki wrote:
| On the contrary, I think that the reliable update cadence
| in modern electronics means that people should generally
| all but ignore future product roadmaps.
|
| When you actually _need_ to get a new device, just get
| whatever the up-to-date thing is.
|
| OK, ok, I suppose that it's reasonable to check the rumor
| sites to see if you should delay by a month or two. But not
| any longer than that.
| rbanffy wrote:
| It's much harder with PCs, where you can get, for
| instance, new ThinkPads with anything from 11th-gen Core
| i all the way to new Core Ultras. And, now, ARMs as
| well...
| usefulcat wrote:
| > The rumors of future products keep me from buying the
| current products.
|
| For myself, I like to think of it as applied procrastination.
| I _could_ buy that new thing I want today... but something
| better will come along in time, so I can afford to put it off
| a while longer yet..
| thisislife2 wrote:
| > The rumors of future products keep me from buying the
| current products.
|
| You may have heard of the 5-minute rule - _"Will doing this
| take me less than 5 minutes? If the answer is yes, do it
| now."_ An adaptation of that to reduce impulse purchases is -
| _"Do I really need this product right now? If the answer is
| no, don't buy it."_
| tonyarkles wrote:
| And on the flip side I am generally hesitant to buy first-
| release Apple hardware. Over the 20 years I've been buying
| Apple kit I've generally found it to be exceptionally robust
| but newly released hardware has had enough bugs (either
| hardware or OS) that I just sit back and let other users find
| the issues first. But I do simultaneously have the same
| issue: if WWDC is coming up within a month or two I'm not
| going to be buying any hardware because there's a good chance
| that something new will be released or the hardware I was
| going to buy is going to get a refresh or a price drop.
| throw0101d wrote:
| > _this summer_
|
| In recent years the MBP line has been updated towards the end
| of the year (Oct/Nov) or early in the new year (Jan):
|
| * https://buyersguide.macrumors.com/#MacBook_Pro_16
|
| So if you can 'limp' along towards the autumn/winter/Christmas,
| then it's probably worth the wait to get the M4 (or pickup an
| M3 when the price presumably drops to clear inventory).
| andy_ppp wrote:
| I just bought a second hand M2 Air in perfect condition and it
| feels faster than my M1 Max in a really beautiful body for
| travel. I'm not certain it matters that much anymore to be
| honest. What are you using it for?
| zorrn wrote:
| This is so real. I have this exact problem... but I think I'm
| just buying a refurbished MacBook Air M2 13"
| pmontra wrote:
| Osborne effect https://en.wikipedia.org/wiki/Osborne_effect
| deergomoo wrote:
| Since the move to Apple Silicon you are realistically never
| more than 12-18 months away from a new chip generation in a
| MacBook. An M1 is still plenty good for the vast majority of
| workloads, especially if it's an M1 Pro/Max/Ultra.
|
| Actually probably the best thing to do is wait until the M4
| machines launch then bag a good deal on a clearance M3.
| rbanffy wrote:
| The next ones to get M4 will probably be the Mini, the
| Studio, and the Pro. iMac and MacBooks got an M3 refresh, but
| the other desktops have M2s now.
| alwillis wrote:
| For those with MacBook Pro FOMO, _do not_ read about the
| rumored foldable 18.8-inch screen MacBook Pro running on the M5
| coming in 2026 [1].
|
| [1]: https://www.macrumors.com/2024/05/23/18-8-inch-foldable-
| macb...
| JulianWasTaken wrote:
| Without anything but a skim: surely there is no way a MB
| _Pro_ ships with a virtual keyboard; that would be pure
| torture.
| solardev wrote:
| Maybe it uses the camera for gesture recognition so you can
| air-write each letter one at a time? Air-quotes will be
| fun... air-tabs, not so much. "Space, but <widens arms>
| BIGGER!"
| dbspin wrote:
| Obviously Apple's upcoming LLM will be used to infer
| based on observation of your past behaviour what you
| would have typed, and type it for you.
| jasomill wrote:
| So basically the next generation of
|
| https://www.youtube.com/watch?v=R8gF0KTfMrQ
| LordDragonfang wrote:
| If anything, they'll probably use the "studio" branding (or
| more likely just have it under the iPad line, since they
| have desktop chips in them now anyways)
| cjk2 wrote:
| Kill me with a blunt spoon before I take on a foldable touch
| screen.
| yumraj wrote:
| The _rumored_ price is enough to make me not worry about
| those.
| brigade wrote:
| Inflation still has a chance to make $3500 in 2026 dollars
| equivalent to the starting price of the 2016 16" MBP
| threeseed wrote:
| macOS does not support touch input.
|
| It would require the biggest UI redesign in the history of
| the company to ensure every input control is at least a
| centimetre away from anything else.
|
| And would require every Mac developer to absorb the cost for
| major updates to their apps as well.
|
| This would almost certainly be an iPad.
| meindnoch wrote:
| So binary sizes are going to double?
| phkahler wrote:
| >> So binary sizes are going to double?
|
| If you're already supporting 2 architectures it will only
| increase by 50 percent to support a 3rd ;-)
| georgeburdell wrote:
| IME Mac binaries are much smaller than Windows for your
| average third party software. I don't know why.
| ben7799 wrote:
| Windows binaries often seem to have excess statically
| linked libraries, even though they are called DLLs, which
| is supposed to mean dynamic. They might be loading them
| dynamically but they still seem to have decided to include
| their own private copy.
|
| I've even seen windows binaries have multiple different
| versions of the same DLL inside them, and it's a well known
| DLL that is duplicated multiple places elsewhere.
|
| All OSes/Apps do this but maybe a lot of Mac apps do it a
| little less. (I don't even have any real statistical idea
| how common this is with Windows apps either.)
| tonyarkles wrote:
| From having worked on Windows, OSX, and Linux desktop
| software over the years, there are a few factors at play off
| the top of my head (a sketch of the first follows the list):
|
| - Windows DLLs don't usually have strong versioning baked
| into the filename. On OSX or Linux, there's usually the
| full version number baked in (libfoo.so.3.32.0) with
| symlinks stripping off version components. (libfoo.so,
| libfoo.so.3, libfoo.so.3.32) would all be symlinks to
| libfoo.so.3.32.0 and you can link against whichever
| major/minor/patch version you depend on. If your Windows
| app depends on a specific version it's going to be
| opening DLLs and querying them to find out what they are.
|
| - Native OSX software (not Electron) seems to depend much
| less on piles of external libraries because the OSX
| standard library is very rich and has a solid history of
| not breaking APIs and ABI across OS versions. While eg
| CoreAudio is guaranteed to be installed on an OSX install
| and be either compatible or discoverably-incompatible,
| the version of DirectSound you're going to have access to
| on Windows is more of a crapshoot.
|
| - Windows apps (except for the .Net runtime sometimes)
| are often designed for longevity. A couple of months ago
| I installed some software that was released in 1999 on my
| Windows 11 machine and it just worked. Bundling up those
| DLLs is part of why they work.
|
| - Linux apps can rely on downstream packaging to install
| the necessary shared libraries on demand, generally
| speaking. Linux desktop apps distributed as RPMs or DEBs
| can "just" declare which libraries they need and get them
| delivered during install.
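|
| To make that first point concrete, here's roughly what
| loading by major version looks like from C (libfoo and
| foo() are made-up names, purely for illustration):
|
|     #include <dlfcn.h>
|     #include <stdio.h>
|
|     int main(void) {
|         /* Ask for the major version only; the symlink chain
|            (libfoo.so.3 -> libfoo.so.3.32.0) resolves the
|            exact build installed on this system. */
|         void *h = dlopen("libfoo.so.3", RTLD_NOW);
|         if (!h) { fprintf(stderr, "%s\n", dlerror()); return 1; }
|         int (*foo)(void) = (int (*)(void))dlsym(h, "foo");
|         if (foo) printf("foo() = %d\n", foo());
|         dlclose(h);
|         return 0;
|     }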
| ziml77 wrote:
| On Windows isn't it possible to have the OS deal with the
| DLL version issue by using side-by-side assemblies? I
| believe in practice that's only ever used by DLLs
| provided by the OS, but I thought it was possible to
| apply the mechanism to other DLLs as well.
| Hamuko wrote:
| The most impressive I've seen has been four-arch support in a
| single bundle: PowerPC, 32-bit x86, 64-bit x86 and ARM64.
| pasc1878 wrote:
| OpenStep had four architectures as a default: hppa, sparc,
| i386, and m68k. I often built stuff on an HP for production
| use on Intel and 68000 boxes, and I think they also had
| unreleased m88k support at the same time, so internally
| they might have had five-way binaries.
| JKCalhoun wrote:
| And you'd like to think the binaries are still not the
| largest component of an app contributing to the file size.
| But who knows these days.
| DaiPlusPlus wrote:
| It's the static-linking of the Swift runtime - it's
| incredibly inelegant.
| saagarjha wrote:
| The Swift runtime is not statically linked on Apple's
| platforms
| kenferry wrote:
| (Anymore)
| HeatrayEnjoyer wrote:
| How so?
| fl0ki wrote:
| This is more like supporting AVX512 than a whole separate
| architecture. If you have to target both old and new devices
| from one binary, you do a runtime feature check and call the
| corresponding code.
|
| That is certainly more code, but not double. You only need it
| for the parts of the code that are both (a) bottlenecks worth
| optimizing and (b) actually benefit by using the new
| instructions.
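|
| A minimal sketch of that dispatch pattern on macOS (the
| sysctl key and the two process_* functions here are
| illustrative assumptions, not any particular app's real
| code):
|
|     #include <stdbool.h>
|     #include <stddef.h>
|     #include <sys/sysctl.h>
|
|     void process_sme(float *d, size_t n);   /* hand-tuned path (hypothetical) */
|     void process_neon(float *d, size_t n);  /* baseline path (hypothetical) */
|
|     /* Returns false wherever the key doesn't exist. */
|     static bool has_feature(const char *name) {
|         int v = 0;
|         size_t len = sizeof v;
|         return sysctlbyname(name, &v, &len, NULL, 0) == 0 && v != 0;
|     }
|
|     void process(float *d, size_t n) {
|         if (has_feature("hw.optional.arm.FEAT_SME"))
|             process_sme(d, n);
|         else
|             process_neon(d, n);
|     }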
| rbanffy wrote:
| And, in modern desktop software, code is a tiny bit of the
| total size of the application - visual elements tend to
| occupy a lot more space than code.
| dumbo-octopus wrote:
| In games, perhaps. In basically nothing else though. And I
| have 0 games on my M1, yet my apps folder is 23 GB. Docker,
| Edge, and SketchUp are all 2+ GB, despite having hardly any
| UI to speak of.
|
| (Edit to remove iMovie from the list, as it has GB's of
| "Transitions" and "Titles" that I really should just
| delete)
| parl_match wrote:
| No, in pretty much everything.
|
| All three examples you gave have substantial UI and other
| bundled assets. For example, the Docker Desktop app is
| about 2GB on my computer, yet included assets make up at
| least 1.2GB, and a further 600MB is a bundle containing
| the UI, which itself is about 100MB of binaries.
|
| If you actually open those bundles (as they're called on
| macOS) and take a look inside, you'll see that they don't
| even contain all of their assets, anyways, often linking
| to frameworks contained in ~/Library
|
| This is a very layperson explanation, btw, but I assure
| you that "in modern desktop software, code is a tiny bit
| of the total size of the application" is a very true
| statement.
| hajile wrote:
| ARMv9 is just ARMv8.5 with 4 extra extensions. It's not a
| complete overhaul like the ARMv7 to ARMv8 change was.
|
| It's more comparable to x86 chips with AVX-512 and chips
| without AVX-512. 99% of your code is the same, but the compiler
| will generate SSE, AVX, and AVX-512 variants and choose the
| correct one based on the CPU.
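|
| On x86, GCC and Clang expose that mechanism directly as
| function multi-versioning; something like this (scale is an
| invented name):
|
|     /* The compiler emits one clone per target plus a resolver
|        that picks the best one at load time for the running CPU. */
|     __attribute__((target_clones("avx512f", "avx2", "sse4.2", "default")))
|     void scale(float *x, int n, float k) {
|         for (int i = 0; i < n; i++)
|             x[i] *= k;  /* auto-vectorized differently per clone */
|     }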
| 201984 wrote:
| Are there any extensions that ARMv9 is required to have? I'm
| looking through the reference manuals and those 4 extra
| extensions are all marked as "OPTIONAL" for ARMv9.
| hajile wrote:
| I believe it requires the ARMv8.5 instruction sets (but
| maybe some of those are optional too?)
| Aaargh20318 wrote:
| The most incredible thing about the new iPads is that even with
| the crazy fast M4 chip MS Teams manages to crawl to a halt.
| Clearly it takes all the engineering skills of the largest and
| most valuable software company in the world to make text entry go
| at about 1 fps on a chip as powerful as the M4.
| akmarinov wrote:
| That's usually what cross platform at an enterprise company at
| scale gets you
| chongli wrote:
| MS Teams is by far the worst piece of software I've ever used.
| It is ungodly slow and it just gets slower the more you use it.
|
| I believe it is actually hitting the server to update the
| online/away status light for every single message in a
| conversation. If you turn off all the status update stuff in
| the settings then the software speeds up dramatically. Another
| thing you can do is find the folder where it caches everything
| and just trash the entire thing. Somehow, they've managed to
| make caching slow everything down rather than provide a speed
| up.
| Dalewyn wrote:
| https://en.wikipedia.org/wiki/Wirth%27s_law
| treyd wrote:
| Why is that not at least done asynchronously? I thought part
| of the whole narrative of shipping these new terrible pieces
| of software as standalone Google Chrome instances was that it
| makes it easier to spawn async JS workers for background
| tasks and whatnot?
| chongli wrote:
| I think that's what it's doing. If you have a conversation
| with a person spanning hundreds of messages (over many
| weeks) it'll be updating the status light next to their
| name on every single message in the history. The more
| messages in the history, the more workers you get!
| bluGill wrote:
| async is still difficult. There is no getting around data
| synchronization issues. Either you spend a lot of
| time in design or you get constant problems with things
| like not having a mutex when you should, mutex deadlock,
| holding a mutex too long, or locking/unlocking too often.
|
| I haven't done async JS, but I've done enough async
| elsewhere to know that language cannot work around bad
| design.
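|
| The classic case, sketched in C since that's where I've
| been bitten (names invented for illustration):
|
|     #include <pthread.h>
|     #include <stddef.h>
|
|     pthread_mutex_t a = PTHREAD_MUTEX_INITIALIZER;
|     pthread_mutex_t b = PTHREAD_MUTEX_INITIALIZER;
|
|     /* worker1 takes a then b; worker2 takes b then a. If each
|        grabs its first lock before the other's second, both
|        block forever. The fix is design, not language: one
|        agreed-upon lock order everywhere. */
|     void *worker1(void *arg) {
|         (void)arg;
|         pthread_mutex_lock(&a);
|         pthread_mutex_lock(&b);
|         pthread_mutex_unlock(&b);
|         pthread_mutex_unlock(&a);
|         return NULL;
|     }
|
|     void *worker2(void *arg) {
|         (void)arg;
|         pthread_mutex_lock(&b);
|         pthread_mutex_lock(&a);   /* deadlock window */
|         pthread_mutex_unlock(&a);
|         pthread_mutex_unlock(&b);
|         return NULL;
|     }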
| xnx wrote:
| I thought this was mostly/fully solved with CRDTs?
| lucianbr wrote:
| You still need to design a CRDT that solves your
| particular problem, you don't just say the magic word
| "CRDT" and the problem is gone. And the performance will
| depend on how good the design is.
| beanjuiceII wrote:
| hmm I use teams daily all day for work and don't have these
| issues, maybe your org messed something up?
| chongli wrote:
| I have used it at two different orgs (a university and a
| company) and it had the same issue on both. Perhaps your
| org has the settings figured out!
| dagmx wrote:
| It is also possible that you don't notice the slowdowns?
|
| Teams installed directly from the App Store or from
| Microsoft on any device I own (high-end Windows/Mac/iPad)
| is terribly slow
| nunez wrote:
| If you think Teams is bad, then you haven't had to endure
| Google Chat.
| birdman3131 wrote:
| Can't have technical debt if you throw it all away every
| couple years and start over.
|
| Work started off using Google Talk. I am expecting
| something to replace Google Chat like they did Talk and
| Hangouts.
| lostlogin wrote:
| Have you tried 'New Teams'?
|
| It's exactly the same, but then you get to have it open
| twice and drain your resources much quicker.
| z500 wrote:
| And as a bonus, Outlook stops reporting presence!
| heroprotagonist wrote:
| Every character you type results in some sort of hit to their
| telemetry server. It will include the actual letter you typed,
| if you or your org are not configured to be in the EU. With
| their EU configuration option (pulled from the server every
| launch) it
| will only report the fact that you typed _something_.
|
| Now if that's not fun enough, their telemetry also covers mouse
| movements. Go ahead and watch your CPU as you spin your mouse
| in circles around the Teams window.
|
| For extra fun, block their telemetry server and watch Teams
| bloat in RAM, to as much as your system has, as it keeps every
| action you take in local memory as it waits for the ability to
| talk to that telemetry server again.
|
| If you're going to block their telemetry it's best to fake
| an accept via some MITM proxy and send back a 200 code.
|
| I do not know exactly how much this applies to the iPad
| version, compared to their desktop apps. Mobile offers both
| more and less data possibilities. It's a different context.
| Aaargh20318 wrote:
| I'm in the EU and should have the EU configuration.
|
| The problem mainly occurs when I mention someone in a reply
| to a thread. Once I type @<name> the text input just slows
| down so much I can type much faster than it can render the
| text.
| heroprotagonist wrote:
| It's still going through the same telemetry action, it just
| omits the actual character you typed. And yes, it is
| character by character. That collection (eg, the timestamp
| you hit the character, channel/person it was to, etc) is
| the inefficiency causing your typing to slow.
|
| If it was a straight text box whose contents they polled
| _occasionally_ or after you hit 'send', it would be a much
| better user experience.
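|
| Even simple batching with a timer would do it; a sketch
| (send_telemetry and the constants are invented, not Teams'
| actual internals):
|
|     #include <stddef.h>
|     #include <time.h>
|
|     void send_telemetry(const char *buf, size_t len);  /* hypothetical */
|
|     #define FLUSH_MS 2000
|     static char pending[4096];
|     static size_t pending_len;
|     static long last_flush;
|
|     static long now_ms(void) {
|         struct timespec ts;
|         clock_gettime(CLOCK_MONOTONIC, &ts);
|         return ts.tv_sec * 1000L + ts.tv_nsec / 1000000L;
|     }
|
|     /* At most one network-bound event every FLUSH_MS,
|        instead of one per keystroke. */
|     void on_keystroke(char c) {
|         if (pending_len < sizeof pending)
|             pending[pending_len++] = c;
|         if (now_ms() - last_flush >= FLUSH_MS) {
|             send_telemetry(pending, pending_len);
|             pending_len = 0;
|             last_flush = now_ms();
|         }
|     }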
| zamadatix wrote:
| As much as I dislike overbearing telemetry, on an M4 or
| even an N95 the computer should be more than capable of
| logging a few kB of telemetry about inputs per second
| without even a blip in the performance statistics. The
| problem remains that every single thing in the app is
| implemented god-awfully slowly, and it would still be slow
| whether or not it's also recording telemetry on the input
| data.
| Klonoar wrote:
| ...do you have any source verifying this? Setting aside
| how insane of an issue it'd be network-wise/privacy-
| wise/etc, this is like a day one "debounce the call" fix.
|
| I'm not even saying I doubt you, I'm just curious how you
| ascertained this exact behavior.
| torginus wrote:
| Haha what exactly does it protect me from? If they see
| that I typed something to someone, and see the chat
| history of me having sent a particular message at a
| particular time, it doesn't take a genius to put things
| together.
| zorrn wrote:
| Do you have a source for this?
| aaomidi wrote:
| It'd be trivial to MITM it with Wireshark
| zamadatix wrote:
| What's the easiest way to get the Teams app to accept the
| MITM on TLS?
| aaomidi wrote:
| I believe it just uses your system's root store. So adding
| the signer cert in there should probably be enough.
| zamadatix wrote:
| Ah, that's right, Chrom* based things look at the system
| store by default and it's the Firefox based things that
| don't (without configuration at least). Thanks.
|
| Edit: and that reminds me I should probably run this test
| on new Teams, where it now uses the built in WebView2
| transpute wrote:
| On iOS, Charles Proxy.
| runjake wrote:
| For those interested, try Burp Proxy[1] or Charles
| Proxy[2].
|
| 1. https://portswigger.net/burp/documentation/desktop/get
| ting-s...
|
| 2. https://www.charlesproxy.com/
| causal wrote:
| I really want one too. This is the closest thing I could
| find, but doesn't claim the keystroke level of detail
| described above: https://www.zdnet.com/article/i-looked-at-
| all-the-ways-micro...
| aaomidi wrote:
| This is so stupid. They are the sender and receiver of the
| messages. They can backfill their telemetry using batch
| processes offline ffs.
| rbanffy wrote:
| Unless someone complains loudly enough that large orgs
| switch to a competitor, it's tech debt and not a bug they
| need to fix.
| SllX wrote:
| So they're running a keystroke logger and masquerading it as
| "telemetry"? That should be outlawed. It's not a drafts
| feature, it's not an online word processor, it's just a
| straight up keystroke logger.
| fredley wrote:
| It is outlawed, in the EU.
| yegle wrote:
| Your description is so absurd that I can't tell if it's real
| or a satire.
|
| Please tell me this is a satire piece...
| vundercind wrote:
| Asana used to sometimes have textareas that would take a
| full three seconds to display each key press. On then-
| current MacBook pros. You know, something that had nearly
| zero latency on first-gen single-core Pentium chips. Hell
| it may still do that, I never saw them fix it, I just
| finally got to stop using it.
|
| Never underestimate the ability of shitware vendors to make
| supercomputers feel slower than an 8086. These days it
| usually involves JavaScript, HTML, and CSS.
| lucianbr wrote:
| I had the exact same feeling. Can't tell if it's real or a
| joke. It's not only outrageous for privacy, but also very
| bad engineering.
| rbanffy wrote:
| It is the kind of engineering Teams feels like.
|
| A glorified IRC client should run in under a megabyte of
| memory.
| spamizbad wrote:
| No clue why they're even doing this and not just sampling
| after the fact. There's no way they are gleaning anything
| useful that they couldn't more efficiently (and anonymously)
| capture.
| mft_ wrote:
| LOL, you wanna try it on my ~2 generations old Core i5
| corporate laptop. Sometimes, the first steps of drawing the
| calendar view are roughly the same speed as me drawing it in
| Paint.
|
| Maybe someone should normalise giving developers crappy laptops
| to develop on.
|
| (Has anyone done a deep dive into Teams to explain what on
| earth is going on? I mean, if VSCode can be fast despite its
| underlying architecture, surely something could be done about
| Teams?)
| pmontra wrote:
| Do companies using Teams have a choice of using something
| else or are their C*Os and IT departments married to
| Microsoft? If the latter, they'll use whatever Microsoft
| throws at them, even if it doesn't work.
| deergomoo wrote:
| Teams' USP is that it's part of the suite corporations are
| almost universally already paying for.
| phkahler wrote:
| >> Maybe someone should normalise giving developers crappy
| laptops to develop on.
|
| Then the developers will complain the _hardware_ is unusable
| to do their job even though it was a supercomputer
| back in the day. Then you say "No, it's the software, please
| fix it."
| intelVISA wrote:
| Weird, a statically built ELF that supports TLS1.3 + HTTP1.1
| is like ~30kb, all you need is Emacs as a UI and you have
| Teams 2 at 1.0e-5% resource usage.
| deergomoo wrote:
| Yup, my work machine has an older i7 (2021-era maybe?) with
| 32GB RAM and between Teams, Slack, "new" Outlook, Jira, WSL,
| driving a 4K display off the piddly integrated GPU, and a VPN
| that involves every packet doing a transatlantic roundtrip
| whenever I want to connect to an internal service, everything
| is just dog slow. And the fan noise--my god the fan noise.
|
| Some days it makes me extra motivated to make the code I
| write fast and efficient; other days I want to give up
| entirely.
| rkangel wrote:
| Teams on the desktop has improved with their "v2" client. It's
| not the world's fastest piece of software, but I find it to not
| be embarrassingly slow now (on a reasonably specced machine).
|
| One has to hope that the same performance lens will now be
| turned on the mobile apps.
| meindnoch wrote:
| The wonders of web development.
| jandeboevrie wrote:
| Try Pidgin with the excellent ms teams plugin:
| https://github.com/EionRobb/purple-teams - less than 100MB
| RAM usage and notifications that still work after an hour.
| Only for (video) calls do you need to open Teams.
| DeathArrow wrote:
| That's the power of Javascript!
| MarkSweep wrote:
| What's the modern form of "what Andy giveth, Bill taketh away"?
| "What Tim giveth, Satya taketh away"?
|
| https://en.m.wikipedia.org/wiki/Andy_and_Bill%27s_law
| ko27 wrote:
| It seems that M4 was overhyped. Almost all of the performance
| improvements, in Geekbench for example, come from new
| instructions that most apps won't use, and even if they do they
| might end up using the faster GPU/NPU for those tasks.
|
| https://twitter.com/toniievych/status/1788596920627118248
| talldayo wrote:
| > Technically, Intel has its matrix extensions (Intel AMX), but
| Geekbench does not support it.
|
| Lmao, and people say Geekbench _isn't_ biased towards ARM
| ribit wrote:
| Geekbench supports Intel AMX and AVX-512. This is all in the
| GB documentation.
| dragonelite wrote:
| Makes one wonder whether the Apple miracle has mostly been
| the transition to ARM and having access to TSMC's highest-end
| nodes before the rest even comes into the picture. But I'm
| glad new competition is coming from Qualcomm's X Elite and
| Huawei with their Kirin and Ascend chips. Hopefully the ARMs
| race will be more interesting to follow than the x64 race
| between Intel and AMD.
| hajile wrote:
| Oryon was designed to compete with M1 then the clockspeeds
| were ramped up to compete with M2. M3 clearly beat it out and
| M4 has only furthered that lead.
|
| Oryon will still probably beat x86 designs massively in
| performance per watt which is pretty much the most important
| metric for most people anyway (as most people use laptops).
|
| EDIT: your username `dragonelite` is quite interesting. You
| joined 2019, but the coincidence is fascinating.
| sgerenser wrote:
| It's a decent, but not revolutionary improvement. Yes, most of
| the gains outside of SME are coming from clock increases not
| IPC. I don't know if I would call it overhyped, more like
| misunderstood.
| ribit wrote:
| No, per-clock performance improvements between M3 and M4 range
| from 0% to 20%, this is ignoring the two subtests that benefit
| from SME. That Twitter post is moot. GB results show high
| variation, it is easy enough to cherry pick pairs of results
| that show any point you might want. You have to compare result
| distributions. There were some users on anandtech forums who
| did it and the results are very clear.
| alberth wrote:
| I truly never understood why Apple deprecated Bitcode.
|
| It was a super great idea because it allowed recompilation on
| the App Store to take advantage of new instructions.
| sgerenser wrote:
| SME is very specialized, right now no compiler (that I know
| of) is really able to take general-purpose code and output
| optimized SME. So for these instructions at least, bitcode
| wouldn't be of any benefit.
| plorkyeran wrote:
| Bitcode did not allow recompilation to take advantage of new
| instructions. They dropped bitcode because they never
| actually managed to do anything with it other than the armv7k
| to arm64_32 recompilation, and that required specifically
| designing arm64_32 around what was possible with bitcode.
|
| Updating apps to use new vector instructions is far more
| complicated than upgrading to a new compiler version and
| having it magically get faster.
| astrange wrote:
| Autovectorization doesn't work without extreme levels of
| handholding, so the optimization idea was basically a myth.
| ribit wrote:
| They have not adopted ARMv9. This is still ARMv8, but with SME.
| axoltl wrote:
| Yep, the binaries are all arm64e.
| saagarjha wrote:
| This doesn't really say much
| hajile wrote:
| ARMv9.0 is very similar to ARMv8.5 (9.0 supersets 8.5 with
| SVE2, TME, TLA, and CCA), so it's not a massive deal. SME
| implies v8.7 which is basically identical to v9.2 except for
| those couple extensions previously mentioned.
|
| I wonder if there is licensing at play though. Apple may have
| gotten a really great licensing deal on ARMv8 that they
| wouldn't be offered for ARMv9.
| skavi wrote:
| Does anyone have insight into why arm CPU vendors seem so
| hesitant about implementing SVE2? ~They seem~ *Apple seems to
| have no issue with SSVE2 or SME.
|
| Edit: Only Apple has implemented SSVE and SME I think.
| hajile wrote:
| SVE2 is an extension on top of SVE which some stuff already
| implements. The issue is more likely to be the politics of
| moving to ARMv9 than anything else.
|
| As to SVE though, I'd guess variable execution time makes
| the implementation require a bit of work. Normally, multi-
| cycle tasks have a fixed cycle count. Your scheduler knows
| that MUL takes N cycles and plans accordingly.
|
| SVE seems like it should require N-M cycles depending on
| what is passed. That must be determined and scheduled
| around. This would affect the OoO parts of the core all the
| way from ordering through to the end of the pipeline.
|
| That's definitely bordering on new uarch territory and if
| that is the case, it would take 4-5 years from start to
| finish to implement. This would explain why all the ARMv8
| guys never got around to it. ARMv9 makes it mandatory, but
| that was released in 2021 or so which means non-ARM
| implementors probably have a ways to go.
| skavi wrote:
| This isn't a convincing explanation to me. There are
| plenty of variable latency instructions on existing high
| performance arm64 cores.
| dzaima wrote:
| SVE doesn't need variable-execution-time instructions,
| outside of perhaps masked load/store, but those are
| already non-constant. Everything else is just traditional
| instructions (given that, from the perspective of the
| hardware, it has a fixed vector size), with a blend.
| ribit wrote:
| What do you mean? Apple is the only one who has an SME/SSVE
| implementation.
| skavi wrote:
| I misremembered. Looks like it is only Apple. I
| appreciate the correction.
| brigade wrote:
| What is the _measurable_ benefit to implementing 128b SVE2?
| Like, ARM has CPUs that implement that, and it's not even
| disabled on some chips. So there must be benchmarks
| somewhere showing how worthwhile it is.
|
| And implementing 256b SVE has different issues depending on
| how you do it. 4x256b vector ALUs are more power hungry
| than generally useful. 2x256b is only beneficial over
| 4x128b if you're limited by decode width, which isn't an
| issue now that A32/T32 support has been dropped. 3x256b
| would probably imply 3x128b which would regress existing
| NEON code. And little cores don't really want to double the
| transistors spent on vector code, but you can't have a
| different vector length than the big cores...
| skavi wrote:
| Masked instructions primarily. But apart from that it's
| just a more complete ISA vs NEON. More comparable to
| AVX512/AVX10.
|
| > 2x256b is only beneficial over 4x128b if you're limited
| by decode width
|
| This is only true if we ignore more complex instructions
| and focus on things like adding two vectors.
| brigade wrote:
| What is the percentage gain of using masked instructions
| on any benchmark/task of your choice? It can be negative
| on weird kernels that do lots of vector cmp since even
| ARM decided the cost of more than one write port in the
| predicate register file wasn't worth it, or if the
| masking adds lots of unnecessary and possibly false
| dependencies on the destination registers.
|
| > This is only true if we ignore more complex
| instructions and focus on things like adding two vectors.
|
| ARM implemented a CPU that had 2x256b SVE and 4x128b
| NEON. Literally the only benchmarks that benefitted from
| SVE were because they were limited by the 5-wide decode
| in NEON.
|
| Do you have an actual real-world counterexample?
| skavi wrote:
| I think it's somewhat unfair to ask for real world
| examples when there really aren't many people writing
| optimized SVE code right now. Probably because there are
| hardly any devices with the extension.
|
| I think the transition from AVX2 to AVX512 is comparable
| in that it provided not only larger vectors, but also a
| much nicer ISA. There were certainly a few projects that
| benefited significantly from that move. simdjson is
| probably the most famous example [0].
|
| [0]: https://lemire.me/blog/2022/05/25/parsing-json-
| faster-with-i...
| neonsunset wrote:
| This.
|
| AVX512 is all around a nice addition as JIT-based
| runtimes like .NET (8+) can use it for most common
| operations: text search, zeroing, copying, floating point
| conversion, more efficient forms of V256 idioms with
| AVX512VL (select-like patterns replaced with vpternlog).
|
| SVE2 will follow the same route.
| brigade wrote:
| CPUs with SVE have been generally available for two years
| now. SME and AVX-512 got benchmarks written showing them
| off before the CPUs were even available. Seems fair to
| me.
|
| simdjson specifically benefitted from Intel's _hardware_
| decision to implement a 512b permute from 2x 512b
| registers with a throughput of 1/cycle. That's area-
| expensive, which is (probably) why ARM has historically
| skimped on tbl performance, only changing as of the
| Cortex-X4.
|
| Anyway simdjson is an argument for 256b/512b vector
| permute, not 128b SVE.
|
| Having written a lot of NEON and investigated SVE... I
| disagree that SVE is a nicer ISA. The set of what's
| 2-operand destructive, what instructions have maskable
| forms vs. needing movprfx that's only fused on A64FX, and
| dealing with the intrinsics issues that come from sizeless
| types are all unneeded headaches. Plus I prefer NEON's
| variable shift to SVE's variable shifts.
| hajile wrote:
| I'd say that the theoretical ability to gang units
| together would be appealing.
|
| If you have four 128-bit packed-SIMD units, you must execute
| 4 different instructions at once or the others go to waste.
| With SVE, you could (in theory) use all 4 as a single,
| very wide vector for common operations if there weren't a
| lot of instructions competing for execution ports. You
| could even dynamically allocate them based on expected
| vector size or amount of vector instructions coming down
| the pipeline.
|
| Additionally, adding two 2048-bit vectors using NEON
| (128-bit packed SIMD) would require 16 add instructions
| while SVE would require just one. That's a massive code
| size reduction which matters for I-cache and the frontend
| throughput.
| dzaima wrote:
| You can't do 2048 bits of addition in one SVE
| instruction; not portably, at least (and definitely not
| on any existing hardware). SVE requires hardware to have
| a minimum of 128-bit vectors (maximum allowed being
| 2048-bit), but the hardware chooses that, not the
| programmer. For portable SVE, your code needs to work for
| all of those widths, not just the smallest or largest.
| (of related note is RISC-V RVV, which allows you to group
| up to 8 registers together, allowing a minimum portable
| operation width of 128x8 = 1024 bits in a single
| instruction (and up to 65536x8 = 64KB for hypothetical
| crazy hardware with max VLEN), but SVE/SVE2 don't have
| any equivalent)
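|
| For illustration, a vector-length-agnostic SVE loop via the
| ACLE intrinsics; the same binary runs on 128-bit through
| 2048-bit hardware, which decides how many elements each
| iteration covers (function name invented):
|
|     #include <arm_sve.h>
|     #include <stdint.h>
|
|     void add_i32(int32_t *dst, const int32_t *a,
|                  const int32_t *b, int64_t n) {
|         for (int64_t i = 0; i < n; i += svcntw()) {
|             svbool_t pg = svwhilelt_b32(i, n);  /* covers the tail */
|             svint32_t va = svld1_s32(pg, a + i);
|             svint32_t vb = svld1_s32(pg, b + i);
|             svst1_s32(pg, dst + i, svadd_s32_m(pg, va, vb));
|         }
|     }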
| brigade wrote:
| A for() loop does the same thing at the cost of like 3
| instructions. 4x128b has the flexibility that you don't
| _need_ 512b wide operations on the same data to keep the
| ALUs fed. If you have 512b wide operations being split to
| 4x128b instructions, great, otherwise the _massive_ OoOE
| window of modern chips can decode the next few loop
| iterations to keep the ALUs fed, or even pull
| instructions from a completely different kernel.
| ribit wrote:
| My guess is that Apple is simply not interested in some of
| the ARMv9 features. They are not eager to implement SVE and
| the secure virtualization features are probably not that
| relevant to them.
| rmccue wrote:
| From what I've read previously, Apple has a special licensing
| deal already as they were part of founding Arm, although I
| don't know if there's any details on exactly how that works.
| astrange wrote:
| That seems like something people just made up, seeing as
| Apple didn't use ARM for something like a decade or two
| after that.
|
| However, Apple basically commissioned ARMv8 in the first
| place to develop the A/M chips, so that presumably helps.
| zimpenfish wrote:
| > Apple didn't use ARM for something like a decade or two
| after that.
|
| They used the ARM610 in the Newton in 1993 (ARM was
| founded in late 1990) and then an 8-year gap to the iPod
| in 2001 (the ARM7TDMI, which is an ARM design). Their first
| in-house ARM design (I believe) is the iPhone 4 in 2010.
|
| They definitely didn't "architect/design ARM" for nearly
| a couple of decades after founding ARM, yeah, but they
| did use them.
| NobodyNada wrote:
| Apple cofounded ARM for use in the Newton product line;
| they released new Newton products from 1993-97 and
| discontinued them in 1998. They then used ARM again for
| the iPod, released in 2001.
| pkaye wrote:
| I believe it's an architectural license, which lets them
| design their own cores based on the ARM instruction
| set. I think a few other companies may have this license
| but it's not disclosed.
|
| https://www.electronicsweekly.com/news/business/finance/arm
| -...
| zimpenfish wrote:
| I believe they also don't have a per-chip license cost
| either (which, at Apple scale, probably adds up.)
| pjmlp wrote:
| So finally MTE enabled?
| saagarjha wrote:
| Apple's chips have been MTE enabled for a while, it's just
| turned off
| astrange wrote:
| No, it has PAC and I think BTI but not MTE.
| alberth wrote:
| Can someone ELI5 the significance?
| saagarjha wrote:
| Apple added some standard vector processing instructions
| awill wrote:
| What's so shocking about this? The Arm Cortex-X2, which
| launched 2 years ago, has ARMv9.
| eyelidlessness wrote:
| Why do you assume anything is supposed to be shocking about
| this?
| javawizard wrote:
| I'm guessing GP meant something more like:
|
| What's so _newsworthy_ about this?
| jamiek88 wrote:
| Apple.
|
| Literally as simple and as understandable as that. People
| care what Apple do. They always have. Even when they were
| tiny. Either to applaud, boo or roll their eyes.
|
| A portion of HN seems perpetually confused or in denial of
| this.
| saagarjha wrote:
| I do find it amusing that journalists never go beyond Twitter for
| discussion on this because this was all being confirmed on
| Mastodon days before any of the posts in the article.
| johnklos wrote:
| ARM really could've come up with better numbering /
| identification. I suppose it's ARM, emphasis on v, then 9, to
| differentiate it from ARM9, such as ARM9E-S?
___________________________________________________________________
(page generated 2024-05-24 23:01 UTC)