[HN Gopher] Swift's native Clocks are inefficient
___________________________________________________________________
Swift's native Clocks are inefficient
Author : mpweiher
Score : 137 points
Date : 2024-05-05 07:10 UTC (1 day ago)
(HTM) web link (wadetregaskis.com)
(TXT) w3m dump (wadetregaskis.com)
| foolswisdom wrote:
| > we're talking a mere 19 to 30 nanoseconds to get the time
| elapsed since a reference date and compare it to a threshold.
|
| The table shows 19 or 30 _milliseconds_ for Date / NSDate. Or am
| I misunderstanding something?
| Medea wrote:
| "showing the median runtime of the benchmark, which is a
| million iterations of checking the time"
| foolswisdom wrote:
| Thanks.
| taspeotis wrote:
| Divide by a million
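| (19 ms / 1,000,000 iterations = 19 ns per call, matching the
| quoted figure.)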
| netruk44 wrote:
| I've been learning me some Swift and coming from C# I feel
| somewhat spoiled when it comes to timing things.
|
| In C#, the native Stopwatch class is essentially all you need for
| simple timers with sub-millisecond precision.
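|
| For what it's worth, the closest Swift 5.7+ analogue is probably
| Clock.measure - a minimal sketch (doSomeWork is a stand-in, and
| per TFA this won't be the fastest option):
|     let clock = ContinuousClock()
|     let elapsed: Duration = clock.measure {
|         doSomeWork()   // hypothetical code being timed
|     }
|     print("took \(elapsed)")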
|
| Swift has not only the entire table of options from TFA to choose
| from, but also additional ones like DispatchTime [0]. They might
| all boil down to the same thing (mach_absolute_time, according to
| the article), but from the perspective of someone trying to learn
| the language, it's all a little confusing.
|
| Especially since there's also hidden bottlenecks like the one
| this post is about.
|
| [0]:
| https://developer.apple.com/documentation/dispatch/dispatcht...
| chubs wrote:
| Just use CACurrentMediaTime for that, or Date(), both simple
| options :)
| fathyb wrote:
| I believe `CACurrentMediaTime` depends on
| `QuartzCore.framework` and `Date` is not monotonic.
|
| I would also find it confusing if I found code doing
| something like network retries using `CACurrentMediaTime`.
| vlovich123 wrote:
| clock_gettime
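| E.g., callable directly from Swift (a sketch, assuming Darwin's
| headers):
|     import Darwin
|     var ts = timespec()
|     clock_gettime(CLOCK_MONOTONIC_RAW, &ts)
|     let ns = UInt64(ts.tv_sec) * 1_000_000_000 + UInt64(ts.tv_nsec)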
| tialaramex wrote:
| About the Swift library documentation itself - hopefully a Swift
| person can tell me: what is the significance of the list of Apple
| platforms given? For example, the Clock protocol shows iOS 16.0+
| among others.
|
| I can imagine that e.g. ContinuousClock is platform specific -
| any particular system may or may not be able to present a clock
| which keeps advancing while the machine is asleep - so, to the
| extent Apple claim Swift isn't an Apple-only language,
| ContinuousClock might nevertheless have platform requirements.
|
| But the _protocol_ seems free from such a constraint. I can write
| this _protocol_ for some arbitrary hardware which has no concept
| of time - I can't _implement_ it, but I can easily write the
| protocol down - and yet here it is, iOS 16.0+ anyway.
| Daedren wrote:
| Apple just doesn't backport APIs; it's a very very very rare
| occurrence when it happens. It was introduced alongside iOS 16,
| so you need at least that OS version - it's the norm really.
| tialaramex wrote:
| I guess maybe I didn't explain myself well. Swift is
| supposedly a cross-platform language. This "protocol", unlike
| the specific clocks, certainly seems like something you could
| equally well provide on, say, Linux. But it's documented as
| requiring (among others) iOS 16.0+.
|
| Maybe there's a separate view onto this documentation if you
| care about non-Apple platforms? Or maybe there's just an
| entirely different standard library for everybody else?
| lukeh wrote:
| Same standard library (Foundation has some differences,
| that's another story). But the documentation on Apple's
| website only covers their own platforms.
| rockbruno wrote:
| The standard library stopped being bundled with Swift apps when
| ABI stability was achieved. It is now provided as a dynamic
| library alongside OS releases, so you can only use Swift
| features that match the library version shipped with a
| particular OS version.
| beeboobaa3 wrote:
| Yikes. So after bundling their development tools with their
| operating system they are now also bundling some language's
| stdlib with the operating system? Gotta get them fingers in
| all of the pies, I guess.
| KerrAvon wrote:
| Every Unix and most Unixlikes have always done this. It's
| standard practice in that world.
| beeboobaa3 wrote:
| Which distro ships the go standard library?
|
| Also unixes let the sysadmin install additional
| libraries. How do I `apt install libswift2` on an iPhone?
| MBCook wrote:
| > they are now also bundling some language's stdlib with
| the operating system
|
| much like libc, isn't it? Apple writes tons of their own
| software in Swift and the number keeps going up. They're
| trying to move more and more of the system to it. It's going
| to be loaded on every system whether a user uses it or not.
|
| No different from the Objective-C runtime.
| jackjeff wrote:
| Absolutely agree.
|
| On Windows the equivalent would be MSVCRT which is
| sometimes shipped with the OS and sometimes not
| (depending on the versions involved). Sometimes you even
| need to worry about the CRT dependency with higher level
| languages because their "standard libraries" depend on
| the CRT.
|
| So if you see that being installed with Java or C# or
| Unity, now you know why.
| pdpi wrote:
| According to their changelog[0], Clock was added to the
| standard library with Swift 5.7, which shipped in 2022, at the
| same time as iOS 16. It looks like static linking by default
| was approved[1] but development stalled[2].
|
| I expect that it's as simple as that: It's supported on iOS 16+
| because it's dynamically linked by default, against a system-
| wide version of the standard library. You can probably try to
| statically link newer versions on old OS versions, or maybe
| ship a newer version of the standard library and dynamically
| link against that, but I have no idea how well those paths are
| supported.
|
| 0. https://github.com/apple/swift/blob/main/CHANGELOG.md
|
| 1. https://github.com/apple/swift-evolution/blob/main/proposals...
|
| 2. https://github.com/apple/swift-package-manager/pull/3905
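|
| Practically, that means callers targeting older OS versions
| have to guard any use - roughly (a sketch):
|     if #available(iOS 16.0, macOS 13.0, *) {
|         let elapsed = ContinuousClock().measure { /* work */ }
|         print(elapsed)
|     } else {
|         // fall back to Date, mach_absolute_time, etc.
|     }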
| koenneker wrote:
| Might this be a hamfisted reaction to timing attacks?
| xyst wrote:
| Post mentions this Apple doc,
| https://developer.apple.com/documentation/kernel/1462446-mac...,
| which states it can potentially be used to fingerprint a device?
|
| How can this API be used to fingerprint devices? It's just
| getting the current time.
|
| My best guess: you can infer a user's time zone? Thus get a very
| general/broad area of where this user lives (USA vs EU; or US-EST
| vs US-PST)
|
| Maybe I should just set my time to UTC on all devices
| simcop2387 wrote:
| The same way that you can do it from javascript I'd imagine.
|
| Timezones and such are one data point, but clock skew and
| accuracy can help you differentiate users too.
|
| https://forums.whonix.org/t/javascript-time-fingerprinting/7...
| lapcat wrote:
| mach_absolute_time is unrelated to clock time. It's basically
| the number of CPU cycles since last boot, so it's more of an
| uptime measure.
|
| I suspect the fingerprinting aspect is more indirect:
| mach_absolute_time is the most accurate way to measure small
| differences, so if you're trying to measure subtle differences
| in performance between different devices on some specific task,
| mach_absolute_time would be the way to go.
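|
| For reference, converting those ticks to nanoseconds takes a
| timebase lookup - a minimal sketch:
|     import Darwin
|     var info = mach_timebase_info_data_t()
|     _ = mach_timebase_info(&info)      // ticks-to-ns ratio
|     let ticks = mach_absolute_time()   // ticks since boot
|     let ns = ticks * UInt64(info.numer) / UInt64(info.denom)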
| thealistra wrote:
| Yeah this is correct. Other comments seem misinformed.
|
| You can fingerprint a device using this because you know the
| wall-clock difference between two sightings and you know the
| previous CPU-cycle count. So you can assume any device with
| appropriately more CPU cycles may be the same device.
|
| We're talking measurements taken from different apps using the
| Google or Facebook SDKs.
| MBCook wrote:
| > It's basically the number of CPU cycles since last boot, so
| it's more of an uptime measure
|
| And there's the problem. Different devices have different
| uptimes. If you can get not only the uptime but a very very
| accurate version, you've got a very strong fingerprint.
| VogonPoetry wrote:
| Consider N devices behind a NAT. They all make requests to a
| service.
|
| If the service can learn each device's current value of
| mach_absolute_time, then after a minimum of two requests it can
| likely compute N and distinguish between the devices making
| requests.
|
| This is possible because devices never reboot at exactly the
| same time.
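|
| A sketch of why the pair (wall time, uptime) pins down a boot
| (hypothetical fingerprinting code, not from the article):
|     import Foundation
|     // wall clock minus uptime ~ boot instant; roughly stable per
|     // device per boot, so two requests from the same device agree
|     let bootEstimate = Date().timeIntervalSince1970 -
|         ProcessInfo.processInfo.systemUptime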
| interpol_p wrote:
| My understanding is this gets something like the system uptime?
| (I may be reading the docs wrong).
|
| In which case, it could be used as one of many signals in
| fingerprinting a device, as you could distinguish a returning
| user by checking their uptime against the time delta since the
| uptime at their last visit. It's not perfect, but when combined
| with other signals, might be helpful
| twoodfin wrote:
| The problem is it's getting the current time with relatively
| high precision, which is the same reason a developer would
| prefer it for non-nefarious uses.
|
| Once you have a high-precision timer, there are all sorts of
| aspects of the user's system you can fingerprint by measuring
| how long some particular API dependent on device performance
| and/or state takes to execute.
|
| Platform vendors long ago figured out not to hand out the list
| of available fonts, but it's a couple orders of magnitude
| harder to be sure switching some text from Menlo to Helvetica
| doesn't leak a fractional bit of information via device-
| dependent timing.
|
| EDIT: Others noted it's actually ticks since startup, which is
| probably good for a few bits all on its own if you are tracking
| users in close to real time.
| Waterluvian wrote:
| It's amazing just how much we lose here, elsewhere, and in
| browsers, because we have to worry about fingerprinting.
| lesuorac wrote:
| It gets even more funny when you realize that devices &
| browsers let you set data per device that you can just
| retrieve later ...
|
| Why bother fingerprinting when you can just assign them an
| id and retrieve it later.
|
| https://developer.apple.com/documentation/devicecheck
|
| https://engineering.deptagency.com/securely-storing-data-
| on-...
|
| https://developer.mozilla.org/en-
| US/docs/Web/API/Window/loca...
| Aloisius wrote:
| That doesn't give you a fixed device ID like
| fingerprinting does.
|
| A fixed device ID survives even when an app is
| uninstalled and reinstalled and is the same for
| unaffiliated apps.
| singron wrote:
| The offset from epoch time is probably unique per device per
| boot, and it only drifts one second per second while the device
| is suspended.
|
| You can get the time zone from less naughty APIs, and that has
| way fewer bits of entropy.
| user2342 wrote:
| I'm not fluent in Swift and async, but the line:
| for try await byte in bytes { ... }
|
| for me reads like the time/delta is determined for every single
| byte received over the network. I.e. millions of times for
| megabytes sent. Isn't that a point for optimization or do I
| misunderstand the semantics of the code?
| ajross wrote:
| Yeah, this is horrifying from a performance design perspective.
| But in this case you'd still expect the "current time"
| retrieval[1] to be small relative to all the other async
| overhead (context switching for every byte!), and apparently it
| isn't?
|
| [1] On x86 linux, it's just a quick call into the vdso that
| reads the TSC and some calibration data, dozen cycles or so.
| jerf wrote:
| Note the end of the article acknowledges this, so this is
| clearly a deliberate part of the constructed example to make
| a particular point and not an oversight by the author. But it
| is helpful to highlight this point, since it is certainly a
| live mistake I've seen in real code before. It's an
| interesting test of how rich one's cost model for running
| code is.
| marcosdumay wrote:
| The stream-reader userspace libraries are very well optimized
| for handling that kind of "dumb" usage that would otherwise
| obviously create problems. (That's one of the reasons Linux
| expects you to use glibc instead of making syscalls directly.)
|
| But I imagine the time-reading ones aren't as heavily
| optimized; people don't normally call them all the time.
| samatman wrote:
| The code, as the author makes clear, is an MWE. It provides a
| brief framework for benchmarking the behavior of the clocks.
| It's not intended to illustrate how to efficiently perform the
| task it's meant to resemble.
| spenczar5 wrote:
| But it seems consequential. If the time were sampled every
| kilobyte, this overhead would shrink a thousandfold - a bigger
| win than the proposed switch to other time functions.
|
| At that point, even these slow methods are using about 0.5ms
| per million bytes, so it should be good up to gigabit speeds.
|
| If that's not fast enough, then sample every million bytes.
| Or, if the complexity is worth it, sample in an adaptive
| fashion.
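|
| E.g. (a sketch; process, clock, and deadline are stand-ins):
|     var sinceCheck = 0
|     for try await byte in bytes {
|         process(byte)              // per-byte work
|         sinceCheck += 1
|         if sinceCheck >= 1024 {    // consult the clock per KiB
|             sinceCheck = 0
|             if clock.now >= deadline { break }
|         }
|     }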
| metaltyphoon wrote:
| I'm not sure about Swift, but in C# an async method doesn't
| have to complete asynchronously. For example, when reading
| from files, a buffer will first be filled asynchronously, then
| subsequent calls complete synchronously until the buffer needs
| to be "filled" again. So it feels like most languages can do
| these optimizations.
| Shrezzing wrote:
| This is almost certainly intentional, and is very similar to the
| way web browsers mitigate the Spectre vulnerability[1]. Your
| processor (almost certainly) does some branch prediction to
| improve efficiency. If an application developer reliably knows
| the exact time, they can craft an application which jumps to
| another application's execution path, granting them complete
| access to its internal workings.
|
| To mitigate this threat, JavaScript engine developers simply
| added a random fuzzy delay to all of the precision timing
| techniques. Swift's large volume of calls to seemingly
| unnecessary methods is, almost certainly, Apple's
| implementation of this mitigation.
|
| [1]
| https://en.wikipedia.org/wiki/Spectre_(security_vulnerabilit...
| fathyb wrote:
| If this was intentional, shouldn't it also affect
| `mach_absolute_time` which is used by the standard libraries of
| most languages and accessible to Swift?
|
| Also note you can get precise JavaScript measurements (and
| threading, eg. using pthreads and Emscripten) by adding some
| headers:
| https://developer.mozilla.org/en-US/docs/Web/API/Window/cros...
| Shrezzing wrote:
| > Also note you can get precise JavaScript measurements (and
| threading) by adding some headers
|
| Though you can access these techniques now, in the weeks
| after Spectre attacks were discovered, the browsers all
| consolidated on "make timing less accurate across the board"
| as an immediate-term fix[1]. All browsers now give automatic
| access to imprecise timing by default, but have some
| technique to opt-in for near-precise timing.
|
| Similarly, Swift has SuspendingClock and ContinuousClock,
| which you can use without informing Apple. Meanwhile
| mach_absolute_time & similarly precise timing methods require
| developers to disclose the reasons for its use before Apple
| will approve your app on the store[2].
|
| [1] https://blog.mozilla.org/security/2018/01/03/mitigations-lan...
|
| [2] https://developer.apple.com/documentation/kernel/1462446-mac...
| fathyb wrote:
| That makes a lot of sense, thank you!
| vlovich123 wrote:
| No it doesn't. Higher-performance APIs like Date and
| clock_gettime are still available, not specially privileged,
| and 40x faster. This looks pretty clearly like a bug.
|
| Spectre mitigations are also really silly here because as
| a Swift app you already have full access to all in-process
| memory. It would have to be about Meltdown, but Meltdown is
| prevented through other techniques.
| lxgr wrote:
| Nothing prevents applications from just calling the underlying
| methods mentioned in the article, so that can't be it. The
| author even benchmarked these!
| Someone wrote:
| Nothing? FTA: _"The downside to calling mach_absolute_time
| directly, though, is that it's on Apple's "naughty" list -
| apparently it's been abused for device fingerprinting, so
| Apple require you to beg for special permission if you want
| to use it"_
| asow92 wrote:
| It isn't difficult to be granted this permission. All an
| app needs to do is supply a reason defined in
| https://developer.apple.com/documentation/bundleresources/pr...
| as to
| why it's being used in the app's bundled
| PrivacyInfo.xcprivacy file, which could be disingenuous.
| darby_eight wrote:
| It may not be difficult, but it's an additional layer of
| requirement. Defense in depth baby!
| Someone wrote:
| In addition, if you get caught lying about this, your app
| may be nuked and your developer account terminated. May
| not be a big hurdle, but definitely can hurt if you have
| many users.
| sgerenser wrote:
| All the other methods "above" mach_absolute_time are still
| allowed though, including clock_gettime_nsec_np that's only
| ~2x slower than mach_absolute_time. While the Swift clock
| is ~40x slower than mach_absolute_time. I don't see how
| intentional slowdown for fingerprinting protection can be
| the cause.
| kevin_thibedeau wrote:
| Someone took inspiration from
| FizzBuzzEnterpriseEdition[1] and made their integer query
| API future proof.
|
| [1] https://github.com/EnterpriseQualityCoding/FizzBuzzEnterpris...
| cvwright wrote:
| All of the new privacy declarations are silly, but this one
| is especially ridiculous.
|
| I'm pretty sure I can trigger a hit to the naughty API just
| by updating a @Published var in an ObservedObject. For
| those unfamiliar with SwiftUI, this is the most basic way
| to tell the system that your model data has changed and
| thus it needs to re-render the view. Pretty much _every_
| non-trivial SwiftUI app will need to do this.
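|
| Roughly (a minimal sketch of that pattern):
|     import SwiftUI
|     final class Model: ObservableObject {
|         @Published var count = 0   // changing this re-renders views
|     }
|     struct CounterView: View {
|         @ObservedObject var model: Model
|         var body: some View {
|             Button("Count: \(model.count)") { model.count += 1 }
|         }
|     }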
| vlovich123 wrote:
| but Date and clock_gettime are still accessible, with not
| much more overhead than the Mach API call. Additionally, as
| I mention in another comment, this would have to be about
| Meltdown, not Spectre, and Meltdown is mitigated in the
| kernel through other techniques without sacrificing timers.
| beeboobaa3 wrote:
| Have to protect those pesky application developers from knowing
| the time so they can write correct software.
|
| It makes sense for a web browser. Not for something like Swift.
| vlovich123 wrote:
| No, this is pretty clearly just a bug / poor design. Mistakes
| happen.
| beeboobaa3 wrote:
| Probably but I'm just responding to GP who implied that
| Apple, in all its infinite wisdom, did this on purpose.
| stefan_ wrote:
| Literally one page into the article there is the full stack
| trace that makes it abundantly clear there is no such thing
| going on; they just added a bunch of overhead.
|
| That's despite OSX having a vDSO style mechanism for it:
| https://github.com/opensource-apple/xnu/blob/master/libsysca...
| vlovich123 wrote:
| This would have to be for Meltdown, not Spectre. Spectre is
| in-process and Meltdown is cross-process, and an in-process
| mitigation would be pointless for a language like Swift.
|
| And it's a weird mitigation because Meltdown, AFAIK, has been
| mitigated on other OSes without banning high-resolution
| timers.
|
| The nail in the coffin is that it's unlikely to be about
| security: Date and clock_gettime are accessible and an order
| of magnitude faster.
|
| The more likely scenario is poorly profiled abstraction
| layers adding features without measuring the performance.
| saagarjha wrote:
| This is not true in the slightest, and I feel that you might be
| misunderstanding how these attacks work. Spectre does not allow
| you to control execution of another process. It does not touch
| any architecturally visible state; it works via side channels.
| This means all it can do is leak information. The mitigation
| for Spectre in the browser adds a fuzzy delay (which is not
| considered to be very strong, fwiw). Just making a slower timer
| does not actually mitigate anything. And if you look at the
| code (it's all open source!) you can see that none of it deals
| with this mitigation, it's all just normal stuff that adds
| overhead. These attacks are powerful but they are not magic
| where knowing the exact time gives you voodoo access to
| everything.
| Veserv wrote:
| No, that is nonsense.
|
| A competent organization would not make the function call take
| longer by a random amount of time. You would just do it
| normally then add the random fudge factor to the normal result.
| That is not only more efficient, it also allows more fine-tuned
| control, the randomization is much more stable, and it is just
| plain easier to implement.
|
| Though I guess I should not put it past them to do something
| incompetent given that they either implemented their native
| clocks poorly as the article says, or they incompetently
| implemented a Spectre mitigation as you theorize.
| rfmoz wrote:
| OSX clock_gettime() [0] offers CLOCK_MONOTONIC and
| CLOCK_MONOTONIC_RAW, but not CLOCK_UPTIME, only CLOCK_UPTIME_RAW.
|
| Maybe someone knows why? On FreeBSD it is available [1].
|
| [0]: https://www.manpagez.com/man/3/clock_gettime_nsec_np/
| [1]: https://man.freebsd.org/cgi/man.cgi?query=clock_gettime
| loeg wrote:
| Are you talking about CLOCK_UPTIME_FAST on FreeBSD? It does not
| have CLOCK_UPTIME_RAW.
|
| In FreeBSD, the distinction lives here:
| http://fxr.watson.org/fxr/source/kern/kern_time.c#L352
| jepler wrote:
| I was curious how linux's clock_gettime compared. I wrote a
| simple program that tried all the clock types documented in my
| manpage:
| https://gist.github.com/jepler/e37be8fc27d6fb77eb6e9746014db...
|
| My two handy systems were an i5-1235U running 6.1.0-20-amd64 and
| a Ryzen 7 3700X also running 6.1.0-20-amd64. The fastest method
| was 3.7ns/call on the i5 and 4ns/call on the Ryzen
| (REALTIME_COARSE and MONOTONIC_COARSE were about the same). If a
| "non-coarse" timestamp is required, the time increases to about
| 20ns/call on the Ryzen, 12ns on the i5 (realtime, tai, monotonic,
| boottime).
|
| On the i5, if I force the benchmark to run on an efficiency core
| with taskset, times increase to 6.4ns and 19ns.
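|
| A rough Swift equivalent of such a loop, for anyone wanting to
| reproduce it (a sketch; assumes Glibc exposes these clock IDs
| to Swift as it does to C):
|     import Glibc
|     func nowNs(_ id: clockid_t) -> UInt64 {
|         var ts = timespec()
|         clock_gettime(id, &ts)
|         return UInt64(ts.tv_sec) * 1_000_000_000 + UInt64(ts.tv_nsec)
|     }
|     let n = 1_000_000
|     let start = nowNs(CLOCK_MONOTONIC)
|     for _ in 0..<n { _ = nowNs(CLOCK_MONOTONIC_COARSE) }
|     print("\(Double(nowNs(CLOCK_MONOTONIC) - start) / Double(n)) ns/call")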
| jeffbee wrote:
| You can knock almost a third off that fastest time by building
| with `-static`. In something that is completely trivial like
| reading the clock via vDSO the indirect call overhead of
| dynamic libc linking becomes huge. `-static` eliminates one
| level of indirect calls. The indirect vDSO call remains,
| though.
|     % ./clocktest | rg MONOTONIC_COARSE
|     MONOTONIC_COARSE : 2.2ns percall
| feverzsj wrote:
| I knew Swift had poor performance, but I didn't expect them to
| do it purposely.
| sholladay wrote:
| Maybe unrelated, but I've noticed that setting a timer on iOS
| drains my battery more than I would expect, and the phone gets
| warm after a while. It's just bad enough that if the timer is
| longer than 15 minutes, I often use an alarm instead of a timer.
| Not something I've experienced on Android.
| whywhywhywhy wrote:
| Lots of glitches in timers and alarms on recent iOS; sometimes
| they don't even fire. Extremely poor when you set a timer for
| cooking, check your phone, and there just isn't a timer
| running anymore, and you're left wondering how far it's
| overshot.
|
| 15 Pro, so definitely not an old-phone-new-software issue.
| beeboobaa3 wrote:
| Still hilarious how apple goes about their "system security".
|
| Instead of actually implementing security in the kernel they just
| kinda prevent you from distributing an app that may call that
| functionality. Because that way they can still let their buddies
| abuse it without appearing too biased (by e.g. having a whitelist
| on the device).
|
| This technical failing probably, partially, explains why they are
| so against allowing sideloading. That, and they're scared of
| losing their cash cow of course.
| asveikau wrote:
| The hilarious thing is how people justify Apple's bugs with a
| security concern.
|
| Just squinting at the stack trace from the article, my
| intuition is that someone at Apple added a bunch of nice
| looking object-oriented stuff without regard for overhead. So a
| call to get a single integer from the kernel, namely the time,
| results in lots of objects being created on the heap and tons
| of "validation" going on. Then somebody on hacker news says
| this is all for your own good.
| NotPractical wrote:
| > This technical failing probably, partially, explains why they
| are so against allowing sideloading.
|
| This occurred to me the other day. I've always laughed at the
| idea that Apple blocks sideloading for security purposes, but
| if the first line of defense is and always has been security
| through obscurity + manual App Store review (>= 2.0) on iOS,
| it's very possible that sideloading could cause problems. iOS
| didn't even have an App Store in release 1.0, meanwhile the
| Android security model has taken into account sideloaded apps
| since the very beginning [1]:
|
| > Android is designed to be open. [...] Securing an open
| platform requires a strong security architecture and rigorous
| security programs. Android was designed with multilayered
| security that's flexible enough to support an open platform
| while still protecting all users of the platform.
|
| [1] https://source.android.com/docs/security/overview
|
| Edit: Language revised to clarify that I'm poking fun at the
| idea and not the one who believes it.
| threatofrain wrote:
| Is there a reputation of a security difference between
| Android and iOS? And in what direction does the badness lean?
| beeboobaa3 wrote:
| There is a reputation of Apple being more secure, but it's
| largely unfounded. It just looks that way because the
| ecosystem is completely locked down and software isn't
| allowed to exist without apple's stamp of approval.
| kbolino wrote:
| Apple drove genuine security improvements in mobile
| hardware well before Android, including dedicated
| security chips and encrypted storage. The gap has been
| closed for a few years now, though, so the reputation is
| not so much "unfounded" as "out of date".
| beeboobaa3 wrote:
| You're not talking about security that protects end users
| against malware. You're talking about "security" that
| protects the device against "tampering", i.e. the owner
| using it in a way apple does not approve of.
|
| Apple's "security improvements" have always been about
| protecting their walled garden first and foremost.
| fingerlocks wrote:
| This just isn't true. We have multiple bricked Android
| devices from bootloader-infecting malware downloaded
| directly from the Play Store. Nothing like that has ever
| happened on iOS.
| beeboobaa3 wrote:
| The only thing this may prove is that Apple's app store
| review is more strict.
| kbolino wrote:
| A mobile device, in most users' hands:
|
| - Stores their security credentials for critical sites
| (banks, HR/payroll, stores, govt services, etc.)
|
| - Even if not, has unfettered access to their primary
| email account, which means it can autonomously initiate a
| password reset for nearly any site
|
| - Is their primary 2FA mechanism, which means it can
| autonomously confirm a password reset for nearly any site
|
| That's an immense amount of risk, both from apps running
| on the device, and from the device getting stolen. Both
| of the measures I mentioned are directly relevant to
| these kinds of threats. And, as I already said, Android
| has adopted these same security measures as well.
| beeboobaa3 wrote:
| So the same as any computer since online banking and
| email were invented. This isn't some new development. You
| should stop trying to nanny people.
| kbolino wrote:
| I have no idea what you are trying to say in the context
| of the thread. Hardware security is important for all of
| that and security measures have to evolve over time.
| cyberax wrote:
| Like?
| spacedcowboy wrote:
| I'm not claiming that Apple is perfect, but I think comparing
| to Android, in terms of malware, security updates, and
| privacy, it comes out looking pretty good.
| beeboobaa3 wrote:
| Got some sources to cite, or is this the typical apple
| fanboyism of "android bad"?
|
| I've used Android for years and never ran into any malware.
| I've also developed for Android and iOS. Writing malware is
| largely impossible due to the functional permission system;
| at least it's much, much harder than on other operating
| systems. Apple just pretends it's immune to malware because
| of the manual reviews and static analysis performed by the
| store. It's also why they're terrified of letting people
| ship their own interpreters like javascript engines.
| Aloisius wrote:
| A bit old but,
| https://www.pandasecurity.com/en/mediacenter/android-more-in...
|
| One might argue that Android is targeted more than iPhone
| because of its larger userbase, which certainly may
| contribute to it, but then _macOS_, which has a fraction of
| the userbase, is more targeted than iOS - that makes the
| case that sideloading or lax app store reviews really are at
| least partly to blame.
|
| Given much of the malware seems to be apps that trick
| users into granting permissions by masquerading as a
| legitimate app or pirated software, it's not really too
| hard to believe that Apple's app store with their
| draconian review process and no sideloading might be a
| more difficult target.
| beeboobaa3 wrote:
| Obviously a strict walled garden keeps out bad actors.
| The question is: Is it worth it? I say no.
|
| People deserve to be trusted with the responsibility of
| making a choice. We are allowing everyone to buy power
| tools that can cause severe injuries when mishandled. No
| one blinks an eye. Just like we allow that to happen, we
| should allow people to use their devices in the way that
| they desire. If this means some malware can exist then I
| consider this to be acceptable.
|
| In the meantime, system security can always still be
| improved.
| Aloisius wrote:
| Yes, freedom to do what you want with your device is a
| great ideal.
|
| Yet I still don't want to have to fix my mom's phone
| because it's loaded with malware or, worse, because malware
| is draining her bank account.
| realusername wrote:
| Both look pretty similar to me, both in terms of policies
| and outcome.
|
| While iOS has longer device support, it's also way less
| modular and updates of system components will typically
| take longer to reach users than Android, so I'd say both
| have their issues there.
| GeekyBear wrote:
| > the Android security model has taken into account
| sideloaded apps since the very beginning
|
| Counterpoint: tech websites have literally warned users that
| they need to be wary of installing apps from inside Google's
| walled garden.
|
| > With malicious apps infiltrating Play on a regular, often
| weekly, basis, there's currently little indication the
| malicious Android app scourge will be abated. That means it's
| up to individual end users to steer clear of apps like Joker.
| The best advice is to be extremely conservative in the apps
| that get installed in the first place. A good guiding
| principle is to choose apps that serve a true purpose and,
| when possible, choose developers who are known entities.
| Installed apps that haven't been used in the past month
| should be removed unless there's a good reason to keep them
| around
|
| https://arstechnica.com/information-technology/2020/09/joker...
|
| "You should not trust apps from inside the walled garden" is
| not a sign of a superior security model.
| NotPractical wrote:
| > Counterpoint: tech websites have literally warned users
| that they need to be wary of installing apps from inside
| Google's walled garden.
|
| This is not a counterpoint to what I was saying. I'm
| talking about sideloaded apps, not apps from Google Play. I
| agree that Google should work to improve their app vetting
| process, but that's a separate issue entirely, and one I'm
| not personally interested in.
| GeekyBear wrote:
| If your security model is so weak that you can't keep
| malware out of the inside of your walled garden, the
| situation certainly isn't going to improve after you
| remove the Play store's app vetting process as a factor.
| NotPractical wrote:
| I avoided making a claim regarding the relative "security
| level" of Android vs. iOS because it's not easy to
| precisely define what that means. All I was saying was
| that Android's security model explicitly accommodates
| openness. If your standard for a "strong" security model
| excludes openness entirely, that's fair I suppose, but I
| personally find it unacceptable. Supposing we keep
| openness as a factor for its own sake, I'm not sure how
| you can improve much on Android's model.
|
| This discussion seems to be headed in an ideological
| direction rather than a technical one, and I'm not very
| interested in that.
| GeekyBear wrote:
| If your point of view is that you value the ability to
| execute code from random places on the internet more than
| security, perhaps that is the point you should have been
| making from the start.
|
| However, iOS makes the security trade off in the other
| direction.
|
| All an app's executable code must go through the app
| vetting process, and additional executable code cannot be
| added to the app without the app going through the app
| vetting process all over again.
|
| In contrast, Google has been unable to quash malware like
| Joker from inside the Play store because the malware gets
| downloaded and installed after the app makes it through
| the app vetting process and lands on a user's device.
|
| > Known as Joker, this family of malicious apps has been
| attacking Android users since late 2016 and more recently
| has become one of the most common Android threats...
|
| One of the keys to Joker's success is its roundabout
| attack. The apps are knockoffs of legitimate apps and,
| when downloaded from Play or a different market, contain
| no malicious code other than a "dropper." After a delay
| of hours or even days, the dropper, which is heavily
| obfuscated and contains just a few lines of code,
| downloads a malicious component and drops it into the
| app.
|
| https://arstechnica.com/information-technology/2020/09/joker...
|
| iOS not having constant issues with malware like Joker
| inside their app store has nothing to do with "security
| through obscurity" and everything to do with making a
| different set of trade offs when setting up the security
| model.
| dang wrote:
| We detached this subthread from
| https://news.ycombinator.com/item?id=40274188.
| simscitizen wrote:
| Just use clock_gettime with whatever clock you want. There's
| also an np (non-POSIX) suffixed variant that returns the
| timestamp in nanoseconds.
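|
| E.g., from Swift on Darwin (a sketch):
|     import Darwin
|     // returns nanoseconds directly as a UInt64, no timespec needed
|     let ns = clock_gettime_nsec_np(CLOCK_MONOTONIC_RAW)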
| Kallikrates wrote:
| https://github.com/apple/swift/pull/73429
| loeg wrote:
| Wow, hundreds of milliseconds is a lot worse than I'd expect. I'm
| not shocked that it's slower than something like plain `rdtsc`
| (single digit nanoseconds?) but that excuses maybe microseconds
| of overhead -- not milliseconds and certainly not hundreds of
| milliseconds.
| gok wrote:
| It's hundreds of milliseconds to do a million iterations. A
| single time check is hundreds of nanoseconds.
| loeg wrote:
| Oh, thanks. The table was unlabeled and I missed that in the
| text.
|
| Hundreds of nanos isn't great but it's certainly better than
| milliseconds.
| layer8 wrote:
| That's for a million iterations, so really nanoseconds.
| diebeforei485 wrote:
| It's required of native apps because native apps are full of
| APIs that collect and sell user data. That's why.
| adsharma wrote:
| clock_gettime_nsec_np() seems interesting in that it returns a
| u64.
|
| I proposed something similar for Linux circa 2012. The patch got
| lost in some other unrelated discussion and I didn't pursue it.
|
| struct timeval is a holdover from the 32-bit era. Now that
| everyone is using 64-bit machines, we should be able to get this
| data by reading one u64 from a shared page.
| adsharma wrote:
| https://lkml.org/lkml/2011/12/12/438
___________________________________________________________________
(page generated 2024-05-06 23:01 UTC)