[HN Gopher] Russhian Roulette: 1/6 chance of posting your SSH pr...
___________________________________________________________________
Russhian Roulette: 1/6 chance of posting your SSH private key on
pastebin
Author : popcalc
Score : 107 points
Date : 2023-01-28 12:50 UTC (10 hours ago)
(HTM) web link (github.com)
(TXT) w3m dump (github.com)
| rtuin wrote:
| Really good! Sent this to my colleagues to raise awareness on
| third party package security risks.
| detrites wrote:
| TIL: malware + gamification = how to get #1 FP on HN
| monsieurbanana wrote:
| It's not malware if you run it on purpose
| captn3m0 wrote:
| Bug: Doesn't work with non-RSA or U2F/GPG keys. Some players will
| get an unfair advantage.
| progbits wrote:
| Bug: Passphrase-protected private keys are posted encrypted.
| throwanem wrote:
| Bug: Uncaught exception if no file exists at the default
| private key path.
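For reference: the three bug reports above all come down to the
script reading one hard-coded key file. A minimal sketch of a guard
for the missing-file case, assuming (the repo's code isn't quoted in
this thread) that it reads ~/.ssh/id_rsa with fs.readFileSync:

    // Hypothetical guard, not the linked repo's actual code.
    const fs = require('fs');
    const os = require('os');
    const path = require('path');

    const keyPath = path.join(os.homedir(), '.ssh', 'id_rsa');

    if (!fs.existsSync(keyPath)) {
      console.error(`No private key at ${keyPath}; you win by default.`);
      process.exit(1);
    }

    // The file may still be passphrase-protected, in which case only
    // the encrypted blob would be read here.
    const key = fs.readFileSync(keyPath, 'utf8');

Passphrase-protected and non-RSA keys would need their own handling,
as the comments above point out.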
| pluc wrote:
| How is "(Math.floor(Math.random() * 6) === 0)" 1 in 6?
| ordu wrote:
| I'm not a javascript guru, but I see two ways it might work.
|
| 1. The right part of the comparison is converted into an
| integer because the left part is an integer. The conversion is
| done by rounding to the nearest integer, and it would work this
| way.
|
| 2. The other way it may work is that 0 is converted into
| floating point. Math.floor would make some canonical floating
| point representation of 0 from numbers in the range [0..1), and
| so would the conversion of (int)0 into float.
|
| One needs to really know their language of choice to master
| such subtleties.
| wincy wrote:
| There are no integers in JavaScript, only double precision
| floating point numbers.
| ordu wrote:
| So the second scenario is at play. 0 is converted into
| floating point.
| jesprenj wrote:
| Probably random() returns a value from 0 to one, multiplying by
| 6 yields a random point between 0 and 6. There is a 1 in 6
| chance for this number to be in range [0, 1), which is checked
| by rounding it down and checking if it's zero.
|
| right?
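A quick empirical check of the 1/6 claim, in plain Node (nothing here
depends on the linked repo):

    // Math.random() is uniform on [0, 1), so Math.random() * 6 is
    // uniform on [0, 6); Math.floor of that is 0..5 with equal
    // probability, so === 0 holds about 1/6 of the time.
    let hits = 0;
    const trials = 1_000_000;
    for (let i = 0; i < trials; i++) {
      if (Math.floor(Math.random() * 6) === 0) hits++;
    }
    console.log(hits / trials); // prints something close to 0.1667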
| pluc wrote:
| > The Math.random() static method returns a floating-point,
| pseudo-random number that's greater than or equal to 0 and
| less than 1
|
| What a poor name for a method.
| alpaca128 wrote:
| Every random generator API I've seen uses this interval by
| default. And it makes sense, what other range is useful in
| more situations and platform independent? In practically
| every other case you'd want to specify your desired range
| with arguments.
| sgerenser wrote:
| The classic C rand() function returns an integer between
| 0 and RAND_MAX. Not being much of a JS developer I would
| have expected something more like that.
| kolinko wrote:
| why?
| pluc wrote:
| It just doesn't feel natural. If I ask you to give me a
| random number, are you going to assume it's random among
| all existing numbers or between the two smallest ones?
| I'd do Math.random(min,max) and then it could default to
| 0,1 though I guess you could go Math.random()*100 or
| whatever.. guess it just doesn't feel like good design or
| very convenient/readable - but then again this is
| JavaScript we're talking about.
| quesera wrote:
| > I'd do Math.random(min,max)
|
| But then you'd need a different way to do random non-
| integers.
|
| Randomness does not naturally produce an integer result.
| The 0-1 range with precision determined by the
| architecture is actually the simplest and most logical
| and flexible way to do it. [EDIT: floats are never
| simple, see below!]
|
| Some languages offer convenience functions on top of the
| low level random number generator. I don't know what's
| available in JavaScript.
|
| E.g. in ruby:
|     irb(main):001:0> rand
|     => 0.5562701792527804
|     irb(main):002:0> rand(100)
|     => 44
|     irb(main):003:0> rand(44..203)
|     => 188
|
| ...but of course, I could go on all day long about how
| pleasant Ruby is to work with. :)
| deathanatos wrote:
| > _Randomness does not naturally produce an integer
| result. The 0-1 range with precision determined by the
| architecture is actually the simplest and most logical
| and flexible way to do it._
|
| The most trivial way, I would think, would be to
| interpret a stream of bits as an unsigned
| integer. E.g., a u32 is just "take 32 bits of random data
| from your RNG, and cast." That's certainly far more
| natural than trying to build an IEEE double from [0, 1)
| with a uniform distribution. I'm actually not sure how
| you'd do that without doing something like starting with
| a random u32 and dividing by (U32_MAX + 1). Like maybe
| you can shove the RNG output into the mantissa, but it is
| not at all obvious to me about the correctness of that.
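The "random u32 divided by (U32_MAX + 1)" construction mentioned
above can be sketched in a few lines of Node; this is only an
illustration of the idea, not how any particular engine actually
implements Math.random:

    // Build a uniform double in [0, 1) by dividing a random
    // unsigned 32-bit integer by 2^32 (i.e. U32_MAX + 1).
    // Only ~32 bits of precision, but uniform and always < 1.
    const crypto = require('crypto');

    function randomUnit() {
      const u32 = crypto.randomBytes(4).readUInt32LE(0);
      return u32 / 2 ** 32;
    }

    console.log(randomUnit());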
| quesera wrote:
| Mmm, great points.
|
| At least for range [0,MAXINT], it is simple to fill the
| bitspace for an integer. Range [0,limit) requires
| slightly more-awkward division.
|
| The problem of uniformity across the range in a populated
| float is critical -- as you point out, IEEE float layouts
| are not simple.
|
| I would guess that the "float" result is constructed as a
| fixed-point decimal in the same way you would an integer
| (sprinkle the RNG bits over the storage bitspace), but
| returned as float (via cast or math) to match the
| language type system.
| pama wrote:
| I would have agreed 50 years ago. It has been used in many
| libraries in many languages historically; nowadays it would
| be confusing if it returned anything else.
| numpad0 wrote:
| "Math.random() generates a randomized binary sequence,
| which is cast and returned as a floating point type; sign
| and exponent bits are hard-coded, such that the sequence
| may represent a number between 0.0f and 1.0f. NOTE: This
| method should NOT be used for cryptographic and/or security
| use cases. Implementation is performance oriented and is
| generally recommended for all non-sensitive programming,
| such as user interactive visualizations, and randomized
| suggestions."
|
| Better?
| messe wrote:
| Seems like a perfectly reasonable range, and what I'd
| expect for a method named that. If it was uniformly
| distributed over the range of a double, it would almost
| always be of large magnitude, which wouldn't be
| particularly useful.
| [deleted]
| [deleted]
| nassimm wrote:
| Math.random() generates a decimal that can be 0 but is less
| than 1, so when multiplied by 6 the range is 0-5.99999, which
| is then rounded down.
| cplusplusfellow wrote:
| I didn't realize the browser could exfiltrate files like this. Or
| is this intended to be run via Node?
| alpaca128 wrote:
| > Or is this intended to be run via Node?
|
| Yes, as can be seen in the single line in the linked readme.
| njsubedi wrote:
| Node.
| TheBrokenRail wrote:
| Definitely Node (or Electron, which is basically Node with a
| browser bolted on).
| cplusplusfellow wrote:
| I figured an honest question would be downvoted on such a site
| as this.
| Karellen wrote:
| The README.md, which is displayed on the linked page and
| should be above the fold on 95% of viewing devices, contains
| only two lines of text, one of which is the README's title
| and the name of the project. The other line is:
| `node main.js` for a 1/6 chance of posting your SSH private
| key on pastebin :)
|
| The question might be honest, but I don't think it adds much
| to the discussion.
| anthk wrote:
| Sometimes I wish Linux and the *BSDs followed the Plan9/9front
| security approach. Keys are handled by factotum, and not under
| your home directories.
| recuter wrote:
| Landshark, cleverest species of them all..
| exabrial wrote:
| Everyone likes to crap on PGP, but this is why my ssh keys are
| subkeys of my GPG key and locked up with the GPG SSH agent.
|
| This approach is far from perfect but certainly disallows
| outright exfiltration attacks.
| lostmsu wrote:
| Can you elaborate more? What's the difference between PGP, and
| having a regular SSH key password protected + running a regular
| SSH agent?
| exabrial wrote:
| The advantages are very similar actually.
|
| The added 'benefit' or 'disadvantage' is dependent on your
| use case.
|
| If I regard my PGP key as the toehold to my online identity,
| having an ssh key tied to that identity is quite useful. Kind
| of neat to see the guy that signed the git commit is the same
| guy logging into the server, installing the software signed
| with the same pgp key.
|
| If your threat model needs to keep your online identity
| somewhat anonymous, then a private key per server is likely
| the way to go.
|
| An encrypted ssh key is somewhere in the middle there.
| eropple wrote:
| For me, physical smartcard storage + overarching
| identification. I've been using a Yubikey as my GPG key for a
| long time (and another in my safe as my GPG root). You can
| use SSH with FIDO2 today as well, but outside of SSH GPG
| provides a web of trust which has other benefits, such as a
| root of revocation (a different Yubikey in my document safe
| contains my GPG root) and signature validation not being tied
| to SSH key presence/absence on GitHub.
| aaronmdjones wrote:
| I too have a primary YubiKey with my 3 PGP subkeys on it
| (signing, authenticating, decrypting), and a backup YubiKey
| in my safe with my PGP master key on it.
|
| I find it works quite well. The primary YubiKey goes
| everywhere I go; I have a lanyard for it. The backup
| YubiKey stays in the safe until I need it for something
| (e.g. signing someone else's PGP key, rotating subkeys,
| renewing the expiration date on a subkey, ...).
|
| I also use both of them for FIDO, on websites that support
| real 2FA; more specifically, I enroll both of them, but I
| only routinely use the primary one.
| njsubedi wrote:
| One of my colleagues was asking me a question about this last
| week. Can all/any applications running on our device read the
| key? They work on a mac, and wrote a simple python script to
| confirm. Any program running in the userspace can read the
| private key file; have the private keys always been not so
| private all this time?
| tinus_hn wrote:
| Yes, it's actually a bit disappointing they didn't implement
| keychain support which makes this a lot harder. But then people
| would be screaming that Apple is peeping at your private keys,
| even though Apple can't see the contents of the keychain.
| johnklos wrote:
| That's why it's a good idea to use a passphrase with your key
| so that the key by itself is not useful to anyone.
|
| It's not easy for people to run only trustworthy software, or
| even software that has been reasonably vetted by others. Not
| everyone has the aptitude to know how to check for
| surreptitious file accesses, or the desire to learn just
| to make functional use of their computers.
| mac-chaffee wrote:
| Yep, same with cookies and cloud credentials:
| https://www.macchaffee.com/blog/2023/hacking-myself/
| pluc wrote:
| Only if they run under your user, as your private key's
| permissions should allow only you to read it. Programs running
| as you are basically you.
| Bootvis wrote:
| That's why ideally you use a pass phrase with your ssh key. Apps
| can still read it but not use it.
| progbits wrote:
| Even better, if possible switch to something like PGP keys on
| Yubikey which prevents exfiltration of the private key, and
| will only sign things when you enter PIN / touch the device.
| doubled112 wrote:
| This has been my SSH key solution for a while now.
|
| Worked smoothly on most systems.
|
| Kind of messy on Windows, because there are so many SSH
| agent implementations, but GPG4Win's latest version works
| with the native SSH now. Real progress.
| tkanarsky wrote:
| I find that the PIV smart card stack is needlessly
| complicated if all you're trying to do is add a resident
| SSH key to your yubikey. Look at `ed25519-sk` [0], which
| is supported by default by recent versions of OpenSSH
| (and dropbear? idk)
|
| [0]: https://news.ycombinator.com/item?id=29231396
| doubled112 wrote:
| PGP is definitely complicated if you're not going to use
| it for other functionality.
|
| And that's completely separate to the PIV functionality
| on the key.
| egberts1 wrote:
| Not the map you are looking for but there is this
| comparison chart of SSH clients and their algorithms.
|
| https://ssh-comparison.quendi.de/comparison/cipher.html
| doubled112 wrote:
| https://github.com/rupor-github/win-gpg-
| agent/blob/main/docs...
|
| Don't forget this diagram of all the agents, protocols
| and bridges you might hit on Windows.
| grishka wrote:
| But then enter it every time you need to use the key, thus
| negating the advantage of _just magically logging in_ without
| passwords? Because if you use ssh-add and only enter the
| passphrase once per reboot, apps will be able to use it,
| that's the point.
| jeroenhd wrote:
| "Just magically logging in" is more of a nice side-effect
| than the intended purpose, in my opinion. SSH keys allow
| you to let multiple people log into a server without
| needing to set up complicated user accounts and without
| sharing a password that quickly becomes difficult to
| change.
|
| You can have the best of both worlds by storing the key
| itself in a place that's not readable by many programs.
| TPMs and other such tech can store a key securely without
| risk of FunnyGame.app sending it to a remote server. In
| this model the key would be stored inside a safe, sandboxed
| place, only readable by an SSH agent or similar, which will
| prompt for permission to use the key every time. With
| fingerprint scanners and other biometrics being available
| even in cheap devices, this process can be relatively
| seamless.
|
| If you run sufficiently modern SSH software, you can also
| use external key stores like Yubikeys to authenticate with
| plain old OpenSSH.
| Xylakant wrote:
| You can (and should) use ssh-agent/ssh-add to handle the
| key for you. It will still protect you against apps reading
| the key - ssh-agent only performs crypto operations on
| behalf of programs and will not hand out the private key.
| PlutoIsAPlanet wrote:
| So a malicious app instead could just read your known
| hosts file, use the SSH agent to connect to them and
| spread malware that way, including installing its own
| public key.
|
| Doesn't really protect you.
|
| Sandboxing is pretty much the only way to solve this.
| SELinux does place restrictions, but that's a dumpster
| fire of over-engineering that's useless for the end user,
| who, when they find their computer isn't doing what they
| want it to do, will turn it off.
| Xylakant wrote:
| It protects from exfiltrating the key, which is
| something. Because yes, the app could connect (if the key
| has been loaded, which is not guaranteed) but that's
| something entirely different. Not saying it's not a
| threat, but it's a different threat with different
| mitigation.
| speed_spread wrote:
| Could you individually authorize every app for ssh-agent
| access? Maybe like sudo, the app would get a temporary
| token. This would work well in combination with a
| sandbox.
| jesprenj wrote:
| The app in question can just dump the memory of ssh-agent
| and obtain the private key from there. Or not?
| tristor wrote:
| Usually no. It requires root / Admin to dump memory of
| other processes, generally. Although vulnerabilities do
| exist.
| jesprenj wrote:
| Are you sure this is how, let's say, Linux behaves?
|
| I tested it now in a minimal privilege account in a
| chroot on Debian 11 that I use for login from untrusted
| machines, and strace worked. This is how I captured a
| password entered into a ssh client password prompt,
| opened in another login shell of the same user:
|
| -bash-5.1$ ps aux | grep abcde
| z 2502130 0.0 0.3 9500 6132 ? S+ 18:04 0:00 ssh abcde@localhost
| z 2502140 0.0 0.1 6316 2336 ? S+ 18:04 0:00 grep abcde
| -bash-5.1$ strace -p 2502130
| strace: Process 2502130 attached
| read(4, "s", 1) = 1
| read(4, "e", 1) = 1
| read(4, "c", 1) = 1
| read(4, "r", 1) = 1
| read(4, "e", 1) = 1
| read(4, "t", 1) = 1
| read(4, "\n", 1) = 1
| write(4, "\n", 1) = 1
| ioctl(4, TCGETS, {B38400 opost isig icanon -echo ...}) = 0
| LarryMullins wrote:
| A malicious program could also add a passphrase-logging
| wrapper around `ssh` or `sudo` to your PATH and nab your
| password the next time you try to use either of those.
| This whole model of computing assumes that you'll never
| run a malicious program, it completely collapses if you
| do.
| Xylakant wrote:
| Absolutely, but there are various attack vectors that
| different mitigations are effective against.
|
| The program doesn't even need to be malicious; for a
| while it was a pretty common attack vector to trick
| browsers into uploading random files you could access.
|
| Later, a malicious ssh server could read memory of the
| ssh process, potentially exposing the private key
| (CVE-2016-0777)
|
| Using an agent with an encrypted key protects against
| that. Using a yubikey/smartcard as well. So it's strictly
| a good thing to use it.
|
| A yubikey could potentially protect you against a
| malicious program that wants to open connections if you
| have set it up to confirm every key operation - but that
| comes at a cost. You could also use little snitch to see
| what network connections a program opens, protecting you
| against a program trying to use your agent to access a
| server.
| _def wrote:
| > have the private keys always been not so private all this
| time?
|
| It's not called a private key because it is very secure and can't
| be accessed... It's on you to ensure that!
| calvinmorrison wrote:
| Use a pass phrase!
| njsubedi wrote:
| I do. Most probably they do too, but since any running apps
| can access the user's private keys, the whole security
| depends on the strength of the passphrase, which can be
| brute-forced offline?
| throwaway290 wrote:
| Use a long one.
| mr_mitm wrote:
| Don't run apps you don't trust outside of a container. If
| there is malware on your system, your SSH keys are only one
| of your many troubles.
| the_af wrote:
| What are apps you do trust?
| jeroenhd wrote:
| Passphrases protect against silent key exfiltration. Make
| them long enough (six or seven words these days, I think?)
| and they won't be cracked in your life time unless the
| quantum people figure their stuff out or you become a
| vampire.
|
| If you're trying to protect against running programs, you
| also need to protect against key loggers. Using hardware-
| backed keys and systems like Windows Hello for validation
| can help with that, as their UI is not easily
| interceptable.
|
| In the end, there's no perfect way to protect your keys if
| you have a virus running on your computer.
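The "six or seven words" figure can be sanity-checked with a little
arithmetic, assuming a diceware-style list of 7776 words and
uniformly random selection:

    // Entropy of an n-word passphrase from a 7776-word list:
    // n * log2(7776), roughly 12.9 bits per word.
    const bitsPerWord = Math.log2(7776);
    for (const n of [4, 5, 6, 7]) {
      console.log(`${n} words: ${(n * bitsPerWord).toFixed(1)} bits`);
    }
    // 6 words come out around 77.5 bits and 7 words around 90.5,
    // well beyond offline brute force against a properly derived key.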
| jesprenj wrote:
| That's usually my argument when someone mocks me for logging
| into all my computers as root. Having a separate nonprivileged
| user and running tons of desktop/shell programs isn't really
| much better considering all those programs have access to your
| ~, which is on a PC usually the most important directory IMHO.
|
| firejail is a program that helps mitigate this issue by
| restricting syscalls of programs.
| LarryMullins wrote:
| Logging in as root just seems like a silly thing to do, if
| for no other reason than because so many applications will
| hassle you about being run as root. Why not just use sudo
| when you need it?
| jesprenj wrote:
| I ended up logging in as root mostly for the sake of
| convenience, as now I am no longer bothered with suid
| wrappers like sudo for mundane tasks, like editing system
| configuration files and udev rules for devices -- as the
| sole user of the computer I no longer face EPERM errors
| that force me into `sudo !!`.
|
| I uninstalled sudo and started this habit on personal
| servers as well when the sudoedit vulnerability was
| announced, allowing anyone on a machine with sudo installed
| (regardless of sudoers config) to escalate to root.
| Kamq wrote:
| Logging in as something other than root also stops you from
| doing something really stupid to your system without explicit
| confirmation (usually by running the command with sudo).
| mbwgh wrote:
| According to the Arch Wiki though, firejail relies on
| blacklisting by default (although this seems to be subject to
| change).
|
| So if it's necessary to be careful about the defaults and to
| audit everything carefully etc. (i.e. if it's not idiot
| proof), I am doubtful this is as helpful in practice as one
| might expect.
|
| I still agree with the general point of your comment though.
| nl wrote:
| This is wrong. Data is important but so too is control of
| executable programs installed on your computer.
|
| Running as root allows a bug in an application like a browser
| to be exploited and give them root access.
|
| Then they can modify programs like firejail and suddenly
| things you thought were protected aren't.
| hsbauauvhabzb wrote:
| I'm the only user on my system, compromise of uid 1000 is
| as bad as root. If you really care, move into a
| containerised operating system.
| jesprenj wrote:
| Fair point, but a browser bug leading to code execution in
| an unprivileged user could, as mentioned, read my SSH
| private keys, GPG private keys, ...
|
| This in turn would allow an attacker to login to my servers
| and other computers leading to a total compromise, as well
| as breaking trust and integrity of my email (PGP keys).
|
| For my PC a compromise of the user I login as would mean
| total chaos and compromise, regardless if this user is root
| or not.
|
| Installation of executable programs isn't limited to the
| root user, a normal unprivileged one can have them as well.
| I mentioned firejail because running the browser inside
| firejail should provide more protection against attacks
| (provided it's correctly configured, as a sibling comment
| points out), as the attacker couldn't escape the browser
| sandbox. Though in the current modern world, a browser
| context compromise could be enough to exploit a power user
| -- webmail, domain registrar web interface, stored
| passwords.
|
| I doubt many power users actually separate their workflow
| well enough as to change to a different VT (or SSH
| connection when working remotely) when performing
| administrative tasks on the computer that require root
| access. Because if users don't do that and just use a suid
| binary, like sudo, a malicious attacker with access to code
| execution in the context of an unprivileged user that
| elevates privileges with sudo could snoop the entered
| password via ptrace or simpler means, like a wrapper binary
| that gets installed without the user's knowledge.
|
| (I am by no means a security expert and my opinion
| shouldn't be treated as useful advice!)
| vasco wrote:
| It's totally fine, just do that npm install or `curl | bash`,
| no need to read anything.
| njsubedi wrote:
| You forgot the /s
| suchar wrote:
| This is true for SSH keys, but not for all data on MacOS, e.g.
| if you run `find ~/Library/Application Support/AddressBook` the
| OS will ask you if you want to give access to contacts to
| iTerm2/whatever (unless you have given it before). I'm not
| aware of a way to create additional sandboxed "folders".
|
| Also, some applications on MacOS are sandboxed, IIRC Mail is
| one of them. Also, some (all?) applications installed from
| AppStore. That's the reason I prefer installing applications
| from AppStore: they seem to be at least somewhat sandboxed.
|
| For development, I try as much as possible to leverage remote
| development via [JetBrains
| Gateway](https://www.jetbrains.com/remote-development/gateway/)
| and [JetBrains Fleet](https://www.jetbrains.com/fleet/). VSCode
| also has remote development but they explicitly assume that
| remote machine is trusted (in the security note in the remote
| extension plugin readme). In the case of JetBrains tools I have
| not seen any explicit declaration of whether the remote host is
| trusted (as in: if the remote machine is pwned then it may as
| well pwn your personal machine), but at a glance it seems like
| there are minimal precautions (if you run a web application and
| open it in a browser, the Gateway will ask if you want to be
| redirected to a browser etc.)
|
| Probably best scenario for such remote development clients on
| MacOS would be to put them in AppStore: this way they could
| leverage sandboxing and in the case of thin client, the
| sandboxing likely won't limit functionality.
| EthicalSimilar wrote:
| You can store them in the Secure Enclave on OSX and require
| TouchID to use the key for signing.
|
| See: https://github.com/maxgoedjen/secretive
| cassianoleal wrote:
| I've been using Secretive for a long time now. It's a great
| piece of tech.
|
| Even if you don't require TouchID, no apps will be able to
| upload your private keys anywhere as they never leave the
| enclave. Sure, they can still _use_ the keys without your
| permission but to do that they need to be running on the
| workstation.
|
| That said, TouchID is really not very inconvenient and if you
| couple that with control persistence, muxing and keepalive on
| the SSH client, it's really a no-brainer.
| adrianmsmith wrote:
| > Any program running in the userspace can read the private key
| file; have the private keys always been not so private all this
| time?
|
| That's right, and the reason for that seeming surprising is
| that the threat model has quietly changed.
|
| Previously: You owned your computer and your data on it, and
| you ran programs you trusted e.g. you'd buy Microsoft Word and
| you'd assume that that program acted in your interests, after
| all the seller wants you to buy the program. Desktop operating
| systems originated from the time when this was the current
| threat model.
|
| Now: Programs don't necessarily act in your interest, and you
| can't trust them. The mobile phone operating systems were built
| with this threat model in mind, so mobile "apps" run in a
| sandbox.
|
| As an example of a modern program that doesn't act in your
| interest, Zoom "accidentally" left a web server on Macs, even
| after it was uninstalled.
| https://techcrunch.com/2019/07/10/apple-silent-update-zoom-a...
| lamontcg wrote:
| Also related to how the threat model has changed:
| https://xkcd.com/1200/
| mgdlbp wrote:
| obligatory https://xkcd.com/1200/
| exabrial wrote:
| Correction: Mobile phone operating systems are designed to
| give a single player in the market unlimited access to your
| privacy while locking out competitors. The operating system
| is not your friend.
|
| Bravo on the rest, you nailed it.
| pindab0ter wrote:
| What an incredibly uncharitable take.
| LarryMullins wrote:
| Being charitable to huge corporations (paperclip
| maximizers) is extremely naive.
| hdjjhhvvhga wrote:
| Care to elaborate? Because nothing the parents said is
| untrue. Even if you yourself don't feel that way, there
| are numerous reports of predatory and unethical behavior
| on the part of any corporation that is able to control
| your device, whether this is Sony[0], Samsung[1],
| Microsoft, Google or Apple[2][3].
|
| They even stopped apologizing and consider their actions
| a standard practice. You know, Microsoft actually used to
| ask me if I would allow them to send a report when Word
| crashed. What happened? What changed so that they no longer
| ask me but do whatever they want? Why, with each update, do
| they insist on "syncing my ms account", which I have to
| disable each time?
|
| The take is not uncharitable, it's realistic.
|
| [0] https://en.wikipedia.org/wiki/Sony_BMG_copy_protectio
| n_rootk...
|
| [1] https://old.reddit.com/r/assholedesign/comments/pqi48
| 6/samsu...
|
| [2] https://gizmodo.com/apple-iphone-analytics-tracking-
| even-whe...
|
| [3] https://www.forbes.com/sites/jeanbaptiste/2019/07/30/
| confirm...
| voakbasda wrote:
| No, experienced. Too many examples of this being true
| have been presented over the years. You do not own the
| software on your devices. You never have.
| [deleted]
| judge2020 wrote:
| Correction: The operating system is a friend that vets your
| friends. Sometimes I don't want to have to do a full
| background check on "everyone" I want to "friend" so I let
| the OS do it for me.
| franga2000 wrote:
| More like an abusive parent that unilaterally decides who
| you're allowed to do what with - sometimes because they
| think they know better than you and sometimes just
| because it's more convenient to them.
| danuker wrote:
| Indeed. One data point is here:
| https://issuetracker.google.com/issues/79906367
| kube-system wrote:
| Malware has been around for a while. I think the bigger
| difference is that we've started to design computer software
| with inside threats in mind.
| lfodofod wrote:
| It's worth noting that desktop Linux has mostly missed this
| development
| hdjjhhvvhga wrote:
| What do you have in mind? I'm using terminal only and
| don't track desktop development. Whenever I have to run
| something I don't trust, I use another account or, if it
| demands elevated privileges, a virtual machine. I guess
| with desktop it's not much different?
| spoiler wrote:
| Not a security expert, so I could be wrong.
|
| I imagine stuff like AppArmor, Snap (or Craft? I forget)
| sandboxes, or Docker and LXCs help with this. Or do they
| not?
| smashed wrote:
| That is exactly what snap is aiming for.
|
| Apps run in a sandbox and have no access to user files
| except through "portals", which are secure file pickers
| essentially.
| lfodofod wrote:
| Yes, AppArmor and snap try to. Still worlds away from
| what Windows and OS X are doing, not to even mention
| mobile platforms.
| franga2000 wrote:
| Linux with snap or flatpak is far closer to mobile than
| whatever isolation Windows and MacOS have. The difference
| is in how widely and well implemented it is (it's
| neither).
| ElectricalUnion wrote:
| > Still worlds away from what Windows
|
| Not really; it's an on-purpose contrived thing to attempt
| to deploy sandboxed apps on Windows.
|
| Developing a sandboxed app in Windows means deploying a
| correctly sandboxed Appx in Microsoft Store, and getting
| those (Appx deployed on Microsoft Store) correctly
| working is hell for any non-trivial application.
|
| On Linux, you can attempt (it's not guaranteed to work) to
| sandbox anything you want. Whether the sandbox is even
| able to conveniently defend what really matters to you
| (say, your private key files) is another matter.
| kube-system wrote:
| Linux was ahead of the game for quite a while. Back in
| the day, most desktop OSes assumed a single user.
| lfodofod wrote:
| Desktop linux still exists in a single user world today,
| excluding some exotic and super fragile setups you might
| see in .edu networks.
| LarryMullins wrote:
| I think he's referring to the time when desktop Linux was
| competing against the likes of Windows 98. At that time,
| it was common for household PCs to be multi-user because
| one computer was shared by several people in the house.
| But with Windows 98, there was no protection between
| users; anybody using the computer could read anybody
| else's files. Even if you didn't have an account on the
| computer, you could just press [cancel] at the login
| screen and have access to the computer. User accounts on
| Windows 98 were only for the convenience of having
| different desktop settings, there was no concept of files
| being owned by specific users.
|
| Linux was a lot different at that time, in that it
| actually had a concept of users owning files. If you
| wanted to access another user's files without their
| permission you had to jump through more hoops like
| booting into single user mode.
| LoganDark wrote:
| > As an example of a modern program that doesn't act in your
| interest, Zoom "accidentally" left a web server on Macs, even
| after it was uninstalled.
| https://techcrunch.com/2019/07/10/apple-silent-update-
| zoom-a...
|
| Isn't this ridiculous? "the update does not require any user
| interaction and is deployed automatically." OK, how do I know
| if it's installed, or how to get it installed if it doesn't
| work? I guess there is just no help for me if I don't
| remember exactly how many auto-update mechanisms I've turned
| off.
|
| </offtopic>
| TheBrokenRail wrote:
| Yeah, un-sandboxed programs can access _all_ your user files.
| That's why there has been such a large push for sandboxing
| tech like Flatpak. (In general though, you really shouldn't be
| running programs you don't trust in anything but a VM.)
| mkmk3 wrote:
| Is running untrusted programs in a VM actually safe? Are they
| sufficiently secure that it's not trivial to escape one if
| that's the expected scenario?
| the_af wrote:
| I understand the principle, but it seems too onerous on the
| end user.
|
| What is a program you "trust"? Something you bought online
| from a curated app store? Those occasionally have trojans as
| well. Something you downloaded? Well, if it's open source,
| that's the norm. Something you build from source? Most people
| wouldn't be able to spot an exploit hidden in the source
| code.
|
| So... is "run everything sandboxed by default" the
| recommendation for regular users? Or is it "do not download
| or buy anything, it's simply not safe"?
| Taywee wrote:
| I trust the maintainers of my distro software repositories.
| Any non-distro software, I want to audit before I install
| or it should be sandboxed.
|
| And yes. The recommendation is to not just download and run
| programs you find on the web.
| the_af wrote:
| Unfortunately I think the option you propose (sandboxing)
| is unreasonable for most users. A lot of the software you
| want to run (e.g. games, but also lots of special
| software, including apps/experiments featured on HN) is
| not available as part of your distro. It's unreasonable
| to expect end users to sandbox everything just in case.
|
| It may be the only thing that _works_, but it's also an
| _unreasonable_ expectation. In practice, this makes it a
| non-solution. A security solution must both work and be
| reasonably doable by most users.
| Kamq wrote:
| Most users aren't on hacker news.
|
| You should not confuse general wording, which is directed
| to people who read this website (by the fact that it's
| y'know posted here instead of somewhere else), with
| advice for the average person.
| the_af wrote:
| What percentage of HN readers do you guess sandbox every
| non-distro-packaged program by default? My guess: they
| probably are a minority even here, so it's a nonstarter
| for the general users population.
| LarryMullins wrote:
| It doesn't have to be reasonable for most users.
| GNU/Linux in general isn't reasonable for most users.
| the_af wrote:
| But this problem isn't exclusive to Linux or Unix. It
| affects everyone using a computer (with the possible
| exception of mobiles that sandbox by default).
| spoiler wrote:
| > I understand the principle, but it seems too onerous on
| the end user.
|
| I agree that this is the state of affairs currently, but
| this could be made to work similarly to how it works on
| Android perhaps, which has generally good UX for this.
| TheBrokenRail wrote:
| > So... is "run everything sandboxed by default" the
| > recommendation for regular users?
|
| Yeah, that is probably the best solution. Most mobile OSes
| do that by default now anyways. Desktop Linux has Flatpaks
| and Snaps. Windows has UWP apps. And I think MacOS has its
| entitlements system IIRC.
|
| If you don't absolutely trust something, you shouldn't
| allow it to run unrestricted.
| the_af wrote:
| If the OS does this by default and it becomes the
| standard way of working, then sure. You would need to
| change how to share files you do want to share and solve
| some other hurdles, of course.
|
| If this isn't the default mode -- transparent, where end
| users must do nothing in particular -- I don't see it
| succeeding though.
| dijit wrote:
| This is how it has been, there are ways around this though:
|
| 1) use a pgp-derived key: this means that anything
| authenticating will hit your gpg agent and only that; nothing
| else is using that key then
|
| 2) load your key and then remove it, which I've done before
| using a LUKS encrypted partition (then load the key into ssh-
| agent, then remove the volume).
|
| 3) Storing your keys in the secure enclave on Apple computers.
| A little bit onerous if you use an external keyboard without
| touchID though.
|
| I have a program on my computer that watches for read events in
| that folder to see if anything actually tries to read an access
| key. I can publish the source if you want. It uses inotify on
| Linux.
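dijit's watcher isn't published in the thread; the idea can be
sketched by shelling out to inotifywait from inotify-tools, since
Node's own fs.watch only reports rename/change events, not reads.
A hypothetical sketch, not dijit's actual program:

    // watch-ssh-reads.js -- requires inotify-tools on Linux.
    const { spawn } = require('child_process');
    const os = require('os');
    const path = require('path');

    const sshDir = path.join(os.homedir(), '.ssh');

    // -m: keep monitoring; -e access: report reads of files in ~/.ssh
    const watcher = spawn('inotifywait', ['-m', '-e', 'access', sshDir]);

    watcher.stdout.on('data', (chunk) => {
      process.stdout.write(`[${new Date().toISOString()}] ${chunk}`);
    });

    watcher.on('error', () => {
      console.error('inotifywait not found; install inotify-tools');
    });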
| hdjjhhvvhga wrote:
| Not that it's very practical, but you can always encrypt your
| key with a passphrase. Useless for automation, very useful
| for cases like these.
| Karellen wrote:
| > Any program running in the userspace can read the private key
| file;
|
| Only programs running as you (or `root`). It's private to _you_
| [0].
|
| Programs running as other users cannot read the file.
|
| (Assuming you've not changed the permissions on the file or the
| `~/.ssh/` directory)
|
| [0] and the sysadmin - but if they're not trustworthy they could
| just replace `/bin/bash` or the kernel with their own version
| that copied everything you typed anyway.
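As a quick check of the permissions point, a small sketch that tests
whether the default key is readable by anyone other than its owner
(ssh itself refuses private keys whose permissions are too open),
assuming the file exists at ~/.ssh/id_rsa:

    // Check that the key has no group/other permission bits set.
    const fs = require('fs');
    const os = require('os');
    const path = require('path');

    const keyPath = path.join(os.homedir(), '.ssh', 'id_rsa');
    const mode = fs.statSync(keyPath).mode & 0o777;

    if (mode & 0o077) {
      console.warn(`mode ${mode.toString(8)} is too open for ${keyPath}`);
    } else {
      console.log(`${keyPath} is readable only by its owner`);
    }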
| brnewd wrote:
| I wrote a tiny node app wrapped as a single binary as a second
| factor for ssh public key login using pam_exec.so. It posts a
| Telegram poll, "allow login to server x as user y from ip z?"
| Yes/No with a 30-second timeout to a private group. A simple way
| to add some additional protection.
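brnewd's code isn't published here, but the shape of it can be
sketched: pam_exec.so exposes the login context through environment
variables such as PAM_USER and PAM_RHOST, and the Telegram Bot API's
sendPoll/getUpdates calls carry the approval round trip. A rough
sketch under those assumptions (Node 18+ for fetch; the token, chat
id and update handling are placeholders, not brnewd's code):

    // pam-telegram-2fa.js -- illustrative only.
    const os = require('os');

    const TOKEN = process.env.TELEGRAM_BOT_TOKEN;
    const CHAT_ID = process.env.TELEGRAM_CHAT_ID;
    const API = `https://api.telegram.org/bot${TOKEN}`;

    // pam_exec.so exports the login context as environment variables.
    const user = process.env.PAM_USER;
    const rhost = process.env.PAM_RHOST;

    async function main() {
      // Non-anonymous poll so poll_answer updates identify the voter.
      const sent = await fetch(`${API}/sendPoll`, {
        method: 'POST',
        headers: { 'content-type': 'application/json' },
        body: JSON.stringify({
          chat_id: CHAT_ID,
          question: `Allow login to ${os.hostname()} as ${user} from ${rhost}?`,
          options: ['Yes', 'No'],
          is_anonymous: false,
          open_period: 30,
        }),
      }).then((r) => r.json());
      const pollId = sent.result.poll.id;

      const deadline = Date.now() + 30_000;
      let offset = 0;
      while (Date.now() < deadline) {
        const updates = await fetch(
          `${API}/getUpdates?offset=${offset}&timeout=10`
        ).then((r) => r.json());
        for (const u of updates.result) {
          offset = u.update_id + 1;
          const a = u.poll_answer;
          if (a && a.poll_id === pollId && a.option_ids[0] === 0) {
            process.exit(0); // "Yes": pam_exec treats exit 0 as success
          }
        }
      }
      process.exit(1); // "No" or timeout: deny the login
    }

    main().catch(() => process.exit(1));

Hooked into the sshd PAM stack with something like `auth required
pam_exec.so /path/to/wrapper`, the login would only proceed if
someone in the group answers "Yes" within the 30-second window.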
| tete wrote:
| Trivial to avoid by using any path other than the default for
| RSA keys, which covers a lot of keys made in the last few years.
|
| Also that's why you should have a strong password on things.
| xwdv wrote:
| If you really wanted to play a dangerous game, you could
| construct a terminal command that had a 1/6th chance of doing an
| rm -rf /* at the root directory with full admin privileges and
| automatic yes to all prompts, preferably on a non virtual machine
| in production with no backups.
| agilob wrote:
| Suicide Linux, already exists
| throwawaaarrgh wrote:
| Personally I would want it to remove the most critical files
| first, or at least corrupt the filesystem. Most damage as
| quick as possible, in case they get cold feet and Ctrl+C.
| Maybe trap all signals so they can't do that either. Maybe
| run in the background!
| traceroute66 wrote:
| Buy Yubikey, put SSH key on Yubikey, job done.
|
| You can use Nitrokey too, but IIRC be careful which one you buy
| as some are software-only implementations.
| trelane wrote:
| > You can use Nitrokey too, but IIRC be careful which one you
| buy as some are software-only implementations.
|
| First I've heard of this. Do you have some links where I can
| read more about this?
| krmbzds wrote:
| You can check this guide: https://github.com/drduh/YubiKey-
| Guide
| traceroute66 wrote:
| > First I've heard of this. Do you have some links where I
| can read more about this?
|
| Sure, the comparison table on the Nitrokey site[1] is
| probably sufficient.
|
| Anything without a green tick next to "tamper-resistant smart
| card" is a software implementation with the associated risks
| (e.g. firmware updates are available[2] - i.e. if you can
| update the firmware then you've also got a low-level attack
| vector for miscreants).
|
| Meanwhile all YubiKeys are hardware backed and it has never
| been possible to update firmware on them.
|
| [1] https://www.nitrokey.com/#comparison [2]
| https://www.nitrokey.com/releases
| demindiro wrote:
| Probably relevant: https://github.com/streaak/pastebin-scraper
| jmclnx wrote:
| And this is a good example of why people really should start
| looking seriously at OpenBSD.
|
| By default Chrome and Firefox use pledge(1) and unveil(1). With
| the defaults, ~/.ssh cannot be seen by these browsers.
| georgyo wrote:
| I sorta understand your point, but this wouldn't help in the
| case of running that script.
|
| Namely, the JS sandbox of the browser already prevents
| filesystem access. But a user running `node` in a shell would
| not be protected by the browser or the hardening of browsers
| you mention. You would need to manually set up those protections
| for your command, which most people will not do.
|
| Similarly, Linux has filesystem namespaces, and tools like
| bubblewrap can achieve similar protections.
|
| Lastly, the real risk of the above is that code is easily
| runnable automatically with an `npm install`, and if you have
| private repositories then node/npm would still need access to
| private key (or maybe token for http) information to fetch
| them.
| tryauuum wrote:
| echo {a..z} | tr ' ' '\n' | sort -R | head -1 > /proc/sysrq-
| trigger
|
| this is my favourite linux russian roulette
| stavros wrote:
| This echoes a letter to /proc/sysrq-trigger, with Deck of Many
| Fates results:
|
| https://www.kernel.org/doc/html/v4.15/admin-guide/sysrq.html...
|
| EDIT: Yeah, uh, it works.
| LinuxBender wrote:
| Here's a variant that works in #!/bin/ash
| base64 /dev/urandom | tr -d '/+'| sed s/[^[:alpha:]]//g |tr
| 'A-Z' 'a-z' | dd bs=1 count=1 2>/dev/null
| sillysaurusx wrote:
| For a moment I was excited to learn about a new shell named
| ash.
| captainkrtek wrote:
| https://en.wikipedia.org/wiki/Almquist_shell
| joshenders wrote:
| Got your wish
| fmajid wrote:
| Joke's on them. I only use ed25519 keys.
|
| Seriously, where's the downvote button when you need one?
|
| But yes, it would be nice for Linux to gain a version of
| OpenBSD's unveil system call.
| hamasho wrote:
| I'm already sure that this somehow slips into a dependency of a
| dependency of a dependency of React, and the world will end.
| jay-barronville wrote:
| So how many of y'all have already run this? Haha.
| cobbzilla wrote:
| Very cute. It would have been cooler as a shell alias for ssh.
|
| Using node seems like cheating; plus you have to call it
| explicitly, and you know you really want to use this to prank your
| colleague who left their laptop unlocked.
| jay-barronville wrote:
| > Using node seems like cheating
|
| Well, you can easily turn it into an executable ( _e.g._ ,
| using `pkg` [1]) so that the target computer doesn't even need
| Node.js installed.
|
| [1]: https://www.npmjs.com/package/pkg
| cobbzilla wrote:
| That's even _more_ cheating.
|
| This can be done purely in shell, no extra tools!
| jay-barronville wrote:
| Fair enough.
___________________________________________________________________
(page generated 2023-01-28 23:02 UTC)