[HN Gopher] Hackers used Slack to break into EA Games
___________________________________________________________________
Hackers used Slack to break into EA Games
Author : danso
Score : 219 points
Date : 2021-06-11 13:29 UTC (9 hours ago)
(HTM) web link (www.vice.com)
(TXT) w3m dump (www.vice.com)
| croes wrote:
| >A representative for the hackers told Motherboard in an online
| chat
|
| Isn't it strange how hacking groups more and more seem to behave
| like legitimate businesses nowadays?
| throwawayboise wrote:
| It's organized crime. They've always operated like a business.
| dylan604 wrote:
| I can't decide whether comparing OC to the ruthlessness of
| big business like that is insulting to the OC or not. When big
| business comes after you, you lose everything. When the OC
| comes after you, you at least have a chance to fight back.
| jayski wrote:
| hmm, a lot of online services have smart ways to detect
| suspicious connections, even if you have a cookie. even
| something as simple as verifying the geo of the IP and client
| type could've been useful here. i'm not sure if slack has this
| functionality
| staticassertion wrote:
| Slack has an audit trail for some accesses, at the least, so EA
| could have done the GEOIP on their side.
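|
| Rough sketch of what that kind of sweep could look like,
| assuming you can export the access log as JSON records with
| "user" and "ip" fields (made-up field names here) and have a
| local MaxMind GeoLite2 database on disk:
|
|   import json
|   import geoip2.database  # pip install geoip2
|   from geoip2.errors import AddressNotFoundError
|
|   EXPECTED = {"US", "CA"}  # countries you expect logins from
|
|   reader = geoip2.database.Reader("GeoLite2-Country.mmdb")
|   with open("slack_access_log.json") as f:
|       entries = json.load(f)
|
|   for entry in entries:
|       try:
|           resp = reader.country(entry["ip"])
|           country = resp.country.iso_code
|       except AddressNotFoundError:
|           country = None
|       if country not in EXPECTED:
|           # flag for review rather than auto-block
|           print(entry["user"], entry["ip"], country)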
| paxys wrote:
| Pretty trivial to route your request through an IP address in
| the same city as the victim.
| plorntus wrote:
| How would mobile networks fare with something like this if
| you're moving around?
| sneak wrote:
| Possible, but not exactly trivial to do the same with an IP
| address in the same city _and_ from the same ASN as the
| impersonation victim.
|
| There are definitely some mitigations servers can use to
| reduce the likelihood of success of this kind of attack.
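|
| A minimal sketch of that kind of check, assuming the ASN seen
| at login time was stored with the session (the session dict
| here is hypothetical) and you have a GeoLite2-ASN database for
| the geoip2 library:
|
|   import geoip2.database  # pip install geoip2
|
|   asn_db = geoip2.database.Reader("GeoLite2-ASN.mmdb")
|
|   def asn_of(ip):
|       return asn_db.asn(ip).autonomous_system_number
|
|   def looks_suspicious(session, request_ip):
|       # A mismatch doesn't prove theft (people roam between
|       # networks), but it's a decent trigger for a step-up
|       # auth challenge instead of silently accepting the
|       # cookie.
|       return asn_of(request_ip) != session["login_asn"]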
|
| Now I want to go build an auth system. :D
| bellyfullofbac wrote:
| > A representative for the hackers told Motherboard in an online
| chat that the process started by purchasing stolen cookies being
| sold online for $10 and using those to gain access to a Slack
| channel used by EA.
|
| How much do you want to bet that a browser extension was
| involved? Man, the
| whole landscape is the new "Download this shareware EXE file" but
| with a thin veneer of legitimacy "It's hosted by
| Google/Mozilla/Microsoft, it can't be bad!".
| SV_BubbleTime wrote:
| I have no idea at all, but I would assume that browser
| extensions are not just an exe with unlimited ability to run
| arbitrary code, but rather I would think they are sandboxed to
| have limited scope... right?
| pkulak wrote:
| They aren't sandboxed at all. Almost all of them request full
| access to every page you visit. At least an exe might not be
| able to view and manipulate the dom inside your browser.
| shadowgovt wrote:
| This is semi-correct, but basically leans in the right
| direction.
|
| A bit more detail: browser extensions have all manner of
| permissions and authorizations, generally specified in the
| manifest file. A lack of permissions translates to
| constraints on the extension (often in the form of the
| browser refraining from populating one of the `window`
| child objects in the context of that extension's JS code).
|
| Users downloading extensions can see what the extension is
| authorized to do. Problem is that much like the security
| ecosystem for mobile apps, end-users have no idea what any
| of those permissions mean and lack the technical competency
| (on average) to determine whether a given permission is
| high-risk or low-risk (how many end-users even know what
| CORS or XSRF are, let alone why they matter?).
|
| Plus, some of the manifest permissions are basically "keys
| to the kingdom"-level open permission that allow arbitrary
| behavior within the limits of the browser sandbox itself.
| One of the reasons Google is changing permissions in
| Manifest v3 (as discussed here:
| https://news.ycombinator.com/item?id=27441898) is to nerf
| some of those permissions (in particular, the Manifest V2
| scripting permissions allow an extension's behavior to
| change by changing some script on a remote site, so an
| extension that was well-behaved when added to the Chrome
| Web Store can have its backing server changed or
| compromised later and become an attack vector).
| btown wrote:
| Even with Manifest v3, if you suddenly have access to a
| browser extension (say, you bought the company making it)
| with global "inject a known content script that's been
| vetted by the Web Store and included in the extension
| bundle" permissions... it's possible to add obfuscated
| code in that content script that provides RCE on the page
| to the attacker.
|
| Imagine something like a modified form of jQuery that
| adds and immediately removes a script tag when provided
| specific inputs - inputs that would be expected to be
| controlled by the results of a web-based JSON endpoint
| that's core to the extension's functionality. Is the web
| store going to code review an entire minified jQuery? Can
| an automated system detect it? What if attackers hired
| someone who won IOCCC? The attackers have the upper hand
| here and can pull the trigger on any website they choose.
|
| Which is why Google might say Manifest v3 is all about
| security, but unless attackers get lazy and let their
| injectors get fingerprinted, it's all just airport-style
| security theater... that happens to strangle ad
| blockers...
|
| At any rate, cookies and other methods of XSS will begin
| to leak more and more, and defense in depth will become
| the paradigm of the day. Assume your employees' browsers
| are compromised and plan accordingly.
| shadowgovt wrote:
| That's the thing though... 90% of attackers are bad at
| their job. Even if it traps mostly people who aren't
| investing IOCCC levels of cleverness into the problem or
| buying companies for the purpose of getting malicious
| extensions into the Chrome web store, it makes the system
| safer for the average user. Certainly safer than the
| system is if any script kiddie can just use the manifest
| V2 permissions to turn a clock app into a keylogger.
|
| And if the model is as breakable as you claim, I don't
| worry about the fate of ad blockers... after all, nothing
| prevents them from using the exploit you just described,
| right? Use a modified jQuery to add and remove a script
| tag to figure out what needs to be blocked?
| btown wrote:
| As far as I can tell, you could, and will still be able
| to, make ads invisible based on dynamic data, but under
| Manifest v3 you would not be able to prevent the ads from
| downloading (and sharing private information, including
| your FLOC cohorts) based on dynamic heuristics, because
| Manifest v3 makes it impossible to use anything other
| than a simple, limited-length blocklist to prevent a
| network request from being made. See:
| https://developer.chrome.com/docs/extensions/mv3/intro/mv3-o...
|
| It's a decrease in privacy being sold as an improvement
| in security and performance. Could they have instead
| implemented a persistent super-sandbox for network-
| blocking heuristic code that addresses the performance
| and security issues, while still enabling dynamic network
| blocking? Probably! But why would they, when the primary
| use case is heuristic-based ad blocking?
|
| And I suppose you're right - part of any defense in depth
| is reducing the number of first-level successes. And it's
| very possible it was script kiddies who sold the cookies
| to the actual attackers in the EA case. But there's more
| going on here than _just_ security.
| tomjakubowski wrote:
| If the extensions have to request permissions for "full
| access" to the page, doesn't that mean they're sandboxed?
| pkulak wrote:
| It doesn't, in practice, if they all ask for it and
| people are conditioned to always grant it.
| asddubs wrote:
| reminds me of the good old days when httponly didn't exist and
| basically every website was vulnerable to xss somewhere.
| neurostimulant wrote:
| There is a marketplace for stolen cookies? Anyone know where it
| is? Seems interesting.
| ConcernedCoder wrote:
| IMHO tricking an employee to provide a login token is a con, not
| a hack.
| ssully wrote:
| It's social engineering, which is definitely a hack/technique
| used in hacking.
| hpoe wrote:
| Actually I am with the parent on this one. We call it "social
| engineering" because it sounds cool, but I think that issue
| muddies the waters for the lay people.
|
| When people hear "hackers used Slack to break into EA games"
| they think of something from the movies, with some people in a
| dark room sitting around a screen saying "I'm in" in a deep
| voice. This convinces people hackers are pervasive, unseen,
| and wield Godlike powers, clearly something they can't
| comprehend or do anything about, so no changes needed in
| their day to day life.
|
| In comparison if you were to phrase it as "some employees got
| conned into giving out their login info" that changes it to,
| "wow look at these guys, everyone knows you shouldn't give
| out your password morons." then the next time they need to
| give out their info they may hesitate for a moment, because
| they don't want to be considered the "moron" in that
| situation.
|
| I don't think it will have a big effect but it will help
| other people become more aware that most of these "hackers"
| are mostly just con artists, instead of letting everyone live
| with the constant anxiety of godlike hackers being able to
| wreak havoc at will.
| danso wrote:
| You really think that any average person, given a login
| cookie, is equally able to blindly infiltrate a company and
| successfully exfiltrate its core assets, as long as they
| can sweet talk a single IT person?
| shadowgovt wrote:
| But no employees in this story _were_ conned into giving
| out their login info.
|
| Instead, hackers acquired enough stolen data to become a
| passable simulacrum of an employee in the text-based
| virtual space of Slack, convinced someone they were a
| coworker, and had that someone hand over the simulated
| employee's extended credentials.
|
| "Hackers can appear to be someone you think you know" is
| exactly the amount of paranoia the public should have.
| Faking an authentic-looking request is the most common form
| of hack, whether that fake is the right 1's and 0's to an
| API, "Hey Bob, got a small problem..." over Slack, or "I'm
| the CEO, and I will have you _fired_ if you don't fix this
| _right now_" in voice over the phone.
| [deleted]
| mrlonglong wrote:
| I have banned slack for years since it's just dross.
| no-dr-onboard wrote:
| What do you use instead?
| rednerrus wrote:
| Carrier Pigeons.
| tailspin2019 wrote:
| Ahh. CPaaS
| cecilpl2 wrote:
| It uses IPoAC.
| klhutchins wrote:
| smoke signals
| mcraiha wrote:
| I would assume that same kind of attack vector would work via
| Teams, Discord etc.
| grumple wrote:
| I would not make that assumption if they use proper security
| practices.
|
| You can make cookies unusable (or harder to use) by anyone but
| the initial owner in a few different ways. You could include
| a hardware or IP fingerprint in the cookie that has to be
| validated server side, for example. You can time out cookies
| after an hour. Etc.
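|
| A rough sketch of binding the fingerprint into the cookie
| itself (standard HMAC over the claims with a server-side
| secret; nothing Slack-specific, and the secret is hardcoded
| only for illustration):
|
|   import hmac, hashlib, time
|
|   SECRET = b"server-side-secret"   # never sent to clients
|   TTL = 3600                       # time out after an hour
|
|   def issue(user, ip):
|       exp = int(time.time()) + TTL
|       payload = f"{user};{ip};{exp}"
|       sig = hmac.new(SECRET, payload.encode(),
|                      hashlib.sha256).hexdigest()
|       return f"{payload};{sig}"
|
|   def validate(cookie, request_ip):
|       user, ip, exp, sig = cookie.rsplit(";", 3)
|       expect = hmac.new(SECRET, f"{user};{ip};{exp}".encode(),
|                         hashlib.sha256).hexdigest()
|       if not hmac.compare_digest(sig, expect):
|           return None   # tampered with
|       if int(exp) < time.time() or ip != request_ip:
|           return None   # expired, or replayed from elsewhere
|       return user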
| S_A_P wrote:
| I beta test for a music instrument manufacturer. Getting into
| their slack channel felt completely arbitrary. I never entered
| a password; I just clicked a link in whichever browser I was
| using. What? Is that normal?? I've never used slack prior to
| this.
| jacobsenscott wrote:
| I believe slack can send one time use login links via email.
| Given the short timeout and one time use on these links I would
| consider this more secure than a password based login. Most
| users just re-use passwords or use simple passwords.
| paulpauper wrote:
| I suspect that these hackers are disappointed by how little they
| earned from stolen EA info, assuming they earned much of
| anything. This is why hackers target bitcoin exchanges so much,
| because that is where the money is.
| imglorp wrote:
| The value might not be in the perps selling stolen IP.
|
| If they can make new game builds, they can trojan them and hand
| them out as bootlegs. Boom thousands more intrusions.
| dang wrote:
| Recent related threads:
|
| _Hackers breach Electronic Arts, stealing game source code and
| tools_ - https://news.ycombinator.com/item?id=27470577 - June
| 2021 (111 comments)
|
| _EA got hacked and games source code leaked including new game
| Battlefield 2042_ - https://news.ycombinator.com/item?id=27468766
| - June 2021 (139 comments)
|
| _EA hacked and source code stolen_ -
| https://news.ycombinator.com/item?id=27462952 - June 2021 (35
| comments)
| mcoliver wrote:
| So cookie gets them into Slack. But how were they able to get
| into the network, spin up a vm and connect to p4? That doesn't
| happen with a cookie and MFA token.
|
| They either had a username/password combo already, or support
| also allowed them to reset the compromised user's password
| (e.g. through a one-time link to set a new pw, or perhaps more
| egregiously "here is your new password: welcome2EA").
|
| A video call and a few questions could have stopped this. Or a
| password reset flow that requires some sort of previous
| information. Would love more details here.
| kerng wrote:
| What is interesting here is that the hackers actively describe
| and share how they broke in, rather than EA setting the
| narrative.
|
| This isn't very common, I think.
| dylan604 wrote:
| There may be a bit of a desire on the hackers' part to
| humiliate EA. This isn't good PR for EA.
| paxys wrote:
| These hackers are about to find out that the market value of
| 780GB of source code is wayyy less than they think.
| wyldfire wrote:
| An interesting attack could be executed by an automaton that
| encrypted the payload such that the victim could trust that if
| they paid a ransom to some address before $my_fave_cryptocoin
| hit block #X then the key would never be revealed. If they
| didn't pay it, the key would be revealed to the attacker.
|
| Pretty funny how important trust is in the existing ransom
| marketplace. But the current market is about restoring
| operations by recovering inaccessible data. EA didn't lose
| anything here but would likely prefer that no one else get
| access.
| throwawayboise wrote:
| Yeah, who would buy it? It would be radioactive for any
| competitor. Even if they were willing to pay for it and launder
| the funds used, releasing any games that used any EA secret
| sauce would very likely be detectable. They'd need to engage in a
| parallel construction of source code history to show how they
| arrived at their implementation independently.
| hu3 wrote:
| It's useful for devs of multiplayer game cheats.
| throwawayboise wrote:
| Is there much money to be made in those? Honest question; I
| don't play games at all.
| tiborsaas wrote:
| Yes, and it's mind-boggling:
|
| https://www.bbc.com/news/technology-56579449
| crysin wrote:
| I would hazard that there definitely is, given how rampant
| online game cheating is across competitive shooters. People
| give grief to Blizzard, Steam, etc. for their overzealous
| methods of detecting cheats (and possibly increasing the
| vulnerabilities of your own machine), but without some way
| to detect or ban cheats, competitive online gaming would be
| a dead market. It's a cat-and-mouse market, and I know back
| when I was playing WoW every day there was a huge market for
| buying bots to harvest in-game resources and fish.
| dylan604 wrote:
| I'm guessing it's worth a lot more for their street cred than
| anything financially.
| fakename wrote:
| It's one cookie, Michael. What could it cost, $10?
| simonw wrote:
| "the process started by purchasing stolen cookies being sold
| online for $10"
|
| Anyone know the details of how this actually works? Does it mean
| browser cookies? How do cookies end up being sold in this way?
| fooey wrote:
| No idea in this instance, but it's trivial for rogue browser
| extensions to siphon off cookies
| InitialBP wrote:
| Slack authentication works like many other web applications and
| once you've successfully authenticated you get a cookie which
| can be used to read/send messages on your behalf.
|
| If someone managed to get some malware running on a machine
| where you have logged into slack it's fairly trivial for
| someone to get your cookie. Something like
| https://github.com/djhohnstein/SharpChromium is an example of a
| tool used to pull browser cookies off a compromised host.
|
| I don't know the explicit details of this particular instance,
| but I imagine the user in question had some kind of malware
| installed on their phone or computer. ( I keep seeing mention
| of a browser extension in the comments, and I have seen some
| working examples of malicious chrome extensions recently that
| would let you steal cookies once installed.)
| tyingq wrote:
| I see browser extensions mentioned. Maybe Tor exit nodes as
| well? Or a compromised corporate MITM proxy...that would be
| funny.
| eli wrote:
| Shouldn't be possible with ssl
| SahAssar wrote:
| In the context of a "MITM proxy" it certainly is.
| eli wrote:
| How would a mitm proxy have a valid certificate for
| slack.com?
| dwild wrote:
| > a compromised corporate MITM proxy
|
| A corporate MITM proxy does have its own certificate
| authority, or else it wouldn't be able to monitor its
| traffic like it's expected to do.
| [deleted]
| VectorLock wrote:
| On underground forums you can buy a plethora of compromised
| accounts from an individual's computer and saved passwords for
| about this much money.
| hiccuphippo wrote:
| That's the real hack here.
| Theizestooke wrote:
| Just ask the girl scouts in your area.
| elliekelly wrote:
| So I could take the cookies from my browser, send them to a
| friend, and my friend could then save the files and use the
| cookies? There's no process to verify it's being used in the
| same browser or computer?
| shadowgovt wrote:
| There should be. You can't guarantee it will always work, but
| it's certainly possible to encode some fingerprinting into
| the cookie so that if the fingerprinting no longer matches
| what the client requesting data from the server looks like,
| the server throws up a red flag and asks for the
| authentication challenge response to be repeated.
|
| But no, it sounds like Slack doesn't do that, which is a
| problem.
| excitom wrote:
| Security I implemented in the 90's: Encode the client IP
| address into the cookie.
|
| Also include a timestamp to force re-authentication at some
| point.
|
| This isn't rocket science.
| jshmrsn wrote:
| Wouldn't the cookie then be invalidated if you e.g.
| switched between WiFi and cellular on mobile?
| __turbobrew__ wrote:
| Yes, it would be invalidated. This would not work well on
| roaming devices.
| josho wrote:
| And now, as I switch between IPv4 and IPv6, connect to my
| VPN, or switch from wifi to mobile data, each change requires
| a login, and I very quickly abandon your app for one that
| doesn't force multiple authentications throughout the
| day.
| VectorLock wrote:
| Good luck with that on mobile.
| teawrecks wrote:
| This is non-trivial to do without gathering more info about
| someone's browser than users would be comfortable with (aka
| fingerprinting). Ideally, a website/app knows nothing about
| the machine it's running on and is perfectly sandboxed away
| from everything else.
|
| A private key shouldn't be device dependent. 2FA was the
| solution here, but was not enforced properly. They were able
| to social engineer IT into resetting their 2FA token without
| any proof that they were who they said they were.
| dwild wrote:
| The IP is something someone isn't comfortable with? It
| seems to be quite trivial to include the IP in the
| cookie/session and verify it's the right one.
|
| Sure, it's not perfect in a world where IPs are not
| necessarily static (and a VPN may change it), but having to
| login again is not that bad, even more so if you are using
| SSO.
| Jenk wrote:
| > The IP is something someone isn't comfortable with?
|
| No good. Slack's biggest userbase is corporate, behind
| corporate VPNs. Many users have the same IP.
| Alex3917 wrote:
| PhantomBuster makes you fork over your Slack cookie in order to
| use it. Could easily be that or something similar.
| TechBro8615 wrote:
| This would be my guess too. I'd expect common sources to
| include scraping services (like phantombuster, but also
| lower-level infrastructure like proxies) and browser
| extensions.
| miketery wrote:
| I don't know for sure, however I presume if you have a
| compromised browser extension or malware on your device then
| the cookie can be extracted. And then they're sold on
| marketplaces for black hats.
| Jenk wrote:
| Anyone wondering how "easy" it is to obtain a token may be
| interested in the setup instructions for slack-term[0]. As for
| token expiration.. I've been logged into slack-term using the
| same token for weeks if not months, without needing to
| reauthenticate. I assume it refreshes.
|
| [0]: https://github.com/erroneousboat/slack-term/wiki#running-sla...
|
| Also free plug for slack-term, I guess. :)
|
| TL;DR: socially engineer an EA employee with a link like
| https://slack.com/oauth/authorize?client_id=32165416946541.1...
|
| and if they are half asleep they just might auth you.
|
| STL;SDR: EA probably permit employees to authenticate with non-
| official apps/clients, which they probably won't be permitting
| anymore.
| misiti3780 wrote:
| How did they use a 10 dollar cookie to get into the slack
| channel?
| maccard wrote:
| The cookie is an authenticated token to log in to the EA slack.
| They put the cookie in place on their machine and as far as
| slack is concerned they are the person who the cookie came
| from.
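|
| In other words the session cookie is a bearer credential:
| whatever presents it is treated as that session. A generic
| illustration (hypothetical endpoint, not Slack's actual API):
|
|   import requests
|
|   # the purchased value drops straight into the Cookie header
|   resp = requests.get(
|       "https://chat.example.com/api/me",
|       cookies={"session": "value-bought-for-10-dollars"},
|   )
|   # Server side, nothing distinguishes this request from the
|   # victim's browser unless extra checks (IP, device, MFA,
|   # short expiry) are layered on top.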
| thrower123 wrote:
| Why would you be surprised that the TCS or Cognizant or Accenture
| customer support rep in Bangalore would care one iota about
| security?
|
| All they care about is minimizing average time-to-answer and
| average handle time.
| jabroni_salad wrote:
| Speaking as a consultant, let me tell you that literally none
| of my clients over the past three years had a password reset
| authentication policy. Employee just calls IT, and they reset
| the pw on the spot with no checks whatsoever.
|
| Keep in mind that pretty much everyone hates passwords, and a
| lockout constitutes a work stoppage. Adding any additional
| friction is resisted at every level from the C-suite to the
| janitor. I try to mitigate it by proposing self-service
| workflows, and that's popular, but the IT hotline reset is
| incredibly difficult to get rid of.
| sxp wrote:
| The key bit is "The hackers then requested a multifactor
| authentication token from EA IT support to gain access to EA's
| corporate network. The representative said this was successful
| two times."
|
| So this was primarily a social engineering hack after Slack was
| used to get access to a trusted messaging channel.
| barbazoo wrote:
| Exactly, doesn't sound like this has anything to do with Slack
| in particular.
| shadowgovt wrote:
| Exactly. It has about as much to do with COVID-19 as with
| Slack.
|
| In pre-COVID-19 days, IT handling that kind of request would
| have required it to come in face-to-face. But since
| everyone's working from home, that's intractable.
|
| The failure point here is that IT should have confirmed via
| an independent secondary channel the identity of the
| requester, but it appears they either got lazy or their
| protocols assumed Slack could not be compromised in this way
| so the request was already authentic.
| maccard wrote:
| > In pre-COVID-19 days, IT handling that kind of request
| would have required it to come in face-to-face. But since
| everyone's working from home, that's intractable.
|
| In pre-COVID times it wasn't uncommon for people in my
| office (which didn't have a dedicated onsite IT person) to
| ping IT on Slack for the person sitting next to them to ask
| for a PW reset.
|
| Face to face doesn't solve anything. If there's a 2000
| person company and you can get into the building, chances
| are you could walk up to the IT desk and say "hey my name
| is John Doe I'm an engineer here and I locked myself out of
| X, can I get a reset?" And you'd be given it without any
| verification
| dysfunction wrote:
| Previous company I worked at had a policy that you needed
| to include a photo of yourself with the current date
| written with any PW reset request, but of course that
| doesn't work as well at a 2000 person company as at a 100
| person startup where IT knows everyone's face.
|
| Plus that's still vulnerable to googling photos of the
| employee you're impersonating and photoshopping a piece
| of paper with the date.
| adrianmonk wrote:
| If you walk up to the IT desk, they should check your
| identity. (At least for security-sensitive things like
| password resets or locked accounts.)
|
| Most employees have a photo badge, so they could scan
| that and look up your records in the HR database. If
| you've lost your badge, they could ask your name and look
| that up in the HR database, which hopefully has a photo
| on file.
| throwawayboise wrote:
| The one time I locked myself out hard at work and had to
| go "in person" for a password reset, I had to show ID
| even when the person who was resetting my password knew
| me by sight. And this was well over a decade ago.
|
| And in all but the smallest companies I've worked for,
| you needed a card swipe at the door (or an escort) to
| pass from public to internal areas.
|
| This is just really, really basic physical security
| stuff.
| maccard wrote:
| That might be true at some companies, but has not been my
| experience at 3 different companies.
|
| > And in all but the smallest companies I've worked for,
| you needed a card swipe at the door (or an escort) to
| pass from public to internal areas.
|
| And once you're past the barrier of those internal areas,
| you're unlikely to be questioned in all but the most
| strictly controlled places.
| throwawayboise wrote:
| In my case, everyone is supposed to be visibly wearing
| their company ID, on a necklace/lanyard or something like
| that. However even if challenged without one, "oh sorry,
| I left my ID at my desk" is likely to work. However
| everyone but brand-new employees is at least vaguely
| familiar to everyone else, so I'm not sure how easily a
| complete stranger would be able to move around.
| antiterra wrote:
| If you're over 2000 employees, there's little excuse for
| not requiring a badge scan to queue for all IT desk
| interactions. That scan should display name and photo to
| both desk workers and on a 'now helping' screen others
| can see.
|
| It's not foolproof, but it's incredibly difficult to
| prevent tailgating because people don't want to start an
| incident by incorrectly challenging a coworker. I've
| heard of it somehow happening even with card turnstiles,
| double door scans and required photo confirm by lobby
| security.
| jdavis703 wrote:
| Most 2,000 person office buildings have security guards
| and doors/gates/turnstiles that won't let just anyone
| waltz in.
| maccard wrote:
| Right, but once you get past that area nobody will
| question it. In the same way that someone who works for
| your company on slack pings you, you assume they work for
| your company.
| notyourday wrote:
| In the companies that I am familiar with, the internal
| space has doors, and every single door opens only with a
| badge.
| whydoineedthis wrote:
| Aside from that small part about a stolen, long-lived cookie
| working on an untrusted device.
| grumple wrote:
| Well, since the user was able to gain access with a stolen
| cookie, these things are possibly true:
|
| 1) Slack does not invalidate sessions after a short enough
| period of time or inactivity (for a cookie to make it to
| another site, be purchased and used, probably takes some
| time).
|
| 2) Slack does not properly terminate sessions on logout or
| inactivity (allowing cookies to be reused after logout).
|
| 3) Slack is not using any more clever techniques to make
| cookies useless to attackers.
| Ueland wrote:
| Session invalidation time is something the Slack admin (that
| is, EA in this case) configures themselves.
|
| Fun fact, if you want SSO for your Slack, you have to pay
| extra unless you are already using one of the top tier
| Slack editions. So either pay up, or have multiple ways of
| administering your users, with the security implications
| that causes.
| whydoyoucare wrote:
| I've rarely seen any enterprise Slack ever configured for
| session timeouts. Misplaced trust in the employee device,
| combined with choosing convenience over security.
| staticassertion wrote:
| Something important to keep in mind is that Slack has a lot
| of not so great defaults. Go check your settings - infinite
| sessions, infinite channel retention, etc.
| danso wrote:
| The problem probably isn't Slack itself, but EA's policies
| around Slack. I think it's not nothing that the social
| engineering happened over Slack. For starters, somehow login
| cookies to the Slack were stolen and then sold, and those
| were enough to get into the Slack. And then in 2 separate
| instances, the hackers were able to convince IT to give them
| 2 new tokens.
|
| Maybe the IT team/policy is just weak across the board, and
| they would've handed keys over the phone or through internal
| email. But it's not impossible for the IT team to have a
| complacent mindset around Slack.
| Thaxll wrote:
| The thing is, you need MFA to log into Slack, but having a
| valid cookie bypasses that.
|
| On top of that, once you're in Slack you can access every
| public channel, search strings, etc. I can tell you that
| large companies have a lot of things displayed in Slack
| (CI/CD pipeline results, credentials in logs, alerts, etc.);
| you don't even need to talk to someone to get a lot of info.
| hu3 wrote:
| This is a concern of mine on systems I design/maintain.
|
| How do I prevent a stolen cookie from successfully
| authenticating someone else?
|
| Do I store user browser-agent/IP and check that on every
| request?
| throwawayboise wrote:
| If it's an internal corporate system where all the users
| sit at assigned machines and have fixed IP addresses, yes
| you can do stuff like IP address checking.
|
| Otherwise you probably need short-lived cookies that get
| renewed by the client in the background, with a hard
| expiry of some reasonable "work day" length such as 8,
| 12, 16 hrs. Then even if it's stolen, there's a fairly
| short window of time that it's useful to anyone.
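|
| Sketch of that rolling-renewal idea, with a short idle window
| plus a hard cap (the session dict and its timestamps are
| hypothetical and tracked server side):
|
|   import time
|
|   IDLE_TTL = 15 * 60        # must renew within 15 minutes
|   ABSOLUTE_TTL = 12 * 3600  # hard stop after a "work day"
|
|   def renew(session):
|       now = time.time()
|       if now - session["issued_at"] > ABSOLUTE_TTL:
|           return None  # hard expiry: force a fresh login
|       if now - session["last_seen"] > IDLE_TTL:
|           return None  # went idle: re-authenticate
|       session["last_seen"] = now
|       return session   # client keeps renewing quietly
|
| Even if the cookie leaks, the attacker only has until the next
| missed renewal or the hard expiry, whichever comes first.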
| josho wrote:
| In my opinion you don't. Rely on the authentication
| provider to handle that responsibility. Services like
| Duo/Okta perform this risk assessment and may opt to
| require an MFA challenge.
| bradleydwyer wrote:
| I've never wanted to completely hand over authentication
| to a third-party.
|
| Instead, what I think I'd like is just the risk
| assessment to be performed by a third party while I'm
| handling authentication (i.e. a third party that has a
| broader view of what's happening across multiple services
| over time). I just send the pieces of information that
| I'm willing to share as an API call and they make the
| best risk assessment they can.
|
| Then I can take that risk assessment result and make a
| final decision if authentication succeeds or not.
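|
| Roughly what I mean, where the risk endpoint and its response
| shape are entirely made up for illustration; the point is that
| the final allow/deny decision stays on my side:
|
|   import requests
|
|   RISK_API = "https://risk.example.com/v1/score"  # hypothetical
|
|   def risk_score(user_id, ip, user_agent):
|       # only the pieces I'm willing to share go over the wire
|       resp = requests.post(RISK_API, json={
|           "user_id": user_id,
|           "ip": ip,
|           "user_agent": user_agent,
|       }, timeout=2)
|       return resp.json().get("score", 1.0)  # 0 safe .. 1 risky
|
|   def login_decision(user_id, ip, user_agent):
|       score = risk_score(user_id, ip, user_agent)
|       if score > 0.8:
|           return "deny"
|       if score > 0.4:
|           return "challenge"   # step-up MFA
|       return "allow"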
| jsmeaton wrote:
| There are risk services out there.
|
| https://sift.com/ is one you call out to that gives you a
| risk score.
|
| https://datadome.co/ can sit within your cdn layer that
| does risk assessment.
| hu3 wrote:
| That's not always an option.
| bostonsre wrote:
| Yea, I think most users think slack is a safe space. They
| haven't been conditioned to be suspicious of messages on
| slack like they have with email. It's a pretty ripe attack
| vector.
|
| Was bored the other week and found a ton of Slack webhook
| URLs in public GitHub repos. I think it would be a pretty
| great way to do some phishing: just scrape the URLs and
| brute-force channel names, posting messages with links to
| websites you own.
| SCHiM wrote:
| It's the same old same old. As you correctly identified,
| it's a new place and it looks different. People don't think
| an attacker could approach them over IM, but they can.
|
| But the problem goes beyond that. Many organizations have
| disabled sharing executable file formats in attachments
| over e-mail. Gmail flat-out prevents you from sharing
| executables and macro-enabled Word documents as
| attachments, even when they're put into a zip file.
|
| But on Microsoft teams? I sent a zip file with 8 unsigned
| executables to a colleague a few days ago. No warnings,
| no messages, no nothing :)
| zeusk wrote:
| For the general public I can see how that is helpful, if
| they can't decide whether a foreign executable is
| trustworthy or if they execute everything with admin/root
| privileges.
|
| As for me, I really hate this "feature". I work with IHVs
| and often have to share private binaries and it's a chore
| using xcopy/sfpcopy to their bespoke network path, from
| where I guess then someone manually copies over to their
| local subnet.
|
| We should have a more robust mechanism in place than to
| outright ban sharing of executable files. Windows
| Smartscreen and Mac's Gatekeeper method of online
| checksum/signature verification is sort of interesting.
| recursive wrote:
| I don't know if it's true, but a co-worker of mine said
| that, "in the wild" signed binaries are positively
| correlated with being malware.
|
| I don't think the signing itself does much.
| manquer wrote:
| Perhaps not in itself. The lack of scans is concerning
| though
| tialaramex wrote:
| I'm interested in what sort of "multifactor authentication
| token" and how IT support were able to grant this request.
|
| Are we talking about a _physical_ token like an old-fashioned
| SecurID OTP keyfob or a Yubikey? Or something custom?
|
| Or are we just talking about a code that real employees get via
| TOTP or worse SMS?
|
| It seems like the same week large employers figured out they
| would need to FedEx the new guy a laptop since he can't come to
| the office, they would likewise realise they want to make sure
| they FedEx that laptop to his actual home, not some building
| site a scammer told them to send it to. And so you'd hope that
| _physical_ tokens likewise can't just be sent to some random
| social engineer based on one chat.
|
| Even if you do succeed in getting them to FedEx the token to a
| building site, that's not a trivial extra step to retrieve. If
| some teenage "criminal mastermind" gives their parents home
| address to get the token delivered, they've also told cops
| where to start looking for the "hacker".
|
| Whereas if it's just a code then I can more easily imagine IT
| support just pastes it into Slack for you.
| LegitShady wrote:
| >Are we talking about a physical token like an old-fashioned
| SecurID OTP keyfob or a Yubikey? Or something custom?
|
| I used to have a Citrix fob that gave me an OTP, but then my
| employer switched to Azure and now it's all text-message
| based. That was just last year.
| paxys wrote:
| If your employees' browser cookies are being bought and sold on
| the open market and tech support is handing out 2FA tokens over
| chat then which IM software you use is the least of your
| concerns.
| SV_BubbleTime wrote:
| Of course, but on the topic of slack, shouldn't there be more
| security than a browser cookie required to get access?
|
| I get that in normal scenarios this cookie wouldn't leave
| the machine, I get that it's very convenient to not have to log
| in all the time, I get that Slack people know this stuff a lot
| better than me... but it sure seems like a weakness to have a
| single point of failure that leaks your slack channel when you
| lose a cookie. Isn't it?
| dvt wrote:
| Basically every website on the internet uses cookies to
| authenticate and verify sessions. At the end of the day,
| you'll always have a "single point of failure" when you need
| to restore remote state. Sometimes that's a password,
| sometimes that's a cookie.
| SV_BubbleTime wrote:
| MFA, PIN code, OS or TPM or browser fingerprinting, a
| cookie signed to your machine, IP address or IP-range lock,
| reauth over a short but not unreasonable interval,
| concurrent-session lockout... I would think there are a few
| options besides just throwing your hands up and admitting
| failure where someone can just pay $10 for a plug-and-play
| cookie.
|
| I don't know about a giant company like EA, but you could
| really screw with my company's plans if you spent any time
| on our Slack.
|
| So now I need to consider that any malware could (and
| probably is) looking for slack cookies to exfil. So, no, I
| don't really believe there is always going to just be a
| single point of failure and it's an insurmountable problem.
| jacobsenscott wrote:
| Giving your cookies short timeouts, say 5 minutes, and
| including the client's ip address in the cookie are both
| mitigating techniques.
| ubercow13 wrote:
| So you would be signed out of Slack every 5 minutes?
| theshrike79 wrote:
| No, Slack would reauthenticate every 5 minutes. It
| doesn't mean that the user needs to do anything.
| kleinsch wrote:
| If you closed Slack for more than 5 mins, you'd get
| signed out. Users would hate it, which is why nobody does
| this.
| SV_BubbleTime wrote:
| So, the alternative should be _"don't trust anything in
| slack because someone can just get your cookie and log in
| as you"_?
|
| Maybe there is a middle ground?
| kleinsch wrote:
| The middle ground is the SSO reauth interval, where
| companies can decide that for greater security they'll
| make you reauth every 8 hours, daily, weekly, etc. to
| balance it against user pain. Companies do this today
| already.
| oarsinsync wrote:
| No, the client would need to renew the cookie every 2.5
| mins (similar approach to DHCP leases). If the external
| IP has changed, the cookie should not be renewed. If it's
| expired already, it should not be renewed.
|
| This won't work, though, as nobody wants to sign in once
| per day. That's too inconvenient.
| halestock wrote:
| For customers, yes, but I've worked at places where we
| required employees to re-auth daily (if not more
| frequently).
| throw_away wrote:
| I worked at a place where re-auth was required every
| twelve hours, which was a convenient reminder of when
| you'd been working way too long.
| [deleted]
| sneak wrote:
| If you are using SSO (to enforce U2F) and password
| managers (really, basic levels of enterprise auth
| competence in 2021) then signing in multiple times per
| day isn't a huge hassle because it's just a click and a
| tap.
| paxys wrote:
| ISPs (especially mobile ones) sometimes rotate IP addresses
| in as little as 4 hours. Relying on static IP addresses
| throughout a client session might be more secure but will
| result in people getting logged out _very_ frequently.
|
| Plus, even with the most strict filtering client IP
| addresses can always be spoofed.
| roywashere wrote:
| Encoding the IP as part of the session is not very common
| practice as many people switch their devices between
| different wifi networks and mobile
| rileyteige wrote:
| MAC address would work for this, no? Wouldn't solve the
| spoofing but solves the dynamic IP address issue.
| IiydAbITMvJkqKf wrote:
| MAC addresses are not exposed to web sites, and for good
| reason. It would be the mother of all supercookies.
| stedaniels wrote:
| MAC addresses aren't sent over the Internet.
| jaywalk wrote:
| MAC address cannot be seen past your local gateway.
| mcpherrinm wrote:
| Slack is used for long term communication. If I'm walking
| around town on my phone, I absolutely don't want slack to
| get logged out every time I connect to some wifi network.
| philip1209 wrote:
| The reality is that it's tough to verify somebody's identity in
| remote work. The office used to be a walled garden, and now
| people seem to treat chat as a walled garden. We'll probably see
| more security incidents like this in the future.
| leipert wrote:
| Just do a video call with the employee when they request e.g.
| MFA resets. Have a policy that enforces that. Maybe have them
| show an ID.
___________________________________________________________________
(page generated 2021-06-11 23:00 UTC)