[HN Gopher] Canarytokens: Honeypot for critical credentials, get...
       ___________________________________________________________________
        
       Canarytokens: Honeypot for critical credentials, get notified when
       they are used (2015)
        
       Author : Kovah
       Score  : 111 points
       Date   : 2024-07-30 08:39 UTC (14 hours ago)
        
 (HTM) web link (canarytokens.org)
 (TXT) w3m dump (canarytokens.org)
        
       | dredmorbius wrote:
       | Previous discussion from 28 Dec 2022 (59 comments):
       | <https://news.ycombinator.com/item?id=34157751>
        
       | dredmorbius wrote:
        | The project documentation is ... slightly ... more useful for
        | generating discussion, though it's pretty lacking in detail:
       | 
       | <https://docs.canarytokens.org/guide/>
        
       | westpfelia wrote:
       | Been a big fan of CanaryTokens since it was just 3-4 different
       | types.
       | 
        | Super easy to configure via webhooks into a SIEM or any kind of
        | alerting platform.
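        | 
        | For instance, a minimal sketch of a webhook receiver in Python
        | (Flask; how you map the payload into your SIEM is up to you,
        | and the JSON handling here is just an assumption):
        | 
        |     # Sketch: receive a canarytoken webhook and forward the
        |     # alert to syslog. Payload structure is an assumption.
        |     import json
        |     import syslog
        | 
        |     from flask import Flask, request
        | 
        |     app = Flask(__name__)
        | 
        |     @app.route("/canary-webhook", methods=["POST"])
        |     def canary_webhook():
        |         alert = request.get_json(silent=True) or {}
        |         msg = "canarytoken fired: " + json.dumps(alert)
        |         syslog.syslog(syslog.LOG_ALERT, msg)
        |         return "", 200
        | 
        |     if __name__ == "__main__":
        |         app.run(port=8080)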
        
       | notepad0x90 wrote:
        | A lot of security tools and "platforms" (don't get me started)
        | have deception features now, which include stuff like this.
       | 
       | https://learn.microsoft.com/en-us/defender-xdr/deception-ove...
       | 
        | But in my opinion, deception tech is best implemented in-house.
        | Nothing wrong with using externally developed tools, especially
        | for high signal-to-noise things like honeypots, but the actual
        | monitoring and alerting data flow should ideally be
        | environment-specific.
        
         | waihtis wrote:
          | Super biased (founder of a deception software company), but
          | you're right: deception's cornerstone is building good-quality
          | internal threat data, so spraying random decoys here and
          | there (a la the platform features you mentioned) will not be
          | of much use.
        
           | notepad0x90 wrote:
            | The first rule of deception, imho, is that it needs to
            | deceive the threat actor, and you should assume the threat
            | actor has done enough recon to know what makes sense and
            | what stands out in an environment. If you have a canary in
            | a file about a project that doesn't exist, or use an
            | external "canarytokens.com" domain in the document, that
            | might be a low-quality decoy. Tools to automate deployment,
            | maintenance and alerting/integration (SIEMs, etc.) are much
            | more useful.
           | 
           | If your company is making deception software, then you
           | already know that Microsoft, Google and similar companies are
           | your competition if your strategy is to do what they do, but
           | I'm sure you've already strategized around all this.
        
             | waihtis wrote:
             | > Tools to automate deployment, maintenance and
             | alerting/integration (SIEMS,etc..) are much more useful
             | 
              | Bingo - between this and not confining yourself to the
              | restrictions of a single platform (what the platform
              | feature vendors have to do - Microsoft's deception offers
              | solutions specific only to certain Microsoft
              | infrastructure, etc.), you can create some terrific
              | outcomes in this domain.
        
         | mdhb wrote:
          | I'm in the middle of building one at the moment, precisely
          | because of how limited I found canary tokens.
         | 
          | I also have a lot of security knowledge (mostly from books
          | rather than practical experience) from non-infosec case
          | studies like intel agency operations, insurgent groups, law
          | enforcement, military, organised crime and other scenarios
          | where the consequences of making bad decisions are incredibly
          | high and probably involve some level of violence at a minimum.
         | 
          | I always thought this XKCD comic (https://xkcd.com/538/)
          | summed up nicely how that factor can actually change your
          | threat model fairly significantly, in a way that I don't
          | generally see turn up in the field of cyber security
          | specifically.
         | 
          | But more than that, I generally found those to be an
          | invaluable resource when it comes to thinking about what
          | actually makes these things work in the real world and what
          | you can actually do to make life very miserable for an
          | adversary long before they even realise what's going on.
         | 
         | It's probably been the most fun thing I've built in 20 years.
        
       | jesprenj wrote:
       | How do they detect MS Word docs being opened?
        
         | mdhb wrote:
          | I don't know this for sure; I'm just going from memory of an
          | operation the Dutch pulled off when trying to take down some
          | darknet marketplaces, where they used this technique.
          | 
          | But basically they would embed a remote image inside the file
          | so it would call out to a web server upon loading the image.
          | It also had the added benefit, from a LEO perspective, that if
          | the user's main defence was running the Tor Browser, it would
          | bypass that and provide the user's true IP address.
         | 
         | However, as I mentioned in another comment here, I actually
         | think those particular kinds of honeypots are kind of dumb in
         | that they only catch stupid adversaries.
         | 
          | Opening those files on your own machine as an attacker while
          | connected to the net is one of those "do not pass go, do not
          | collect $200, go directly to jail" moves.
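          | 
          | To make the mechanics concrete, here's a minimal sketch of
          | the server side of such a beacon in Python (Flask; the path
          | and logging are purely illustrative, not whatever was
          | actually used in that operation):
          | 
          |     # Sketch of the server side of a "remote image" beacon:
          |     # log whoever fetches it. Path and response body are
          |     # illustrative; a real beacon would serve a real 1x1
          |     # image.
          |     import logging
          | 
          |     from flask import Flask, request, Response
          | 
          |     app = Flask(__name__)
          |     logging.basicConfig(level=logging.INFO)
          | 
          |     @app.route("/beacon/<token>.png")
          |     def beacon(token):
          |         logging.info("token=%s ip=%s ua=%s", token,
          |                      request.remote_addr,
          |                      request.headers.get("User-Agent"))
          |         # Placeholder body; serve an actual image here.
          |         return Response(b"", mimetype="image/png")
          | 
          |     if __name__ == "__main__":
          |         app.run(port=8080)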
        
           | knallfrosch wrote:
           | > they only catch stupid adversaries.
           | 
           | It's enough to catch one guy, one time. Then you follow his
           | physical or digital traces.
        
             | mdhb wrote:
             | Yeah, I mean like a lot of things in security, it's better
             | than nothing. But you would have to be very undisciplined
             | or uninformed to get caught by this.
             | 
              | There's even an argument that all you've done is tipped
              | your hand to the adversary that deception is at play in
              | this scenario, allowing them to adjust their approach
              | accordingly.
              | 
              | I'm not even suggesting that would be a horrible thing to
              | happen; even in that scenario you can most likely at least
              | slow them down. But if you never know you're being
              | targeted in the first place, it doesn't matter too much
              | when that clock starts.
             | 
             | The ideal scenario I think you should actually be aiming
             | for here is to craft a situation where you know about them
             | but they don't know that you know. That's a window of time
             | where you very clearly have an upper hand.
             | 
              | That isn't actually that hard to create. For example, one
              | technique I use at that really early stage is to return a
              | 403 auth error on a web service and set a cookie that
              | looks very natural in its environment but makes it very
              | obvious how you could change it in order to no longer get
              | a 403 response.
             | 
              | The moment I get a request with that new cookie value, I
              | instantly know I have something I should be paying
              | attention to, and I know it's a real person, not a bot.
              | The adversary, however, has no idea yet what's going on;
              | they just think they hit a gold mine.
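              | 
              | A rough sketch of that trap in Python (Flask; the
              | cookie name and the "privileged" value are obviously
              | made up, you'd pick whatever looks native):
              | 
              |     # Sketch of the 403-plus-cookie trap. Cookie
              |     # name/values are placeholders.
              |     import logging
              | 
              |     from flask import Flask, make_response, request
              | 
              |     app = Flask(__name__)
              |     logging.basicConfig(level=logging.INFO)
              | 
              |     @app.route("/api/admin")
              |     def admin():
              |         role = request.cookies.get("role")
              |         if role == "admin":
              |             # Nobody is ever handed this value,
              |             # so seeing it means a human tampered
              |             # with the bait cookie.
              |             logging.warning("trap hit from %s",
              |                             request.remote_addr)
              |             return "welcome back", 200
              |         resp = make_response("forbidden", 403)
              |         # Bait: an obviously "fixable" cookie.
              |         resp.set_cookie("role", "guest")
              |         return resp
              | 
              |     if __name__ == "__main__":
              |         app.run(port=8080)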
        
             | jabroni_salad wrote:
              | I have a little guy that notifies me if someone is
              | operating Responder on the business network, and that is
              | my justification as well.
             | 
              | As a defender, you only have to fail once for it to be
              | costly. As an attacker, you can often fail hundreds or
              | thousands of times, depending on what the defenders have
              | for observability. Adding offensive elements helps level
              | the playing field.
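              | 
              | Roughly how such a detector can work, as a sketch in
              | Python (send an LLMNR query for a name that should not
              | exist; Responder answers everything, so any reply is
              | worth an alert):
              | 
              |     # Sketch: probe for an LLMNR poisoner such as
              |     # Responder by asking the LLMNR multicast group
              |     # about a hostname that should not exist.
              |     import os
              |     import socket
              |     import struct
              | 
              |     def llmnr_probe(timeout=3.0):
              |         name = "probe-" + os.urandom(4).hex()
              |         # LLMNR reuses DNS wire format:
              |         # header plus a single question.
              |         hdr = struct.pack(">HHHHHH",
              |                           0x1234, 0, 1, 0, 0, 0)
              |         q = bytes([len(name)]) + name.encode()
              |         q += b"\x00" + struct.pack(">HH", 1, 1)
              |         s = socket.socket(socket.AF_INET,
              |                           socket.SOCK_DGRAM)
              |         s.settimeout(timeout)
              |         s.sendto(hdr + q, ("224.0.0.252", 5355))
              |         try:
              |             data, addr = s.recvfrom(1024)
              |         except socket.timeout:
              |             return None
              |         print("possible poisoner at", addr[0])
              |         return addr[0]
              | 
              |     if __name__ == "__main__":
              |         llmnr_probe()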
        
           | tholdem wrote:
            | All adversaries are just humans, and humans make stupid
            | mistakes. Maintaining opsec is really hard, and just one
            | mistake is enough. It's astounding to read about the really
            | stupid mistakes adversaries make that get them caught. But
            | there is of course also the bias that we only read about
            | the mistakes of the ones that do get caught.
        
             | mdhb wrote:
             | Zero disagreement here whatsoever.
             | 
             | I have story after story after story of people who should
             | have known better who got done for really silly shit.
             | 
              | I'm only making the argument here that this specific
              | technique has some real limitations and known workarounds,
              | to the point that people actively know to look for it,
              | and that as a result I would personally look for other
              | techniques that don't have the same set of trade-offs.
        
         | OptionOfT wrote:
          | Or the MySQL dump?
        
           | CGamesPlay wrote:
            | That one appeared to set a replication host and then use a
            | canary DNS record to do the triggering. The canary payload
            | is just base64-encoded, so it's not hard to reverse engineer
            | if you spot it.
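            | 
            | e.g. a quick sketch for pulling out and decoding the
            | base64-looking strings in a dump (generic, not specific
            | to the canary format):
            | 
            |     # Sketch: find base64-looking strings in a SQL
            |     # dump and show what they decode to.
            |     import base64
            |     import re
            |     import sys
            | 
            |     B64 = re.compile(rb"[A-Za-z0-9+/]{16,}={0,2}")
            | 
            |     def scan(path):
            |         data = open(path, "rb").read()
            |         for m in B64.finditer(data):
            |             blob = m.group()
            |             try:
            |                 text = base64.b64decode(blob)
            |                 text = text.decode("ascii")
            |             except Exception:
            |                 continue
            |             if text.isprintable():
            |                 print(blob.decode(), "->", text)
            | 
            |     if __name__ == "__main__":
            |         scan(sys.argv[1])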
        
         | tholdem wrote:
          | You can check by creating a Word canary and looking at what's
          | inside. I renamed it from .docx to .zip, and it seems there is
          | an external image referenced in the footer. I'm not sure how
          | modern Word handles external images, but I believe you have to
          | approve the download of remote content when you open it
          | nowadays.
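          | 
          | You can also do that check without renaming the file, e.g.
          | with a small Python sketch like this (a .docx is a zip of
          | XML parts; remote images show up as relationships marked
          | External):
          | 
          |     # Sketch: list external relationship targets in a
          |     # .docx, which is where remote image URLs live.
          |     import re
          |     import sys
          |     import zipfile
          | 
          |     def external_targets(path):
          |         with zipfile.ZipFile(path) as doc:
          |             for name in doc.namelist():
          |                 if not name.endswith(".rels"):
          |                     continue
          |                 xml = doc.read(name).decode("utf-8")
          |                 pat = r"<Relationship [^>]*>"
          |                 for rel in re.findall(pat, xml):
          |                     if 'TargetMode="External"' in rel:
          |                         m = re.search(
          |                             r'Target="([^"]*)"', rel)
          |                         print(name, "->", m.group(1))
          | 
          |     if __name__ == "__main__":
          |         external_targets(sys.argv[1])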
        
       | pjot wrote:
       | I've used this to see if my employer was spying on my email. They
       | were.
        
         | ActionHank wrote:
         | Protip, they all are, always, every time, especially if they
         | say they aren't, because they can.
        
           | vntok wrote:
            | Not because they can, because they _must_. At least they
            | must in most jurisdictions of most countries, except when the
            | mail's subject is clearly labelled "PERSONAL" or there's
            | some sort of automated classification in place.
           | 
           | If someone receives an email coming from your employer's
           | domain with a virus or a child porn video attached, your
           | employer had better be able to identify the sender account
           | through logs & audit trails.
        
         | omh wrote:
         | Spying how?
         | 
         | If you embed a URL in emails then a lot of corporate email
         | gateways will blindly follow the link, trying to check it for
         | malware.
         | 
          | This may or may not be a useful security measure, but it has
          | many issues, one of which is that it can look like spying.
        
         | Alifatisk wrote:
         | Are you certain it wasn't just some gateway that followed the
         | url in the email to give a preview or check for malware?
        
           | pjot wrote:
           | Pretty certain, yes. Series A startups generally aren't that
           | sophisticated.
           | 
            | And when the IP address comes from the employer's location...
        
             | notepad0x90 wrote:
              | If they have Gmail or Office 365/Outlook with an
              | enterprise license, URLs get sandbox-detonated. You can
              | tell whether it was the sandbox or not by looking at the
              | IP address and user-agent fields in the HTTP request of
              | your canary hit: it should be the IP of your startup's
              | offices or of the cities your startup operates in, not
              | some random cloud IP.
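              | 
              | That triage is trivial to script, e.g. (the office
              | CIDRs below are placeholders):
              | 
              |     # Sketch: was the canary hit from an office
              |     # range or some random cloud IP? CIDRs are
              |     # placeholders for your real office ranges.
              |     import ipaddress
              | 
              |     OFFICE_NETS = [
              |         ipaddress.ip_network("203.0.113.0/24"),
              |         ipaddress.ip_network("198.51.100.0/24"),
              |     ]
              | 
              |     def classify(hit_ip):
              |         ip = ipaddress.ip_address(hit_ip)
              |         if any(ip in n for n in OFFICE_NETS):
              |             return "office (likely a human)"
              |         return "elsewhere (sandbox/scanner?)"
              | 
              |     print(classify("203.0.113.42"))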
        
               | floam wrote:
               | Or just Apple's mail.app on iOS and macOS.
        
       | Tiberium wrote:
        | It should be mentioned that this is not a bullet-proof solution
        | (obviously); for some services the canary tokens can be
        | bypassed. See e.g. https://trufflesecurity.com/blog/canaries
        | ("TruffleHog Now Detects AWS Canaries without setting them off").
        
         | playingalong wrote:
         | But these are some lame canary tokens then. One could generate
         | real AWS API keys with no actual permissions.
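          | 
          | e.g. roughly like this (boto3; the user name is arbitrary
          | bait, and you'd still need a CloudTrail alarm or similar
          | to actually get notified when the key is used):
          | 
          |     # Sketch: mint a real AWS access key attached to a
          |     # user with no policies, to use as bait. Alerting on
          |     # its use (e.g. via CloudTrail) is not shown here.
          |     import boto3
          | 
          |     iam = boto3.client("iam")
          |     user = "decoy-backup-svc"   # arbitrary bait name
          | 
          |     iam.create_user(UserName=user)
          |     resp = iam.create_access_key(UserName=user)
          |     key = resp["AccessKey"]
          |     print(key["AccessKeyId"], key["SecretAccessKey"])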
        
           | tptacek wrote:
           | CanaryTokens AWS tokens are real AWS creds, last time I
           | checked.
        
       | legobeet wrote:
       | The next step is to actually use underprivileged canary tokens on
       | the client for your day-to-day work, intercept them with a proxy,
       | and replace them with the real deal in a more isolated setting.
       | 
       | For example, an application-specific HTTP proxy for your
       | GITHUB_TOKEN. You can use a canary token for the internal user-
       | facing auth. https://github.com/legobeat/git-auth-proxy [0].
       | 
        | That piece is being used here[1] in order to make it transparent
        | for the user, and I intend to add more features there for
        | credential and secret compartmentalization. I've been keeping it
        | fairly structured so you could also use it as a reference if you
        | ever do similar stuff and want some inspiration or copypasta for
        | your personal hacking.
       | 
        | [0]: Caveat: The proxy repo is a fork and the documentation is
        | still more reflective of the previous owner's intentions. I
        | ripped out all the Azure/k8s integrations.
       | 
       | [1]: https://github.com/legobeat/l7-devenv/
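        | 
        | For illustration, a minimal Python sketch of the header-
        | swapping idea (this is not the actual git-auth-proxy code,
        | which does considerably more; the token format, env var and
        | upstream are just assumptions):
        | 
        |     # Sketch: clients hold a placeholder/canary token, the
        |     # proxy swaps in the real one and alerts on anything
        |     # unexpected. GET-only and deliberately oversimplified.
        |     import os
        |     from http.server import (BaseHTTPRequestHandler,
        |                              HTTPServer)
        |     from urllib.request import Request, urlopen
        | 
        |     PLACEHOLDER = "token canary-" + "x" * 20
        |     REAL = "token " + os.environ["REAL_GITHUB_TOKEN"]
        |     UPSTREAM = "https://api.github.com"
        | 
        |     class Swap(BaseHTTPRequestHandler):
        |         def do_GET(self):
        |             auth = self.headers.get("Authorization", "")
        |             if auth != PLACEHOLDER:
        |                 # Unknown token: alert and refuse.
        |                 print("ALERT: unexpected token from",
        |                       self.client_address[0])
        |                 self.send_response(403)
        |                 self.end_headers()
        |                 return
        |             req = Request(UPSTREAM + self.path,
        |                           headers={"Authorization": REAL})
        |             with urlopen(req) as resp:
        |                 body = resp.read()
        |                 status = resp.status
        |             self.send_response(status)
        |             self.end_headers()
        |             self.wfile.write(body)
        | 
        |     if __name__ == "__main__":
        |         server = HTTPServer(("127.0.0.1", 8888), Swap)
        |         server.serve_forever()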
        
         | ghshephard wrote:
         | Don't the "underprivileged canary tokens" then become
         | privileged by virtue of being proxied into more privileged
         | tokens?
        
           | legobeet wrote:
           | Sure, but they are only usable as such if attackers also
           | maintain access to the proxy and stay under the radar.
           | Additionally, the proxy has logging and hooks for monitoring
           | so you can audit and filter usage there.
           | 
           | As opposed to something which can be smuggled out and reused
           | offsite.
           | 
            | I'm also thinking that by centralizing (still locally) the
            | configuration, we can get better key-rotation hygiene habits
            | without needing to compromise on credential granularity.
           | 
           | Just like there are security benefits in using a secured HSM
           | instead of a world-readable private-key file stored in your
           | unencrypted home directory, even if, yes, the HSM can be
           | abused by a locally privileged attacker.
           | 
           | (I'm definitely not saying I have a silver bullet though, and
           | I don't think one exists. Like any realistic solution, it
           | should be part of a defense-in-depth strategy. Things like
           | hardware keys make for incremental gains, etc)
        
             | ghshephard wrote:
              | Hashicorp Vault (and presumably other enterprise credential
              | management tools, like CyberArk's) has a similar (but not
              | identical - no proxy) take. The credentials that you use
              | are short-lived and provided by Vault, which has a
              | privileged connection to the back-end database, AWS,
              | certificate server, etc.
             | 
              | You can lock down access to Vault with whatever degree of
              | 2FA/IdP you wish. So your workflow is: authenticate to
              | Vault, which uses your identity (and possibly group
              | membership from an IdP like Okta) to identify the
              | groups/policies you have, which _in turn_ grant you the
              | authority to request short-lived (typically < 24 hours)
              | tokens that are generated in real time (and likewise
              | terminated when they age out).
             | 
              | The added benefit here is that if your service token is
              | exposed, (a) the window of vulnerability is very
              | short-lived, and (b) it's isolated to a single service.
             | 
              | I haven't worked with Boundary - but it sounds like your
              | solution compares more closely to Hashicorp Boundary,
              | right?
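              | 
              | The request side of that flow is tiny, e.g. in Python
              | (the "aws" mount path and role name are assumptions
              | about how Vault is configured):
              | 
              |     # Sketch: fetch short-lived AWS creds from the
              |     # Vault AWS secrets engine. Mount path and role
              |     # name depend on your Vault setup.
              |     import os
              |     import requests
              | 
              |     VAULT = os.environ.get("VAULT_ADDR",
              |                            "http://127.0.0.1:8200")
              |     TOKEN = os.environ["VAULT_TOKEN"]
              | 
              |     resp = requests.get(
              |         VAULT + "/v1/aws/creds/deploy-role",
              |         headers={"X-Vault-Token": TOKEN},
              |         timeout=10,
              |     )
              |     resp.raise_for_status()
              |     data = resp.json()["data"]
              |     # Vault revokes these when the lease expires.
              |     print(data["access_key"], data["secret_key"])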
        
               | legobeet wrote:
               | What I am proposing is something you'd run on your
               | independent workstation to interface with existing
               | heterogeneous services and peers.
               | 
               | While you certainly can run Vault and Boundary
               | independently, they are more designed to be deployed
               | across an organization. Setting them up is anything but
               | seamless - by design. Again, I think they can be
               | complementary. Adding a Vault component to l7-devenv is a
                | thought that came up before, but I'll probably wait for
                | popular demand before making anything public there. If
               | you already have a setup it should not be too tricky to
               | integrate, I think.
               | 
                | If you squint closer, I think you can start seeing even
                | more parallels to HC solutions, but that is more because
                | none of these patterns are really fundamentally new;
                | they're the building blocks we've all been using for
                | decades. It's just new clothes and ways to make things
                | play together nicely (xkcd 927). And hopefully we can
                | bring strategies like mTLS to new audiences and bring
                | down barriers to the adoption of secure practices in
                | general.
               | 
               | > no proxy
               | 
               | Look again ;) (Envoy)
        
       | aflukasz wrote:
        | By the way, a simple honeypot on Linux using auditd: just set a
        | rule like `-w /etc/secret-file -p rxwa -k some.tag` and use your
        | mechanism of choice to watch the logs/journal for occurrences of
        | the `some.tag` string.
       | 
       | `-p rxwa` causes logging of any read, exec, write or attributes
       | change on that file. More in `man auditctl`.
       | 
        | Among other things, this has the benefit that, in principle,
        | such a honeypot triggers immediately, not only after someone
        | decides to try using some actual credentials/data.
       | 
       | Obviously needs some work to make this robust (logs monitoring
       | plus alerting), but it's a nice building block worth knowing and,
       | if you care, then you probably already have those additional
       | pieces in place anyway.
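        | 
        | For completeness, a minimal watcher sketch in Python (the log
        | path and print-based "alerting" are placeholders; in practice
        | this would feed your SIEM/alerting pipeline):
        | 
        |     # Sketch: tail the audit log and alert when the audit
        |     # rule's key shows up. Requires read access to the log.
        |     import time
        | 
        |     AUDIT_LOG = "/var/log/audit/audit.log"
        |     KEY = 'key="some.tag"'
        | 
        |     def follow(path):
        |         with open(path) as log:
        |             log.seek(0, 2)      # start at end of file
        |             while True:
        |                 line = log.readline()
        |                 if not line:
        |                     time.sleep(0.5)
        |                     continue
        |                 yield line
        | 
        |     for line in follow(AUDIT_LOG):
        |         if KEY in line:
        |             print("ALERT: honeypot touched:", line.strip())
        | 
        | (`ausearch -k some.tag` gets you the same data after the fact.)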
        
       | declan_roberts wrote:
       | I don't understand how they can keep such a feature-rich service
       | free forever?
        
         | compootr wrote:
         | It's like Costco's rotisserie chickens, a loss leader.
         | 
          | They get your foot in the door, and (particularly techies')
          | eyeballs looking at ads for their hardware. Looking at their
          | site[0], the minimum you can buy is 2, at a price of $5k total.
         | 
         | [0]: https://canary.tools/
        
       | shortsunblack wrote:
       | I wonder whether eBPF allows for increased deception
       | capabilities.
        
       ___________________________________________________________________
       (page generated 2024-07-30 23:00 UTC)