[HN Gopher] U.S. Internet leaked years of internal, customer emails
       ___________________________________________________________________
        
       U.S. Internet leaked years of internal, customer emails
        
       Author : todsacerdoti
       Score  : 70 points
       Date   : 2024-02-14 16:56 UTC (6 hours ago)
        
 (HTM) web link (krebsonsecurity.com)
 (TXT) w3m dump (krebsonsecurity.com)
        
       | gfs wrote:
       | > "The feedback from my team was a issue with the Ansible
       | playbook that controls the Nginx configuration for our IMAP
       | servers," Carter said, noting that this incorrect configuration
       | was put in place by a former employee and never caught.
       | 
       | Even if this were true, that is a pathetic response.
        
         | toomuchtodo wrote:
         | This would've been found in even the most cursory of
         | penetration tests performed by a competent practitioner. I am
         | curious if any have been done.
        
           | dylan604 wrote:
           | It also needs to be part of any regression testing against
           | new releases. Doing it once against current code does nothing
            | other than say "right now we're okay". I know; I've
            | personally been burned by assuming that what was tested
            | previously is still good _now_.
        
           | blincoln wrote:
           | That's a very valid concern, but the larger one for me is
           | that it implies that their IMAP servers are sitting right on
           | the internet (no firewall/load-balancer/reverse
           | proxy/whatever), or that they've automated their
           | infrastructure so much that network-level security controls
           | are essentially bypassed because any services in the Ansible
           | definition are assumed to be authorized/intentional, or that
           | someone intentionally added this one as a ham-fisted backdoor
           | into customer email.
        
       | dboreham wrote:
       | I've consulted at a couple of places with similar security risks
        | to this, where I've suggested that we deploy automated testing
        | (perhaps better called monitoring, since it targets production
        | services) to verify that their auth checks are working -- e.g.
        | that an unauthenticated client cannot access a supposedly secure
       | resource, and that authentication as user A does not allow access
       | to data belonging to user B.
       | 
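        | A minimal sketch of the kind of check I mean, with hypothetical
        | URLs and a placeholder token (Python + requests):
        | 
        |     import requests
        | 
        |     BASE = "https://mail.example.com"  # hypothetical service
        | 
        |     # An unauthenticated client must be turned away.
        |     r = requests.get(f"{BASE}/inbox", timeout=10,
        |                      allow_redirects=False)
        |     assert r.status_code in (401, 403), "unauthenticated access!"
        | 
        |     # Authenticating as user A must not expose user B's data.
        |     r = requests.get(f"{BASE}/users/b/inbox", timeout=10,
        |                      headers={"Authorization":
        |                               "Bearer <token-for-user-a>"},
        |                      allow_redirects=False)
        |     assert r.status_code in (401, 403, 404), "cross-user access!"
        | 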
       | Never had any takers.
        
         | jackpirate wrote:
         | I don't see how that would have helped in this case. This was
          | not a resource at a known location that was supposed to be
          | available only to logged-in users. This was a resource the
          | admins didn't know about, available at an unknown URL that was
          | exposed to the public internet due to a configuration error.
          | Are you going to write a test case for every possible URL on
          | your server to make sure it's not being exposed?
         | 
          | Something that could work is including a random hash as a
          | hidden first email inside every client, and then regularly
         | searching outbound traffic for that hash. But that would be
         | rather expensive.
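          | 
          | Seeding that canary is only a few lines; a sketch with a
          | hypothetical IMAP host and credentials (Python stdlib):
          | 
          |     import imaplib, secrets, time
          | 
          |     canary = secrets.token_hex(32)  # hash to hunt for later
          |     msg = ("From: canary@example.com\r\n"
          |            "Subject: do not delete\r\n\r\n"
          |            f"{canary}\r\n").encode()
          | 
          |     imap = imaplib.IMAP4_SSL("imap.example.com")  # hypothetical
          |     imap.login("testuser", "password")            # hypothetical
          |     imap.append("INBOX", r"(\Seen)",
          |                 imaplib.Time2Internaldate(time.time()), msg)
          |     imap.logout()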
        
           | toomuchtodo wrote:
            | n=1: head of security at a fintech. We perform automated
            | scans of external-facing sensitive routes and pages after
            | deploys, checking for PII, PAN, and SPI indicators, kicked
            | off by GitHub Actions. We also use a WAF with two-person
            | config change reviews (change management), which helps keep
            | routes or parts of web properties from being made public
            | unexpectedly by continuous integration and deployment
            | practices (balancing dev velocity with security/compliance
            | concerns).
           | 
           | Not within the resources of all orgs of course, but there is
           | a lot of low hanging fruit through code alone that improves
           | outcomes. Effective web security, data security, and data
           | privacy are not trivial.
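            | 
            | A stripped-down sketch of that kind of post-deploy scan
            | (hypothetical routes; production regexes would be more
            | careful):
            | 
            |     import re, requests
            | 
            |     ROUTES = ["https://app.example.com/export"]  # hypothetical
            |     INDICATORS = {
            |         "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
            |         "pan": re.compile(r"\b(?:\d[ -]?){13,16}\b"),  # crude
            |         "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
            |     }
            | 
            |     for url in ROUTES:
            |         body = requests.get(url, timeout=10).text
            |         for name, rx in INDICATORS.items():
            |             if rx.search(body):
            |                 print(f"ALERT: possible {name} at {url}")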
        
           | delano wrote:
           | You don't need to check every one though. Or any. You create
           | a known account with known content in it (similar to your
           | hash idea) and monitor that.
           | 
           | Even if they never got around to automating it and were
            | highly laissez-faire, manually checking that account with
            | those test cases, say, once a month would have caught this
           | within 30 days. That still sucks but it's at least an order
           | of magnitude less suck than the situation they're in now.
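            | 
            | The check itself could be as dumb as probing your own hosts
            | for the known content (hypothetical canary and hosts):
            | 
            |     import requests
            | 
            |     CANARY = "known-content-from-the-seeded-account"
            |     HOSTS = ["mail1.example.com"]  # hypothetical
            | 
            |     for host in HOSTS:
            |         for port in (80, 81, 443, 8080):
            |             scheme = "https" if port == 443 else "http"
            |             url = f"{scheme}://{host}:{port}/"
            |             try:
            |                 body = requests.get(url, timeout=5).text
            |             except requests.RequestException:
            |                 continue  # closed/filtered port is fine
            |             if CANARY in body:
            |                 print(f"LEAK: canary visible at {url}")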
        
           | blincoln wrote:
           | If the screenshot in the article isn't edited, this was an
           | HTTP service exposed to the internet on an unusual port (81).
           | I'd propose the following test cases:
           | 
           | 1) Are there any unexpected internet-facing services?
           | 
            | * Once per week (or per month, if there are thousands of
            | internet-facing resources) use masscan or similar to quickly
            | check for any open TCP ports on all internet-facing IPs/DNS
            | names currently in use by the company.
            | 
            | * Check the list of open ports against a very short global
            | allowlist of port numbers. In 2024, that list is probably
            | just 80 and 443.
            | 
            | * Check each host/port combination against a per-host
            | allowlist of more specific ports, e.g. the mail servers
            | might allow 25, 465, 587, and 993.
            | 
            | * If a host/port combination doesn't match either allowlist,
            | alert a human (a sketch of this comparison follows below).
           | 
           | Edit: one could probably also implement this as a check when
           | infrastructure is deployed, e.g. "if this container image/pod
           | definition/whatever is internet-facing, check the list of
           | forwarded ports against the allowlists". I've been out of the
           | infrastructure world for too long to give a solid
           | recommendation there, though.
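            | 
            | The allowlist comparison itself is tiny; a sketch against
            | masscan's list output (hypothetical IPs and allowlists):
            | 
            |     # masscan -oL lines: "open tcp 81 203.0.113.5 <time>"
            |     GLOBAL_ALLOW = {80, 443}
            |     PER_HOST_ALLOW = {"203.0.113.10": {25, 465, 587, 993}}
            | 
            |     with open("masscan.txt") as f:
            |         for line in f:
            |             if not line.startswith("open tcp"):
            |                 continue
            |             _, _, port, ip = line.split()[:4]
            |             port = int(port)
            |             if port in GLOBAL_ALLOW or \
            |                port in PER_HOST_ALLOW.get(ip, set()):
            |                 continue
            |             print(f"ALERT: unexpected port {port} on {ip}")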
           | 
           | 2) Every time an internet-facing resource is created or
           | updated (e.g. a NAT or load-balancer entry from public IP to
           | private IP is changed, a Route 53 entry is added or altered,
            | etc.), automatically run a vulnerability scan using a tool
            | that supports customizing the checks. Make sure
           | the list of checks is curated to pre-filter any noise ("you
           | have a robots.txt file!"). Alert a human if any of the checks
           | come up positive.
           | 
           | OpenVAS, etc. should easily flag "directory listing enabled",
           | which is almost never something you'd find intentionally set
           | up on a server unless your organization is a super old-school
           | Unix/Linux software developer/vendor.
           | 
           | Any decent commercial tool (and probably OpenVAS as well)
           | should also have easily flagged content that disclosed email
           | addresses, in this case.
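            | 
            | If a full scanner is overkill, a hand-rolled version of
            | those two checks fits in a few lines (hypothetical target):
            | 
            |     import re, requests
            | 
            |     def check(url):
            |         body = requests.get(url, timeout=10).text
            |         if "Index of /" in body:  # autoindex page marker
            |             print(f"ALERT: directory listing at {url}")
            |         if re.search(r"[\w.+-]+@[\w-]+\.[\w.]+", body):
            |             print(f"ALERT: email addresses at {url}")
            | 
            |     check("http://203.0.113.5:81/")  # hypothetical host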
           | 
           | 3) Pay for a Shodan account. Set up a recurring job to check
           | every week/month/whatever for your organization name, any
           | public netblocks, etc. Generate a report of anything that was
           | found during the current check that wasn't found during the
           | previous check, and have a human review it. This one would
           | take some more work, because there would need to be a
           | mechanism for the human(s) to add filtering rules to weed out
           | the inevitable false positives.
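            | 
            | The Shodan side could start as small as this (Python shodan
            | library; placeholder key, hypothetical query):
            | 
            |     import shodan
            | 
            |     api = shodan.Shodan("YOUR_API_KEY")
            |     seen = set(open("seen.txt").read().split())  # last run
            | 
            |     results = api.search('net:203.0.113.0/24')  # hypothetical
            |     for m in results["matches"]:
            |         key = f'{m["ip_str"]}:{m["port"]}'
            |         if key not in seen:
            |             print(f"NEW: {key}")  # for human review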
        
           | miki123211 wrote:
           | Or a canary token[1] (which won't help you find out that the
           | vulnerability exists, but will hopefully alert you when it
           | actually gets exploited).
           | 
           | [1] https://canarytokens.org
        
         | miki123211 wrote:
         | If you primarily cater to government clients, like this company
         | seems to be doing, you presumably don't care.
         | 
         | All you really care about is meeting whatever criteria the
          | tender offer requires; any further work is wasted effort.
         | 
         | Incidentally, this is also why most government projects really
         | suck on the UI front. There's no way to specify "have a good
         | user experience" as an objective tender offer criterion, so
         | this is not done. In tender proceedings, the lowest bidder
         | meeting all the criteria always wins, so companies that care
         | about actually doing good work quickly get outcompeted by
         | companies that do the absolute minimum required.
        
       | 1024core wrote:
       | From the screenshot alone, a Google search for
       | "link:usinternet.com link:utue.org link:uthelaw.com" (for
       | example) should get you the website. But since there's no match,
       | I'm assuming Google's crawlers never found the page.
        
       | karaterobot wrote:
       | This was a confusing headline until I learned that U.S. Internet
       | Corp is a regional ISP. I wonder if it wouldn't be worth editing
       | the headline to make it clearer.
        
         | dylan604 wrote:
         | There was also a recent headline saying the US Military leaked
         | data for 20k people. So I thought this was a bad edit to that.
         | I love finding companies with truly poor names like this
          | though. It helps weed them out.
        
       | wolverine876 wrote:
        | Many businesses limit retention by default, deleting everything
        | older than (e.g.) 30 days unless the user saves it.
        | 
        | They do it to limit liability, but perhaps that should be the law
        | for every business: as part of your responsibility as a custodian
        | of other people's information, you need to minimize retention.
        | Remove PII and other high-risk information ASAP, extract the data
        | needed for the long term (rather than retaining the entire
        | original record), and delete the entire record when it's no
        | longer used (easily determined by how often it's accessed).
       | 
       | In the networked, electronic era, data becomes much more powerful
       | and control of it becomes much more elusive. That increases our
       | responsibility.
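        | 
        | Mechanically, the purge is cheap; a sketch of a scheduled
        | 30-day IMAP sweep (hypothetical server and credentials, Python
        | stdlib):
        | 
        |     import imaplib
        |     from datetime import date, timedelta
        | 
        |     cutoff = (date.today()
        |               - timedelta(days=30)).strftime("%d-%b-%Y")
        | 
        |     imap = imaplib.IMAP4_SSL("imap.example.com")  # hypothetical
        |     imap.login("user", "password")                # hypothetical
        |     imap.select("INBOX")
        |     # Delete anything past the cutoff the user hasn't flagged.
        |     _, data = imap.search(None, f"(BEFORE {cutoff} UNFLAGGED)")
        |     for num in data[0].split():
        |         imap.store(num, "+FLAGS", r"(\Deleted)")
        |     imap.expunge()
        |     imap.logout()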
        
         | lazide wrote:
         | But then how will we get juicy emails in the press to drag
         | people through the mud, years after someone notices a problem?
         | 
            | Think of the clicks, man!
        
           | dylan604 wrote:
              | Preserving email is different from anything being discussed
           | here though, and it actually can be a business requirement.
        
             | lazide wrote:
             | It's literally the topic we're discussing?
             | 
             | Something deleted years ago can't be accidentally leaked or
             | used against you unless someone thought to do so within the
                | 30-day window. That's literally why the '30-day purge'
                | exists: to 'limit liability'.
        
               | dylan604 wrote:
                | But my point is that some business requirements say you
                | _CAN'T_ delete email because of some regulation.
                | 
                | That's specifically aimed at catching the stupid
                | criminals who say things like "please shred all of the
                | incriminating documents before we have to turn them over
                | in discovery", or the emails in a suspect thread that
                | break off with "please call me" so the rest is
                | specifically not written down.
                | 
                | We're sort of talking past each other.
        
               | lazide wrote:
               | You're talking about regulations and stuff imposed on
               | someone.
               | 
               | I'm talking about telling companies to do silly things so
               | we all get more lulz.
               | 
               | Yes, we are talking past each other.
        
       | lanthade wrote:
        | As a USI fiber customer (which has been a great service), the
        | most concerning thing to me about the CEO response is that he
        | didn't know who he was talking to and then asked about hiring
        | them. Considering that Krebs On Security is no small player or
        | recent arrival to the scene, this is a huge blunder. I know you
        | can't expect a CEO to know all things in depth, but there should
        | be people right under him who could have brought him up to speed
        | quickly. This is a technology company after all, not some simple
        | widget maker.
        
         | germinalphrase wrote:
         | I've been a huge advocate for USI, but I agree.
        
         | ls612 wrote:
          | Yeah, I have USI gigabit and it just works (tm). I'll be sad
          | when I get my PhD and leave the area.
        
       | krebsonsecurity wrote:
       | Possibly useful info: A list of customer domains affected.
       | 
       | https://docs.google.com/spreadsheets/d/1wgKe1VrfNF8Afav1aJtM...
       | 
       | One caveat: This list should not be considered exhaustive or
        | complete by any means; e.g., incrementing or decrementing a
        | number in the URL caused a slightly different set of customers
        | to be listed. I didn't have a
       | chance to go through it all before they took it down (note to
       | self: pillage BEFORE burning).
        
       | 1970-01-01 wrote:
       | So their Privacy Shield statement strongly hints to me that
       | they're in much deeper trouble than they know. They probably need
       | new lawyers.
       | 
       | https://usinternet.com/privacy-policy/
       | 
       | https://en.wikipedia.org/wiki/EU-US_Privacy_Shield#Swiss-US_...
        
       | greatgib wrote:
        | Something not very clear from the title is that U.S. Internet
        | Corp has a subsidiary specialized in being a cloud gateway for
        | "securing email," not just for U.S. Internet Corp itself but
        | also for all kinds of external customers.
        | 
        | And this is the entity, the gateway, that leaked all the emails.
        | 
        | In my opinion it is an epic fail when the service you pay to
        | secure your emails is the one that leaks them...
        
       ___________________________________________________________________
       (page generated 2024-02-14 23:01 UTC)