[HN Gopher] Google's shortened goo.gl links will stop working next month
       ___________________________________________________________________
        
       Google's shortened goo.gl links will stop working next month
        
       Author : mobilio
       Score  : 192 points
       Date   : 2025-07-25 14:25 UTC (8 hours ago)
        
 (HTM) web link (www.theverge.com)
 (TXT) w3m dump (www.theverge.com)
        
       | edent wrote:
       | About 60k academic citations about to die -
       | https://scholar.google.com/scholar?start=90&q=%22https://goo...
       | 
       | Countless books with irrevocably broken references -
       | https://www.google.com/search?q=%22://goo.gl%22&sca_upv=1&sc...
       | 
       | And for what? The cost of keeping a few TB online and a little
       | bit of CPU power?
       | 
       | An absolute act of cultural vandalism.
        
         | djfivyvusn wrote:
         | The vandalism was relying on Google.
        
           | toomuchtodo wrote:
           | You'd think people would learn. Ah, well. Hopefully we can do
           | better from lessons learned.
        
           | api wrote:
           | The web is a crap architecture for permanent references
           | anyway. A link points to a server, not e.g. a content hash.
           | 
           | The simplicity of the web is one of its virtues but also
           | leaves a lot on the table.
        
         | toomuchtodo wrote:
         | https://wiki.archiveteam.org/index.php/Goo.gl
         | 
         | https://tracker.archiveteam.org/goo-gl/ (1.66B work items
         | remaining as of this comment)
         | 
         | How to run an ArchiveTeam warrior:
         | https://wiki.archiveteam.org/index.php/ArchiveTeam_Warrior
         | 
          | (edit: I see jaydenmilne commented about this further down
          | the thread, mea culpa)
        
           | pentagrama wrote:
           | Thank you for that information!
           | 
           | I wanted to help and did that using VMware.
           | 
            | For the curious, here is what the UI looks like: you get a
            | list of projects to choose from (I chose the goo.gl
            | project) and a "Current project" tab that shows the
            | project's activity.
           | 
           | Project list: https://imgur.com/a/peTVzyw
           | 
           | Current project: https://imgur.com/a/QVuWWIj
        
           | progbits wrote:
            | They appear to be doing ~37k items per minute; with 1.6B
            | remaining, that is roughly 30 days left. So that's just
            | barely enough to do it in time.
           | 
           | Going to run the warrior over the weekend to help out a bit.
        
         | epolanski wrote:
          | Jm2c, but if your resource is a link to an online resource,
          | that's borderline already (at any point the content can
          | change or disappear).
         | 
         | Even worse if your resource is a shortened link by some other
         | service, you've just added yet another layer of unreliable
         | indirection.
        
           | whatevaa wrote:
           | Citations are citations, if it's a link, you link to it. But
           | using shorteners for that is silly.
        
             | ceejayoz wrote:
             | It's not silly if the link is a couple hundred characters
             | long.
        
               | epolanski wrote:
                | Fix that at the presentation layer (PDFs, Word files,
                | etc. support links), not the data one.
        
               | ceejayoz wrote:
               | Let me know when you figure out how to make a printed
               | scientific journal clickable.
        
               | diatone wrote:
                | Take a photo on your phone, the OS recognises the link
                | in the image and makes it clickable, done. Or use a QR
                | code instead.
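                | 
                | (A minimal sketch with the third-party qrcode package,
                | assuming it is installed; the URL is made up:)
                | 
                |   import qrcode  # pip install "qrcode[pil]"
                | 
                |   # render the full citation URL as a scannable image
                |   img = qrcode.make("https://example.org/cited/page")
                |   img.save("cite.png")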
        
               | ceejayoz wrote:
               | https://news.ycombinator.com/item?id=9224
        
               | jeeyoungk wrote:
                | This is the answer; it turns out that plain,
                | untransformed links are the most generic data format,
                | with no "compression" (QR codes or a third-party
                | intermediary) needed.
        
               | epolanski wrote:
                | Scientific journals should not rely on ephemeral data
                | on the internet. It doesn't even matter how long the
                | URL is.
               | 
                | Just buy any scientific book and try to navigate to
                | the errata it links to. The link is always dead.
        
               | IanCal wrote:
               | Adding an external service so you don't have to store a
               | few hundred bytes is wild, particularly within a pdf.
        
               | ceejayoz wrote:
               | It's not the bytes.
               | 
               | It's the fact that it's likely gonna be printed in a
               | paper journal, where you can't click the link.
        
               | SR2Z wrote:
               | I find it amusing that you are complaining about not
               | having a computer to click a link while glossing over the
               | fact that you need a computer to use a link at all.
               | 
               | This use case of "I have a paper journal and no PDF but a
               | computer with a web browser" seems extraordinarily
               | contrived. I have literally held a single-digit number of
               | printed papers in my entire life while looking at
               | thousands as PDFs. If we cared, we'd use a QR code.
               | 
               | This kind of luddite behavior sometimes makes using this
               | site exhausting.
        
               | andrepd wrote:
                | I feel like all that is beside the point. People used
                | goo.gl because they largely are not tech specialists
                | and aren't really aware of link rot, or of a Google
                | decision rendering those links inaccessible.
        
               | SR2Z wrote:
                | > People used goo.gl because they largely are not
                | tech specialists and aren't really aware of link rot,
                | or of a Google decision rendering those links
                | inaccessible.
               | 
               | Anyone who is savvy enough to put a link in a document is
               | well-aware of the fact that links don't work forever,
               | because anyone who has ever clicked a link from a
               | document has encountered a dead link. It's not 2005
               | anymore, the internet has accumulated plenty of dead
               | links.
        
               | ceejayoz wrote:
               | > I have literally held a single-digit number of printed
               | papers in my entire life while looking at thousands as
               | PDFs.
               | 
               | This is by no means a universal experience.
               | 
               | People still get printed journals. Libraries still stock
               | them. Some folks print out reference materials _from_ a
               | PDF to take to class or a meeting or whatnot.
        
               | SR2Z wrote:
               | And how many of those people then proceed to type those
               | links into their web browsers, shortened or not?
               | 
               | Sure, contributing to link rot is bad, but in the same
               | way that throwing out spoiled food is bad. Sometimes
               | you've just gotta break a bunch of links.
        
               | ceejayoz wrote:
               | > And how many of those people then proceed to type those
               | links into their web browsers, shortened or not?
               | 
               | That probably depends on the link's purpose.
               | 
               | "The full dataset and source code to reproduce this
               | research can be downloaded at <url>" might be deeply
               | interesting to someone in a few years.
        
               | epolanski wrote:
                | So they have a computer and can click.
               | 
               | In any case a paper should not rely on an ephemeral
               | resource like internet links.
               | 
                | Have you ever tried to navigate to the errata of a
                | computer science book? It's one single book, with one
                | single link, and it's dead anyway.
        
               | JumpCrisscross wrote:
               | I'm unconvinced the researchers acted irresponsibly. If
               | anything, a Google-shortened link looks--at first glance
               | --more reliable than a PDF hosted god knows where.
               | 
                | There are always dependencies in citations. Unless a
                | paper comes with its citations embedded, splitting
                | hairs over why one untrustworthy provider is more
                | untrustworthy than another is silly.
        
               | jtuple wrote:
               | Perhaps times have changed, but when I was in grad school
               | circa 2010 smartphones and tablets weren't yet ubiquitous
               | but laptops were. It was super common to sit in a
               | cafe/library with a laptop and a stack of printed papers
                | to comb through.
               | 
                | Reading paper was more comfortable than reading on the
                | screen, and it was easy to annotate, highlight,
                | scribble notes in the margin, doodle diagrams, etc.
               | 
               | Do grad students today just use tablets with a stylus
               | instead (iPad + pencil, Remarkable Pro, etc)?
               | 
               | Granted, post grad school I don't print much anymore, but
               | that's mostly due to a change in use case. At work I
               | generally read at most 1-5 papers a day tops, which is
               | small enough to just do on a computer screen (and have
                | less need to annotate, etc). Quite different than the
                | 50-100 papers/week + deep analysis expected in
                | academia.
        
               | reaperducer wrote:
               | _This kind of luddite behavior sometimes makes using this
               | site exhausting._
               | 
               | We have many paper documents from over 1,000 years ago.
               | 
               | The vast majority of what was on the internet 25 years
               | ago is gone forever.
        
               | epolanski wrote:
               | 25?
               | 
                | Try going back 6-7 years on this very website; half
                | the links are dead.
        
               | leumon wrote:
               | which makes url shorteners even more attractive for
               | printed media, because you don't have to type many
               | characters manually
        
         | bugsMarathon88 wrote:
         | > The cost of keeping a few TB online and a little bit of CPU
         | power?
         | 
         | Oh, my sweet summer child. Try tens of thousands of QPS from
         | bots trying to brute-force URLs to sensitive materials - and
         | some likely succeeding!
        
           | edent wrote:
           | Gosh! It is a pity Google doesn't hire any smart people who
           | know how to build a throttling system.
           | 
           | Still, they're a tiny and cash-starved company so we can't
           | expect too much of them.
        
             | lyu07282 wrote:
              | It's almost as if, once a company becomes this big,
              | burning it to the ground would be better for society or
              | something. That would be the liberal position on
              | monopolies, if they actually believed in anything.
        
             | bugsMarathon88 wrote:
             | It is a business, not a charity. Adjust your expectations
             | accordingly, or expect disappointment.
        
             | acheron wrote:
             | Must not be any questions about that in Leetcode.
        
           | quesera wrote:
           | Modern webservers are very, very fast on modern CPUs. I hear
           | Google has some CPU infrastructure?
           | 
           | I don't know if GCP has a free tier like AWS does, but 10kQPS
           | is likely within the capability of a free EC2 instance
           | running nginx with a static redirect map. Maybe splurge for
           | the one with a full GB of RAM? No problem.
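            | 
            | A sketch of such a map (made-up codes and targets; map
            | and server sit at nginx's http level):
            | 
            |   map $uri $target {
            |       /abc123  https://example.org/some/long/path;
            |       /xyz789  https://example.net/another/doc.pdf;
            |   }
            |   server {
            |       listen 80;
            |       location / {
            |           # non-empty $target means the code is known
            |           if ($target) { return 301 $target; }
            |           return 404;
            |       }
            |   }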
        
             | bbarnett wrote:
              | You could deprecate the service, and archive the links
              | as static HTML: 200 bytes of text for an HTML redirect
              | (not JS).
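              | 
              | Roughly, as a sketch (target made up):
              | 
              |   <!doctype html>
              |   <meta charset="utf-8">
              |   <meta http-equiv="refresh"
              |         content="0; url=https://example.org/long/path">
              |   <a href="https://example.org/long/path">Moved</a>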
             | 
              | You can serve immense volumes of traffic from static
              | HTML. One hardware server alone could easily do the job.
             | 
              | Your attack surface is also tiny without a back-end
              | interpreter.
             | 
              | People will chime in with redundancy, but the point is
              | Google could stop maintaining the ingress, and still not
              | be douches about existing URLs.
             | 
             | But... you know, it's Google.
        
               | quesera wrote:
               | Exactly. I've seen goo.gl URLs in printed books.
               | Obviously in old blog posts too. And in government
               | websites. Nonprofit communications. Everywhere.
               | 
               |  _Why break this??_
               | 
               | Sure, deprecate the service. Add no new entries. This is
               | a good idea anyway, link shorteners are bad for the
               | internet.
               | 
               | But breaking all the existing goo.gl URLs seems bizarrely
               | hostile, and completely unnecessary. It would take so
               | little to keep them up.
               | 
               | You don't even need HTML files. The full set of static
               | redirects can be configured into the webserver. No
               | deployment hassles. The filesystem can be RO to further
               | reduce attack surface.
               | 
               | Google is acting like they are a one-person startup here.
               | 
               | Since they are not a one-person startup, I do wonder if
               | we're missing the real issue. Like legal exposure, or
               | implication in some kind of activity that they don't want
               | to be a part of, and it's safer/simpler to just delete
               | everything instead of trying to detect and remove all of
               | the exposure-creating entries.
               | 
                | Or maybe that's what they're telling themselves, even
                | if it's not real.
        
           | nomel wrote:
           | Those numbers make it seem fairly trivial. You have a dozen
           | bytes referencing a few hundred bytes, for a service that is
           | _not_ latency sensitive.
           | 
            | This sounds like a good project for an intern, with server
            | costs that might even exceed a hundred dollars per month!
        
         | zffr wrote:
         | For people wanting to include URL references in things like
         | books, what's the right approach to take today?
         | 
          | I'm genuinely asking. It seems like it's hard to trust that
          | any service will remain running for decades.
        
           | toomuchtodo wrote:
           | https://perma.cc/
           | 
            | It is built for the task, and assuming the worst-case
            | scenario of sunset, it would be ingested into the Wayback
            | Machine. Note
           | that both the Internet Archive and Cloudflare are supporting
           | partners (bottom of page).
           | 
           | (https://doi.org/ is also an option, but not as accessible to
           | a casual user; the DOI Foundation pointed me to
            | https://www.crossref.org/ for ad hoc DOI registration,
           | although I have not had time to research further)
        
             | Hyperlisk wrote:
             | perma.cc is great. Also check out their tools if you want
             | to get your hands dirty with your own archival process:
             | https://tools.perma.cc/
        
             | ruined wrote:
             | perma.cc is an interesting project, thanks for sharing.
             | 
             | other readers may be specifically interested in their
             | contingency plan
             | 
             | https://perma.cc/contingency-plan
        
             | whoahwio wrote:
          | While Perma is a solution built specifically for this
          | problem, and a good one at that, citing the might of the
          | backing company is a bit ironic here.
        
               | toomuchtodo wrote:
               | If Cloudflare provides the infra (thanks Cloudflare!), I
               | am happy to have them provide the compute and network for
               | the lookups (which, at their scale, is probably a
               | rounding error), with the Internet Archive remaining the
               | storage system of last resort. Is that different than the
               | Internet Archive offering compute to provide the lookups
               | on top of their storage system? Everything is temporary,
               | intent is important, etc. Can always revisit the stack as
               | long as the data exists on disk somewhere accessible.
               | 
               | This is distinct from Google saying "bye y'all, no more
               | GETs for you" with no other way to access the data.
        
               | whoahwio wrote:
               | This is much better positioned for longevity than
               | google's URL shortener, I'm not trying to make that
               | argument. My point is that 10-15 years ago, when Google's
               | URL shortener was being adopted for all these
               | (inappropriate) uses, its use was supported by a public
               | opinion of Google's 'inevitability'. For Perma, CF serves
               | a similar function.
        
               | toomuchtodo wrote:
               | Point taken.
        
           | danelski wrote:
           | Real URL and save the website in the Internet Archive as it
           | was on the date of access?
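            | 
            | For instance, a sketch using the Wayback Machine's public
            | "Save Page Now" endpoint (requests assumed installed; the
            | cited URL is made up):
            | 
            |   import requests
            | 
            |   url = "https://example.org/cited/page"
            |   # ask the Wayback Machine to snapshot the page now
            |   r = requests.get("https://web.archive.org/save/" + url,
            |                    timeout=120)
            |   print(r.status_code)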
        
           | edent wrote:
            | The full URL to the original page.
           | 
           | You aren't responsible if things go offline. No more than if
           | a publisher stops reprinting books and the library copies all
           | get eaten by rats.
           | 
            | A reader can assess the URL for trustworthiness (is it
            | scam.biz or legitimate_news.com), look at the path to
            | hazard a guess at the metadata and contents, and, finally,
            | look it up in an archive.
        
             | firefax wrote:
              | > The full URL to the original page.
             | 
              | I thought that was the standard in academia? I've had
              | reviewers chastise me when I did not use the Wayback
              | Machine to archive a citation and link to that, since
              | listing a "date retrieved" doesn't do jack if there's no
              | IA copy.
             | 
              | Short links were usually _in addition to_ full URLs, and
              | more in conference presentations than in the papers
              | themselves.
        
             | grapesodaaaaa wrote:
              | I think this is the only real answer. Shorteners might
              | work for things like old Twitter, where characters were
              | at a premium, but I would rather see the whole URL.
             | 
             | We've learned over the years that they can be unreliable,
             | security risks, etc.
             | 
             | I just don't see a major use-case for them anymore.
        
         | kazinator wrote:
         | The act of vandalism occurs when someone creates a shortened
         | URL, not when they stop working.
        
         | jeffbee wrote:
         | While an interesting attempt at an impact statement, 90% of the
         | results on the first two pages for me are not references to
         | goo.gl shorteners, but are instead OCR errors or just
         | gibberish. One of the papers is from 1981.
        
         | nikanj wrote:
          | The cost of dealing with and supporting an old codebase,
          | instead of burning it all and releasing a
          | written-from-scratch replacement next year.
        
         | crossroadsguy wrote:
          | I have always struggled with this. If I buy a book I don't
          | want an online/URL reference in it. Give the
          | book/author/ISBN/page etc., or refer to the
          | magazine/newspaper/journal/issue/page/author/etc.
        
           | BobaFloutist wrote:
           | I mean preferably do both, right? The URL is better for
           | however long it works.
        
             | SoftTalker wrote:
             | We are long, long past any notion that URLs are permanent
             | references to anything. Better to cite with title, author,
             | and publisher so that maybe a web search will turn it up
             | later. The original URL will almost certainly be broken
             | after a few years.
        
         | SirMaster wrote:
          | Can't someone just go through programmatically right now and
          | build a list of all these links and where they point? And
          | then put the list up somewhere everyone can consult if they
          | need to?
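          | 
          | In principle it's one request per link; a sketch (requests
          | assumed installed, the short code made up):
          | 
          |   import requests
          | 
          |   def resolve(short_url):
          |       # the shortener answers with a 301/302 whose
          |       # Location header is the destination
          |       r = requests.head(short_url, allow_redirects=False,
          |                         timeout=10)
          |       return r.headers.get("Location")
          | 
          |   print(resolve("https://goo.gl/abc123"))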
        
         | jlarocco wrote:
         | IMO it's less Google's fault and more a crappy tech education
         | problem.
         | 
         | It wasn't a good idea to use shortened links in a citation in
         | the first place, and somebody should have explained that to the
         | authors. They didn't publish a book or write an academic paper
         | in a vacuum - somebody around them should have known better and
         | said something.
         | 
         | And really it's not much different than anything else online -
         | it can disappear on a whim. How many of those shortened links
         | even go to valid pages any more?
         | 
         | And no company is going to maintain a "free" service forever.
         | It's easy to say, "It's only ...", but you're not the one doing
         | the work or paying for it.
        
           | gmerc wrote:
           | Ahh classic free market cop out.
        
             | FallCheeta7373 wrote:
             | if the smartest among us publishing for academia cannot
             | figure this out, then who will?
        
             | kazinator wrote:
             | Nope! There have in fact been education campaigns about the
             | evils of URL shorteners for years: how they pose security
             | risks (used for shortening malicious URLs), and how they
             | stop working when their domain is temporarily or
             | permanently down.
             | 
             | The authors just had their heads too far up their academic
             | asses to have heard of this.
        
           | justin66 wrote:
           | > It wasn't a good idea to use shortened links in a citation
           | in the first place, and somebody should have explained that
           | to the authors. They didn't publish a book or write an
           | academic paper in a vacuum - somebody around them should have
           | known better and said something.
           | 
           | It's a great idea, and today in 2025, papers are pretty much
           | the only place where using these shortened URLs makes a lot
           | of sense. In almost any other context you could just use a QR
           | code or something, but that wouldn't fit an academic paper.
           | 
           | Their specific choice of shortened URL provider was obviously
           | unfortunate. The real failure is that of DOI to provide an
           | alternative to goo.gl or tinyurl or whatever that is easy to
           | reach for. It's a big failure, since preserving references to
           | things like academic papers is part of their stated purpose.
        
             | dingnuts wrote:
             | Even normal HTTP URLs aren't great. If there was ever a
             | case for content-addressable networks like IPFS it's this.
             | Universities should be able to host this data in a
             | decentralized way.
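              | 
              | The core idea, as a sketch (file name made up): address
              | content by a hash of its bytes, not by a server:
              | 
              |   import hashlib
              | 
              |   data = open("cited-dataset.tar", "rb").read()
              |   # same bytes => same address, no matter who hosts them
              |   print(hashlib.sha256(data).hexdigest())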
        
         | QuantumGood wrote:
          | When they began offering this, their rep for ending services
          | was already so bad that I refused to consider goo.gl. It's
          | amazing for how many years now they have introduced and then
          | ended services with large user bases. Gmail being in "beta"
          | for five years was, weirdly, to me, a sign they might stick
          | with it.
        
         | justinmayer wrote:
         | In the first segment of the very first episode of the
         | Abstractions podcast, we talked about Google killing its goo.gl
         | URL obfuscation service and why it is such a craven abdication
         | of responsibility. Have a listen, if you're curious:
         | 
         | Overcast link to relevant chapter:
         | https://overcast.fm/+BOOFexNLJ8/02:33
         | 
         | Original episode link:
         | https://shows.arrowloop.com/@abstractions/episodes/001-the-r...
        
       | mrcslws wrote:
       | From the blog post: "more than 99% of them had no activity in the
       | last month" https://developers.googleblog.com/en/google-url-
       | shortener-li...
       | 
       | This is a classic product data decision-making fallacy. The right
       | question is "how much total value do all of the links provide",
       | not "what percent are used".
        
         | bayindirh wrote:
         | > The right question is "how much total value do all of the
         | links provide", not "what percent are used".
         | 
          | Yes, but it doesn't bring the sweet promotion home,
          | unfortunately. Ironically, if 99% of them don't see any
          | traffic, you can scale back the infra, run it on 2 VMs, and
          | make sure a single person can keep it up as a side quest,
          | just for fun (but, of course, pay them for their work).
          | 
          | This bean counting really makes me sad.
        
           | ahstilde wrote:
           | > just for fun (but, of course, pay them for their work).
           | 
           | Doing things for fun isn't in Google's remit
        
             | ceejayoz wrote:
             | It used to be. AdSense came from 20% time!
        
             | kevindamm wrote:
             | Alas, it was, once upon a time.
        
             | morkalork wrote:
              | Then they shouldn't have offered it as a free service in
              | the first place. It's like that discussion about how
              | Google, in all its 2-ton ADHD gorilla glory, will enter
              | an industry, offer a (near) free service or product,
              | decimate all competition, then decide it's not worth it
              | and shut down, leaving behind a desolate crater of
              | ruined businesses and angry, abandoned users.
        
               | jsperson wrote:
                | I'm still sore about Reader. The gap has never been
                | filled for me.
        
           | quesera wrote:
            | Configuring a static set of redirects would take a couple
            | of hours, and require literally zero maintenance forever.
           | 
           | Amazon should volunteer a free-tier EC2 instance to help
           | Google in their time of economic struggles.
        
             | bayindirh wrote:
             | This is what I mean, actually.
             | 
             | If they're so inclined, Oracle has an always free tier with
             | ample resources. They can use that one, too.
        
           | socalgal2 wrote:
            | If they wanted the sweet promotion they could add an
            | interstitial. Yes, people would complain, but at least the
            | old links would not stop working.
        
         | handsclean wrote:
         | I don't think they're actually that dumb. I think the dirty
         | secret behind "data driven decision making" is managers don't
         | want data to tell them what to do, they want "data" to make
         | even the idea of disagreeing with them look objectively wrong
         | and stupid.
        
           | HPsquared wrote:
            | It's a bit like the difference between "rule of law" and
            | "rule by law" (aka legalism).
           | 
           | It's less "data-driven decisions", more "how to lie with
           | statistics".
        
         | HPsquared wrote:
         | Indeed. I've probably looked at less than 1% of my family
         | photos this month but I still want to keep them.
        
           | JumpCrisscross wrote:
           | Fewer than 1% of passenger-miles driven involve the seatbelt
           | saving anyone. What a waste!
        
         | fizx wrote:
         | Don't be confused! That's not how they made the decision; it's
         | how they're selling it.
        
           | esafak wrote:
           | So how did they decide?
        
             | nemomarx wrote:
              | I expect it showed up as a cost on a budget sheet, and
              | then an analysis was done of the impact of shutting it
              | down.
        
               | sltkr wrote:
               | You can't get promoted at Google for not changing
               | anything.
        
         | esafak wrote:
         | What fraction of indexed Google sites, Youtube videos, or
         | Google Photos were retrieved in the last month? Think of the
         | cost savings!
        
           | nomel wrote:
            | YouTube already does this, to some extent, by slowly
            | reducing the quality of your videos if they're not
            | accessed frequently enough.
           | 
           | Many videos I uploaded in 4k are now only available in 480p,
           | after about a decade.
        
         | firefax wrote:
         | > "more than 99% of them had no activity in the last month"
         | 
         | Better to have a short URL and not need it, than need a short
         | URL and not have it IMO.
        
         | sltkr wrote:
         | I bet 99% of URLs that exist on the public web had no activity
         | last month. Might as well delete the entire WWW because it's
         | obviously worthless.
        
         | FredPret wrote:
         | "Data-driven decision making"
        
         | SoftTalker wrote:
         | From Google's perspective, the question is "How many ads are we
         | selling on these links" and if it's near zero, that's the value
         | to them.
        
       | jaydenmilne wrote:
        | ArchiveTeam is trying to brute force the entire URL space
        | before it's too late. You can run a VirtualBox VM/Docker image
        | (ArchiveTeam Warrior) to help (unique IPs are needed). I've
        | been running it for a couple of months and found a million.
       | 
       | https://wiki.archiveteam.org/index.php/ArchiveTeam_Warrior
        
         | pimlottc wrote:
         | Looks like they have saved 8000+ volumes of data to the
         | Internet Archive so far [0]. The project page for this effort
         | is here [1].
         | 
         | 0: https://archive.org/details/archiveteam_googl
         | 
         | 1: https://wiki.archiveteam.org/index.php/Goo.gl
        
         | localtoast wrote:
         | Docker container FTW. Thanks for the heads-up - this is a
         | project I will happily throw a Hetzner server at.
        
           | wobfan wrote:
            | Same here. I am genuinely asking myself what for, though.
            | I mean, they'll receive a list of the linked domains, but
            | what will they do with that?
        
             | localtoast wrote:
             | It's not only goo.gl links they are actively archiving.
             | Take a look at their current tasks.
             | 
             | https://tracker.archiveteam.org/
        
             | fragmede wrote:
             | save it, forever*.
             | 
             | * as long as humanly possible, as is archive.org's mission.
        
         | ojo-rojo wrote:
          | Thanks for sharing this. I've often felt that the ease with
          | which we can erase digital content makes our time period
          | look like a digital dark age to archaeologists studying
          | history a few thousand years from now.
          | 
          | Us preserving digital archives is a good step. I guess
          | making hard copies would be the next step.
        
         | AstroBen wrote:
         | Just started, super easy to set up
        
         | hadrien01 wrote:
          | After a while I started to get "Google asks for a login"
          | errors. Should I just keep going? There's no indication on
          | the ArchiveTeam wiki of what I should do.
        
       | Brajeshwar wrote:
       | What will it really cost for Google (each year) to host whatever
       | was created, as static files, for as long as possible?
        
         | malfist wrote:
         | It'd probably cost a couple tens of dollars, and Google is
         | simply too poor to afford that these days. They've spent all
         | their money on AI and have nothing left
        
       | jedberg wrote:
        | I have only given this a moment's thought, but why not just
        | publish the URL map as a text file or SQLite DB? So at least
        | we know where they went? I don't think it would be a privacy
        | issue since the links are all public?
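        | 
        | A sketch of what such a dump could look like as SQLite (the
        | row is made up):
        | 
        |   import sqlite3
        | 
        |   con = sqlite3.connect("goo_gl_map.db")
        |   con.execute("CREATE TABLE IF NOT EXISTS links"
        |               "(code TEXT PRIMARY KEY, target TEXT)")
        |   con.execute("INSERT OR IGNORE INTO links VALUES (?, ?)",
        |               ("abc123", "https://example.org/long/path"))
        |   con.commit()
        |   print(con.execute("SELECT target FROM links WHERE code=?",
        |                     ("abc123",)).fetchone())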
        
         | devrandoom wrote:
         | Are they all public? Where can I see them?
        
           | jedberg wrote:
           | You can brute force them. They don't have passwords. The
           | point is the only "security" is knowing the short URL.
        
           | Alifatisk wrote:
            | I don't think so, but you can find the indexed URLs here:
            | https://www.google.com/search?q=site%3A"goo.gl" It's about
            | 9.6 million links. And those are just the ones that got
            | indexed; there should be way more out there.
        
             | sltkr wrote:
             | I'm surprised Google indexes these short links. I expected
             | them to resolve them to their canonical URL and index that
             | instead, which is what they usually do when multiple URLs
             | point to the same resource.
        
         | DominikPeters wrote:
         | It will include many URLs that are semi-private, like Google
         | Docs that are shared via link.
        
           | high_na_euv wrote:
           | So exclude them
        
             | ceejayoz wrote:
             | How?
             | 
             | How will they know a short link to a random PDF on S3 is
             | potentially sensitive info?
        
           | ryandrake wrote:
           | If some URL is accessible via the open web, without
           | authentication, then it is not really private.
        
             | bo1024 wrote:
             | What do you mean by accessible without authentication? My
             | server will serve example.com/64-byte-random-code if you
             | request it, but if you don't know the code, I won't serve
             | it.
        
               | prophesi wrote:
                | Obfuscation may hint that it's intended to be private,
                | but it's certainly not authentication. And the
                | keyspace for these goo.gl short URLs is much smaller
                | than a 64-byte alphanumeric code's.
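                | 
                | Back-of-envelope sketch (goo.gl codes were roughly six
                | [A-Za-z0-9] characters):
                | 
                |   import secrets
                | 
                |   print(f"{62**6:.1e}")      # ~5.7e+10 short codes
                |   print(f"{2**(64*8):.1e}")  # 64-byte random codes
                |   print(secrets.token_urlsafe(64))  # minting one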
        
               | hombre_fatal wrote:
               | Sure, but you have to make executive decisions on the
               | behalf of people who aren't experts.
               | 
               | Making bad actors brute force the key space to find
               | unlisted URLs could be a better scenario for most people.
               | 
                | People also upload unlisted YouTube videos and cloud docs
               | so that they can easily share them with family. It
               | doesn't mean you might as well share content that they
               | thought was private.
        
               | bo1024 wrote:
               | I'm not seeing why there's a clear line where GET cannot
               | be authentication but POST can.
        
               | prophesi wrote:
               | Because there isn't a line? You can require auth for any
               | of those HTTP methods. Or not require auth for any of
               | them.
        
               | wobfan wrote:
               | I mean, going by that argument a username + password is
                | also just obfuscation. Generating a unique 64-byte code
               | is even more secure than this, IF it's handled correctly.
        
           | charcircuit wrote:
            | Then use something like argon2 on the keys, so that it
            | takes a long time to brute force them all, similar to how
            | it is today.
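            | 
            | A sketch of the idea, with scrypt from the standard
            | library standing in for argon2 (a fixed salt keeps
            | lookups deterministic; the entry is made up):
            | 
            |   import hashlib
            | 
            |   def slow_key(code):
            |       # deliberately expensive: holders of a real code can
            |       # still look it up, but enumerating the whole
            |       # keyspace becomes impractical
            |       return hashlib.scrypt(code.encode(), salt=b"dump",
            |                             n=2**14, r=8, p=1,
            |                             maxmem=2**25).hex()
            | 
            |   published = {
            |       slow_key("abc123"): "https://example.org/long/path",
            |   }
            |   print(published.get(slow_key("abc123")))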
        
         | Nifty3929 wrote:
         | I'd rather see it as a searchable database, which I would think
         | is super cheap and no maintenance for Google, and avoids these
            | privacy issues. You can input a known goo.gl code and get
            | its real URL, but can't just list everything out.
        
           | growt wrote:
           | And then output the search results as a 302 redirect and it
           | would just be continuing the service.
        
       | cyp0633 wrote:
       | The runner of Compiler Explorer tried to collect the public
       | shortlinks and do the redirection themselves:
       | 
       | Compiler Explorer and the Promise of URLs That Last Forever (May
       | 2025, 357 points, 189 comments)
       | 
       | https://news.ycombinator.com/item?id=44117722
        
       | pluc wrote:
       | Someone should tell Google Maps
        
       | ourmandave wrote:
       | A comment said they stopped making new links and announced back
       | in 2018 it would be going away.
       | 
        | I'm not a Google fanboi and the Google graveyard is a
        | well-known thing, but this has been 6+ years coming.
        
         | goku12 wrote:
         | For one, not enough people seem to be aware of it. They don't
         | seem to have given that announcement the importance and effort
         | it deserved. Secondly, I can't say that they have a good
         | migration plan when shutting down their services. People
          | scrambling like this to back up the data is rather common these
         | days. And finally, this isn't a service that can be so easily
         | replaced. Even if people knew that it was going away, there
         | would be short-links that they don't remember, but are
         | important nevertheless. Somebody gave an example above -
         | citations in research papers. There isn't much thought given to
         | the consequences when decisions like this are taken.
         | 
         | Granted that it was a free service and Google is under no
         | obligation to keep it going. But if they were going to be so
         | casual about it, they shouldn't have offered it in the first
         | place. Or perhaps, people should take that lesson instead and
         | spare themselves the pain.
        
       | micromacrofoot wrote:
       | This is just being a poor citizen of the web, no excuses. Google
       | is a 2 trillion dollar company, keeping these links working
       | indefinitely would probably cost less than what they spend on
       | homepage doodles.
        
       | insane_dreamer wrote:
       | the lesson? never trust industry
        
       | cpeterso wrote:
       | Google's own services generate goo.gl short URLs (Google Maps
       | generates https://maps.app.goo.gl/ URLs for sharing links to map
       | locations), so I assume this shutdown only affects user-generated
        | short URLs. Google's original announcement doesn't say as much,
       | but it is carefully worded to specify that short URLs of the
       | "https://goo.gl/* format" will be shut down.
       | 
       | Google's probably trying to stop goo.gl URLs from being used for
       | phishing, but doesn't want to admit that publicly.
        
         | growthwtf wrote:
          | This actually makes the most logical sense to me; thank you
          | for the idea. I don't agree with the way they're doing it,
          | of course, but this probably is risk mitigation for them.
        
       | ElijahLynn wrote:
        | OMFG - Google should keep these up forever. What a hit to
        | trust. Trust in Google was already low after everything
        | they've killed; this is another dagger.
        
         | phyzix5761 wrote:
         | People still trust Google?
        
       | musicale wrote:
       | How surprising.
       | 
       | https://killedbygoogle.com
        
         | hinkley wrote:
         | That needs a chart.
        
       | krunck wrote:
       | Stop MITMing your content. Don't use shorteners. And use
       | reasonable URL patterns on your sites.
        
         | Cyan488 wrote:
         | I have been using a shortening service with my own domain name
         | - it's really handy, and I figure that if they go down I could
         | always manually configure my own DNS or spin up some self-
         | hosted solution.
        
       | pfdietz wrote:
       | Once again we are informed that Google cannot be trusted with
       | data in the long term.
        
       | davidczech wrote:
       | I don't really get it, it must cost peanuts to leave a static map
       | like this up for the rest of Google's existence as a company.
        
         | nikanj wrote:
          | There are two things that are real torture to Google dev
          | teams: 1) Being told a product is completed and needs no new
          | features or changes 2) Being made to work on legacy code
        
       | gedy wrote:
        | At least they didn't release 2 new competing d.uo or re.ad,
        | etc. shorteners and expect you to migrate.
        
       | JimDabell wrote:
       | Cloudflare offered to keep it running and were turned away:
       | 
       | https://x.com/elithrar/status/1948451254780526609
       | 
       | Remember this next time you are thinking of depending upon a
       | Google service. They could have kept this going easily but are
       | intentionally breaking it.
        
         | fourseventy wrote:
          | Google killing their Domains service was the last straw for
          | me. I've been moving all of my stuff off of Google since
          | then.
        
           | nomel wrote:
            | I'm still _shocked_ that my Google Voice number still
            | functions after all these years. It makes me assume its
            | main purpose is to actually be a honeypot of some sort,
            | maybe for spam call detection.
        
             | mrj wrote:
             | Shhh don't remind them
        
             | throwyawayyyy wrote:
             | Pretty sure you can thank the FCC for that :)
        
             | joshstrange wrote:
              | Because IIRC it's essentially completely run by another
              | company (I want to say Bandwidth?) and, again my memory
              | might be fuzzy, originally came from an acquisition of a
              | company called GrandCentral.
             | 
             | My guess is it just keeps chugging along with little
             | maintenance needed by Google itself. The UI hasn't changed
             | in a while from what I've seen.
        
               | JumpCrisscross wrote:
               | > _originally came from an acquisition of a company
               | called Grand Central_
               | 
               | This has protected absolutely nothing else from Google's
               | PMs.
        
             | kevin_thibedeau wrote:
             | Mass surveillance pipeline to the successor of room 641A.
        
             | hnfong wrote:
             | Another _shocking_ story to share.
             | 
             | I have a tiny service built on top of Google App Engine
             | that (only) I use personally. I made it 15+ years ago, and
             | the last time I deployed changes was 10+ years ago.
             | 
             | It's still running. I have no idea why.
        
               | coryrc wrote:
               | It's the most enterprise-y and legacy thing Google sells.
        
         | thebruce87m wrote:
         | > Remember this next time you are thinking of depending upon a
         | Google service.
         | 
          | Next time? I guess there's a wave of new people who haven't
          | learned that lesson yet.
        
       | charlesabarnes wrote:
        | Now I'm wondering why Chrome changed the behavior to use
        | share.google links, if this was the inevitable outcome.
        
       | Bluestein wrote:
       | Another one for the Google [G]raveyard.-
        
       | fnord77 wrote:
       | they attempted this in 2018
       | 
       | https://9to5google.com/2018/03/30/google-url-shortener-shut-...
        
       | pkilgore wrote:
        | Google probably spends more money a month on coffee creamer
        | for a single conference room than it would take to preserve
        | this service.
        
       | pentestercrab wrote:
        | There seems to have been a recent uptick in phishers using
        | goo.gl URLs. Yes, even without new URLs being accepted: by
        | registering expired domains that old short links still
        | reference.
        
       | hinkley wrote:
       | What's their body count now? Seems like they've slowed down the
       | killing spree, but maybe it's just that we got tired of talking
       | about them.
        
         | theandrewbailey wrote:
         | 297
         | 
         | https://killedbygoogle.com/
        
           | hinkley wrote:
           | Oh look it's been months since they killed a project!
        
             | codyogden wrote:
             | Because there's not much left to kill.
        
       | lrvick wrote:
       | Yet another reminder to never trust corpotech to be around long
       | term.
        
       | andrii9 wrote:
        | Ugh, I used to use https://fuck.it for short links too. Still
        | a legendary domain though.
        
       | ChrisArchitect wrote:
       | Discussion on the source from 2024:
       | https://news.ycombinator.com/item?id=40998549
        
       | ChrisArchitect wrote:
        | Noticed recently that on some Google properties with Share
        | buttons, it's generating _share.google_ links now instead of
        | goo.gl.
       | 
       | Is that the same shortening platform running it?
        
       | xutopia wrote:
        | Google is making it harder and harder to depend on their software.
        
         | christophilus wrote:
         | That's a good thing from my perspective. I wish they'd crush
         | YouTube next. That's the only Google IP I haven't been able to
         | avoid.
        
       | david422 wrote:
       | Somewhat related - I wanted to add short urls to a project of
        | mine. I was looking around at a bunch of URL shorteners, and
        | then realized it would be pretty simple to create my own. It's
        | my own content pointed at by my own service, so I don't have
        | to worry about 3rd-party content or other services going down.
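        | 
        | A minimal sketch of that idea with Flask (in-memory only; a
        | real deployment would persist the map):
        | 
        |   import secrets
        |   from flask import Flask, abort, redirect, request
        | 
        |   app = Flask(__name__)
        |   links = {}  # code -> target URL
        | 
        |   @app.post("/shorten")
        |   def shorten():
        |       code = secrets.token_urlsafe(4)
        |       links[code] = request.form["url"]
        |       return request.host_url + code + "\n"
        | 
        |   @app.get("/<code>")
        |   def follow(code):
        |       if code not in links:
        |           abort(404)
        |       return redirect(links[code], code=301)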
        
       | spankalee wrote:
       | As an ex-Googler, the problem here is clear and common, and it's
       | not the infrastructure cost: it's ownership.
       | 
       | No one wants to own this product.
       | 
       | - The code could be partially frozen, but large scale changes are
       | constantly being made throughout the google3 codebase, and
       | someone needs to be on the hook for approving certain changes or
       | helping core teams when something goes wrong. If a service it
       | uses is deprecated, then lots of work might need to be done.
       | 
       | - Every production service needs someone responsible for keeping
        | it running. Maybe an SRE, though many smaller teams don't have
       | their own SREs so they manage the service themselves.
       | 
       | So you'd need some team, some full reporting chain all the way
       | up, to take responsibility for this. No SWE is going to want to
       | work on a dead product where no changes are happening, no manager
       | is going to care about it. No director is going to want to put
       | staff there rather than a project that's alive. No VP sees any
       | benefit here - there's only costs and risks.
       | 
       | This is kind of the Reader situation all over again (except for
       | the fact that a PM with decent vision could have drastically
       | improved and grown Reader, IMO).
       | 
       | This is obviously bad for the internet as a whole, and I
       | personally think that Google has a moral obligation to not rug
       | pull infrastructure like this. Someone there knows that critical
       | links will be broken, but it's in no one's advantage to stop that
       | from happening.
       | 
       | I think Google needs some kind of "attic" or archive team that
       | can take on projects like this and make them as efficiently
       | maintainable in read-only mode as possible. Count it as good-will
       | marketing, or spin it off to google.org and claim it's a non-
       | profit and write it off.
       | 
       | Side note: a similar, but even worse situation for the company is
       | the Google Domains situation. Apparently what happened was that a
       | new VP came into the org that owned it and just didn't understand
       | the product. There wasn't enough direct revenue for them, even
       | though the imputed revenue to Workspace and Cloud was
       | significant. They proposed selling it off and _no other VPs
       | showed up to the meeting about it with Sundar_ so this VP got to
       | make their case to Sundar unchallenged. The contract to sell to
       | Squarespace was signed before other VPs who might have objected
       | realized what happened, and Google had to _buy back_ parts of it
       | for Cloud.
        
         | rs186 wrote:
         | Many good points, but if you don't mind me asking: if you were
         | at Google, would you be willing to be the lead of that archive
         | team, knowing that you'll be stuck at this position for the
         | next 10 years, with the possibility of your team being
         | downsized/eliminated when the wind blows slightly in the other
         | direction?
        
       | romaniv wrote:
       | URL shorteners were always a bad idea. At the rate things are
       | going I'm not sure people in a decade or two won't say the same
        | thing about URLs and the Web as a whole. The fact that there
        | is no protocol-level support for archiving, versioning or even
        | client-side replication means that everything you see on the
        | Web right now has an overwhelming probability of permanently
        | disappearing in the near future. This is an _astounding_
        | engineering oversight for something that's basically the most
        | popular communication
       | 
       | Also, it's quite conspicuous that 30+ years into this thing
       | browsers still have no built-in capacity to store pages locally
       | in a reasonable manner. We still rely on "bookmarks".
        
       | rsync wrote:
       | A reminder that the "Oh By"[1] everything-shortener not only
       | exists but can be used as a plain old URL shortener[2].
       | 
       | Unlike the google URL shortener, you can count on "Oh By"
       | existing in 20 years.
       | 
       | [1] https://0x.co
       | 
       | [2] https://0x.co/hnfaq.html
        
       | mymacbook wrote:
        | Why is everyone jumping on the blame-the-victims bandwagon?!
        | This is not the fault of users, whether they were scientists
        | publishing papers or the general public sharing links. This is
        | absolutely 100% on Alphabet/Google.
       | 
       | When you blame your customer, you have failed.
        
       | ccgreg wrote:
       | Common Crawl's count of unique goo.gl links is approximately 10
       | million. That's in our permanent archive, so you'll be able to
       | consult them in the future.
       | 
       | No search engine or crawler person will ever recommend using a
       | shortener for any reason.
        
       ___________________________________________________________________
       (page generated 2025-07-25 23:01 UTC)