[HN Gopher] Where's the fastest place to put my server? How much...
___________________________________________________________________
Where's the fastest place to put my server? How much does it
matter?
Author : todsacerdoti
Score : 116 points
Date : 2021-02-03 10:44 UTC (1 days ago)
(HTM) web link (calpaterson.com)
(TXT) w3m dump (calpaterson.com)
| joshxyz wrote:
| Interesting problem really. For Telegram bots I found Frankfurt,
| Germany to be near their servers, which gives me almost
| instantaneous responses.
|
| For trading platforms it's fun monitoring latencies on websocket
| heartbeats; sometimes I get lucky and discover some platform is
| really nearby (e.g. 1ms to 3ms), which is very nice.
|
| Still, nothing beats localhost as the fastest place on earth.
| [deleted]
| mrkurt wrote:
| Shameless plug: https://fly.io
|
| We built Fly _specifically_ so you can run servers close to your
| users. We do a lot for you on the network side, too, like
| terminate TLS in all our regions.
|
| One thing to note, though, is that latencies between cities are
| surprisingly different from their theoretical minimums. We have
| apps on Fly with lots of South American users. It's frequently
| faster for people in Argentina to connect to Miami than it is to
| hit Sao Paulo or Santiago. The same goes for Africa.
|
| And the "Asia Pacific" region is a monster. Most points of
| presence there are _thousands_ of miles away from each other. So
| we occasionally see people go from Tokyo -> LA instead of Tokyo
| -> Hong Kong/Singapore.
| toast0 wrote:
| > It's frequently faster for people in Argentina to connect to
| Miami than it is to hit Sao Paulo or Santiago.
|
| Yeah, most networks in South America don't interconnect with
| networks in other countries. They mostly connect in Miami,
| because international connectivity is difficult to arrange, and
| if you can only manage to connect to one other country, it
| needs to be the US. Africa likely goes to Europe rather than
| the US? But same thing, it's hard to connect, so first you have
| to connect to where the content is. And there's not (currently)
| enough network stability and capacity to just connect to your
| neighbors and rely on them to get you to the US/EU either.
|
| I don't know if it's current, but in Japan there used to be two
| dominant networks, and they weren't both available at the same
| PoP. You would need a Tokyo-A PoP to get to one, and a Tokyo-B
| PoP to get to the other; and it was difficult to interconnect
| those PoPs.
| tudorizer wrote:
| I'm intrigued. Do you own datacenters or are you piggybacking
| off some existing infra?
| mrkurt wrote:
| We have dedicated hardware in a bunch of facilities, either
| leased or colo.
| subleq wrote:
| The applications I work on make several database calls for a
| single user HTTP request. Shouldn't the application server be as
| close as possible to the database, rather than to the user? I
| struggle to think of an example where your model is useful,
| unless your application has no database.
| danielheath wrote:
| All the cacheable responses (including assets) benefit from
| being close to the user.
| mrkurt wrote:
| App servers should have fast access to data! Which either
| means running caches alongside the app servers, or doing
| clever things with databases.
|
| We just released a preview of Postgres + regional replicas. This
| setup keeps the Postgres leader in one region and lets people
| add replicas in other regions they're interested in. It works
| really well for typical full stack apps.
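|
| A minimal sketch of the read/write split that kind of setup
| implies (the DSNs and query here are made up, not Fly's actual
| interface; just illustrating the idea in Python):
|
|     import os
|     import psycopg2
|
|     # Hypothetical connection strings: the leader in its home
|     # region, and a read replica in whatever region this
|     # instance of the app happens to run in.
|     PRIMARY_DSN = os.environ["PRIMARY_DSN"]
|     REPLICA_DSN = os.environ["LOCAL_REPLICA_DSN"]
|
|     def get_conn(readonly: bool):
|         # Reads hit the nearby replica, writes go to the leader.
|         dsn = REPLICA_DSN if readonly else PRIMARY_DSN
|         return psycopg2.connect(dsn)
|
|     # Read-mostly request paths stay on the local replica...
|     with get_conn(readonly=True) as conn, conn.cursor() as cur:
|         cur.execute("SELECT title FROM posts LIMIT 10")
|         rows = cur.fetchall()
|
|     # ...while writes pay the round trip to the leader's region.
|     with get_conn(readonly=False) as conn, conn.cursor() as cur:
|         cur.execute("INSERT INTO posts (title) VALUES (%s)",
|                     ("hello",))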
|
| We've also found a surprising number of customers who want to
| run in only one specific region. They don't spread apps out
| geographically, they just have a concentrated population of
| users in, say, Sydney. We're increasingly becoming "Heroku
| for <region>".
| kall wrote:
| If it's feasible to shard data by user, you could put it in a
| close shard; most users don't move continents that much. I
| have never done this but as a latency nut I would like to try
| it sometime. Seems like Cloudflare's Durable Objects kind
| of promises to do this automatically.
|
| Then there are a lot of requests you can serve from read
| replicas or caches that can be close.
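|
| A minimal sketch of the shard-by-user idea (region names and
| connection strings are invented):
|
|     # Each user record carries a "home region"; each region has
|     # its own database shard.
|     SHARDS = {
|         "eu": "postgres://db.eu-central.example/app",
|         "us": "postgres://db.us-east.example/app",
|         "ap": "postgres://db.ap-southeast.example/app",
|     }
|
|     def shard_for(home_region: str) -> str:
|         # Fall back to a default shard for users we can't place.
|         return SHARDS.get(home_region, SHARDS["us"])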
| mrkurt wrote:
| Cockroach has a neat and almost transparent way of doing
| this. If you key your tables by customer, it'll happily
| move those chunks of data to the nodes that need them most
| often.
|
| We almost shipped Cockroach, but it's missing some Postgres
| features that most full stack frameworks rely on.
| e12e wrote:
| It can work as a "smart cache" - for example:
|
| https://fly.io/docs/app-guides/graphql-edge-caching-apollo/
|
| I'm not entirely convinced this is a great idea. Apparently the
| example uses plain http between the graphql proxy/cache and
| openlibrary.org - that might be considered a bug, I suppose:
| https://github.com/fly-apps/edge-apollo-cache/blob/master/sr...
| At any rate, for whatever source you're proxying (e.g. your own
| OpenAPI REST endpoints) you might want SSL - or to connect the
| db/api to fly.io via VPN a la:
| https://fly.io/blog/building-clusters-with-serf/
| See also the linked:
| https://fly.io/blog/incoming-6pn-private-networks/
| tptacek wrote:
| The Apollo thing is just an illustration of a pattern a
| bunch of our customers want to be able to do --- fine-
| grained API caching. I hope it's obvious that the HTTP
| connection to OpenLibrary isn't the point. :)
|
| But: while you can very easily use TLS to backhaul to an
| API, a more "modern" Fly.io way to solve this problem is
| with WireGuard gateways; it's trivial --- I'd argue, easier
| than configuring SSH certs --- to get a WireGuard link from
| your cache app on Fly back to AWS, GCP, or wherever your
| non-Fly legacy database lives.
|
| I really think WireGuard is going to change a lot of the
| ways we design systems like this.
| toast0 wrote:
| Assuming you can't get your database close to your users, it
| depends on the data dependencies between your database calls.
| If they're independent, and you do them in parallel, you can
| do pretty well with a frontend near your user, parallel
| queries to backends wherever.
|
| If your queries have data dependencies, you'd get better
| results with a front end near your users, and a middle tier
| api near your database.
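|
| A crude back-of-the-envelope sketch of that difference (the
| numbers are invented):
|
|     # Each query costs roughly one network round trip; ignore
|     # query execution time.
|     RTT_NEAR_MS = 1    # app server in the same region as the DB
|     RTT_FAR_MS = 80    # app server on another continent
|
|     def request_ms(n_queries, rtt_ms, parallel):
|         # Independent queries fired in parallel cost ~one round
|         # trip in total; dependent (serial) queries pay it once
|         # per query.
|         return rtt_ms if parallel else n_queries * rtt_ms
|
|     print(request_ms(5, RTT_FAR_MS, parallel=True))    # ~80 ms
|     print(request_ms(5, RTT_FAR_MS, parallel=False))   # ~400 ms
|     print(request_ms(5, RTT_NEAR_MS, parallel=False))  # ~5 ms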
|
| But, just TCP (or TLS) termination near your users and the
| real frontend near your database can make a surprising amount
| of difference.
|
| On the other hand, there's a lot you can do to make sure
| things are fast from limited locations. Keep an eye on data
| size, make sure you're monitoring for and fixing slow
| queries, optimize your images, etc. If your html takes
| seconds to serve, it barely matters where you served it from.
| airocker wrote:
| Can you elaborate more on the server in Finland? How does it
| work? The ones I have seen are super expensive and the only
| thing they provide over putting a server at home is the network
| quality. Any idea where we can get an inexpensive data center
| that can power the system on/off and provide a good network?
| macno wrote:
| My guess: the author uses Hetzner.
| calpaterson wrote:
| Yep.
| preinheimer wrote:
| We (not the author of the article) use this provider in
| Finland: https://creanova.org/
| superzamp wrote:
| Nice article.
|
| > Caches work great for CSS files, images and javascript - stuff
| that doesn't change for each user. It doesn't work as well for
| the responses to API calls, for which the responses are different
| for each user, and sometimes, each time
|
| Regarding this, there seem to be some people addressing the
| issue now (I have fly.io in mind, and maybe Vercel, but I believe
| the latter uses lambdas under the hood so it's probably less
| effective).
| NDizzle wrote:
| There's always Varnish.
| Nextgrid wrote:
| Ultimately there has to be a source of truth somewhere, and
| geographically scaling a DB is difficult (CAP theorem and all
| that). Running your code at the edge doesn't help much if it
| still has to talk to a database far away to actually produce a
| response.
| CppCoder wrote:
| Does anyone know, or have experience with, how much a "Direct
| Connect" line reduces latency?
|
| Instead of having your users go through the public internet,
| they connect to some nearby proxy which sends the traffic
| through the "private" line.
| ignoramous wrote:
| Right now, due to vast CDN footprints, PoPs would be the fastest
| place to run a "web server": S3 + CloudFront / Lambda@Edge,
| Cloudflare Workers, StackPath EdgeEngine, etc.
|
| Soon the fastest place is going to be the 5G edge, with products
| like vapor.io's Kinetic Edge colo, AWS Wavelength, and Google
| Anthos for Telecommunications already making a push for it.
|
| Ex: https://cloud.google.com/blog/topics/anthos/anthos-for-
| telec...
| bob1029 wrote:
| The edge is getting to be really important for high quality
| interactions, but there are some caveats.
|
| In systems in which there is common shared state between all
| participants (e.g. Fortnite), you fundamentally must have a
| form of centralized authority somewhere. In these cases, you do
| not gain very much by pushing things to the edge if most
| interactions require taking a round trip through the business
| state machine.
|
| This realization is why I really enjoy the renewed push for
| server-side hosting technologies. Accept the fundamental
| physics constraints, bring all the state under 1 roof, and
| think differently about the footprint of a global-scale
| application. AWS solved this problem by making their regions
| more-or-less 100% standalone. The only major thing that really
| spans all regions is account/session management, but consider
| that there aren't serious UX constraints around a login attempt
| taking more than 150ms.
| viraptor wrote:
| > In systems in which there is common shared state between
| all participants (e.g. Fortnite), you fundamentally must have
| a form of centralized authority somewhere.
|
| I wonder if there's any big online game already which does
| matchmaking that prefers local servers. And I don't mean local
| as in "people registered in us-east region", but rather
| "there are 4 people ready to play in Chicago, spawn an
| instance there and connect them".
| bob1029 wrote:
| I am almost certain this is already a metric used by
| matchmaking in certain FPS games like Overwatch.
| vermilingua wrote:
| Ditto CS:GO, when I was playing with a friend in Sweden
| (vs Australia), we'd all have to increase our max allowed
| ping to the server, or the game would never find us a
| party.
| sudhirj wrote:
| A lot of the tech now (see Cloudflare Durable Objects and
| DynamoDB Global Tables) allows having _many_ centralised
| authorities. Eventually one in every major city in the world.
| fouric wrote:
| As a not-currently-webdev, it seems to me like making a good
| website, on a technical level, is closer to an engineering trade-
| space (as opposed to "just do it better") than I had thought.
|
| Splitting static content into smaller files allows for better
| cache utilization, but increases page load times due to higher
| latency when those items aren't cached. Bundling content reduces
| roundtrip penalties (latency, various per-request processing
| overheads) at the cost of greater bandwidth usage.
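|
| A crude model of that trade-off (all numbers invented, and it
| ignores caching, HTTP/2 and TCP slow start):
|
|     import math
|
|     RTT_S = 0.08          # 80 ms round trip to the server
|     BANDWIDTH_BPS = 5e6   # 5 Mbit/s link
|     CONCURRENCY = 6       # parallel requests the browser makes
|
|     def load_time_s(n_files, total_bits):
|         # Requests happen in waves of CONCURRENCY, each wave
|         # costing one round trip, plus time to move the bytes.
|         waves = math.ceil(n_files / CONCURRENCY)
|         return waves * RTT_S + total_bits / BANDWIDTH_BPS
|
|     print(load_time_s(1, 8e6))   # one 1 MB bundle:       ~1.68 s
|     print(load_time_s(30, 6e6))  # 30 files, 750 kB total: ~1.60 s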
|
| Wealthier users usually are latency-limited, as opposed to
| bandwidth-limited. Mobile users and poorer users in wealthier
| countries are usually limited on both. Users in poorer countries
| have it even worse.
|
| The only way that you can "win" appears to be by making your
| content as static and as simple as possible.
| cookie_monsta wrote:
| > The only way that you can "win" appears to be by making your
| content as static and as simple as possible.
|
| Please.
| jackjackk0 wrote:
| On the related github repo:
|
| "If you just want to see what I did, best to read the makefile
| directly."
|
| Sometimes I'm amazed by the power of a good makefile that allows
| replicating a perhaps fairly complex set of inter-dependent
| targets. I wish this approach were used more in academic
| research, even though fitting data analysis and modelling into a
| standard makefile can get tricky (e.g. some steps going through
| remote cluster computing, some models involving a large number
| of files that would need to be listed as dependencies).
| xiii1408 wrote:
| Minor note---you can get much tighter theoretical minimum
| latencies than the ones listed in your table.
|
| 1. The table's mins divide straight line distance by the speed of
| light. This gives you the time it takes for light to travel in a
| straight line from, say, London to New York. However, your "real
| latencies" are roundtrip ("ping") latencies. Thus, you need to
| multiply all the theoretical latencies by two.
|
| 2. Data does not travel through a fiber optic cable at the speed
| of light. This is because light actually bends around the cable
| when transmitted. These are called cosine losses, and mean the
| light travels roughly 5/3 the actual cable distance. So, multiply
| again by 5/3. (This is why HFT firms use microwave links for long
| distances.)
|
| If you multiply the theoretical maxes by 3.33, you'll see that
| they're very close to the actual latencies you're observing. New
| York -> London becomes 62.7 ms optimal, so you're only 13% slower
| than the theoretical max.
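|
| For concreteness (the great-circle distance is a rough figure,
| and the 5/3 factor is the estimate above):
|
|     C_KM_PER_S = 299_792           # speed of light in vacuum
|     FIBER_PATH_FACTOR = 5 / 3      # extra path/slowdown estimate
|     LONDON_NYC_KM = 5_570          # rough great-circle distance
|
|     one_way_ms = LONDON_NYC_KM / C_KM_PER_S * 1000  # ~18.6 ms
|     round_trip_ms = 2 * one_way_ms                  # ~37.2 ms
|     floor_ms = round_trip_ms * FIBER_PATH_FACTOR
|     print(round(floor_ms, 1))  # ~61.9 ms, roughly the 62.7 above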
|
| Here on the west coast, I typically see within 10% of the
| theoretical min for data going over the Seattle -> Japan
| submarine cables.
| xiii1408 wrote:
| Also a couple of things to note:
|
| 1. Submarine cables can't go in a straight line, since they've
| got to, you know, go down to the bottom of the ocean. (Which,
| you may have heard, is quite deep.) Also, a cable with a length
| of several thousand miles tends to have some slack.
|
| 2. Your packets may take a very curvy route from one city to
| another, even when they're not geographically that distant.
| This may be because your ISP is bad (and has poor/limited
| routes), because of geographic or geopolitical concerns, or just
| because of the way the Internet infrastructure is built. On the
| US's west coast, I often see latencies 60%+ higher than the
| theoretical minimum when accessing servers in the central or
| eastern US. (e.g. SF -> Des Moines, IA at 70ms).
| cycomanic wrote:
| It's important to remember your packets are in the network
| layer (probably even the transport or application layer when
| you send them? I'm not so familiar with the higher layers in
| the OSI stack).
|
| So you are still quite a bit removed from the physical
| layer. Your packet will likely go through several
| electrical-to-optical and optical-to-electrical conversions,
| probably there will be some electrical switches, plus
| multiplexers, all of which contain buffers. Then there is
| forward error correction in the physical layer, which also
| requires buffers, etc.
|
| And you're obviously right that for many reasons the
| "straight path" might not be the path that is being taken, or
| even the fastest one.
|
| Bottom line, estimating ping time from geographic distance
| gives only a very rough estimate. However, the longer the
| distance through an uninterrupted link (i.e. a submarine
| cable), the better your estimate. I.e. if you sit in a Google
| datacentre which is directly connected to their fibre
| backbone and ping a machine in a similar data centre in a
| different country, you will get quite close numbers I
| imagine (I don't work for Google). On the other hand, if you
| sit somewhere in the mid-west at home and ping a server in
| e.g. NY or LA, not so much.
| danaliv wrote:
| For point 1, is the depth of the ocean significant compared
| to the distances traversed? My back-of-the-envelope math
| suggests it's less than half a percent for a cable from New
| York to England. (6 miles down + 6 miles up, divided by rough
| great circle distance of 2600 nautical miles.)
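|
| (Worked through, converting nautical to statute miles:)
|
|     NM_TO_MILES = 1.1508
|     extra_miles = 6 + 6                     # down and back up
|     route_miles = 2600 * NM_TO_MILES        # ~2992 statute miles
|     print(extra_miles / route_miles * 100)  # ~0.4%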
|
| I would think a bigger factor would be that the cables (IIRC)
| don't go in straight lines (which you did allude to).
| sandworm101 wrote:
| Correct. The ocean bottom is relatively flat once away from
| the continents. Also, the submarine cable actually has a
| very slight advantage over a surface cable, since at several
| thousand feet down it follows a slightly smaller-radius
| curve. I'm waiting for an HFT firm to bore a literally
| straight hole between London and New York. Then we'll know
| that HFT has gone too far.
| chokeartist wrote:
| > I'm waiting for an HFT firm to bore a literally straight
| hole between London and New York. Then we'll know that HFT
| has gone too far.
|
| No need. They have Starlink now (or soon).
| foota wrote:
| Think of the shipping opportunities!
| pbhjpbhj wrote:
| Debugging high ping at home once, mtr (or something similar, I
| think it was a graphical tool) showed the first 5 or 6 hops all
| in the ISP's network and taking about 2/3 of the time; going
| from the UK to the USA and then leaving their network in
| Amsterdam IIRC, only to terminate at another data-center in the
| UK. Pretty crazy.
|
| Ha, it only just struck me that could have been an NSA-type
| routing issue!?!
| hinkley wrote:
| That reminds me of the bug in Google Maps where a route
| from southern Sweden to Norway suggested driving through
| the Chunnel and taking a car ferry across from Scotland.
| WrtCdEvrydy wrote:
| The internet is supposed to be self-healing but the ISP
| routes are sometimes hella-dumb.
|
| More than once I have seen a traceroute that takes you from
| Miami, down to Argentina and then back up to Cali.
| tyingq wrote:
| See this other comment also:
| https://news.ycombinator.com/item?id=26028498
|
| Round trip vs one-way
| calpaterson wrote:
| Well spotted! I have corrected issue #1 you noticed, a very
| silly mistake, thank you!
|
| #2 is great background! But these cosine losses are, I suppose,
| not a theoretical limit but a limitation of fibre optics, so I
| won't include that (but I will link to your comment!).
| singhrac wrote:
| Just as significant for HFT is that the speed of light through
| air is significantly higher (roughly 3/2) than the speed of
| light through glass.
| ksec wrote:
| >so you're only 13% slower than the theoretical max.
|
| Yes. Not to mention those fibres aren't exactly a straight line;
| there is extra distance from the way the fibre route is laid.
| 13% is very close to the practical maximum.
|
| That is why I asked [1] if we will have hollow-core cable [2]
| soon, where we get close to the real speed of light.
|
| [1] https://news.ycombinator.com/item?id=26026002
|
| [2] https://www.laserfocusworld.com/fiber-
| optics/article/1417001...
| xiii1408 wrote:
| That sounds awesome. I would love to see lower latencies.
|
| Do you know if anyone's considering these for consumer
| Internet?
| cycomanic wrote:
| At the moment they are nowhere close to being ready for
| wide deployment or large-scale commercial drawing (try to
| find some videos of modern fibre drawing; the speed is
| absolutely insane).
|
| Obviously the HFT crowd are very interested in these, but
| they are willing to pay the premiums. Also the next area is
| probably datacentres where latency is very important as
| well, and these fibres already provide similar losses to
| multi-mode fibres at ~900 nm wavelengths.
| toast0 wrote:
| Assuming these cables cost more, or require equipment that
| costs more, I wouldn't expect this on last mile
| connections. ISPs simply don't care that much. DSL
| providers typically run connections with settings that add
| 15+ms of round trip. My DSL provider runs PPPoE for fiber
| connections where fiber is available, etc. When I was on
| AT&T fiber, it was still about 3-5 ms round trip to the
| first hop. It's been a while since I've experienced cable
| to know how they mess it up.
|
| If there's significant deployment of the cable in long
| distance networks, eventually that should trickle down to
| users. It would probably happen faster if there were
| competitive local networks, but regardless, a significant
| drop in latency across a country or ocean can be big enough
| to justify some expense.
| cycomanic wrote:
| What do you mean by cosine losses? I have done research in
| fibre optics for more than a decade and never heard that term.
| Also I have no idea what you mean by light bending around the
| fibre. A simple explanation of light propagation in multi-mode
| fibres can use geometric optics to explain how light propagates
| along the fibre by reflecting off the interface between core and
| cladding; however, this simple picture does not apply to single-
| mode fibre (which all long distance connections are) and also
| does not easily explain the group velocities in fibre.
|
| The reason that light travels slower in fibre is that the
| refractive index of silica is about 1.5 while it is 1 in vacuum
| (in reality it's a bit more complicated, it's the group index
| that counts, which is also approx. 1.5 however).
| preinheimer wrote:
| If you're looking for more detailed ping data this is his source:
| https://wondernetwork.com/pings
|
| (which apparently wasn't deemed worth linking to)
| calpaterson wrote:
| Yes! I'm just going to add a link right this minute!
| antidocker wrote:
| Interesting read. Did you measure your application latency?
|
| It's wrong to think that CA to London is the slowest. It may be
| if you view things purely physically, but fast PoPs sometimes
| make it the fastest route.
| bullen wrote:
| I make MMOs, and there are only two "large" cloud providers that
| have machines in central US: Google and IONOS.
| jaywalk wrote:
| Azure has two central US locations: one in Iowa and one in
| Texas. They are probably larger than GCP and IONOS put
| together.
| rsync wrote:
| Many years ago (2006 and 2009, respectively) I had to choose
| European and Asian locations for rsync.net storage arrays.
|
| My primary, overriding criterion in choosing locations was _what
| would be the coolest, most interesting place to visit for
| installs and maintenance_.
|
| I chose Zurich and Hong Kong.
|
| Measured results have exceeded my initial models.
| Hallucinaut wrote:
| This is one of those comments whose meaning is difficult to
| interpret... until you read the commenter's name.
|
| (Love your work)
| Analemma_ wrote:
| This might set the record as the Pinboard-iest post that didn't
| actually come from Pinboard.
| Johnny555 wrote:
| The cloud ruined that fringe benefit; I used to manage servers
| in London and got to visit them quarterly. But now I manage
| servers all over the world, and it's impossible to see them
| physically, even the ones that are hosted nearby.
|
| Though all things considered, I don't really miss sitting
| inside a cold, loud datacenter or standing in front of a tiny
| KVM monitor for hours just to load CDs to do a software
| upgrade.
| hinkley wrote:
| I annoyed my family by chiming in during The Martian about
| Donald Glover freezing his ass off in the server room:
|
| He doesn't have to be cold! If he moved over one aisle he'd
| be nice and toasty!
|
| Last time I was having to go into a server room, hot and cold
| aisles were still a new thing, so it was just cold
| everywhere.
| Kudos wrote:
| Donald, Danny is his uncle
| hinkley wrote:
| Goddamnit. I had it right and outsmarted myself.
| rsync wrote:
| "But now I manage servers all over the world, and it's
| impossible to see them physically, even the ones that are
| hosted nearby."
|
| Did you know that we have "drive up" access?
|
| If your account is in San Diego, you can just drive up to the
| datacenter and connect to our Ubiquiti LR AP. No Internet
| required.
|
| We had it deployed in Denver as well but we moved datacenters
| a few years ago and are in a tall high rise[1] with difficult
| access to ground level or roof for antennae ...
|
| [1] The "Gas and Electric" building which is the big carrier
| hotel in Denver ...
| linsomniac wrote:
| I imagined when reading "tall high rise" that you were in
| DG&E. I'm not sure where you were at or if you are looking
| for something else, but we've long been in "Fortrust", now
| "Iron Mountain" which is north of downtown. In the past
| they've been very easy to work with, including at one point
| when I had a dark fiber that ran from one of my cabinets to
| DG&E's meet-me room, where we used it to connect to
| Level-3. I know some of their customers have antennas on
| the outside of the building, though I don't believe I'm
| supposed to know why so I won't go into details. :-) They
| have plenty of parking, might be worth investigating.
|
| DG&E is a weird and wonderful building. Used to have a
| client that had a suite in there. For those unaware, it is
| an old, old building, right next to the building that
| houses the Denver main telco switch. It was built over a
| hundred years ago, and has been Denver's largest (? maybe
| that info is out of date) meet-me room for Internet
| providers. Being such an old building, you can imagine it
| has all sorts of problems retrofitting modern requirements
| for cabling and power generation/backup.
| Arelius wrote:
| Any capacity to plug in a good old-fashioned Ethernet
| cable?
| rsync wrote:
| Well, yes and no ...
|
| Of course we have switches in our racks and of course we
| could find a way to plug you in ...
|
| However, there would be a _lot_ of administrative
| overhead just to get you inside the datacenter in
| question, not to mention how _we_ would clear you and
| your equipment, etc.
|
| We regularly accept bare SATA drives as ingress and we
| support that process in reverse - most likely we would
| steer you towards that route ...
| switch007 wrote:
| I've nothing against Switzerland (it's nice) but you're the
| first person I've ever come across to suggest Zurich (or
| anywhere in Switzerland) as the coolest and most interesting
| place in Europe!
|
| I'm genuinely interested to hear more.
| pbhjpbhj wrote:
| Presumably, it's 'of the list of places with a data-center
| meeting the specs we require' which, I imagine, limits the
| range of options somewhat.
| rsync wrote:
| No, it really is mostly about skiing (see above).
|
| Amsterdam, Frankfurt and (perhaps surprisingly) Marseilles
| are the places to go if you want to optimize for routing.
|
| Equinix has datacenters, with he.net PoPs, in all three
| places. You could do a lot worse than standardizing,
| globally, on EQX datacenters and he.net connectivity ...
| Youden wrote:
| In addition to skiing, Switzerland's fantastic public transit
| gives you a huge array of options. My favourite day for
| guests is:
|
| - Train to Arth Goldau, optionally visit the zoo there
|
| - Train to Rigi Kulm (literally the top of a mountain, with
| some beautiful views)
|
| - Cable car or cogwheel railway to a ferry terminal on the
| Vierwaldstättersee
|
| - Ferry across the Vierwaldstättersee to Luzern
|
| - Train from Luzern back to Zurich HB
|
| If you have a resident buy you a day pass in advance, all of
| this costs 44 CHF ($44).
|
| There's really a lot to do though. You can do a day trip to
| pretty much anywhere in Switzerland. I've done it to Geneva a
| couple of times, also to towns out in the boonies.
| rsync wrote:
| Yes, Zurich is pretty sleepy, but you need to factor in
| driving distances to skiing ...
|
| Zurich is very quick and convenient to any of Chamonix
| (straight west), Davos/St. Moritz (south), Cervinia/Zermatt
| (southwest) or Südtirol (southeast).
| rjsw wrote:
| I wouldn't call Zurich particularly close to any of those;
| apart from Davos, you can't do a day trip to them.
| switch007 wrote:
| Ah now it makes sense. I can definitely see the appeal for
| a skier :)
| JoeAltmaier wrote:
| Not Eugene OR. That's where my startup put their servers for a
| conference/collaboration tool. I remember getting a complaint
| from a customer in Jordan, trying to conference with a customer
| in India. They had something like 1sec latency, communicating
| through Eugene and back. Twice across the planet.
| gruez wrote:
| >Here's a table of latencies from London to other world cities
| with more than 5 million people, comparing against the
| theoretical maximum speed, the speed of light:
|
| This is a flawed comparison because it looks like he's comparing
| ping latency (round trip time) with the speed of light (one way),
| so the "Slowdown factor" column is off by a factor of two.
| tyingq wrote:
| The table updated with that fix.
|
| https://jsfiddle.net/32e0qkxs/
| calpaterson wrote:
| Author here, you are absolutely right and I will correct that!
| krzrak wrote:
| 1 mb (1 millibit) HTML is not that "fat"...
| bdcravens wrote:
| By today's standards, in a nation rich in high speed
| connections, perhaps.
| stjo wrote:
| I think you missed the (a bit pedantic) joke about units
| timbit42 wrote:
| 1 MB = 1 megabyte
|
| 1 Mb = 1 megabit
|
| 1 mB = 1 millibyte
|
| 1 mb = 1 millibit
| sagolikasoppor wrote:
| In Europe a megabit is usually written mbit
| detaro wrote:
| 1 mB = 1 millibucket
___________________________________________________________________
(page generated 2021-02-04 23:00 UTC)