[HN Gopher] Jepsen: MySQL 8.0.34
___________________________________________________________________
Jepsen: MySQL 8.0.34
Author : aphyr
Score : 205 points
Date : 2023-12-19 14:17 UTC (8 hours ago)
(HTM) web link (jepsen.io)
(TXT) w3m dump (jepsen.io)
| adontz wrote:
| Serious question. I've had this question for, like, 20 years
| now.
|
| Why would anyone start a new project with MySQL? Is it really
| superior in anything? I've been in the industry for 20+ years
| and as far as I can remember MySQL has always been the worst
| and most popular RDBMS at any given moment.
| willvarfar wrote:
| Not up to date, but a decade ago MySQL supported pluggable
| storage engines and so had some good non-default choices.
| It was possible to fit really big databases onto small boxes
| using tokudb, for example.
|
| This doesn't explain why it was so popular for starting new
| small projects, but people were also choosing mongodb at that
| time too, so ymmv :)
|
| Nowadays postgres has grown a lot of features but I believe it
| is still behind on built-in compression?
|
| Added: this old blog post of mine is still getting traffic 10
| years later; probably still valid
| https://williame.github.io/post/25080396258.html
| Keyframe wrote:
| Once upon a time it was easy-ish to scale out, and was simple
| to use and fast. There was Percona as well. These days, who
| knows? Might still be true.
| WJW wrote:
| - It works well enough.
|
| - It scales up fine for 99+% of companies, and for the ones
| that need to scale beyond that there are battle tested
| solutions like Vitess.
|
| - It is what people know already so they don't have to learn
| anything new.
|
| Same reason why people still make new websites in PHP I guess.
| It's not fancy but it works fine and won't bring any unwelcome
| surprises.
| gog wrote:
| We use it, we know it and can troubleshoot it if needed, it
| satisfies our needs and it works. What more do you need?
|
| It also works for others, Github for example.
|
| The only thing I am missing at the moment is a native UUID type
| so I don't have to write functions that convert 16-byte binary
| to a textual representation and back when examining the data
| manually on the server.
| dijit wrote:
| I strongly dislike how people point to GitHub as an example;
| it's the ultimate appeal to authority.
|
| I know facebook uses mysql, but I also know that it is a
| bastardised custom version that has known constraints and has
| limited use (no foreign keys for example).
|
| I spoke to the DBA who first deployed MySQL at Github and the
| vibe I got from him immediately was that he had doubled down
| on his prejudice: which is fine, but it's not OK to ignore
| that it can be a lot of effort to work around issues with any
| given technology.
|
| For a great example of what I mean: most people wouldn't
| choose PHP for a new project (despite it having improved
| majorly) - the appeal to authority there is to say "it works
| for Facebook" without mentioning "Hack" or the myriad of
| internal processes to avoid the warts of PHP.
|
| That a large headcount company _can use_ something does not
| make it immune from criticism.
| capableweb wrote:
| > most people wouldn't choose PHP for a new project
|
| Is this really true?
|
| I used to be a full-time PHP developer but I personally
| don't touch that language anymore. But it's still very
| popular around the world, I've seen multiple projects start
| this year use PHP, because that's the language the
| founders/most developers in the company are familiar with.
| Probably depends a lot on where in the world you're
| located.
|
| The last Stack Overflow survey had ~20% of respondents saying
| that they still use PHP in some capacity.
| sroussey wrote:
| The beauty of PHP is that it is stateless: at the end of
| the run, everything is freed. It is difficult to have
| memory leaks.
|
| Personally, I like using Typescript/Javascript on both
| front end and backend, but I don't look down at PHP
| backends at all. And it's come a long way as a language.
|
| I've been a fan of rolling your own stdlib, as the
| semantics there are old and weird, but VS Code tells you
| anyway, so who cares anymore.
| stephenr wrote:
| > most people wouldn't choose PHP for a new project
|
| Most people on HN, or most developers in the world?
|
| PHP is still _very_ popular, and plenty of people start new
| projects in it all the time.
|
| > does not make it immune from criticism
|
| Show me a technology without critics and I'll show you a
| technology zero people use.
| dijit wrote:
| I mean, outside of HN too: _new_ projects are less and
| less commonly PHP based.
|
| https://madnight.github.io/githut/#/pull_requests/2023/3
| stephenr wrote:
| I don't think that data is particularly meaningful,
| unless you're also going to claim that both JavaScript
| and Ruby are "less and less commonly" used, because
| they've both had _much_ bigger drops, according to that
| data.
|
| Pulls, Pushes, Issues and GitHub stars are terrible ways
| to gauge the popularity of a language.
| dijit wrote:
| ¯\_(ツ)_/¯
|
| There is no better measure I'm aware of, and I'll take
| any measure you supply.
|
| I would definitely also argue that Ruby is in pretty
| significant decline, the majority of Ruby projects were
| sysadminy projects from the 2010 era and most sysadminy
| types learned it as an alternative to perl. Web
| developers who learned it were mostly using Rails which
| has fallen somewhat out of favour. YMMV obviously, but I
| can understand its decline as Python has concretely
| taken over the working space and devops tools like
| Chef/Puppet are not en-vogue any longer as Go and
| Kubernetes/CNCF stuff took the lion's share.
|
| Equally: javascript (node, really) is less favoured by
| many JS devs than Typescript. If you aggregate TS and JS
| then you'll see that the ecosystem _is_ growing but many
| people who are JS folks have switched to TS.
|
| I'm taken aback by what you seem to suggest though; Would
| you seriously claim that _most new projects ARE using
| PHP_?
|
| I would happily argue that point with any data you
| supply, it's completely contrary to my experience and
| understanding of things and I have a pretty wide and
| disparate social circle in tech companies.
| stephenr wrote:
| > I'm taken aback by what you seem to suggest though;
| Would you seriously claim that most new projects ARE
| using PHP?
|
| No. I didn't say that, and we need to clarify what you
| meant originally for this to make sense.
|
| When you say "most people wouldn't start a project in
| php", there are two ways to interpret "most" in that
| sentence: "the majority of" (ie 50%+) or "nearly all of"
| (ie a much higher percentage). Both are accepted
| definitions for "most".
|
| I _assumed_ you meant the latter: ie "nearly everyone
| would not start a project in php", which is what I
| disagree with, because the former makes little sense in
| context.
|
| If you did in fact mean "a majority of people would not
| start a project in php" then of course I agree because
| that sentence can be substituted to mention any
| programming language in existence and still be true,
| because none are ever so dominant over all others in
| terms of popularity, that more than half of all new
| projects are written in said language.
| dijit wrote:
| it's a little bit hair splitty, but I see what you might
| be trying to get at.
|
| What I tried to convey is that PHP is not enjoying the
| development heyday it once had, and the number of people
| choosing PHP for a new project today (even among people
| who learned development with PHP) is decreasing. It's not
| popular.
|
| let's try to leave it as: "I believe PHP to be in decline
| for new projects as a share of total new projects divided
| by the total number of developers who are starting new
| projects".
| johnmaguire wrote:
| Ruby has had a huge decline in the past ten years, IMO.
|
| Also, note that TypeScript is tracked separately from
| Javascript, which is likely part of its decline. I
| wouldn't be surprised if JS backends are ultimately
| declining as well (perhaps Go and Python are taking its
| place?)
| stephenr wrote:
| > a native UUID type so I don't have to write functions that
| convert 16-byte binary to a textual representation and back
| when examining the data manually on the server
|
| MySQL 8 adds the `BIN_TO_UUID()` function (and the inverse,
| `UUID_TO_BIN()`), and supports the quasi-standard bit-swapping
| trick to handle time-based UUIDs in indexed columns.
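For anyone curious what that bit-swapping trick actually does, here is a rough Python sketch (the helper names are mine, not MySQL's; per the MySQL 8 docs, setting the swap flag moves the time-high and time-mid groups ahead of time-low so that sequential v1 UUIDs cluster well in an index):

```python
import uuid

def uuid_to_bin(u: uuid.UUID, swap: bool = False) -> bytes:
    """Mimic MySQL's UUID_TO_BIN(uuid, swap_flag): with the flag set,
    the time-high and time-mid groups move ahead of time-low, so
    time-ordered (v1) UUIDs sort nicely in an index."""
    b = u.bytes
    if not swap:
        return b
    # time-high (bytes 6..7) + time-mid (4..5) + time-low (0..3) + rest
    return b[6:8] + b[4:6] + b[0:4] + b[8:]

def bin_to_uuid(b: bytes, swap: bool = False) -> uuid.UUID:
    """Mimic BIN_TO_UUID(): undo the swap to recover the original."""
    if swap:
        b = b[4:8] + b[2:4] + b[0:2] + b[8:]
    return uuid.UUID(bytes=b)

u = uuid.UUID("6ccd780c-baba-1026-9564-5b8c656024db")
assert bin_to_uuid(uuid_to_bin(u, True), True) == u
```

The round trip is lossless either way; the swap only changes how the 16 bytes sort.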
| endorphine wrote:
| As opposed to what? Postgres? Isn't InnoDB most performant for
| read-heavy apps?
| dijit wrote:
| MyISAM is actually considerably faster (than InnoDB) for read
| heavy apps.
|
| InnoDB is comparatively slow, but you get much better
| transactionality (i.e. something that is much closer to ACID
| compliance). Row-level locking is faster for inserts than
| table-level locking, but table-level locking is faster for
| reads than row-level locking.
|
| Regardless: neither storage engine scales with core count
| as effectively as Postgres, due to some deadlocking on update
| that I have witnessed with MySQL (not that PostgreSQL is the
| only alternative, btw).
| Topgamer7 wrote:
| MySQL has some aggregation performance advantages over
| Postgres. Having done a recent migration of an application, two
| things that come to mind are:
|
| - It's case-insensitive by default, which can make filtering
| simpler, without having to deal with a duplicate column where
| all values are lower/upper cased.
|
| - MySQL implements loose index scan and index skip scan, which
| improve the performance of a number of join aggregation
| operations (https://wiki.postgresql.org/wiki/Loose_indexscan)
| dpratt wrote:
| | its case insensitive by default
|
| This is obviously up for debate, but subjectively I find this
| to be an absolutely _terrible_ design decision.
| hu3 wrote:
| I agree it's debatable. And not intuitive at first.
|
| With that said, in all my years and thousands of tables
| across multiple jobs, I have yet to see a single case where
| I had to change a table to be case sensitive. So I guess
| for me it is a sensible default.
| dissident_coder wrote:
| First, MySQL is the "devil you know". If you've spent a decade
| working exclusively with MySQL quirks, you're just gonna be
| more comfortable with it regardless of quality.
|
| MySQL also tends to be faster for read-heavy workloads and
| simple queries.
|
| Also replication is easier to setup with MySQL in my (outdated)
| experience, even though it's gotten better with Postgres
| recently and I haven't really been able to compare them myself
| since I'm just using Amazon RDS Postgres these days and haven't
| had the need to setup master-master replication (which is the
| pain point in postgres, and was pretty straightforward with
| mysql the last time I worked with it). Setting up read-replicas
| with postgres is still ezpz.
|
| Postgres-specific features tend to be much better than MySQL
| ones; Postgresql JSON(b) support blows MySQL out of the water.
| And as far as I can remember MySQL still doesn't support
| partial/expression indexes, which is a deal breaker for me.
| Especially in my json heavy workloads where being able to index
| specific json paths is critical for performance. If you don't
| need that kind of stuff, you might be fine - but I would hate
| to hit a wall in my application where I want to reach for it
| and it's not there.
|
| MySQL used to be the only game in town, so it was the "default"
| choice - but IMO postgres has surpassed it.
| darrenf wrote:
| > And as far as I can remember MySQL still doesn't support
| partial/expression indexes, which is a deal breaker for me.
| Especially in my json heavy workloads where being able to
| index specific json paths is critical for performance.
|
| Do generated column indexes meet this need?
|     CREATE TABLE json_with_id_index (
|       json_data JSON,
|       id INT GENERATED ALWAYS AS (json_data->"$.id"),
|       INDEX id (id)
|     )
|
| https://dev.mysql.com/doc/refman/8.0/en/create-table-
| seconda...
| simcop2387 wrote:
| Looks like that would work as an expression index, though I
| can't tell at a glance whether it requires the column to also
| be stored, which would increase storage size (but probably
| isn't a huge problem if it does). It likely won't work for the
| partial-index case, though, where you only want to keep the
| non-null values in the index to reduce its size (and speed up
| null/not-null checks).
| evanelias wrote:
| MySQL supports indexing expressions directly, which is
| effectively the same as indexing an invisible virtual
| column: https://dev.mysql.com/doc/refman/8.0/en/create-
| index.html#cr...
|
| MySQL supports "multi-valued indexes" over JSON data,
| which offer a non-obvious solution for partial indexes,
| since "index records are not added for empty arrays":
| https://dev.mysql.com/doc/refman/8.0/en/create-
| index.html#cr...
|
| MariaDB doesn't support any of this directly yet though:
| https://www.skeema.io/blog/2023/05/10/mysql-vs-mariadb-
| schem...
| dissident_coder wrote:
| I suppose this is a decent workaround for certain things
| (i've used it in sqlite before), the main kind of index i'm
| using with postgres jsonb looks something like this:
|     create index on my_table ((document ->> 'some_key'))
|       where (document ? 'some_key'
|              AND document ->> 'some_key' IS NOT NULL);
|
| you can use generated columns to get around the first part
| of the index, but you can't have the WHERE part of the
| index in mysql as far as I am aware (but it has been a very
| long time since I've worked with it so I'm prepared to be
| wrong).
| Freeaqingme wrote:
| Also from an operations point of view it's quite easy to
| manage. I'm not that experienced with Postgresql, but my
| understanding is that until recently you had to vacuum it every
| once in a while. Besides, it's also using some kind of
| threading model that most people handle by putting a proxy in
| front of Postgres to keep connections open.
|
| Also, MySQL has had native replication for a very long time,
| including Galera, which does two-step commit in a multi-master
| cluster. Although Postgres is making some headway in this
| regard, my impression is that this is quite recent and not
| yet fully up to par with MySQL.
| stonemetal12 wrote:
| >my understanding is that until recently you had to vacuum it
| every once in a while.
|
| You still do. The autovacuum daemon was added in 2008-ish, so
| it isn't too bad. Just more complexity to manage.
|
| > it's also using some kind of threading model
|
| It does a process per connection just like web servers did
| back in the day when C10k was a thing. A lot of the buffers
| are configured per connection so you can get bigger buffers
| if you keep the number of connections small.
| dgellow wrote:
| I think what you're referencing is known as Transaction ID
| Wraparound; Postgres still needs to be vacuumed to avoid that
| problem: https://www.crunchydata.com/blog/managing-
| transaction-id-wra...
| Thaxll wrote:
| Your memory is failing you: maybe you don't remember that, not
| too long ago, PG did not have any replication built in.
| mrkeen wrote:
| Not too long ago MySQL didn't have transactions.
|
| Edit: I would just love a comment from the person who thinks
| 'missing feature in the past' is wrong, unfair or irrelevant
| as a reply to a 'missing feature in the past' comment.
| evanelias wrote:
| There's a non-trivial nine-year difference between the
| things you're describing: the InnoDB storage engine was
| released in 2001. Postgres gained built-in replication in
| 2010.
|
| That said, personally I wouldn't describe either of these
| as "not too long ago". Technology rapidly changes and many
| things from either 2001 or 2010 are considered rather old.
| adontz wrote:
| Just want to add that comparing to PostgreSQL is a very modern
| view. There were other databases, not popular today, but quite
| popular back in the day. To name a few: DB2, InterBase,
| Firebird, Paradox, Access, SQL Server Compact. MySQL was a
| really shitty database in the early 2000s, yet still THE most
| popular.
| viraptor wrote:
| Others have mentioned a few reasons already, but compared to
| Postgres (because typically that's the other option) I'll add
| index selection. Even with the available plugins and stats and
| everything, I don't want to spend time in an emergency
| situation trying to indirectly convince Postgres to use a
| different index. "A query takes 20x the time and you can't
| force it back immediately" is a really bad failure mode.
| beltsazar wrote:
| I understand why the default transaction isolation level of most
| DBMS is weaker than serializable (it's for benchmark purposes),
| but I'd argue the best default is serializable. Most DBMS users
| don't even know there are many consistency models [1]. They
| expect transactions to "just work," i.e. to appear to have
| occurred in some total order, which is the definition of
| serializability [2]. And those who know when to use a weaker
| isolation level for better performance can always set it per
| transaction [3].
|
| ---
|
| [1] https://jepsen.io/consistency
|
| [2] https://jepsen.io/consistency/models/serializable
|
| [3] https://www.postgresql.org/docs/16/sql-set-transaction.html
| Thaxll wrote:
| It has a very high cost in terms of performance.
| eatonphil wrote:
| Is there any measurement of the impact you could point at?
|
| For example, I imagine that it depends on the workload. If
| the workload isn't contentious SERIALIZABLE might not make a
| big difference? Then again if the workload isn't contentious
| maybe it doesn't matter?
|
| Either way, I'd love to see numbers. Not because I don't
| believe anyone but I'm just curious what ballpark we're
| talking about.
|
| Edit: Also, SQLite and Cockroach only allow SERIALIZABLE
| transactions so the unviability of SERIALIZABLE seems
| questionable.
| pkulak wrote:
| And serializable transactions fail all the time. You have to
| always code so that re-running them is trivial and expected.
| 99% of the queries I write are fine at the lowest isolation
| level, and that saves me and the DB lots of time.
| RedCrowbar wrote:
| If your database client is any good, it should do the
| retries for you. EdgeDB uses serializable isolation (as the
| only option), and all our bindings are coded to retry on
| transaction serialization errors by default.
|
| Transaction deadlocks are another common issue that is
| triggered by concurrent transactions even at lower levels
| and should be retried also.
| bcrosby95 wrote:
| I'm curious how you can handle transaction deadlocks at a
| low level - there might have been a lot of non-SQL
| processing code that determined those values and blindly
| re-playing the transactions could result in incorrect
| data.
|
| We handle this by passing our transaction a function to
| run - it will retry a few times if it gets a deadlock.
| But I don't consider this to be very low level.
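That pass-a-function retry pattern can be sketched like this (all names here are made up; `TransientTxnError` stands in for whatever deadlock or serialization-failure error your driver raises):

```python
import random
import time

class TransientTxnError(Exception):
    """Stand-in for a driver's deadlock / serialization-failure error."""

def run_in_txn(fn, attempts=5, base_delay=0.01):
    """Run fn as one transaction, retrying on transient conflicts.
    fn must be safe to re-execute from scratch: every read and every
    computed value has to happen inside it, never be captured outside."""
    for attempt in range(attempts):
        try:
            return fn()  # real code would open a txn, call fn, commit
        except TransientTxnError:
            if attempt == attempts - 1:
                raise
            # exponential backoff with jitter to de-synchronize retriers
            time.sleep(base_delay * (2 ** attempt) * random.random())

# usage: the whole unit of work is re-run on each conflict
calls = {"n": 0}
def transfer():
    calls["n"] += 1
    if calls["n"] < 3:
        raise TransientTxnError  # simulate two deadlock aborts
    return "committed"

assert run_in_txn(transfer) == "committed"
assert calls["n"] == 3
```

Passing the whole unit of work as a function is what makes "blind replay" safe: any values derived from earlier reads are recomputed on every attempt.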
| ahoka wrote:
| "We handle this by passing our transaction a function to
| run - it will retry a few times if it gets a deadlock.
| But I don't consider this to be very low level."
|
| Oh neat, I was just thinking about something like this
| the other day.
| baq wrote:
| You should anyway...
|
| If you don't, you sooner or later get presented with
| _unexpected_ 'transaction aborted due to deadlock' errors
| in prod. Better have someone who's already been through
| that then, at the very least.
| beltsazar wrote:
| The performance hit is likely worth it--considering that the
| alternative is inconsistent data. In most cases correctness
| is more, if not much more, important than performance. As I
| said, those who know what they're doing can always use a
| weaker isolation in cases where performance is more important
| than correctness.
| gigatexal wrote:
| Snapshot is good enough. And then do all your related table
| changes in a single atomic txn and you're good
| aphyr wrote:
| That depends! As the article discusses, snapshot allows
| anomalies--like write skew--which might violate application
| invariants. Depends on your workload.
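A concrete toy model of the write-skew anomaly mentioned here (the classic on-call-doctors example, not taken from the article): two transactions each validate an invariant against the same snapshot, write to different rows, and both commit without conflicting.

```python
# Two "transactions" each read from the same consistent snapshot,
# check the invariant (at least one doctor stays on call), then write.
db = {"alice": "on-call", "bob": "on-call"}

def request_leave(snapshot, who):
    # invariant check against the *snapshot*, not the live db
    others_on = [d for d, s in snapshot.items()
                 if d != who and s == "on-call"]
    return who if others_on else None

snap = dict(db)                      # both txns see the same snapshot
for doctor in ("alice", "bob"):
    granted = request_leave(snap, doctor)
    if granted:
        db[granted] = "off"          # both writes commit: no conflict,
                                     # since they touch different rows

assert db == {"alice": "off", "bob": "off"}  # invariant violated
```

Serializable isolation would abort one of the two, because no serial order of these transactions can produce this final state.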
| wolfgang42 wrote:
| _> can always set it per transaction [3]_
|
| I just found out yesterday[1] that in Postgres "serializable"
| transactions can still have anomalies if other non-serializable
| transactions are running in parallel! So check your DBMS _very_
| carefully before trying this, I guess.
|
| [1] https://news.ycombinator.com/item?id=38685267
| continuational wrote:
| As soon as you let users edit data, you can't really benefit
| from serializable transactions.
|
| Partly because you don't really want arbitrary long
| transactions that span however long the user wants to be
| editing for.
|
| Partly because it's rather rude to roll back all the users
| edits with a "deadlock detected, please reload the form and
| fill it out again".
| aphyr wrote:
| I, uh, do want to point out that the alternative here is not
| "everything is OK". If you _don't_ abort when, say, two
| users update the same row concurrently, then you might cause
| (e.g.) silent data loss for one of them. Or you might end up
| with a record in an illegal state--say, one with two
| different fields that should never be in their particular
| states together. You have to look at your transaction
| structure, intended application invariants, and measured
| frequency of concurrency to figure out if using a relaxed
| isolation level is actually safe or not.
| mavelikara wrote:
| IME in this model, the middleware-DB transactions were set to
| be serializable, but the web-user edits were done under an
| optimistic concurrency model, using versions or timestamps.
| You'd run into edit conflicts, which for many applications is
| a reasonable compromise.
|
| The DB transactions would need to be kept open for user edits
| only if one were using a pessimistic model.
|
| Am I thinking about this correctly?
| bcrosby95 wrote:
| Most systems I've worked on would just let users completely
| overwrite each other and would neither hold open a
| transaction nor use versioning. For those that didn't
| behave this way, I think versioning is the sanest option
| (as long as requirements permit it).
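The versioning approach can be sketched as a compare-and-set on a version column (a toy in-memory model; in SQL this would be the familiar `UPDATE ... WHERE id = ? AND version = ?` guard, with the affected-row count telling you whether the edit won):

```python
# Each row carries a version; an update applies only if the version
# the client read is still current, otherwise the edit is rejected.
table = {1: {"body": "draft", "version": 1}}

def update(row_id, new_body, expected_version):
    row = table[row_id]
    if row["version"] != expected_version:
        return False  # conflict: ask the user to re-read and merge
    row["body"] = new_body
    row["version"] += 1
    return True

# two users both loaded version 1; only the first edit wins
assert update(1, "user A's edit", 1) is True
assert update(1, "user B's edit", 1) is False
assert table[1]["body"] == "user A's edit"
```

No transaction stays open while the user edits; the conflict is detected only at submit time, which is the "reasonable compromise" described above.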
| bob1029 wrote:
| Serializable by default doesn't seem feasible when you really
| dive into the concept.
|
| "Serializable" is a system property that describes how two or
| more transactions will take effect. In this context, I would
| define "transaction" as a business activity with a clear
| beginning, middle & end and exhibiting specific, predictable
| data dependencies. Without any knowledge of the transaction
| type(s) and their semantics per the business domain, it would
| be impossible to make assumptions about logical ordering of
| anything.
|
| SQLite is the closest thing to what you are asking for. All
| writes are serialized by default, but this is probably not what
| you really want. We can ensure multiple concurrent connections
| don't corrupt the data files, but we aren't achieving anything
| in business terms with this.
| dastbe wrote:
| "Transaction", in the way everyone else here is using it,
| refers to the primitive provided by the database, which gives
| certain guarantees (depending on isolation level) wrt reads
| and writes.
|
| Even in the context of "transaction" the business activity,
| they are an extremely useful tool for building up exactly the
| kind of sequencing and dependency guarantees you refer to.
| beltsazar wrote:
| > Serializable by default doesn't seem feasible when you
| really dive into the concept.
|
| I was expecting you'd argue for a weaker isolation level than
| serializable, but then you said:
|
| > Without any knowledge of the transaction type(s) and their
| semantics per the business domain, it would be impossible to
| make assumptions about logical ordering of anything.
|
| Serializable isolation level only guarantees _some_ total
| order of transactions, and yes, it doesn't guarantee that
| the order will be exactly what you want (e.g. first come,
| first serve). So, are you now suggesting strict
| serializability [1], then?
|
| [1] https://jepsen.io/consistency/models/strict-serializable
| sgift wrote:
| I agree, and for the same reason that people should only use
| relaxed consistency models for atomics in their code if they
| _really_ know what they are doing, there _really_ is a need and
| they have appropriate testing. It's good that the option is
| available, but the headaches you can get yourself into if you
| don't know exactly what you are doing are real. Good luck
| debugging these types of Heisenbugs. You'll need it.
| _a_a_a_ wrote:
| I kind of agree, but given the amount of locking[1] that would
| entail, the drop-off in performance would have those self-same
| users squalling about how slow it was. And then they'd rewrite
| everything with (NOLOCK) without understanding that the
| implications are even worse ("hey guys, I put this hint
| everywhere and things run really fast now!"). I know this
| because I've seen it.
|
| IIRC Jim Gray said that Repeatable Read is 99% of Serialisable
| anyway; all Serialisable adds is protection against phantoms.
|
| [1] speaking for MS SQL, which is mainly locking based.
| klysm wrote:
| In my experience, most developers don't even consider isolation
| level in the first place and just take whatever the default is.
| Any race conditions are met with an 'oh that's weird', and then
| they move on.
| beltsazar wrote:
| Exactly! That's why I just commented [1] that the default
| isolation level should be serializable.
|
| [1] https://news.ycombinator.com/item?id=38696421
| klysm wrote:
| Completely agree, I've made that exact same argument on HN
| before.
| nordsieck wrote:
| I wish I could argue with you, but the highly successful early
| years of MongoDB prove your point nicely.
| ahoka wrote:
| In my experience almost no developer considers consistency at
| all.
| amluto wrote:
| How does append (a) map onto actual SQL operations on the given
| tables? Are the TEXT fields being used as lists?
|
| Also... I've seen issues in MySQL's repeatable read mode where
| a single SELECT, selecting a single row, returned impossible
| results. I think it was:
|     SELECT min(value), max(value) FROM table WHERE id = 1;
|
| where id is a primary key. I got two _different_ values for min
| and max. That was a fun one.
| aphyr wrote:
| Yup! See https://jepsen.io/analyses/mysql-8.0.34#list-append,
| which also has a link to the code: https://github.com/jepsen-
| io/mysql/blob/4c239cb5c66a7f1a55fa...
|
| This isn't CONCAT-specific, BTW--we just use CONCAT because it
| allows us to infer anomalies in linear, rather than exponential
| time. Same kinds of behaviors manifest with plain old
| read/write registers.
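As a rough illustration of the append encoding (a sketch using SQLite's `||` in place of MySQL's CONCAT; the table and column names here are invented, not Jepsen's actual schema):

```python
import sqlite3

# A list is stored as a TEXT column; append is string concatenation.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE lists (id INTEGER PRIMARY KEY, val TEXT)")
con.execute("INSERT INTO lists (id, val) VALUES (1, '')")

def append(key, element):
    # MySQL equivalent: SET val = CONCAT(val, ?, ',')
    con.execute(
        "UPDATE lists SET val = val || ? || ',' WHERE id = ?",
        (str(element), key),
    )

def read(key):
    (val,) = con.execute(
        "SELECT val FROM lists WHERE id = ?", (key,)).fetchone()
    return [int(x) for x in val.split(",") if x]

for e in (1, 2, 3):
    append(1, e)
assert read(1) == [1, 2, 3]
```

Because appends preserve order, the recovered list reveals which writes each transaction observed, which is what lets the checker infer anomalies efficiently.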
| baq wrote:
| how about that, I planned to do some work today.
|
| aphyr, thank you. call me maybe and later jepsen.io have been
| consistently some of the best content I've ever read on the
| internet.
| aphyr wrote:
| Aw shucks, thanks :-)
| don_neufeld wrote:
| Seriously, thanks for what you do!
|
| I've been reading your stuff for almost 10 years and doing
| work at this level of rigor makes the world a better place.
| shepherdjerred wrote:
| I don't use Jepsen, but I love your blog! Your "x the
| Technical Interview" series are my absolute favorite.
| PeterCorless wrote:
| How much of what is contained within this analysis of MySQL is
| going to be the same-same for MariaDB, given that it uses InnoDB
| as the default storage engine?
| dijit wrote:
| I think they've diverged long enough that we can consider
| them separate products at this point. (14 years!)
|
| You wouldn't assume that Plex and XBMC shared much
| compatibility despite forking around the same time.
| PeterCorless wrote:
| I guess I was wondering how much of this behavior is endemic
| to MySQL, per se, and how much was endemic to InnoDB.
| mdaniel wrote:
| I don't want to speak out of school, because I've never tried
| to boot up Jepsen for anything, but _in theory_ the purpose of
| publishing the code for the experiment is that one can
| replicate its findings in one's own environment to see if it
| affects you. Yes, I'd guess that the custom FUSE setup will be
| a PITA to configure, but my experience with the AWS RDS setups
| for kicking the tires on MariaDB is that they (ahem) just cost
| money, versus costing huge amounts of glucose.
| pella wrote:
| FOSDEM 2024:
|
| Isolation Levels and MVCC in SQL Databases: A Technical
| Comparative Study
|
| // Oracle, MySQL, SQL Server, PostgreSQL, and YugabyteDB.
|
| https://fosdem.org/2024/schedule/event/fosdem-2024-3600-isol...
| dasmoop wrote:
| The RDS replication that stopped working after 5 minutes of
| messing with it, with no alert or failed health check, is a
| bit worrying...
| mdaniel wrote:
| Obviously the devil's in the details, and it's almost
| impossible to troubleshoot from a screencast, but my experience
| has been that AWS is generally pretty liberal with the
| CloudWatch Metrics, but does place the onus upon the user to
| dig through the 150++ of them to read the docs to find the one
| that matters. They also claim <https://docs.aws.amazon.com/Amaz
| onRDS/latest/UserGuide/USER_...> there's a console table cell
| for the replication status, but my experience with the console
| is that often one must opt-in to having that column _shown_
| which is suboptimal :-(
|
| That "shared responsibility model," they lean on it heavily
| PeterZaitsev wrote:
| Fascinating read. I think it is a great illustration of how
| many "practically working systems" can be built on a foundation
| exhibiting so many consistency artifacts.
___________________________________________________________________
(page generated 2023-12-19 23:02 UTC)