[HN Gopher] Everything I've seen on optimizing Postgres on ZFS
       ___________________________________________________________________
        
       Everything I've seen on optimizing Postgres on ZFS
        
       Author : EntICOnc
       Score  : 87 points
       Date   : 2021-12-22 10:45 UTC (12 hours ago)
        
 (HTM) web link (vadosware.io)
 (TXT) w3m dump (vadosware.io)
        
       | crad wrote:
       | Great writeup, lots of detail and links off to reference
       | material. I wish more posts were like this.
        
       | tmikaeld wrote:
        | Lots of what-ifs in this write-up - a bit of a tinkerer's
        | ball - would have loved more benchmarks with graphs to show
        | the differences.
       | 
       | Interesting to see a 20% performance increase due to blocksize
       | being 16k instead of 8k.
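        | 
        | For anyone who wants to reproduce that, the setup looks
        | roughly like this (the dataset name "tank/pgdata" is a
        | placeholder; recordsize only applies to files written after
        | the change):
        | 
        |     # match the article's finding: 16k records + compression
        |     zfs set recordsize=16k tank/pgdata
        |     zfs set compression=lz4 tank/pgdata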
        
         | AdrianB1 wrote:
          | Not a Postgres expert, but an MS SQL one: storage
          | performance in well-configured servers, after the warm-up,
          | is the least concern for perf tuning. In query tuning,
          | physical reads are a flag for bad configuration; logical
          | reads are the first and foremost indicator, and a 20%
          | increase in storage performance brings less than 1% in
          | total server performance. This is because if your query is
          | well written and covered by indexes, reads are performed in
          | memory and large writes are very rare.
         | 
         | Nonetheless, the article is interesting as a way to understand
         | how Postgres and ZFS go together.
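          | 
          | On the Postgres side, the logical-vs-physical split shows
          | up in EXPLAIN (ANALYZE, BUFFERS): "shared hit" blocks are
          | served from shared_buffers, while "read" blocks actually
          | touch storage (the table and values here are made up):
          | 
          |     EXPLAIN (ANALYZE, BUFFERS)
          |     SELECT * FROM orders WHERE id = 42;
          |     -- warm cache:  Buffers: shared hit=4
          |     -- cold cache:  Buffers: shared hit=1 read=3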
        
         | testesttest wrote:
          | Yeah, I have run Postgres/Citus on ZFS with petabytes of
          | data. His analysis on record size is lacking. E.g. there is
          | a trade-off between compression and record size: more
          | compression means more effective IOPS.
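          | 
          | A quick way to see where that trade-off lands on a live
          | system (the dataset name is a placeholder):
          | 
          |     # larger records give the compressor more context
          |     zfs get recordsize,compressratio tank/pgdata
          | 
          | A compressratio of 2.00x means each physical read returns
          | roughly twice the logical data, i.e. effective read IOPS
          | roughly double for scan-heavy workloads.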
        
       | mastax wrote:
       | Trying to remember what I researched 6 months ago.
       | 
       | 1. You don't want to use AIO on Linux because "the driver's AIO
       | implementation is a compatibility shim that just barely passes
       | the POSIX standard"[0] and is bad for performance, at least in
       | database-like applications that have an alternate threaded sync
       | implementation.
       | 
       | 2. You probably don't want to use O_DIRECT since it's currently
       | implemented using a shim on top of AIO[1].
       | 
       | 3. `primarycache=metadata` + fsync + sync I/O works just fine for
       | databases unless you have a very fast SSD array[2] in which case
       | you'll have to wait for real O_DIRECT support[3].
       | 
       | [0]: https://openzfs.github.io/openzfs-
       | docs/Performance%20and%20T...
       | 
       | [1]: https://github.com/openzfs/zfs/pull/7823
       | 
       | [2]: https://github.com/openzfs/zfs/issues/8381
       | 
       | [3]: https://github.com/openzfs/zfs/pull/10018
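        | 
        | For point 3, the combination looks roughly like this (the
        | dataset name is a placeholder; the Postgres settings shown
        | are the defaults):
        | 
        |     # cache only metadata in ARC; Postgres shared_buffers
        |     # does the data caching
        |     zfs set primarycache=metadata tank/pgdata
        | 
        |     # postgresql.conf -- synchronous, fsync'd commits
        |     fsync = on               # default
        |     synchronous_commit = on  # default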
        
         | R0b0t1 wrote:
          | What about Postgres passing straight through to disk? Is
          | it better to do something like that? I normally see
          | Postgres on ZFS, but I wonder.
        
       | iRobbery wrote:
        | Nice overview, bookmarked. The only thing I could say is
        | missing is some graphs :)
        
       ___________________________________________________________________
       (page generated 2021-12-22 23:02 UTC)