https://www.crunchydata.com/blog/five-tips-for-a-healthier-postgres-database-in-the-new-year

Five Tips For a Healthier Postgres Database in the New Year

Craig Kerstiens · Dec 23, 2021 · 5 min read

It's been a busy year building Crunchy Bridge and we've shipped a lot of awesome new things. Instead of doing a wrap-up of all the growth and exciting features, I wanted to take the time to teach a few more things to those who follow us. While onboarding customer after customer this year, I've noted a few key things everyone should put in place right away, either to improve the health of your database or to save yourself from a bad day.

Set a statement timeout

Long running (usually unintentionally so) queries can wreak havoc on a database. They can hold up other queries, replication, or other database processes. Most applications are designed for typical queries to run in a few milliseconds. You may have long running queries for reporting, but those are best offloaded to a read replica for reporting and analytics. To prevent those long running queries you can set a statement_timeout:

ALTER DATABASE mydatabase SET statement_timeout = '60s';

For good measure you may also want to set idle_in_transaction_session_timeout, which will cancel long running transactions that are no longer performing work.

Ensure you have query tracking

Understanding what is going on inside your database is always a good idea. Which queries are slow? Which queries are run too many times? Enter the most useful Postgres extension that exists: pg_stat_statements. pg_stat_statements records every query that runs against your database, parameterizes it, and then records a variety of metrics about it. That makes it easy to answer the questions above. If you don't have it installed already, do it today by running:

CREATE EXTENSION pg_stat_statements;

Once it's in place you can take a look at our deep dive on all the insights it can show you.

Log slow running queries

While pg_stat_statements is useful for looking at frequently run queries or queries that may always be slow, sometimes you have extreme outlier queries. With pg_stat_statements you may only review your queries every few months.
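When you do sit down for that review, a starting point might look like the sketch below, which pulls the ten most time-consuming statements. The column names assume pg_stat_statements on Postgres 13 or newer; on older versions the timing columns are total_time and mean_time rather than total_exec_time and mean_exec_time.

-- Top ten statements by total execution time (Postgres 13+ column names)
SELECT round(total_exec_time::numeric, 2) AS total_ms,
       calls,
       round(mean_exec_time::numeric, 2) AS mean_ms,
       query
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 10;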
Meanwhile your Postgres logs likely feed into some other central system that you are monitoring daily and have alerting on. Catching these slow outlier queries early can be a great canary for things you should quickly move off to a read replica for scaling, or that you should rewrite to be more efficient. You can log all queries that take over a certain time with log_min_duration_statement. For many SaaS applications, setting log_min_duration_statement to something like 1 second (1s), or even as low as 100 milliseconds (100ms), can be a big asset.

Improve your connection management

If you're using Rails, Django, Hibernate, or any other framework/ORM, you've likely set a connection pool in your application settings for your database. That connection pool is likely reducing the latency of new connections to your database, but it is also limiting the performance available from your database. On versions prior to Postgres 14, connections consume extra overhead, leaving idle connections as wasted resources. The solution is not to replace your in-app connection pooling, but rather to add a server-side connection pooler such as PgBouncer. With PgBouncer you're able to scale to tens of thousands of connections with no problem. You can take a quick look at your existing database to see if PgBouncer would help:

SELECT count(*), state FROM pg_stat_activity GROUP BY 2;

If the idle count is above 20, it's recommended to explore using PgBouncer. Adding PgBouncer is often a no-brainer for better performance without any heavy refactoring required. And to make it easy, if you're on Crunchy Bridge it's already available to you.

Find your goldilocks range for indexes

There seems to be a common lifecycle of indexes within applications. First you start off with almost none, maybe a few on primary keys. Then you start adding them, one by one, two by two, until you've got quite a few indexes covering most any query you can run. Something is slow? Throw an index at it. What you end up with is some contention on the overall throughput of your database, and a lot of indexes that have become a tangled ball of yarn over time. We've got a slew of write-ups and guides on indexes, and unfortunately there isn't a single "this is your one thing to read and you're done". But with a few key things you can be in a better place:

* A primer on indexes
* Check for unused indexes (a sketch follows at the end of this post)
* Index types in Postgres
* A starter lesson on indexes

Here's to fewer database problems in 2022

Our goal at Crunchy is to make Postgres great. One part of that is helping our customers understand their database and providing them with support and guidance for all their Postgres needs. With Crunchy Bridge we're working towards making all of the above easier, so it's one less thing you have to worry about. We've already had customers migrate and see 3-5x performance improvements over their existing cloud providers. We know if you're here you're already a fan of Postgres. In this coming year we look forward to making the developer experience of Postgres better than it's ever been.
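As a starting point for the unused-index check mentioned in the index section above, here's a minimal sketch against Postgres's built-in statistics view pg_stat_user_indexes. It lists indexes with no recorded scans since statistics were last reset; treat the results as candidates to investigate rather than an automatic drop list, since unique and constraint-backing indexes still do useful work even at zero scans.

-- Indexes with no recorded scans since stats were last reset, largest first.
-- Candidates to review, not to drop blindly.
SELECT schemaname,
       relname AS table_name,
       indexrelname AS index_name,
       pg_size_pretty(pg_relation_size(indexrelid)) AS index_size
FROM pg_stat_user_indexes
WHERE idx_scan = 0
ORDER BY pg_relation_size(indexrelid) DESC;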