Source: https://github.com/fatkodima/online_migrations — Catch unsafe PostgreSQL migrations in development and run them easier in production (code helpers for table/column renaming, changing column type, adding columns with default, background migrations, etc). Docs: rubydoc.info/github/fatkodima/online_migrations/ — MIT License.
# OnlineMigrations

Catch unsafe PostgreSQL migrations in development and run them easier in production.

- Detects potentially dangerous operations
- Prevents them from running by default
- Provides instructions and helpers on safer ways to do what you want

**Note**: You probably don't need this gem for smaller projects, as operations that are unsafe at scale can be perfectly safe on smaller, low-traffic tables.

## Cool, but there is a strong_migrations already

See the comparison to strong_migrations.

## Requirements

- Ruby 2.1+
- Rails 4.2+
- PostgreSQL 9.6+

## Installation

Add this line to your application's Gemfile:

```ruby
gem 'online_migrations'
```

And then run:

```sh
$ bundle install
$ bin/rails generate online_migrations:install
```

**Note**: If you do not plan to use the background migrations feature, you can delete the generated migration and regenerate it later, if needed.

## Motivation

Writing a safe migration can be daunting. Numerous articles have been written on the topic and a few gems try to address the problem. Even for someone with a good command of PostgreSQL, remembering all the subtleties of explicit locking can be problematic.

Online Migrations was created to catch dangerous operations and provide guidance and code helpers to run them safely.

An operation is classified as dangerous if it either:

- Blocks reads or writes for more than a few seconds (after a lock is acquired)
- Has a good chance of causing application errors

## Example

Consider the following migration:

```ruby
class AddAdminToUsers < ActiveRecord::Migration[7.0]
  def change
    add_column :users, :admin, :boolean, default: false, null: false
  end
end
```

If the users table is large, running this migration on a live PostgreSQL < 11 database will likely cause downtime.
A safer approach is to run something like the following:

```ruby
class AddAdminToUsers < ActiveRecord::Migration[7.0]
  # Do not wrap the migration in a transaction so that locks are held for a shorter time.
  disable_ddl_transaction!

  def change
    # Lower PostgreSQL's lock timeout to avoid statement queueing.
    # The lock_timeout duration is customizable.
    execute "SET lock_timeout TO '5s'"

    # Add the column without the default value and the NOT NULL constraint.
    add_column :users, :admin, :boolean

    # Set the column's default value.
    change_column_default :users, :admin, false

    # Backfill the column in batches.
    User.in_batches.update_all(admin: false)

    # Add the NOT NULL constraint. Beforehand, set a short statement timeout so that
    # Postgres does not spend too much time performing the full table scan to verify
    # the column contains no NULLs.
    execute "SET statement_timeout TO '5s'"
    change_column_null :users, :admin, false
  end
end
```

When you actually run the original migration, you will get an error message:

```
[online_migrations] Dangerous operation detected

Adding a column with a non-null default blocks reads and writes while the entire table is rewritten.

A safer approach is to:
1. add the column without a default value
2. change the column default
3. backfill existing rows with the new value
4. add the NOT NULL constraint

add_column_with_default takes care of all these steps:

class AddAdminToUsers < ActiveRecord::Migration[7.0]
  disable_ddl_transaction!

  def change
    add_column_with_default :users, :admin, :boolean, default: false, null: false
  end
end
```

The message suggests how to safely implement the migration, essentially running steps similar to those described in the previous example.
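The batched backfill step above (`User.in_batches.update_all`) can be sketched in plain Ruby, independent of Rails. The names here (`each_id_batch`, the generated SQL) are illustrative, not the gem's internals — the point is that each UPDATE touches only a bounded range of primary keys, so row locks are held briefly:

```ruby
# Illustrative sketch (not the gem's code): split primary keys into
# fixed-size ranges and build one bounded UPDATE per range, so no
# single statement locks the whole table at once.
def each_id_batch(ids, batch_size)
  ids.each_slice(batch_size) { |slice| yield slice.first, slice.last }
end

statements = []
each_id_batch((1..2500).to_a, 1000) do |from_id, to_id|
  statements << "UPDATE users SET admin = FALSE WHERE id BETWEEN #{from_id} AND #{to_id}"
  # a real helper would also sleep here between batches (throttling)
end
statements.size # => 3 bounded statements instead of one table-wide UPDATE
```
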
## Checks

Potentially dangerous operations:

- removing a column
- adding a column with a default value
- backfilling data
- changing the type of a column
- renaming a column
- renaming a table
- creating a table with the force option
- adding a check constraint
- setting NOT NULL on an existing column
- executing SQL directly
- adding an index non-concurrently
- removing an index non-concurrently
- replacing an index
- adding a reference
- adding a foreign key
- adding a json column
- using primary key with short integer type
- hash indexes
- adding multiple foreign keys
- removing a table with multiple foreign keys
- mismatched reference column types
- adding a single table inheritance column

You can also add custom checks or disable specific checks.

### Removing a column

**Bad**

ActiveRecord caches database columns at runtime, so if you drop a column, it can cause exceptions until your app reboots.

```ruby
class RemoveNameFromUsers < ActiveRecord::Migration[7.0]
  def change
    remove_column :users, :name
  end
end
```

**Good**

1. Ignore the column:

   ```ruby
   # For ActiveRecord 5+
   class User < ApplicationRecord
     self.ignored_columns = ["name"]
   end

   # For ActiveRecord < 5
   class User < ActiveRecord::Base
     def self.columns
       super.reject { |c| c.name == "name" }
     end
   end
   ```

2. Deploy
3. Wrap the column removal in a safety_assured block:

   ```ruby
   class RemoveNameFromUsers < ActiveRecord::Migration[7.0]
     def change
       safety_assured { remove_column :users, :name }
     end
   end
   ```

4. Remove the column ignoring from the User model
5. Deploy

### Adding a column with a default value

**Bad**

In earlier versions of PostgreSQL, adding a column with a non-null default value to an existing table blocks reads and writes while the entire table is rewritten.

```ruby
class AddAdminToUsers < ActiveRecord::Migration[7.0]
  def change
    add_column :users, :admin, :boolean, default: false
  end
end
```

In PostgreSQL 11+ this no longer requires a table rewrite and is safe. Volatile expressions, however, such as random(), still result in table rewrites.

**Good**

A safer approach is to:
1. add the column without a default value
2. change the column default
3. backfill existing rows with the new value

The add_column_with_default helper takes care of all these steps:

```ruby
class AddAdminToUsers < ActiveRecord::Migration[7.0]
  disable_ddl_transaction!

  def change
    add_column_with_default :users, :admin, :boolean, default: false
  end
end
```

**Note**: If you forget disable_ddl_transaction!, the migration will fail.

### Backfilling data

**Bad**

ActiveRecord wraps each migration in a transaction, and backfilling in the same transaction that alters a table keeps the table locked for the duration of the backfill.

```ruby
class AddAdminToUsers < ActiveRecord::Migration[7.0]
  def change
    add_column :users, :admin, :boolean
    User.update_all(admin: false)
  end
end
```

Also, running a single query to update data can cause issues for large tables.

**Good**

There are three keys to backfilling safely: batching, throttling, and running it outside a transaction. Use the update_column_in_batches helper in a separate migration with disable_ddl_transaction!.

```ruby
class AddAdminToUsers < ActiveRecord::Migration[7.0]
  def change
    add_column :users, :admin, :boolean
  end
end

class BackfillUsersAdminColumn < ActiveRecord::Migration[7.0]
  disable_ddl_transaction!

  def up
    update_column_in_batches(:users, :admin, false, pause_ms: 10)
  end
end
```

**Note**: If you forget disable_ddl_transaction!, the migration will fail.

**Note**: You may consider background migrations to run data changes on large tables.

### Changing the type of a column

**Bad**

Changing the type of an existing column blocks reads and writes while the entire table is rewritten.
```ruby
class ChangeFilesSizeType < ActiveRecord::Migration[7.0]
  def change
    change_column :files, :size, :bigint
  end
end
```

A few changes don't require a table rewrite (and are safe) in PostgreSQL:

| Type | Safe Changes |
| --- | --- |
| cidr | Changing to inet |
| citext | Changing to text if not indexed, changing to string with no :limit if not indexed |
| datetime | Increasing or removing :precision, changing to timestamptz when session time zone is UTC in PostgreSQL 12+ |
| decimal | Increasing :precision at same :scale, removing :precision and :scale |
| interval | Increasing or removing :precision |
| numeric | Increasing :precision at same :scale, removing :precision and :scale |
| string | Increasing or removing :limit, changing to text, changing to citext if not indexed |
| text | Changing to string with no :limit, changing to citext if not indexed |
| timestamptz | Increasing or removing :limit, changing to datetime when session time zone is UTC in PostgreSQL 12+ |
| xml | Changing to text, changing to string with no :limit |

**Good**

**Note**: The following steps can also be used to change the primary key's type (e.g., from integer to bigint).

A safer approach can be accomplished in several steps:

1. Create a new column and keep the columns' data in sync:

   ```ruby
   class InitializeChangeFilesSizeType < ActiveRecord::Migration[7.0]
     def change
       initialize_column_type_change :files, :size, :bigint
     end
   end
   ```

2. Backfill data from the old column to the new column:

   ```ruby
   class BackfillChangeFilesSizeType < ActiveRecord::Migration[7.0]
     disable_ddl_transaction!

     def up
       backfill_column_for_type_change :files, :size
     end

     def down
       # no op
     end
   end
   ```

3. Copy indexes, foreign keys, check constraints, and the NOT NULL constraint, and swap the new column in place:

   ```ruby
   class FinalizeChangeFilesSizeType < ActiveRecord::Migration[7.0]
     disable_ddl_transaction!

     def change
       finalize_column_type_change :files, :size
     end
   end
   ```

4. Deploy
5. Finally, if everything is working as expected, remove the copy trigger and the old column:

   ```ruby
   class CleanupChangeFilesSizeType < ActiveRecord::Migration[7.0]
     def up
       cleanup_change_column_type_concurrently :files, :size
     end

     def down
       initialize_column_type_change :files, :size, :integer
     end
   end
   ```

6. Deploy

### Renaming a column

**Bad**

Renaming a column that's in use will cause errors in your application.

```ruby
class RenameUsersNameToFirstName < ActiveRecord::Migration[7.0]
  def change
    rename_column :users, :name, :first_name
  end
end
```

**Good**

The "classic" approach suggests creating a new column and copying data/indexes/etc to it from the old column. This can be costly for very large tables. There is a trick that helps to avoid such heavy operations.

The technique is built on top of database views, using the following steps:

1. Rename the table to some temporary name
2. Create a VIEW using the old table name, with the addition of the new column as an alias of the old one
3. Add a workaround for ActiveRecord's schema cache

For the previous example, to rename the name column to first_name on the users table, we can run:

```sql
BEGIN;
ALTER TABLE users RENAME TO users_column_rename;
CREATE VIEW users AS SELECT *, name AS first_name FROM users_column_rename;
COMMIT;
```

As database views do not expose the underlying table schema (default values, not null constraints, indexes, etc), further steps are needed to update the application to use the new table name. ActiveRecord heavily relies on this data, for example, to initialize new models.

To work around this limitation, we need to tell ActiveRecord to acquire this information from the original table using the new table name.

Online Migrations provides several helpers to implement column renaming:

1. Instruct Rails that you are going to rename a column:

   ```ruby
   OnlineMigrations.config.column_renames = {
     "users" => {
       "name" => "first_name"
     }
   }
   ```

2. Deploy
3. Create a VIEW with the aliased column:

   ```ruby
   class InitializeRenameUsersNameToFirstName < ActiveRecord::Migration[7.0]
     def change
       initialize_column_rename :users, :name, :first_name
     end
   end
   ```

4. Replace usages of the old column with the new column in the codebase
5. Deploy
6. Remove the column rename config from step 1
7. Remove the VIEW created in step 3:

   ```ruby
   class FinalizeRenameUsersNameToFirstName < ActiveRecord::Migration[7.0]
     def change
       finalize_column_rename :users, :name, :first_name
     end
   end
   ```

8. Deploy

### Renaming a table

**Bad**

Renaming a table that's in use will cause errors in your application.

```ruby
class RenameClientsToUsers < ActiveRecord::Migration[7.0]
  def change
    rename_table :clients, :users
  end
end
```

**Good**

The "classic" approach suggests creating a new table and copying data/indexes/etc to it from the old table. This can be costly for very large tables. There is a trick that helps to avoid such heavy operations.

The technique is built on top of database views, using the following steps:

1. Rename the database table
2. Create a VIEW using the old table name, pointing to the new table name
3. Add a workaround for ActiveRecord's schema cache

For the previous example, to rename the clients table to users, we can run:

```sql
BEGIN;
ALTER TABLE clients RENAME TO users;
CREATE VIEW clients AS SELECT * FROM users;
COMMIT;
```

As database views do not expose the underlying table schema (default values, not null constraints, indexes, etc), further steps are needed to update the application to use the new table name. ActiveRecord heavily relies on this data, for example, to initialize new models.

To work around this limitation, we need to tell ActiveRecord to acquire this information from the original table using the new table name.

Online Migrations provides several helpers to implement table renaming:

1. Instruct Rails that you are going to rename a table:

   ```ruby
   OnlineMigrations.config.table_renames = {
     "clients" => "users"
   }
   ```

2. Deploy
3. Create a VIEW:

   ```ruby
   class InitializeRenameClientsToUsers < ActiveRecord::Migration[7.0]
     def change
       initialize_table_rename :clients, :users
     end
   end
   ```

4. Replace usages of the old table with the new table in the codebase
5. Remove the table rename config from step 1
6. Deploy
7. Remove the VIEW created in step 3:

   ```ruby
   class FinalizeRenameClientsToUsers < ActiveRecord::Migration[7.0]
     def change
       finalize_table_rename :clients, :users
     end
   end
   ```

8. Deploy

### Creating a table with the force option

**Bad**

The force option can drop an existing table.

```ruby
class CreateUsers < ActiveRecord::Migration[7.0]
  def change
    create_table :users, force: true do |t|
      # ...
    end
  end
end
```

**Good**

Create tables without the force option.

```ruby
class CreateUsers < ActiveRecord::Migration[7.0]
  def change
    create_table :users do |t|
      # ...
    end
  end
end
```

If you intend to drop an existing table, run drop_table first.

### Adding a check constraint

**Bad**

Adding a check constraint blocks reads and writes while every row is checked.

```ruby
class AddCheckConstraint < ActiveRecord::Migration[7.0]
  def change
    add_check_constraint :users, "char_length(name) >= 1", name: "name_check"
  end
end
```

**Good**

Add the check constraint without validating existing rows, and then validate them in a separate transaction:

```ruby
class AddCheckConstraint < ActiveRecord::Migration[7.0]
  disable_ddl_transaction!

  def change
    add_check_constraint :users, "char_length(name) >= 1", name: "name_check", validate: false
    validate_check_constraint :users, name: "name_check"
  end
end
```

**Note**: If you forget disable_ddl_transaction!, the migration will fail.

### Setting NOT NULL on an existing column

**Bad**

Setting NOT NULL on an existing column blocks reads and writes while every row is checked.

```ruby
class ChangeUsersNameNull < ActiveRecord::Migration[7.0]
  def change
    change_column_null :users, :name, false
  end
end
```

**Good**

Instead, add a check constraint and validate it in a separate transaction:
```ruby
class ChangeUsersNameNull < ActiveRecord::Migration[7.0]
  disable_ddl_transaction!

  def change
    add_not_null_constraint :users, :name, name: "users_name_null", validate: false
    validate_not_null_constraint :users, :name, name: "users_name_null"
  end
end
```

**Note**: If you forget disable_ddl_transaction!, the migration will fail.

A NOT NULL check constraint is functionally equivalent to setting NOT NULL on the column (but it won't show up in schema.rb in Rails < 6.1). In PostgreSQL 12+, once the check constraint is validated, you can safely set NOT NULL on the column and drop the check constraint.

```ruby
class ChangeUsersNameNullDropCheck < ActiveRecord::Migration[7.0]
  def change
    # in PostgreSQL 12+, you can then safely set NOT NULL on the column
    change_column_null :users, :name, false
    remove_check_constraint :users, name: "users_name_null"
  end
end
```

### Executing SQL directly

Online Migrations does not support inspecting what happens inside an execute call, so it cannot help you here. Make really sure that what you're doing is safe before proceeding, then wrap it in a safety_assured { ... } block:

```ruby
class ExecuteSQL < ActiveRecord::Migration[7.0]
  def change
    safety_assured { execute "..." }
  end
end
```

### Adding an index non-concurrently

**Bad**

Adding an index non-concurrently blocks writes.

```ruby
class AddIndexOnUsersEmail < ActiveRecord::Migration[7.0]
  def change
    add_index :users, :email, unique: true
  end
end
```

**Good**

Add indexes concurrently.

```ruby
class AddIndexOnUsersEmail < ActiveRecord::Migration[7.0]
  disable_ddl_transaction!

  def change
    add_index :users, :email, unique: true, algorithm: :concurrently
  end
end
```

**Note**: If you forget disable_ddl_transaction!, the migration will fail. Also, note that indexes on new tables (those created in the same migration) don't require this.
### Removing an index non-concurrently

**Bad**

While the actual removal of an index is usually fast, removing it non-concurrently tries to obtain an ACCESS EXCLUSIVE lock on the table, waiting for all existing queries to complete and blocking all subsequent queries (even SELECTs) on that table until the lock is obtained and the index is removed.

```ruby
class RemoveIndexOnUsersEmail < ActiveRecord::Migration[7.0]
  def change
    remove_index :users, :email
  end
end
```

**Good**

Remove indexes concurrently.

```ruby
class RemoveIndexOnUsersEmail < ActiveRecord::Migration[7.0]
  disable_ddl_transaction!

  def change
    remove_index :users, :email, algorithm: :concurrently
  end
end
```

**Note**: If you forget disable_ddl_transaction!, the migration will fail.

### Replacing an index

**Bad**

Removing an old index before replacing it with a new one might result in slow queries while the new index is being built.

```ruby
class AddIndexOnCreationToProjects < ActiveRecord::Migration[7.0]
  disable_ddl_transaction!

  def change
    remove_index :projects, :creator_id, algorithm: :concurrently
    add_index :projects, [:creator_id, :created_at], algorithm: :concurrently
  end
end
```

**Note**: If the removed index is covered by an existing index, then it is safe to remove the index before replacing it with the new one.

**Good**

A safer approach is to create the new index first and then delete the old one.

```ruby
class AddIndexOnCreationToProjects < ActiveRecord::Migration[7.0]
  disable_ddl_transaction!

  def change
    add_index :projects, [:creator_id, :created_at], algorithm: :concurrently
    remove_index :projects, :creator_id, algorithm: :concurrently
  end
end
```

### Adding a reference

**Bad**

Rails adds an index non-concurrently to references by default, which blocks writes. Additionally, if the foreign_key option (without validate: false) is provided, both tables are blocked while it is validated.
```ruby
class AddUserToProjects < ActiveRecord::Migration[7.0]
  def change
    add_reference :projects, :user, foreign_key: true
  end
end
```

**Good**

Make sure the index is added concurrently and the foreign key is added in a separate migration. Or you can use the add_reference_concurrently helper. It will create a reference and take care of safely adding the index and/or foreign key.

```ruby
class AddUserToProjects < ActiveRecord::Migration[7.0]
  disable_ddl_transaction!

  def change
    add_reference_concurrently :projects, :user
  end
end
```

**Note**: If you forget disable_ddl_transaction!, the migration will fail.

### Adding a foreign key

**Bad**

Adding a foreign key blocks writes on both tables.

```ruby
class AddForeignKeyToProjectsUser < ActiveRecord::Migration[7.0]
  def change
    add_foreign_key :projects, :users
  end
end
```

or

```ruby
class AddReferenceToProjectsUser < ActiveRecord::Migration[7.0]
  def change
    add_reference :projects, :user, foreign_key: true
  end
end
```

**Good**

Add the foreign key without validating existing rows, and then validate them in a separate transaction.

```ruby
class AddForeignKeyToProjectsUser < ActiveRecord::Migration[7.0]
  disable_ddl_transaction!

  def change
    add_foreign_key :projects, :users, validate: false
    validate_foreign_key :projects, :users
  end
end
```

**Note**: If you forget disable_ddl_transaction!, the migration will fail.

### Adding a json column

**Bad**

There's no equality operator for the json column type, which can cause errors for existing SELECT DISTINCT queries in your application.

```ruby
class AddSettingsToProjects < ActiveRecord::Migration[7.0]
  def change
    add_column :projects, :settings, :json
  end
end
```

**Good**

Use jsonb instead.

```ruby
class AddSettingsToProjects < ActiveRecord::Migration[7.0]
  def change
    add_column :projects, :settings, :jsonb
  end
end
```

### Using primary key with short integer type

**Bad**

When using short integer types as primary key types, there is a risk of running out of IDs on inserts. The default type in ActiveRecord < 5.1 for primary and foreign keys is INTEGER, which allows a little over 2 billion records.
ActiveRecord 5.1 changed the default type to BIGINT.

```ruby
class CreateUsers < ActiveRecord::Migration[7.0]
  def change
    create_table :users, id: :integer do |t|
      # ...
    end
  end
end
```

**Good**

Use one of bigint, bigserial, or uuid instead.

```ruby
class CreateUsers < ActiveRecord::Migration[7.0]
  def change
    create_table :users, id: :bigint do |t| # bigint is the default for ActiveRecord >= 5.1
      # ...
    end
  end
end
```

### Hash indexes

**Bad - PostgreSQL < 10**

Hash index operations are not WAL-logged, so hash indexes might need to be rebuilt with REINDEX after a database crash if there were unwritten changes. Also, changes to hash indexes are not replicated over streaming or file-based replication after the initial base backup, so they give wrong answers to queries that subsequently use them. For these reasons, hash index use is discouraged.

```ruby
class AddIndexToUsersOnEmail < ActiveRecord::Migration[7.0]
  def change
    add_index :users, :email, unique: true, using: :hash
  end
end
```

**Good - PostgreSQL < 10**

Use B-tree indexes instead.

```ruby
class AddIndexToUsersOnEmail < ActiveRecord::Migration[7.0]
  def change
    add_index :users, :email, unique: true # B-tree by default
  end
end
```

### Adding multiple foreign keys

**Bad**

Adding multiple foreign keys in a single migration blocks reads and writes on all involved tables until the migration completes. Avoid adding a foreign key more than once per migration file, unless the source and target tables are identical.

```ruby
class CreateUserProjects < ActiveRecord::Migration[7.0]
  def change
    create_table :user_projects do |t|
      t.belongs_to :user, foreign_key: true
      t.belongs_to :project, foreign_key: true
    end
  end
end
```

**Good**

Add additional foreign keys in separate migration files. See adding a foreign key for how to properly add foreign keys.
```ruby
class CreateUserProjects < ActiveRecord::Migration[7.0]
  def change
    create_table :user_projects do |t|
      t.belongs_to :user, foreign_key: true
      t.belongs_to :project, foreign_key: false
    end
  end
end

class AddForeignKeyFromUserProjectsToProject < ActiveRecord::Migration[7.0]
  def change
    add_foreign_key :user_projects, :projects
  end
end
```

### Removing a table with multiple foreign keys

**Bad**

Removing a table with multiple foreign keys blocks reads and writes on all involved tables until the migration completes. Remove all the foreign keys first.

Assuming projects has foreign keys on users.id and repositories.id:

```ruby
class DropProjects < ActiveRecord::Migration[7.0]
  def change
    drop_table :projects
  end
end
```

**Good**

Remove all the foreign keys first:

```ruby
class RemoveProjectsUserFk < ActiveRecord::Migration[7.0]
  def change
    remove_foreign_key :projects, :users
  end
end

class RemoveProjectsRepositoryFk < ActiveRecord::Migration[7.0]
  def change
    remove_foreign_key :projects, :repositories
  end
end
```

Then remove the table:

```ruby
class DropProjects < ActiveRecord::Migration[7.0]
  def change
    drop_table :projects
  end
end
```

### Mismatched reference column types

**Bad**

Reference columns should be of the same type as the referenced primary key. Otherwise, there's a risk of bugs caused by IDs representable by one type but not the other.

Assuming there is a users table with a bigint primary key type:

```ruby
class AddUserIdToProjects < ActiveRecord::Migration[7.0]
  def change
    add_column :projects, :user_id, :integer
  end
end
```

**Good**

Add a reference column of the same type as the referenced primary key.

Assuming there is a users table with a bigint primary key type:

```ruby
class AddUserIdToProjects < ActiveRecord::Migration[7.0]
  def change
    add_column :projects, :user_id, :bigint
  end
end
```

### Adding a single table inheritance column

**Bad**

Adding a single table inheritance column might cause errors in old instances of your application.
```ruby
class AddTypeToUsers < ActiveRecord::Migration[7.0]
  def change
    add_column :users, :type, :string, default: "Member"
  end
end
```

After the migration has run and the column was added, but before the code is fully deployed to all instances, an old instance may be restarted (due to an error etc). When it fetches User records from the database, it will look for a Member subclass (from the type column) and fail to locate it unless it is already defined.

**Good**

A safer approach is to:

1. Ignore the column:

   ```ruby
   # For ActiveRecord 5+
   class User < ApplicationRecord
     self.ignored_columns = ["type"]
   end

   # For ActiveRecord < 5
   class User < ActiveRecord::Base
     def self.columns
       super.reject { |c| c.name == "type" }
     end
   end
   ```

2. Deploy
3. Remove the column ignoring from step 1 and apply the initial code changes
4. Deploy

## Assuring Safety

To mark a step in the migration as safe, despite using a method that might otherwise be dangerous, wrap it in a safety_assured block.

```ruby
class MySafeMigration < ActiveRecord::Migration[7.0]
  def change
    safety_assured { remove_column :users, :some_column }
  end
end
```

Certain methods like execute and change_table cannot be inspected and are prevented from running by default. Make sure what you're doing is really safe and use this pattern.

## Configuring the gem

There are a few configurable options for the gem. Custom configurations should be placed in an online_migrations.rb initializer.

```ruby
OnlineMigrations.configure do |config|
  # ...
end
```

**Note**: Check the source code for the list of all available configuration options.

### Custom checks

Add your own custom checks with:

```ruby
# config/initializers/online_migrations.rb

config.add_check do |method, args|
  if method == :add_column && args[0].to_s == "users"
    stop!("No more columns on the users table")
  end
end
```

Use the stop! method to stop migrations.

**Note**: Since remove_column, execute and change_table always require a safety_assured block, it's not possible to add a custom check for these operations.
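To make the check mechanics concrete, here is a standalone, hypothetical sketch of how such a registry could behave — `CheckRunner`, `run`, and `DangerousOperation` are invented names for illustration and are not the gem's internal API:

```ruby
# Hypothetical sketch of a custom-check registry (invented names, not
# OnlineMigrations internals). Each check is a block receiving the
# migration method name and its arguments; stop! aborts the migration.
class CheckRunner
  DangerousOperation = Class.new(StandardError)

  def initialize
    @checks = []
  end

  def add_check(&block)
    @checks << block
  end

  def stop!(message)
    raise DangerousOperation, message
  end

  # Called before a migration method runs; every registered check
  # gets a chance to veto it.
  def run(method, args)
    @checks.each { |check| instance_exec(method, args, &check) }
  end
end

runner = CheckRunner.new
runner.add_check do |method, args|
  if method == :add_column && args[0].to_s == "users"
    stop!("No more columns on the users table")
  end
end

runner.run(:add_index, [:users, :email]) # passes silently
```

Running `runner.run(:add_column, [:users, :admin])` would raise `CheckRunner::DangerousOperation` with the configured message.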
### Disable Checks

Disable specific checks with:

```ruby
# config/initializers/online_migrations.rb

config.disable_check(:remove_index)
```

Check the source code for the list of keys.

### Down Migrations / Rollbacks

By default, checks are disabled when migrating down. Enable them with:

```ruby
# config/initializers/online_migrations.rb

config.check_down = true
```

### Custom Messages

You can customize specific error messages:

```ruby
# config/initializers/online_migrations.rb

config.error_messages[:add_column_default] = "Your custom instructions"
```

Check the source code for the list of keys.

### Migration Timeouts

It's extremely important to set a short lock timeout for migrations. This way, if a migration can't acquire a lock in a timely manner, other statements won't be stuck behind it.

Add timeouts to config/database.yml:

```yml
production:
  connect_timeout: 5
  variables:
    lock_timeout: 10s
    statement_timeout: 15s
```

Or set the timeouts directly on the database user that runs migrations:

```sql
ALTER ROLE myuser SET lock_timeout = '10s';
ALTER ROLE myuser SET statement_timeout = '15s';
```

### Lock Timeout Retries

You can configure this gem to automatically retry statements that exceed the lock timeout:

```ruby
# config/initializers/online_migrations.rb

config.lock_retrier = OnlineMigrations::ExponentialLockRetrier.new(
  attempts: 30,               # attempt 30 retries
  base_delay: 0.01.seconds,   # starting with a delay of 10ms between each unsuccessful try, increasing exponentially
  max_delay: 1.minute,        # maximum delay is 1 minute
  lock_timeout: 0.05.seconds  # and 50ms set as the lock timeout for each try
)
```

When a statement within a transaction fails, the whole transaction is retried.

To permanently disable lock retries, set lock_retrier to nil. To temporarily disable lock retries while running migrations, set the DISABLE_LOCK_RETRIES environment variable.

**Note**: Statements are retried by default, unless lock retries are disabled.

It is possible to implement more sophisticated lock retriers. See the source code for examples.
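As a rough mental model of the exponential strategy (an illustrative sketch only — the real `ExponentialLockRetrier` also re-runs the statement under a short `lock_timeout`; this models just the delay schedule), the delay before attempt n doubles each time, capped at `max_delay`:

```ruby
# Illustrative sketch of an exponential backoff schedule (not the gem's
# implementation): the delay before retry n is base_delay * 2**(n - 1),
# capped at max_delay.
def retry_delays(attempts:, base_delay:, max_delay:)
  (1..attempts).map { |n| [base_delay * 2**(n - 1), max_delay].min }
end

retry_delays(attempts: 5, base_delay: 0.01, max_delay: 0.05)
# => [0.01, 0.02, 0.04, 0.05, 0.05]
```

With the configuration shown above (30 attempts, 10ms base delay, 1 minute cap), early retries fire almost immediately while later ones wait close to the full minute.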
### Existing Migrations

To mark migrations created before installing this gem as safe, configure the migration version after which checks are performed:

```ruby
# config/initializers/online_migrations.rb

config.start_after = 20220101000000

# or if you use multiple databases (ActiveRecord 6+)
config.start_after = { primary: 20211112000000, animals: 20220101000000 }
```

Use the version from your latest migration.

### Target Version

If your development database version differs from production, you can specify the production version so the right checks run in development.

```ruby
# config/initializers/online_migrations.rb

config.target_version = 10 # or "12.9" etc

# or if you use multiple databases (ActiveRecord 6+)
config.target_version = { primary: 10, animals: 14.1 }
```

For safety, this option only affects development and test environments. In other environments, the actual server version is always used.

### Small Tables

Most projects have tables that are known to be small in size. These are usually "settings", "prices", "plans" etc. It is considered safe to perform most of the dangerous operations on them, like adding indexes, columns etc.

To mark tables as small:

```ruby
config.small_tables = [:settings, :prices]
```

### Verbose SQL logs

For any operation, Online Migrations can output the performed SQL queries. This is useful to demystify online_migrations inner workings, and to better investigate migration failures in production. It is also useful in development to get a better grasp of what is going on for high-level statements like add_column_with_default.

Consider a migration, running on PostgreSQL < 11:
def change add_column_with_default :users, :admin, :boolean, default: false end end Instead of the traditional output: == 20220106214827 AddAdminToUsers: migrating ================================== -- add_column_with_default(:users, :admin, :boolean, {:default=>false}) -> 0.1423s == 20220106214827 AddAdminToUsers: migrated (0.1462s) ========================= Online Migrations will output the following logs: == 20220106214827 AddAdminToUsers: migrating ================================== (0.3ms) SHOW lock_timeout (0.2ms) SET lock_timeout TO '50ms' -- add_column_with_default(:users, :admin, :boolean, {:default=>false}) TRANSACTION (0.1ms) BEGIN (37.7ms) ALTER TABLE "users" ADD "admin" boolean DEFAULT NULL (0.5ms) ALTER TABLE "users" ALTER COLUMN "admin" SET DEFAULT FALSE TRANSACTION (0.3ms) COMMIT Load (0.3ms) SELECT "users"."id" FROM "users" WHERE ("users"."admin" != FALSE OR "users"."admin" IS NULL) ORDER BY "users"."id" ASC LIMIT $1 [["LIMIT", 1]] Load (0.5ms) SELECT "users"."id" FROM "users" WHERE ("users"."admin" != FALSE OR "users"."admin" IS NULL) AND "users"."id" >= 1 ORDER BY "users"."id" ASC LIMIT $1 OFFSET $2 [["LIMIT", 1], ["OFFSET", 1000]] # Update All (9.6ms) UPDATE "users" SET "admin" = $1 WHERE ("users"."admin" != FALSE OR "users"."admin" IS NULL) AND "users"."id" >= 1 AND "users"."id" < 1001 [["admin", false]] Load (0.8ms) SELECT "users"."id" FROM "users" WHERE ("users"."admin" != FALSE OR "users"."admin" IS NULL) AND "users"."id" >= 1001 ORDER BY "users"."id" ASC LIMIT $1 OFFSET $2 [["LIMIT", 1], ["OFFSET", 1000]] # Update All (1.5ms) UPDATE "users" SET "admin" = $1 WHERE ("users"."admin" != FALSE OR "users"."admin" IS NULL) AND "users"."id" >= 1001 [["admin", false]] -> 0.1814s (0.4ms) SET lock_timeout TO '5s' == 20220106214827 AddAdminToUsers: migrated (0.1840s) ========================= So you can actually check which steps are performed. 
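The log above shows the key idea: rather than one big `UPDATE`, the backfill walks the table in `id`-keyed batches of 1000, finding each batch boundary with a `LIMIT 1 OFFSET 1000` query. The batching arithmetic can be sketched in plain Ruby (no database; `batch_ranges` is an illustrative name, not a gem API):

```ruby
# Illustrative sketch of the id-range batching visible in the log above:
# starting from the smallest id, each batch covers [start_id, next_start_id),
# and the final batch is unbounded above (WHERE id >= start only).
def batch_ranges(ids, batch_size)
  ranges = []
  start_id = ids.min
  loop do
    # The real helper finds this boundary with: SELECT id ... WHERE id >= start
    # ORDER BY id LIMIT 1 OFFSET batch_size.
    next_start = ids.select { |id| id >= start_id }.sort[batch_size]
    if next_start
      ranges << (start_id...next_start) # UPDATE ... WHERE id >= start AND id < next_start
      start_id = next_start
    else
      ranges << (start_id..Float::INFINITY) # final UPDATE ... WHERE id >= start
      break
    end
  end
  ranges
end

batch_ranges((1..2500).to_a, 1000)
# => [1...1001, 1001...2001, 2001..Infinity]
```

This matches the log: one `UPDATE` for ids 1–1000, one for 1001–2000, and an open-ended one for the remainder.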
**Note**: The `SHOW` statements are used by Online Migrations to query settings for their original values in order to restore them after the work is done.

To enable verbose SQL logs:

```ruby
# config/initializers/online_migrations.rb
config.verbose_sql_logs = true
```

This feature is enabled by default in a production Rails environment. You can override this setting via the `ONLINE_MIGRATIONS_VERBOSE_SQL_LOGS` environment variable.

## Background Migrations

Read [BACKGROUND_MIGRATIONS.md](BACKGROUND_MIGRATIONS.md) on how to perform data migrations on large tables.

## Credits

Thanks to the strong_migrations gem, GitLab, and the maintenance_tasks gem for the original ideas.

## Contributing

Bug reports and pull requests are welcome on GitHub at https://github.com/fatkodima/online_migrations.

## Development

After checking out the repo, run `bundle install` to install dependencies. Run `createdb online_migrations_test` to create a test database. Then, run `bundle exec rake test` to run the tests.

This project uses multiple Gemfiles to test against multiple versions of ActiveRecord; you can run the tests against a specific version with `BUNDLE_GEMFILE=gemfiles/activerecord_61.gemfile bundle exec rake test`.

To install this gem onto your local machine, run `bundle exec rake install`. To release a new version, update the version number in `version.rb`, and then run `bundle exec rake release`, which will create a git tag for the version, push git commits and tags, and push the `.gem` file to rubygems.org.
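To give a rough feel for the background-migrations framework mentioned above: a background migration is a class declaring which relation to walk and how to process each batch. The sketch below is a hedged, self-contained approximation — the parent class is a stub standing in for the gem's real base class, the class and method names are illustrative, and BACKGROUND_MIGRATIONS.md is the authoritative reference for the actual API:

```ruby
# Stub standing in for the gem's background-migration base class, so this
# sketch runs without the gem. See BACKGROUND_MIGRATIONS.md for the real API.
module StubbedGem
  class BackgroundMigration; end
end

# Hypothetical background migration: backfill a nil admin flag to false.
class BackfillAdminFlag < StubbedGem::BackgroundMigration
  # The records to migrate. In the real gem this would be an ActiveRecord
  # relation; here it's a plain array of hashes for illustration.
  def relation
    @relation ||= [{ id: 1, admin: nil }, { id: 2, admin: true }]
  end

  # Called once per batch. The real framework would issue a single UPDATE
  # per batch; this stub just mutates the records in place.
  def process_batch(batch)
    batch.each { |user| user[:admin] = false if user[:admin].nil? }
  end
end

m = BackfillAdminFlag.new
m.process_batch(m.relation)
m.relation.map { |u| u[:admin] } # => [false, true]
```

The framework then takes care of scheduling, batching, and progress tracking, which is what makes this safe to run on very large tables.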
## Additional resources

Alternatives:

* https://github.com/ankane/strong_migrations
* https://github.com/LendingHome/zero_downtime_migrations
* https://github.com/braintree/pg_ha_migrations
* https://github.com/doctolib/safe-pg-migrations

Interesting reads:

* Explicit Locking
* When Postgres blocks: 7 tips for dealing with locks
* PostgreSQL rocks, except when it blocks: Understanding locks
* PostgreSQL at Scale: Database Schema Changes Without Downtime
* Adding a NOT NULL CONSTRAINT on PG Faster with Minimal Locking
* Adding columns with default values to really large tables in Postgres + Rails
* Safe Operations For High Volume PostgreSQL
* Stop worrying about PostgreSQL locks in your Rails migrations
* Avoiding integer overflows with zero downtime

## Maybe TODO

* support MySQL
* support other ORMs

Background migrations:

* extract as a separate gem
* add UI
* support batching over non-integer and multiple columns

## Comparison to strong_migrations

This gem was heavily inspired by the strong_migrations and GitLab approaches to database migrations. It is, feature-wise, a superset of strong_migrations and has the same APIs. The main differences are:

1. strong_migrations provides text guidance on how to run migrations more safely, which you then implement yourself. This gem provides actual code helpers (and suggests them when it fails on unsafe migrations) that do what you want. See [example](#example). It has migration helpers for:

   * renaming tables/columns
   * changing column types (including changing primary/foreign keys from integer to bigint)
   * adding columns with default values
   * adding different types of constraints
   * and others

2. This gem has a powerful internal framework for running data migrations on very large tables using background migrations.
   For example, you can use background migrations to migrate data that's stored in a single JSON column to a separate table instead; backfill values from one column to another (as one of the steps when changing a column's type); or backfill some column's value from an API.

3. It has more checks for unsafe changes (see [Checks](#checks)).

4. Currently, this gem supports only PostgreSQL, while strong_migrations also checks MySQL and MariaDB migrations.

5. This gem is more flexible in terms of configuration - see the config file for additional configuration options.

## License

The gem is available as open source under the terms of the MIT License.