
Top MySQL Schema Checks to Boost Database Performance


A database schema defines the logical structure of your database, including tables, columns, relationships, indexes, and constraints that shape how data is organized and accessed. It’s not just about how the data is stored but also how it interacts with queries, transactions, and other operations.

Regular schema checks help you catch new or lingering problems before they snowball into bigger issues. You can dive deeper into each check below and find out exactly how to fix the problems if your database doesn't pass. Just remember: before you make any schema changes, always back up your data to protect against anything that might go wrong during the modification.

1. Primary Key Check (Missing Primary Keys)

The primary key is a critical part of any table, uniquely identifying each row and enabling efficient queries. Without a primary key, tables may experience performance issues, and certain tools like replication and schema change utilities may not function properly.

There are several issues you can avoid by defining a primary key when designing schemas:

  1. If no primary or unique key is specified, MySQL creates a hidden internal one that applications cannot reference or use.
  2. The lack of a primary key could slow down replication performance, especially with row-based or mixed replication.
  3. Primary keys allow scalable data archiving and purging. Tools like pt-online-schema-change require a primary or unique key.
  4. Primary keys uniquely identify rows, which is crucial from an application perspective.

Example

To create a PRIMARY KEY constraint on the "ID" column when the table is already created, use the following SQL:

ALTER TABLE Persons ADD PRIMARY KEY (ID);

To define a primary key on multiple columns:

ALTER TABLE Persons ADD CONSTRAINT PK_Person PRIMARY KEY (ID, LastName);

Note: Primary key columns cannot contain NULL values. If the column was created as nullable, MySQL will implicitly convert it to NOT NULL when you add the primary key, provided no existing rows contain NULL.
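
To find tables that are currently missing a primary key, you can look them up in information_schema. The query below is a rough sketch that lists base tables without a primary key and skips MySQL's own system schemas:

SELECT t.TABLE_SCHEMA, t.TABLE_NAME
FROM information_schema.TABLES t
LEFT JOIN information_schema.TABLE_CONSTRAINTS c
  ON c.TABLE_SCHEMA = t.TABLE_SCHEMA
  AND c.TABLE_NAME = t.TABLE_NAME
  AND c.CONSTRAINT_TYPE = 'PRIMARY KEY'
WHERE t.TABLE_TYPE = 'BASE TABLE'
  AND c.CONSTRAINT_NAME IS NULL
  AND t.TABLE_SCHEMA NOT IN ('mysql', 'information_schema', 'performance_schema', 'sys');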

2. Table Engine Check (Deprecated Table Engine)

The MyISAM storage engine is a legacy engine, and tables still using it should be migrated to InnoDB. InnoDB is the default and recommended engine for most use cases due to its superior performance, crash recovery capabilities, and transaction support. Migrating from MyISAM to InnoDB can dramatically improve performance in write-heavy applications, provide better fault tolerance, and unlock features that only InnoDB offers, such as foreign keys, row-level locking, and full ACID transactions.

Why InnoDB is preferred:

  • Crash recovery capabilities allow it to recover automatically from database server or host crashes without data corruption.
  • Only locks the rows affected by a query, allowing for much better performance in high-concurrency environments.
  • Caches both data and indexes in memory, which is preferred for read-heavy workloads.
  • Fully ACID-compliant, ensuring data integrity and supporting transactions.
  • The InnoDB engine receives the majority of the focus from the MySQL development community, making it the most up-to-date and well-supported engine.

How to Migrate to InnoDB

ALTER TABLE <table_name> ENGINE=InnoDB;
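
To see which tables are still on MyISAM before converting them, you can query information_schema. This is a simple sketch that skips MySQL's own system schemas:

SELECT TABLE_SCHEMA, TABLE_NAME, ENGINE
FROM information_schema.TABLES
WHERE ENGINE = 'MyISAM'
  AND TABLE_SCHEMA NOT IN ('mysql', 'information_schema', 'performance_schema', 'sys');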

3. Table Collation Check (Mixed Collations)

Using different collations across tables or even within a table can lead to performance problems, particularly during string comparisons and joins. If the collations of two string columns differ, MySQL might need to convert the strings at runtime, which can prevent indexes from being used and slow down your queries.

When you work with tables that have mixed collations, a few points are worth keeping in mind:

  • Collations can differ at the column level, so mismatches at the table level won’t cause issues if the relevant columns in a join have matching collations.
  • Changing a table's collation, especially with a charset switch, isn't always simple. Data conversion might be needed, and unsupported characters could turn into corrupted data.
  • If you don’t specify a collation or charset when creating a table, it inherits the database defaults. If none are set at the database level, the server defaults apply.

To avoid these issues, it’s important to standardize the collation across your entire dataset, especially for columns that are frequently used in join operations.
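
To see which collations are actually in use at the column level, you can query information_schema.COLUMNS. The query below is a rough sketch; the <db-name> placeholder stands for your own schema name:

SELECT TABLE_NAME, COLUMN_NAME, CHARACTER_SET_NAME, COLLATION_NAME
FROM information_schema.COLUMNS
WHERE TABLE_SCHEMA = '<db-name>'
  AND COLLATION_NAME IS NOT NULL
ORDER BY TABLE_NAME, COLUMN_NAME;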

How to Change Collation Settings

Before making any changes to your database's collation settings, test your approach in a non-production environment to avoid unintended consequences. If you're unsure about anything, it’s best to consult with a DBA.

Retrieve the default charset and collation for all databases:

SELECT SCHEMA_NAME, DEFAULT_CHARACTER_SET_NAME, DEFAULT_COLLATION_NAME
FROM INFORMATION_SCHEMA.SCHEMATA;

Check the collation of specific tables:

SELECT TABLE_SCHEMA, TABLE_NAME, TABLE_COLLATION
FROM information_schema.TABLES
WHERE TABLE_COLLATION IS NOT NULL
ORDER BY TABLE_SCHEMA, TABLE_COLLATION;

Find the server's default charset:

SELECT @@GLOBAL.character_set_server;

Find the server's default collation:

SELECT @@GLOBAL.collation_server;

Update the collation for a specific database:

ALTER DATABASE <db-name> COLLATE=<collation-name>;

Update the collation for a specific table:

ALTER TABLE <table_name> CONVERT TO CHARACTER SET <charset-name> COLLATE <collation-name>;

4. Table Character Set Check (Mixed Character Set)

Mixed character sets are similar to mixed collations in that they can lead to performance and compatibility issues. A mixed character set occurs when different columns or tables use different encoding formats for storing data.

  • Mixed character sets can hurt join performance on string columns by preventing index use or requiring value conversions.
  • Character sets can be defined at the column level, and as long as the columns involved in a join have matching character sets, performance won’t be impacted by mismatches at the table level.
  • Changing a table’s character set may involve data conversion, which can lead to corrupted data if unsupported characters are encountered.
  • If no character set or collation is specified, tables inherit the database's defaults, and databases inherit the server's default charset and collation.
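
A quick way to check whether a schema mixes character sets is to count columns per character set. The query below is only a rough sketch; <db-name> is a placeholder for your own schema:

SELECT CHARACTER_SET_NAME, COUNT(*) AS column_count
FROM information_schema.COLUMNS
WHERE TABLE_SCHEMA = '<db-name>'
  AND CHARACTER_SET_NAME IS NOT NULL
GROUP BY CHARACTER_SET_NAME;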

How to Change Character Settings

Before adjusting your database's character settings, be sure to test the changes in a staging environment to prevent any unexpected issues. If you're uncertain about any steps, consult a DBA for guidance.

Retrieve the default charset and collation for all databases:

SELECT SCHEMA_NAME, DEFAULT_CHARACTER_SET_NAME, DEFAULT_COLLATION_NAME
FROM INFORMATION_SCHEMA.SCHEMATA;

Get the character set of a column:

SELECT CHARACTER_SET_NAME, COLLATION_NAME
FROM information_schema.COLUMNS
WHERE TABLE_SCHEMA = '<db-name>'
  AND TABLE_NAME = '<table_name>'
  AND COLUMN_NAME = '<column_name>';

Find the server's default charset:

SELECT @@GLOBAL.character_set_server;

Find the server's default collation:

SELECT @@GLOBAL.collation_server;

To view the structure of a table:

SHOW CREATE TABLE <table_name>;

Example output (abridged; the exact output depends on your table):

CREATE TABLE `Persons` (
  `ID` int NOT NULL,
  `LastName` varchar(255) NOT NULL,
  PRIMARY KEY (`ID`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci

To change a column character set (MODIFY requires restating the full column type):

ALTER TABLE <table_name> MODIFY <column_name> <column_type> CHARACTER SET <charset-name> COLLATE <collation-name>;

5. Column Auto Increment Check (Type of Auto Increment Columns)

For tables that are expected to grow indefinitely and use auto-increment for primary keys, it's recommended to switch to the UNSIGNED BIGINT data type. This allows the column to handle a much larger range of values, preventing the need for costly table alterations in the future once the maximum value is reached. By specifying UNSIGNED, only positive values are stored, effectively doubling the range of the data type.

How to Change the Column Type

To modify the column type to UNSIGNED BIGINT:

ALTER TABLE <table_name> MODIFY <column_name> BIGINT UNSIGNED NOT NULL AUTO_INCREMENT;
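
To find auto-increment columns that are not yet BIGINT UNSIGNED, you can query information_schema.COLUMNS. The query below is a sketch that skips MySQL's own system schemas:

SELECT TABLE_SCHEMA, TABLE_NAME, COLUMN_NAME, COLUMN_TYPE
FROM information_schema.COLUMNS
WHERE EXTRA LIKE '%auto_increment%'
  AND COLUMN_TYPE NOT LIKE 'bigint%unsigned'
  AND TABLE_SCHEMA NOT IN ('mysql', 'information_schema', 'performance_schema', 'sys');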

6. Table Foreign Key Check (Existence of Foreign Keys)

Foreign keys offer data consistency by maintaining the relationship between parent and child tables, but they also impact database performance. Each time a write operation occurs, additional lookups are required to verify the integrity of the related data. This can cause slowdowns, especially in high-traffic environments.

If performance is a concern, you may want to consider removing foreign keys, especially in scenarios where data consistency can be handled at the application level.

How to Remove Foreign Keys

To drop a foreign key constraint from a table:

ALTER TABLE <table_name> DROP FOREIGN KEY <constraint_name>;
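
To see which foreign key constraints exist (and what they are named) before dropping anything, you can list them from information_schema:

SELECT TABLE_SCHEMA, TABLE_NAME, CONSTRAINT_NAME
FROM information_schema.TABLE_CONSTRAINTS
WHERE CONSTRAINT_TYPE = 'FOREIGN KEY';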

7. Duplicated Index Check

Duplicate indexes in MySQL consume unnecessary disk space and create additional overhead during write operations, as every index must be updated. This can complicate query optimization, potentially leading to inefficient execution plans without offering any real benefit.

Identify and remove duplicate indexes to streamline query optimization and reduce overhead. But make sure that the index is not being used for critical queries before removing it.
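
One rough way to spot exact duplicates (indexes with the same columns in the same order) is to compare index definitions in information_schema.STATISTICS, as in the sketch below. It will not catch an index that is merely a prefix of another; tools such as Percona Toolkit's pt-duplicate-key-checker perform a more thorough analysis.

SELECT TABLE_SCHEMA, TABLE_NAME, columns_in_index,
       GROUP_CONCAT(INDEX_NAME) AS duplicate_indexes
FROM (
    SELECT TABLE_SCHEMA, TABLE_NAME, INDEX_NAME,
           GROUP_CONCAT(COLUMN_NAME ORDER BY SEQ_IN_INDEX) AS columns_in_index
    FROM information_schema.STATISTICS
    WHERE TABLE_SCHEMA NOT IN ('mysql', 'information_schema', 'performance_schema', 'sys')
    GROUP BY TABLE_SCHEMA, TABLE_NAME, INDEX_NAME
) AS idx
GROUP BY TABLE_SCHEMA, TABLE_NAME, columns_in_index
HAVING COUNT(*) > 1;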

8. Unused Index Check

Unused indexes in MySQL can negatively impact database performance by consuming disk space, increasing processing overhead during inserts, updates, and deletes, and slowing down overall operations. While indexes are valuable for speeding up queries, those that aren't used can create unnecessary strain on your system.

Additional benefits of removing unused or duplicate indexes include:

  • With fewer indexes, MySQL's optimizer has fewer choices to evaluate, simplifying query execution and reducing CPU/memory usage.
  • Removing unused indexes frees up valuable disk space that can be used for more critical data, also improving I/O efficiency.
  • Index maintenance tasks, such as rebuilding or reorganizing, become faster and less resource-intensive when the number of indexes is minimized. This leads to smoother operations, particularly in environments requiring 24/7 uptime.

To identify unused indexes in MySQL or MariaDB, you can use the sys schema (available in MySQL 5.7+ and MariaDB 10.6+). Note that it only reflects index usage since the server was last restarted:

SELECT * FROM sys.schema_unused_indexes;

How to Remove Unused or Duplicated Indexes

In MySQL 8.0 and later, you can make indexes invisible to test whether they’re needed without fully dropping them:

ALTER TABLE <table_name> ALTER INDEX <index_name> INVISIBLE;

If performance remains unaffected, the index can be safely dropped:

ALTER TABLE <table_name> DROP INDEX <index_name>;

You can revert an index back to visible if needed:

ALTER TABLE <table_name> ALTER INDEX <index_name> VISIBLE;

Schema Checks Now Available with Releem

With the latest update, Releem now includes comprehensive schema health checks. These checks provide real-time insights into your database’s structural integrity, along with actionable recommendations for fixing any detected issues.


By automating the schema monitoring process, Releem takes the guesswork out of manual checks, saving database engineers tons of time and effort. Instead of spending hours working on schema details, you can now focus on more pressing tasks.
