When writing SQL, you often run into the situation where you update a row and then need to get that updated row back. From the application's point of view this is not hard: at worst you cache the row and read it from there. But from the database's point of view, how can the row be fetched quickly without scanning the base table a second time? Other databases, PostgreSQL for example, provide a RETURNING clause for exactly this:
t_girl=# update t1 set log_time = now() where id in (1,2,3) returning *;
 id |          log_time
----+----------------------------
  1 | 2014-11-26 11:06:53.555217
  2 | 2014-11-26 11:06:53.555217
  3 | 2014-11-26 11:06:53.555217
(3 rows)

UPDATE 3
Time: 6.991 ms

Returning the deleted rows:

t_girl=# delete from t1 where id < 2 returning *;
 id |          log_time
----+----------------------------
  1 | 2014-11-26 11:06:53.555217
(1 row)

DELETE 1
Time: 6.042 ms

Returning the inserted row:

t_girl=# insert into t1 select 1,now() returning *;
 id |          log_time
----+----------------------------
  1 | 2014-11-26 11:07:40.431766
(1 row)

INSERT 0 1
Time: 6.107 ms
t_girl=#

So how can the same thing be done in MySQL? One option is to create a few MEMORY tables to hold the "returned" rows, like this:

CREATE TABLE t1_insert ENGINE MEMORY SELECT * FROM t1 WHERE FALSE;
CREATE TABLE t1_update ENGINE MEMORY SELECT * FROM t1 WHERE FALSE;
CREATE TABLE t1_delete ENGINE MEMORY SELECT * FROM t1 WHERE FALSE;

ALTER TABLE t1_insert ADD PRIMARY KEY (id);
ALTER TABLE t1_update ADD PRIMARY KEY (id);
ALTER TABLE t1_delete ADD PRIMARY KEY (id);

The three tables above capture the rows for each kind of operation: t1_insert holds inserted rows, t1_update holds updated rows, and t1_delete holds deleted rows. Next, create the corresponding triggers to populate them:

DELIMITER $$

USE `t_girl`$$

DROP TRIGGER /*!50032 IF EXISTS */ `tr_t1_insert_after`$$

CREATE
    /*!50017 DEFINER = 'root'@'localhost' */
    TRIGGER `tr_t1_insert_after` AFTER INSERT ON `t1`
    FOR EACH ROW BEGIN
        REPLACE INTO t1_insert VALUES (new.id, new.log_time);
    END;
$$

DELIMITER ;

DELIMITER $$

USE `t_girl`$$

DROP TRIGGER /*!50032 IF EXISTS */ `tr_t1_update_after`$$

CREATE
    /*!50017 DEFINER = 'root'@'localhost' */
    TRIGGER `tr_t1_update_after` AFTER UPDATE ON `t1`
    FOR EACH ROW BEGIN
        REPLACE INTO t1_update VALUES (new.id, new.log_time);
    END;
$$

DELIMITER ;

DELIMITER $$

USE `t_girl`$$

DROP TRIGGER /*!50032 IF EXISTS */ `tr_t1_delete_after`$$

CREATE
    /*!50017 DEFINER = 'root'@'localhost' */
    TRIGGER `tr_t1_delete_after` AFTER DELETE ON `t1`
    FOR EACH ROW BEGIN
        REPLACE INTO t1_delete VALUES (old.id, old.log_time);
    END;
$$

DELIMITER ;

With the tables and triggers in place, getting the "returned" rows is straightforward: just query the corresponding capture table after each statement. A quick demonstration:

Update:

mysql> truncate table t1_update;
Query OK, 0 rows affected (0.00 sec)

mysql> UPDATE t1 SET log_time = NOW() WHERE id < 15;
Query OK, 3 rows affected (0.01 sec)
Rows matched: 3  Changed: 3  Warnings: 0

Fetch the updated rows:

mysql> select * from t1_update;
+----+----------------------------+
| id | log_time                   |
+----+----------------------------+
| 12 | 2014-11-26 13:38:06.000000 |
| 13 | 2014-11-26 13:38:06.000000 |
| 14 | 2014-11-26 13:38:06.000000 |
+----+----------------------------+
3 rows in set (0.00 sec)

Insert:

mysql> truncate table t1_insert;
Query OK, 0 rows affected (0.00 sec)

mysql> INSERT INTO t1 VALUES (1,NOW());
Query OK, 1 row affected (0.08 sec)

Fetch the inserted row:

mysql> select * from t1_insert;
+----+----------------------------+
| id | log_time                   |
+----+----------------------------+
|  1 | 2014-11-26 13:38:06.000000 |
+----+----------------------------+
1 row in set (0.00 sec)

Delete:

mysql> truncate table t1_delete;
Query OK, 0 rows affected (0.00 sec)

mysql> DELETE FROM t1 WHERE id < 15;
Query OK, 4 rows affected (0.01 sec)

Fetch the deleted rows:

mysql> select * from t1_delete;
+----+----------------------------+
| id | log_time                   |
+----+----------------------------+
|  1 | 2014-11-26 13:38:06.000000 |
| 12 | 2014-11-26 13:38:06.000000 |
| 13 | 2014-11-26 13:38:06.000000 |
| 14 | 2014-11-26 13:38:06.000000 |
+----+----------------------------+
4 rows in set (0.00 sec)
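The clear-the-capture-table, run-the-DML, read-the-capture-table sequence can also be wrapped up server-side. Below is a minimal sketch, not part of the original steps: a stored procedure that emulates UPDATE ... RETURNING for t1, relying on the t1_update table and the tr_t1_update_after trigger defined above. The procedure name sp_update_t1_returning and its p_max_id parameter are illustrative assumptions.

DELIMITER $$

USE `t_girl`$$

DROP PROCEDURE IF EXISTS `sp_update_t1_returning`$$

-- Hypothetical helper: emulates "UPDATE ... RETURNING *" by clearing the
-- capture table, running the update (the AFTER UPDATE trigger copies each
-- changed row into t1_update), then returning the captured rows.
CREATE PROCEDURE `sp_update_t1_returning`(IN p_max_id INT)
BEGIN
    -- DELETE rather than TRUNCATE, so the procedure does not force an
    -- implicit commit the way TRUNCATE TABLE would.
    DELETE FROM t1_update;
    UPDATE t1 SET log_time = NOW() WHERE id < p_max_id;
    SELECT * FROM t1_update;
END$$

DELIMITER ;

Calling CALL sp_update_t1_returning(15); then hands back the just-updated rows in a single round trip, much like the psql UPDATE ... RETURNING example at the top. One caveat worth stating: the MEMORY capture tables are shared by all connections, so under concurrent writes to t1 the captured rows from different sessions get mixed together; the clear-then-select pattern is only reliable when a single session owns the table at a time.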

