A customer had the following requirement: two business tables, each fairly large at around 20 million rows, needed part of their records split off according to a given WHERE condition and moved into separate archive tables, i.e. a row-migration job. The end result must be: rows that do not match the filter stay in the original tables, rows that do match are inserted into the archive tables and then deleted from the originals, so that in the end remaining rows + archived rows = original row count.
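Conceptually the job is just an insert-select into each archive table followed by a delete of the same rows. A minimal set-based sketch (using the table names and the status=9 condition from the demo below) is shown here only for contrast; on 20-million-row tables a single transaction of this size generates a lot of undo/redo, which is why the batched procedure further down is used instead:

--Set-based sketch of the requirement (same tables and filter as the demo below)
insert into arch_tabhdr select * from tabhdr where status = 9;
insert into arch_tabdet select * from tabdet
 where tabhdrid in (select tabhdrid from tabhdr where status = 9);
delete from tabdet
 where tabhdrid in (select tabhdrid from tabhdr where status = 9);
delete from tabhdr where status = 9;
commit;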
Below I simulate the process.
Since I did not get the actual DDL, the tables are stripped down to just two columns each, which is enough for a minimal demonstration.
--Connect as the test user and create the test tables
SQL> conn zlm/zlm
Connected.
SQL> select * from cat;
no rows selected
SQL> create table tabhdr(tabhdrid number(10),status number(10));
Table created.
SQL> create table tabdet(tabhdrid number(10));
Table created.
SQL> create table arch_tabdet as select * from tabdet;
Table created.
SQL> create table arch_tabhdr as select * from tabhdr;
Table created.
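Note that CREATE TABLE ... AS SELECT also copies rows; it only produces empty archive tables here because tabdet and tabhdr are still empty at this point. If the source tables already held data, a false predicate is the usual way to copy just the structure:

--Copy structure only, no rows, regardless of what the source tables contain
create table arch_tabdet as select * from tabdet where 1=0;
create table arch_tabhdr as select * from tabhdr where 1=0;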
--Create the operation log table
SQL> create table arch_log(
2 archbegintime char(19),
3 archmiddletime char(19),
4 archendtime char(19),
5 archinscount1 number,
6 archdelcount1 number,
7 archinscount2 number,
8 archdelcount2 number,
9 archstatus varchar2(20),
10 archerrorcode varchar2(20),
11 archerrormsg varchar2(1000));
Table created.
Log table column descriptions:
archbegintime -> archmiddletime //time window of the first migration pass (insert+delete on tabdet)
archmiddletime -> archendtime //time window of the second migration pass (insert+delete on tabhdr)
archinscount1 / archdelcount1 //rows inserted / deleted in the first pass (tabdet)
archinscount2 / archdelcount2 //rows inserted / deleted in the second pass (tabhdr)
archstatus //run status (success/failure)
archerrorcode //error code, if any
archerrormsg //error message, if any
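Since the timestamps are stored as char(19) strings, a duration report has to convert them back with to_date. A sketch of such a query against arch_log (assuming the hh24 format used in the procedure below):

--Elapsed seconds of each pass, plus the row counts and status
select (to_date(archmiddletime,'yyyy-mm-dd hh24:mi:ss')
      - to_date(archbegintime, 'yyyy-mm-dd hh24:mi:ss')) * 86400 as tabdet_seconds,
       (to_date(archendtime,   'yyyy-mm-dd hh24:mi:ss')
      - to_date(archmiddletime,'yyyy-mm-dd hh24:mi:ss')) * 86400 as tabhdr_seconds,
       archinscount1, archdelcount1, archinscount2, archdelcount2, archstatus
  from arch_log;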
--Insert test data (100,000 rows per table; a functional test does not need a huge volume)
SQL> begin
2 for i in 1..100000
3 loop
4 insert into tabhdr values(i,9);
5 insert into tabdet values(i);
6 end loop;
7 commit;
8 end;
9 /
PL/SQL procedure successfully completed.
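For a functional test the 100,000-row loop above is fine; if a bigger data set were needed, a set-based insert with a connect-by row generator is much faster than row-by-row PL/SQL (the 1,000,000 below is just an example figure):

--Set-based test data generation, e.g. 1,000,000 rows per table
insert into tabhdr select level, 9 from dual connect by level <= 1000000;
insert into tabdet select level from dual connect by level <= 1000000;
commit;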
--Create the stored procedure detach_pro
SQL> create or replace procedure detach_pro
2 is
3 maxrows number default 10000;
4 rowid_table dbms_sql.Urowid_Table;
5 i number;
6 cursor cur_1 is SELECT a.rowid FROM tabdet a WHERE tabhdrid IN(SELECT tabhdrid FROM tabhdr WHERE STATUS=9) order by a.rowid;
7 cursor cur_2 is SELECT a.rowid FROM tabhdr a WHERE status=9 order by a.rowid;
8
9 v_begintime char(19):=to_char(sysdate,'yyyy-mm-dd hh24:mi:ss');
10 v_middletime char(19);
11 v_inscount1 number:=0;
12 v_delcount1 number:=0;
13 v_inscount2 number:=0;
14 v_delcount2 number:=0;
15 v_errcode varchar2(100);
16 v_errerrm varchar2(1000);
17
18 begin
19 open cur_1;
20 loop
21 exit when cur_1%notfound;
22 fetch cur_1 bulk collect into rowid_table limit maxrows;
23
24 forall i in 1 .. rowid_table.count
25 insert into arch_tabdet select * from tabdet where rowid = rowid_table(i);
26 commit;
27 v_inscount1:=v_inscount1+rowid_table.count;
28 forall i in 1 .. rowid_table.count
29 delete from tabdet where rowid = rowid_table(i);
30 commit;
31 v_delcount1:= v_delcount1+rowid_table.count;
32 end loop;
33 close cur_1;
34 v_middletime:=to_char(sysdate,'yyyy-mm-dd hh24:mi:ss');
35 open cur_2;
36 loop
37 exit when cur_2%notfound;
38 fetch cur_2 bulk collect into rowid_table limit maxrows;
39
40 forall i in 1 .. rowid_table.count
41 insert into arch_tabhdr select * from tabhdr where rowid = rowid_table(i);
42 commit;
43 v_inscount2:=v_inscount2+rowid_table.count;
44 forall i in 1 .. rowid_table.count
45 delete from tabhdr where rowid = rowid_table(i);
46 commit;
47 v_delcount2:= v_delcount2+rowid_table.count;
48 end loop;
49 close cur_2;
50 insert into arch_log values (v_begintime,v_middletime,to_char(sysdate,'yyyy-mm-dd hh24:mi:ss'), v_inscount1,v_delcount1,v_inscount2,v_delcount2,'success',null,null);
51 commit;
52 exception
53 when others then
54 v_errcode :=sqlcode;
55 v_errerrm :=sqlerrm;
56 insert into arch_log values (v_begintime,v_middletime,to_char(sysdate,'yyyy-mm-dd hh24:mi:ss'), v_inscount1,v_delcount1,v_inscount2,v_delcount2,'failure',v_errcode,v_errerrm);
57 commit;
58 end;
59 /
Procedure created.
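One thing to note about the loop shape: the exit test sits before the fetch, so each cursor is fetched one extra (empty) time before the loop ends. It still terminates correctly, but the more common idiom with bulk collect ... limit is to exit right after the fetch when nothing came back. A sketch of the first loop written that way, reusing the declarations from detach_pro:

--Alternative batch loop: exit immediately after an empty fetch
open cur_1;
loop
  fetch cur_1 bulk collect into rowid_table limit maxrows;
  exit when rowid_table.count = 0;
  forall i in 1 .. rowid_table.count
    insert into arch_tabdet select * from tabdet where rowid = rowid_table(i);
  forall i in 1 .. rowid_table.count
    delete from tabdet where rowid = rowid_table(i);
  commit;  --one commit per batch, covering both the insert and the delete
end loop;
close cur_1;

Committing once per batch, after both the insert and the delete, also avoids the small window in the original (commit between insert and delete) where a batch could end up present in both the source and the archive table after a failure.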
--Start the first test run
SQL> select count(*) from tabdet;
COUNT(*)
----------
100000
SQL> select count(*) from tabhdr;
COUNT(*)
----------
100000
SQL> select count(*) from arch_tabdet;
COUNT(*)
----------
0
SQL> select count(*) from arch_tabhdr;
COUNT(*)
----------
0
SQL> exec detach_pro;
PL/SQL procedure successfully completed.
SQL> select count(*) from tabdet;
COUNT(*)
----------
0
SQL> select count(*) from tabhdr;
COUNT(*)
----------
0
SQL> select count(*) from arch_tabdet;
COUNT(*)
----------
100000
SQL> select count(*) from arch_tabhdr;
COUNT(*)
----------
100000
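The invariant stated in the requirement (remaining rows + archived rows = original row count) can also be checked directly; a quick sketch:

--Each total should equal the original 100000 rows per table
select (select count(*) from tabdet) + (select count(*) from arch_tabdet) as tabdet_total,
       (select count(*) from tabhdr) + (select count(*) from arch_tabhdr) as tabhdr_total
  from dual;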
SQL> select * from arch_log;
