The database leaks of account information from various websites at the end of last year made quite an impression. I took the opportunity to download several of the leaked databases, planning to analyze the account data the way a data analyst would. Although the data has already been "organized", it is still well worth studying on your own; after all, data sets this large are rare.
The problem with such a large volume of data is that each individual file is huge, and simply opening one is not easy. Don't even think about Notepad: it just crashes. The MSSQL client cannot open such a large SQL file either and immediately reports insufficient memory. The reason, reportedly, is that MSSQL loads all the data it reads into memory at once, so if the data is too large and memory runs out, the program simply fails.
Navicat Premium
Here I recommend a piece of software, Navicat Premium, which is quite powerful. It opens SQL files of several hundred megabytes with ease and without any lag. The client also supports connections to various databases such as MSSQL, MySQL, and Oracle; it has many other features that I will explore over time.
Although Navicat can open the 274 MB CSDN SQL file, viewing the raw content is not very useful: it is inconvenient to query, classify, or run statistics over the account records. The practical approach is to read the records one by one, split each record into its separate fields, and store those fields in a database as structured columns so they can be used conveniently later.
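As a sketch of the splitting step described above, here is a minimal parsing function. The " # " field separator and the field order (username, password, email) are assumptions about the dump's record format, and the function name is hypothetical; adjust both to match the actual file.

```php
<?php
// Hypothetical example: split one leaked record into named fields
// before inserting it into a database. The " # " separator and the
// (username, password, email) field order are ASSUMPTIONS about the
// dump format, not confirmed by the original article.
function parseRecord(string $line): ?array {
    $parts = explode(' # ', trim($line));
    if (count($parts) !== 3) {
        return null; // skip malformed lines instead of inserting garbage
    }
    return [
        'username' => $parts[0],
        'password' => $parts[1],
        'email'    => $parts[2],
    ];
}

// Usage: feed each line read from the file through the parser.
$record = parseRecord('someuser # 123456 # someuser@example.com');
```

Each returned array can then be bound to a prepared INSERT statement, one record per row.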
Use PHP to read very large files
PHP offers many ways to read a file, and choosing the method best suited to the target file can noticeably improve efficiency. Since the CSDN database file is very large, we should avoid reading it all at once; after all, each record that is read still has to be split and written to the database. A better approach is to read the file region by region: by combining PHP's fseek and fread, you can read an arbitrary portion of the file. Here is an example:
The code is as follows:
function readBigFile($filename, $count = 20, $tag = "\r\n") {
    $content = "";    // final content
    $current = "";    // buffer for the chunk just read
    $step = 1;        // how many bytes to advance each time
    $tagLen = strlen($tag);
    $start = 0;       // starting position
    $i = 0;           // line counter
    $handle = fopen($filename, 'r+'); // open in read/write mode; pointer at the start of the file
    while ($i < $count && !feof($handle)) {
        fseek($handle, $start, SEEK_SET); // position the pointer relative to the start of the file
        $current = fread($handle, $step); // read $step bytes
        $content .= $current;             // append to the combined string
        $start += $step;                  // move forward by the step size
        // take the last few characters of the string, as many as the delimiter is long
        $substrTag = substr($content, -$tagLen);
        if ($substrTag == $tag) { // is it a newline (or other delimiter)?
            $i++;
            $content .= "\n"; // append an extra newline for readability of the output
        }
    }
    // close the file
    fclose($handle);
    // return the result
    return $content;
}
$filename = "csdn.sql"; // file to read
$tag = "\n";            // line separator; note that double quotes are required here
$count = 100;           // number of lines to read
$data = readBigFile($filename, $count, $tag);
echo $data;
The value to pass for the function's $tag parameter depends on the system that produced the file: Windows uses "\r\n", Linux/Unix uses "\n", and classic Mac OS uses "\r". (Double quotes are required so that PHP interprets the escape sequences.)
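Since the dump's line endings depend on where the file was created rather than on the server running PHP, it may be safer to detect the separator from a sample of the file instead of hard-coding it. A minimal sketch (the function name and the 4 KB sample size are my own choices):

```php
<?php
// Detect the line ending used in a file by inspecting its first 4 KB.
// Checks "\r\n" first, since a "\r\n" file also contains "\n".
function detectLineEnding(string $filename): string {
    $handle = fopen($filename, 'r');
    $sample = fread($handle, 4096); // a small sample is enough
    fclose($handle);
    if (strpos($sample, "\r\n") !== false) return "\r\n"; // Windows
    if (strpos($sample, "\r") !== false)   return "\r";   // classic Mac OS
    return "\n";                                          // Linux/Unix
}
```

The result can be passed directly as the $tag argument of readBigFile.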
The general flow of the program: first define the basic variables needed for reading the file, then open the file, position the pointer at the specified location, and read a chunk of the specified size. Each chunk read is appended to a variable, until the required number of lines has been read or the end of the file is reached.
Never assume that everything in a program will work as planned.
The code above retrieves data at a specified position and of a specified size in the file, but the whole process runs only once, so it does not retrieve all of the data. To get everything, you could wrap this loop in an outer loop that checks whether the end of the file has been reached, but that wastes system resources and may even hit PHP's execution time limit before the end of such a large file is reached. A better method is to record the pointer's position after each batch of data has been read, and on the next run position the pointer at that saved location, so the file never has to be re-read from the beginning.
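The resume-from-a-saved-offset idea can be sketched as follows. This is an illustrative implementation, not the article's original code: the function name is hypothetical, and it uses fgets for line reading with ftell to capture the byte offset for the next run.

```php
<?php
// Sketch of resuming reads from a stored byte offset: read a batch of
// lines starting at $offset, and return the new offset so the next
// call (or the next script run) continues where this one stopped.
function readBatch(string $filename, int $offset, int $lines, string $tag = "\n"): array {
    $handle = fopen($filename, 'r');
    fseek($handle, $offset, SEEK_SET);   // jump to where we left off
    $batch = [];
    while (count($batch) < $lines && ($line = fgets($handle)) !== false) {
        $batch[] = rtrim($line, $tag);   // strip the trailing delimiter
    }
    $newOffset = ftell($handle);         // remember the position for next time
    fclose($handle);
    return [$batch, $newOffset];
}

// Usage: persist $newOffset between runs (e.g. in a small state file
// or a database row) so each invocation processes the next chunk.
list($batch, $newOffset) = readBatch('csdn.sql', 0, 100);
```

Because each run touches only one batch, the script stays well under PHP's execution time limit no matter how large the file is.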
As it happens, I still have not imported the CSDN database, because an analysis appeared on CNBETA just a few days after the leak. They moved fast! Once you see that someone else has already done it, much of the motivation evaporates, but for the sake of learning I will still take the time to finish it.