Problem solving when php+ajax imports big data
Let me go through the problems I ran into, one by one, in the order they came up.
Question 1: My original plan was to upload the file and then read it in the same request. The problem is that when the file is large the upload is slow, so from the user's point of view the page just sits there waiting, which is not user-friendly.
Solution: Here is how I did it (if someone has a better way, please share it). I upload the file first, save it into a dedicated folder called import, and return the file name. That confirms the upload succeeded, and once the name comes back I can use JS to show the user a prompt. Then a separate Ajax request asks PHP to read the file and insert the data into the database, as sketched below. But that is where the next problem appeared.
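For reference, here is a minimal sketch of that upload step, assuming the form field is named "file" and the responses are JSON; the field name, file naming and response shape are my own illustrative assumptions, not the actual project code.

<?php
// upload.php - save the upload into the import folder and return the file name.
// "file" as the field name and the JSON shape are assumptions for illustration.
if (!isset($_FILES['file']) || $_FILES['file']['error'] !== UPLOAD_ERR_OK) {
    echo json_encode(['status' => 'error', 'msg' => 'upload failed']);
    exit;
}

$name = uniqid('import_') . '.csv';        // avoid name collisions in the folder
$dest = __DIR__ . '/import/' . $name;      // the dedicated "import" folder

if (move_uploaded_file($_FILES['file']['tmp_name'], $dest)) {
    // Return the file name so JS can show a prompt and start the Ajax read step.
    echo json_encode(['status' => 'ok', 'file' => $name]);
} else {
    echo json_encode(['status' => 'error', 'msg' => 'could not save file']);
}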
Question 2: When I used Ajax to ask PHP to read the file and insert the rows into the database, the Ajax request was always cut off at exactly one minute. My first thought was PHP's maximum execution time, max_execution_time, but I raised it to 300 seconds and nothing changed. Then I thought it might be max_input_time, the maximum time allowed for receiving the request input, so I added an ini_set() call in the code. When I checked with ini_get(), though, the ini_set() had no effect and the value was still 60 seconds. I searched a lot online and it stayed at 60 seconds; I don't know why, and if anyone knows, please reply to me (thanks in advance, I'm a novice). With no other option I went to the server to change php.ini. The manager said modifications were not allowed, so for the sake of testing I secretly changed it and restored it afterwards. Even after changing it and testing, it still didn't work: execution still times out after one minute. I'm really confused and don't know why, so please give me advice. For now that approach is a dead end.
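Here is a small sketch of the checks described above; the 300-second value is just the one mentioned in the text. One detail worth noting: max_input_time is a PHP_INI_PERDIR setting, so it can only be changed in php.ini or a per-directory directive, which would explain why an ini_set() inside the script is ignored.

<?php
// Sketch of the timeout checks above; values are illustrative.
ini_set('max_execution_time', '300');    // equivalent to set_time_limit(300)

// max_input_time is PHP_INI_PERDIR: it can only be changed in php.ini or a
// per-directory directive, so this ini_set() call has no effect at runtime,
// which matches the behaviour observed above (ini_get() still reports 60).
ini_set('max_input_time', '300');

var_dump(ini_get('max_execution_time'));   // string(3) "300"
var_dump(ini_get('max_input_time'));       // still "60" from php.ini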
Since that approach no longer works, a 5 MB file can only be read in chunks, line by line. So the code was changed to do a chunked read: first make an Ajax request, read 2000 lines each time, process those 2000 rows and insert them into the database (a handy chunked-reading function is introduced at the end of this article). Each Ajax call returns a status flag and the number of rows read this time, and the next call continues from there, until the whole file has been read.

I also ran into a problem while checking each row for duplicates. I looped over the content I had read and checked whether each row already existed; when $count was greater than 0, meaning the row already existed, I used continue to move on to the next iteration. But when importing 10,000 rows, I always got a "server internal error" at around 8,000 rows. It was very frustrating, I didn't even know what to search for, and in the end I could only replace the continue with an if/else. Strange.

A small tip: when inserting into the database, do not insert rows one at a time. It is much faster to batch them, e.g. INSERT INTO aaa(`xx`,`xxx`) VALUES ('111','111'),('222','222'). A sketch of building such a statement follows below.
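This is a rough illustration of that multi-row insert tip. The table name aaa and the columns xx/xxx come from the example above; the PDO connection details and the sample rows are made-up placeholders.

<?php
// Build one multi-row INSERT instead of one INSERT per row.
$pdo = new PDO('mysql:host=localhost;dbname=test;charset=utf8', 'user', 'pass');

$rows = [
    ['111', '111'],
    ['222', '222'],
];

$placeholders = [];
$params       = [];
foreach ($rows as $row) {
    $placeholders[] = '(?, ?)';
    $params[] = $row[0];
    $params[] = $row[1];
}

// One statement for the whole batch, which is much faster than row-by-row inserts.
$sql  = 'INSERT INTO aaa (`xx`, `xxx`) VALUES ' . implode(',', $placeholders);
$stmt = $pdo->prepare($sql);
$stmt->execute($params);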
For reading a file by line number, the SplFileObject class is really easy to use and I recommend it; a sketch of such a chunked reader follows at the end. If anyone knows the answer to my problem, please give me some advice.
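Here is a sketch of a chunked line-reading helper built on SplFileObject, in the spirit of the function recommended above. The name, signature and return value are my own assumptions, not the original article's code.

<?php
// Read $count lines starting at line $offset (0-based) from $path.
function readFileLines($path, $offset, $count)
{
    $file = new SplFileObject($path, 'r');
    $file->setFlags(SplFileObject::DROP_NEW_LINE);

    $file->seek($offset);                // jump straight to the starting line
    $lines = [];
    while ($count-- > 0 && !$file->eof()) {
        $lines[] = $file->current();
        $file->next();
    }
    return $lines;
}

// Each Ajax call reads the next chunk, e.g. 2000 lines starting where the
// previous call stopped:
// $chunk = readFileLines('import/data.csv', $alreadyRead, 2000);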