Performance comparison of XML data reading methods (2)
In the previous installment we summarized the general-purpose XML reading methods, but in practice we rarely need all of the data in an XML source. So this time I also experimented with reading only a subset: filtering items by the first letter of the title and by their position.
For the three random-access reading methods, only the query condition needs to change:
XmlDocument:
var nodeList = doc.DocumentElement.SelectNodes("item[substring(title,1,1)='M'][position() mod 10 = 0]");

XPathNavigator:
var nodeList = nav.Select("/channel/item[substring(title,1,1)='M'][position() mod 10 = 0]");

Xml Linq:
var nodeList = from node in xd.XPathSelectElements("/channel/item[substring(title,1,1)='M'][position() mod 10 = 0]")
               select node;
With XPath, only one line of code needs to change. XPath is also fairly easy to pick up, and much simpler than SQL. With W3Schools' syntax introduction and MSDN's "LINQ to XML for XPath Users" as references, you can learn the essentials in a quarter of an hour.
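For completeness, here is a minimal sketch of what the full Linq version of the filtered query could look like. The method name testXmlLinq2 is mine; the Channel class and the xmlStream variable are assumed to be the ones from part (1) of this series, and the property names are taken from the reader code shown below.

Code

using System.Collections.Generic;
using System.Linq;
using System.Xml.Linq;
using System.Xml.XPath;

static List<Channel> testXmlLinq2()
{
    // xmlStream and Channel are assumed to be defined as in part (1).
    var xd = XDocument.Load(xmlStream);
    var nodeList = from node in xd.XPathSelectElements(
                       "/channel/item[substring(title,1,1)='M'][position() mod 10 = 0]")
                   select new Channel
                   {
                       Title = (string)node.Element("title"),
                       Link = (string)node.Element("link"),
                       Description = (string)node.Element("description"),
                       Content = (string)node.Element("content"),
                       PubDate = (string)node.Element("pubDate"),
                       Author = (string)node.Element("author"),
                       Category = (string)node.Element("category")
                   };
    return nodeList.ToList();
}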
The XmlReader approach, however, is not so easy. The task is the same: read the items whose title starts with 'M', keeping one out of every ten. After thinking for a long time I could not come up with an elegant implementation, and had to settle for this:
Code
static List<Channel> testXmlReader2()
{
    var lstChannel = new List<Channel>();
    var reader = XmlReader.Create(xmlStream);
    int n = 0;
    Channel channel = null;
Search:
    while (reader.Read())
    {
        if (reader.Name == "item" && reader.NodeType == XmlNodeType.Element)
        {
            while (reader.Read())
            {
                // Stop at </item>: the current item is finished.
                if (reader.Name == "item") break;
                if (reader.NodeType != XmlNodeType.Element) continue;
                switch (reader.Name)
                {
                    case "title":
                        var title = reader.ReadString();
                        // Filter: only titles starting with 'M' ...
                        if (title.Length == 0 || title[0] != 'M') goto Search;
                        n++;
                        // ... and only every tenth match.
                        if (n % 10 != 0) goto Search;
                        channel = new Channel();
                        channel.Title = title;
                        break;
                    case "link":
                        channel.Link = reader.ReadString();
                        break;
                    case "description":
                        channel.Description = reader.ReadString();
                        break;
                    case "content":
                        channel.Content = reader.ReadString();
                        break;
                    case "pubDate":
                        channel.PubDate = reader.ReadString();
                        break;
                    case "author":
                        channel.Author = reader.ReadString();
                        break;
                    case "category":
                        channel.Category = reader.ReadString();
                        break;
                    default:
                        break;
                }
            }
            // Collect the item only after all of its child elements have been read.
            if (channel != null)
            {
                lstChannel.Add(channel);
                channel = null;
            }
        }
    }
    return lstChannel;
}
As you can see, the code structure changes significantly. To perform the conditional filtering I had to introduce the local variable n, move the initialization of the entity class, and relocate the statement that adds items to the collection. I was even forced to dust off the goto statement I had not used in years (VB's GoTo fares a little better). Business logic seeps into the low-level details of the implementation; in Lao Zhao's words, a blast of syntactic noise hits you in the face.
The implementation class behind XmlTextReader, XmlTextReaderImpl (internal, not directly usable), is an enormous class with tens of thousands of lines of code that encapsulates a large number of operations performed directly at the XML character level. Because it works so close to the bottom layer, it is hard to find good code optimizations at the macro level. If the filtering conditions, that is, the business logic, become more complicated, the code will look completely different, and you can imagine what happens to its readability and maintainability.
Now let’s compare the time performance:
XmlDocument     26 ms
XPathNavigator  26 ms
XmlTextReader   20 ms
Xml Linq        28 ms
The four methods now post similar numbers. The times for the Document and Navigator approaches drop significantly, while the Reader approach improves very little, because it still has to read from beginning to end; its 3 ms gain can be attributed to the reduced overhead of creating entity objects. Oddly, the Linq method's time does not change at all, leaving it in last place.
You can test with different query conditions. Evidently each of the four methods has its own performance floor, which depends on the size of the XML source. For the first two methods, for example, the floor is the execution time of the XmlDocument.Load method; on my machine, just loading the XML takes 23 ms. The Linq method's floor is not unbreakable either: when only a few results need to be processed, its execution time drops by 1 to 2 milliseconds.
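If you want to try your own query conditions, a simple Stopwatch harness along the following lines is enough. It assumes the test methods and the shared xmlStream from this series; the exact figures will of course vary from machine to machine.

Code

using System;
using System.Collections.Generic;
using System.Diagnostics;

static void Measure(string name, Func<List<Channel>> test)
{
    test();                  // warm-up run, so JIT compilation does not distort the timing
    xmlStream.Position = 0;  // rewind the shared stream before the timed run (assumes it is seekable)
    var sw = Stopwatch.StartNew();
    test();
    sw.Stop();
    Console.WriteLine("{0}: {1} ms", name, sw.ElapsedMilliseconds);
}

// Example: Measure("XmlTextReader", testXmlReader2);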
In the Document and Navigator modes, performance degrades noticeably as the data volume grows; it is easy to guess that this is because they create a large number of useless objects. Looking at the memory usage of each method when all data is loaded with no filtering, the Document approach occupies about 23.3 MB while the Navigator approach occupies only about 22.9 MB, which also explains why the Document approach degrades more visibly. The Reader approach loads all of the data in only about 20.1 MB; excluding the overhead of program startup itself, that is less than half the memory of the previous two. The Linq approach once again performs amazingly on memory, taking less than 500 KB more than the Reader approach.
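Figures of this kind can be approximated without a profiler. A rough sketch, assuming you run one reading method per process launch so the measurements do not pollute one another:

Code

using System;
using System.Diagnostics;

static void ReportMemory(string label)
{
    // Force a collection first so only reachable objects are counted.
    long managed = GC.GetTotalMemory(true);
    long workingSet = Process.GetCurrentProcess().WorkingSet64;
    Console.WriteLine("{0}: managed heap {1:N0} bytes, working set {2:N0} bytes",
        label, managed, workingSet);
}

// Example: var list = testXmlReader2(); ReportMemory("XmlTextReader");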
Further analysis leads to a firmer conclusion: unless there is a special need, use XmlTextReader with caution; it copes poorly with changing requirements and is error-prone. The Linq method is the one to recommend most strongly: although its time performance falls slightly behind the Navigator method in some cases, its excellent memory behavior makes it the first choice. And I believe LINQ to XML will only grow more powerful in the future.