In the previous installment we summarized the general XML reading methods, but in practice we rarely need every piece of data in an XML source, so I also experimented with reading only part of it, filtering by the first letter of the title and by position.
For the three random-access reading methods, only the query condition needs to change:
XmlDocument: var nodeList = doc.DocumentElement.SelectNodes("item[substring(title,1,1)='M'][position() mod 10 = 0]");
XPathNavigator: var nodeList = nav.Select("/channel/item[substring(title,1,1)='M'][position() mod 10 = 0]");
Xml Linq: var nodelist = from node in xd.XPathSelectElements("/channel/item[substring(title,1,1)='M'][position() mod 10 = 0]") ...
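The Linq query above is shown truncated. Here is a minimal sketch of how it might be completed and materialized, assuming the Channel entity class and the xd document from the previous installment; the method name and the projection are my assumptions, not the author's code:

// Illustrative completion of the truncated Linq query (assumed, not from the article).
using System.Collections.Generic;
using System.Linq;
using System.Xml.Linq;
using System.Xml.XPath;   // provides the XPathSelectElements extension method

static List<Channel> testXmlLinq2(XDocument xd)
{
    var nodelist =
        from node in xd.XPathSelectElements(
            "/channel/item[substring(title,1,1)='M'][position() mod 10 = 0]")
        select new Channel
        {
            Title       = (string)node.Element("title"),
            Link        = (string)node.Element("link"),
            Description = (string)node.Element("description"),
            Content     = (string)node.Element("content"),
            PubDate     = (string)node.Element("pubDate"),
            Author      = (string)node.Element("author"),
            Category    = (string)node.Element("category")
        };
    return nodelist.ToList();
}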
With XPath, only one line of code has to change. XPath is also fairly easy to pick up, much simpler than SQL; skim the W3Schools syntax introduction and MSDN's LINQ To XML for XPath Users and you will have the essentials down within a quarter of an hour.
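For comparison, the same filter can also be written without XPath at all, in the spirit of the LINQ To XML for XPath Users article just mentioned. This is only an illustrative sketch under the same assumptions (xd is the loaded document whose root element is channel), not code from the article:

// Same filter expressed in plain Linq to XML instead of XPath (illustrative).
using System.Linq;
using System.Xml.Linq;

var everyTenthM = xd.Root                              // root element is <channel>
    .Elements("item")
    .Where(item => ((string)item.Element("title") ?? "").StartsWith("M"))
    .Where((item, i) => (i + 1) % 10 == 0);            // position() mod 10 = 0 (positions are 1-based)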
The XmlReader method is not so lucky. It, too, has to read only the items whose title starts with 'M' and keep one out of every ten. After puzzling over it for a long time I could not come up with an elegant implementation, so I ended up with this:
Code
static List<Channel> testXmlReader2()
{
    var lstChannel = new List<Channel>();
    var reader = XmlReader.Create(xmlStream);
    int n = 0;
    Channel channel = null;
Search:
    while (reader.Read())
    {
        if (reader.Name == "item" && reader.NodeType == XmlNodeType.Element)
        {
            while (reader.Read())
            {
                if (reader.Name == "item") break;           // reached </item>
                if (reader.NodeType != XmlNodeType.Element) continue;
                switch (reader.Name)
                {
                    case "title":
                        var title = reader.ReadString();
                        if (title[0] != 'M') goto Search;    // skip items whose title does not start with 'M'
                        n++;
                        if (n % 10 != 0) goto Search;        // keep only every tenth matching item
                        channel = new Channel();
                        channel.Title = title;
                        break;
                    case "link": channel.Link = reader.ReadString(); break;
                    case "description": channel.Description = reader.ReadString(); break;
                    case "content": channel.Content = reader.ReadString(); break;
                    case "pubDate": channel.PubDate = reader.ReadString(); break;
                    case "author": channel.Author = reader.ReadString(); break;
                    case "category": channel.Category = reader.ReadString(); break;
                    default: break;
                }
            }
            lstChannel.Add(channel);   // collect the item once all of its child elements have been read
        }
    }
    return lstChannel;
}
As you can see, the code structure has changed significantly. To do the conditional filtering I had to add the local variable n, move where the entity class is initialized, and move where results are added to the collection; I was even forced to dig up the goto statement I had not used in years (VB handles this kind of jump more gracefully). The business logic seeps into the low-level implementation details; in Lao Zhao's words, a blast of syntactic noise hits you in the face.
XmlTextReader's implementation proxy class, XmlTextReaderImpl (internal, so it cannot be used directly), is a monster of tens of thousands of lines that encapsulates a huge number of operations performed directly at the XML character level. Because the work sits so close to the bottom layer, it is hard to find any macro-level way to tidy up the code. If the filtering conditions, that is, the business logic, get more complicated, the code will look completely different again, and its readability and maintainability become little more than a mirage.
Now let’s compare the time performance:
XmlDocument: 26 ms
XPathNavigator: 26 ms
XmlTextReader: 20 ms
Xml Linq: 28 ms
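The article does not show the timing harness, so the figures above presumably come from something along these lines. A minimal sketch, assuming the static xmlStream field is a seekable stream shared by the test methods, as in the code above:

// Minimal timing sketch (assumed, not the author's harness).
using System;
using System.Collections.Generic;
using System.Diagnostics;

static void Time(string name, Func<List<Channel>> test)
{
    xmlStream.Position = 0;                      // rewind the shared stream before each run
    var sw = Stopwatch.StartNew();
    var result = test();
    sw.Stop();
    Console.WriteLine("{0}: {1} ms ({2} items)", name, sw.ElapsedMilliseconds, result.Count);
}

// e.g. Time("XmlTextReader", testXmlReader2);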
The four methods now post similar numbers. The times for the Document and Navigator methods have dropped noticeably, while the Reader method has barely improved, because it still has to read the document from beginning to end; the 3 ms it does save can be attributed to creating fewer entity objects. Stranger still, the Linq method has not changed at all and now comes in last.
You can try different query conditions. What emerges is that each of the four methods has its own performance floor, which depends on the size of the XML source. For the first two methods, for example, the floor is the time spent loading the document: on my machine the XmlDocument.Load call alone takes about 23 ms. The Linq method's floor is less fixed: when only a few results need processing, its execution time drops by another 1 to 2 milliseconds.
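To check that load-time floor yourself, a quick measurement like the one below is enough; the ~23 ms figure is from the author's machine and will differ elsewhere:

// Measure just the document-loading cost that bounds the first two methods.
using System;
using System.Diagnostics;
using System.Xml;

xmlStream.Position = 0;
var sw = Stopwatch.StartNew();
var doc = new XmlDocument();
doc.Load(xmlStream);
sw.Stop();
Console.WriteLine("XmlDocument.Load: {0} ms", sw.ElapsedMilliseconds);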
With the Document and Navigator methods, performance degrades markedly as the amount of data grows, and it is easy to guess why: they create a lot of objects that are never used. Looking at memory usage when all of the data is loaded with no filtering, the Document method occupies about 23.3 MB while the Navigator method occupies only about 22.9 MB, which also explains why the Document method degrades more visibly. The Reader method, loading all of the data, needs only about 20.1 MB; excluding the overhead of program startup itself, that is less than half the memory of the previous two. The Linq method again performs surprisingly well on memory, using less than 500 KB more than the Reader method.
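The article does not say how the memory figures were obtained. One rough way to compare the relative footprints, under the same assumptions as above, is to look at the managed-heap delta around a run and keep the result alive while measuring:

// Rough managed-heap comparison (assumed approach; the article's exact method is not stated).
using System;

long before = GC.GetTotalMemory(true);           // force a full collection for a stable baseline
var data = testXmlReader2();                     // or any of the other three test methods
long after = GC.GetTotalMemory(true);
Console.WriteLine("{0:F1} MB for {1} items", (after - before) / 1024.0 / 1024.0, data.Count);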
Further analysis leads to a further conclusion: unless there is a special need, use XmlTextReader with caution; it copes poorly with changing requirements and is error-prone. The Linq method gets an even stronger recommendation: although its times are slightly worse than the Navigator method in some cases, its excellent memory behaviour makes it the first choice, and I believe Linq To XML will only become more capable in the future.
This concludes Performance Comparison of XML Data Reading Methods (2). For more related content, please follow the PHP Chinese website (www.php.cn)!
