
html rendering function memory leak

王林 · forward · 2024-02-06 10:39:11


Question content

The problem I'm facing is that even just 200 requests cause the program to occupy 6 GB of the container's memory, and it ends up OOM-killed. My idea is to extract all text nodes present in the HTML and then process them to extract the tag name, the tag's HTML, and the tag's text. To generate the HTML for a specific tag, I use the Render function from golang.org/x/net/html, providing a strings.Builder as the io.Writer to write the generated HTML into. But for some reason the builder takes too much memory.

package main

import (
    "encoding/csv"
    "io"
    "log"
    "net/http"
    "strings"
    "golang.org/x/net/html"
)

func main() {
    mux := http.NewServeMux()
    mux.HandleFunc("/data", GetData)
    if err := http.ListenAndServe(":8001", mux); err != nil {
        log.Println(err)
    }
}

type TagInfo struct {
    Tag  string
    Name string
    Text string
}

// GetData is the http.Handler for /data: it fetches the URL given in
// the query string, extracts tag info, and writes it out as CSV.
func GetData(w http.ResponseWriter, r *http.Request) {
    u := r.URL.Query().Get("url")
    doc, err := GetDoc(u)
    if err != nil {
        log.Println(err)
        w.WriteHeader(500)
        return
    }
    var buf strings.Builder
    data := Extract(doc, &buf)
    csvw := csv.NewWriter(io.Discard)
    for _, d := range data {
        csvw.Write([]string{d.Name, d.Tag, d.Text})
    }
    csvw.Flush() // flush any buffered records
}

// GetDoc fires the GET request and parses the response body as HTML.
func GetDoc(u string) (*html.Node, error) {
    res, err := http.Get(u)
    if err != nil {
        return nil, err
    }
    defer res.Body.Close()
    return html.Parse(res.Body)
}

// Extract walks the parsed tree and, for every non-empty text node,
// records the parent tag's rendered HTML, its name, and the raw text.
func Extract(doc *html.Node, buf *strings.Builder) []TagInfo {
    var (
        tags = make([]TagInfo, 0, 100)
        f    func(*html.Node)
    )

    f = func(n *html.Node) {
        if n.Type == html.TextNode {
            text := strings.TrimSpace(n.Data)
            if text != "" {
                parent := n.Parent
                tag := Render(parent, buf)
                tagInfo := TagInfo{
                    Tag:  tag,
                    Name: parent.Data,
                    Text: n.Data,
                }
                tags = append(tags, tagInfo)
            }
        }
        for child := n.FirstChild; child != nil; child = child.NextSibling {
            f(child)
        }
    }
    f(doc)
    return tags
}

// Render renders the HTML around the tag. If the node is a
// text node, pass its parent node to this function instead.
func Render(n *html.Node, buf *strings.Builder) string {
    defer buf.Reset() // reuse the shared builder across calls
    if err := html.Render(buf, n); err != nil {
        log.Println(err)
        return ""
    }
    return buf.String()
}

If you want the specific list of URLs, here it is. I made about 60 requests at one time.

I tried using bytes.Buffer and sync.Pool, but both have the same problem. Using pprof I noticed that the WriteString method of strings.Builder accounted for most of the memory usage.
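For reference, a minimal sketch of one way such a heap profile can be collected. The question's server uses its own ServeMux, so the standard net/http/pprof handlers are not exposed on :8001; in this sketch they are served on a separate port (:6060 is an arbitrary choice):

import (
    "log"
    "net/http"
    _ "net/http/pprof" // registers /debug/pprof/* on http.DefaultServeMux
)

func init() {
    // Serve the profiling endpoints on their own port so they do not
    // interfere with the application's mux on :8001.
    go func() {
        log.Println(http.ListenAndServe(":6060", nil))
    }()
}

The heap profile can then be inspected with: go tool pprof http://localhost:6060/debug/pprof/heap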


Correct answer


The basic problem here is that the code accepts any Content-Type. That is not acceptable for crawling: most websites send text/html, and that is the only kind of response this pipeline should parse.

Even if the URL returns data that does not represent HTML at all, golang.org/x/net/html still accepts it without returning an error.

Take an example where application/pdf is returned: html.Parse will consume the binary PDF body and return no error, as the short demo below shows. For a scraping/crawling library, silently accepting binary data is odd behavior.
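To see this concretely, here is a small self-contained demo; the input bytes are just the start of a PDF header, chosen purely for illustration:

package main

import (
    "bytes"
    "fmt"

    "golang.org/x/net/html"
)

func main() {
    // The first bytes of a PDF file -- not HTML in any sense.
    pdf := []byte("%PDF-1.7\nbinary payload follows\n")
    doc, err := html.Parse(bytes.NewReader(pdf))
    // The HTML5 parsing algorithm never rejects input: the bytes are
    // wrapped into an implied <html><head></head><body>...</body> tree.
    fmt.Println(doc != nil, err) // prints: true <nil>
}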

The solution: check the response headers and continue only if the data really is HTML; otherwise you get ambiguity and unpredictable (usually higher) memory usage. A sketch of that check follows.
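Here is a minimal sketch of the guard applied to the question's GetDoc (the extra imports are fmt and mime; mime.ParseMediaType strips parameters such as "; charset=utf-8" from the header value):

func GetDoc(u string) (*html.Node, error) {
    res, err := http.Get(u)
    if err != nil {
        return nil, err
    }
    defer res.Body.Close()

    // Parse only when the server actually claims to be sending HTML.
    mediaType, _, err := mime.ParseMediaType(res.Header.Get("Content-Type"))
    if err != nil || mediaType != "text/html" {
        return nil, fmt.Errorf("unsupported content type %q", res.Header.Get("Content-Type"))
    }
    return html.Parse(res.Body)
}

With this guard, a URL that serves a PDF or any other binary payload fails fast instead of being parsed into one huge text node and rendered into the builder.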


Statement:
This article is reproduced from stackoverflow.com.