How to prevent your website from being directly captured by curl and displayed by someone else?
I used this method to display Baidu under my own domain name without any problem — so is there a way to keep others from doing the same to my site?
In fact, it is impossible to completely prevent scraping; you can only raise the bar for the other party. Sometimes being crawled is even beneficial, as with search engines. If you really want to deter scraping, there are many options: hotlink protection for images, obfuscating text with some algorithm when rendering it, server-side restrictions, and so on. Scraping and anti-scraping is a cat-and-mouse game, hehe~
Anything a normal browser can access can be simulated, so there is ultimately nothing you can do about it.
Loading content dynamically via Ajax deters simple scrapers somewhat, but there are ways around that too.
But I don't think this is really a problem: just because someone has curled your pages doesn't mean your copyright has been taken along with them. Studying the code and building a site with high stability and a good user experience should be an engineer's primary concerns.
As long as your page is public, this problem cannot be avoided.
This cannot be avoided entirely, but you can check the Referer header to block crawlers. That is easy to spoof, though... You can also limit the number of visits per IP.
Restrict by IP, access frequency, number of visits, etc.
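The per-IP frequency limit suggested above can be sketched in a few lines. This is a minimal in-memory sliding-window limiter (the `RateLimiter` name and the limits are illustrative, not from any particular framework); in production you would typically do this at the web server or with a shared store like Redis:

```python
import time
from collections import defaultdict, deque

class RateLimiter:
    """Allow at most `max_hits` requests per `window` seconds from each IP."""

    def __init__(self, max_hits=60, window=60.0):
        self.max_hits = max_hits
        self.window = window
        self.hits = defaultdict(deque)  # ip -> timestamps of recent requests

    def allow(self, ip, now=None):
        now = time.monotonic() if now is None else now
        q = self.hits[ip]
        # Drop timestamps that have fallen out of the window.
        while q and now - q[0] > self.window:
            q.popleft()
        if len(q) >= self.max_hits:
            return False  # over the limit: serve a 429 instead of the page
        q.append(now)
        return True

# Example: 3 requests per 10 seconds; the 4th in the window is rejected.
limiter = RateLimiter(max_hits=3, window=10.0)
results = [limiter.allow("1.2.3.4", now=t) for t in (0, 1, 2, 3)]
# results == [True, True, True, False]
```

This only raises the bar: a determined scraper can rotate IPs, just as it can spoof the Referer header.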
Some restrictions are still worth putting in place — they save bandwidth too. Configure nginx to prevent hotlinking.
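For the nginx hotlink protection mentioned above, a minimal sketch using the `ngx_http_referer_module` might look like this (assuming your domain is `example.com`; adjust the extensions and domains to taste):

```nginx
location ~* \.(gif|jpg|jpeg|png)$ {
    # Allow empty Referers (direct visits), mangled Referers, and our own domain.
    valid_referers none blocked example.com *.example.com;
    if ($invalid_referer) {
        return 403;  # reject hotlinked image requests
    }
}
```

Note that the Referer is client-supplied (`curl -e` sets it to anything), so this mainly stops casual hotlinking rather than deliberate scraping.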