
If there’s something we developers are really bad at, it’s guessing. We think we know which parts of our application are slow, and spend a lot of time optimising those, but in reality the bottlenecks are often somewhere else. The only sane thing to do is to measure, with the help of some profiling tools.

There are a few profilers available for PHP, the most commonly used being Xdebug, which combined with KCacheGrind/WinCacheGrind/MacCallGrind can show the function call graph and the time spent in each function.

In this article, we’re going to try another profiler, XHProf, developed at Facebook and open sourced in March 2009 (under the Apache 2.0 license). XHProf is a function-level hierarchical profiler, with a PHP extension (written in C) to collect the raw data, and a few PHP scripts for the reporting/UI layer.

 

According to Wikipedia:

profiling, a form of dynamic program analysis (as opposed to static code analysis), is the investigation of a program’s behavior using information gathered as the program executes. The usual purpose of this analysis is to determine which sections of a program to optimize, in order to increase its overall speed, decrease its memory requirements, or sometimes both.

So a profiler is a tool that records program events as they happen, and their effect on the system, collecting data with many different techniques. Some profilers only measure memory and CPU utilisation; others gather a lot more information, like full function call traces, times, and aggregate data. They can be flat or hierarchical, i.e. they can analyse each function by itself or in its context, with the full tree of its descendants.

Installation

At the moment, XHProf is only available for Linux and FreeBSD (and is expected to work on Mac OS X).

The easiest way to get it is via the PEAR installer (package home):

apt-get install php5-common
pecl config-set preferred_state beta
pecl install xhprof

If it complains because it can’t find config.m4, you can still build the extension manually, using the following steps:

wget http://pecl.php.net/get/xhprof-0.9.2.tgz
tar xvf xhprof-0.9.2.tgz
cd ./xhprof-0.9.2/extension/
phpize
./configure --with-php-config=/usr/local/bin/php-config
make
make install
make test

Once you have XHProf installed, you should enable it. Open your php.ini and add

[xhprof]
extension=xhprof.so
xhprof.output_dir="/var/tmp/xhprof"

Where /var/tmp/xhprof is the directory that will collect the profile data for each run.

Restart Apache, and the XHProf extension should be enabled (check that this is the case with “php -m”).

Profile a Block of Code

To profile a block of code, wrap it between these two calls:

// start profiling
xhprof_enable(XHPROF_FLAGS_CPU + XHPROF_FLAGS_MEMORY);

// the code you want to profile

// stop profiling
$xhprof_data = xhprof_disable();

You can dump the $xhprof_data array at any point if you want to inspect the raw profiler data for each function call (number of calls, wall time, CPU time, memory usage, peak memory usage).
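To give an idea of what that dump looks like: the raw array uses “parent==>child” strings as keys, and each entry holds the call count (ct), wall time (wt) and, when the corresponding flags are enabled, CPU time (cpu), memory (mu) and peak memory (pmu). Here is a minimal sketch with made-up numbers, hard-coded so it runs even without the extension loaded:

```php
<?php
// Sketch of the raw $xhprof_data structure; the numbers are invented so the
// example runs without the xhprof extension.
$xhprof_data = array(
    'main()' => array(
        'ct' => 1, 'wt' => 1200, 'cpu' => 800, 'mu' => 4096, 'pmu' => 8192,
    ),
    'main()==>strlen' => array(
        'ct' => 3, 'wt' => 15, 'cpu' => 10, 'mu' => 128, 'pmu' => 128,
    ),
);

foreach ($xhprof_data as $call => $stats) {
    printf("%-18s calls=%d wall=%dus\n", $call, $stats['ct'], $stats['wt']);
}
```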

xhprof_enable() accepts flags that control what gets profiled: by default only call counts and elapsed time are recorded; you can add memory and CPU utilisation. Make sure these are enabled in your development environment, but disable the CPU timer if you profile in production, as it adds a high overhead. If you find the output too noisy, you can disable the reporting of builtin PHP functions with the XHPROF_FLAGS_NO_BUILTINS flag, or even exclude specific functions, by passing a second parameter like this:

// ignore builtin functions and call_user_func* during profiling
$ignore = array('call_user_func', 'call_user_func_array');
xhprof_enable(0, array('ignored_functions' => $ignore));

Profile an Entire Page

It’s usually more useful to have a complete overview of the page, rather than a small block of code, and it’s probably better to have such an overview formatted as a table or a graph, as opposed to an array dump. For this purpose, XHProf provides a convenient UI that must be enabled in order to be used.

The code for the XHProf UI can be found in the xhprof_html/ and xhprof_lib/ directories. Assuming they are installed in /usr/local/lib/php/, we can symlink that directory to /var/www/xhprof/ so it’s available from our DocumentRoot.

We also need to create two PHP files:

/usr/share/php5/utilities/xhprof/header.php

<?php
if (extension_loaded('xhprof')) {
    include_once '/usr/local/lib/php/xhprof_lib/utils/xhprof_lib.php';
    include_once '/usr/local/lib/php/xhprof_lib/utils/xhprof_runs.php';
    xhprof_enable(XHPROF_FLAGS_CPU + XHPROF_FLAGS_MEMORY);
}

/usr/share/php5/utilities/xhprof/footer.php

<?php
if (extension_loaded('xhprof')) {
    $profiler_namespace = 'myapp';  // namespace for your application
    $xhprof_data = xhprof_disable();
    $xhprof_runs = new XHProfRuns_Default();
    $run_id = $xhprof_runs->save_run($xhprof_data, $profiler_namespace);

    // url to the XHProf UI (change the host name and path)
    $profiler_url = sprintf('http://myhost.com/xhprof/xhprof_html/index.php?run=%s&source=%s', $run_id, $profiler_namespace);
    echo '<a href="' . $profiler_url . '" target="_blank">Profiler output</a>';
}

Finally, we add them to an .htaccess file so they’re automatically prepended/appended to our pages:

php_value auto_prepend_file /var/www/xhprof/header.php
php_value auto_append_file /var/www/xhprof/footer.php

At the bottom of your pages, you should now have a link to the profiler output. This is a huge time saver: every time you load the page, you have fresh profiler data only one click away, and it doesn’t require external tools to parse and analyse it.

How to Use XHProf UI

If you click on the link at the bottom of the page, a new page opens with the profiler data:

As you can see, the page has a nice summary with overall statistics, and a table with all the function calls, which can be sorted by many parameters:

- Number of Calls
- Memory Usage
- Peak Memory Usage
- CPU time (i.e. CPU time in both kernel and user space)
- Wall time (i.e. elapsed time: if you perform a network call, that’s the CPU time to call the service and parse the response, plus the time spent waiting for the response itself and other resources)

Memory usage and CPU time are further differentiated into “Inclusive” and “Exclusive”: Inclusive Time includes the time spent in the function itself and in all the descendant functions; Exclusive Time only measures time spent in the function itself, without including the descendant calls.
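The distinction is easy to see on the raw data: a function’s exclusive time is its inclusive time minus the inclusive time of its children. A sketch with invented numbers, using the raw “parent==>child” key format (here foo takes 100ms in total, of which 60ms is spent inside its child bar, so foo’s exclusive time is 40ms):

```php
<?php
// Derive exclusive wall time from inclusive wall time in the raw XHProf
// format. Sample numbers are invented for illustration.
$xhprof_data = array(
    'main()==>foo' => array('ct' => 1, 'wt' => 100000), // foo: 100ms inclusive
    'foo==>bar'    => array('ct' => 1, 'wt' => 60000),  // bar: 60ms inclusive
);

$inclusive = array();
$childTime = array();
foreach ($xhprof_data as $call => $stats) {
    if (strpos($call, '==>') !== false) {
        list($parent, $child) = explode('==>', $call);
    } else {
        $parent = null;
        $child  = $call;
    }
    $inclusive[$child] = (isset($inclusive[$child]) ? $inclusive[$child] : 0) + $stats['wt'];
    if ($parent !== null) {
        $childTime[$parent] = (isset($childTime[$parent]) ? $childTime[$parent] : 0) + $stats['wt'];
    }
}

foreach ($inclusive as $fn => $wt) {
    $exclusive = $wt - (isset($childTime[$fn]) ? $childTime[$fn] : 0);
    printf("%s: inclusive=%dus exclusive=%dus\n", $fn, $wt, $exclusive);
}
```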
Finally, in the report page there’s also an input box to filter by function name, and a link to the full call graph, similar to the one you would get with *CacheGrind. Make sure you have GraphViz installed (apt-get install graphviz).

As stated in the documentation, XHProf keeps track of only one level of calling context and is therefore only able to answer questions about a function looking either one level up or down. This is rarely a problem, since you can drill down or up at any level. Clicking on a function name will in fact show details about that function, its parent (caller) and its children (called functions).

As a rule of thumb, when we’re ready to optimise our application, we should start sorting the data by CPU (exclusive), and look at the top of the list. The functions at the top are the most expensive ones, so that’s where we should focus our efforts and start optimising/refactoring. If there’s nothing obviously wrong, we drill down and see if there’s something more evident at an inner level. After every change, we run the profiler again, to see the progress (or lack thereof). Once we are happy, we sort by Memory Usage or Wall time, and start again.
Here’s a quick summary if you want to print a step-by-step worksheet as a reference:

1. Profile.
2. Sort by CPU/memory usage, time (exclusive) and function calls.
3. Start from the top of the list.
4. Analyse, refactor and/or optimise.
5. Measure the improvement.
6. Start over. Again, and again, and again.

Profiling can be an extremely tedious process, because it requires a lot of patience, and a lot of time staring at numbers in a table (how exciting, eh?). Hopefully, the results of this process are exciting: improvements are often dramatic, since rewriting the slowest parts of the code (and not those we think are slow) has a considerable effect on the overall page load and ultimately on the user’s experience. The advantage of using good tools is that they help maintain discipline and focus, and thus build experience.

Diffs and Aggregate Reports

XHProf has a nice feature to get the differences between two different runs, clearly marked in red and green colours. This way it is easy to instantly see the improvements after every change.
To view the report use a URL of the form:

http://%xhprof-ui-address%/index.php?run1=XXX&run2=YYY&source=myapp

Where XXX and YYY are run IDs, and the namespace is “myapp”.

Also, it’s possible to aggregate the results of different runs, to “normalise” the reports. To do so, separate the run IDs with a comma in the URL:

http://%xhprof-ui-address%/index.php?run=XXX,YYY,ZZZ&source=myapp

The Path to Scalability

Measure the Baseline

As in every journey, you must know where you are and where you want to go. If you need your application to scale, you must know what your targets are (users/sec, memory usage, page generation time), as well as your constraints (of your application, framework, server resources).
Before you even start coding your application, it’s a good idea to measure the baseline of your framework, if you use one. For instance, here’s a summary of an empty Zend Framework project (NB: the same considerations apply to any framework; I do NOT intend to single out ZF as a bad framework):

This tells you that unless you optimise the framework itself or cache the full page, you can’t use less than 2.5MB of memory or have fewer than 1500 function calls per page load. This is your starting point.
Profiling the framework itself is not just an exercise in style, but is an eye opener on how it works and how (in)efficient the various components are, so any time you decide to use one, you know what to expect.
There are many common programming practices whose performance impact might surprise you; here are some examples.
If you have a config.ini setting called “error.logging.level”, and use Zend_Config to read its value, you need to use $config->error->logging->level. Every “arrow” operator means two function calls. So that’s 6 function calls just to read the value of a config setting. If you read that value often or in a loop, consider saving it into a variable.
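To see why each arrow is so costly, here is a minimal stand-in (emphatically not Zend_Config itself) where each property access goes through the magic __get() method. In Zend_Config each arrow costs two calls (__get() plus an internal get()); this stub counts one per arrow, but the principle is the same:

```php
<?php
// Minimal stand-in for an overloaded config object: every "arrow" goes
// through __get(). Not Zend_Config; just an illustration of the mechanism.
class ConfigStub
{
    public static $calls = 0;
    private $data;

    public function __construct(array $data)
    {
        $this->data = $data;
    }

    public function __get($name)
    {
        self::$calls++;  // count every property access
        $value = $this->data[$name];
        return is_array($value) ? new self($value) : $value;
    }
}

$config = new ConfigStub(array(
    'error' => array('logging' => array('level' => 'warn')),
));

$level = $config->error->logging->level;  // three __get() calls right here
echo ConfigStub::$calls, "\n";

// If you need the value repeatedly, read it once into a variable:
for ($i = 0; $i < 100; $i++) {
    $current = $level;  // no further __get() calls
}
```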
Every time you use a view helper, there’s a lot of stuff going on behind the scenes; here’s the call stack (it’s actually much worse, but you get the idea):

When you call partial() to render a template, the current view object is cloned, and all the non-private variables are unset. This is done through expensive reflection and an awful lot of substr() calls. Use render() instead if you can (or a view helper if the template is really small and called many times).
Every time you render a template or use a model class, ZF scans the include path to find the correct file to load, even if you already requested that file before. You’ll be surprised to know how many stat calls are made in a single page execution: thousands! Luckily, with XHProf (or even with strace/dtrace, in this case) it’s easy to see whenever a file is read from disk, so you can optimise the include_path order, and possibly use APC to avoid scanning the include_path twice for the same file.
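One way to sidestep the repeated include_path scans is a class-map autoloader that maps class names straight to absolute paths. The sketch below is self-contained (it generates the class file in the temp directory just so it runs anywhere); in a real app the map would point at your models and helpers on disk:

```php
<?php
// Sketch: a class-map autoloader mapping class names to absolute paths,
// avoiding the include_path scan (and its stat calls) on every load.
// The class file is generated in the temp dir only to keep the demo
// self-contained; Demo_User is a hypothetical class.
$file = sys_get_temp_dir() . '/Demo_User.php';
file_put_contents($file, '<?php class Demo_User { public $name = "demo"; }');

$classMap = array('Demo_User' => $file);

spl_autoload_register(function ($class) use ($classMap) {
    if (isset($classMap[$class])) {
        require $classMap[$class];  // absolute path: no include_path scan
    }
});

$user = new Demo_User();  // loaded via the map, not the include_path
echo $user->name, "\n";
```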
Every time you use Zend_Json::encode() instead of json_encode(), unless you have a very specific reason to do so, you should hit yourself with a stick (perhaps not literally). Profiling the call and seeing what happens is left as an exercise to the reader.
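For the impatient reader, the punchline: the C-level builtin does the same job in a single call, while Zend_Json::encode() can fall back to a pure-PHP encoder with a deep call stack.

```php
<?php
// json_encode() is a single builtin call; prefer it unless you need a
// Zend_Json-specific feature.
$data = array('tool' => 'xhprof', 'runs' => array(1, 2, 3));
echo json_encode($data), "\n";  // {"tool":"xhprof","runs":[1,2,3]}
```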
As I said, I don’t intend to bash Zend Framework; I’m sure the others are no better or worse. What is important, though, is to be aware of the cost of each component of your framework, so you can make a conscious decision on which building blocks to use in your application.

Identify Bottlenecks

It is likely that your application will access external resources: a database, a web service, or data on disk. These are usually the most expensive operations you should try to minimise. If you don’t see them at the top of the list when you look at the XHProf reports, it probably means that there’s something wrong: in this case the framework might be the main bottleneck, or you need to refactor your architecture.
Sometimes, there’s no single call eating all the resources, but it’s easy to spot a cluster of function calls related to a certain part of the code:

Needless to say, this is a clear indicator that you must refactor that component.

Do Less. Do Nothing. Reuse.

When you identify a slow piece of code, before optimising it, rethink why you are doing something, whether it’s the right place to do it, and if possible reduce the amount of data you need to process. Only after these steps can you start worrying about the best way to do it.

I’m sure we all agree on the above statement, but sometimes it’s not that obvious what to look for. Or we think we have already optimised everything, the reports don’t show any single resource hog, and we have reached a dead end. This is when I find it useful to sort the XHProf reports by number of function calls. Usually, this is not a good indicator of the performance of a piece of code, because a single function responsible for retrieving data from an external source is a few orders of magnitude slower than many calls to an internal PHP function. On the other hand, even if PHP is fast, do we really need to call strtolower() 15000 times? Looking for oddities like this gives hints about how we process data, and may suggest a better way. Too often we bash a language for its slowness, forgetting that performance issues usually have more to do with the implemented algorithms than with the operations used.
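Taking the strtolower() case as an example, a tiny memoisation cache turns 15000 conversions into one per distinct value. The data here is hypothetical, but the pattern applies to any pure function called repeatedly on repetitive input:

```php
<?php
// Hypothetical data: 15000 tags, but only two distinct values.
$tags = array();
for ($i = 0; $i < 15000; $i++) {
    $tags[] = ($i % 2) ? 'PHP' : 'XHProf';
}

// Naive: 15000 strtolower() calls.
$naive = array();
foreach ($tags as $tag) {
    $naive[] = strtolower($tag);
}

// Memoised: one strtolower() call per distinct value (two here).
$cache    = array();
$memoised = array();
foreach ($tags as $tag) {
    if (!isset($cache[$tag])) {
        $cache[$tag] = strtolower($tag);
    }
    $memoised[] = $cache[$tag];
}

assert($naive === $memoised);   // same result,
echo count($cache), "\n";       // but only this many conversions
```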

Here are some other code smells that might suggest we are doing something in a sub-optimal way:

- Immutable functions called within loops
- The same content being generated twice
- Content that doesn’t change being regenerated every time
- Content being generated even if not used

All these cases are perfect candidates for caching. Of course I’m not suggesting caching everything. Remember that memory is another limited resource, so don’t abuse it if you need to scale; the key is to spread the load uniformly across all the available resources. You have to think about the cache-hit ratio, and start caching the things you hit all the time. Also, it makes little sense to cache if writing to the cache costs more than you save. But more often than not, you can cache a LOT of content.

In order of effectiveness, you can use static variables, APC, or memcached. But do not forget other kinds of caches that are even more effective: a proxy cache (or reverse proxy), and of course the user’s browser. If you send the correct headers, many requests will be resolved before even reaching the server!
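The cheapest level in that list can be sketched in a few lines: a static variable memoising an expensive computation within a single request. APC and memcached follow the same get-or-compute pattern, but persist across requests and servers. The function name and the “expensive work” here are placeholders:

```php
<?php
// Static-variable cache: the simplest get-or-compute pattern.
function expensive_lookup($key)
{
    static $cache = array();
    if (!isset($cache[$key])) {
        // stand-in for the real work (a DB query, a web service call, ...)
        $cache[$key] = strrev($key);
    }
    return $cache[$key];
}

echo expensive_lookup('profiler'), "\n"; // computed
echo expensive_lookup('profiler'), "\n"; // served from the static cache
```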

Some of the above-mentioned code smells, even if apparently obvious, are in practice not very simple to spot. For loops and content generated more than once it should be quite easy: just look at the number of times a certain function is called and draw your conclusions. Identifying data that is processed but not used may be harder: you see the traces, and ideally you should ask yourself why you are seeing those calls at all, or why you see them in that particular place. That’s why a lot of discipline is required: you keep looking at those reports for so long that you wish you could eliminate (violently) as many calls as possible so you don’t have to look at them anymore.

Decouple Services

Do not rely on having all the resources available on the same machine. The more you decouple the various services, the easier it is to scale horizontally. The problem is how to identify the parts to decouple. First of all, think about all the services that can be logically separated from the application itself: the data sources, content providers and data stores, but also the data-processing routines that are effectively black boxes. Then look at the profiler, and see if there’s a resource-intensive routine: can you move it to another machine? Can you add, say, a thin RESTful interface around it? If so, then that service can be moved out of your app and taken care of separately (e.g. with horizontal replication, if it’s a data store, or put on a cluster behind a load balancer if it’s a data processor).
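A thin JSON interface around such a routine can be as small as the sketch below (all names are hypothetical): once the expensive routine only talks JSON over HTTP, it can move to its own machine behind a load balancer without touching the rest of the application.

```php
<?php
// Sketch of a thin service wrapper around a resource-intensive routine.
function heavy_routine(array $input)
{
    // stand-in for the expensive data-processing black box
    return array('sum' => array_sum($input));
}

// In endpoint.php, running under a web server, you would do something like:
//   $payload = json_decode(file_get_contents('php://input'), true);
//   header('Content-Type: application/json');
//   echo json_encode(heavy_routine($payload));

// Called directly here for illustration:
echo json_encode(heavy_routine(array(1, 2, 3))), "\n"; // {"sum":6}
```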

Profile Under Load

As a last suggestion, it’s a good idea to collect profiler data under load, which is probably more representative of the real usage. To collect a random sample of profiler data, you can run a load testing tool (e.g. Apache ab, siege, avalanche) and save an XHProf run for roughly one request in 10,000, by modifying the included scripts like this:
/usr/share/php5/utilities/xhprof/header.php

<?php
$xhprof_on = false;
if (mt_rand(1, 10000) === 1) {
    $xhprof_on = true;
    if (extension_loaded('xhprof')) {
        include_once '/usr/local/lib/php/xhprof_lib/utils/xhprof_lib.php';
        include_once '/usr/local/lib/php/xhprof_lib/utils/xhprof_runs.php';
        xhprof_enable(XHPROF_FLAGS_CPU + XHPROF_FLAGS_MEMORY);
    }
}

/usr/share/php5/utilities/xhprof/footer.php

<?php
if ($xhprof_on && extension_loaded('xhprof')) {
    $profiler_namespace = 'myapp';  // namespace for your application
    $xhprof_data = xhprof_disable();
    $xhprof_runs = new XHProfRuns_Default();
    $run_id = $xhprof_runs->save_run($xhprof_data, $profiler_namespace);

    // url to the XHProf UI (change the host name and path)
    $profiler_url = sprintf('http://myhost.com/xhprof/xhprof_html/index.php?run=%s&source=%s', $run_id, $profiler_namespace);
    echo '<a href="' . $profiler_url . '" target="_blank">Profiler output</a>';
}

If your load testing tool can generate reports on CPU and memory usage over time, and collect statistics on which external services are accessed and with what frequency, then by all means observe those graphs: they give a lot of information on the real behaviour of your application and its critical areas. This is a goldmine when it comes to understanding what remains to be optimised. Also make sure the response time remains as flat as possible, without too many spikes or exponential growth as the load increases: this is a good indicator of stable code and a stable architecture.

Some Parting Thoughts

If you really want to achieve considerable speed gains and scalability improvements, you often have to be ruthless, question everything, ask all the stupid questions, follow the 5 Whys principle and, yes, be prepared to annoy everyone else on the team. I think I did that more than once, and I apologise sincerely, but it was for a good cause!

Resources

Some links with more detail about some of the topics mentioned, and some further reading:

http://mirror.facebook.net/facebook/xhprof/doc.html
http://pecl.php.net/package/xhprof
http://xdebug.org/docs/profiler
http://derickrethans.nl/xdebug_and_tracing_memory_usage.php
http://kcachegrind.sf.net/
http://sourceforge.net/projects/wincachegrind
http://www.maccallgrind.com/
http://www.slideshare.net/postwait/scalable-internet-architecture

 

Reposted from: PHP Profiler
