Posts in category ‘Cache’.
There has been a lot of work in recent weeks on deep internal changes in the framework. These now include:
- Transparent support of three web server APIs: FastCGI, CGI and SCGI.
- Support of a new prefork mode that allows safer management of worker processes.
- Implementation of a cache that is shared between forked processes.
I compared two blog systems: this one and WordPress 2.5 with a patched WP-Cache-2 add-on. I used the following configuration:
- Web Server lighttpd 1.4.13
- Interface FastCGI
- PHP 5.2
- Bytecode cache: XCache 1.2.1
- Database MySQL 5.0
- Caching for WP: WP-Cache-2 with an additional performance patch
- Hardware: AMD Athlon XP 64-bit, 1 GB RAM
- OS: Linux, Debian Etch 64bit.
I prepared two blogs, each filled with 1000 articles. Each article had 10 comments, and all the articles were organized into 10 categories in each blog.
I am going to run heavy benchmarks comparing WordPress, a blog system I know very well, with a CppCMS-based blog, the system I wrote.
The new caching system developed for CppCMS is quite smart: it stores each entry page twice, in its original and gzip-compressed forms. Under heavy load this allows pages to be served significantly faster, because the only work required is to push the HTML or the compressed HTML directly from the cache. Otherwise gzip compression (even at its fastest level) would consume considerable resources and reduce the performance of the system.
When it came to benchmarks, I discovered that the WP-Cache-2 plugin does its job well, but it caches only the HTML version of the page; thus, even when a page is cached, it must still be compressed by Apache's mod_deflate or by the PHP engine itself.
I patched this plugin so that it now stores two versions of the same page, original and compressed, and was able to get a 60% performance improvement:
- WordPress native plugin: 450 requests per second
- WordPress patched plugin: 720 requests per second
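The dual-version caching idea described above can be sketched as follows. This is a minimal illustration, not CppCMS's or WP-Cache-2's actual API; the names are hypothetical, and `fake_gzip` stands in for real gzip compression (which would use zlib's deflate in practice). The key point is that compression happens once, at store time, never on the request path:

```cpp
#include <cassert>
#include <map>
#include <string>

// One cache entry keeps both representations of a rendered page,
// so serving never pays the compression cost on the hot path.
struct CacheEntry {
    std::string html;    // original rendered page
    std::string gzipped; // pre-compressed body (placeholder here)
};

// Stand-in for real gzip; a real implementation would call zlib's deflate().
std::string fake_gzip(const std::string &body) {
    return "GZ(" + body + ")";
}

class PageCache {
    std::map<std::string, CacheEntry> store_;
public:
    // Compress once, at store time, not on every request.
    void put(const std::string &url, const std::string &html) {
        store_[url] = CacheEntry{html, fake_gzip(html)};
    }
    // Pick the representation matching the client's Accept-Encoding.
    const std::string *get(const std::string &url, bool accepts_gzip) const {
        auto it = store_.find(url);
        if (it == store_.end()) return nullptr;
        return accepts_gzip ? &it->second.gzipped : &it->second.html;
    }
};
```

A request handler would then check the client's Accept-Encoding header once and hand back the matching body verbatim.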
After this patch I feel the benchmarks will be fair, because without it we would incorrectly be comparing the time required to fetch a page from the cache with the time required to compress an entry page.
N.B.: The full benchmarks are coming soon.
After quite a long period of development I have decided to prepare the first public beta release of CppCMS.
The major components of this blog and the framework that I want to introduce in the first beta are the following:
- Implementation of Django-style template inheritance and filters (done 70%)
- Introduce a powerful cache system (done 100%)
- Replace SOCI with LibDBI (done 100%)
- Improve the blog: true Markdown, LaTeX equations, categories, etc. (done 100%)
- Write documentation (done 20%)
- Migrate my Hebrew blog from WordPress to CppCMS (done 100%)
There is lots of work to do, but CppCMS now looks much more mature than before.
As we saw in the previous article, the benchmarks showed that CppCMS can produce about 630 compressed pages per second with an output of about 20 Mbit/s. Is this enough?
For most cases it is... But as we saw, I want to use every CPU cycle as smartly as I can. Even if the model I suggested served as a proof of concept, an important point was missed: why should I create the same page so many times?
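As a back-of-the-envelope check on the figures above, 20 Mbit/s of output at 630 pages per second implies an average compressed page of roughly 4 KB:

```cpp
#include <cassert>

// 20 Mbit/s of output divided over 630 pages/s gives the
// implied average size of one compressed page.
constexpr double bits_per_second = 20e6;
constexpr double pages_per_second = 630.0;
constexpr double bytes_per_page = bits_per_second / (8.0 * pages_per_second);
// bytes_per_page comes out to roughly 3968 bytes, i.e. about 4 KB,
// a plausible size for a gzip-compressed blog page.
```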
This is the next logical step in the development of high performance web development framework.
First of all, we should understand the requirements of the caching system:
- Support for dropping the cache on update
- Support for dropping the cache by timeout
- Operation in three models: a single-process cache, a cache shared between processes, and a cache shared over the network
- Support for caching both at the entry-page level and at the single-view level
- Transparent storage of compressed content
Let's describe each one of them:
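Before going into the details, the first two requirements, invalidation on update and expiry by timeout, can be sketched in a minimal single-process cache. The names here are hypothetical and do not reflect CppCMS's real interface; the "trigger" mechanism is one common way to implement "drop cache on update", where each entry is tagged with the data it depends on:

```cpp
#include <cassert>
#include <chrono>
#include <map>
#include <set>
#include <string>

using Clock = std::chrono::steady_clock;

// Minimal sketch: entries expire by timeout and can also be dropped
// in groups via "triggers" when the underlying data is updated.
class TimedCache {
    struct Entry {
        std::string value;
        Clock::time_point expires;
        std::set<std::string> triggers; // e.g. "post:42", "comments"
    };
    std::map<std::string, Entry> store_;
public:
    void put(const std::string &key, const std::string &value,
             std::chrono::seconds ttl, std::set<std::string> triggers = {}) {
        store_[key] = Entry{value, Clock::now() + ttl, std::move(triggers)};
    }
    // Returns nullptr when missing or expired ("drop by timeout").
    const std::string *get(const std::string &key) {
        auto it = store_.find(key);
        if (it == store_.end()) return nullptr;
        if (Clock::now() >= it->second.expires) {
            store_.erase(it);
            return nullptr;
        }
        return &it->second.value;
    }
    // "Drop on update": invalidate every entry tied to a trigger.
    void drop_trigger(const std::string &trigger) {
        for (auto it = store_.begin(); it != store_.end();) {
            if (it->second.triggers.count(trigger))
                it = store_.erase(it);
            else
                ++it;
        }
    }
};
```

The other requirements (sharing between processes or over the network, two caching granularities, compressed storage) concern where and what this structure stores rather than its interface.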