CppCMS Blog :: FastCGI http://blog.cppcms.com/ A blog on CppCMS - C++ Web Development Framework Easy Comet: Server-Sent Events http://blog.cppcms.com/post/107 <div style="direction:ltr"> <h3>HTML5 Comet Technologies</h3> <p>Today there are two major technologies for efficient implementation of Comet applications using HTML5:</p> <ul> <li><a href="http://en.wikipedia.org/wiki/WebSocket">WebSockets (WS)</a></li> <li><a href="http://www.w3.org/TR/eventsource/">Server-Sent Events (SSE)</a></li> </ul> <p>The first, WS, provides full-duplex communication: both client and server can send each other events without creating a new connection. The second, SSE, provides real-time notification in a single direction, from server to client.</p> <p>It seems that WS is much more popular. Indeed, WS is much more powerful, and its support is <a href="http://cppcms.com/wikipp/en/page/cppcms_1x_tasks#Provide.WebSockets.support">scheduled</a> for implementation in CppCMS. On the other hand, WS has a very significant limitation: the WS protocol <strong>is not HTTP</strong>. It may look like HTTP, and its handshake is done at the HTTP level, but afterwards the communication is done using a different protocol that is not compatible with HTTP.</p> <p>As a result, there is no way to implement WS over an existing web server API such as FastCGI or SCGI. Moreover, even if the application serves HTTP directly, not every web server is able to proxy WebSocket communication.</p> <p>So, although WS is a very promising technology, it is very hard to deploy today in a real production environment.</p> <p>SSE, on the other hand, is implemented over plain HTTP without any changes. The protocol is very simple and is supported by most browsers: Firefox, Opera, Safari and Chrome, with support expected in IE10.</p> <p>No special changes to the underlying web server APIs are required, so FastCGI or SCGI work perfectly well.
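</p>

<p>For reference, an SSE response is just a text stream; the following reconstruction is based on the W3C draft linked above, not on code from this post. Each event consists of an optional <code>id:</code> line, one or more <code>data:</code> lines and a terminating blank line, and on reconnect the browser resends the last seen id in the <code>Last-Event-ID</code> request header. A ticker stream served as <code>text/event-stream</code> might look like this (the ids and prices are made up):</p>

<pre><code>id:41
data:102.53

id:42
data:102.71
</code></pre>

<p>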
For example, SSE can be easily implemented using the stable version of CppCMS without any special tools.</p> <p>Additionally, SSE supports stream synchronization in case of disconnect, and a fall-back to long polling using XHR can be implemented easily.</p> <h3>An Example</h3> <p>Let's implement a simple page that receives stock price updates.</p> <p>First, we create an EventSource object and attach a handler that updates the appropriate HTML field upon a notification from the server side:</p> <pre><code>function read_data() { var stream = new EventSource('/ticker'); stream.onmessage = function(e){ document.getElementById('price').innerHTML=e.data; }; stream.onerror = function(e){ console.log(e); }; } read_data(); </code></pre> <p>The code is simple, and disconnects are handled automatically and transparently. Now let's take a look at the server side, which is a little more challenging.</p> <p>Upon a request, we set the content type as required and fetch the ID of the last price that was sent to the client.</p> <pre><code>void main(std::string /*url*/) { response().set_content_header("text/event-stream"); response().set_header("Cache-Control", "no-cache"); auto last_id = atoi(request().cgetenv("HTTP_LAST_EVENT_ID")); </code></pre> <p>After that we detach the HTTP context object from the application, so that we can handle multiple connections simultaneously.</p> <pre><code> auto context=release_context(); </code></pre> <p>Idle connections are stored in a special <code>waiters_</code> set.
We add a special callback that allows us to clean up clients that have disconnected:</p> <pre><code> context-&gt;async_on_peer_reset([=](){ this-&gt;waiters_.erase(context); }); </code></pre> <p>Note that we use C++11 lambda expressions, which make the code much simpler and clearer.</p> <p>Then we check the last price id known to the client: if it differs from the current one, we send the client an update asynchronously; otherwise we add the client to the waiting list:</p> <pre><code> if(last_id != counter_) { async_send(context); } else waiters_.insert(context); } </code></pre> <p>The code that sends the data to the client is quite simple: we send the last price id (a simple counter that we use for synchronization in case of disconnect) and then the actual data.</p> <pre><code>void async_send(booster::shared_ptr&lt;cppcms::http::context&gt; waiter) { waiter-&gt;response().out() &lt;&lt; "id:" &lt;&lt; counter_ &lt;&lt;"\n" "data:" &lt;&lt; price_ &lt;&lt; "\n" "\n"; </code></pre> <p>Then we set up a completion callback. If the operation fails (the client has disconnected) we just return, and the context is destroyed automatically. Note that a member like <code>counter_</code> cannot be captured by value directly, so we take a local snapshot of it first.</p> <pre><code> auto sent_id = counter_; // the id we have just sent waiter-&gt;async_flush_output([=](cppcms::http::context::completion_type status){ if(status!=0) return; </code></pre> <p>Otherwise we check whether there were any updates since the data was sent; if so, we send the latest price once again, otherwise we add the client to the waiting list.</p> <pre><code> if(sent_id != this-&gt;counter_) { this-&gt;async_send(waiter); } else { this-&gt;waiters_.insert(waiter); } }); </code></pre> <p>This completes our function.</p> <pre><code>} </code></pre> <p>Note that our lambda expression captures the <code>waiter</code> variable and keeps it alive until the handler is executed.</p> <p>Now for the last and simplest thing: updating the price.
Upon a price update, we increase the identification counter and notify all contexts in the waiting list.</p> <pre><code>void update_price(double new_one) { counter_++; price_ = new_one; for(auto waiter : waiters_) { async_send(waiter); } </code></pre> <p>Afterwards we clear the list: the callback object now owns each context and will destroy it in case of an error.</p> <pre><code> waiters_.clear(); } </code></pre> <p>The full code of the sample, including simple timer-based random price generation, can be found <a href="http://cppcms.com/files/ticker/">here</a>.</p> <h3>Falling Back to Long Polling</h3> <p>This is very simple. We use the same protocol, but when the event is ready we close the connection. To let the server distinguish between EventSource and XHR long polling, we add a special header like <code>X-Event-Source-Simulate: Long-Polling</code>.</p> <p>Then we change our <code>async_send</code> function a little by adding the following lines:</p> <pre><code> if(waiter-&gt;request().getenv("HTTP_X_EVENT_SOURCE_SIMULATE")=="Long-Polling") { waiter-&gt;async_complete_response(); return; } </code></pre> <p>So that our function looks like:</p> <pre><code> waiter-&gt;response().out() &lt;&lt; "id:" &lt;&lt; counter_ &lt;&lt;"\n" "data:" &lt;&lt; price_ &lt;&lt; "\n" "\n"; if(waiter-&gt;request().getenv("HTTP_X_EVENT_SOURCE_SIMULATE")=="Long-Polling") { waiter-&gt;async_complete_response(); return; } waiter-&gt;async_flush_output(...) ... 
</code></pre> <p>Of course, the XHR simulation would have to send and manage the <code>Last-Event-Id</code> header and parse the response, but the server side looks almost identical.</p> <h3>Connecting to the web server</h3> <p>When it comes to configuring the web server, you should make sure that it does not buffer outgoing responses and sends them to the client immediately:</p> <ul> <li>Lighttpd does this by default for the FastCGI, SCGI and HTTP protocols.</li> <li>Apache does this by default for SCGI and HTTP, but for FastCGI (mod_fastcgi) it requires the <code>-flush</code> option to work correctly.</li> </ul> <p>Now, once again, Nginx shows us its problems:</p> <ul> <li>SCGI requires the option <code>scgi_buffering off</code></li> <li>HTTP requires the option <code>proxy_buffering off</code></li> </ul> <p>However, the FastCGI module does not implement such an option! <a href="http://trac.nginx.org/nginx/ticket/159">See this ticket</a></p> <p>So don't even try to use Nginx with FastCGI for Server-Sent Events.</p> <p>Once again, do yourself a favor: <a href="http://blog.cppcms.com/post/99">use lighttpd</a>.</p> <h3>Thanks</h3> <p>Special thanks to Barbu Paul - Gheorghe, who brought this powerful and useful HTML5 feature to my attention.</p> </div> IIS Supports FastCGI? Not really! http://blog.cppcms.com/post/86 <div style="direction:ltr"> <p><a href="http://www.iis.net/">IIS</a> officially <a href="http://www.iis.net/download/FastCGI">supports</a> FastCGI and even provides a built-in module starting from version 7.0.</p> <p>Does this mean that you can now develop generic applications that use the industry-standard, widely <a href="http://en.wikipedia.org/wiki/FastCGI">supported API</a> and deploy them with IIS?</p> <p>Not really.</p> <p>Several things you should know about IIS and FastCGI:</p> <ol> <li><p>It <a href="http://forums.iis.net/t/1155551.aspx">does not</a> support multi-threaded FastCGI applications.
That means your generic application would be able to process only a single request at a time per process.</p> <p> Even the <a href="http://www.fastcgi.com/devkit/examples/threaded.c">standard example</a> that comes with the most popular FastCGI library implementation would not work.</p> <p> This basically means you can't implement long-polling techniques with IIS/FastCGI, or share a cache between multiple worker threads of the same process.</p></li> <li><p>It <a href="http://forums.iis.net/t/1146857.aspx">does not</a> allow you to use external FastCGI applications - something very common in the web server world. All popular web servers (Apache, Lighttpd, Nginx) fully support it.</p> <p> IIS must manage your application's life cycle and does not allow you "to do what you think is good for you".</p></li> </ol> <p>This basically means only one thing: FastCGI support for IIS was designed to run PHP... Nothing more.</p> <p>Once again, Microsoft takes a very good generic standard and... manages to implement it in the most horrible way.</p> </div> What's Next? http://blog.cppcms.com/post/44 <div style="direction:ltr"> <p>The road map of the project includes two important milestones:</p> <ol> <li>CppCMS core components refactoring, including the following: <ul> <li> Removal of the dependency on CgiCC -- today only about 5% of the CgiCC library is used, and many features are unsupported or poorly supported. For example: file upload handling in CgiCC is very primitive, limited and error-prone, cookie support is buggy, and so on.</li> <li>Use of Boost.Asio as the internal event handler, because: <ol> <li>It provides transparent synchronous and asynchronous event handling, allowing a future implementation of server push technologies.</li> <li>It provides efficient timer-based event handling.</li> </ol> </li> <li> Removal of the dependency on libfcgi and writing a Boost.Asio-friendly implementation of FastCGI/SCGI connectors.
Implementation of HTTP connectors as well.</li> <li> Support of plug-in applications in the CppCMS framework.</li> <li> Improving compilation speed by applying the <code>pimpl</code> idiom more widely and removing unnecessary classes.</li> </ul> </li> <li>Better support of i18n and l10n: <ul> <li> Transparent support of <code>std::wstring</code> in forms, including automatic encoding checking and conversion.</li> <li> Support of <code>std::locale</code> for localization of output such as numbers, dates and monetary values, for translation, and so on.</li> <li> Optional support of ICU, icu::UnicodeString and icu::Locale, which would add features that <code>std::locale</code> does not support and allow replacing <code>std::locale</code> features with more correct implementations provided by ICU.</li> </ul> </li> </ol> <p>These changes will significantly break API backward compatibility, but it will be possible to adapt code almost "mechanically" to the new API.</p> </div> API Changes and mod-prefork http://blog.cppcms.com/post/24 <div style="direction:ltr"> <p>There has been a lot of work in recent weeks on deep internal changes to the framework.
Now they include:</p> <ol> <li>Transparent support of three web server APIs: FastCGI, CGI and SCGI.</li> <li>Support of the new mod-prefork that allows safer management of worker processes.</li> <li>Implementation of a cache that is shared between forked processes.</li> </ol> <p> <a href="/post/24">more...</a> </p> </div> Components of CppCMS http://blog.cppcms.com/post/3 <div style="direction:ltr"> <p>There are several important components that CppCMS implements:</p> <ol> <li>FastCGI Application Framework</li> <li>Easy Berkeley DB -- Simple API to BDB</li> <li>Templates System</li> <li>Text Tools -- text to HTML conversion tools</li> <li>Nice URLs support</li> </ol> <p>I'll describe all of these in detail there.</p> <p> <a href="/post/3">more...</a> </p> </div> Benchmarks, First Results http://blog.cppcms.com/post/4 <div style="direction:ltr"> <p><em>Updated 28/12/07, 17:00</em></p> <p>To show that all the time I have spent on this system wasn't wasted, I ran simple benchmark tests on the following setup:</p> <h4>Server</h4> <ul> <li>Hardware: AMD Athlon XP 3000+ 64 bit, 1GB RAM.</li> <li>OS: Debian Etch 64 bit.</li> <li>Web server: Lighttpd 1.4.13</li> <li>CMS: This blog software, configured with 5 working threads.</li> <li>Database of 85MB: 5,000 articles with 10 comments each.</li> </ul> <h4>Network devices</h4> <ul> <li>100Mbit network cards</li> <li>Edimax $75 home router</li> </ul> <h4>Client</h4> <ul> <li>Hardware: Dell Latitude C400, PIII 750MHz, 256MB RAM</li> <li>OS: Debian Etch 32 bit</li> </ul> <p> <a href="/post/4">more...</a> </p> </div> The Goals http://blog.cppcms.com/post/2 <div style="direction:ltr"> <p>When I started working on this project, many were (and still are) extremely skeptical about its future. Many questions were asked, like: "Are you a masochist? Who do you think will ever use it? Hardware is cheap, we do not need such things!"
etc.</p> <p>In this post I'll explain my point of view and show why a "C++ Web Development Framework" is more than just "yet another crazy idea of another geek".</p> <p> <a href="/post/2">more...</a> </p> </div>