CppCMS Blog :: Comet http://blog.cppcms.com/ A blog on CppCMS - C++ Web Development Framework Server-Sent Events in CppCMS Trunk http://blog.cppcms.com/post/110 <div style="direction:ltr"> <p>There is a simple class added to the contrib section in trunk.</p> <p><a href="https://cppcms.svn.sourceforge.net/svnroot/cppcms/framework/trunk/contrib/server_side/sse/">https://cppcms.svn.sourceforge.net/svnroot/cppcms/framework/trunk/contrib/server_side/sse/</a></p> <p>The Server-Sent Events support is provided in the form of a generic class <code>sse::event_source</code> that the user is expected to derive from.</p> <p>There are also two simple event source objects with predefined policies:</p> <ul> <li>A queue bounded to a certain capacity: <code>sse::bounded_event_queue</code></li> <li>Real-time state updates that keep clients updated with the latest state - rather than sending all the events from the beginning: <code>sse::state_stream</code></li> </ul> <p>Examples of a simple chat and a stock ticker are provided.</p> <p>Notes:</p> <ul> <li>The <code>sse::event_source</code> supports falling back to long polling using XHR by sending a special header, notifying the server that the connection should be closed rather than kept open.</li> <li>Although the code resides within trunk, it is fully usable with the current stable CppCMS release, 1.0.2.</li> <li>This code does nothing exceptional; you don't really have to use it to create SSE, it just makes them easy to handle without knowing too much about server-side event-driven programming.</li> </ul> </div> Easy Comet: Server-Sent Events http://blog.cppcms.com/post/107 <div style="direction:ltr"> <h3>HTML5 Comet Technologies</h3> <p>Today there are two major technologies for efficient implementation of Comet applications using HTML5:</p> <ul> <li><a href="http://en.wikipedia.org/wiki/WebSocket">WebSockets (WS)</a></li> <li><a 
href="http://www.w3.org/TR/eventsource/">Server-Sent Events (SSE)</a></li> </ul> <p>The first - WS - provides full-duplex communication, such that both client and server can send each other events without creating a new connection. The second - SSE - provides real-time notification in a single direction, server to client.</p> <p>It seems that WS is much more popular. Indeed, WS is much more powerful, and its support is <a href="http://cppcms.com/wikipp/en/page/cppcms_1x_tasks#Provide.WebSockets.support">scheduled</a> for implementation in CppCMS. On the other hand, WS has a very significant limitation: the WS protocol <strong>is not HTTP</strong>. It may look like HTTP, and its handshake is done at the HTTP level, but afterwards the communication is done using a different protocol that is not compatible with HTTP.</p> <p>As a result, there is no way to implement WS over an existing web server API such as FastCGI or SCGI. Moreover, even if the application uses HTTP directly, not every web server is able to proxy WebSocket communication.</p> <p>So, although WS is a very promising technology, it is very hard to deploy it in a real production environment today.</p> <p>SSE, on the other hand, is implemented over plain HTTP without any changes. The protocol is very simple and is supported by most browsers - Firefox, Opera, Safari, Chrome - and expected to be supported by IE10.</p> <p>No special changes are required to the underlying web server APIs, so FastCGI and SCGI work perfectly well. For example, SSE can easily be implemented using the stable version of CppCMS without any special tools.</p> <p>Additionally, SSE supports stream synchronization in case of disconnect. 
The fall-back to long polling using XHR can also be implemented easily.</p> <h3>An Example</h3> <p>Let's implement a simple page that receives stock price updates.</p> <p>First, we create an EventSource object and attach a handler that updates the appropriate HTML field upon notification from the server side:</p> <pre><code>function read_data() { var stream = new EventSource('/ticker'); stream.onmessage = function(e){ document.getElementById('price').innerHTML=e.data; }; stream.onerror = function(e){ console.log(e); }; } read_data(); </code></pre> <p>The code is simple, and disconnects are handled automatically and transparently. Now let's take a look at the server side, which is a little more challenging.</p> <p>Upon a request we set the required content type and fetch the id of the last price that was sent to the client.</p> <pre><code>void main(std::string /*url*/) { response().set_content_header("text/event-stream"); response().set_header("Cache-Control", "no-cache"); auto last_id = atoi(request().cgetenv("HTTP_LAST_EVENT_ID")); </code></pre> <p>After that we detach the HTTP context object from the application, so that we are able to handle multiple connections simultaneously.</p> <pre><code> auto context=release_context(); </code></pre> <p>The idle connections are stored in a special <code>waiters_</code> set. 
We add a special callback that allows us to clean up clients that have disconnected:</p> <pre><code> context-&gt;async_on_peer_reset([=](){ this-&gt;waiters_.erase(context); }); </code></pre> <p>Note that we use C++11 lambda expressions, which make the code much simpler and clearer.</p> <p>Then we check the last price id known to the client; if it differs from the current one, we send the client an update asynchronously, otherwise we add the client to the waiting list:</p> <pre><code> if(last_id != counter_) { async_send(context); } else waiters_.insert(context); } </code></pre> <p>The code that sends the event to the client is quite simple: we send the last price id - a simple counter used for synchronization in case of disconnect - followed by the actual data.</p> <pre><code>void async_send(booster::shared_ptr&lt;cppcms::http::context&gt; waiter) { waiter-&gt;response().out() &lt;&lt; "id:" &lt;&lt; counter_ &lt;&lt;"\n" "data:" &lt;&lt; price_ &lt;&lt; "\n" "\n"; </code></pre> <p>Then we take a snapshot of the id we have just sent and set up a completion callback; if the operation fails (the client has disconnected), we just return, and the context is destroyed automatically.</p> <pre><code> auto sent_id = counter_; waiter-&gt;async_flush_output([=](cppcms::http::context::completion_type status){ if(status!=0) return; </code></pre> <p>Otherwise we check whether there were any updates since the data was sent; if so, we send the latest price once again, otherwise we add the client to the waiting list.</p> <pre><code> if(sent_id != this-&gt;counter_) { this-&gt;async_send(waiter); } else { this-&gt;waiters_.insert(waiter); } }); </code></pre> <p>This completes our function.</p> <pre><code>} </code></pre> <p>Note that our lambda expression captures the <code>waiter</code> variable and keeps it alive until the handler is executed.</p> <p>Now the last and the simplest thing - updating the price. 
Upon a price update, we increase the identification counter and notify all objects in the waiting list.</p> <pre><code>void update_price(double new_one) { counter_++; price_ = new_one; for(auto waiter : waiters_) { async_send(waiter); } </code></pre> <p>Afterwards we clear the list - now the callback object owns the context and will destroy it in case of an error.</p> <pre><code> waiters_.clear(); } </code></pre> <p>The full code of the sample, including simple timer-based random price generation, can be found <a href="http://cppcms.com/files/ticker/">here</a>.</p> <h3>Falling Back to Long Polling</h3> <p>This is very simple. We use the same protocol, but when the event is ready we close the connection. In order to let the server distinguish between EventSource and XHR long polling, we add a special header like <code>X-Event-Source-Simulate: Long-Polling</code></p> <p>Then we change our <code>async_send</code> function a little by adding the following lines:</p> <pre><code> if(waiter-&gt;request().getenv("HTTP_X_EVENT_SOURCE_SIMULATE")=="Long-Polling") { waiter-&gt;async_complete_response(); return; } </code></pre> <p>So that our function looks like:</p> <pre><code> waiter-&gt;response().out() &lt;&lt; "id:" &lt;&lt; counter_ &lt;&lt;"\n" "data:" &lt;&lt; price_ &lt;&lt; "\n" "\n"; if(waiter-&gt;request().getenv("HTTP_X_EVENT_SOURCE_SIMULATE")=="Long-Polling") { waiter-&gt;async_complete_response(); return; } waiter-&gt;async_flush_output(...) ... 
</code></pre> <p>Of course, the XHR simulation has to send and manage the <code>Last-Event-Id</code> header and parse the response, but the server side looks almost identical.</p> <h3>Connecting to the web server</h3> <p>When it comes to configuring a web server, you should make sure that it does not buffer outgoing responses and sends them to the client immediately:</p> <ul> <li>Lighttpd does this by default for the FastCGI, SCGI and HTTP protocols.</li> <li>Apache does this by default for SCGI and HTTP, but for FastCGI (mod_fastcgi) it requires the <code>-flush</code> option to work correctly.</li> </ul> <p>Now once again Nginx shows us its problems:</p> <ul> <li>SCGI requires the option <code>scgi_buffering off</code></li> <li>HTTP proxying requires the option <code>proxy_buffering off</code></li> </ul> <p>However, FastCGI does not implement such an option! <a href="http://trac.nginx.org/nginx/ticket/159">See this ticket</a></p> <p>So don't even try to use Nginx with FastCGI for Server-Sent Events.</p> <p>Once again, do yourself a favor and <a href="http://blog.cppcms.com/post/99">use lighttpd</a>.</p> <h3>Thanks</h3> <p>Special thanks to Barbu Paul - Gheorghe, who brought this powerful and useful HTML5 feature to my attention.</p> </div> First beta version of CppCMS 1.x.x is officially out! http://blog.cppcms.com/post/63 <div style="direction:ltr"> <p>Hello all CppCMS users.</p> <p>The first beta version of CppCMS 1.x.x is available for download from <a href="https://sourceforge.net/projects/cppcms/files/">Sourceforge</a>. 
The build instructions can be found <a href="http://cppcms.sourceforge.net/wikipp/en/page/cppcms_1x_build">here</a>.</p> <p>This version is very different from the CppCMS 0.0.x branch - it fixes many design flaws of the previous version; it is almost a 90% rewrite of the original code according to the new design.</p> <p>It also includes many <a href="http://cppcms.sourceforge.net/wikipp/en/page/cppcms_1x_whats_new">important features</a>.</p> <p>The most significant ones:</p> <ul> <li><p>A full rewrite of the CppCMS core that introduced:</p> <ul> <li>Asynchronous programming support</li> <li>Removal of 3rd party libraries from the core API.</li> <li>A stable API and ABI through all major releases.</li> </ul> </li> <li>Improved Ajax support with the introduction of JSON-RPC</li> <li>Powerful i18n and l10n</li> <li>Native Windows support, including support for MSVC.</li> <li>And much more...</li> </ul> <p>So now the CppCMS beta is ready.</p> <p>Hopefully the first release candidate will be ready within about 3 months, and the first stable release is expected at the end of 2010 or the beginning of 2011.</p> </div> CppCMS 1.x.x updates http://blog.cppcms.com/post/58 <div style="direction:ltr"> <p>After big internal changes, I will explain some critical architectural updates made in CppCMS 1.x.x.</p> <p>In a few words:</p> <ol> <li>Removed the <code>cppcms_boost</code> library. Only several essential tools, not visible in the API, are still taken from <code>cppcms_boost</code> and compiled statically into the cppcms library.</li> <li>The Booster library was introduced. It is a library with Boost-like interfaces that do not depend on the actual Boost. 
Some of the code is taken from the Boost library, some parts are wrappers of good C libraries, and some is totally new code I wrote with ABI compatibility in mind.</li> </ol> <p>Following my <a href="http://art-blog.no-ip.info/cppcms/blog/post/57">previous post</a>, feel free to update your working copies.</p> <h2>Rationale</h2> <p>CppCMS 1.x.x will provide a backward compatible API and ABI, and thus it can't rely on the Boost library in its public API. The only possible way to provide a Boost-style API to users is actually wrapping it.</p> <p>CppCMS 1.x.x introduces an asynchronous event loop to web development - something that is very critical for Comet programming. This loop was based on Boost.Asio. But unfortunately it had very big limitations, and writing a good wrapper around Boost.Asio was unfeasible.</p> <p>So a small "Boost<em>er</em>" library replaced the required functionality from Boost, partially reimplementing it, partially wrapping C libraries and partially borrowing code from Boost itself.</p> <p>Booster has the following components:</p> <ul> <li>Smart pointers: <code>shared_ptr</code>, <code>intrusive_ptr</code> from Boost and small pointers for pimpl design: <code>hold_ptr</code>, <code>copy_ptr</code> and <code>clone_ptr</code> (my own).</li> <li>Function - similar to <code>std::tr1::function</code> and <code>boost::function</code> - a small class important for callback implementation (my own implementation).</li> <li>Regular expressions - a wrapper of the PCRE library with a Boost.Regex-like API. 
It provides only <code>regex</code>, <code>match_result</code>, <code>regex_search</code> and <code>regex_match</code>.</li> <li>System: <code>error_code</code>, <code>error_category</code> and <code>system_error</code> (my own).</li> <li>Thread: a Boost-like pthreads wrapper.</li> <li>Locale - the full Boost.Locale library (my own).</li> <li>And the most important: the AIO library - a library inspired by Asio.</li> </ul> <p>The AIO library is the central CppCMS event loop; it has an Asio-like API - proactor design, callback interface, etc.</p> <p>However, unlike Asio, it uses very few template classes and it is prefork-friendly (<a href="https://svn.boost.org/trac/boost/ticket/4162">unlike Asio</a>).</p> <p>The Booster.Aio interface allows a Comet application to receive various asynchronous notifications from any source that can deliver them over sockets.</p> </div> CppCMS meets Comet http://blog.cppcms.com/post/47 <div style="direction:ltr"> <p>One of the major requirements for the framework refactoring was support for <a href="http://en.wikipedia.org/wiki/Comet_(programming)">Comet</a>. Now, with the introduction of asynchronous request handling and persistent application servers, it becomes a reality.</p> <h3>Client Side</h3> <p>There is an HTML <a href="http://cppcms.svn.sourceforge.net/viewvc/cppcms/framework/trunk/examples/chat/the_chat.html?revision=2000&amp;view=markup">source</a> of a simple chat client that uses the <a href="http://www.dojotoolkit.org/">Dojo</a> toolkit. 
It does the following:</p> <ol> <li><p>Submits new messages to the server application by posting a form using XHR:</p> <pre><code> function send_data() { var kw = { url : "/chat/post", form : "theform" }; dojo.xhrPost(kw); dojo.byId("message").value=""; return false; } </code></pre></li> <li><p>Receives new messages from the server using long polling via XHR:</p> <pre><code> var message_count = 0; function read_data() { dojo.xhrGet( { url: "/chat/get/" + message_count, timeout: 120000, handleAs: "text", load: function(response, ioArgs) { dojo.byId("messages").innerHTML = response + '&lt;br/&gt;' + dojo.byId("messages").innerHTML; message_count++; read_data(); return response; }, error: function(response,ioArgs) { read_data(); return response; } }); } dojo.addOnLoad(read_data); </code></pre></li> </ol> <p>So, the client side is quite simple (although the error handling could be much better).</p> <h3>Server Side</h3> <p>First we create our long-running asynchronous application, which receives two kinds of requests: "/post" -- with new data, and "/get/NN" -- fetch message number NN. We assign these calls to the two member functions <code>post</code> and <code>get</code>.</p> <pre><code>class chat : public cppcms::application { public: chat(cppcms::service &amp;srv) : cppcms::application(srv) { dispatcher().assign("^/post$",&amp;chat::post,this); dispatcher().assign("^/get/(\\d+)$",&amp;chat::get,this,1); } </code></pre> <p>This class includes two data members:</p> <pre><code>private: std::vector&lt;std::string&gt; messages_; std::vector&lt;cppcms::intrusive_ptr&lt;cppcms::http::context&gt; &gt; waiters_; </code></pre> <p>The history of all chat messages -- <code>messages_</code> -- and all pending <code>get</code> requests that can't be satisfied because the message does not exist yet -- <code>waiters_</code>.</p> <p>Each "waiter" is actually a pointer to a request/response context that can be used for message transport.</p> <p>Now, when a new message arrives, the <code>post</code> member 
function is called:</p> <pre><code>void post() { if(request().request_method()=="POST") { if(request().post().find("message")!=request().post().end()) { messages_.push_back(request().post().find("message")-&gt;second); broadcast(); } } release_context()-&gt;async_complete_response(); } </code></pre> <p>If a message field is found in the POST data, it is added to the <code>messages_</code> list and all waiters are notified using the <code>broadcast()</code> member function.</p> <p>At the end, the current request context is released and completed.</p> <p>The broadcasting is done as follows:</p> <pre><code>void broadcast() { for(unsigned i=0;i&lt;waiters_.size();i++) { waiters_[i]-&gt;response().set_plain_text_header(); waiters_[i]-&gt;response().out() &lt;&lt; messages_.back(); waiters_[i]-&gt;async_complete_response(); waiters_[i]=0; } waiters_.clear(); } </code></pre> <p>For each pending request the last message is written and the request is completed. After that, the pending list is cleared.</p> <p>When a <code>get</code> request arrives, it is handled by the <code>get(std::string no)</code> member function. First of all we check whether the requested message exists; if so, we just return it to the user.</p> <pre><code>unsigned pos=atoi(no.c_str()); if(pos &lt; messages_.size()) { response().set_plain_text_header(); response().out()&lt;&lt;messages_[pos]; release_context()-&gt;async_complete_response(); } </code></pre> <p>Otherwise, if the requested message is the next one that does not exist yet, we add the request context to the pending list <code>waiters_</code>:</p> <pre><code>else if(pos == messages_.size()) { waiters_.push_back(release_context()); } </code></pre> <p>If the requested message is too far ahead -- probably a client error -- we just set the status to "404 Not Found" and complete the response.</p> <pre><code>else { response().status(404); release_context()-&gt;async_complete_response(); } </code></pre> <p>Now, all we need to do is to add the application to the main run loop under the script name "/chat" and start 
the service.</p> <pre><code>cppcms::service service(argc,argv); cppcms::intrusive_ptr&lt;chat&gt; app=new chat(service); service.applications_pool().mount(app,"/chat"); service.run(); </code></pre> <h3>Summary</h3> <p>So, this simple chat service was written in about 50 lines of C++ code and about the same amount of JavaScript code.</p> <p>I must admit that it is simplistic and not efficient; for example, a newly connected client receives all the messages one by one rather than in bulk (which can easily be fixed), and I do not handle timeouts and disconnects. But the general idea is quite clear:</p> <ul> <li>An asynchronous, long-running application that handles <strong>all</strong> requests is created.</li> <li>It manages all outstanding requests and uses them for server-side push.</li> </ul> <p>This is actually a base for the future development of tools like XML-RPC and JSON-RPC that allow a client to call server-side objects asynchronously, and it can be used for the implementation of any other Comet protocols.</p> </div>