We are under attack...
Here in Israel...
I hear explosions of Grad rockets fired by Hamas at our cities. I hear sirens that give us short warnings to run for shelter.
This is daily routine...
What Gives Israel the Right to Defend Itself?
Artyom Beilis,
Lead CppCMS Developer, from Israel
SVN Repositories Have Moved to a New Location
Due to upgrades at SourceForge, the SVN repositories have moved. If you are using the version from SVN trunk, please check it out again:
CppCMS
- HTTP-Protocol: http://svn.code.sf.net/p/cppcms/code/framework/trunk
- SVN-Protocol: svn://svn.code.sf.net/p/cppcms/code/framework/trunk
CppDB
- HTTP-Protocol: http://svn.code.sf.net/p/cppcms/code/cppdb/trunk
- SVN-Protocol: svn://svn.code.sf.net/p/cppcms/code/cppdb/trunk
Server-Sent Events in CppCMS Trunk
There is a simple class added to the contrib section in trunk.
https://cppcms.svn.sourceforge.net/svnroot/cppcms/framework/trunk/contrib/server_side/sse/
The Server-Sent Events support is provided in the form of a generic class sse::event_source that the user is expected to derive from.
Two simple event source classes with predefined policies are also provided:
- A queue bounded to a certain capacity:
sse::bounded_event_queue
- Real-time state updates that keep clients updated with the latest state, rather than replaying all events from the beginning:
sse::state_stream
Examples of a simple chat and a stock ticker are provided.
Notes:
- sse::event_source supports falling back to long polling using XHR: the client sends a special header notifying the server that the connection should be closed rather than kept open.
- Although the code resides in trunk, it is fully usable with the current stable CppCMS release, 1.0.2.
- This code does nothing exceptional; you don't strictly need it to create SSE. It just makes SSE easy to handle without knowing too much about server-side event-driven programming.
CppCMS 1.0.2 - Bug Fix Release
CppCMS 1.0.2 has been released. It is a bug fix release.
Fixed Bugs:
- Fixed incorrect HTML generation using as_table
- Fixed build issues with latest ICU: 3540278
- Fixed some warnings in Boost.Locale
- Fixed missing error report in case of running out of disk space in file upload: 3542469
- Fixed visual studio 11 build issues: 3527712,3527702
- Fixed a problem with automatic type detection in C++11 mode with gcc: 3537374
- Fixed bugs that made stackstream useless
- Fixed build issues on Mac OS X: 3513865
- Fixed OpenSSL detection on Windows: 3515712
- Fixed incorrect SCGI configuration for nginx in cppcms_run
Easy Comet: Server-Sent Events
HTML5 Comet Technologies
Today there are two major technologies for efficient implementation of Comet applications using HTML5: WebSockets (WS) and Server-Sent Events (SSE).
The first, WS, provides full-duplex communication, such that both client and server can send each other events without creating a new connection. The second, SSE, provides real-time notification in a single direction, from server to client.
It seems that WS is much more popular. Indeed, WS is much more powerful, and its support is scheduled for implementation in CppCMS. On the other hand, WS has a very significant limitation: the WS protocol is not HTTP. It may look like HTTP, and its handshake is done at the HTTP level, but afterwards the communication is done using a different protocol that is not compatible with HTTP.
As a result, there is no way to implement WS over existing web server APIs such as FastCGI or SCGI. More than that, even if the application uses HTTP directly, not every web server is able to proxy WebSocket communication.
So, although WS is a very promising technology, it is very hard to deploy in a real production environment today.
SSE, on the other hand, is implemented over plain HTTP without any changes. The protocol is very simple and is supported by most browsers - Firefox, Opera, Safari and Chrome - and is expected to be supported by IE10.
No special changes to the underlying web server APIs are required, so FastCGI or SCGI work perfectly well. For example, SSE can easily be implemented using the stable version of CppCMS without any special tools.
Additionally, SSE supports stream synchronization in case of disconnect, and a fall-back to long polling using XHR can easily be implemented.
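The wire format itself is worth seeing: each event is a block of field:value lines terminated by a blank line. As a rough illustration (a sketch for this article, not part of CppCMS), here is a helper that produces one such frame:

```cpp
#include <sstream>
#include <string>

// Build one SSE frame as it appears on the wire: an "id:" line used for
// resynchronization after reconnect, a "data:" payload line, and a
// terminating blank line that marks the end of the event.
std::string sse_frame(long id, const std::string &data)
{
    std::ostringstream out;
    out << "id:" << id << "\n"
        << "data:" << data << "\n"
        << "\n";
    return out.str();
}
```

A browser-side EventSource delivers the data field of each frame to onmessage and remembers the last id, resending it as the Last-Event-ID header when it reconnects.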
An Example
Let's implement a simple page that receives stock price updates.
First, we create an EventSource object and attach a handler that updates the appropriate HTML field upon notification from the server side:
function read_data() {
    var stream = new EventSource('/ticker');
    stream.onmessage = function(e) {
        document.getElementById('price').innerHTML = e.data;
    };
    stream.onerror = function(e) {
        console.log(e);
    };
}
read_data();
The code is simple and trivial; disconnects are handled automatically and transparently. Now let's take a look at the server side, which is a little bit more challenging.
Upon a request, we prepare the content type as required and fetch the ID of the last known price that was sent to the client:
void main(std::string /*url*/)
{
    response().set_content_header("text/event-stream");
    response().set_header("Cache-Control", "no-cache");
    auto last_id = atoi(request().cgetenv("HTTP_LAST_EVENT_ID"));
After that we detach the HTTP context object from the application, such that we will be able to handle multiple connections simultaneously.
auto context=release_context();
Idle connections are stored in a special waiters_ set. We add a callback that allows us to clean up clients that have disconnected:
    context->async_on_peer_reset([=](){
        this->waiters_.erase(context);
    });
Note that we use C++11 lambda expressions, which make the code much simpler and clearer.
Then we check the last price id known to the client; if it differs from the current one, we send the client an update asynchronously, otherwise we add the client to the waiting list:
    if(last_id != counter_) {
        async_send(context);
    }
    else {
        waiters_.insert(context);
    }
}
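A side note on the HTTP_LAST_EVENT_ID variable read at the top of main(): the browser resends the id of the last event it received as the Last-Event-ID request header, and a CGI-style gateway exposes every request header as an environment variable. The mapping is done by the gateway, not by our code; for illustration:

```cpp
#include <cctype>
#include <string>

// Illustrative: how a CGI-style gateway derives the environment variable
// name from an HTTP request header - dashes become underscores, letters
// are upper-cased, and the HTTP_ prefix is prepended.
std::string header_to_cgi(const std::string &name)
{
    std::string out = "HTTP_";
    for (char c : name)
        out += (c == '-') ? '_'
                          : static_cast<char>(std::toupper(static_cast<unsigned char>(c)));
    return out;
}
```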
The code that sends the data to the client is quite simple: we send the last price id - a simple counter that we use for synchronization in case of disconnect - and then send the actual data.
void async_send(booster::shared_ptr<cppcms::http::context> waiter)
{
    waiter->response().out() <<
        "id:" << counter_ << "\n"
        "data:" << price_ << "\n"
        "\n";
Then we set up a completion callback. If the operation fails (the client has disconnected), we just return, and the context is destroyed automatically.
    // A C++11 lambda cannot capture the data member counter_ by value
    // directly, so we capture a local copy of it instead.
    auto last_sent = counter_;
    waiter->async_flush_output([=](cppcms::http::context::completion_type status){
        if(status!=0)
            return;
Otherwise we check whether there were any updates since the data was sent; if so, we send the latest price once again, otherwise we add the client to the waiting list.
        if(last_sent != this->counter_) {
            this->async_send(waiter);
        }
        else {
            this->waiters_.insert(waiter);
        }
    });
This completes our function.
}
Note that the lambda expression captures the waiter variable and keeps it alive until the handler is executed.
Now for the last and simplest part - updating the price. Upon a price update, we increase the identification counter and notify all clients in the waiting list:
void update_price(double new_one)
{
    counter_++;
    price_ = new_one;
    for(auto waiter : waiters_) {
        async_send(waiter);
    }
Afterwards we clear the list - from now on the completion callback owns each context and will destroy it in case of an error.
    waiters_.clear();
}
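The synchronization logic can be modeled without any CppCMS types. The following toy model (integers stand in for HTTP contexts; it is an illustration for this article, not part of the sample) mirrors the walkthrough: a client whose last-seen id differs from the server counter is served immediately, otherwise it is parked until the next update:

```cpp
#include <set>
#include <vector>

// Toy model of the ticker's state machine. sent_ records which clients
// received data, standing in for the asynchronous async_send() calls.
struct ticker_model {
    int counter_ = 0;
    double price_ = 0.0;
    std::set<int> waiters_;
    std::vector<int> sent_;

    // Mirrors main(): serve immediately or park in the wait set.
    void on_request(int client, int last_id) {
        if (last_id != counter_)
            sent_.push_back(client);
        else
            waiters_.insert(client);
    }

    // Mirrors update_price(): bump the id, wake every parked client.
    void update_price(double p) {
        ++counter_;
        price_ = p;
        for (int w : waiters_)
            sent_.push_back(w);
        waiters_.clear();
    }
};
```

Walking through it by hand: a client arriving with last_id equal to counter_ waits; the next update_price() wakes it; a client that reconnects with a stale id is served at once.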
The full code of the sample, including simple timer-based random price generation, can be found there.
Falling Back to Long Polling
This is very simple. We use the same protocol, but when the event is ready we close the connection. In order to let the server distinguish between EventSource and XHR long polling, we add a special header such as X-Event-Source-Simulate: Long-Polling.
Then we change our async_send function a little by adding the following lines:
    if(waiter->request().getenv("HTTP_X_EVENT_SOURCE_SIMULATE")=="Long-Polling") {
        waiter->async_complete_response();
        return;
    }
So that the function now looks like:
    waiter->response().out() <<
        "id:" << counter_ << "\n"
        "data:" << price_ << "\n"
        "\n";
    if(waiter->request().getenv("HTTP_X_EVENT_SOURCE_SIMULATE")=="Long-Polling") {
        waiter->async_complete_response();
        return;
    }
    waiter->async_flush_output(...)
    ...
...
Of course, the XHR simulation would have to send and manage the Last-Event-Id header and parse the response itself, but the server side looks almost identical.
Connecting to the Web Server
When it comes to configuring a web server, you should make sure that it does not buffer outgoing responses and instead sends them to the client immediately:
- Lighttpd does this by default for FastCGI, SCGI and HTTP protocols.
- Apache does this by default for SCGI and HTTP, but for FastCGI (mod_fastcgi) it requires the -flush option to work correctly.
Once again, Nginx shows its problems:
- SCGI requires the option scgi_buffering off
- HTTP proxying requires the option proxy_buffering off
However, the FastCGI module does not implement such an option! See this ticket.
So don't even try to use Nginx with FastCGI for Server-Sent Events.
Once again, do yourself a favor: use lighttpd.
Thanks
Special thanks to Barbu Paul - Gheorghe, who brought this powerful and useful HTML5 feature to my attention.