There are currently three ways of connecting a WDialog application to the outer world, especially to the Web: CGI, FastCGI, and JSERV. Note that the Perl bindings currently only support CGI.
Beyond the choice of acronym, the connection method determines the runtime model, i.e. how the resources of the operating system can be used: whether two activations of the application run in the same process or in different processes, and how parallel accesses to the same runtime entities are resolved. Although we focus here on the connection to the web server, the runtime model also constrains the possible solutions for other connections, for example to database systems.
The CGI interface is well-known and available for almost all web servers. Furthermore, CGI defines a set of possible interactions between the web server and the application, and serves as a reference for what one can expect. For these reasons, CGI is the basic interface of WDialog.
CGI starts a new process for every request. This has two advantages: (1) the requests are processed in separate processes such that they cannot interfere with each other, and (2) it is guaranteed that the application returns all allocated resources (open files etc.) to the operating system when it terminates. These two points are the reasons why CGI is still used today for critical applications, although there is a performance bottleneck: every process must be initialized anew.
There are some persistent misconceptions about CGI. For example, some people think that the fact that CGI uses the fork and exec system calls makes it slow from the very beginning, especially when the binary to start has a size of several megabytes. This is not the problem. Modern Unix-based operating systems are heavily optimized with respect to fork and exec, and an experiment showed that my old 400 MHz system can start 30 CGI processes per second without causing high CPU load; the process image was bigger than one megabyte. The actual problem with CGI is that loading the process image is not all of the initialization work. WDialog must parse the XML file containing the UI definition, and it must prepare the XML tree for the transformation. These actions may take more than a second for big applications.
Nevertheless, there is a way to reduce the initialization time significantly, and this makes CGI interesting again. The idea is to avoid parsing the XML file by loading preformatted binary data instead. You can create the binary representation by calling the program wd-xmlcompile which is part of the WDialog distribution:
wd-xmlcompile sample.ui

This creates the file sample.ui.bin, and the loader of WDialog automatically finds this file and loads it instead of sample.ui. This trick often reduces the load time to less than 0.5 seconds.
In order to run the application as CGI, call the function Wd_run_cgi.run from your main program - it does all the rest.
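For the CGI case, the complete main program can be as short as the following sketch. (Whether Wd_run_cgi.run takes additional optional arguments depends on the WDialog version; a plain call is assumed here, so check the API documentation for the exact signature.)

```ocaml
(* main.ml: the whole CGI main program. Wd_run_cgi.run reads the
   CGI environment, runs the dialog, and writes the response.
   The argument-less call is an assumption; see the WDialog API
   documentation for the exact signature. *)
let () =
  Wd_run_cgi.run ()
```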
Summary for CGI
The FastCGI protocol is an extension of the CGI model which allows multiple requests to be processed by the same process. The FastCGI application either runs in an application server environment provided by the web server (the most common method) or as a stand-alone daemon listening for FastCGI connections. The details of the FastCGI protocol, including instructions on how to set it up for various web servers, can be found on the FastCGI Project web page.
Using FastCGI in WDialog is accomplished by calling the function Wd_run_fastcgi.serv. This function implements a run loop which processes connections sequentially.
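A minimal FastCGI main program might look like this sketch (assuming serv can be called without further arguments; the actual signature is given in the WDialog API documentation):

```ocaml
(* main.ml: run the application as a FastCGI server. serv enters
   the run loop and processes incoming connections sequentially;
   it normally does not return. *)
let () =
  Wd_run_fastcgi.serv ()
```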
Many different forms of concurrency are possible with FastCGI, and in most cases very little needs to be done to make the WDialog application aware of them. This is especially true in the web-server-managed environment, where the web server generally implements a process pool model: it runs N copies of the application at startup time, and requests are then routed to the processes in an implementation-dependent way. As long as the session state is kept in some sort of shared store, the application need not even be aware that it is operating in a concurrent environment.

Threads are also possible, in two forms. You may either start multiple threads and have each one call Wd_run_fastcgi.serv, or you may use threads to perform background tasks which do NOT talk on the FastCGI output channels; no multiplexing of output is possible over a single connection to the web server. The first thread model is very similar to the process pool model, except that a shared session manager need not be used.

For an external application, i.e. one which does not use the web server as a process manager, concurrency is left completely up to the application.
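The first thread model can be sketched as follows. (The worker count of 4 is an arbitrary placeholder, and it is an assumption that Wd_run_fastcgi.serv may be called from several threads; check the WDialog documentation before relying on this.)

```ocaml
(* Start four threads, each running the sequential FastCGI accept
   loop. Because all threads live in the same process, session
   state kept in ordinary global data structures is visible to
   every worker, so no shared session manager is needed. *)
let () =
  let workers =
    Array.init 4 (fun _ -> Thread.create Wd_run_fastcgi.serv ())
  in
  Array.iter Thread.join workers
```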
The JSERV protocol was developed by the Java Apache Project, and is still used by Jakarta. Although these projects are based on the Java language, the protocol as such is language-independent, and it turns out that it is very simple to connect a JSERV-enabled web server with a servlet engine that is not written in Java.
The Java Apache Project is dead; no further development takes place, as all subprojects have moved to Jakarta. Nevertheless, I currently recommend using mod_jserv, the JSERV extension for Apache 1.3 from the Java Apache Project, because it is much simpler to extract it from the whole software project. However, mod_jk works, too.
The architecture behind JSERV is quite simple. The web server is extended with the JSERV protocol, and every request opens a new connection to the servlet process. This process is a permanently running daemon. The web server forwards the page request over this connection to the servlet process, and the latter processes it and sends the answer back to the web server. Effectively, the servlet process behaves like a second web server behind the first, but it speaks not the full HTTP protocol but the simpler and less general JSERV protocol.
In the original Java environment, the servlet process is a JVM (Java virtual machine), and it executes the code of the application. There is also a part handling the JSERV protocol, but this is simply a library that can be loaded like any other library. The Java background explains why the servlet process is permanently running: CGI is not an option for Java because of the long startup time of the JVM. Furthermore, Java's excellent multi-threading capabilities make it possible to handle concurrency inside the JVM.
That the servlet process is permanently running is the important advantage for the O'Caml port, too. The servlet process is simply an O'Caml program that uses the library for JSERV (which is included in the Ocamlnet package). However, there are differences from the Java original:
The various models are discussed in detail below.
WDialog provides the module Wd_run_jserv that defines request handlers for the various execution models. A sample main program for a servlet process would be:
let req_hdl = Wd_run_jserv.create_request_handler ... () in
let server = `Forking(20, [ "appname", req_hdl ]) in
Netcgi_jserv.jvm_emu_main
  (Netcgi_jserv_app.run server `Ajp_1_2)

The real main program is Netcgi_jserv.jvm_emu_main, which accepts command-line arguments that are compatible (enough) with the arguments of the Java JVM. (Useful, because the JSERV web server extension usually starts the servlet process, and the web server assumes that it starts a JVM.)
The function Netcgi_jserv_app.run is the main entry point for the JSERV protocol handler. It gets the server definition as argument, here of the `Forking type. The list defines that the servlet appname is handled by req_hdl, the WDialog-specific request handler.
In order to get the servlet server running, you also need the jserv.properties file containing the configurations that are needed by both the web server and the servlet server. Furthermore, httpd.conf, the configuration file of Apache, must be extended with some mod_jserv-specific definitions. You can find more information in the Java Apache distribution.
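As an illustration, the relevant pieces of configuration could look like the following sketch. The directive names follow the mod_jserv documentation, but all paths, the port, and the servlet zone name are placeholders that must be adapted to your installation:

```
# httpd.conf: load mod_jserv, point it to the properties file,
# and mount the servlet zone "root" under /servlets
LoadModule jserv_module libexec/mod_jserv.so
ApJServManual off
ApJServProperties /usr/local/apache/conf/jserv.properties
ApJServMount /servlets ajpv12://localhost:8007/root

# jserv.properties: the "JVM" that mod_jserv starts is actually
# the O'Caml servlet binary (via Netcgi_jserv.jvm_emu_main)
port=8007
wrapper.bin=/usr/local/bin/my-servlet-process
```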
The JSERV execution model `Sequential
Sequential execution means that a single process gets all arriving requests, which are processed one after another. This works very well unless processing a request takes too long. The big advantage of this execution model is that there is almost no management overhead for handling concurrent accesses, because these do not happen. However, if the computations for a request take very long, the server blocks until this time-consuming request is done, and any other requests arriving in the meantime must wait.
There is another advantage: It is quite simple to cache frequently accessed data, because these can be stored in global variables, again without any additional overhead.
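For example, such a cache can be as simple as a global hash table. In this sketch, load_record stands for a hypothetical expensive lookup (e.g. a database query) and is passed in as a parameter:

```ocaml
(* A global cache; safe in the `Sequential model because no two
   activations ever run at the same time. *)
let cache : (string, string) Hashtbl.t = Hashtbl.create 64

let lookup load_record key =
  try Hashtbl.find cache key
  with Not_found ->
    let v = load_record key in      (* expensive computation *)
    Hashtbl.add cache key v;
    v
```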
The sequential model is very attractive for web applications that have a limited number of concurrent users, and that run on single-CPU systems. However, some care must be taken to prevent individual requests from blocking the whole application. For example, one possibility is to set the alarm clock (Unix.alarm or Unix.setitimer) and to raise an exception after the maximum period of time has expired.
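The alarm clock technique might be sketched like this. The 10-second limit mentioned below is arbitrary, and it is an assumption that the request processing code can safely be aborted by an exception:

```ocaml
exception Request_timeout

(* Run f () but abort it with Request_timeout if it takes longer
   than [seconds]. Uses SIGALRM, so this only works on Unix; the
   previous signal handler is restored afterwards. *)
let with_timeout seconds f =
  let old_handler =
    Sys.signal Sys.sigalrm
      (Sys.Signal_handle (fun _ -> raise Request_timeout)) in
  let cleanup () =
    ignore (Unix.alarm 0);             (* cancel the pending alarm *)
    Sys.set_signal Sys.sigalrm old_handler in
  ignore (Unix.alarm seconds);
  try let result = f () in cleanup (); result
  with e -> cleanup (); raise e
```

A call like with_timeout 10 process_request would then abort any request that runs longer than ten seconds.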
Summary for JSERV, `Sequential execution
The JSERV execution model `Forking
In this model, every incoming request causes the main process to spawn a subprocess. The subprocess performs all computations that are necessary for the reply, while the main process immediately continues accepting new connections.
This model sounds like CGI, but it is actually different in one important aspect. When the subprocess is spawned, all necessary initializations have already happened, and the subprocess can immediately begin to analyze the request, and to do all the other work related to the request. In contrast to this, the CGI subprocess must first initialize itself, for example read the XML file containing the UI definition.
Effectively, the setup time is longer than for `Sequential execution, but still rather short. The concurrently running activations are isolated from each other as in CGI, and the operating system takes care of deallocating the resources when the activation is over. There is no simple way to let the activations share data or other resources.
Summary for JSERV, `Forking execution
The JSERV execution model `Process_pool
This model combines the advantages of `Forking and `Sequential, and is probably the most attractive model for highly loaded servers. At startup time, a fixed number of processes are spawned (after initialization), and every process of this pool sequentially accepts incoming requests. When a new page request arrives, it is likely that some of the processes of the pool are currently busy and that the rest are idle. One of the free processes gets the request, and is busy until the request has been processed.
It may happen that all processes are busy. The newly arrived request must wait until one of the processes is free again. (Note: The length of this queue can be specified by the backlog parameter.)
This model can process requests in parallel; however, parallelism is restricted to the fixed number of processes that must be known at startup time. A good choice is the number of CPUs times a small factor, provided there is enough memory that no process is swapped out.
Furthermore, this model avoids the cost of forking for every request, because the processes run sequentially once started. This results in very low overhead and quick responses.
However, this model also combines the disadvantages of `Forking and `Sequential: the activations for the requests are neither reliably isolated from each other nor reliably able to share state. Since you do not know which process of the pool handles which request, you can count neither on a fresh environment nor on finding data left behind by a previous activation.
Summary for JSERV, `Process_pool execution
The JSERV execution model `Thread_pool
You may wonder why I do not simply follow the Java original and only implement thread pools. There are a number of arguments against multi-threading, some criticizing this technique in general, some only applying to the O'Caml implementation.
I hope this explains why the multi-threaded execution model does not rank as number 1 in the priority list. However, there are benefits from such a model.
Most important, this is the only model that can combine parallelism with the ability to easily access shared data structures. The other models (`Forking and `Process_pool) can share data only by special means of the operating system (e.g. by sharing files, or by shared memory that is now available in the bigarray library).
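As an illustration of the shared memory route, a file can be mapped into a Bigarray before the worker processes are forked. This is a sketch: the file name is a placeholder, and note that access must still be synchronized by other means (e.g. Unix.lockf), which is omitted here:

```ocaml
(* Map one int from a file into memory. If the mapping is created
   before the fork, all processes of the pool see the same memory;
   without additional locking, however, concurrent updates race. *)
let shared_counter () =
  let fd =
    Unix.openfile "/tmp/counter.map"
      [Unix.O_RDWR; Unix.O_CREAT] 0o600 in
  let arr =
    Bigarray.Array1.map_file fd Bigarray.int Bigarray.c_layout true 1 in
  Unix.close fd;    (* the mapping stays valid after closing fd *)
  arr
```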
Furthermore, it becomes possible to program servers that respond to multiple protocols. For example, such a server could combine a web frontend with RPC services. (Like EJB, but I do not see a strict necessity to do that. Both aspects can be separated.)
No summary yet, as the model is not yet implemented.
Which model is the right one for me?
Obviously, there is no simple answer. I have tried to enumerate the pros and cons of all the models, and it depends on your application which arguments count. Maybe the following simplifications point you in the right direction for your evaluation.
Secondary network connections
It is often necessary to open network connections to further services in order to process a request. For example, accessing database systems is nowadays commonly done in this way. You have several choices for that: