Plasma GitLab Archive
Runtime models

There are currently three ways of connecting a WDialog application to the outer world, especially to the Web: CGI, FastCGI, and JSERV. Note that the Perl bindings currently only support CGI.

Beyond the choice of acronym, the runtime model determines how the resources of the operating system can be used: do two activations of the application run in the same process or in different processes, and how are parallel accesses to the same runtime entities resolved? Although we focus here on the connection to the web server, the runtime model also constrains possible solutions for other connections, for example to database systems.

CGI

The CGI interface is well-known and available for almost all web servers. Furthermore, CGI defines a set of possible interactions between the web server and the application, and serves as a reference for what one can expect. For these reasons, CGI is the basic interface of WDialog.

CGI starts a new process for every request. This has the advantage that (1) the requests are processed separately so that they do not interfere with each other, and that (2) the application is guaranteed to give all allocated resources (open files etc.) back to the operating system. These two points are the reasons why CGI is still used today for critical applications, although there is a performance bottleneck because every process must be initialized anew.

There are some persistent myths about CGI. For example, some people think that CGI is slow from the very beginning because it uses the fork and exec system calls, especially when the binary to start has a size of several megabytes. This is not the problem. Modern Unix-based operating systems are heavily optimized with regard to fork and exec, and an experiment showed that my old 400 MHz system can start 30 CGI processes per second without causing high CPU load, even though the process image was bigger than one megabyte. The actual problem with CGI is that loading the process image is not all of the initialization work. WDialog must parse the XML file containing the UI definition, and it must prepare the XML tree for the transformation. These actions may take more than a second for big applications.

Nevertheless, there is a way to reduce the initialization time significantly, and this makes CGI interesting again. The idea is to avoid parsing the XML file by loading preformatted binary data instead. You can create the binary representation by calling the program wd-xmlcompile which is part of the WDialog distribution:

wd-xmlcompile sample.ui
This creates sample.ui.bin, and the loader of WDialog automatically finds this file and loads it instead of sample.ui. This trick often reduces the load time to less than 0.5 seconds.

In order to run the application as CGI, call the function Wd_run_cgi.run from your main program - it does all the rest.
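A minimal CGI main program is then just a single call. The exact (optional) arguments of Wd_run_cgi.run depend on the WDialog version, so treat this as a sketch rather than the definitive interface:

```ocaml
(* Minimal CGI main program (sketch): Wd_run_cgi.run is assumed to
   parse the CGI request, run the dialog transformation, and write
   the response page. *)
let () =
  Wd_run_cgi.run ()
```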

Summary for CGI

  • Execution model: New process for every page request, and the process must initialize everything from the very beginning
  • Advantages: The processes are isolated from each other, so a malfunction of one process does not interfere with the other concurrently running processes. Resources of the operating system are guaranteed to be deallocated.
  • Disadvantages: Long initialization time. The time needed for WDialog startup can be reduced by using wd-xmlcompile, however.
FastCGI

The FastCGI protocol is an extension of the CGI model which allows multiple requests to be processed by the same process. The application either runs in an application server environment provided by the web server (the most common method) or as a stand-alone daemon listening for FastCGI connections. The details of the FastCGI protocol, including instructions on how to set it up for various web servers, can be found at the FastCGI Project web page.

Using FastCGI in WDialog is accomplished by calling the function Wd_run_fastcgi.serv. This function implements a run loop which processes connections sequentially.
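For a stand-alone daemon, the main program is again a single call. The signature of Wd_run_fastcgi.serv is assumed here to parallel Wd_run_cgi.run; check the actual interface of your WDialog version:

```ocaml
(* Minimal FastCGI main program (sketch): Wd_run_fastcgi.serv is
   assumed to accept FastCGI connections and process them one after
   another in a run loop. *)
let () =
  Wd_run_fastcgi.serv ()
```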

Concurrency

Many different forms of concurrency are possible with FastCGI, and in most cases very little needs to be done to make the WDialog application aware of them. This is especially true in the web-server-managed environment, where the web server generally implements a process pool model: it runs N copies of the application at startup time, and requests are then routed to the processes in an implementation-dependent way. As long as session state is kept in some sort of shared store, the application need not even be aware that it is operating in a concurrent environment.

Threads are also possible, in two cases. You may either start multiple threads and have each one call Wd_run_fastcgi.serv, or you may use threads to perform background tasks which do NOT talk on the FastCGI output channels; no multiplexing of output is possible over a single connection to the web server. The first thread model is very similar to the process pool model, except that a shared session manager need not be used.

For an external application, one which does not use the web server as a process manager, concurrency is left completely up to the application.
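The first thread model can be sketched as follows, assuming Wd_run_fastcgi.serv takes no arguments (the actual signature may differ):

```ocaml
(* Sketch: a small pool of threads, each running its own serve loop.
   Every thread handles FastCGI connections sequentially; the pool
   as a whole processes up to n requests in parallel. *)
let () =
  let n = 4 in
  let workers =
    Array.init n (fun _ -> Thread.create Wd_run_fastcgi.serv ())
  in
  Array.iter Thread.join workers
```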

JSERV

The JSERV protocol was developed by the Java Apache Project, and is still used by Jakarta. Although these projects are based on the Java language, the protocol as such is language-independent, and it turns out that it is very simple to connect a JSERV-enabled web server with a servlet engine that is not written in Java.

The Java Apache Project is dead; no further development takes place, as all subprojects have moved to Jakarta. Nevertheless, I currently recommend using mod_jserv, the JSERV extension for Apache 1.3 from the Java Apache Project, because it is much simpler to extract it from the whole software project. However, mod_jk works, too.

The architecture behind JSERV is quite simple. The web server is extended with the JSERV protocol, and every request opens a new connection to the servlet process. This process is a permanently running daemon. The web server forwards the page request over this connection to the servlet process, and the latter processes it and sends the answer back to the web server. Effectively, the servlet process behaves like a second web server behind the first, but instead of the full HTTP protocol it speaks the simpler and less general JSERV protocol.

In the original Java environment, the servlet process is a JVM (Java virtual machine), and it executes the code of the application. There is also a part handling the JSERV protocol, but this is simply a library that can be loaded like any other library. The Java background explains why the servlet process is permanently running: CGI is not an option for Java because of the long startup time of the JVM. Furthermore, Java's excellent multi-threading capabilities make it possible to handle concurrency inside the JVM.

That the servlet process is permanently running is the important advantage for the O'Caml port, too. The servlet process is simply an O'Caml program that uses the library for JSERV (which is included in the Ocamlnet package). However, there are differences to the Java original:

  • The servlets are not dynamically loaded. A normal, pre-linked program is used. This means that you must shut down the servlet process before you can replace a servlet with a newer version, or add a servlet.

  • Instead of multi-threading, a range of execution models is supported. One reason is that multi-threading is not always adequate, another reason is that the multi-threading support in O'Caml is not as good as in Java. The models are:

    • `Sequential: Serial execution in a single process

    • `Forking: Every request spawns a new process

    • `Process_pool: Requests are forwarded to a process pool

    • `Thread_pool: Requests are processed by a thread pool

    The latter model is not yet implemented!

The various models are discussed in detail below.

WDialog provides the module Wd_run_jserv that defines request handlers for the various execution models. A sample main program for a servlet process would be:

let req_hdl = Wd_run_jserv.create_request_handler ... () in
let server = `Forking(20, [ "appname", req_hdl ]) in
Netcgi_jserv.jvm_emu_main
  (Netcgi_jserv_app.run server `Ajp_1_2)
The real main program is Netcgi_jserv.jvm_emu_main, which accepts command-line arguments that are compatible (enough) with the arguments of the Java JVM. (This is useful because the JSERV web server extension usually starts the servlet process, and the web server assumes that it starts a JVM.)

The function Netcgi_jserv_app.run is the main entry point for the JSERV protocol handler. It gets as argument the server definition, here of `Forking type. The list defines that the servlet appname is handled by req_hdl, the WDialog-specific request handler.

In order to get the servlet server running, you also need the jserv.properties file containing the configurations that are needed by both the web server and the servlet server. Furthermore, httpd.conf, the configuration file of Apache, must be extended with some mod_jserv-specific definitions. You can find more information in the Java Apache distribution.
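An illustrative jserv.properties fragment is shown below. The property names follow the mod_jserv documentation, but all paths and the port number are placeholders; consult the Java Apache distribution for the authoritative list of directives:

```
# jserv.properties (illustrative fragment, placeholder values)
# wrapper.bin normally points at the JVM; here it starts the
# O'Caml servlet binary, which emulates the JVM command line.
wrapper.bin=/usr/local/libexec/myservlet
bindaddress=localhost
port=8007
zones=root
root.properties=/usr/local/etc/jserv/zone.properties
security.authentication=false
```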

The JSERV execution model `Sequential

Sequential execution means that a single process gets all arriving requests which are processed one after another. This works very well unless it takes too long to process a request. The big advantage of this execution model is that there is almost no management overhead to handle concurrent accesses, because these do not happen. However, if the computations for a request last very long, the server will block until this time-consuming request is done, and any other requests happening at the same time must wait.

There is another advantage: It is quite simple to cache frequently accessed data, because these can be stored in global variables, again without any additional overhead.

The sequential model is very attractive for web applications that have a limited number of concurrent users, and that run on single-CPU systems. However, some care must be taken to prevent individual requests from blocking the whole application. For example, one possibility would be to set the alarm clock (Unix.alarm or Unix.setitimer) and to raise an exception after the maximum period of time has expired.
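Such a watchdog can be sketched like this; the exception name and the timeout value are illustrative, not part of WDialog:

```ocaml
(* Sketch: abort any request that runs longer than the given number
   of seconds.  SIGALRM is delivered after the timeout, and the
   handler raises an exception that unwinds the current activation. *)
exception Request_timeout

let with_timeout seconds f =
  let old_handler =
    Sys.signal Sys.sigalrm
      (Sys.Signal_handle (fun _ -> raise Request_timeout))
  in
  ignore (Unix.alarm seconds);
  let finish () =
    ignore (Unix.alarm 0);                 (* cancel the pending alarm *)
    Sys.set_signal Sys.sigalrm old_handler
  in
  try let result = f () in finish (); result
  with e -> finish (); raise e
```

A request handler would then be wrapped as with_timeout 10 process_request, where process_request stands for whatever function performs the actual work.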

Summary for JSERV, `Sequential execution

  • Execution model: A single process gets the requests one after another. Requests arriving while the current request is being processed must wait in a queue.
  • Advantages: Minimal overhead to resolve concurrency. The activations for the requests can share data.
  • Disadvantages: There is no isolation between the activations, and there is no automatic clean-up strategy to close files. Very long activations can block the whole application.
The JSERV execution model `Forking

In this model, every incoming request causes the main process to spawn a subprocess. The subprocess performs all computations necessary to produce the reply, while the main process immediately continues accepting new connections.

This model sounds like CGI, but it is actually different in one important aspect. When the subprocess is spawned, all necessary initializations have already happened, and the subprocess can immediately begin to analyze the request, and to do all the other work related to the request. In contrast to this, the CGI subprocess must first initialize itself, for example read the XML file containing the UI definition.

Effectively, the setup time is longer than for `Sequential execution, but still rather short. The concurrently running activations are isolated from each other like in CGI, and the operating system takes care to deallocate the resources when the activation is over. There is no simple way to let the activations share data or other resources.

Summary for JSERV, `Forking execution

  • Execution model: The already initialized main process spawns for every incoming request a new subprocess that performs the remaining work.
  • Advantages: The processes are isolated from each other, so a malfunction of one process does not interfere with the other concurrently running processes. Resources of the operating system are guaranteed to be deallocated. The initialization time per request is quite short but not negligible.
  • Disadvantages: It is difficult to arrange that the activations share data or other resources.
The JSERV execution model `Process_pool

This model combines the advantages of `Forking and `Sequential, and is probably the most attractive model for highly loaded servers. At startup time, a fixed number of processes are spawned (after initialization), and every process of this pool accepts incoming requests sequentially. When a new page request arrives, it is likely that some processes of the pool are currently busy while the rest are idle. One of the free processes will get the request, and will be busy until the request is processed.

It may happen that all processes are busy. The newly arrived request must wait until one of the processes is free again. (Note: The length of this queue can be specified by the backlog parameter.)
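Assuming that `Process_pool takes the pool size and the servlet list analogously to the `Forking example shown earlier (check the Netcgi_jserv_app interface of your Ocamlnet version for the exact constructor), the server definition might look like:

```ocaml
(* Sketch: a pool of 8 pre-initialized worker processes, each of
   which serves requests for the servlet "appname" sequentially. *)
let req_hdl = Wd_run_jserv.create_request_handler ... () in
let server = `Process_pool(8, [ "appname", req_hdl ]) in
Netcgi_jserv.jvm_emu_main
  (Netcgi_jserv_app.run server `Ajp_1_2)
```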

This model can process requests in parallel; however, parallelism is restricted to the fixed number that must be known at startup time. A good choice is the number of CPUs times a small factor, but there should be enough memory such that no process is swapped out.

Furthermore, this model avoids the cost of forking for every request, because the processes run sequentially once started. This results in very low overhead and quick responses.

However, this model also combines the disadvantages of `Forking and `Sequential: the activations for the requests are neither reliably isolated from each other nor reliably able to share data, because you simply do not know which process of the pool will handle a given request.

Summary for JSERV, `Process_pool execution

  • Execution model: The already initialized main process spawns a fixed number of worker processes, and every worker runs sequentially.
  • Advantages: The initialization time per request is very low.
  • Disadvantages: The activations for the requests are not isolated from each other, but it is also difficult to let them share data. There is no automatic deallocation of operating system resources. A further restriction of the current implementation is that processes cannot be started or stopped on demand.
The JSERV execution model `Thread_pool

You may wonder why I do not simply follow the Java original and only implement thread pools. There are a number of arguments against multi-threading, some criticizing this technique in general, some only applying to the O'Caml implementation.

  • Multi-threading requires a lot of programming discipline. In general, the whole code must be reentrant, and special means like mutexes, condition variables etc. must be used to ensure that no two threads ever interfere with each other in an uncontrolled manner. Unfortunately, there are no tools (like type checkers) that enforce these rules; the programmer must do it himself. Furthermore, it is very difficult to find such errors by testing, because the problems often have the character of race conditions that occur only rarely. But if a program contains many rarely occurring races, its overall stability certainly decreases significantly.

  • The O'Caml implementation of multi-threading had some serious bugs in the past, although it was programmed by an outstanding expert. You may take this as evidence for the previous thesis, but it also means that O'Caml has not been used very often for multi-threaded programming (otherwise these errors would have been found earlier), and that the stability cannot (yet) be trusted for production applications.

  • Last but not least, the O'Caml implementation has the fundamental restriction that it cannot take advantage of several CPUs, even if the underlying multi-threading library of the operating system supports this.

I hope this explains why the multi-threaded execution model does not rank first in the priority list. However, there are benefits to such a model.

Most importantly, this is the only model that can combine parallelism with the ability to easily access shared data structures. The other models (`Forking and `Process_pool) can share data only by special means of the operating system (e.g. by sharing files, or by the shared memory that is now available in the bigarray library).

Furthermore, it becomes possible to program servers that respond to multiple protocols. For example, such a server could combine a web frontend with RPC services. (Like EJB, but I do not see a strict necessity to do that. Both aspects can be separated.)

No summary yet, as the model is not yet implemented.

Which model is the right one for me?

Obviously, there is no simple answer. I have tried to enumerate the pros and cons of all the models, and which arguments count depends on your application. Maybe the following simplifications point you in the right direction for your evaluation.

  • Development: In this phase of a project the CGI protocol is the best choice. You need not restart servers to test a new version of your program.

  • Best compromise: If stability and speed both count, I can only recommend JSERV with `Forking processes. The processes are isolated from each other, and are properly cleaned up, so you do not have to care about these issues. It is still fast enough for the majority of applications.

  • Maximum performance: That's very simple: `Process_pool is your friend. It allows parallelism almost without performance costs. For maximum performance, I would additionally recommend installing the web server and the JSERV engine on different systems. For MAXIMUM performance, I would further recommend installing several instances of the JSERV engine on several systems, and using JSERV's load-balancing feature to drive them. The architecture is scalable, isn't it?

  • Flexibility: Once implemented, `Thread_pool will probably be the most flexible solution. In the meantime, you may consider using `Sequential, and starting threads for background activities. (Unfortunately, multi-threading is not possible in forked processes because of limitations of the O'Caml implementation, so you cannot start threads in the worker processes of `Process_pool.)

Secondary network connections

It is often necessary to open network connections to further services in order to process a request. For example, accessing database systems is nowadays done in this way. You have several choices for that:

  • You can open a new connection for every activation, and close it afterwards. This isolates the accesses best, but it may cause performance problems.

  • An alternative for `Process_pool is to open only one connection per process, and to use it for all requests handled by that process. For database systems with transactions, a reasonable degree of isolation can be achieved by closing the current transaction between requests. Note that the configuration parameter of Netcgi_jserv_app.run provides the two hooks js_init_process and js_fini_process, which are called once per process at initialization and finalization, respectively. These functions can open and close the database connection.
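The per-process connection idea can be sketched as follows. Db.connect and Db.disconnect stand for whatever database API you actually use, and the hook names are assumed to be plain unit-returning callbacks in the Netcgi_jserv_app configuration record; check the interface of your Ocamlnet version:

```ocaml
(* Sketch: one database connection per worker process.  The function
   passed as js_init_process opens it; the one passed as
   js_fini_process closes it.  Db is a hypothetical database module. *)
let db = ref None

let init_process () =
  db := Some (Db.connect "dbname=myapp")

let fini_process () =
  (match !db with Some conn -> Db.disconnect conn | None -> ());
  db := None
```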

This web site is published by Informatikbüro Gerd Stolpmann
Powered by Caml