Applications created with the Netplex framework all share the following configuration settings and allow some basic administration commands. This is only a common minimum - the applications typically define more than this.
The Netplex config file has the following layout:
netplex {
controller { <settings> }; (* only one controller section *)
service { <settings> }; (* as many service sections as running services *)
...
service { <settings> };
(* The application can define further types of sections *)
}
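For orientation, here is a minimal sketch of a complete config file, assembled from the pieces explained below (the socket directory, service name, port, and processor type are illustrative; the processor type must be one defined by the application):

netplex {
  controller {
    socket_directory = "/var/lib/myapp/sockdir";
    logging { type = "stderr" };
  };
  service {
    name = "my_service";
    protocol {
      name = "main";
      address { type = "internet"; bind = "0.0.0.0:4000" };
    };
    processor { type = "my_processor" };
    workload_manager { type = "constant"; threads = 4 };
  };
}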
controller section

This section configures the controller component. The tasks of the controller are to start the containers for the workload, and to manage logging.
netplex {
controller {
socket_directory = "<path>";
max_level = "<debuglevel>";
logging { <settings> }; (* several logging destinations possible *)
...
logging { <settings> };
};
...
}
Settings:
- socket_directory: The Netplex framework needs a directory where it can
  create Unix domain sockets. These sockets are needed for communication
  between the started containers. If omitted, the directory defaults to
  /tmp/netplex-<id> (where the id part is derived from the file location
  of the configuration). Several running Netplex instances must not share
  these directories, and hence it is strongly recommended to change this
  default. If the path is not absolute, it is made absolute by prepending
  the path of the working directory at the time Netplex is started
  (usually the program start). Note that the paths of Unix domain sockets
  are limited to 107 bytes for historic reasons, so the socket_directory
  should not be put too deeply into the file hierarchy.
- max_level: This can be set to globally limit the log level. Defaults to
  "debug", i.e. no maximum.
- logging: Such a section defines a logging destination. Log messages are
  written to all destinations that do not filter the messages out.

There are several types of logging sections:
The "stderr" type writes log messages to stderr:
netplex {
controller {
logging {
type = "stderr"; (* mandatory *)
format = "<format string>"; (* optional *)
component = "<name_of_component>"; (* optional *)
subchannel = "<name_of_subchannel>"; (* optional *)
max_level = "<max_level>"; (* optional *)
};
...
};
...
}
The settings format, component, subchannel, and max_level may also occur in the other types of logging definitions, and are explained below.
The "file" type writes the log messages to a single file:
netplex {
controller {
logging {
type = "file"; (* mandatory *)
file = "<path>"; (* mandatory *)
format = "<format string>"; (* optional *)
component = "<name_of_component>"; (* optional *)
subchannel = "<name_of_subchannel>"; (* optional *)
max_level = "<max_level>"; (* optional *)
};
...
};
...
}
Settings:
- file: The file to which the messages are appended. If the file does not
  exist, it is created. The file path must be absolute.

The settings format, component, subchannel, and max_level may also occur in the other types of logging definitions, and are explained below.
The "multi_file" type creates several log files in a common directory:
netplex {
controller {
logging {
type = "multi_file"; (* mandatory *)
directory = "<path>"; (* mandatory *)
format = "<format string>"; (* optional *)
file { <settings> };
...
file { <settings> };
};
...
};
...
}
The settings in the file section:
file {
file = "<name>"; (* mandatory *)
format = "<format string>"; (* optional *)
component = "<name_of_component>"; (* optional *)
subchannel = "<name_of_subchannel>"; (* optional *)
max_level = "<max_level>"; (* optional *)
};
Settings:
- directory: The absolute path of the directory where the log files
  managed by the file subsections are created.
- file: The name of the log file in this directory.

The settings format, component, subchannel, and max_level may also occur in the other types of logging definitions, and are explained below. Note that a format setting in the file section overrides the definition in logging for the respective file.
With the "syslog" type, the log messages are sent to the syslog device of the system (Unix only):
netplex {
controller {
logging {
type = "syslog"; (* mandatory *)
identifier = "<identifier>"; (* optional *)
facility = "<facility>"; (* optional *)
format = "<format string>"; (* optional *)
component = "<name_of_component>"; (* optional *)
subchannel = "<name_of_subchannel>"; (* optional *)
max_level = "<max_level>"; (* optional *)
};
...
};
...
}
Settings:
- identifier: A string prefixing each message. Default: empty.
- facility: The syslog facility name. Defaults to "default".

Common settings in logging sections
- format: This parameter defines what the log messages look like. It is a
  string containing variables in dollar notation ($name or ${name}). The
  following variable specifications are defined (see the example after
  this list):
  - timestamp: the time in standard format
  - timestamp:<format>: the time in a custom format, where <format> is a
    Netdate format string
  - timestamp:unix: the time in seconds since the epoch
  - component: the name of the component emitting the log message
  - subchannel: the name of the subchannel
  - level: the log level
  - message: the log message
  Default: "[${timestamp}] [${component}] [${level}] ${message}"
- component: This parameter filters messages by the component emitting
  them. The component name is here the Netplex service name, i.e. the
  name parameter in the service section (see below). The parameter may be
  set to the component name, or to a pattern matching component names.
  The wildcard * can be used in the pattern. Default: *
- subchannel: Netplex allows several log channels per component. There is
  normally only the main log channel, but components can define
  additional channels. For example, a web server may have a separate log
  channel for access logging. This parameter filters messages by the
  subchannel identifier. Again it is possible to use the wildcard *. The
  main log channel has the empty string as subchannel identifier, hence
  subchannel="" restricts the messages to the main channel. Default: *
- max_level: Restricts the log level of the printed messages. The level
  names follow syslog conventions: "debug", "info", "notice", "warning",
  "err", "crit", "alert", "emerg".
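As an illustration, here is a file destination combining a custom format with component and level filtering (the path and component pattern are illustrative, and the timestamp directives are assumed to follow Netdate's strftime-style % notation):

logging {
  type = "file";
  file = "/var/log/myapp/rpc.log";
  format = "${timestamp:%Y-%m-%d %H:%M:%S} [${level}] ${message}";
  component = "rpc_*";
  max_level = "info";
};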
Examples:

1. Write everything to stderr:
logging { type="stderr" }
2. Write a separate file for each log level:
logging {
type = "multi_file";
directory = "/var/log/myapp";
file {
max_level = "debug";
file = "all_debug.log";
};
file {
max_level = "info";
file = "all_info.log";
};
... (* and so on ... *)
file {
max_level = "emerg";
file = "all_emerg.log";
};
}
3. Write errors to syslog, but write access logs to a file:
logging {
type = "syslog";
max_level = "err";
subchannel = ""; (* i.e. only the main log channel goes here *)
};
logging {
type = "file";
file = "/var/log/myapp/access.log";
subchannel = "access";
}
service section

Each service section instructs Netplex to start containers for the service. If a service type is only defined by the application, but does not appear in the config file, no containers will be started!
A service section looks like:
netplex {
service {
name = "<name>"; (* mandatory *)
user = "<user>"; (* optional *)
group = "<group>"; (* optional *)
startup_timeout = <float>; (* optional *)
conn_limit = <int>; (* optional *)
gc_when_idle = <bool>; (* optional *)
protocol { <settings> }; (* at least one *)
...
protocol { <settings> }; (* at least one *)
processor { <settings> }; (* mandatory *)
workload_manager { <settings> }; (* mandatory *)
};
...
}
The name
of the service is a freely chosen identifier. It is used
to reference the service, e.g. in log messages.
Each protocol
section defines a set of sockets with common
properties. The idea here is that a service may define several
protocols for accessing it, e.g. an HTTP-based protocol and an
RPC-based protocol. For each protocol there can then be several
sockets.
The processor
section is the connection to the application
which must have defined the type of processor that is referenced
here. The task of the processor is to accept incoming connections
and to process them.
The workload_manager
section defines how many containers (i.e.
subprocesses or threads) are created for serving incoming connections.
Settings:
- user: If the program is started as user root, it is possible to change
  the user for subprocesses. This parameter is the user name for all
  subprocesses created for this service. This is only possible when
  Netplex is running in multi-processing mode, not in multi-threading
  mode. Default: do not change the user.
- group: Same for the group.
- startup_timeout: If a subprocess does not start up within this period
  of time, it is considered dysfunctional, and killed again. This
  misbehavior usually occurs because the initialization function of the
  subprocess hangs. This setting only has an effect in multi-processing
  mode. Default: 60 seconds. A negative value turns this feature off.
- conn_limit: If set, a container is automatically shut down after it has
  processed this number of connections. This is sometimes useful if there
  is a memory leak in the container, or memory is not reclaimed quickly
  enough. Of course, one can only fight memory problems this way in
  multi-processing mode. Default: no limit.
- gc_when_idle: If true, the Ocaml garbage collector is run when a
  container is idle for one second.
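A sketch showing these settings in context (the service name, user/group, and limit values are illustrative):

service {
  name = "worker";
  user = "daemon";
  group = "daemon";
  startup_timeout = 30.0;
  conn_limit = 1000;
  gc_when_idle = true;
  ...
};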
protocol subsection

It looks like:
netplex {
service {
protocol {
name = "<protoname>"; (* mandatory *)
lstn_backlog = <int>; (* optional *)
lstn_reuseaddr = <bool>; (* optional *)
so_keepalive = <bool>; (* optional *)
tcp_nodelay = <bool>; (* optional *)
local_chmod = "..."; (* optional *)
local_chown = "..."; (* optional *)
address { <settings> }; (* at least one *)
...
address { <settings> }; (* at least one *)
};
...
};
...
}
Settings:
- name: The name of the protocol. This is an arbitrary identifier. The
  name is passed to some hook functions.
- lstn_backlog: The value of the backlog parameter of the listen system
  call. When a TCP connection is being accepted, the kernel does this
  first on its own, and passes the accepted connection to the application
  at the next opportunity (i.e. the accept system call). Because of this,
  connections can queue up in the kernel, i.e. connections that are
  accepted but not yet passed to the application. This parameter is the
  maximum length of this queue. Note that there is usually also a
  kernel-defined maximum (e.g. 128 on Linux). If you get spurious EAGAIN
  errors on the client side, this might be an indication to increase this
  parameter.
- lstn_reuseaddr: Whether to allow immediate reuse of socket addresses.
  Defaults to true.
- so_keepalive: Whether to enable the TCP keep-alive feature, which is
  useful to detect unresponsive TCP connections (though only after a very
  long timeout). Defaults to false.
- tcp_nodelay: Whether to disable the Nagle algorithm for TCP. Normally,
  TCP packets are minimally delayed before they are sent to the network,
  in the hope that the application makes more data available that could
  be put into the same packet. Especially on local networks it is often
  not important how many packets are really sent out, and by enabling
  this option latencies can be, sometimes drastically, reduced. Defaults
  to false.
- local_chmod: For Unix Domain sockets, this sets the file permission
  bits, e.g. local_chmod = "0o700".
- local_chown: For Unix Domain sockets, this sets the owner and group of
  the socket, e.g. local_chown = "owner:group". Both settings can also be
  given numerically.
- address { type="internet"; bind="<ip>:<port>" }: An IPv4 or IPv6
  socket. The IP address can also be given as a host name, which is
  looked up at Netplex startup. Use 0.0.0.0 as the IPv4 catch-all
  address. IPv6 addresses must be put into brackets. Use [::0] as the
  IPv6 catch-all address.
- address { type="local"; path="<path>" }: An OS-dependent default IPC
  mechanism for local connections. On Unix, local means Unix Domain
  sockets. On Win32, local means the w32_pipe_file mechanism (see below).
  In path, the name of the Unix Domain socket or the file with the pipe
  name must be specified.
- address { type="unixdomain"; path="<path>" }: Unix Domain sockets. In
  path, give the path of the socket.
- address { type="socket_file"; path="<path>" }: An emulation of Unix
  Domain sockets: a server socket bound to 127.0.0.1 and an anonymous
  port is used instead. The port number is written to a file, which must
  be given as path.
- address { type="w32_pipe"; path="<path>" }: Win32 named pipes. The name
  of the pipe is given as path. This must be a name of the form
  "\\.\pipe\<name>". The pipe server is configured so that only clients
  on the same system can connect to it.
- address { type="w32_pipe_file"; path="<path>" }: An emulation of Unix
  Domain sockets: a named pipe with an unpredictable random name is
  created instead. The name of this pipe is written to the file given by
  path.
- address { type="container" }: This special address causes a separate
  local socket to be created for each started container. The name of the
  socket file is chosen automatically. The names of the socket files can
  be queried with Netplex_cenv.lookup_container_sockets. This type of
  socket is useful to control the load sent to each container directly.
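A sketch of a complete protocol section, combining a TCP socket with a Unix Domain socket (the name, port, and path are illustrative):

protocol {
  name = "http";
  lstn_backlog = 64;
  tcp_nodelay = true;
  address { type = "internet"; bind = "0.0.0.0:8080" };
  address { type = "unixdomain"; path = "/var/run/myapp/http.sock" };
};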
processor subsection

This section depends on the Netplex processor connected with it. At minimum, this section has only a parameter type that is the name of a defined Netplex processor type:
netplex {
service {
processor {
type = "<type>";
... (* rest depends on the type *)
};
...
};
...
}
There are a number of processor types coming with Ocamlnet:

- Rpc_netplex: RPC servers
- Netcgi_plex: web connectors (e.g. FastCGI)
- Nethttpd_plex: web servers
workload_manager section

The workload manager determines how many containers (processes or threads) are running for each service.

The constant workload manager starts a fixed number of containers. If containers are shut down or crash, new containers are automatically launched to replace the missing ones.
The config section looks like:
netplex {
service {
workload_manager {
type = "constant";
threads = <n>;
}
}
}
Note that the parameter threads is also interpreted in multi-processing mode (as the number of processes).

Since Ocamlnet-3.5, there are two optional settings:

- max_jobs_per_thread: This number can be set to the maximum number of
  connections a thread or process can be assigned simultaneously. By
  default there is no limit.
- greedy_accepts: This is a boolean (values true or false). If this mode
  is turned on, processes accept new connections more aggressively. This
  may improve the performance for very high connect rates (e.g. more than
  1000 new connections per second). The downside is that the load is
  assigned to processes in a less balanced way.

The dynamic workload manager starts as many containers as needed to handle the current load. Initially, a certain minimum number is started. If too many containers become busy, more containers are started until a maximum is reached. If too many containers become idle, containers are shut down.
netplex {
service {
workload_manager {
type = "dynamic";
max_jobs_per_thread = <n>; (* optional, default: 1 *)
recommended_jobs_per_thread = <n>; (* optional *)
min_free_jobs_capacity = <n>; (* mandatory *)
max_free_jobs_capacity = <n>; (* mandatory *)
max_threads = <n>; (* mandatory *)
}
}
}
A thread is here a container, even in multi-processing mode. A job is a TCP connection processed by a container. It is possible that a container can process several jobs at the same time (but only a few service types support this, e.g. RPC servers), and the whole calculation is based on the job capacity, i.e. the number of jobs all containers can execute in parallel.
The workload manager adjusts the number of containers so that there is always free capacity for min_free_jobs_capacity jobs, but the free capacity does not exceed max_free_jobs_capacity. Also, the number of containers is capped by max_threads.
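An illustrative calculation: with max_jobs_per_thread = 1 (one job per container), min_free_jobs_capacity = 2, max_free_jobs_capacity = 5, and max_threads = 20, the free capacity equals the number of idle containers. If 10 containers are currently busy, the manager keeps between 12 and 15 containers running, and never more than 20 in total.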
As mentioned, in most cases a container can only run one job at a time (this is what max_jobs_per_thread=1 means). Then min_free_jobs_capacity is just the minimum number of idle containers, and max_free_jobs_capacity the maximum number. If more than one job can be executed, set max_jobs_per_thread to a value bigger than one. The workload manager then assigns up to this number of TCP connections to each container. A container is filled up with jobs until it is full before jobs are assigned to the next container.
The latter behavior can be modified by recommended_jobs_per_thread. This must be a number less than or equal to max_jobs_per_thread, and it means that the containers normally only get the recommended number of jobs until they count as "full"; only for very high workloads is this scheme abandoned, and even more jobs are assigned to the containers until the maximum is reached. A common configuration is to set recommended_jobs_per_thread=1, so that each container first gets only up to one job, and only if the maximum number of containers is running can additional jobs be assigned.
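A hedged sketch of such a configuration (the numbers are illustrative):

workload_manager {
  type = "dynamic";
  max_jobs_per_thread = 4;
  recommended_jobs_per_thread = 1;
  min_free_jobs_capacity = 2;
  max_free_jobs_capacity = 8;
  max_threads = 50;
};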
There are also a few rarely used options for the dynamic workload manager:

- greedy_accepts: This is a boolean (values true or false). If this mode
  is turned on, processes accept new connections more aggressively. This
  may improve the performance for very high connect rates (e.g. more than
  1000 new connections per second). The downside is that the load is
  assigned to processes in a less balanced way. Note that this mode only
  works for asynchronously implemented processors. (Since Ocamlnet-3.5.)

netplex-admin command
Ocamlnet installs a little utility command netplex-admin
that can
be used to administer a running Netplex program.
$ netplex-admin -help
Usage: netplex-admin [ options ] [ admin_cmd arg ... ]
-sockdir <dir> Set the socket directory of the Netplex to administer
-conf <file> Get the socket directory from this config file
-list List available Netplex services
-containers List available Netplex services with container details
-enable <name> Enable service <name>
-disable <name> Disable service <name>
-restart <name> Restart service <name>
-restart-all Restart all services
-shutdown Shutdown the whole Netplex
-reopen-logfiles Reopen logfiles (if possible)
-unlink Unlink persistent kernel objects
-receiver <pat> Restrict receivers of admin messages to services matching <pat>
The utility talks to the running program, and needs the socket directory for this. Either specify it directly with the -sockdir argument, or indirectly via the config file (-conf).
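For example, to query the services of a running instance (the path is illustrative):

$ netplex-admin -sockdir /var/lib/myapp/sockdir -list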
The -list and -containers switches allow some introspection into the running conglomerate of processes or threads. For -list the services and the socket addresses are printed. For -containers even more details are emitted, including the process IDs.
The -enable, -disable, and -restart commands manage the set of running services. A service can be disabled, which means that all containers are shut down. Note that this does not mean that incoming TCP connections are rejected. They are just not processed. If enabled again, the containers are started again for the service, and the processing of TCP connections is resumed (including the connections that were accepted during the downtime). Disabling a service is useful for temporarily stopping the service, e.g. because it would interfere with other admin tasks.
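For instance, to temporarily stop and later resume a service (the service name is illustrative):

$ netplex-admin -sockdir /var/lib/myapp/sockdir -disable my_service
$ netplex-admin -sockdir /var/lib/myapp/sockdir -enable my_service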
A restart means to disable and re-enable the service. It may be useful for cleanly reinitializing a service.
With -restart-all, all services running within the Netplex framework are restarted.
A -shutdown
starts the shutdown sequence. The shutdown is announced
to all containers, and in a second step, the containers are terminated.
Finally, the master process is also stopped.
The command -reopen-logfiles
is meaningful for all file-based
logging definitions. The current set of files is closed, and reopened
again. This is useful as post-action after log file rotation (see below).
-unlink: see below.
netplex-admin
can also be used to send so-called admin messages to
the containers. These messages have a name and optionally arguments:
$ netplex-admin ... name arg1 arg2 ...
Generally, the possible admin commands must be defined by the Netplex processors. A few commands are defined by default, though:
- netplex.threadlist: Outputs information about threads and processes to
  the log file.
- netplex.logger.set_max_level <level>: Changes the maximum log level to
  the level passed in the argument.
- netplex.debug.enable <module>: Enables debug logging (as controlled by
  Netlog.Debug) for the module named in the argument.
- netplex.debug.disable <module>: Disables debug logging (as controlled
  by Netlog.Debug) for the module named in the argument.
- netplex.fd_table: Prints the file descriptor tracking table to the log
  file.
- netplex.connections: Prints information about current TCP connections
  to the log file.
- netplex.mem.major: Triggers a major GC run.
- netplex.mem.compact: Triggers a GC compaction.
- netplex.mem.pools: Outputs information about Ocamlnet pools to the log
  file.
- netplex.mem.stats: Outputs information about the memory managed by the
  Ocaml runtime to the log file.
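For example, to change the maximum log level of a running program (the socket directory is illustrative):

$ netplex-admin -sockdir /var/lib/myapp/sockdir netplex.logger.set_max_level info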
logrotate
is a common utility to perform log file rotation. It can
be easily used together with Netplex. The essential point is to
run netplex-admin -reopen-logfiles as a post-action of the rotation.
This stanza is an example of how to configure the rotation in a logrotate config file:
/var/log/myapp/file.log
{
weekly
rotate 10
missingok
sharedscripts
postrotate
/some/path/bin/netplex-admin \
-sockdir /var/lib/myapp/sockdir \
-reopen-logfiles
endscript
}
For some functionality, Netplex programs allocate persistent kernel objects, such as shared memory objects and named semaphores.
Of course, most Netplex programs are carefully written, and delete these
objects if the program ends in a regular way. However, after a crash
the programs normally don't have a chance to delete the objects. In order
to allow administrative removal, many Netplex programs write the names
of these objects into a special file netplex.pmanage
residing in the
socket directory. The netplex-admin
command has a special mode in
order to perform the deletion:
netplex-admin -sockdir /path/to/sockdir -unlink
Some operating systems actually have means for administering the kind of objects managed this way, but some do not. For example, Linux reflects the objects as normal files in the /dev/shm directory. As a counter example, there is no such administration possibility in OS X.
(For programmers: the Netplex interface to access netplex.pmanage is Netplex_cenv.pmanage.)