This document assumes that PlasmaFS is already built and installed. It explains how to deploy it on the various nodes that are part of a PlasmaFS cluster.
A PlasmaFS cluster needs at least one namenode and one datanode. For getting started it is possible to put both kinds of servers on the same machine.
The smallest reasonable blocksize is 64K. There is already a measurable improvement of speed for blocksizes of 1M (better ratio of metadata operations per amount of accessed data). Blocksizes beyond 64M may be broken in the current code base.
Big blocksizes also mean big buffers. For a blocksize of 64M, an application already consumes 6.4G of memory for buffers if it "only" runs 10 processes where each process accesses 10 PlasmaFS files at the same time.
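In other words, the buffer memory needed is roughly the number of processes times the number of PlasmaFS files open per process times the blocksize: 10 × 10 × 64M = 6.4G in this example.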
If you are undecided, choose a blocksize of 1M; it is a good compromise.
The number of namenodes can reasonably be increased to two or three, but it is unwise to go beyond this number. The more namenodes are included in the system, the more complicated the coordination between them becomes. At a certain point, there is no additional safety to be gained from adding further namenodes.
The namenode database is best put on RAID 1 or RAID 10 arrays. Although there are replicas, there is currently no way to repair a damaged namenode database without stopping the whole PlasmaFS system.
The number of datanodes is theoretically unlimited. For now it is unwise to add more datanodes than can be connected to the same switch, because there is no notion of network distance in PlasmaFS yet.
There is the limitation that there can be only one datanode identity per machine. Because of this it makes sense to combine the available disks into a single big data volume (using RAID 0, or JBOD as provided by volume managers like LVM), and to put the block data onto this volume.
It is advisable to use an extent-based filesystem for this volume, such as XFS. However, this is not a requirement - any filesystem with Unix semantics will do.
Raw partitions are not supported, and probably will never be.
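As an illustration, such a volume could be assembled with LVM and formatted with XFS roughly as follows (a sketch; the disk names, volume names, and mount point are placeholders):

pvcreate /dev/sdb /dev/sdc
vgcreate plasma_vg /dev/sdb /dev/sdc
lvcreate -n plasma_data -l 100%FREE plasma_vg    # linear concatenation (JBOD)
mkfs.xfs /dev/plasma_vg/plasma_data
mount /dev/plasma_vg/plasma_data /data/plasma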
PlasmaFS has been developed for Gigabit networks. One should prefer switches with high aggregate bandwidth (there is often a limit on the total bandwidth a switch can handle before data packets are dropped). Ideally, all ports of the switch can simultaneously be run at full speed (e.g. 32 Gbit/s for a 16-port switch). SOHO switches often do not deliver this!
As of now, there is no assumption that the nodes are in the same network segment (no broadcasts). Future versions of PlasmaFS might add optional multicasting-based features, and routers must then be configured for routing multicast traffic.
It is assumed that hostnames resolve to IP addresses. It is allowed that the hostname resolves to a loopback IP address on the local machine. PlasmaFS never tries to perform reverse lookups.
PlasmaFS avoids DNS lookups as much as possible. However, when
PlasmaFS clients start up, DNS lookups are unavoidable. A well
performing DNS infrastructure is advisable. Actually, any alternate
directory system can also be used, because PlasmaFS only uses the
system resolver for looking up names. Even
/etc/hosts is ok.
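For a small cluster this can be as simple as identical /etc/hosts entries on every node, e.g. (hypothetical addresses, using the host names m1 to m3 that also appear in the examples below):

192.168.0.11   m1
192.168.0.12   m2
192.168.0.13   m3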
The machine where PlasmaFS was built and installed is called the operator node in the following text. The software is deployed from the operator node. It is easiest to simply use one of the namenodes or datanodes as the operator node.
We assume all daemons on all machines are running as the same user. This user should not be root! It is also required that the nodes can be reached via ssh from the operator node, and that ssh logins are possible without password.
On the namenodes, PostgreSQL must be running (version 8.2 or better). It must be configured as follows:

Password-less local logins must be possible. A line in pg_hba.conf like

local all all ident

will do the trick (often there by default). This permits logins if the hostname in connection configs is omitted.

Create a database user for the PlasmaFS daemons: become postgres, and create the user, e.g. with createuser. This will ask a few questions. The user must be able to create databases. Other privileges are not required.

To check the setup, run as this user:

psql template1 </dev/null

If it does not ask for a password, and if it does not emit an error, everything is fine.
Finally, set the required parameter in postgresql.conf to a positive value. Restart PostgreSQL.
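A sketch of these steps, assuming the daemons run under a hypothetical account plasma:

createuser plasma            # run as postgres; answer "y" only to the question about creating databases
psql template1 </dev/null    # run as plasma; must neither ask for a password nor print an error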
The deployment is done with the help of the clusterconfig directory. This is installed together with the other software. It is advisable not to change this installed version of clusterconfig directly, but to copy it to somewhere else, and to run the scripts on the copy.

The clusterconfig directory contains a few scripts, and a directory instances. Below instances there is a subdirectory for each instance and template, containing some files. So this looks like:
clusterconfig: The scripts available here are run on the operator node to control the whole cluster
clusterconfig/instances: This directory contains all the instances that are controlled from here, and templates for creating new instances. Actually, any instance can also be used as template - when being instantiated, the template is simply copied.
clusterconfig/instances/template: This is the default template. It contains recommended starting points for config files.
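Pictured as a tree (a sketch showing only the scripts and files mentioned in this document; the installed copy may contain more):

clusterconfig/
    new_inst.sh, deploy_inst.sh, initdn_inst.sh,
    rc_nn.sh, rc_dn.sh, rc_nfsd.sh, rc_all.sh, ...
    instances/
        template/
            namenode.hosts, datanode.hosts, nfsnode.hosts, ...
        <inst>/    (one subdirectory per created instance)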
Create a new instance with name <inst> by running:
./new_inst.sh <inst> <prefix> <blocksize>
Here, <prefix> is the absolute path on the cluster nodes where the PlasmaFS software is to be installed. The deploy_inst.sh script (see below) will create a directory hierarchy:
<prefix>/bin: for binaries
<prefix>/etc: config files and rc scripts
<prefix>/log: for log files
<prefix>/data: data files (datanodes only)
The deploy_inst.sh script will create these directories only if they do not exist yet, either as directories or as symlinks to directories.
The <blocksize> can be given in bytes, or as a number with the suffix "K" or "M".
The new_inst.sh script may be directed to use an alternate template:
./new_inst.sh -template <templ> <inst> <prefix> <blocksize>
Here, <templ> can be another instance that is to be copied instead of the default template.
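For example, to create an instance inst1 (the name used in later examples) under a hypothetical prefix /opt/plasma with the recommended blocksize of 1M:

./new_inst.sh inst1 /opt/plasma 1M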
After running new_inst.sh there is a new directory for the instance under clusterconfig/instances. Go there and edit the files:
namenode.hosts: Put here the hostnames of the namenodes
datanode.hosts: Put here the hostnames of the datanodes
nfsnode.hosts: Put here the hostnames of the nodes that will run NFS bridges (may remain empty)
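As an illustration, assuming one hostname per line, a minimal single-machine test setup on a hypothetical host m1 could use:

namenode.hosts:
m1

datanode.hosts:
m1

nfsnode.hosts:
(left empty)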
With the deploy_inst.sh script one can now install the files under <prefix> on the cluster nodes. The installation procedure uses ssh to copy the files.
The option -only-config restricts the copy to configuration files
and scripts, but omits executables. This is useful when the
configuration of a running cluster needs to be changed (when running,
the executable files cannot be overwritten).
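A typical sequence might look as follows (a sketch; it assumes deploy_inst.sh is invoked with the instance name like the other scripts in clusterconfig):

./deploy_inst.sh <inst>                  # full installation: executables, config files, rc scripts
./deploy_inst.sh -only-config <inst>     # later: update only config files and scripts on a running cluster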
This step creates the PostgreSQL databases on the namenodes.
If the databases already exist, this step will fail. Use the -drop option to delete databases that were created in error. The databases are created under a predefined name.
After initializing the namenodes, it is possible to start the namenode server with
./rc_nn.sh start <inst>
This must be done before continuing with the initialization of the datanodes.
This step creates the data area for the block files on the datanodes:
./initdn_inst.sh <inst> <size> all
Here, <size> is the size of the data area in bytes. The number can be followed by "K", "M", or "G". If <size> is not a multiple of the blocksize, it is rounded down to the next lower multiple.
The keyword "all" means that all datanodes are initialized with the same size. Alternatively, one can also initialize the nodes differently, e.g.
./initdn_inst.sh inst1 100G m1 m2 m3
./initdn_inst.sh inst1 200G m4 m5 m6
This would initialize the hosts m1 to m3 with a data area of 100G, and m4 to m6 with a data area of 200G.
The initdn_inst.sh script also starts the datanode servers on these hosts, and registers the datanodes with the namenode.
You may have noticed that during initialization the cluster nodes were started in this order: first the namenodes, then the datanodes.

There are scripts rc_dn.sh, rc_nn.sh, and rc_nfsd.sh to start and stop datanodes, namenodes, and NFS bridges on the whole cluster. These scripts take the host names of the nodes to start or stop from the configured datanode.hosts, namenode.hosts, and nfsnode.hosts files.

So the right order of startup is: the namenodes first (collectively), then the datanodes, and finally the NFS bridges. There is also rc_all.sh to start/stop all configured services in one go.
What is a "collective" startup? As the namenode servers elect the coordinator at startup, they need to be started within a short time period (60 seconds).
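A complete start in this order might look as follows (a sketch; it assumes that rc_dn.sh and rc_nfsd.sh accept the same start/stop arguments as rc_nn.sh shown above):

./rc_nn.sh start <inst>      # all namenodes, collectively, within the 60 second window
./rc_dn.sh start <inst>      # then the datanodes
./rc_nfsd.sh start <inst>    # finally the NFS bridges, if any are configured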
On the cluster nodes, there are also scripts in <prefix>/etc. Actually, the scripts in clusterconfig on the operator node call the scripts with the same name on the cluster nodes to perform their tasks. One can also call the scripts on the cluster nodes directly to selectively start or stop a single server, e.g.
ssh m3 <prefix>/etc/rc_dn.sh stop
would stop the datanode server on m3.
The distribution does not contain a script that would be directly suited for inclusion into the system boot sequence. However, the existing rc scripts could be called by a system boot script. (In particular, such a script would need to switch to the user PlasmaFS runs as before calling the rc scripts.)
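A minimal sketch of such a boot hook for a datanode, assuming a hypothetical service user plasma and installation prefix /opt/plasma (both placeholders):

#!/bin/sh
# Example boot script (not part of the distribution): switch to the
# PlasmaFS user, then delegate to the installed rc script.
case "$1" in
    start|stop)
        su - plasma -c "/opt/plasma/etc/rc_dn.sh $1"
        ;;
    *)
        echo "Usage: $0 {start|stop}" >&2
        exit 1
        ;;
esac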
One can use the plasma client for testing the system, e.g.

plasma -namenode m1:2730 -cluster inst1 list /

would list the contents of the root directory / of PlasmaFS.