Introduction


VAMNET: The Virtual Amoeba Machine Network

Overview

The VAMNET is a hybrid operating system environment for distributed applications in heterogeneous environments, regarding both the hardware architectures used and the operating systems already present, for example UNIX. The VAMNET consists of several parts, some of which can operate standalone. Together they build up a hybrid distributed operating system environment with several novel features. These parts are:

  1. The VX-Amoeba kernel, a compact and powerful micro-kernel with distributed operating system features.

  2. The VX-Amoeba environment, primarily consisting of libraries supporting process execution on top of the VX-Kernel, building a network-distributed operating system.

  3. The AMUNIX environment: Amoeba (concepts) on top of UNIX-like operating systems.

  4. The AMCROSS cross-compiling environment, necessary for building native VX-Amoeba target binaries programmed in C.

  5. VAM: The Virtual Amoeba Machine. VAM unites the core Amoeba concepts with the world of functional programming in ML and of bytecode execution machines, for portable and reasonably safe execution of programs. All Amoeba system servers needed to build up a distributed operating system have been reimplemented in VAM-ML. VAM programs can be executed both on top of the AMUNIX and of the VX-Kernel process layer.

Figure 17 gives a schematic overview of all these components.


Figure 17: Overview of all components of the VAMNET system in an example configuration.

The VAMNET is an ongoing research and development project by Dr. Stefan Bosse at the BSSLAB laboratory, Bremen, Germany. It was started in 1999 and is currently converging to its final stage.

Fields of application

  1. Distributed measuring and data acquisition systems, for example remote digital camera servers connected over an Ethernet network and equipped with digital imaging software.
  2. The native Amoeba kernel is very well suited for embedded systems, such as PC104 single-board equipment.
  3. Distributed systems for machine control.
  4. High-performance parallel computing and other distributed numerical computations.
  5. Distributed file systems on top of standard operating systems.
  6. Distributed remote (wireless) robot control.
  7. Educational tool for the convenient study of distributed services and operating systems.

Advantages of a Hybrid System

  1. The basic concepts of the distributed operating system Amoeba are available for common operating systems with a convenient desktop environment. New operating systems mostly lack up-to-date device drivers, especially on the x86 PC platform with its wide spectrum of available hardware.

  2. For specialized (perhaps embedded) machines, for example data acquisition systems or numeric cluster machines with reduced hardware, the native Amoeba kernel is the best choice, featuring a modern and clean micro-kernel and exploiting the full power of the Amoeba system.

  3. Both worlds, embedded and specialized computers on the one hand and desktop computers on the other, can be merged with simple but powerful methods and concepts using a hybrid system solution. Each machine gets the system that fits it best.

VAMNET Details

The VAMNET system forges all of the parts described above into one hybrid operating system:

  1. The VAM runtime environment with system servers and user interaction,
  2. the native VX-Kernel and a process environment on top of the VX-Kernel,

  3. native VX-Amoeba programs, which can be user-customized.

Figure 18 shows an example configuration of such a hybrid system. Here, the VAM system is used to control a CNC milling machine attached to external embedded PC104 hardware, which runs the VX-Kernel and a CNC machine device driver that controls the axis motion of the machine directly.

First, a boot script starts some basic servers needed for an operational Amoeba system: the file server AFS (afs_unix) and the directory and name server DNS (dns_unix). Both store their information in generic UNIX files. Under UNIX, the FLIP server flipd is needed for client-server communication, too.
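A minimal sketch of this boot sequence is given below in ML, the implementation language of the VAM environment. In practice a boot script of this kind would more likely be a shell script; the server names are taken from the text above, while paths and command-line options are installation-specific and therefore omitted.

  (* Minimal sketch of the AMUNIX boot sequence, assuming the server
     binaries flipd, afs_unix and dns_unix are found on the PATH and,
     for simplicity, need no command-line arguments. *)
  let start name =
    match Unix.fork () with
    | 0 -> (try Unix.execvp name [| name |] with _ -> exit 127)
    | pid -> Printf.printf "started %s (pid %d)\n%!" name pid

  let () =
    start "flipd";     (* FLIP server: client-server communication under UNIX *)
    start "afs_unix";  (* AFS file server, storing data in generic UNIX files *)
    start "dns_unix"   (* DNS directory and name server, also UNIX-file backed *)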

Now the user can start utility programs, like the VAM shell vash. For development purposes the interactive vam program can be used; with this program it is possible, for example, to compile and execute ML scripts. It also provides an online help system containing the VAMNET book.
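As a hypothetical illustration, assuming only that VAM-ML accepts the usual OCaml-style syntax, a small ML script of the kind that can be compiled and executed with vam might look like this:

  (* sum.ml: a trivial ML script; it folds addition over a list
     and prints the result. *)
  let () =
    let sum = List.fold_left (+) 0 [1; 2; 3; 4] in
    Printf.printf "The sum is %d\n" sum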


Figure 18: A VAMNET example configuration connecting a UNIX desktop computer with a network-coupled PC104 controller.

On the native Amoeba side, the embedded PC104 system runs a boot server that starts up the device driver needed to control the connected milling machine. Both computers are connected by 100 Mbit/s Ethernet.

All of these components are merged into one operating system environment. With the VAM shell vash it is possible to access the native Amoeba kernel; for example, kernel statistics can be retrieved simply by calling the built-in kstat command. The administration of such a hybrid system is quite simple: after the Amoeba file and directory system has been created (using the servers shown above), only some capabilities must be made available to the UNIX environment, using generic UNIX environment variables like ROOTCAP, which specifies the root capability, and some system directories expected by various servers and utility programs must be created in the new directory system.
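The sketch below illustrates the classic Amoeba capability layout behind a value such as ROOTCAP: a capability is a 128-bit value consisting of a 48-bit server port, a 24-bit object number, an 8-bit rights field, and a 48-bit check field. The textual encoding stored in the environment variable is installation-specific, so the sketch only retrieves the raw string; the type definition is an illustrative ML model, not the actual libam declaration.

  (* The classic Amoeba capability layout (128 bits in total). *)
  type capability = {
    port   : bytes;  (* 48-bit server port identifying the service    *)
    obj    : int;    (* 24-bit object number within that service      *)
    rights : int;    (* 8-bit rights mask                             *)
    check  : bytes;  (* 48-bit check field protecting against forgery *)
  }

  (* Read the root capability string published via the UNIX environment;
     decoding it into a capability record is installation-specific. *)
  let root_cap_string () = Sys.getenv_opt "ROOTCAP"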

Most VAM programs can be executed directly on the native VX-Kernel. Only the Amoeba system library libam and a limited UNIX emulation library libakjax are required to implement the VAM virtual machine; this is the only part of VAM that must be adapted to the VX-Amoeba process environment. The VAM bytecode executables can be used unchanged in both the native and the AMUNIX Amoeba environments.

Performance and Experimental Results

One of the major results of the VAM project is that the Amoeba emulation layer AMUNIX, with its user-process implementation of the FLIP protocol stack, and the virtual machine approach show only slightly decreased performance compared with the native VX-Kernel and Amoeba implementation. The following tables give an impression of the performance and capabilities of the native VX-Kernel system, of AMUNIX, and of VAM on top of AMUNIX.

The main indicator for the performance of a distributed operating system is the performance of its messaging system, that is, the data transfer rate and the latency of messages without content (only the message header is transferred).
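Such latency figures are typically obtained with a micro-benchmark that repeatedly performs a header-only RPC and averages the round-trip time, as in the following ML sketch. The trans function is a hypothetical stand-in for the Amoeba transaction primitive (trans() in the C interface, served by getreq()/putrep() on the other side), and the header size is illustrative.

  (* Stub standing in for the Amoeba transaction primitive so that the
     sketch is self-contained; a real binding would perform the RPC. *)
  let trans ~header:_ ~request:_ ~reply:_ = 0

  (* Average round-trip time of n header-only RPCs, in seconds. *)
  let measure_latency n =
    let hdr = Bytes.create 32 (* illustrative header buffer *)
    and empty = Bytes.create 0 in
    let t0 = Unix.gettimeofday () in
    for _ = 1 to n do
      ignore (trans ~header:hdr ~request:empty ~reply:empty)
    done;
    (Unix.gettimeofday () -. t0) /. float_of_int n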


Table 4: RPC Test: remote with native VX-Amoeba kernel

Table 5: RPC Test: remote with native VX-Amoeba kernel

  Machine Configuration                                               Transfer Direction   Transfer Rate    Latency
  1: AMD-Duron 650 MHz CPU, 64 MB RAM, 3COM905 100 Mbit/s Ethernet    1 ⇒ 2                10.5 MBytes/s    170 μs
  2: Cyrix 100 MHz CPU, 32 MB RAM, 3COM905 100 Mbit/s Ethernet        2 ⇒ 1                9.54 MBytes/s    170 μs

Table 6: RPC Test: remote with native VX-Amoeba kernel and AMUNIX

  Machine Configuration                                               Transfer Direction   Transfer Rate    Latency
  1: AMD-Duron 650 MHz CPU, 64 MB RAM, 3COM905 100 Mbit/s Ethernet    1 ⇒ 2                8.7 MBytes/s     270 μs
  2: Celeron 700 MHz CPU, 64 MB RAM, 3COM905 100 Mbit/s Ethernet      2 ⇒ 1                9.1 MBytes/s     260 μs

Table 7: RPC Test: remote with native VX-Amoeba kernel, AMUNIX, and VAM

  Machine Configuration                                               Transfer Direction   Transfer Rate    Latency
  1: AMD-Duron 650 MHz CPU, 64 MB RAM, 3COM905 100 Mbit/s Ethernet    1 ⇒ 2                8.7 MBytes/s     300 μs
  2: Celeron 700 MHz CPU, 64 MB RAM, 3COM905 100 Mbit/s Ethernet      2 ⇒ 1                8.7 MBytes/s     300 μs

The measurements above are example measurements with an accuracy of about ±10%. Table 4 shows that the transfer rate of an RPC message transfer from one machine to another reaches its maximum value, not only compared with the following AMUNIX and VAM systems, but also compared with the maximum physical transfer rate of 100 Mbit/s Ethernet: 11.9 MBytes/s. This result shows the good adaptation of the FLIP protocol stack and the underlying Ethernet device drivers to this network system. Table 5 shows results with a different machine 2: an old Pentium-class CPU (Cyrix MMX) with only a 100 MHz core frequency. The VX-Kernel yields good performance results even down to i486 CPU machines.

Using the AMUNIX layer communicating with a native VX-Kernel (Table 6), only a slight loss can be observed: the transfer rate decreases by about 20%, and the latency increases by about 100%. With VAM on top (Table 7), there is no significant further difference. This result shows the suitability of ML programming and virtual machine concepts for client-server implementations.

RPC message passing is used not only in the remote case, but in the local case, too. Table 8 shows results for the various environments.

Table 8: RPC Test: local case

  Machine Configuration                                                                  Transfer Direction   Transfer Rate    Latency
  1: AMD-Duron 650 MHz CPU, 64 MB RAM, 3COM905 100 Mbit/s Ethernet, VX-Kernel            1 ⇒ 1                136 MBytes/s     12 μs
  1: Celeron 700 MHz CPU, 64 MB RAM, 3COM905 100 Mbit/s Ethernet, FreeBSD, AMUNIX        1 ⇒ 1                26 MBytes/s      275 μs
  1: Celeron 700 MHz CPU, 64 MB RAM, 3COM905 100 Mbit/s Ethernet, FreeBSD, AMUNIX, VAM   1 ⇒ 1                22.4 MBytes/s    400 μs

Unsurprisingly, the native VX-Kernel is the winner, but the AMUNIX and VAM systems offer sufficient transfer rates and latencies to implement efficient local RPC communication.