Osh: An Open-Source Python-Based Object-Oriented Shell
Jack Orenstein


UNIX utilities are designed to be combined by piping strings from one to another. Tools such as awk, grep and sed are used to filter and transform these strings to extract and manipulate data of interest. In object-oriented terms, a string represents an object, and much of the processing done by these text processing tools extracts object properties. For example, each line of top output (ignoring headers) represents a process object, and each number in the line represents some attribute of the process, e.g. process id, CPU usage, and memory usage. This approach breaks down for more complex tasks, and for these situations, scripts are written in shell languages (e.g. bash, zsh) and more powerful scripting languages such as Python.

In other words, at the command line, the primitives being manipulated are strings (representing objects). In order to work with objects directly, it is necessary to write a Python script. The power of "one-off" commands would be greatly enhanced if objects were available for use on the command line. This is the idea behind osh. Osh exposes Python objects and functions to the command line. Python objects are piped from one osh command to another. Python functions are used to manipulate these objects.

Osh is licensed under the GPL, and can be obtained from http://geophile.com/osh. Osh relies on the Toy Parser Generator for command-line parsing. Path objects are implemented using the path module.

Related projects

MSH (Monad): Osh and MSH are based on very similar ideas. Both pass objects from one command to another. These objects represent various OS concepts (process, file, etc.) or useful language constructs (number, string, list, map, etc.). Both stress the use of operations on objects as replacements for text processing tools, such as grep and awk, which are often used to extract object attributes from textual representations of objects. One major difference is that osh is designed to be used from within an existing shell, such as bash; while MSH, due to the absence of well-developed shells in the Windows world, is a complete shell. Similarly, osh builds on a well-established language, Python; while MSH includes a new language.

IPython: IPython (Interactive Python) is more of a complete shell, running inside Python.

Cluster ssh (cssh): Cssh allows interaction with multiple nodes simultaneously. A console is opened on each node. Typed commands are echoed and executed on all nodes. cssh is very nice for working with up to about eight nodes (in my opinion). Beyond that the consoles overlap, or you need really tiny fonts to see all consoles at once. Also, cssh does not provide for integration of results across nodes.

pssh, PuSSH: Pssh provides parallel versions of ssh, scp and other commands. PuSSH (Pythonic Ubiquitous SSH) is a parallel version of ssh with options for controlling the degree of parallelism, timeouts, and node selection. Both are written in Python.

MapReduce: Google's MapReduce is designed for interacting with large clusters. Map and Reduce are functions which aggregate data from the nodes in the cluster. Osh provides similar capabilities using the f and agg commands, when combined with remote execution. MapReduce is far more sophisticated in terms of scheduling and monitoring remote execution. Osh would need similar improvements before it could be used on Google-sized clusters.


Example

Suppose you have a cluster named flock, with nodes seagull1, seagull2, seagull3. Each node has a database tracking work requests in a table named request. You can find the total number of open requests in the cluster as follows:

    [jao@zack]$ osh @flock [ sql "select count(*) from request where state = 'open'" ] ^ agg 0 'total, node, count: total + count' ^ out

Configuring osh

Osh is configured by executing the Python code in ~/.oshrc. The major uses of this file are to configure databases and clusters. This file can also be used to import or define symbols to be referenced in osh commands.

~/.oshrc must contain this statement before any configuration information is specified:

    from oshconfig import *

This imports the configuration API. Configuration information is stored in a hierarchical structure, specified using dot notation. For example, to configure access to a postgres database:

    osh.sql.mydb.dbtype = 'postgres'
    osh.sql.mydb.host = 'localhost'
    osh.sql.mydb.db = 'mydb'
    osh.sql.mydb.user = 'flock'
    osh.sql.mydb.password = 'l3tme1n'

An assignment statement that does configuration must always begin with osh. The convention is that the next part of the name is the command being configured. In the example above, sql is the osh command that does database access. The third part of the name is the resource being configured, so in the example above, we're configuring access to a database that will be referred to as mydb in the osh sql command. The last part of the name is the attribute being initialized. In the example above, the mydb resource is a postgres database, located on localhost. The database name is mydb (it need not match the resource name), and a connection will be made for user flock using the password l3tme1n.
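
The hierarchy behind this dot notation can be built with very little Python. The sketch below is not osh's actual oshconfig code (which also supports assigning default values such as osh.sql = 'mydb' to an intermediate node); it only illustrates how attribute access can create configuration nodes on demand:

```python
class ConfigNode(object):
    """Auto-vivifying configuration node: reading a missing attribute
    creates and stores a child node, so dotted paths can be assigned
    without declaring the intermediate levels first."""
    def __getattr__(self, name):
        # Only called when the attribute does not exist yet.
        child = ConfigNode()
        object.__setattr__(self, name, child)
        return child

osh = ConfigNode()
osh.sql.mydb.dbtype = 'postgres'
osh.sql.mydb.host = 'localhost'
print(osh.sql.mydb.dbtype)  # -> postgres
```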

With this configuration, the database can now be accessed as follows:

    [jao@zack]$ osh sql -d mydb "select * from person" $

The database resource name, mydb, is specified using the -d flag.

A default database resource can be created by assigning a value to osh.sql, e.g.

    osh.sql = 'mydb'

This allows invocations of sql to omit the -d flag, e.g.

    osh sql "select * from person" $

Access to the nodes of a cluster is configured similarly, e.g.

    osh.remote.flock.user = 'root'
    osh.remote.flock.hosts = ['seagull1', 'seagull2', 'seagull3']
    osh.remote = 'flock'

This creates a cluster named flock consisting of nodes named seagull1 through seagull3. Cluster nodes are accessed using ssh, connecting as root. A default cluster resource is specified by assigning to osh.remote.

With this configuration, the cluster can be accessed as follows:

    [jao@zack]$ osh @flock [ ... ]

Or just osh @ [ ... ], since a default cluster has been specified.

Integration with the environment

Conversions between object and string: By default, the out command renders a Python object as a string using the str function. Arbitrary formatting can be done using a format specification, and the -c option to out renders data in CSV format (comma-separated values), quoting strings but not numbers.
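
Both renderings can be reproduced in modern Python with the standard library; csv.QUOTE_NONNUMERIC gives the quote-strings-but-not-numbers behavior described for out -c (the variable names here are illustrative, not osh internals):

```python
import csv
import io

record = ('hannah', 12)

# Default rendering: the str() form of the tuple.
default_form = str(record)

# CSV rendering: strings quoted, numbers left bare.
buf = io.StringIO()
csv.writer(buf, quoting=csv.QUOTE_NONNUMERIC).writerow(record)
csv_form = buf.getvalue().strip()

print(default_form)  # -> ('hannah', 12)
print(csv_form)      # -> "hannah",12
```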

Osh supports string input by translating each line of input into a tuple containing a single string object. For example, if foo.txt contains this text:

    The good,
    The bad,
    And the ugly
then this osh command:

    [jao@zack]$ cat foo.txt | osh ^ out

generates this output (one tuple for each line of input):

    ('The good,',)
    ('The bad,',)
    ('And the ugly',)

Of course each line of this output can be interpreted by Python as an object. This means that osh output can be saved in a file and then read back and turned into objects. So this sequence of commands:

    [jao@zack]$ cat foo.txt | osh ^ out -f foo2.txt
    [jao@zack]$ cat foo2.txt | osh ^ f 's: eval(s)' ^ out '%s'

generates output matching the contents of foo.txt.
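
The round trip relies only on the fact that the printed form of a tuple is a valid Python expression. A plain-Python sketch of the save-and-restore cycle, using an in-memory buffer in place of foo2.txt:

```python
import io

# Objects -> printed representations (what `out` writes to the file).
objects = [('The good,',), ('The bad,',), ('And the ugly',)]
saved = io.StringIO()
for t in objects:
    saved.write(repr(t) + '\n')

# Printed representations -> objects (what f 's: eval(s)' does per line).
saved.seek(0)
restored = [eval(line) for line in saved]

print(restored == objects)  # -> True
```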

Integration with the filesystem: Paths to files and directories are represented by path objects, implemented by the path module. For example, the following command sequence lists the file name and size of the files in the current directory:

    [jao@zack]$ osh f 'path(".").files()' ^ expand ^ f 'p: (str(p), p.size)' $
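
The same listing can be computed with the standard library alone; files_with_sizes below is a hypothetical helper, not part of osh, shown only to make the pipeline's meaning concrete:

```python
import os
import tempfile

def files_with_sizes(dirpath):
    """Return sorted (filename, size) pairs for the plain files in
    dirpath -- the same information the osh pipeline above produces."""
    return sorted((name, os.path.getsize(os.path.join(dirpath, name)))
                  for name in os.listdir(dirpath)
                  if os.path.isfile(os.path.join(dirpath, name)))

# Demonstrate on a scratch directory containing one known file.
d = tempfile.mkdtemp()
with open(os.path.join(d, 'a.txt'), 'w') as f:
    f.write('hello')
print(files_with_sizes(d))  # -> [('a.txt', 5)]
```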

Access to processes: The builtin function processes() returns a list of process objects. A process has information including the process's pid and that of its parent, its state, the command line, the environment settings, etc. (The process module is implemented on Linux using the per-process information under /proc. It has been tested only on Linux; it does not work on OSX, which lacks a /proc filesystem.) Example: The following command prints the pid, size and command line of processes whose size is at least 200M:

    [jao@zack]$ osh f 'processes()' ^ expand ^ select 'p: p.size() > 200000000' ^ f 'p: (p.pid(), p.size(), p.command_line())' $
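
The /proc-based implementation can be sketched briefly. These helpers are illustrative stand-ins for osh's process module, not its actual code, and they return nothing useful on systems without /proc:

```python
import os

def all_pids():
    """Process ids: the numeric directory names under /proc (Linux).
    Returns [] on systems without a /proc filesystem."""
    if not os.path.isdir('/proc'):
        return []
    return sorted(int(d) for d in os.listdir('/proc') if d.isdigit())

def command_line(pid):
    """argv of a process, read from the NUL-separated
    /proc/<pid>/cmdline file."""
    with open('/proc/%d/cmdline' % pid, 'rb') as f:
        return [arg.decode() for arg in f.read().split(b'\0') if arg]

pids = all_pids()
# On Linux, the current process appears in the list.
print(os.getpid() in pids or pids == [])
```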

Integration with network: Once a cluster has been configured in ~/.oshrc, a user can interact with some or all of the nodes of that cluster in a single command. The copyfrom command copies files from cluster nodes, placing files in a directory per target. So this command:

    [jao@zack]$ osh copyfrom -c flock /var/log/messages* .

will create directories seagull1, seagull2 and seagull3 (one for each node of the cluster), and copy /var/log/messages* from each node to the corresponding directory.

The copyto command copies local files to each node of a cluster.

Remote execution of osh commands is supported using a special syntactic construct, as shown above. osh @flock [ ... ] ^ ... causes the bracketed commands to be executed on each node of the cluster flock, in parallel. Result tuples from each node are streamed to the next command of the sequence. The results from a given node include the node name as the first element of each output tuple. For example, this command will report on the number of processes running on each node of a cluster:

    [jao@zack]$ osh @flock [ f 'len(processes())' ] $

len(processes()) is a count of the processes running on a node. This count is computed on each node. Each count returned from a node is combined with the node's name, yielding 2-tuples, e.g. ('seagull1', 47), ('seagull2', 49), ('seagull3', 45).

In the current release of osh (0.6), remote commands can generate output but cannot receive input.

Integration with databases: Osh integration with databases is very simple due to the similarity between Python tuples and database rows. The osh command for interacting with a database is sql. For example, suppose the table person has columns name and age (of types varchar and int, respectively). Then this command:

    [jao@zack]$ osh sql 'select * from person' $

generates output tuples like this:

    ('hannah', 12)
    ('julia', 7)

Note that each output tuple mirrors the column types -- not two strings which would need to be parsed, but a string in position 0 and an integer in position 1.

The sql command can also receive input and bind the received values to SQL statements. For example, suppose the file cousins.txt contains this data:

    jake            1
    alexander       13
    nathan          12
    zoe             8

This data can be added to the person table as follows:

    [jao@zack]$ cat cousins.txt | osh ^ f 's: s.split()' ^ sql "insert into person values('%s', %s)" $

Lines of data from cousins.txt are passed to the function 's: s.split()'. This uses the Python function string.split to split out the data from each line of input: ('jake', '1'), ('alexander', '13'), ('nathan', '12') and ('zoe', '8'). These tuples are bound to the %s parameters in the SQL statement. So after executing the above command, the data is present in the database.
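
The binding step is ordinary string substitution over the input tuple. The sketch below illustrates only the substitution; the real sql command hands values to the database driver rather than building raw SQL strings:

```python
lines = ['jake            1', 'alexander       13',
         'nathan          12', 'zoe             8']

# f 's: s.split()' turns each line into a sequence of fields.
rows = [tuple(line.split()) for line in lines]

# Each tuple is bound to the %s parameters of the SQL template.
template = "insert into person values('%s', %s)"
statements = [template % row for row in rows]

print(statements[0])  # -> insert into person values('jake', 1)
```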

The sql command can also be used to execute data definition statements such as CREATE TABLE and DROP TABLE. Because tables can be created, populated, queried, and dropped directly from osh, it is feasible to use databases for short-term storage, for scratch storage, and as a data manipulation tool, without the need to write custom applications.

Data manipulation

The first osh command in a sequence generates a stream of objects. Each subsequent command takes objects as input and generates output objects. For a command such as f, one input object gives rise to one output object. Other commands filter and aggregate objects.

select: An input object is sent to output if and only if a given predicate, applied to the object, returns true. For example, this command sequence finds words in a dictionary of length 20 or greater:

    [jao@zack]$ cat /usr/share/dict/words | osh ^ select 'w: len(w) >= 20' $
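
select corresponds to a Python filter over the stream. A tiny sketch of the same predicate, applied to made-up sample words rather than the dictionary:

```python
# The predicate from the command above: keep words of length >= 20.
words = ['cat', 'electroencephalograph', 'antidisestablishmentarianism']
long_words = [w for w in words if len(w) >= 20]
print(long_words)
```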

agg: Input objects are aggregated by specifying an initial value for an accumulator, and a function to combine the accumulator with an input object. (See the Example section for an example of the agg command.) Another form of the agg command creates a set of accumulators, one for each value of a grouping function. For example, this command sequence computes a histogram of dictionary word lengths:

    [jao@zack]$ cat /usr/share/dict/words | osh ^ agg -g 'w: len(w)' 0 'count, w: count + 1' ^ sort $
    (2, 49)
    (3, 536)
    (4, 2236)
    (5, 4176)
    (6, 6177)
    (7, 7375)
    (8, 7078)
    (9, 6093)
    (10, 4599)
    (11, 3072)
    (12, 1882)
    (13, 1138)
    (14, 545)
    (15, 278)
    (16, 103)
    (17, 57)
    (18, 23)
    (19, 3)
    (20, 3)
    (21, 2)
    (22, 1)
    (28, 1)
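
Both forms of agg can be sketched in plain Python: the non-grouping form is a fold over the stream, and the grouping form keeps one accumulator per value of the grouping function (variable names here are illustrative):

```python
from functools import reduce

# Non-grouping agg: initial accumulator plus combining function,
# as in: agg 0 'total, x: total + x'
stream = [1, 2, 3, 4]
total = reduce(lambda acc, x: acc + x, stream, 0)

# Grouping agg: one accumulator per group key,
# as in: agg -g 'w: len(w)' 0 'count, w: count + 1'
words = ['cat', 'dog', 'mouse', 'horse']
hist = {}
for w in words:
    key = len(w)                      # grouping function
    hist[key] = hist.get(key, 0) + 1  # combining function

print(total)                 # -> 10
print(sorted(hist.items()))  # -> [(3, 2), (5, 2)]
```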
Other data manipulation commands include sort, expand, and reverse.

Error handling

Osh input and output streams are labeled. Normal output goes to the stream named o. Error output goes to the stream named e. When an object is passed from one command to another, a label is attached and used for routing by the osh runtime. Normal data flow is through the o stream, and the osh parser ensures that there is always a handler of the e stream. For example, the following command sequence generates integers 0 through 4 and transforms each using the function f(x) = x / (x - 2):

    [jao@zack]$ osh gen 5 ^ f 'x: x / (x - 2)' $
    (0,)
    (-1,)
    ERROR: ("_F#3['x: x / (x - 2)']", (2,), 'integer division or modulo by zero')
    (3,)
    (2,)

The third line of output indicates an error, due to the division by zero which occurs on f(2). The output is controlled by an error handler which prints "ERROR:" followed by a tuple identifying the osh command and input that raised the exception, and the exception's message. The error handler can be specified by explicitly handling the e stream. For example, error messages can be saved in a file as follows:

    [jao@zack]$ osh gen 5 ^ f 'x: x / (x - 2)' ^ o : out , e : out -f error.txt

f generates output on both the o and e streams. Both streams of output are piped to the next command, which specifies the handling of each stream. o : out specifies that objects in the o stream are passed to the out command, causing them to be printed to stdout. e : out -f error.txt specifies that objects on the e stream should be written to the file error.txt. A comma separates the commands handling the two streams.
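
The label-and-route mechanism can be sketched in a few lines of Python. This is an illustration of the idea, not osh's runtime; in particular the error payload format is made up:

```python
def apply_f(x):
    """Tag each result with a stream label: 'o' for normal output,
    'e' for errors, as the osh runtime does when routing objects."""
    try:
        return ('o', x / (x - 2))
    except ZeroDivisionError as exc:
        return ('e', (x, str(exc)))

normal, errors = [], []
handlers = {'o': normal.append, 'e': errors.append}  # e always handled

for x in range(5):
    label, payload = apply_f(x)
    handlers[label](payload)  # route the object by its stream label

print(len(normal), len(errors))  # -> 4 1
```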

Streams are a general-purpose feature of osh. Any number of streams can be handled, e.g. osh ... ^ a : out -f a.out , b : out -f b.out , c : out -f c.out ^ .... However, an object can be placed on a stream only by an osh command, and existing commands only use the o and e streams. This may change in future releases of osh.

Extending osh

Osh can be extended by users by writing functions referenced from the command line, and by adding new commands.

Adding extensions to ~/.oshrc: Osh functions may refer to names defined in ~/.oshrc (which contains Python code). ~/.oshrc is a convenient place to put utility functions and constants needed by osh command sequences.

Installing Python modules: Osh commands may also refer to installed Python modules. In this case, the .oshrc file would have to import the needed module. Installation is done as usual by copying to Python's site-packages directory, e.g. /usr/lib/python2.3/site-packages. For installation on a cluster, osh provides the install command. For example, to install ~/foobar.py on cluster flock, the following command can be used:

    [jao@zack]$ osh install -c flock ~/foobar.py

Writing commands: A new osh command, e.g. foobar, can be created by writing and installing a module that implements the command. The details are beyond the scope of this paper; for detailed examples, look at the osh commands themselves, which are located in site-packages/osh.

Implementation of osh

Syntax: The goal of osh is to allow for manipulation of objects directly from the command line. Some syntax had to be invented to express how the individual commands should be structured. This syntax is constrained by the shell. For example, | is interpreted by the shell, so it terminates arguments to osh and cannot be used as the osh piping syntax. In general, the tokens of osh had to be selected to avoid those tokens already interpreted by the shell.

Different shells reserve different tokens, which is a problem. For example, the osh piping symbol ^ is sometimes interpreted by zsh (depending on zsh configuration).

Lexing and parsing: osh is a Python executable, and the tokens of the osh command line are passed to osh via sys.argv. Lexing (identifying osh tokens) is currently pretty crude, relying on the user to include whitespace separating osh tokens in most cases. Parsing is done using the Toy Parser Generator to create a parse tree representing the structure of the osh command line.

Parse tree: The parse tree representing an osh command line is composed of pipelines, opsets, and ops: a pipeline is a sequence of opsets, and an opset maps each stream label to the op handling that stream.

The parse tree can be printed by running osh in verbose mode, specifying an argument of -v1 or -v2 to osh. Example (edited for readability):

    [jao@zack]$ osh -v1 gen 10 1 ^ agg 1 'factorial, x: factorial * x' $
    pipeline#2(opset#1(o: _Gen#0['10', '1']) ^
               opset#4(e: _Out#7['-t', 'ERROR: %s'], o: _Agg#3['1', 'factorial, x: factorial * x']) ^
               opset#6(e: _Out#8['-t', 'ERROR: %s'], o: _Out#5['-t']))

This command line computes 10! (= 1 x 2 x ... x 10). The root of the parse tree is pipeline#2. (Each node has an identifier prefixed by #.) This pipeline consists of three opsets. The first opset generates 10 integers starting with 1. The second one computes the factorial using the agg command, receiving input on the o stream. If the preceding command (gen) encountered any errors, they would show up on opset#4's e stream and be printed on stdout. The final opset prints whatever arrives on the o stream, and identifies errors arriving on the e stream.

Execution: Execution is initiated by invoking the execute method on the parse tree's root. An object is passed to an op by invoking its receive method, and output is passed downstream when the receive method invokes the send method. The osh runtime implements send, attaches a stream label, and then passes the object to the next opset or pipeline. The recipient then uses the stream label to route the object to the correct receive method.

When a command knows it has consumed all its input, (e.g. gen 10 1, after generating 10), it calls send_complete. This call is propagated downstream and is used by commands that must accumulate some input before generating any output, e.g. agg and sort.
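
The receive/send/send_complete protocol can be sketched with a few small classes. The names below are hypothetical, not osh's actual classes; the Sort stage shows why send_complete matters for commands that must accumulate all input:

```python
class Op(object):
    """Push-based pipeline stage: receive() handles one object,
    send() forwards downstream, send_complete() signals end of input."""
    def __init__(self):
        self.next = None
    def send(self, x):
        if self.next:
            self.next.receive(x)
    def send_complete(self):
        if self.next:
            self.next.receive_complete()
    def receive_complete(self):
        self.send_complete()

class Gen(Op):
    """Source, like gen: emits n integers, then signals completion."""
    def __init__(self, n):
        Op.__init__(self)
        self.n = n
    def run(self):
        for i in range(self.n):
            self.send(i)
        self.send_complete()  # no more input: tell downstream

class Sort(Op):
    """Like sort: must accumulate all input before emitting anything."""
    def __init__(self):
        Op.__init__(self)
        self.buf = []
    def receive(self, x):
        self.buf.append(x)
    def receive_complete(self):
        for x in sorted(self.buf, reverse=True):
            self.send(x)
        self.send_complete()

class Collect(Op):
    """Sink: gathers whatever arrives."""
    def __init__(self):
        Op.__init__(self)
        self.result = []
    def receive(self, x):
        self.result.append(x)

gen, sort_op, collect = Gen(5), Sort(), Collect()
gen.next, sort_op.next = sort_op, collect
gen.run()
print(collect.result)  # -> [4, 3, 2, 1, 0]
```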

Memory usage of osh commands: A command such as f needs to store just the current object. The non-grouping form of agg stores the current object and an accumulator; the grouping form of agg stores an accumulator for each group. Unfortunately, some commands (sort, reverse) must accumulate all input before any output can be generated. Currently, these commands all run in virtual memory. Virtual memory usage could be limited by using temporary disk storage.

Remote execution: Remote execution is specified using a special syntactic form, e.g. osh @flock [ ... ] .... The identifier following @ names a cluster configured in ~/.oshrc, and the osh command sequence delimited by [ ... ] is executed on each node of the cluster. Results from each node include the node's name.

Execution on the nodes of the cluster is done in parallel. The interaction with each node proceeds as follows. The bracketed osh commands, intended for remote execution, shows up in the parse tree as a pipeline. That pipeline is pickled and sent to the remote node via ssh. On the remote node, the executable remoteosh reads and unpickles the pipeline and executes it. (Currently, osh assumes that the objects referenced in the pipeline have already been installed on the node. It is likely that a future version of osh will install needed modules automatically.) Output from the command is pickled and returned to the caller, at which point the node name is added to each returned object.

Three threads are created per remote node. One thread issues the ssh command which runs remoteosh, and two threads handle stdout and stderr from the remote process. This is potentially a lot of threads, but it is important to avoid blocking progress due to unconsumed input. The biggest cluster I use has 80 nodes, and osh works on this cluster without any problems, (this is with Python 2.4 on FC3).

Another approach to remote execution is to use the Python select module to manage I/O. However, remote execution relies on pickling, while select returns data that has arrived, regardless of pickled object boundaries. Getting select and pickling to work together is doable, and if the thread-based approach runs into trouble with bigger clusters, I may need to switch.
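
One way to reconcile select-style reads with pickled object boundaries is explicit length framing: prefix each pickle with its size, buffer incoming bytes, and decode only completed objects. A sketch of that approach (not how osh currently works):

```python
import pickle
import struct

def frame(obj):
    """Length-prefix a pickled object so boundaries survive arbitrary
    read() chunking on the receiving side."""
    data = pickle.dumps(obj)
    return struct.pack('>I', len(data)) + data

def unframe(buf):
    """Extract all complete objects from buf; return them plus the
    leftover bytes of any partially received object."""
    objs = []
    while len(buf) >= 4:
        (n,) = struct.unpack('>I', buf[:4])
        if len(buf) < 4 + n:
            break  # partial object: wait for more data to arrive
        objs.append(pickle.loads(buf[4:4 + n]))
        buf = buf[4 + n:]
    return objs, buf

stream = frame(('seagull1', 47)) + frame(('seagull2', 49))
whole, rest = unframe(stream)
partial, _ = unframe(stream[:10])  # as if select() returned mid-object
print(len(whole), len(partial))    # -> 2 0
```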

Possible future development

Osh API: Osh was designed for use from the command line. Its capabilities might be useful from a Python-based shell such as IPython, or in straight Python code. For these situations, it would be nice to have an osh API.

Pipe objects to commands executing remotely: Currently, remote execution can return data but cannot accept data. I.e., osh ... ^ @flock [ ... ] ... is not supported. It might be useful to support piping objects to commands executing remotely.

Support for other databases: The osh.sql module is constructed on top of a class named DBType. Each database system supported by osh needs a DBType subclass (providing the methods connect, run_query and run_update). Currently, only Postgresql is supported. It would be easy to write DBTypes for other database systems.

Support for other operating systems and shells: Osh has been developed on Linux, using the bash shell. Porting to other shells may be tricky due to overlaps between osh tokens and tokens interpreted by other shells. Porting to other UNIX-like operating systems should not be difficult, although the process module is based on the /proc filesystem (which, e.g., OSX does not have). Using vmstat output is another possibility, but that also varies among UNIXes. Porting to Windows is more difficult because of reliance on the popen2 module, which is not fully supported on Windows.


Experience

I developed osh to support my own work at Archivas, contributing to the development of the ArC cluster, a storage system for very large archives. I started using cssh, but found it unwieldy when working with clusters containing more than 9 nodes. I also wanted something that would aggregate output from across the cluster. Not finding anything suitable, I wrote osh. Features and commands have been added as needed to support my work on ArC.

Since May 2005, I've been using osh as my only tool for working on clusters. Often, all I need to do is to run a remote command on all nodes and view output. I do that using this shell script:

    osh @$cluster [ sh "$*" ] $

If that's all I needed, then osh would have been overkill. But other common usages combine remote execution with database access and cluster-wide aggregation, as in the flock example above.

I've also found osh extremely useful for analyzing data. I'll often run tests that run for hours or days, collecting log files and the output of tools such as vmstat and iostat from all nodes. Osh has proven useful for extracting the data of interest, reducing the data volume, and producing CSV output, which can then be graphed.