Channel - A Name Space Based C++ Framework For Asynchronous Distributed Message Passing and Event Dispatching

Yigong Liu (9/24/2006)



1. Introduction

In Unix and most OSes, file systems allow applications to identify, bind to and operate on system resources and entities (devices, files, ...) using a "name" (path name) in a hierarchical name space (directory system), which is different from variables and pointers in a flat address space. Many interprocess communication (IPC) facilities also depend on some kind of "name" to identify them, such as the pathname of a FIFO or named pipe, the pathname of a unix domain socket, the ip-address and port of a tcp/udp socket, and the keys of System V shared memory, message queues and semaphores. "The set of possible names for a given type of IPC is called its name space. The name space is important because for all forms of IPC other than plain pipes, the name is how the client and server "connect" to exchange messages." (quote from W. Richard Stevens, "Unix Network Programming").

Channel is a C++ template library providing name spaces for asynchronous, distributed message passing and event dispatching. Message senders and receivers bind to names in a name space; binding and matching rules decide which senders bind to which receivers; message passing and event dispatching then happen among the bound senders and receivers.
Channel's signature:
    template <
      typename idtype,
      typename platform_type = boost_platform,
      typename synchpolicy = mt_synch<platform_type>,
      typename executor_type = abstract_executor,
      typename name_space = linear_name_space<idtype,executor_type,synchpolicy>,
      typename dispatcher = broadcast_dispatcher<name_space,platform_type>
    >
    class channel;  
Various name spaces (linear/hierarchical/associative) can be used for different applications. For example, we can use integer ids as names to send messages in a linear name space, string path name ids (such as "/sports/basketball") in a hierarchical name space, and regex patterns or Linda tuple-space style tuples in an associative name space. Users can configure the name space easily by setting a channel template parameter.
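To make the configuration concrete, here is a rough sketch (not taken from the distribution): it assumes std::string works as a path-name id type and that hierarchical_name_space takes the same three template arguments as the default linear_name_space shown in the signature above; the typedef names are made up.

    #include <string>
    // Default configuration: linear name space with integer ids, broadcast dispatcher.
    typedef channel<int> int_chan;

    // Swap in a hierarchical name space keyed by string path names
    // (template arguments assumed to mirror linear_name_space's).
    typedef channel<std::string,
                    boost_platform,
                    mt_synch<boost_platform>,
                    abstract_executor,
                    hierarchical_name_space<std::string,
                                            abstract_executor,
                                            mt_synch<boost_platform> >
                   > path_chan;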

Channel's other major components are dispatchers, which dispatch messages/events from senders to bound receivers. The dispatcher is also a channel template parameter. The design of dispatchers can vary in several dimensions:
Sample dispatchers include: the synchronous broadcast dispatcher, buffered asynchronous dispatchers, ...

Name space and dispatchers are orthogonal; they can be mixed and matched freely. Just as STL algorithms can be used with any STL container by means of the iterator range concept, name spaces and dispatchers can be used together because of the name binding set concept.

By combining different name space and dispatching policies, we can achieve various models:
Similar to distributed file systems, distributed channels can be connected or "mounted" to allow transparent distributed message passing. Filters and translators are used to control name space changes.

For tightly coupled single-address-space applications/modules, Channel's "unnamed" in/out objects, ports and signals/slots, support a fine-grained, local message passing model without the hassle of setting up a name space and assigning names.

Channel is built on top of Boost facilities:

2. Build

Channel is continuously being developed and tested on Linux (ubuntu8.04/g++4.2.4 - ubuntu9.04/g++4.3.3) and Windows (Visual C++ 2005 - Visual C++ 2008). The implementation is based solely on standard boost facilities plus Boost.Asio and Boost.Interprocess.
Download: http://channel.sourceforge.net
Build: Channel is a header only library. There is no need to build the library itself to use it. Please follow these steps:
download or checkout the boost distribution
download the latest boost_channel_x_x.tar.gz
tar xvzf boost_channel_x_x.tar.gz
add boost's directory and Channel's directory to the compiler's include path
cd to <channel_top_directory>/libs/channel/example/<specific samples such as ping_pong>
bjam

3. Tutorials

The following are a few samples showing how different name spaces and dispatchers can be used in various situations:

3.1 gui event handling

A simple sample shows a gui window sending (broadcasting) simple events to callbacks (either free functions or object members). details...

3.2 gui event handling with 2 local channels

This sample shows how 2 channels can be connected to allow gui events to propagate from one channel to another. Also, we use a POD struct as the message id/name. details...

3.3 distributed gui events

A sample shows how events can be sent (broadcast) to callbacks in a remote process by connecting a local channel to remote channels thru Boost.Asio. details...

3.4 chat with direct connection

This sample shows the usage of a hierarchical name space by defining chat subjects as string path names. For the demo, chat peers connect directly to each other, subscribing to the subjects they are interested in and exchanging messages. Since it is a hierarchical name space, peers can subscribe to wildcard ids such as "all sports related subjects". details...

3.5 buffered channel with blocking active receiver (synchronous choice, join synchronization patterns)

A sample shows the usage of buffered channels implemented thru a synchronous pull dispatcher. In this channel configuration, messages are buffered inside the channel at the sender side. The receiver is active: a thread blocks waiting for the arrival of messages at synchronous join/choice arbiters and then processes them. details...

3.6 buffered channel with async receivers (asynchronous choice, join synchronization patterns)

This sample shows a buffered channel supporting asynchronous receivers using the asynchronous coordination patterns choice and join. The callback actions are dispatched thru a thread pool executor. details...

3.7 distributed chat thru a central server

This sample shows a simple chat client and server design. Clients connect to the server to chat with each other in separate chat groups identified by subject. The chat subject (a string) is the id in the name space. Clients can join/leave chat groups identified by subject ids and send messages to chat groups. If a chat group (subject) doesn't exist yet, the first member's "join" creates it. details...

3.8 channel connection thru shared memory

This sample shows that remote channels in 2 processes (chat1, chat2) can be connected thru shared memory message queues based on Boost.Interprocess. details...

3.9 channel using regex name matching

This sample demos channels using regex pattern matching for name-matching and message dispatching. Peers can use regex patterns to bind/subscribe to names/ids. Boost.Regex is used for implementation. details...

3.10 channel using Linda-style associative lookup

This sample demos channels using Linda-style associative name space. Tuples are used as names/ids and associative lookup is used for name-matching. Boost.Tuple is used for implementation. details...

3.11 channel name space management and security with filter and translator

This sample demos how we can use filters and translators to achieve name space management and security. details...

3.12 port and signal: unnamed point of tightly-coupled local interactions

This tutorial explains 3 samples based on port and signal. details...

4. Design

4.0 Overall Design Idea

"Names" play an important role in distributed computing. To quote:
"... a new kind of system, organized around communication and naming ..."
"A single paradigm (writing to named places) unifies all kinds of control and interprocess signaling."
Briefly summarized, the following operations on names are identified:
call - vocative use of a name by one agent
co-call/response - reaction by the other agent
Synchronized action is the coming together (binding) of calling and co-calling (thru the name).
The reason to distinguish between "calling" and "response" is that, in describing any agent (process/thread/...), we define its potential behaviour (or capabilities) by what calls and responses it can make - this is the basic idea underlying the design based on active objects or communicating processes, which will be detailed in a later section.
quote, co-quote:
quote/co-quote refers to the way we can pass names as/inside message content. We can simulate a function call (call-and-return) by packing a "return" name/id inside the message and waiting on this name for the result (from the remote side); see the sketch after this list.
match:
test a name for equality with another name. Name matching algorithms mostly depend on the name space structure. In the following sections, we will expand "matching" to include wildcard and regex matching.
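As a rough, hypothetical sketch of the quote/co-quote idea (the message type and its fields are made up, not part of Channel): the caller packs a "return" name into the request and waits on that name for the reply.

    #include <string>
    // Hypothetical request message quoting a "reply-to" name inside its content;
    // the responder sends its result to reply_to, where the caller is waiting.
    struct request_msg {
        std::string reply_to;   // name the caller subscribes to for the result
        std::string payload;    // the actual request data
    };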

The design of Channel is based on the following integration of Plan9/Inferno's name space idea and Robin Milner's interaction thru names:

4.1 Name space

4.1.1 What's in a name?

In Channel, to facilitate name-matching/binding operations, a name has the following attributes:
Id is the main content of a name. Various types of ids can be used for different applications: integers, strings, POD structs etc. can be used for a linear name space; string path names can be used for a hierarchical name space; and regex patterns and Linda style tuples can be used for an associative name space.
Id_trait defines the attributes of an id type. A major feature of id_trait is the id-matching algorithm, which partially decides name-matching and thus which senders will bind to which receivers and be able to send messages to them. For example, exact matching can be used for linear name space ids; prefix matching can be used for path name ids; while in associative name spaces, regex pattern matching and Linda style associative lookup can be used for id-matching.
A channel is a process local name space which can be connected to other local or remote channels. So we have 2 types of communicating peers:
When sending/receiving messages, we can specify the scope of operations:

4.1.2 Types of name space

There are 3 types of name spaces based on their id-matching algorithms and naming structures:
Linear name space: there is an ordering relationship among ids, so they can be arranged in a linear range. Exact matching is used for id-matching.
Hierarchical name space: there is a containment relationship among ids, so they can be arranged in tree/trie structures. Prefix matching is used for id-matching.
Associative name space: id-matching is based on associative lookup similar to Linda's tuple space, or on regular expression matching algorithms.

4.1.3 Name binding set and Name matching algorithm, binding rules

No pure names exist; names are only created in the name space when bound for sending/receiving messages:
Name binding sets:
There are 2 aspects to the name matching algorithms and binding rules to decide binding_sets:
Named_Out and Named_In don't bind to each other directly (as in most event dispatching systems). Instead, they bind to names in the name space. Based on the binding and matching rules, their binding sets are resolved; these contain direct pointers to their counterparts. Actual message passing and dispatching happen on the binding set and never need to go thru the name space again. So the actual message passing and dispatching behaviour and performance should be the same as if we had registered the Named_In directly with the Named_Out (as we would have done in normal event dispatching systems).
Based on name-matching, there are possibly the following 4 kinds of binding sets:

4.1.4 Name spaces merge and connections

When 2 channels (A & B) are connected/mounted, their name spaces are merged as follows:
Filters and translators can be specified on connections among channels to control the name space merge:
Based on an application's name space management requirements, we may need to "relocate"/"mount" the names imported (from a connection to a remote name space) to a specific sub-region of the name space. For example, if we have a name space on a desktop computer and connect to a PDA and a laptop, we can set translators at the connections so that names imported from the PDA appear under "/pda/" and names from the laptop appear under "/laptop/". Or if our application uses integers as ids/names, we may want to relocate ids from the 1st connection to [1000-1999], ids from the next connection to [2000-2999], and so on. That is similar to the way we mount remote file systems into the local file system.
Based on security requirements, we may need to use filters to restrict the valid range of names allowed to pass in/out of specific channel connections. For example, a server's name space connects to 2 clients and we want these clients' name spaces and messaging to be totally separate, so that one client is unaware of anything happening inside the other client's name space, such as new name publications and message passing. That is also similar to the way we protect networks with firewalls and NATs.

4.2 Dispatching

Dispatchers or dispatching policies are operations or algorithms defined over the name binding set. They define the semantics of "interactions thru names". Based on Robin Milner's separation of calling and co-calling, there are 2 parts defined for each dispatching algorithm:
The following are major design considerations for dispatchers.

4.2.1 How message data move: push/pull, buffering

There are 2 basic models of passing messages/events from senders to receivers:
Since Channel is for asynchronous messaging, mostly the following 2 dispatching categories are used:
Dispatching variations can be: broadcast, round-robin,...
Execution variations can be:
Messages are buffered inside the channel at the Named_Out side; receivers pull message data in two ways:
Message coordination patterns (choice and join) are applied in both the synchronous and asynchronous pull models to decide when a synchronous receiving thread can be unblocked or an asynchronous callback can be fired, based on the available messages.

For message buffering inside channel, we can have various design choices:

4.2.2 How operations are performed: synchronous/asynchronous

When messages arrive, we have 2 choices for how dispatching operations and callbacks are performed:
There are various designs of executors. [7] provides a detailed discussion of Java's executor design.
Different executors can run their threads at different scheduling priorities, and we can assign callbacks to run in the proper executors according to the application's requirements.

4.2.3 Message passing coordination patterns

Join-calculus, Comega, and CCR [5][6] define a few messaging coordination patterns regarding when and how messages will be consumed and callbacks fired:
In channel, both choice and join are applied in synchronous and asynchronous forms.

4.2.4 Message handling

life-time management of messages:
marshaling/demarshaling of messages:

4.3 Connection related

The following are design considerations related to channel connections.

4.3.1 Connections

There are 2 kinds of connections:
A connection object is a simple object, containing just the two peers/ends of the connection.
Ways to break a connection:

4.3.2 Peer

    The common interface for connection peers: interfaces and streams.
The proxy of the peer channel; the core of the channel connection logic:
         . how remote binding/unbinding events affect the local name space
         . how messages are propagated from and to remote channels
Stream is used to wrap a remote transport connection (socket, pipe or message queue inside shared memory).
In an earlier implementation of Channel on ACE [8], a Connector class was included as one of the core classes to connect local and remote channels. The disadvantage of this design is that Channel gets tied to a specific architecture (such as thread-per-connection), making it difficult to integrate Channel with other existing servers.
In Plan9/Inferno, when we mount a remote name space locally, the real operation is to mount a descriptor (file, pipe, or socket connection) at a specific point in the name space.
Following this style, a remote channel connection connects/mounts a "stream" to a local channel/name_space; the stream wraps a socket, pipe, or shared-memory message queue connecting to a remote channel in another process or machine. This avoids interfering with a server's internal design (such as threading), so Channel works well with both single-threaded asynchronous and multi-threaded synchronous server designs.

4.4 "Unnamed" binding of output/input or points of tightly-coupled local interactions

    As discussed in the "Overall Design Idea" section, message passing happens on the binding of calling (the dispatcher's sender) and co-calling (the dispatcher's receiver).
    All the above discussions focus on setting up this binding thru name-matching in name spaces. "Binding thru names" provides a loosely coupled computing model. An agent or thread performs or provides its functionality thru the "names" it publishes and subscribes to in the application channel. It can be moved to another process or another machine and continue functioning as before, as long as in its new environment there is a channel connected to the original channel and the moved agent attaches to the new channel with the same set of "names". However, sometimes setting up a name space and assigning proper names may be more burden/overhead than benefit, if all we want is "localized" computation based on the message passing model.
    In many message passing based systems, threads (or processes in the CSP sense) communicate thru "ports" or "channels" which are normal local objects, possibly with internal message queues. Choice/Join arbiters work directly with these local objects. Pointers to these objects can be passed inside messages to enable various message-passing based idioms. These provide a tightly coupled, localized model inside the same address space.
    From Channel's design perspective, these localized communication primitives can be encoded as special kinds of binding sets of senders (named_out) and receivers (named_in). They are "unnamed", not set up thru name matching in a name space. For example, in C++CSP there are One2OneChannel, Any2OneChannel and One2AnyChannel. One2OneChannel can be encoded as the binding of one "unnamed_out" and one "unnamed_in"; Any2OneChannel as the binding of a group of "unnamed_out" and a single "unnamed_in"; One2AnyChannel as the binding of a single "unnamed_out" and a group of "unnamed_in" (please note that CSP requires a synchronous rendezvous of sender and receiver, which can be implemented thru a special dispatcher). The case is similar in normal event dispatching systems, where application code directly attaches event receivers (slots) to event sources (signals), not thru name-matching in a name space.
    Channel provides generic functions to set up and break binding among any pair of named_out and named_in:
    template <typename name> void bind(name *named_out, name *named_in);
    template <typename name> void unbind(name *named_out, name *named_in);
    By means of these functions, we can set up any imaginable bindings (1-N, N-1, N-M) among named_out and named_in.
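    For illustration only (the concrete name instantiation is left as a template parameter, since construction of named_out/named_in objects is application specific), these functions compose into arbitrary binding shapes:

    // A minimal sketch: build a 1-N binding from one sender to two receivers,
    // then break part of it.
    template <typename name>
    void demo_bindings(name &out, name &in1, name &in2) {
        bind(&out, &in1);    // 1-1 binding
        bind(&out, &in2);    // now 1-2: one sender bound to two receivers
        unbind(&out, &in2);  // remove one binding; the binding to in1 remains
    }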
    Various message passing systems use different idioms or patterns of binding sets. Channel provides the following sample tightly coupled idioms thru "unnamed" in/out objects (unnamed_in_out.hpp):
    Ports and Signals are simple template specializations of Named_Out and Named_In with a null_id (unnamed!) and proper dispatchers (a pull dispatcher for Port and a push dispatcher for Signal/Slot). They can be customized by template parameters just like normal channel entities, e.g. Port can be customized with different queue types and Signal/Slot can be customized with different dispatching algorithms (broadcast, round-robin, ...). Ports and Signals are well integrated with "named" entities:

4.5 Application architecture and integration

Channel is intentionally designed to be independent of threading models and connection strategies, so Channel can help implement applications with various designs of threading and connection:
Channel's independence of threading and connection also makes it easy to integrate Channel with existing server applications of various designs. Basically, we write wrapper classes to glue Channel to existing server mechanisms:

5. Classes

5.1 name space related

5.1.1 name spaces

    The major purpose of a name space is to set up the bindings among named_outs and named_ins based on id-matching and scoping rules. There are 3 kinds of name spaces:
      The name space API is fixed; it must support the following methods:
void bind_named_out(name *n);
void unbind_named_out(name *n);
void bind_named_in(name *n);
void unbind_named_in(name *n);

5.1.2 id_type and id_trait

As described above, various id_types (integers, strings, PODs, path names, tuples etc.) can be used for different applications and name spaces. To support name binding operations, an id_type should support the following operations:
To be able to use primitive data types as name space ids, containment and matching operations are defined inside id_trait classes.
For channels to be connected with remote name spaces, non-primitive id_types should define serialize() methods to allow ids to be marshaled and demarshaled using Boost.Serialization.
Id_trait classes also contain definitions of the following 8 system ids:
      static id_type channel_conn_msg;
      static id_type channel_disconn_msg;
      static id_type init_subscription_info_msg;
      static id_type connection_ready_msg;
      static id_type subscription_info_msg;
      static id_type unsubscription_info_msg;
      static id_type publication_info_msg;
      static id_type unpublication_info_msg;
These ids are used internally for channel name space management. Applications can also subscribe to these system ids to receive notifications about name space changes and add application logic to react properly; for example:

5.1.3 name and name binding callback

Class name is an internal class; application code will not use names directly, however. Applications instantiate named_out and named_in to set up message passing logic.
Class name contains the most important information in the name space: id, scope, membership and binding_set.
When Named_In and Named_Out are instantiated, a name binding callback can be specified to allow applications to be notified when peers bind to the name. Its signature is:
void binding_callback(name *n, typename name::binding_event e);
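As a rough sketch (the callback body is application specific and the concrete name type is left as a template parameter):

    // React when a peer binds to or unbinds from this name; e identifies the event.
    template <typename name>
    void on_binding(name *n, typename name::binding_event e) {
        // e.g. start publishing once a subscriber appears, or clean up on unbind
    }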

5.1.4 named_out and named_in; publisher and subscriber

Classes named_out and named_in are where name space and dispatcher meet. In fact, they inherit from both the name class and the dispatcher.
Class named_out_bundle and named_in_bundle are helper classes to conveniently use a group of name bindings.
On top of named_out_bundle and named_in_bundle, class publisher and subscriber provide direct support for publish/subscribe model.

5.1.5 unnamed in/out: port and signal/slot

Class port provides direct support for a localized, tightly coupled message passing model. Port inherits the pull dispatcher's sender, which inherits the queue class. So a port can be used directly as a message queue: applications can put messages into it and get messages from it. However, ports are mostly used with choice/join arbiters.
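As a rough sketch (the concrete port instantiation and the message type are left as template parameters; in real code, choice/join arbiters would normally drive the get() side):

    // Since port inherits the pull dispatcher's sender, which inherits a queue,
    // it can be used directly with put()/get() as a message queue.
    template <typename port_type, typename msg_type>
    void pass_one(port_type &p, msg_type m) {
        p.put(m);            // producer side: enqueue a message
        msg_type received;
        p.get(received);     // consumer side: dequeue it
    }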
Classes signal/slot support a localized, "unnamed" event dispatching model.

5.1.6 binder, filter and translator

Filters and translators are defined to control name space changes during name space connection and binding. Binders contain both filters and translators and are specified in channel connection function calls. The APIs and dummy operations of binders, filters and translators are defined in the framework's headers.

5.2 dispatching related

5.2.1 dispatchers

As we discussed above, dispatchers have 2 parts: the sending and receiving algorithms. Dispatchers' APIs are not fixed; they depend on whether the dispatcher uses a push or pull model and whether it is synchronous or asynchronous. The following sample dispatchers are provided:
Broadcast dispatchers: senders/named_outs broadcast messages/events to all bound receivers/named_ins. This is the most common event dispatching semantics.
Round-robin dispatchers: senders/named_outs send messages/events to bound receivers/named_ins in a round-robin manner. Simple server load balancing can be achieved thru this.
Always-latest dispatcher: senders/named_outs always send messages/events to the latest bound receiver/named_in. This dispatcher simulates Plan9's union directory (though most of the semantics is achieved thru name space binding/connection). Suppose we use an id (such as "/dev/printer") to represent a printer resource. To print something, we send a message to that id. On another machine, another printer is bound to the same id in its local channel. To be able to use the 2nd printer, we can connect or mount the remote channel to the local channel. Then, if the always_latest_dispatcher is used, all following printouts (sent to /dev/printer) will come from the remote printer. The local printer will get print messages again after the channels disconnect.
Pull dispatchers: messages/events are buffered inside the channel at the Named_Outs; high level messaging coordination patterns - "arbiters" - are defined at the Named_Ins to decide when and how messages are pulled from the Named_Outs and consumed by receiving threads or callbacks.
In the synchronous pull dispatcher, both senders and receivers are active threads. Messages are buffered inside the channel at the sender/named_out side and the sending thread returns right away. Receiving threads block waiting for messages at synchronous arbiters; they unblock and process messages when messages are available at the named_outs and their associated arbiters fire.
In the asynchronous pull dispatcher, callbacks are registered with asynchronous arbiters. Messages are buffered inside the channel at the sender/named_out side and the sending thread notifies receivers before returning. Depending on the arriving messages, asynchronous arbiters decide which callbacks fire and schedule them to execute in an executor. Join arbiters guarantee that related messages are consumed atomically.

5.2.2 messages

Application message/event data can be any data type: primitives, structs and classes. For remote message passing, proper serialization functions must be defined using Boost.Serialization.
Please refer to the tutorials for sample message definitions.
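As a rough illustration (the struct and its fields are made up, not taken from the tutorials), a message type suitable for remote passing only needs the intrusive serialize() member that Boost.Serialization expects:

    #include <string>
    // Hypothetical chat message; serialize() lets Boost.Serialization
    // marshal/demarshal it when it crosses a channel connection.
    struct chat_msg {
        std::string sender;
        std::string text;
        template <typename Archive>
        void serialize(Archive &ar, const unsigned int /*version*/) {
            ar & sender;
            ar & text;
        }
    };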

5.2.3 queues

Queues are used for message buffering inside the channel. One of the pull dispatcher's template parameters is the queue type. Various applications can specify and use different queue types based on their requirements and the queues' capabilities. Queues must support the following common interface:
      void put(elem_type & e);
      void get(elem_type & e);
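As a rough sketch of what such a queue could look like (this is not one of the shipped queues; it uses std::mutex/std::condition_variable for brevity instead of the library's synchronization policy classes):

    #include <deque>
    #include <mutex>
    #include <condition_variable>

    // Minimal unbounded blocking queue satisfying the put()/get() interface.
    template <typename elem_type>
    class simple_queue {
    public:
        void put(elem_type &e) {
            {
                std::lock_guard<std::mutex> lock(mtx_);
                buf_.push_back(e);
            }
            cond_.notify_one();            // wake one waiting receiver
        }
        void get(elem_type &e) {
            std::unique_lock<std::mutex> lock(mtx_);
            cond_.wait(lock, [this] { return !buf_.empty(); });
            e = buf_.front();
            buf_.pop_front();
        }
    private:
        std::deque<elem_type>   buf_;
        std::mutex              mtx_;
        std::condition_variable cond_;
    };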

The following sample queue implementations are or will be provided:

5.2.4 executors

Executors allow us to avoid explicitly spawning threads for asynchronous operations, thus avoiding thread life cycle overhead and resource consumption. Executors should support the following common interface, which allows applications to register asynchronous operations to be executed later and to cancel such registrations:
      template <typename task_type>
      async_task_base * execute(task_type task);
      bool cancel(async_task_base *task);
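As a deliberately trivial sketch of this interface (assuming task_type is callable with no arguments and that returning NULL for a task that has already run is acceptable; the shipped executors instead queue tasks for worker threads):

    class async_task_base;   // the framework's task handle type (declared in the Channel headers)

    // "Inline" executor: runs the task immediately in the caller's thread,
    // so there is never anything pending to cancel.
    class inline_executor {
    public:
        template <typename task_type>
        async_task_base * execute(task_type task) {
            task();          // run the asynchronous operation right away
            return NULL;     // nothing was queued
        }
        bool cancel(async_task_base *) { return false; }  // nothing to cancel
    };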

The following sample executors are provided:
There are two places to plug in executors in the framework:
Specifying an executor when a channel is created. By default, all asynchronous operations (event/message callbacks, name binding callbacks, ...) will be scheduled and executed in this executor.
Specifying an executor when named_in/named_out entities are created, which overrides the channel-wide executor for their callbacks.
For example, some applications may want to give different priorities to handling different event ids (or message types). We can create several executors with their threads running at different scheduling priorities, and specify the proper executors when the named_ins and named_outs are created.

5.3 connection related

5.3.1  global functions for connecting channels

There are 3 overloaded global functions for connecting channels:
template <typename channel>
typename channel::connection* connect(channel &peer1, channel &peer2,
            typename channel::binder_type *binder1 = NULL,
            typename channel::binder_type *binder2 = NULL);
Connecting 2 local channels so that peers at both channels can communicate with each other transparently. binder1 (containing a filter and a translator) defines how channel peer1's name space will be changed.
template <typename channel1, typename channel2>
typename channel1::connection* connect(channel1 &peer1, channel2 &peer2,
            typename channel1::binder_type *binder1 = NULL,
            typename channel2::binder_type *binder2 = NULL);
Connecting 2 local channels which may be of different channel types.
template <typename channel, typename stream_t>
connection* connect(channel &peer,
                      stream_t * stream,
                      bool active,
                      typename channel::binder_type *binder = NULL);

Normally a connection to a remote channel is represented as a "stream" object (a tcp/ip socket connection or shared memory connection). This connect() function is used to connect a local channel to a remote channel represented by the stream.
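As a minimal usage sketch (assuming chan_a and chan_b are two already constructed local channels of the same type; construction details are omitted and the function name is made up):

    // Connect two local channels with no filters/translators, let bound peers
    // interact, then break the connection by deleting the connection object (5.3.2).
    template <typename channel_t>
    void link_and_unlink(channel_t &chan_a, channel_t &chan_b) {
        typename channel_t::connection *conn = connect(chan_a, chan_b);
        // ... named_outs/named_ins bound in either channel can now interact ...
        delete conn;   // disconnect; each name space reverts to its local names
    }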

5.3.2 connection

Class connection represents the connection between 2 channels. Deleting a connection object will break the connection between the 2 channels, and deleting either of the member channels will result in the connection object being deleted.

5.3.3 peer and interface

Class peer defines the common base class of connection proxies such as interfaces and streams. Normally application code will not need class peer, unless creating a new channel connection mechanism such as SOAP based streams.
Class interface is the proxy between its owner channel and a peer channel. It contains all the logic for how the remote name space will be "mounted" into the local name space, how local name space changes will propagate to the remote name space, and vice versa. It is here that filters filter message ids and translators translate incoming and outgoing messages.

5.3.4 streams

Streams are proxies for remote channels and wrap transport mechanisms. The following streams are and will be provided:
      template <typename sock_conn_handler>
      void async_accept(int port, sock_conn_handler hndl) ;
      template <typename sock_conn_handler>
      void sync_connect(std::string host, std::string port, sock_conn_handler hndl) ;
      template <typename sock_conn_handler>
      void async_connect(std::string host, std::string port, sock_conn_handler hndl) ;

5.3.5 marshaling registry

5.4 platform abstraction policy and synchronization policy

5.4.1 platform abstraction

Platform independence is one key factor in Channel's portability. Channel's internal implementation depends on some system facilities, such as mutexes, conditions, timers and logging. Various platforms have different levels of support and different APIs for these system facilities. Some boost libraries already provide nice wrappers over system facilities, such as Boost.Thread and Boost.Date_Time; however, for some system functions, such as logging, boost doesn't have an approved library yet. Class boost_platform is a platform policy class defined to support platform independence. All the system facilities Channel uses for its internal implementation are defined either as nested classes wrapped inside it or as its static methods. To port Channel to a different software/hardware platform, one major task is to reimplement the platform policy class using native functions (another is coping with compiler differences). Take logging for example: if in the future we have a portable boost library for it, we could redefine the boost_platform class to interface with it. Otherwise, for a Windows specific application, we can implement the platform class logging API using the Windows event log facility; for a linux based application, we can use syslog.

5.4.2 synchronization policy

Modeled after ACE's synchronization wrapper facades (ACE_Thread_Mutex, ACE_Null_Mutex, ACE_Null_Condition, ...) and the Null Object pattern, two "no-op" classes, null_mutex and null_condition, are defined. They follow the same interface as their counterparts in Boost.Thread and implement the methods as "no-op" inline functions, which can be optimized away by compilers. Also modeled after ACE's Synch_Strategy classes (MT_SYNCH, NULL_SYNCH) and the Strategized Locking pattern, two synchronization policy classes are defined: mt_synch and null_synch. mt_synch is for multithreaded applications and contains Boost.Thread's mutex/condition classes as nested types. null_synch is for single-threaded applications; its nested types are the "null" types mentioned above. The synchronization policy class is one of channel's template parameters: we can use mt_synch for channels in multithreaded applications, or null_synch for single-threaded applications (such as event dispatching) without incurring locking overhead. This usage differs from the platform independence mentioned above; it is driven by application requirements and efficiency.


6. Class Concepts and How to extend Channel framework

One essential task of Generic Programming is to find the set of requirements for each class/type so that the template framework can compile and operate properly. These requirements are called "concepts" and include the following:
To extend the Channel framework, new classes/types must satisfy the requirements of their "concept" so that the code can compile and run.
In the following discussions, we distinguish 2 kinds of requirements: primary requirements imposed by the framework itself, and secondary requirements imposed by the specific implementations used.

6.1 id_type and id_trait

  1. Primary requirements
For each id_type, a partially specialized template class id_trait should be defined with the following definitions:
      static id_type channel_conn_msg;
      static id_type channel_disconn_msg;
      static id_type init_subscription_info_msg;
      static id_type connection_ready_msg;
      static id_type subscription_info_msg;
      static id_type unsubscription_info_msg;
      static id_type publication_info_msg;
      static id_type unpublication_info_msg;
  2. Secondary requirements
Depending on the implementation used, there are the following secondary requirements:
Since the current implementation uses std::map to implement the linear name space, a user defined id_type must define the following methods to satisfy the requirements of std::map:
bool operator< (const struct_id &id) const
bool operator== (const struct_id &id) const
bool operator!= (const struct_id &id) const
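For example, a POD struct id meeting these requirements could look like the following sketch (the fields are illustrative only):

    // Hypothetical POD id for a linear name space backed by std::map.
    struct struct_id {
        int family;
        int type;
        bool operator<  (const struct_id &id) const {
            return family < id.family || (family == id.family && type < id.type);
        }
        bool operator== (const struct_id &id) const {
            return family == id.family && type == id.type;
        }
        bool operator!= (const struct_id &id) const {
            return !(*this == id);
        }
    };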
The hierarchical name space is implemented using a trie data structure; to support trie-related operations, id_trait should add the following definitions:
      static token_type root_token;     //just a name for root trie node, not in name_space
      static token_type wildcard_token;
      static bool id1contains2(id_type id1, id_type id2)

Here is a detailed description of how to add id_type and id_trait for associative name_space based on Linda-style associative lookup.

6.2 name space

  1. Primary requirements
id_type;
id_trait;
synch_policy;
executor;
platform;
name;
void bind_named_out(name *n)
void unbind_named_out(name *n)
void bind_named_in(name *n)
void unbind_named_in(name *n)
  2. Secondary requirements
name space query related:
      template <typename Predicate>
      void bound_ids_for_in(Predicate p, std::vector<id_type> &ids)
      template <typename Predicate>
      void bound_ids_for_out(Predicate p, std::vector<id_type> &ids)
executor_type * get_exec(void)

Please refer to linear_name_space.hpp and hierarchical_name_space.hpp for detailed code.
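To make the shape of the interface concrete, here is a rough skeleton covering only the primary requirements (the name_type parameter stands in for the framework's name class instantiated for this id_type, and the binding-set resolution that real name spaces perform is elided):

    // Skeleton of a custom name space; see the shipped headers for the real logic.
    template <typename idtype, typename idtrait_type, typename executor_type,
              typename synchpolicy, typename platform_type, typename name_type>
    class sample_name_space {
    public:
        typedef idtype        id_type;
        typedef idtrait_type  id_trait;
        typedef synchpolicy   synch_policy;
        typedef executor_type executor;
        typedef platform_type platform;
        typedef name_type     name;

        void bind_named_out(name *n)   { /* record n; match it against bound named_ins  */ }
        void unbind_named_out(name *n) { /* remove n; update the affected binding sets  */ }
        void bind_named_in(name *n)    { /* record n; match it against bound named_outs */ }
        void unbind_named_in(name *n)  { /* remove n; update the affected binding sets  */ }
    };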

6.3 dispatcher

Dispatchers are used as policy classes for the channel template. As discussed above, each dispatcher contains 2 algorithms: sending and receiving.
Dispatchers' APIs are not fixed; they depend on whether the dispatcher uses a push or pull model and whether it is synchronous or asynchronous. The APIs of the provided dispatchers follow the general convention of providing various send() and recv() methods.
  1. Primary requirements
Each dispatcher class should define 2 nested types: a sender class and a receiver class.
These nested types are the parent classes of named_out and named_in, respectively (the sender for named_out, the receiver for named_in).
Inside the dispatcher's nested types (the sender and receiver classes), the dispatching algorithms retrieve the name binding set from the associated "name" object.
  2. Secondary requirements
For dispatchers used in channel types with possible remote connections, the nested receiver classes expect the callback function's signature to be:
    void callback(id_type id, boost::shared_ptr<void> msg);
This requirement comes from the implementation of the "interface" class.
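For instance, a receiver callback matching this signature could look like the following sketch (the id type is assumed to be std::string and the message struct is made up):

    #include <iostream>
    #include <string>
    #include <boost/shared_ptr.hpp>

    struct chat_msg { std::string text; };   // hypothetical application message

    // Callback with the signature expected when the channel may have remote peers;
    // the receiver recovers the concrete message type from the void pointer.
    void on_msg(std::string id, boost::shared_ptr<void> msg) {
        boost::shared_ptr<chat_msg> m = boost::static_pointer_cast<chat_msg>(msg);
        std::cout << id << ": " << m->text << std::endl;
    }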

Here is a detailed description of a sample pull dispatcher.

6.4 executor

6.5 queue

6.6 streams/connectors (or integrate into new architecture)



7. Compare Channel to others (plan9, STL)

7.1 Compare Unix/Plan9/Inferno file-system name space and Channel's name space

In Unix and other OSes, the file system provides the machine-wide hierarchical name space for most system resources. Applications use resources mostly thru the standard file system calls: open/close/read/write. By mounting remote file systems, remote name spaces (and resources) can be imported and accessed transparently by local applications.
Plan9/Inferno push this idea further with 3 ideas: 1. all resources are represented as files; 2. each process has its own private name space which can be customized according to the application's requirements; 3. a uniform protocol, 9P, is used for all remote message passing. [1][2]

Channel provides a process local name space for asynchronous message passing and event dispatching. Compared to unix/plan9 name space:
"... is built upon the idea that the respondent to (or referent of) a name
exists no more persistently than a caller of the name. In other words, the notions of
calling and responding are more basic than the notions of caller and respondent; every
activity contains calls and responses, but to have a persistent respondent to x – one that
responds similarly to every call on x – is a design choice that may be sensible but is
not forced."

7.2 Compare STL and Channel

Some mappings between STL and Channel's concepts:
Dispatchers are defined over the name bindings of senders and receivers, which are provided by the name space; similarly, STL algorithms are defined over the iterator range [begin_iterator, end_iterator), which is provided by the containers.


8. Reference Links

[1] Preface to the Second (1995) Edition (Doug McIlroy)
[2] The Use of Name Spaces in Plan 9 (Rob Pike,...)
[3] What's in a name? (Robin Milner)
[4] Turing, Computing and Communication (Robin Milner)
[5] Comega
[6] CCR
[7] Java's executor
[8] http://channel.sourceforge.net