NDN experimental testbed orchestrator

LURCH

Input files

To describe an NDN experiment for lurch, users provide an experiment folder containing the following three files:

  1. topo.brite In this file users specify the virtual links among physical nodes, imposing bidirectional rate limiting (implemented through the Linux command tc; a sketch follows this list);
  2. routing.dist This file contains the list of FIB entries for every node in the virtual topology;
  3. workload.dist It describes the input for the commands launched by client nodes. For the moment, lurch launches ndncatchunks3 commands requesting contents under a specified catalog.
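
As a rough illustration of that rate limiting, the sketch below applies a token-bucket limit with tc from Python. The interface name and the tbf parameters (burst, latency) are assumptions made for the example, not necessarily what lurch generates:

import subprocess

def limit_egress(iface, rate_kbps):
    # Token-bucket rate limit on one interface's egress; applying it on
    # both endpoints of a link makes the limit bidirectional.
    subprocess.check_call([
        "tc", "qdisc", "add", "dev", iface, "root", "tbf",
        "rate", "%dkbit" % rate_kbps,
        "burst", "32kbit", "latency", "400ms",
    ])

# Hypothetical usage: cap eth0 at 5 Mbps (5000 kbps, as in topo.brite).
limit_egress("eth0", 5000)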

The list of servers (IP addresses or resolvable names) involved in the test should be specified in the hosts file located in lurch's src folder (lurch/src/). Using the experiment files contained in the specified folder and the hosts file, lurch builds the configuration file (lurch/src/lurch.conf) used to describe the experiment.

Finally, a list of global parameters is passed to lurch through the Myglobals.py file in the lurch/src/lurch subdirectory.

The following sections explain each input file's format in detail.

topo.brite

The topology file's syntax is inspired by the BRITE topology format. The file is divided into two parts, Nodes and Edges (bidirectional links).

A node is defined by the following seven fields.

  1. nodeId Node id in the virtual topology;
  2. - Not used;
  3. cacheProbability (only with customized NDN) Indicates the probability of caching a chunk;
  4. cacheSize Indicates the cache size expressed in chunks;
  5. cacheReplace (only with customized NDN) Indicates the cache replacement policy used by the node. The only supported replacement policy is "l" (LRU);
  6. namespace Indicates the namespace served by a node running the repo application (ignored unless the node type is AS_REPO or AS_CLI_REPO);
  7. node type Can be AS_NODE (normal NDN node), AS_REPO (repository running a dumb Interest replayer), AS_CLIENT (client running ndncatchunks3), or AS_CLI_REPO (both client and repo at the same time).

Here is the node row syntax and an example: node 0 has an LRU cache ("l", the only replacement policy supported by customized NDN) with caching probability 100, capable of storing 1 chunk. The node runs the repo application, serving whatever Interest it receives under the name "ndn:/".


nodeId  -  cacheProbability   cacheSize   cacheReplace  namespace  nodetype
   0   251        100             1             l            /      AS_REPO
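
For concreteness, a node row like the one above can be parsed into a small record as in this sketch; the Node class and parse_node_row helper are illustrative, not part of lurch:

from dataclasses import dataclass

@dataclass
class Node:
    node_id: int
    cache_probability: float  # field 3
    cache_size: int           # field 4, in chunks
    cache_replace: str        # field 5, "l" for LRU
    namespace: str            # field 6
    node_type: str            # field 7

def parse_node_row(row):
    # Example row: "0 251 100 1 l / AS_REPO" (field 2 is unused).
    f = row.split()
    return Node(int(f[0]), float(f[2]), int(f[3]), f[4], f[5], f[6])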

An edge is defined by the following ten fields.

  1. edgeId Edge id in the virtual topology;
  2. nodeId Id of one endpoint of the link in the virtual topology;
  3. nodeId Id of the other endpoint of the link in the virtual topology;
  4. - Not used;
  5. - Not used;
  6. linkCapacity Link capacity expressed in kbps;
  7. - Not used;
  8. - Not used;
  9. - Not used;
  10. - Not used;

Here is the edge row syntax and an example of a bidirectional 5 Mbps link between nodes 0 and 4:

edgeId   nodeId  nodeId     -         -     linkCapacity   -    -    -     -
   0       0       4     100000.0  0.000001    5000.0      2    0   E_AS   U
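
Edge rows can be read the same way; only fields 1-3 and 6 carry information (again a sketch, not lurch code):

def parse_edge_row(row):
    # Example row: "0 0 4 100000.0 0.000001 5000.0 2 0 E_AS U".
    f = row.split()
    edge_id, node_a, node_b = int(f[0]), int(f[1]), int(f[2])
    capacity_kbps = float(f[5])  # fields 4-5 and 7-10 are unused
    return edge_id, node_a, node_b, capacity_kbps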

routing.dist

The routing file specifies every entry that needs to be inserted in the FIB tables of the nodes involved in the experiment.

Here is the route row syntax and an example indicating that node 4 will have a FIB entry stating that every Interest with prefix "ndn:/lurch/code" will be forwarded to node 0:


from   to    component1   component2  ...
 4     0     1="lurch"    2="code"     
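
The numbered components of a route row concatenate into the announced prefix, as the following sketch shows (parse_route_row is a hypothetical helper, not a lurch function):

def parse_route_row(row):
    # Example row: '4 0 1="lurch" 2="code"' -> (4, 0, "ndn:/lurch/code").
    fields = row.split()
    from_node, to_node = int(fields[0]), int(fields[1])
    components = [f.split("=", 1)[1].strip('"') for f in fields[2:]]
    return from_node, to_node, "ndn:/" + "/".join(components)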

workload.dist

The workload file specifies the workload of the application launched by the client (currently only file download through ndncatchunks3 is supported). There are seven fields in the workload line:

  1. nodeId Node id on which the application will be launched;
  2. clientId Client id identifying a specific application instance inside a node (a single node may have multiple clients);
  3. arrivalProcess The arrival process of file downloads. It can be CBR or Poisson.
    The syntax is: arrivalProcess_rate (where the rate is the number of downloads per second)
    Example: Poisson_0.8
    Notice that a rate of 0 means that the client will request a single file download;
  4. popularity It defines the popularity of the catalog requested by a client. It can be zipf, rzipf, weibull, trace, or none:
    • zipf and rzipf: refer to the Zipf distribution. zipf is a discretized version of the Pareto distribution while rzipf is the exact Zipf distribution (PDF: f(x) = c / x^alpha).
      The syntax is: distrib_alpha_catalogSize
      Example: rzipf_0.8_10000
    • weibull: refers to the discretized and truncated version of the Weibull distribution
      (PDF: f(x) = alpha * beta^-alpha * x^(alpha-1) * e^(-(x/beta)^alpha) ).
      The syntax is: distrib_beta_alpha_catalogSize
      Example: weibull_1_1_100000
    • trace: traffic characterized from Orange traces.
      The syntax is: trace
      Example: trace
    • none: allows users to explicitly indicate the rank of the file that will be downloaded.
      The syntax is: none_rank
      Example: none_10
  5. catalogPrefix Indicates the prefix of the catalog requested by the client;
  6. startTime Relative time of the experiment (expressed in seconds) at which the client will start launching file downloads;
  7. duration Client duration (expressed in seconds), starting from the start time.

Here is the syntax plus an example of client 0 at node 7, which launches a file download every second (CBR), requesting file names of the form ndn:/lurch/OBJNUM0001, ndn:/lurch/OBJNUM0002, etc.


nodeId  clientId  arrivalProcess    popularity   catalogPrefix startTime duration 
  7        0          CBR_1      rzipf_0.8_10000   ndn:/lurch      0       3600  
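
To make arrivalProcess and popularity concrete, here is a hedged sketch of the event generation they describe: CBR spaces requests exactly 1/rate seconds apart, Poisson draws exponential inter-arrival times, and rzipf picks rank r with probability proportional to 1/r^alpha. The helper names and the OBJNUM file naming are assumptions based on the example above, not lurch's actual generator:

import random

def inter_arrival(process, rate):
    # CBR: fixed spacing; Poisson: exponentially distributed gaps.
    return 1.0 / rate if process == "CBR" else random.expovariate(rate)

def rzipf_rank(alpha, catalog_size):
    # Draw a rank in 1..catalog_size with P(r) proportional to 1/r^alpha.
    ranks = range(1, catalog_size + 1)
    weights = [1.0 / r ** alpha for r in ranks]
    return random.choices(ranks, weights=weights)[0]

# First seconds of the example client ("7 0 CBR_1 rzipf_0.8_10000
# ndn:/lurch 0 3600"), truncated here for brevity.
t = 0.0
while t < 5.0:
    print("t=%.1f request ndn:/lurch/OBJNUM%04d" % (t, rzipf_rank(0.8, 10000)))
    t += inter_arrival("CBR", 1.0)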

hosts

The hosts file is simply a list of IP addresses or resolvable names of the servers that are part of the experiment. Notice that the number of servers involved in the experiment should be at least equal to the number of nodes defined in the topo.brite file. This is an example of a hosts file for an experiment on grid5000:


taurus-12.lyon.grid5000.fr
taurus-14.lyon.grid5000.fr
taurus-16.lyon.grid5000.fr
taurus-13.lyon.grid5000.fr
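
A minimal sketch of how node ids could be mapped onto these servers, assuming a positional one-to-one assignment (lurch's actual Node2Host logic may differ); the size check mirrors the constraint above:

def map_nodes_to_hosts(hosts_file, num_nodes):
    # One server per virtual node, assigned positionally.
    with open(hosts_file) as f:
        hosts = [line.strip() for line in f if line.strip()]
    if len(hosts) < num_nodes:
        raise ValueError("need at least one server per topology node")
    return {node_id: hosts[node_id] for node_id in range(num_nodes)}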

Myglobals.py

The Myglobals.py file contains some variable initializations and some lurch parameters. It can be found under the lurch/src/lurch folder. Here is a list of the parameters:

  1. remote_server: IP address of the remote server where experiments' results will be saved;
  2. remote_server_user: username on the remote server where experiments' results will be saved;
  3. remote_server_folder: folder on the remote server where experiments' results will be saved;
  4. test_path: path local to the lurch node pointing to the folder containing the input files (topo.brite, workload.dist, routing.dist);
  5. conf_file: name of the configuration file (path+name);
  6. username: username used to connect to the servers involved in the experiment. The user must have read/write rights to the /var/tmp folder;
  7. ndn_dir: NDN installation directory on the remote machines;
  8. cache_dir: directory containing the configuration files for caches on the servers involved in the experiment;
  9. scripts_dir: temp folder where scripts used on remote servers are stored;
  10. log_dir: folder storing log files;
  11. files_dir: Not used;
  12. type_of_repo: Type of repository used. Right now, only "virtual" is supported (ndn-virtual-repo);
  13. test_duration: Length of the test, expressed in seconds;
  14. file_size: Size of the files downloaded by the file download application, expressed in number of chunks;
  15. chunk_size: Size of a single chunk expressed in Bytes;
  16. transport_prot: protocol used by two NDN nodes to communicate. Can be either tcp or udp;
  17. file_size_distribution: used to describe the file size. Only "constant" file size is supported;
  18. flow_control_gamma: (only with customized NDN) Window increasing factor of the AIMD cong. control at the receiver;
  19. flow_control_beta: (only with customized NDN) Window decreasing factor of the AIMD cong. control at the receiver;
  20. flow_control_p_min: (only with customized NDN) minimum decreasing probability of the RAQM parameter;
  21. flow_control_p_max: (only with customized NDN) difference between the max and min decreasing probability of the RAQM parameter;
  22. flow_control_est_len: (only with customized NDN) number of samples used by the RAQM cong. control protocol to estimate RTT on a path;
  23. PIT_lifetime: Interest lifetime in the PIT;
  24. flow_control_timeout: (only with customized NDN) Interest lifetime for the application (Interest re-expression timeout);
  25. fwd_alhpa_avg_pi: (only with customized NDN) alpha parameter for the forwarding algorithm;
  26. nfd_lb_forwarding_debug_mode: (only with customized NDN) enable fwd debug mode;
  27. nfd_stats_interval: (only with customized NDN) interval of the fwd statistics (if fwd enabled), expressed in microseconds;

The following is an example of the Myglobals.py file.

#global variables for the lurch project

hosts = {}
Node2Host = []
Host2Node = {}

remote_server_user="mgallo"
remote_server="lyon.grid5000.fr"
remote_server_folder="~/tests"
test_path = ""
conf_file = "lurch.conf"
hosts_file = "hosts"
username = "root"
ndn_dir = "~/NDN-0.2.0"
cache_dir = "./cache/"
scripts_dir = "./scripts/"
log_dir = "./log/"
files_dir = "./files/"
type_of_repo = "virtual"
test_start_time=0.0
test_duration = 3600
file_size = 2000
chunk_size = 4096 
transport_prot = "tcp" 
file_size_distribution = "constant"
file_size_parameter1 = 0 
file_size_parameter2 = 0 
flow_control_gamma = 1
flow_control_beta = 0.9
flow_control_p_min = 0.00001
flow_control_p_max = 0.01
flow_control_est_len = 30
#TODO IMPORTANT: in NDN this PIT_lifetime parameter has no effect:
# whatever is configured here is ignored and the Interest lifetime is set
# by the implementation. This may be fixed in the future. The same holds
# for flow_control_timeout.
PIT_lifetime = 950
flow_control_timeout = 1000
fwd_alhpa_avg_pi = 0.9
nfd_stats_interval = 60000000
nfd_lb_forwarding_debug_mode=1
localclient=1