THE UNIVERSITY OF NEW SOUTH WALES 
 
 
 
SCHOOL OF ELECTRICAL ENGINEERING AND 
TELECOMMUNICATIONS 
 
 
 
 
Implementation Of A Bandwidth Broker In Java 
 
 
 
Authors :   
Khoi Ba PHAM (2246971)  
 Richmond NGUYEN (2247629) 
 
 
Bachelor Of Engineering – Telecommunications 
June 2003 
 
 
 
Supervisor : Sanjay Jha 
 
Assistant Supervisor : Shaleeza Sohail 
 
 
Acknowledgements 
 
 
Special thanks go to Shaleeza Sohail for her continued guidance and trust 
throughout the course of this thesis project. 
 
Thanks also go to the members of the Networks Teaching Lab at the 
University of New South Wales, in particular Albert Chung, for helping us get 
set up in the lab. 
 
 
 
Abstract 
 
 
The explosive growth of Internet usage has shown that the current 
infrastructure and routing mechanisms will shortly become unsuitable as the 
transmission of multimedia traffic increases. The current Internet does not 
distinguish between different types of traffic, and as congestion becomes 
inevitable, traffic that depends on quality of service parameters such as 
delay will not be viable. The solution lies in network environments which make 
a distinction between traffic types and their quality of service requirements.  
As quality of service aware network implementations become prevalent, the 
importance of an entity to manage and allocate resources becomes critical. 
This thesis project designs and implements such a management entity, called 
a Bandwidth Broker, for use in the quality of service aware 
DiffServ environment.   
 
 
Table Of Contents 
 
 
1. Introduction            6 
 
2. Differentiated Services (DiffServ)        8 
 
2.1  Background            8 
2.2  DiffServ Router         11 
2.3  Security Issues         12 
2.4  Service Level Agreements (SLA) And Service Level Specifications (SLS)   13 
2.5 Resource Allocation Request (RAR)      14 
 
3. Bandwidth Broker Architecture       15 
 
3.1 Data Interface          16 
3.2 Key Protocols          17  
3.2.1 Intra-Domain Protocol       17 
3.2.2 Inter-Domain Protocol       20 
3.3 User Interface          21 
3.4 Previous Implementations        21 
 
4. Implementation Details         22 
 
4.1 Operating Environment        22 
4.1.1 Linux DiffServ Support       22 
4.2 Java Programming Language       25 
4.3 MySQL Database Schema        28 
4.3.1 User Information        28  
4.3.2 BB Information        30  
4.3.3 Network Information       32 
 
5. Program Development          35 
 
5.1  BB Server          35 
5.2 COPS-PR Communication          43  
5.3 Linux Router Client        45  
5.4 Java Client          47  
5.5 Web Interface          49  
5.5.1 Internet Browser Client       50  
5.5.2 XML Processing        53  
 
6. Testing Procedure         57 
 
7. Future Development          60 
 
8. Conclusion           62 
 
9. References           63 
 
Appendix A – Source Code        65 
 
Appendix B – XML Schema / Examples               179 
 
Appendix C – Sample MySQL Database Data              184 
 
Implementation CD                  189 
 
 
1.  Introduction 
 
 
As multimedia traffic across the Internet becomes more prevalent, it is clear 
that such bandwidth-intensive applications have differing individual 
requirements. Multimedia video requires high throughput and a low error rate, 
whilst Internet voice telephony requires very low end-to-end delay but can 
withstand a higher error rate than video. It is these differing requirements that 
bring about the important notion of Quality of Service. Traditionally, Quality of 
Service was defined as a set of parameters used to describe network 
performance, such as delay, packet loss, jitter and error rate. 
 
 
Presently, the Internet as we know it does not make a distinction between 
different types of traffic and their requirements. Traffic is delivered on a best-
effort basis with no guarantees from end to end. Packets are 
forwarded by routers on a first in, first out (FIFO) principle. If a router’s buffer 
becomes full, packets are simply discarded. 
Clearly, this is no longer acceptable considering the types of multimedia being 
carried on the Internet presently, and in the future. It does not make logical 
sense that a data transfer should cause a voice transmission to be held up 
(causing jitter), when delay for a data transfer is not important. 
 
 
Quality of Service in this regard is not the only reason that traffic distinction is 
important. Network administrators would also like the capability to share 
bandwidth on any particular link with respect to different traffic classes. For 
example, a customer who pays an ISP a higher rate should be ‘guaranteed’ a 
higher percentage of bandwidth compared to another who pays less. This 
commercial aspect of Quality of Service is also very important as undoubtedly 
the Internet has become a very commercialised medium. So the option to 
offer different levels of service based on price will no doubt be a driving factor 
for companies involved with Internet infrastructure. 
 
 
During the 1990s the IETF proposed two main architectures to address this 
problem: Integrated Services (IntServ) and Differentiated Services (DiffServ). 
IntServ is the earlier proposal, whilst DiffServ is more recent and is where 
current research and implementation are directed. This thesis will concentrate 
on DiffServ, and the implementation will be based around the DiffServ 
framework. 
 
 
DiffServ allows traffic to be distinguished into different traffic classes. Each of 
these classes is then treated in a different manner at DiffServ-enabled 
routers. A detailed discussion of DiffServ is included in Section 2. This 
operation is based around Per-Hop Behaviour (PHB), whereby all packets 
belonging to a certain class are forwarded with a predefined policy at each 
hop. Many different PHBs can be defined to cater for different types of traffic 
which have individual quality of service requirements. In order for routers to 
know what type of service to grant to packets, some sort of configuration must 
be carried out. A logical entity called a bandwidth broker (BB) has been 
proposed which carries out configuration of routers, admission control and 
also the automated negotiation of contractual agreements between a service 
provider and a customer. 
 
 
A bandwidth broker is fully aware of all the resources within its domain, from 
routers to the capacity of links. It must also have access to routing tables to 
ensure packets are routed correctly and efficiently. Not only is the BB entity 
required to manage resources within its own domain; additionally, it must be 
able to communicate and negotiate with peer bandwidth brokers in different 
domains in order to set up inter-domain traffic flows. It is all these 
functionalities of a bandwidth broker which enable the DiffServ framework to 
be implemented. 
 
 
The aim of this thesis is to create a logical bandwidth broker entity using the 
Java programming language. This software implementation is expected to 
fulfil the general requirements of a bandwidth broker and provide a viable 
entity which can be effectively used within a DiffServ environment. 
 
 
2.  Differentiated Services (DiffServ) 
 
 
2.1  Background 
 
 
An architecture such as DiffServ has become necessary due to the explosion 
in the growth and use of the Internet. In particular, the growth in different 
applications has meant that the classless best-effort service of the current 
Internet would not be viable for much longer. Currently, the Internet supports 
services such as email and file transfer (ftp). With the improvements in 
bandwidth and the multimedia capabilities of computers, there is no doubt that 
many more services will be transmitted across the Internet in the future. A 
main goal of DiffServ is to accommodate a wide range of services and 
provisioning policies extending end-to-end or within a particular (set of) 
networks [3]. However, DiffServ does not define any particular service; it 
merely defines the behaviour a packet may receive at each hop.   
 
 
Figure 2.1 – A DiffServ Domain 
 
 
The above diagram shows a typical DiffServ domain and its main elements. It 
can be seen that there is a distinction between boundary nodes and interior 
nodes. The DiffServ boundary nodes interconnect the DiffServ domain to 
other DiffServ or non-DiffServ capable domains, whilst DiffServ interior nodes 
only connect to other DiffServ interior or boundary nodes within the same 
DiffServ domain [3]. The majority of DiffServ operations occur at the edge or 
boundary routers. There are two types of boundary nodes. Nodes that handle 
incoming traffic are called ingress nodes, whilst egress nodes handle outgoing 
traffic. A DiffServ boundary node can act as either ingress or egress node 
interchangeably.  
 
 
Request For Comments 2475 defines DiffServ as an architecture for 
implementing scalable service differentiation in the Internet [3]. This is 
achieved by aggregating traffic classification state, which is marked within IP 
packets using the DS Field as described in Request For Comments 2474 
[4]. All packets belonging to a particular traffic class are marked with the 
same value in the DS Field, called the DiffServ codepoint 
(DSCP). Each DSCP defines a particular per-hop behaviour that the packet 
will receive at a DiffServ router. This alleviates the need to treat each 
flow individually, freeing routers from having to store 
information about every single flow passing through them. All packets marked 
with the same pre-defined DSCP are routed accordingly. This ability to 
aggregate similar traffic flows allows DiffServ to operate efficiently, especially 
at the interior nodes.  
 
 
The current network-layer protocol across the Internet is IPv4, with the next 
version, IPv6, beginning to be deployed. As far as DiffServ is concerned, the 
two packet header fields most important to its operation are the Type Of 
Service (TOS) octet in IPv4 and the Traffic Class (TC) octet in IPv6. These 
two fields are relabelled as the Differentiated Services Field (DS Field) for use 
within DiffServ.  
 
 
 
Figure 2.2 – DiffServ DS Field 
 
 
The length of the DS field is 8 bits; however, only 6 bits are used for the DSCP 
to select the PHB a packet receives at each hop. The remaining 2 bits are 
unused and are ignored by DiffServ-compliant routers. This leaves 6 bits 
to use for individual codepoints, a total of 2^6 (64) different values. The DiffServ 
proposal states that DiffServ-compliant nodes MUST select PHBs by 
matching against the entire 6-bit DSCP [4]. Any PHB specification must 
contain a default value for the DSCP, so that any packet received at a 
node with an unknown DSCP can be re-marked with the default DSCP to prevent a 
malfunction. For DiffServ, the default DSCP value of 
‘000000’ is used. This sequence of bits represents the current best-effort Internet 
forwarding behaviour.  
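As a concrete illustration of this layout, the 6-bit DSCP can be extracted from (or written into) the DS octet with two bit operations. The helper below is a sketch for illustration only and is not part of the thesis implementation.

```java
// Illustrative helper for the DS field layout described above: the upper
// 6 bits of the octet hold the DSCP, the lower 2 bits are ignored.
public class DsField {
    public static final int DEFAULT_DSCP = 0b000000; // best-effort default

    // Extract the DSCP from a raw DS octet (the former IPv4 TOS byte).
    public static int dscp(int dsOctet) {
        return (dsOctet >> 2) & 0x3F;
    }

    // Re-mark: build a DS octet from a 6-bit DSCP, zeroing the unused bits.
    public static int mark(int dscp) {
        return (dscp & 0x3F) << 2;
    }

    public static void main(String[] args) {
        System.out.println(dscp(0b10111000)); // EF codepoint 101110 -> 46
        System.out.println(mark(0b101110));   // 184 (0xB8)
    }
}
```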
 
 
The IETF has defined two types of per-hop forwarding behaviour: Expedited 
Forwarding (EF) and Assured Forwarding (AF). RFC 2598 states that the EF 
PHB can be used to build a low loss, low latency, low jitter, assured 
bandwidth, end-to-end service through DiffServ domains [7]. The EF PHB has 
also been labelled Premium Service, as it is intended for bandwidth-intensive 
applications that require a guaranteed minimum bandwidth. Traffic marked 
with EF ‘should’ receive this minimum bandwidth regardless of any other 
traffic currently in transit at that node. In order to ensure that EF traffic does 
not completely block off lower-priority traffic, RFC 2598 also states that if the 
EF PHB is implemented by a mechanism that allows unlimited pre-emption of 
other traffic (e.g. a priority queue), the implementation must include some 
means to limit the damage EF traffic could inflict on other traffic (e.g. a token 
bucket rate limiter). Traffic which exceeds this maximum limit must be 
dropped. The configurable minimum and maximum rates for the EF PHB must be 
settable by network administrators. A DSCP value of ‘101110’ is 
recommended for use with the EF PHB. 
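The token bucket limiter suggested above can be sketched in a few lines of Java. The rate and depth parameters are illustrative assumptions, not values from the thesis, and a real limiter would sit in the router's forwarding path rather than in application code.

```java
// A minimal token-bucket rate limiter of the kind RFC 2598 suggests for
// capping EF traffic. Non-conforming EF packets must be dropped.
public class TokenBucket {
    private final long rateBytesPerSec; // configured maximum EF rate
    private final long depthBytes;      // burst allowance
    private double tokens;
    private long lastRefillNanos;

    public TokenBucket(long rateBytesPerSec, long depthBytes, long nowNanos) {
        this.rateBytesPerSec = rateBytesPerSec;
        this.depthBytes = depthBytes;
        this.tokens = depthBytes;       // start with a full bucket
        this.lastRefillNanos = nowNanos;
    }

    // Returns true if a packet of the given size conforms to the profile,
    // consuming tokens; returns false if the packet should be dropped.
    public boolean conforms(int packetBytes, long nowNanos) {
        double elapsedSec = (nowNanos - lastRefillNanos) / 1e9;
        tokens = Math.min(depthBytes, tokens + elapsedSec * rateBytesPerSec);
        lastRefillNanos = nowNanos;
        if (tokens >= packetBytes) {
            tokens -= packetBytes;
            return true;
        }
        return false;
    }
}
```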
 
 
The Assured Forwarding (AF) PHB consists of a group of four forwarding 
classes. Within the four classes, IP packets can be assigned one of three 
different levels of drop precedence [8]. This allows service providers to assign 
traffic to different classes depending on which service a customer is eligible to 
use; in this way, service providers can charge different monetary rates for each 
class. Each of the four classes is allocated a certain amount of 
resources, and packets are then queued based on their class. It is when the 
resources allocated to a class are saturated that the different drop 
precedence levels come into effect. The drop precedence level of a packet 
determines its priority within each AF class: a lower drop precedence level is 
considered higher priority, and the node will drop packets of higher drop 
precedence first.  
 
 
Evidently, the AF PHB requires more than one DSCP. Four classes, each with 
three levels of drop precedence, equates to twelve different values for the 
DSCP. Below is a table of the twelve DSCP values :  
 
Drop Precedence    AF1      AF2      AF3      AF4 
Low                001010   010010   011010   100010 
Medium             001100   010100   011100   100100 
High               001110   010110   011110   100110 
 
The combination of the EF and AF PHBs, along with current best-effort 
Internet forwarding, allows a variety of differing types of traffic to be transmitted 
across DiffServ domains whilst remaining sensitive to quality of service 
requirements.  
 
 
2.2  DiffServ Router 
 
 
The two fundamental operations of a DiffServ router are traffic classification and 
traffic conditioning. Packets arriving at a router need to be classified to 
determine what forwarding behaviour they are entitled to, based on the DSCP. Traffic 
conditioning performs metering, shaping, policing and/or remarking to ensure 
that the traffic entering the DiffServ domain conforms to a predefined service 
provisioning policy [3].  
 
 
 
Figure 2.3 – DiffServ Router Components 
 
 
Classifier : Packet classifiers select packets based on fields within a packet’s 
header. DiffServ routers look at the DSCP to determine what kind of 
forwarding behaviour the packet will receive. Additionally, the router may need 
to determine the source, destination and other parameters within the packet 
header to be able to route the packet correctly. With all this information, the 
traffic classifier can steer the packet to the correct queue and buffer to await 
forwarding.  
 
 
Meter : A meter is used to check whether an incoming flow of traffic conforms to 
the negotiated traffic profile. Packets that do not conform are either remarked 
with a different DSCP, passed to the shaper or simply dropped. The meter can 
also be used for accounting management of the network [2].   
 
 
Marker : Packet markers set the DS field of a packet to a particular DSCP, 
adding the marked packet to an existing behaviour aggregate. The marker 
can be configured to mark all packets sent to it with a particular DSCP, or to 
re-mark packets to a certain DSCP based on the state of the meter. 
 
 
Shaper/Dropper : Shapers delay some or all of the packets in a traffic flow in 
order to ensure that the flow conforms to its negotiated traffic profile. In 
reality, shapers have limited buffer space, and packets are discarded when 
that space is insufficient. A dropper drops packets that are out of profile to 
bring the flow back into profile; it is sometimes implemented as a shaper with 
no buffer space.  
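The behaviour-aggregate classification step described above amounts to a lookup from DSCP to an outbound queue. The sketch below illustrates this; the queue names are hypothetical and this is not the thesis code.

```java
import java.util.HashMap;
import java.util.Map;

// A sketch of a behaviour-aggregate classifier: packets are steered to a
// queue purely by their DSCP. Queue names here are invented examples.
public class BaClassifier {
    private final Map<Integer, String> dscpToQueue = new HashMap<>();
    private final String defaultQueue;

    public BaClassifier(String defaultQueue) {
        this.defaultQueue = defaultQueue;
    }

    // Install a mapping from a codepoint to a forwarding queue.
    public void install(int dscp, String queue) {
        dscpToQueue.put(dscp, queue);
    }

    // Unknown codepoints fall through to the best-effort queue, matching
    // the default-PHB rule described in Section 2.1.
    public String classify(int dscp) {
        return dscpToQueue.getOrDefault(dscp, defaultQueue);
    }
}
```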
 
 
2.3  Security Issues 
 
 
A possible security issue with this DSCP mapping is that services can be 
stolen by packets being marked with a DSCP that they are not 
entitled to receive. Therefore admission control and authentication are needed 
to ensure packets are only marked with a DSCP that they are entitled to 
according to some pre-defined service agreement. Taken to the extreme, 
this unauthorised ‘theft of service’ can become a denial of service attack, 
when the modified or illegally injected packets flood the resources, leaving 
insufficient resources to process legitimate packets [4]. RFC 2474 states 
that the main defence against this sort of security breach ‘consists of the 
combination of traffic conditioning at DS domain boundaries with security and 
integrity of the network infrastructure within a DS domain’. This implies that 
DiffServ boundary nodes must check whether packets are correctly marked and 
whether they are from an authenticated peer DiffServ domain. They can also re-mark 
packets to ensure safe operation. The result of this edge policing is that the 
interior nodes do not need to do any such processing; they merely forward the 
packets according to the DSCP. This further highlights that interior nodes 
within a DiffServ domain can operate at high speed.  
 
 
Another issue to be aware of is that the majority of routers in the current 
Internet do not process the TOS field in IPv4 packets, simply ignoring this 
field. Therefore, in order for a DiffServ environment to be successfully 
implemented, routers need to be configured to process this field.  
 
 
2.4  Service Level Agreements (SLA) And Service Level Specifications (SLS)  
 
 
In order to determine what kind of per-hop behaviour a packet shall receive, a 
form of service negotiation needs to be carried out prior to packets being 
transmitted. The DiffServ architecture defines a Service Level Agreement 
(SLA) as a service contract between a customer and a service provider that 
specifies the forwarding service a customer should receive [3]. In fact, an SLA 
is a formal contractual agreement which covers issues such as network 
availability and payment agreements, as well as other legal and business 
related issues [5]. An SLA guarantees that traffic offered by a customer that 
meets the agreed conditions will be carried and delivered by the service 
provider. Depending on the contractual agreement, failure to provide the 
agreed service could result in some form of monetary or legal consequence. 
Not only does an SLA contain the technical forwarding behaviour that a 
certain traffic flow will receive; it could also contain additional parameters like 
delay and access privileges.  
 
 
An SLA is therefore a partially technical document that is determined by 
network administrators and customers, bound by legal obligations like any 
other contract. The QBone Signaling Design Team concludes that SLA 
negotiation is carried out by human parties: ‘Bandwidth brokers do not involve 
themselves in SLA negotiation and do not communicate SLAs between peers. 
Thus SLA (re-)negotiation is not one of the tasks of a bandwidth broker’ [6]. 
 
 
Since an SLA contains non-technical information, it cannot be used directly by a 
bandwidth broker. A Service Level Specification (SLS) contains exclusively 
the technical details specified by an SLA. It is essentially a translation of an 
SLA into the information necessary to configure network devices. 
The SLS dictates how traffic is dealt with within a DiffServ domain, from 
whether the flow is allowed access to that domain, to the forwarding 
behaviour it should receive within that domain. Once a domain has agreed to 
honour the conditions set out in an SLS, it is responsible for giving the 
guaranteed service to the traffic specified in the SLS for the duration of the 
agreement. It is important to note that an SLS is not a reservation; rather, it is 
a commitment to allow reservations based on the conditions found in the SLS.  
 
 
For the scenario where the destination of a traffic flow is in another DiffServ 
domain, inter-domain specifications need to exist between each adjacent pair 
of domains on the path to the destination. Due to the complex nature of network 
resources and network parameters, the QBone Signaling Design Team has 
only defined two SLS negotiation protocols (Phase 0 and Phase 1). 
In these two phases, ‘the terms of bilateral SLSs are propagated out-of-band 
(either through another protocol or manually), so that any two peering 
bandwidth brokers have a shared understanding of the SLS that exists 
between them’ [6].  
 
 
2.5  Resource Allocation Request 
 
 
A Resource Allocation Request (RAR) is a request for resources as outlined 
in an SLS. If the RAR does not conform to the conditions of the SLS, it is 
rejected. Otherwise, the RAR is admitted, and the DiffServ policing points 
need to be reconfigured to admit the new DiffServ traffic according to the SLS [6]. 
A Resource Allocation Answer is returned to the requesting host to confirm 
whether the network has been reconfigured successfully according to the SLS 
in place. It is important to note that a RAR can only request resources from an 
SLS it is authorised to use. Hence the RAR must include the identity of 
the SLS from which it wishes to request resources.  
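The admission rule just described, that a RAR must name an SLS it is entitled to and that the collective allocations must stay within the SLS conditions, can be sketched as follows. All class and field names are invented for illustration and do not come from the thesis code.

```java
// A minimal sketch of RAR admission against an SLS: the request is
// admitted (and the bandwidth reserved) only if it names the right SLS
// and the SLS still has capacity left. Illustrative names throughout.
public class AdmissionControl {
    static class Sls {
        final String id;
        final long capacityKbps;
        long allocatedKbps; // sum of admitted RARs so far
        Sls(String id, long capacityKbps) {
            this.id = id;
            this.capacityKbps = capacityKbps;
        }
    }

    static class Rar {
        final String slsId;
        final long requestedKbps;
        Rar(String slsId, long requestedKbps) {
            this.slsId = slsId;
            this.requestedKbps = requestedKbps;
        }
    }

    static boolean admit(Sls sls, Rar rar) {
        if (!sls.id.equals(rar.slsId)) return false;   // not entitled to this SLS
        if (sls.allocatedKbps + rar.requestedKbps > sls.capacityKbps)
            return false;                              // would exceed the SLS
        sls.allocatedKbps += rar.requestedKbps;        // reserve the bandwidth
        return true;
    }
}
```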
 
 
 
3.  Bandwidth Broker Architecture 
 
 
The QBone architecture states that as DiffServ deployment progresses, it should be 
expected that network operators will begin to rely on automated tools to make 
admission control decisions and configure network devices [9]. The 
Bandwidth Broker (BB) is the tool touted to automate such tasks. In 
fact, the idea of a BB was suggested in RFC 2638, after identifying that it 
would be impractical and inefficient for individual users to know the current 
network topology and all its policies in order to mark packets accordingly [10]. 
One of the goals of DiffServ was to simplify the information that a router has 
to maintain, that is, not having to keep track of every single traffic flow. 
A BB entity, however, allows each flow to be monitored at an administrative 
level.  
 
 
A BB manages the quality of service resources within a DiffServ domain 
based on all the SLSs that it is aware of. The majority of this network 
configuration involves the edge routers connecting the domain to adjacent 
domains. The BB is also responsible for managing inter-domain communication 
with peer BBs in adjacent domains, to allow negotiation of SLSs for inter-domain 
traffic flows [1]. It is also outlined in RFC 2638 that there shall be only one BB 
per domain, and that only the BB has access privileges to configure the edge 
routers.  
 
 
The BB monitors the state of resources within its domain, including the edge 
routers connected to adjacent domains. The state of network resources, 
coupled with policy information, allows the BB to process resource requests 
from clients within its domain and from adjacent peer BBs. All requests for 
resource usage must be made by a client to the BB within its domain. It is the 
function of the BB to determine whether the request can be accepted and whether 
peer BBs must be contacted. After processing the request, the BB must notify 
the client regardless of whether the request succeeded or failed.  
 
 
A request from a client to a BB for the use of resources is called a 
Resource Allocation Request (RAR). A RAR can contain parameters such as the 
amount of bandwidth requested, the duration of the intended flow and the 
destination of the flow. A RAR must conform to a pre-negotiated SLS that 
the client is entitled to use. Numerous RARs can be mapped to a single 
SLS, as long as the collective requests of the different RARs do not exceed 
the conditions of the SLS. A RAR which does not conform to an SLS is 
rejected.  
 
 
 
Figure 3.1 – The BB Architecture 
 
 
Above is an architecture for a BB; it is also the model that this 
implementation will utilise [5]. The main components are : 
 
 
1. Data Interface 
  
1.1 Data Repository 
1.2 Routing Tables 
 
2. Key Protocols 
 
2.1 Intra Domain Communication 
2.1.1 COPS-PR 
2.2 Inter Domain Communication 
2.3 User Interface Application 
 
 
3.1  Data Interface 
 
 
One of the most important components of a BB is the data interface. It is here 
that all the information needed for the BB to carry out its functions is stored. 
SLA/SLS agreements, current resource allocations, network management 
statistics and network topology all need to be included within the 
BB’s data repository. Whenever a BB receives a RAR from a client, it checks 
its data repository to determine whether the new flow can be accepted. 
Evidently, a robust and reliable data interface is required for efficient BB 
operation.  
 
 
A possible implementation of the data repository is to use the Lightweight 
Directory Access Protocol (LDAP). LDAP is an open standard protocol for 
accessing information services. The use of LDAP in this scenario can be 
justified by its lightweight nature and simplicity of implementation [5]. 
Information is stored in a tree/directory structure conforming to an LDAP 
schema, which can then be accessed by any LDAP client. One major 
drawback of using LDAP is that complex policies are not supported due to the 
directory structure it uses. Additionally, in the case of multiple clients, the 
transaction mechanism is not optimal.  
 
 
The other option is to use a Relational Database Management System 
(RDBMS). Using the Structured Query Language (SQL), which has become 
the industry standard for accessing databases, an RDBMS can be used to 
store the data required for BB operation. The fact that SQL is a standard 
language allows the BB to utilise any of the many SQL-based database 
systems available. SQL was first adopted as an industry standard in 1986 and 
has since become the most common database query language. Relational 
databases allow complex policy elements to be stored, and the schema can 
be extended with extra tables and columns through the SQL standard. 
As many relational database systems are used in large commercial 
environments, they are able to cater for many clients. 
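As a sketch of how a BB might use SQL against such a repository, the parameterised query below computes the bandwidth still available under an SLS. The table and column names are invented for illustration and are not the schema described later in the thesis.

```java
// Illustrative only: a parameterised SQL query a BB could run (e.g. via
// JDBC PreparedStatement) to find the capacity remaining under one SLS.
// Table and column names (sls, rar, capacity_kbps, ...) are assumptions.
public class SlsQuery {
    static final String REMAINING_CAPACITY =
        "SELECT s.capacity_kbps - COALESCE(SUM(r.requested_kbps), 0) " +
        "FROM sls s LEFT JOIN rar r ON r.sls_id = s.sls_id " +
        "WHERE s.sls_id = ? GROUP BY s.capacity_kbps";
}
```

In a BB server, the `?` placeholder would be bound to the SLS identity carried in the incoming RAR before the admission decision is made.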
 
 
The BB will also need access to the routing table for its domain in order to be 
aware of the network topology. The BB needs to know the location and status 
of the domain’s edge routers in order to configure them to handle traffic 
requests. It also needs to know the next hop outside of its domain in 
neighbouring domains to find the path towards a destination not in its domain. 
This routing information is stored at the BB in a database so that it can make 
decisions on resource requests quickly. 
 
 
3.2  Key Protocols 
 
 
3.2.1  Intra-Domain Protocol  
 
 
This is a protocol to communicate the BB’s decisions to routers within the 
bandwidth broker’s domain, in the form of router configuration parameters for 
quality of service operation and (possibly) communication with a policy 
enforcement agent at the router interface [11]. There are numerous methods 
of communicating with routers, including COPS, SNMP and Telnet or vendor-
specific command line operations. It is important to note that not all routers 
support the same method of remote configuration.  
 
 
Essentially, a BB is considered a policy decision point (PDP), where decisions 
are made about resource requests with respect to existing policy rules. If the 
BB is a PDP, then the routers are considered to be policy enforcement points 
(PEPs), as they are configured to ensure that policy decisions are enforced. 
The COPS protocol outlined in RFC 2748 describes a simple client/server 
model for supporting policy control over QoS signaling protocols. The model 
does not make any assumptions about the methods of the policy server, but is 
based on the server returning decisions to policy requests [13]. A PEP 
connects to a PDP through a TCP connection, which provides reliability. 
COPS is a client/server model, which means PEPs contact PDPs, which 
respond accordingly. Further enhancing its usefulness, COPS makes use of 
IPSec and message-level security for authentication and message integrity. 
However, COPS operating in this client/server fashion is not suited to DiffServ 
BB operation. As has been discussed, the BB makes the decision and simply 
configures the routers. Decisions from the BB are not initiated by the routers, 
which rules out the client/server model of COPS.  
 
 
COPS-PR 
 
 
The operation of COPS described above is referred to as outsourcing. 
Another mode of operation for COPS, called policy provisioning, is 
labelled COPS-PR. The development of COPS-PR was undertaken with a 
bias towards policy provisioning within a DiffServ environment. COPS-PR 
allows for efficient transportation of attributes and large amounts of data, 
coupled with flexible error handling [14]. The protocol is also event driven, 
meaning that polling between PDP and PEP does not occur, increasing its 
efficiency with respect to network traffic. The event that triggers the PDP to 
send data to a PEP can be from an external source, which is an important 
aspect of BB operation. For example, a request made by a client to a BB is 
deemed acceptable, and hence the routers in the domain need to be re-
configured to handle packets from the client. Below is a diagram of the COPS-
PR model showing typical operation [14]. 
 
 
Figure 3.2 – COPS-PR Operation 
 
 
Upon starting up, a PEP establishes a connection to the PDP, allowing the 
PDP to extract all policy-relevant information concerning that particular PEP 
from its data repository. The PDP then sends this data to the PEP, which 
configures itself from the values contained in it. During normal 
operation, if the PDP detects a policy change that needs to be propagated to 
the PEP, it sends the updates to the PEP.  
 
 
RFC 3084 introduces the idea of a Policy Information Base (PIB), which is a 
named data structure that holds instances of policy data transported by 
COPS-PR. The PIB name space is common to both the PEP and the PDP, 
and data instances within this space are unique within the scope of a given 
Client-Type and Request-State per TCP connection between a PEP and PDP 
[14]. The PIB is a tree-structured name space where each branch represents 
a structure of data or Provisioning Class (PRC), and the leaves represent 
instantiations of Provisioning Instances (PRIs) for any given PRC. Each PRI 
is identified by a Provisioning Instance Identifier (PRID), which acts like a 
named pointer. 
 
 
The COPS-PR architecture also defines the message types that are 
exchanged between PDP and PEP. They conform to the message 
specifications defined in the COPS base protocol [16]. The three messages 
are Request, Decision and Report State.  
 
 
Request (REQ) : A REQ message is sent from a PEP to the PDP to request 
configuration data. The configuration request serves as a request from the 
PEP to the PDP for provisioning policy data that the PDP may have for the 
PEP, such as access control lists. This includes policy the PDP 
may have at the time the REQ is received, as well as any future policy data or 
updates to this data [16].  
 
 
Decision (DEC) : This message is sent from the PDP to a PEP, either as a 
solicited or unsolicited message. That is, a DEC message can be sent from 
the PDP even without a PEP making a request; this is the provisioning 
aspect of COPS-PR operation. Each DEC message can contain multiple 
decisions, for example, deleting a certain policy in addition to installing some 
policies. An important note from RFC 3084 is that all the decisions contained 
in a message must be successful, or the entire decision is deemed a failure. In 
this case, the PEP reverts to its former state before receiving the failed 
DEC message.  
 
 
Report State (RPT) : Like the DEC message, the RPT message can be either 
solicited or unsolicited. Unlike the DEC message, the RPT is sent from a 
PEP to a PDP. A solicited RPT follows a DEC message from the PDP and acts 
as an acknowledgement, informing the PDP whether the policies in the DEC 
message were installed successfully. An unsolicited RPT message is sent for 
accounting purposes with respect to an installed policy or a change in the 
PEP's status [16].  
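The three-message exchange and the all-or-nothing rule for DEC messages can be summarised in code. This sketch only mirrors the sequence described above; it is not a COPS-PR encoder, and the decision strings are invented placeholders.

```java
import java.util.*;

// Sketch of the COPS-PR exchange: the PEP sends a REQ, the PDP answers
// with a DEC carrying several decisions, and the PEP replies with a
// solicited RPT. Per RFC 3084, a DEC is all-or-nothing: if any decision
// fails, the PEP reverts to its previous state.
public class CopsPrSketch {
    static List<String> installed = new ArrayList<>();

    // Returns the RPT outcome for a DEC message.
    static String handleDecision(List<String> decisions) {
        List<String> before = new ArrayList<>(installed);
        for (String d : decisions) {
            if (d.isEmpty()) {           // simulate a failed decision
                installed = before;      // roll back the whole DEC
                return "RPT: failure";
            }
            installed.add(d);
        }
        return "RPT: success";
    }

    public static void main(String[] args) {
        System.out.println("REQ: configuration request");
        System.out.println(handleDecision(Arrays.asList("install filter A",
                                                        "install filter B")));
        System.out.println(handleDecision(Arrays.asList("install filter C", "")));
        System.out.println(installed);   // C was rolled back with the failed DEC
    }
}
```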
 
 
3.2.2  Inter-Domain Protocol 
 
 
It can naturally be assumed that many traffic requests will involve a 
source and destination that are not within the same domain. In current 
network usage, people share information across their local area network but 
still need to access resources elsewhere on the internet. This 
interconnectivity has been central to the internet's success, allowing 
people across the world to share and interact with anyone else connected. 
As DiffServ looks to improve overall traffic delivery across the internet, 
interconnectivity between DiffServ domains must also be considered.  
 
 
As mentioned in Section 2, DiffServ domains are connected by links between 
each domain's edge routers. It is the job of the BBs to configure these 
edge routers to allow identified traffic flows to pass. The BB for a given 
domain will need to tell its ingress routers that traffic from another 
domain shall be allowed through and, if required, how to re-mark packets 
originating from that domain. Evidently, some mechanism is required for BBs 
to learn about traffic flows originating from other DiffServ domains, and 
also to give BBs the ability to request bandwidth from neighbouring 
DiffServ domains.  
 
 
The QBone Signaling Design Team [6] has proposed an inter-domain BB 
protocol called Simple Interdomain Bandwidth Broker Signalling (SIBBS). As 
the name suggests, it is a simple request/response protocol between BB 
peers in different DiffServ domains. It allows for a resource request to a 
destination that lies outside the originating sender's domain. A few 
assumptions are made for SIBBS' operation, namely that all peering domains 
are DiffServ domains and that SLSs are already established between peer 
BBs through some other means. It also leaves open the option for BBs to 
peer with non-adjacent BBs for aggregation of service requests [6]. 
 
 
SIBBS runs over TCP, as the reliability and flow control offered by TCP are 
sufficient for its simple operations. The QBone Signaling Design Team has 
not released an automatic protocol for connecting to peer BBs; there is no 
self-discovery of peer BB IP addresses at this stage of the SIBBS 
specification. As a result, connections to peer BBs are established with 
out-of-band information, either manually or through some external 
protocol.  
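The request/response pattern between peer BBs can be sketched as a plain TCP exchange. The one-line message format, host name and port below are invented for illustration; real SIBBS messages are defined by the QBone Signaling Design Team [6].

```java
import java.io.*;
import java.net.*;

// Request/response skeleton for contacting a peer BB over TCP, in the
// spirit of SIBBS. The peer's address would be obtained out-of-band,
// as no self-discovery mechanism exists.
public class PeerRequest {
    static String buildRequest(String src, String dst, int kbps) {
        return "RAR " + src + " " + dst + " " + kbps;   // hypothetical format
    }

    static String askPeer(String peerHost, int peerPort, String request)
            throws IOException {
        try (Socket s = new Socket(peerHost, peerPort);
             PrintWriter out = new PrintWriter(s.getOutputStream(), true);
             BufferedReader in = new BufferedReader(
                     new InputStreamReader(s.getInputStream()))) {
            out.println(request);        // send the resource request
            return in.readLine();        // block for the peer's answer
        }
    }

    public static void main(String[] args) {
        String req = buildRequest("10.0.0.5", "10.1.0.9", 256);
        System.out.println(req);
        // askPeer("peer-bb.example.net", 20002, req) would contact a peer
        // BB at an address configured manually or via an external protocol.
    }
}
```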
 
 
 
3.3  User Interface Application  
 
 
There needs to be an interface that allows a user to interact with the BB: 
to make requests, view responses, or simply browse the requests currently 
in use and the details of the pre-agreed SLS. These interfaces can vary 
from simple command lines to a full graphical interface, or a web interface 
accessed through an internet browser.  
 
3.4  Previous Bandwidth Broker Implementations 
 
 
There have been numerous undertakings worldwide to build a Bandwidth 
Broker for use within a DiffServ environment, the most notable being the 
following :  
 
1. Canarie ANA 
2. University of Kansas 
3. Merit 
4. Novel 
 
 
Further information regarding these implementations can be found in the 
Survey of Bandwidth Brokers [5]. However, the University of Kansas [24] 
implementation warrants discussion, as it shares a similar system design 
with ours. Clients make requests to the BB, which consults its database and 
responds accordingly. Upon a successful request, the BB reconfigures the 
routers within its domain to accept the new traffic flow.  
 
 
One major difference between our implementation and the Kansas 
implementation is the way the BB interacts with routers. The Kansas BB 
connects to routers within its domain upon startup and sends them 
configuration information over telnet sessions. This implementation uses 
the more robust COPS/COPS-PR protocol suite described previously. The 
University of Kansas implementation also lacks the inter-domain 
functionality which is incorporated in our implementation.  
 
 
The Kansas implementation uses the C programming language and incorporates 
a web user interface through CGI scripts on top of an Apache web server. 
That implementation was undertaken in 1999, when Java and web-based 
programming were not as widespread as they are today. Our implementation 
benefits from the advances in remote programming provided by Java and from 
the Java Web Services Developer Pack, a customised Apache container that 
allows interaction with Java server programs.  
 
 
 
4.  Implementation Details 
 
 
4.1  Operating Environment 
 
 
The Linux operating system was chosen as the platform for this thesis 
project as it offers support for DiffServ. The fact that Linux is 
open-source and allows easy operation of a workstation as a router further 
influenced this choice. The Mandrake 9.0 Linux 
(http://www.mandrakelinux.com/en/) distribution was used as it was the 
newest distribution at the commencement of this implementation. It is 
important to note that the choice of Linux distribution will not affect the 
operation of this thesis project, as the Java programming language is 
platform independent. The only requirement is a Linux kernel of version 2.4 
or later, to enable support for DiffServ.  
 
 
4.1.1  Linux DiffServ Support 
 
 
The Linux kernel from version 2.4 onwards has built-in support for DiffServ 
through the more general traffic control (tc) architecture. The tc utility 
is part of the iproute2 package, which ships with recent editions of the 
Linux kernel; however, some older versions do not include support for 
DiffServ [15]. Information and the location of DiffServ-enabled traffic 
control packages can be found at http://diffserv.sourceforge.net . It is 
critical that an updated version of the iproute2 package is installed 
successfully for this thesis project to perform as intended. In some cases, 
it might be necessary to recompile the kernel; with the Mandrake 9.0 
distribution, however, all that was required was to install the new traffic 
control package in place of the included version. Installation instructions 
are included within the package, with one important step to note: the build 
configuration file must be modified to enable the DiffServ parameter before 
compiling and installing the package. 
 
 
The basics of Linux traffic control include classing, queuing, filtering 
and policing. For example, traffic control can be used to decide which 
packets are dropped, while the packets allowed through are queued according 
to where they originated. The possibilities for setting up different 
traffic control policies are endless. A full discussion of the extensive 
uses of Linux traffic control is beyond the scope of this report; a 
thorough guide can be found in [16].  
 
 
As the goal of this thesis project is to create a logical BB entity 
operating within DiffServ domains, a DiffServ network operating environment 
must be installed. As the DiffServ architecture [3] requires that packets 
are forwarded according to the value in their DS field, the DiffServ 
traffic control package for Linux includes a field, skb->tc_index, which 
holds the value of the DSCP after classification. This field is set using 
the DiffServ field marker (dsmark) queueing discipline (qdisc). It is 
equally important to note that the dsmark qdisc can be used to mark the 
DSCP of packets as well as to read it [17].  
 
 
Furthermore, as the DiffServ architecture specifies that the AF per-hop 
behaviour has three differing drop precedence levels, an addition was made 
to the traffic control queuing disciplines. Linux traffic control contains 
Random Early Detection (RED), a discipline for dropping packets from a full 
queue. RED improves on simple tail dropping, where the queue fills 
sequentially to a limit and any traffic past that limit is dropped, which 
is unfair and leads to retransmission problems. RED instead statistically 
drops packets before a queue's hard limit is reached, allowing a congested 
link to slow gracefully and preventing retransmit synchronisation problems 
[16]. For DiffServ integration, a new queuing discipline, generalised RED 
(GRED), was added to traffic control, as more drop priorities were required 
than RED supports. GRED uses the four lower bits of skb->tc_index to select 
the drop class and hence the corresponding set of RED parameters. There is 
thus the possibility of mapping packets to sixteen virtual queues (four 
bits), although only three are needed for the AF per-hop behaviour. 
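The bit arithmetic behind this (the 0xfc mask and shift of 2 used by the tcindex filter, and GRED's use of the lower four bits) can be checked with a short example. This is only an illustrative sketch, using EF's DSCP of 0x2e as the test value.

```java
// Bit arithmetic behind the tc commands: the DSCP occupies the upper
// six bits of the old TOS byte, so the filter masks with 0xfc and
// shifts right by 2; GRED then selects its virtual queue from the
// lower four bits of skb->tc_index.
public class DscpBits {
    static int dscpFromTos(int tos)       { return (tos & 0xfc) >> 2; }
    static int gredDropClass(int tcIndex) { return tcIndex & 0xf; }

    public static void main(String[] args) {
        int tos = 0xb8;                           // EF traffic on the wire
        int dscp = dscpFromTos(tos);
        System.out.printf("DSCP = 0x%x%n", dscp); // 0x2e, i.e. EF
        System.out.println("drop class = " + gredDropClass(dscp));
    }
}
```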
 
 
The following Linux shell commands, executed at a Linux router, will allow 
basic DiffServ routing operation through that node [2]. Only the EF and BE 
forwarding behaviours have been included here, as these two classes are 
sufficient for a DiffServ test environment.  
 
 
1. tc qdisc add dev eth0 handle 1:0 root dsmark indices 64 set_tc_index 
 
 
2. tc filter add dev eth0 parent 1:0 protocol ip prio 1 tcindex mask 0xfc 
shift 2 
 
 
3. tc qdisc add dev eth0 parent 1:0 handle 2:0 prio 
 
 
4. tc qdisc add dev eth0 parent 2:1 tbf rate 1.0Mbit burst 1.5kB limit 1.6kB 
 
 
5. tc filter add dev eth0 parent 2:0 protocol ip prio 1 handle 0x2e tcindex 
classid 2:1 pass_on 
 
 
6. tc qdisc add dev eth0 parent 2:2 red limit 60KB min 15KB max 45KB 
burst 20 avpkt 1000 bandwidth 10Mbit probability 0.4 
 
 
7. tc filter add dev eth0 parent 2:0 protocol ip prio 2 handle 0 tcindex 
mask 0 classid 2:2 pass_on 
 
 
 
Some of the above shell commands need to be elaborated upon to clarify 
their functionality. A general understanding of Linux traffic control and 
routing commands is needed to follow them. As stated earlier, [16] provides 
an excellent introduction to Linux routing commands, whilst [17] and [18] 
provide more information regarding DiffServ for Linux. 
 
 
1. The first line adds a root queuing discipline to the Ethernet interface 
labelled eth0 and uses 1:0 as a reference handle. Also this line 
indicates that the queuing discipline is of DiffServ type. The 
set_tc_index tells traffic control to read the TOS field of all IP packets 
passing through eth0.  
 
2. This adds a filter to eth0 which selects all IP packets and shifts the 
tc_index by 2 as the DSCP only uses 6 bits while the TOS field of IP 
packets is 8 bits.  
 
3. Adds a queuing discipline to eth0 under 1:0 and uses the handle 2:0 for 
itself. 
 
4. Adds a token bucket filter (tbf) discipline to eth0 under the 2:0 discipline 
and is rate limited to 1 Mbps. 
 
5. A filter is added here to the 2:0 discipline which selects all IP packets  
with a handle of 0x2e (EF traffic) from the tc_index. It passes the 
packet to the 2:1 discipline for further processing. 
 
6. Adds a queuing discipline that is used for best effort internet traffic. It is 
connected to the 2:2 class and it uses RED queue discipline with a 
maximum bandwidth of 10 Mbps. A minimum, maximum and full 
threshold is supplied at which point it reverts to being a normal tail drop 
queuing discipline. 
 
7. A similar filter to command 5, however it looks for packets marked for 
BE treatment and sends them to the 2:2 class. 
 
 
4.2  Java Programming Language 
 
 
It was decided that the implementation of this thesis was to be undertaken 
using the Java programming language. Java is an object-orientated 
programming language which is built upon previous languages such as the 
popular C++. The following table shows the advantages and disadvantages of 
utilising the Java language. 
 
 
Advantages : 
 
• Platform independent 
• Easy client/server programming 
• MySQL database server integration 
• Object-orientated 
• Web orientated 
• XML support 
• Large number of pre-written classes 
 
Disadvantages : 
 
• Java needs to be installed on all server systems 
• A Java Virtual Machine needs to be available on clients to access 
Java-based web applications 
• Relatively new language, so not as much exposure as some other 
languages, e.g. C/C++ 
 
Java is a cross-platform language that allows the BB and clients to run on 
any Linux distribution once the Java development environment is installed. 
The implementation will also execute within a Microsoft Windows 
environment; however, since Microsoft Windows does not support the DiffServ 
routing commands, operation in such an environment will be limited. 
Nevertheless, a client may be executed from a Microsoft Windows computer to 
access a BB running in a Linux domain. 
 
 
The functionality of the BB follows the client/server model, with the BB 
responding to requests from a client. The interaction of the BB with 
routers also follows this model: the routers connect to the BB and 
consequently receive configuration information from it. Client/server 
communication is therefore essential. Java handles remote client/server 
functionality through TCP sockets and includes the Socket classes as part 
of its standard libraries. The ease of creating remote servers and 
connecting to them through TCP connections is a highlight of the Java 
programming language.  
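From the client side, connecting to such a server is a single constructor call. The following is a minimal sketch of a client; the host name, port and one-line message format are placeholders, not the implementation's actual protocol.

```java
import java.io.*;
import java.net.*;

// Minimal client-side counterpart: connect to the BB, send one request
// line, read one reply line. Host, port and message format are
// placeholders for illustration.
public class BBClientSketch {
    static String loginLine(int slaId, String password) {
        return "LOGIN " + slaId + " " + password;   // one text line per message
    }

    public static void main(String[] args) {
        try (Socket socket = new Socket("bb.example.net", 7777);
             PrintWriter out = new PrintWriter(socket.getOutputStream(), true);
             BufferedReader in = new BufferedReader(
                     new InputStreamReader(socket.getInputStream()))) {
            out.println(loginLine(42, "secret"));
            System.out.println(in.readLine());      // the BB's response
        } catch (IOException e) {
            System.err.println("Could not reach BB: " + e.getMessage());
        }
    }
}
```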
 
 
This ease of remote client/server programming was a primary goal of the 
Java programming language in its quest to become a web-orientated language. 
The large number of web sites that use Java to process information and 
produce output shows that this goal has become reality. In order to allow 
browser-based internet applications to be deployed, Sun Microsystems 
created the Java Web Services Developer Pack (JWSDP) [20]. Amongst the many 
features of this developer package, the support for XML and the deployment 
of web services are the most notable. For this thesis, the Java Web 
Services Developer Pack will be used to provide a browser-based client 
interface to the BB. The deployed web pages will provide a graphical user 
interface (GUI) offering an easy-to-use, platform-independent method of 
accessing the functionality of the BB. All that is required is an internet 
browser, as from the client perspective the interface consists of simple 
HTML input forms. The processing of the submitted values is undertaken at 
the BB server, with the Java Web Services Developer Pack providing the 
required connectivity.  
 
 
Additionally, the Java Web Services Developer Pack's support for XML 
provides a further option for making requests to the remote BB. The package 
provides classes that use the SOAP protocol to exchange XML documents 
containing well-defined XML tags, representing the same values a client 
would supply through the Java client or web interface when making a request 
of a BB. This opens the door for automated requests; by using the XML 
specification [22], the actual implementation of the end systems is 
transparent, as the data transferred is plain text. As long as both sides 
agree on a standard XML schema with well-defined tags, this method of 
remote access can be used in future improvements to the BB entity. 
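To make this concrete, a request could be serialised as a small XML document. The tag names below are invented purely for illustration; in practice both sides would first agree on a schema, as noted above.

```java
// Sketch of the kind of XML document a SOAP-based request could carry.
// The tag names are hypothetical; the values mirror those a client
// would supply through the Java or web interface.
public class XmlRarSketch {
    static String toXml(int slaId, String src, String dst, int kbps) {
        return "<rar>"
             + "<slaId>" + slaId + "</slaId>"
             + "<source>" + src + "</source>"
             + "<destination>" + dst + "</destination>"
             + "<bandwidth unit=\"kbps\">" + kbps + "</bandwidth>"
             + "</rar>";
    }

    public static void main(String[] args) {
        System.out.println(toXml(42, "10.0.0.5", "10.1.0.9", 256));
    }
}
```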
 
 
The Bandwidth Broker's operation relies heavily on its data repository to 
successfully process requests from clients. As stated earlier, the data 
repository will be implemented by a MySQL server, so the Java 
implementation must access this database to retrieve and update data. The 
Java platform defines standard libraries for accessing databases with the 
assistance of a database-specific driver. A Java/MySQL driver (MySQL 
Connector/J 2.0.14), which allows Java to access a MySQL database server, 
and its future updates can be found at 
http://www.mysql.com/products/connector-j/ . Evidently, this integration of 
MySQL and Java makes the choice of an RDBMS as the BB's data repository 
feasible. 
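Database access then reduces to standard JDBC calls. The sketch below builds the lookup a BB might run against the SLA table described in Section 4.3; the connection URL, credentials and database name are placeholders for whatever the deployment uses.

```java
import java.sql.*;

// Sketch of BB database access through JDBC. The query follows the SLA
// table of Section 4.3; URL, user and password are placeholders.
public class SlaLookup {
    static String availBwQuery(int slaId) {
        return "SELECT AvailBW FROM SLA WHERE sla_id = " + slaId;
    }

    public static void main(String[] args) {
        System.out.println(availBwQuery(42));
        // With the Connector/J driver on the classpath, the BB would run:
        // Connection c = DriverManager.getConnection(
        //         "jdbc:mysql://dbhost/bb", "bbuser", "bbpass");
        // ResultSet rs = c.createStatement().executeQuery(availBwQuery(42));
    }
}
```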
 
 
Having decided upon Java as the programming language for this thesis, it 
became paramount that a system design be created to make the most of Java's 
capabilities while providing all the required functionality of the BB and 
its clients. Coupled with the standard architecture for a BB as defined by 
the QBone working group [9], the following system design forms the basis of 
this implementation. 
 
 
 
 
Figure 4.1 – The BB System Diagram 
 
 
Some important points to note about this system : 
 
• There shall be one and only one BB for each DiffServ 
domain.  
 
• The Admin Console, Java Client and Web Client are remote 
clients that do not need to be in the same domain as the BB. 
However, users must supply a login identification which will 
determine whether they are authorised to make requests within 
a certain DiffServ domain. In the case of the Admin Console, the 
user has administrator permissions and is thus granted 
rights to make any required modifications to other users' 
requests and statistics. 
 
• The functionality of the Java client and the Web client will be 
identical. The only difference is the actual visual interface as the 
Java client is a text based console client while the web client is 
essentially a GUI accessed from an internet web browser.  
 
• A separate Admin Console will be supplied to enable network 
administrators to access the BB and make modifications that 
override requests made by clients. However, network 
administrators must bear in mind that a pre-negotiated SLA is a 
legally binding document and the conditions set out in the SLA 
possibly come with a guarantee, so any changes made by 
network administrators might have repercussions. However, 
there are times when changes are necessary, for example, 
hardware failure or network outages.  
 
• All BB’s will have access to a MySQL database. The presence 
of a maintained database is critical to a BB’s operation. Each BB 
will have access to a local copy of the database with changes 
being propagated to all other databases in other DiffServ 
domains in order to ensure dynamic operation.  
 
NOTE : For this version of the BB implementation, there will be a 
central database located at a well-known server address instead of 
one for each domain. This is to allow for easy testing and not 
having to worry about propagating database changes across 
multiple networks.  
 
• The number of routers within a single DiffServ domain will vary 
but there must be a minimum of one router per domain (this can 
be the Linux machine that the BB is operating on). The BB will 
configure routers within its own domain by using the COPS-PR 
protocol. 
 
• Inter-connectivity with peer BBs will be determined by how 
many neighbouring DiffServ domains each BB is connected 
with. The addresses of peer BBs are known to the BB through 
the MySQL database, as at this stage there is no self-discovery 
mechanism for BBs to find their peers. 
 
 
 
4.3  MySQL Database Schema 
 
 
Since the data repository is a critical element of the BB's viability, a 
more detailed analysis is required of the information it stores. As 
previously stated, SQL was chosen as it is a well-known and well-supported 
specification whose robustness suits the purposes of this BB 
implementation. This section describes the database tables required for BB 
operation. The BB database stores a variety of information which can be 
broken down into three classes : 
 
1. User Information  
 
2. BB Information  
 
3. Network Information 
 
 
4.3.1  User Information   
 
 
TABLE : SLA 
 
The SLA table is essentially the SLS described earlier, as it contains only 
the technical aspects of an SLA. However, it is more natural to name this 
table SLA, as it maps to the SLA ID negotiated with a client. A client will 
know their SLA ID, so it is easier for clients to use this identification 
to access the BB instead of having separate identifications for SLA and 
SLS. At present the SLA table contains conditions sufficient for simple 
network resource requests. If further fields are required, such as the time 
of day a user is allowed to make requests, additions to the SLA table are 
straightforward.  
 
 
FIELD                 TYPE     COMMENT 
sla_id (PRIMARY KEY)  INTEGER  Numerical ID corresponding to the 
                               identification number given to a 
                               negotiated SLA 
service_type          VARCHAR  Text representing the DiffServ service 
                               type, e.g. EF, AF and BE 
Startdate             DATE     Start date of the SLA 
Starttime             TIME     Start time of the SLA 
Enddate               DATE     End date of the SLA 
Endtime               TIME     End time of the SLA 
Rate                  INTEGER  The total bandwidth allocated to this 
                               SLA (kbps) 
AvailBW               INTEGER  The remaining bandwidth for this SLA 
                               (kbps) 
 
 
  
TABLE : RAR 
 
Upon successful acceptance of a request, the conditions of the RAR are 
stored in the database. The fields of the RAR table reflect those of the 
SLA table, as the RAR conditions depend on the SLA specifications. Like the 
SLA table, this table can easily be modified to suit different application 
parameters. One important difference between the RAR and SLA tables is that 
the RAR table contains the source and destination addresses. The BB 
requires this information to determine which domains a request spans and, 
therefore, to which BB to forward the RAR if it spans multiple domains.  
 
 
FIELD                 TYPE     COMMENT 
rar_id (PRIMARY KEY)  INTEGER  Numerical ID automatically assigned, 
                               incrementally, to new successful RARs 
Startdate             DATE     Start date for the RAR 
Starttime             TIME     Start time for the RAR 
Enddate               DATE     End date for the RAR 
Endtime               TIME     End time for the RAR 
GivenBW               INTEGER  The bandwidth successfully allocated 
                               to the RAR (kbps) 
Source                VARCHAR  Text representation of the source IP 
                               address of the requested flow (w.x.y.z) 
Destination           VARCHAR  Text representation of the destination 
                               IP address of the requested flow 
                               (w.x.y.z) 
sla_id                INTEGER  References the sla_id field of the SLA 
                               table to show which SLA the RAR 
                               belongs to 
 
 
 
TABLE : passwords 
 
Each SLA ID is associated with a password in order to allow the BB to 
authenticate if a user is entitled to make requests or not. This also protects 
against users requesting flows with a different SLA than that they are entitled 
to be using. NOTE : Simple security is employed in this implementation of the 
BB.  
 
 
FIELD                   TYPE     COMMENT 
sla_id (PRIMARY KEY)    INTEGER  References the sla_id field of the 
                                 SLA table 
password (PRIMARY KEY)  VARCHAR  Text representation of the password 
                                 corresponding to the relevant sla_id 
 
 
 
4.3.2  BB Information 
 
 
TABLE : BBPeer 
 
The data stored in this table is critical to the operation of the BB. It 
contains the data needed to start the BB, giving a new BB its 
identification and supplying it with parameters to communicate with routers 
within its domain. The status field is required for inter-domain 
communication, as a BB checks the status of other BBs when determining 
which BB to contact regarding an inter-domain traffic request. 
Consequently, a BB must change its own status field upon coming online and 
again when shutting down.  
 
FIELD                TYPE     COMMENT 
BB_ID (PRIMARY KEY)  INTEGER  Unique integer ID for each BB 
Address              VARCHAR  Text representation of the IP address 
                              of the BB (w.x.y.z) 
Domain               VARCHAR  Text representation of the DiffServ 
                              domain which the BB controls 
                              (w.x.y.0 OR w.x.0.0) 
Status               VARCHAR  The status of the BB (ON or OFF) 
Port                 INTEGER  The port that the BB runs on at the 
                              specified Address 
pdpPort              INTEGER  The associated PDP port used for 
                              COPS-PR communication on this BB 
GatewayPEPID         VARCHAR  Textual identification of the default 
                              edge router for the domain 
 
 
TABLE : Allow 
 
Another permission-based table, which determines whether requests under a 
certain SLA are allowed to pass through a certain domain. The BB inspects 
the values in this table while processing requests from clients. 
 
 
FIELD                 TYPE     COMMENT 
SLA (PRIMARY KEY)     INTEGER  References the sla_id field of the 
                               SLA table 
Domain (PRIMARY KEY)  VARCHAR  Text representation of the DiffServ 
                               domain (w.x.y.0 OR w.x.0.0) 
 
 
TABLE : codepoint 
 
A mapping must exist between the service type defined in the SLA and a 
DiffServ code point value. That mapping is stored in this table.  
 
 
FIELD                       TYPE     COMMENT 
service_type (PRIMARY KEY)  VARCHAR  Text representing the DiffServ 
                                     service type, e.g. EF, AF and BE 
dscp                        VARCHAR  Textual bit string representing the 
                                     DSCP for the corresponding service 
                                     type 
 
 
 
4.3.3  Network Information 
 
 
 TABLE : Capacity 
 
The BB needs to know how much bandwidth is available within its domain, 
from the total down to the bandwidth actually remaining while requests are 
in place. This implementation assumes that the total bandwidth for a domain 
is determined by the slowest link within it, as this link will evidently be 
the bottleneck of the network. There is no point offering a higher total 
bandwidth, as the bottleneck link would not cope, resulting in dropped 
packets and retransmissions, which are not desirable outcomes.  
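This bottleneck assumption amounts to a one-line computation: the domain's usable capacity is the rate of its slowest link. The link rates below are example values only.

```java
import java.util.Arrays;

// The bottleneck assumption in code: a domain's total capacity is the
// rate of its slowest link. Link rates here are example values in kbps.
public class DomainCapacity {
    static int capacity(int[] linkRatesKbps) {
        return Arrays.stream(linkRatesKbps).min().getAsInt();
    }

    public static void main(String[] args) {
        int[] links = {100000, 10000, 155000};          // links in the domain
        System.out.println(capacity(links) + " kbps");  // 10000 kbps
    }
}
```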
 
 
FIELD                 TYPE     COMMENT 
Domain (PRIMARY KEY)  VARCHAR  Text representation of the DiffServ 
                               domain (w.x.y.0 OR w.x.0.0) 
Capacity              INTEGER  The maximum bandwidth that this domain 
                               can handle (kbps) [this value is 
                               restricted by the slowest link within 
                               the domain] 
availCapacity         INTEGER  The remaining free bandwidth that this 
                               domain possesses 
 
 
TABLE : Domains 
 
The multi-network topology is stored in this table so that the BB knows 
where its peer BBs are located. As the BB architecture has been defined, 
the BB only needs to contact BBs in adjacent peer DiffServ domains.  
   
 
FIELD                    TYPE     COMMENT 
Domain (PRIMARY KEY)     VARCHAR  Text representation of the DiffServ 
                                  domain (w.x.y.0 OR w.x.0.0) 
Neighbour (PRIMARY KEY)  VARCHAR  Text representation of a neighbouring 
                                  DiffServ domain (w.x.y.0 OR w.x.0.0) 
 
 
TABLE : Flows 
 
This table keeps track of all the currently valid requests. The BB refers 
to this table to find out when resource levels need to be changed in all 
the domains associated with a particular RAR.  
 
 
FIELD                 TYPE     COMMENT 
RAR (PRIMARY KEY)     INTEGER  References the rar_id field of the RAR 
                               table 
Domain (PRIMARY KEY)  VARCHAR  Text representation of the DiffServ 
                               domain (w.x.y.0 OR w.x.0.0) 
bandwidth             INTEGER  The amount of bandwidth given to the 
                               RAR 
starttime             TIME     Start time for the RAR 
startdate             DATE     Start date for the RAR 
endtime               TIME     End time for the RAR 
enddate               DATE     End date for the RAR 
 
  
 
TABLE : PEP 
 
 
The BB needs to know about all the routers, or PEPs, within its domain in 
order to send them configuration information. PEPs are distinguished by a 
unique identification regardless of domain. The sindex field is required 
because, in this specific implementation, the list of PEPs for a domain is 
stored within an array. 
 
 
FIELD                TYPE     COMMENT 
pepID (PRIMARY KEY)  VARCHAR  Textual identification tag for the PEP 
                              (router) 
sindex               INTEGER  Integer value of the array index 
                              containing the TCP socket corresponding 
                              to this PEP 
Domain               VARCHAR  Text representation of the DiffServ 
                              domain to which the PEP belongs 
                              (w.x.y.0 OR w.x.0.0) 
Address              VARCHAR  Text representation of the IP address 
                              of the PEP (w.x.y.z) 
Neighbour            VARCHAR  Text representation of a neighbouring 
                              DiffServ domain to which the PEP is 
                              connected (w.x.y.0 OR w.x.0.0) [PEP is 
                              an edge router] 
status               VARCHAR  PEP status ('ON' or 'OFF') 
 
 
 
 
5.  Program Development  
 
 
Having defined the requirements and the data that the BB will process, the 
actual programming implementation of the BB can be undertaken. Using a 
divide-and-conquer methodology, the system has been divided into five 
sub-systems which are implemented separately and then integrated to provide 
complete functionality. The five sub-systems are :  
 
 
1. BB Server 
2. Linux Routing Client 
3. COPS-PR Communication 
4. Java Client  
5. Web Interface 
 
The following sections describe each sub-system and discuss the important 
aspects of the program code. The complete code for this thesis 
implementation (including comments) can be found in Appendix A. 
 
 
5.1  BB Server 
 
 
Requirements : 
 
• Accept incoming TCP connections from clients (the ability to 
accept more than one client at any one time). 
 
• Process requests from clients after performing simple login 
authentication. The data for requests will be sent from the client 
and it is the duty of the BB server to process this information 
and make the required decisions. 
 
• Access the MySQL database in order to make decisions. 
 
• Send request response to client regardless of success or failure. 
 
• Accept incoming connections from Linux based routers and 
send configuration data to the routers (through COPS-PR 
protocol). 
 
• Connect and accept incoming connections from peer bandwidth 
brokers in adjacent domains in order to process inter-domain 
requests. 
 
 
Design & Implementation : 
 
 
The top-level class for the BB server is BBServ. This class is started 
first to begin BB operation and can be considered an initialisation 
process, as BBServ creates the different threads which perform the required 
functionality of the BB. A major requirement of the BB was to allow clients 
to connect and to process each client's requests. The communication 
protocol used for client/server interaction is TCP, which Java supports 
through its included net package [19], in particular the ServerSocket and 
Socket classes. It is easy to create a TCP server with the ServerSocket 
class.  
 
try { 
 
    serverSocket = new ServerSocket(port); 
    System.out.println("Server #" + bbID + " ready to receive requests."); 
 
} catch (IOException e) { 
 
    System.err.println("Could not listen on port: " + port); 
    System.exit(-1); 
 
} 
 
The above code outlines how to create a server which listens to client 
connection attempts through the current IP address of the server and the port 
defined by the variable port which differs with respect to the identification of 
the BB server being started [A default port of 7777 can be used if all BB’s are 
located on different computers. For testing purposes, multiple BB’s were 
installed on a single computer hence the need for different port numbers].  
 
 
Once a server socket has been created, it listens on this port for incoming 
TCP connections. The BB is required to accept multiple client connections, 
so once an incoming connection is detected, the BBServ class creates a new 
BBMultiServerThread which handles the actual requests from that client. 
 
while (listening) { 
 
    Socket socket = serverSocket.accept(); 
 
    new BBMultiServerThread(pdp, peps, socket, bbID, sockets, 
            peerID, statusList, mysqlURL, currentDomain).start(); 
 
} 
 
The BB server also accepts connections from Linux routers (PEPs), as the BB 
acts as the PDP defined by the COPS protocol. These are also TCP 
connections, and another ServerSocket is created. As the COPS protocol code 
is a separate package, the BBServ class merely starts and initiates the 
code for a PDP server. [The default port for a COPS PDP is 3288; however, 
as more than one logical PDP is to be run on a single computer for testing 
purposes, different port numbers are obviously needed.]  
 
 
Upon starting, the BB server also creates a timed thread which 
automatically removes any requests which have expired. This involves removing 
their entries from all the relevant database tables and also reconfiguring the 
routers to no longer accept those traffic flows. The interval between checks 
can be adjusted in the file autoDelete.java to suit individual requirements. 
Frequent checks consume CPU time and also create network traffic to the 
database, whereas infrequent checks can leave requests using resources 
they are no longer entitled to, which could be an even bigger drain on 
resources. Clearly a balance is required, and this value is easily set by 
network administrators. 
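The expiry test itself is simple clock arithmetic driven by a periodic timer. 
A minimal, self-contained sketch of the idea (class and method names here are 
illustrative, not those of autoDelete.java): 

```java
import java.util.Timer;
import java.util.TimerTask;

public class ExpiryCheck {

    // A request has expired once the current time passes its end time.
    static boolean isExpired(long endMillis, long nowMillis) {
        return nowMillis > endMillis;
    }

    // Schedule a periodic check; the interval is the tunable balance
    // between CPU/database load and stale reservations discussed above.
    static Timer startChecker(long intervalMillis, Runnable removeExpired) {
        Timer timer = new Timer(true); // daemon thread
        timer.scheduleAtFixedRate(new TimerTask() {
            public void run() { removeExpired.run(); }
        }, intervalMillis, intervalMillis);
        return timer;
    }

    public static void main(String[] args) {
        System.out.println(isExpired(1000L, 2000L)); // true: past end time
        System.out.println(isExpired(3000L, 2000L)); // false: still valid
    }
}
```

In the real server the Runnable would delete the expired rows from the 
database tables and trigger router reconfiguration. 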
 
 
After an incoming client connection has been established, the BB must 
process the client’s requests. As discussed above, BBServ 
starts a new BBMultiServerThread instance for every client connection. 
Clearly, a form of data transfer needs to take place between the BB server 
and the client. 
 
PrintWriter out = new PrintWriter(socket.getOutputStream(), true); 
 
BufferedReader in = new BufferedReader( 
                new InputStreamReader(socket.getInputStream())); 
 
The above lines create separate output and input data streams which can be 
used to send information to, and receive information from, the client. Data 
is transferred between the BB server and the client one text line at a time. 
To send data, out.println(data) is executed, whilst to receive data, 
in.readLine() is executed. This simple yet effective line-by-line data 
transmission is more than sufficient for the limited traffic being sent 
between the client and the BB server. 
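This exchange can be sketched end to end in a few lines. The following 
self-contained example uses a throwaway echo server on an ephemeral port, 
not the BB’s actual request handling, to show one request line going out and 
one reply line coming back: 

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;

public class LineExchange {

    // Send one request line to a throwaway echo server and return the
    // single reply line, mirroring the BB's line-by-line protocol.
    static String exchange(String request) throws Exception {
        // Port 0 lets the OS pick a free ephemeral port.
        try (ServerSocket server = new ServerSocket(0)) {
            Thread t = new Thread(() -> {
                try (Socket s = server.accept();
                     BufferedReader in = new BufferedReader(
                             new InputStreamReader(s.getInputStream()));
                     PrintWriter out = new PrintWriter(
                             s.getOutputStream(), true)) {
                    // Server side: read one line, reply with one line.
                    out.println("ACK " + in.readLine());
                } catch (Exception e) { e.printStackTrace(); }
            });
            t.start();
            try (Socket client = new Socket("127.0.0.1",
                                            server.getLocalPort());
                 PrintWriter out = new PrintWriter(
                         client.getOutputStream(), true);
                 BufferedReader in = new BufferedReader(
                         new InputStreamReader(client.getInputStream()))) {
                out.println(request);          // send one request line
                String reply = in.readLine();  // receive one reply line
                t.join();
                return reply;
            }
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(exchange("request bw;1;..."));
    }
}
```

The real BBMultiServerThread performs the same readLine()/println() pairing, 
but against the message formats described later in this section. 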
 
 
At this stage, the BB server performs basic authentication of clients, who 
supply a user identification and password to the BB. The BB checks this 
information against entries stored in the MySQL database and, if the 
validation fails, the client is disconnected from the server. If the client 
is an authenticated user, it is allowed to send requests to the server, with 
its identification used to determine what services it can request, as the 
client identification corresponds to a pre-negotiated SLA identification. 
 
 
As discussed, data is transferred between client and server one line at a 
time. This helps to reduce network traffic, as the client interface appends 
all of a user’s request parameters into one string and sends it across to the 
server. The server therefore needs to interpret this string and extract all 
the relevant parameters; it performs pattern matching to determine the type 
of request and to locate the parameters sent from the client. Critically, a 
standard message format must exist between the server and client for any 
functionality to be possible. The current implementation utilises textual 
string-based messages for ease of interpretation, with the option of 
developing encoded messages in the future. Below are the textual formats for 
the client/server messages : 
 
 
Request Bandwidth : 
 
“request bw;sla;startdate;starttime;enddate;endtime;bw;src;dst” 

“request bw;1;2003-05-10;00:00:01;2003-06-30;23:59:59;1000;129.94.231.23;129.94.232.41” 
 
request bw  - tells the server it is a request for bandwidth 
sla   - SLA identification 
startdate   - the start date for the request 
starttime   - the start time for the request 
enddate   - the end date for the request 
endtime   - the end time for the request 
bw    - the amount of bandwidth requested  
src   - source IP address of the request 
dst    - destination IP address for the request 
 
 
SLA Info : 
 
“SLA info;sla” 

“SLA info;1” 
 
sla  - SLA identification 
 
 
RAR Info : 
 
“RAR INFO;sla;rar” 
 
“RAR INFO;1;33” 
 
sla  - SLA identification 
rar  - RAR identification 
 
 
Delete RAR : 
 
“delete RAR;sla;rar” 
 
“delete RAR;1;33” 
 
sla  - SLA identification 
rar  - Identity of RAR to delete 
 
 
Modify RAR : 
 
“modify RAR;sla;rar;startdate;starttime;enddate;endtime;bw” 
 
“modify RAR;1;33;2003-05-11;10:00:00;2003-06-30;10:00:00;1500” 
 
sla  - SLA identification 
rar  - Identity of RAR to modify 
startdate - New start date of RAR 
starttime - New start time of RAR 
enddate - New end date of RAR 
endtime - New end time of RAR 
bw  - New requested bandwidth for RAR 
 
 
Modify SLA: 
 
“modify SLA;sla;service_type;total_bw;avail_bw; 
startdate;starttime;enddate;endtime” 
 
“modify SLA;1;EF;10000;9000;2003-04-25;09:00:00;2003-12-31;23:59:59” 
 
sla  - Identity of SLA to modify 
service_type - The DiffServ forwarding behaviour for this SLA 
total_bw - Total bandwidth guaranteed to this SLA 
avail_bw - Remaining bandwidth for this SLA 
startdate - New start date for the SLA 
starttime - New start time for the SLA 
enddate - New end date for the SLA 
endtime - New end time for the SLA 
 
 
In addition to the above parameter-based request messages, there are 
two special messages, “exit” and “shutdown”. Upon receiving the “exit” 
message, the server closes the connection to the client, as the client has 
effectively requested to disconnect. The “shutdown” message is sent to close 
the BB server; upon receiving it, the BB begins its shutdown procedure to 
ensure the server is shut down safely. 
 
 
As can be seen, the textual messages sent between the server and the 
client are single strings with parameters separated by the ‘;’ character. The 
‘;’ character was chosen because it will not appear in normal input. It might 
appear more natural to use the ‘:’ character, but in this case that is not 
possible, as the messages contain time elements of the form hh:mm:ss. 
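Given this delimiter choice, extracting the parameters on the server side 
reduces to a single split on ‘;’. A small illustrative sketch (RequestParser 
is a hypothetical name, not a class in this implementation): 

```java
public class RequestParser {

    // Split a client message into its ';'-separated fields.
    // The first field identifies the request type.
    static String[] parse(String message) {
        return message.split(";");
    }

    public static void main(String[] args) {
        String msg = "request bw;1;2003-05-10;00:00:01;"
                   + "2003-06-30;23:59:59;1000;129.94.231.23;129.94.232.41";
        String[] f = parse(msg);
        System.out.println(f[0]);     // request bw
        System.out.println(f[1]);     // SLA identification: 1
        System.out.println(f[3]);     // start time, ':' is safe inside a field
        System.out.println(f.length); // 9
    }
}
```

Because ‘:’ never acts as the field delimiter, the hh:mm:ss time fields 
survive the split intact. 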
 
 
Many of the BB server operations revolve around the information stored in its 
data repository, which, as previously mentioned, is the MySQL database 
server. A specific JDBC driver exists to allow the Java programming language 
to access a MySQL database. Once the driver has been made accessible to the 
Java runtime, access to the database can be initiated by the following 
code : 
 
 
try { 
 Class.forName("com.mysql.jdbc.Driver").newInstance(); 
} catch (Exception e) { 
 System.out.println("BBSQL: JDBC exception"); 
 System.exit(1); 
} 
 
 
The above code segment is required in any Java class wishing to access a 
MySQL database. Once the driver has been initiated, access to a database 
can be achieved through the java.sql package. This package contains 
numerous classes and methods to connect to a SQL based database and to 
access the database. The methods used extensively throughout this 
implementation are : 
 
 
try { 
 Connection conn; 
 conn = DriverManager.getConnection("jdbc:mysql://localhost/test_BB"); 
  
 Statement st1 = conn.createStatement(); 
    
 ResultSet res = st1.executeQuery("select rar_id,source from RAR ..."); 
    
 while (res.next()) { 
  int rarID = res.getInt("rar_id"); 
  String src = res.getString("source"); 
 } 
  
 st1.executeUpdate("insert into table ..."); 

} catch (SQLException ex) { 
 System.err.println("SQLException: " + ex.getMessage()); 
} 
 
 
A Connection instance is created with the location of the database and also 
the name of the database which is called test_BB in this implementation. Any 
access to the database is now made through this Connection instance.  
 
 
Statement st1 = conn.createStatement(); 
    
ResultSet res = st1.executeQuery("select rar_id,source from RAR ..."); 
 
 
The first line above creates a new statement; two types of statements are 
utilised in this thesis implementation. A very common application of an SQL 
statement is the query statement, which is used to retrieve data from the 
SQL database. The Java method which queries an SQL database is 
executeQuery("select ..."), with the SQL query supplied as a parameter to 
the method. After executing the query, the database returns the reply to 
Java as a ResultSet. The ResultSet class contains methods to retrieve 
individual elements from the returned set of results; in particular, the 
getInt() and getString() methods are predominantly used in this thesis. The 
arguments supplied to these methods are the names of the fields returned by 
the SQL query. 
 
 
while (res.next()) { 
 int rarID = res.getInt("rar_id"); 
 String src = res.getString("source"); 
} 
 
 
The other type of statement is the update statement, which is used when an 
alteration to the database is required. Updates come in two forms: an 
insert, which creates a new entry in the database, and an update, which 
modifies data already stored in the database. 
 
 
This implementation also supports inter-domain operation, which requires that 
bandwidth brokers can connect to peer BB’s in adjacent networks. The 
functionality of inter-domain behaviour is outlined by the QBone Signaling 
Design Team [6]. In order for each BB to connect to a peer BB in an adjacent 
domain, the complete network topology needs to be stored in the MySQL 
database. The BBPeer and Domains tables in the database (refer to Section 
4.3) provide each BB with enough information to locate the peer BB in an 
adjacent domain on the path from source to destination. A very important 
field in the BBPeer table is the ‘status’ field, which holds the current 
online status of any particular bandwidth broker [either ON or OFF]. 
 
 
Initially, upon starting up, the BB server will attempt to connect to the 
other peer BB’s which are marked as online in the BBPeer database table. The 
SIBBS_CONNECT class connects the current BB to its peers and then 
creates arrays to keep track of the status of online peer bandwidth brokers 
and of the established connections with them. 
 
 
SIBBS_CONNECT sibbs = new SIBBS_CONNECT(mysqlURL); 
Socket[ ] sockets = sibbs.connectOnline(bbID); 
String[ ] statusList = sibbs.statusList(bbID); 
 
  
The above methods are executed by the BB server during startup and the two 
arrays are passed to each stage of the BB server operation in order to keep 
track of inter-domain connections and to maintain a list of the status of peer 
bandwidth brokers. The updatePeers method is called whenever a client 
connects to a BB to ensure that inter-domain connections between peer BBs 
are updated. 
 
 
Each request for resources from a client contains a source and destination 
address. The matchNetwork class determines whether the new request is an 
intra-domain or inter-domain request. In the case where the destination of a 
request is in a different domain to the source, the matchNetwork class 
determines which adjacent peer BB to contact on the path to the destination 
domain. It is a basic route discovery mechanism based on the network 
topology information stored in the database. To make an inter-domain 
request to a peer BB, the current BB sends the request to the adjacent 
domain’s BB along with the network administrator identification and 
password. This process continues all the way to the destination domain; at 
each transit domain the parameters of the request are checked, and if any of 
the transit domains cannot handle the traffic request, a failed response is 
sent back to the originating BB. 
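This route discovery can be sketched as a walk over stored next-hop 
information. In the sketch below, the in-memory map stands in for the 
Domains/BBPeer database tables, and each domain is assumed to have a single 
fixed neighbour toward the destination; class and method names are 
illustrative, not those of matchNetwork: 

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class MatchNetworkSketch {

    // Walk the next-hop table from src until dst is reached. A null
    // return means no logical BB path exists, so the request is rejected.
    static List<String> findPath(Map<String, String> nextHop,
                                 String src, String dst) {
        List<String> path = new ArrayList<>();
        String current = src;
        path.add(current);
        while (!current.equals(dst)) {
            String next = nextHop.get(current);
            if (next == null) return null; // no logical path: reject
            current = next;
            path.add(current);
        }
        return path;
    }

    public static void main(String[] args) {
        Map<String, String> hops = new HashMap<>();
        hops.put("129.94.232.x", "129.94.231.x");
        hops.put("129.94.231.x", "129.94.230.x");
        System.out.println(findPath(hops, "129.94.232.x", "129.94.230.x"));
        System.out.println(findPath(hops, "129.94.230.x", "129.94.232.x"));
    }
}
```

The real implementation performs this lookup per destination via SQL queries 
rather than a single static map, but the rejection rule is the same: no 
reachable peer chain, no reservation. 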
  
 
After checking the client’s request against data stored in the database, two 
things can happen. If the request is deemed unacceptable, a message is 
sent back to the client stating why the request cannot be accepted. If the 
request can be accepted, then modifications to the routing tables and the 
database are carried out to accommodate the new request. The BB server is 
required to send configuration information to the routers in its domain so 
that the new traffic flow can be routed. The Linux routers run a Java 
program which enables them to access their routing tables and to receive 
configuration information through the COPS protocol, which is discussed in 
the next section. 
 
 
 
 
 
 
 
 
 
5.2  COPS-PR Communication 
 
   
NOTE : The COPS/COPS-PR implementation was completed by previous 
students at the University of New South Wales [27]. This thesis incorporates 
the use of this package with a few slight modifications to integrate with our 
system. 
 
 
Requirements : 
 
• Follow the COPS/COPS-PR protocol specifications for 
communication from a PDP (BB Server) to a PEP (Linux 
Routing Client). 
 
 
Integration Details :  
 
 
The implementation of the COPS protocol is a complete package with fully 
specified COPS message formats. It also includes COPS-PR support which 
our system will be utilising.  
 
 
One major integration issue with the COPS code was that it uses a PIB to 
store data, while the BB server uses a MySQL database. The COPS package 
includes a class called IpFilterEntry which is used to store the parameters 
required to create a Linux router filter within a PIB. 
 
 
public static byte[] prcIndex = ObjectID.parseFrom("1.3.6.1.2.2.1.3.2"); 
 
 
The PRC used to store the data in the PIB is hard-coded at the 
1.3.6.1.2.2.1.3.2 location. The parameters that a Linux router (PEP) expects 
in this thesis implementation are the source and destination address of the 
flow and an action (either add or del). The router uses this information to 
either add or delete a route from its routing table. If further parameters 
are required in future developments, additions can be made accordingly to 
the IpFilterEntry class. 
 
 
So, in order to send COPS messages to a PEP, an IpFilterEntry instance needs 
to be created to provide the relevant values to the PIB. The class RARcops 
was created to initialise the procedure for sending COPS decision messages 
to the PEP. The RARcops class is called when a resource request or RAR 
deletion is deemed acceptable by the BB server and configuration data must 
be sent to the router. The values to be sent are retrieved from the MySQL 
database and passed to this class, which places them in the PIB before 
sending COPS messages to the router. 
 
 
public void initIPFilterTable(PIB pib, String src, String dst) { 
 try { 
  PRC prc = new PRCImpl(IpFilterEntry.class, IpFilterEntry.prcIndex); 
  pib.putPRC(prc.getIndex(), prc); 

  byte[] dstAddr = ObjectID.parseFrom(dst); 
  byte[] srcAddr = ObjectID.parseFrom(src); 
  int dscp = 8; 
  byte[] mask = ObjectID.parseFrom("255.255.255.255"); 
  byte prid = (byte) 255; 

  IpFilterEntry filterEntry = new IpFilterEntry(prid, dstAddr, mask, 
    srcAddr, mask, dscp); 

  prc.putPRI(prid, filterEntry); 

 } catch (Exception e) { e.printStackTrace(); } 
} 
 
 
The above method places the required parameters into the PIB at the hard-
coded PRC index after parsing the textual values supplied. This method 
allows integration of the PIB with the MySQL database, as values can first 
be retrieved from the database, parsed into values accepted by the COPS 
protocol and then stored in the PIB. Once the values have been stored in the 
PIB, COPS messages can be sent by supplying the connection to the relevant 
PEP, the PRC index and the type of command to be sent. This is achieved by 
the sendcops method. The only COPS message dynamically used by the BB 
server is the Decision message, which is created by : 
 
 
Decisionpr[] decpr = {new Decisionpr(COContext.CONFIGURATION, (short) 
0,commandCode,(short) 0, new NamedDecData(decs))}; 
    
CopsprDEC dec = new CopsprDEC((short) 0, 1, decpr); 
 
 
The variable decs contains the information retrieved from the PIB, and a 
new Decision message is created with this information as the message body. 
The other important variable to note is commandCode, which can take two 
possible values telling the PEP either to install or to remove a decision. 
This corresponds to adding or deleting a route from the routing table. It is 
important to note that the COPS implementation allows multiple decisions to 
be sent with each Decision message; however, this is not required for this 
thesis. 
 
 
 
5.3  Linux Routing Client 
 
 
Requirements : 
 
• Set up a DiffServ environment 
 
• Connect to the BB server to receive new routing configuration 
information (through COPS-PR protocol) and the ability to 
execute the new configuration data. 
 
 
Design & Implementation : 
 
 
The Linux routers act as the PEP as described in the COPS-PR protocol and 
they play an important part in DiffServ operation. Section 2 describes the 
functionality of a router within a DiffServ environment. The router requires the 
BB to supply it with configuration information in order for it to know how to 
treat packets that it receives. The PEPClient class is used by this 
implementation as the Linux router client class. An instance of this class 
needs to be executed at each Linux machine dedicated to being a router. 
 
 
As DiffServ operation is required, the router needs to initialise itself for 
DiffServ operation once it starts up. The Linux shell commands in Section 
4.1.1 need to be executed to set up a basic DiffServ router. The code in the 
LinuxRoute.java file, in particular the setupDiff() method, executes the 
required shell commands. Java allows access to a system shell to execute 
commands through the use of the Runtime class as follows : 
 
 
String diffserv = "tc qdisc add dev eth0 handle 1:0 root dsmark indices 64 set_tc_index"; 
Runtime.getRuntime().exec(diffserv); 
 
 
The Java Runtime class allows a Java application to interface with the 
environment in which it is running. This executes the command represented 
by the string diffserv in the same way as if a user had typed the command in 
a Linux shell. Hence, by using the Runtime class, the PEPClient can access 
the routing table and its traffic control conditions as required to enable 
DiffServ operation. The PEPClient calls the LinuxRoute class whenever it 
needs to make changes to the routing table, that is, to install or delete an 
entry. 
 
 
The PEPClient makes use of the supplied COPS/COPS-PR code. The COPS 
package supplies a class for use as a PEP implementation, CopsprPepImpl, 
and this thesis makes use of it in order to send and receive COPS messages 
from the BB server which, as previously stated, acts as the PDP. After the 
PEPClient has initialised itself for DiffServ operation, it attempts to 
contact the PDP that oversees its domain. It accesses the MySQL database to 
find the address and port of the PDP, and then connects to it over TCP using 
these parameters. If the PDP accepts the PEP connection, there is the option 
for the PDP to send the PEP all previous configuration information through a 
COPS Decision message. Otherwise, the PDP will send unsolicited Decision 
messages to the PEP whenever the BB server determines it necessary. Also at 
startup, the PEPClient updates the database to let the PDP know that it is 
now online and available to receive configuration data. 
 
 
The processDEC() method in the PEPClient extracts the decisions from the 
incoming Decision message and deals with them according to the command 
code supplied with the message. The way to extract information from the 
COPS message is part of the COPS package and more information can be 
found in [27]. One parameter that needs to be retrieved is : 
 
 
short commandCode = decision.getFlag().getCommandCode(); 
 
 
The value of commandCode determines whether the PEP will issue an add or 
a delete command to the LinuxRoute class in order to alter the routing 
table. The command and relevant parameters can then be passed to the 
corresponding LinuxRoute method based on the action required by the BB 
server. As the COPS package encodes data using BER as a string of bytes, 
values from the COPS Decision message must be converted back into a 
usable format. One particular requirement is the creation of IP addresses in 
the form w.x.y.z. This is achieved by the following code extract : 
 
 
IpFilterEntry ipfe = new IpFilterEntry(b.epd.getEncodedPRIValue()); 

String src = ""; 
for (int i = 0; i < ipfe.srcAddr.length; i++) { 
 if (i > 0) src = src + ".";           // dot between octets 
 src = src + (ipfe.srcAddr[i] & 0xFF); // unsigned value of each byte 
} 
 
  
 
[The sample RequestBW.xml SOAP document was reproduced here, but its XML 
markup has not survived extraction. The remaining element values were: 5, 
sla005, 2003-10-10, 00:00:00, 2003-11-11, 00:00:00, 500, 127.1.1.0 and 
127.1.1.1.] 
 
 
 
 
The first two lines declare that this document conforms to the XML standard 
and that it is a SOAP message using an envelope-type schema as defined at 
the included SOAP-ENV URL. The main section of the document is the SOAP 
body, which contains the data to be transmitted to the BB server. There is 
nothing new in the parameters; they are the defined parameters required for 
a resource request. 
 
 
Processing of the XML file and transmission of the SOAP message take 
place in the RequestBW.java file. The first two lines below create and 
initialise a SOAP connection, while the following three lines create a new 
SOAP message. 
 
 
SOAPConnectionFactory scf = SOAPConnectionFactory.newInstance(); 
SOAPConnection con = scf.createConnection(); 
 
   
MessageFactory mf = MessageFactory.newInstance(); 
SOAPMessage message = mf.createMessage(); 
SOAPPart soapPart = message.getSOAPPart(); 
 
 
The contents of the XML document RequestBW.xml are then added to the 
SOAP message after creating a new XML document object. The 
DocumentBuilder class contains a method to automatically parse the contents 
of the XML document.   
 
 
DocumentBuilderFactory dbf =DocumentBuilderFactory.newInstance( ); 
dbf.setNamespaceAware(true); 
DocumentBuilder db = dbf.newDocumentBuilder(); 
Document doc = db.parse("xml/RequestBW.xml"); 
DOMSource domsrc = new DOMSource(doc); 
soapPart.setContent(domsrc); 
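The same DocumentBuilder machinery can be exercised against an in-memory 
string, which makes the parsing step easy to demonstrate in isolation (the 
tag names below are illustrative, not necessarily those of RequestBW.xml): 

```java
import java.io.ByteArrayInputStream;
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;

public class ParseSketch {

    // Parse an XML string and return the text content of the named element.
    static String elementText(String xml, String tag) throws Exception {
        DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
        dbf.setNamespaceAware(true); // as in RequestBW.java
        DocumentBuilder db = dbf.newDocumentBuilder();
        Document doc = db.parse(
                new ByteArrayInputStream(xml.getBytes("UTF-8")));
        return doc.getElementsByTagName(tag).item(0).getTextContent();
    }

    public static void main(String[] args) throws Exception {
        // Hypothetical request fragment, not the actual RequestBW.xml.
        String xml = "<request><sla>5</sla><bw>500</bw></request>";
        System.out.println(elementText(xml, "sla")); // 5
        System.out.println(elementText(xml, "bw"));  // 500
    }
}
```

The production code differs only in that db.parse() is given the 
xml/RequestBW.xml file and the resulting Document is wrapped in a DOMSource 
for the SOAP part. 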
 
 
After the SOAP message has been constructed, it is sent to the Java 
servlet designated to process this particular type of request. The servlet 
in this instance is running on the local machine; however, it could be 
listening on any networked (including internet) computer that is running the 
JWSDP (Java Web Services Developer Pack) and has had this thesis’ XML 
processing package installed into it. As can be seen from the second line 
below, this class expects a response from the servlet which is itself 
another XML/SOAP document. 
 
 
URL endpoint = new URL("http://localhost:8080/Net/RequestBW"); 
SOAPMessage response = con.call(message, endpoint); 
con.close(); 
 
 
The Java servlet that receives the SOAP message examines the message 
line by line; hence the order of the tags in the XML document must not be 
changed. As with the Java console client and the web-based client, the 
servlet needs to form a string containing the request and its parameters to 
send to the BB server for processing. Firstly, the servlet creates a new 
SOAP message for the reply and retrieves the body from the incoming 
message msg. 
 
 
SOAPBody responseBody = msg.getSOAPPart().getEnvelope().getBody(); 
Iterator it1 = responseBody.getChildElements(); 
Iterator it2 = null; 
 
 
The servlet then cycles through the elements in the incoming message in 
sequential order and forms the BB server compatible message by appending 
to a string which will be sent to the BB server. 
 
 
while (it1.hasNext()) { 
 Object obj = it1.next(); 
 try { 
  SOAPBodyElement bodyElem = (SOAPBodyElement) obj; 
  it2 = bodyElem.getChildElements(); 
 } catch (Exception e) {} 
  
 while (it2.hasNext()) { 
  javax.xml.soap.Text element = (javax.xml.soap.Text) it2.next(); 
  if (counter == slaNumberPosition) { 
   slaNumber = element.getValue(); 
   input = input + ";" + slaNumber; 
  } 
  else if (counter == passwordPosition) { 
   passWord = element.getValue(); 
  } 
  else { 
   input = input + ";" + element.getValue(); 
  } 
  counter++; 
 } 
} 
 
 
After extracting all the required parameters from the SOAP message, the 
servlet calls the browserClient method, which contacts the BB server that is 
normally located on the same computer as the servlet. The BB server then 
processes the request as normal and returns its response. 
 
 
display = client.browserClient(slaNumber,passWord,input); 
 
 
Upon receiving the BB server’s reply, the servlet adds the reply to the SOAP 
message and sends it back to the client that made the request. This added 
XML/SOAP functionality allows for even greater interaction and compatibility 
with the BB server. It even opens the door to connecting a peer bandwidth 
broker whose implementation differs from this one. As XML is a plain-text 
protocol that represents data values but not how data is stored, another 
bandwidth broker with a different architecture (for example, one using a 
directory-based data repository instead of a MySQL database) can still 
inter-connect with this implementation as long as both sides agree on the 
XML schema to be used. The possibilities for inter-connectivity are 
extensive, and the different options available for development stem from 
this support for XML-based interaction. 
 
 
6.  Testing Procedure 
 
 
As the implementation of this thesis was divided into numerous sub-systems, 
testing was carried out incrementally as each sub-system was completed and 
integrated. The testing of the Bandwidth Broker involved ensuring that all its 
functionality was executed with the expected results. Strategically placed 
debugging statements were inserted into the Java code to determine what the 
BB server was doing at each stage in its execution.  
 
 
In order to test both intra-domain and inter-domain operation, a hypothetical 
network was created as the resources available did not allow for numerous 
computers across multiple different domains. The following diagram shows 
the hypothetical network used to test this thesis implementation. 
 
 
 
 
  
Figure 6.1 – The Testing Network 
 
 
This thesis implementation was conducted on two computers in the 
129.94.232.x domain. In order to mimic inter-domain behaviour, a number of 
hypothetical domains were created. As defined, there is a single BB for each 
DiffServ domain. A single computer (129.94.232.27) was used to host these 
hypothetical domains, with the logical BB’s distinguished by their port 
number rather than their address, as all the hypothetical domains had an 
associated BB with a localhost address (127.0.0.1). Host 129.94.232.27 
itself ran the hypothetical 129.94.231.x domain, while the second computer 
(129.94.232.110) was left to run solely on the 129.94.232.x domain. This 
mode of operation is possible because TCP is an address/port protocol and 
because the hypothetical domains are stored in the MySQL database. There is 
no problem adopting this implementation in a real multiple-domain network, 
as the operating principle is the same; as long as the required data is 
entered into the database, operation should proceed as predicted. 
 
 
Much of the BB server’s operation involves checking values supplied by the 
client against values stored in the MySQL database. A few sample SLA’s and 
some domain-specific data were inserted into the MySQL database (see 
Appendix B), and requests were made against this data to determine whether 
the output matched manual predictions. The effect a new request would have 
on the system was calculated by hand before the request was entered. The 
request was then submitted to the BB server, and the result and the altered 
data states were compared with the expected values obtained through manual 
calculation. This type of testing is possible because the mathematical 
operations undertaken by the BB server are simple arithmetic and comparison 
tests which determine whether a resource request succeeds. Below are a few 
important sample test scenarios : 
 
 
• If a user is given a total bandwidth allocation of x kbps and has 
y kbps remaining, a further request for z kbps (where z > y) 
will result in the BB server rejecting the new resource request.  

• If the SLA for a particular user is valid from time1/date1 until 
time2/date2, a new request will fail if –  
1. the request starts before time1/date1 
2. the request starts or ends after time2/date2 
 
• If a domain through which the traffic propagates has a total 
capacity of B kbps, and during the time interval for which the 
new request seeks usage there is an available capacity of A kbps, 
then a request for more than A kbps will be rejected.   
 
• A request will be rejected if the user does not have permission 
to make requests from a certain source domain.  
 
• An existing RAR will be automatically deleted once the current 
time/date passes the end time/end date for the RAR. 
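The bandwidth and validity checks in the scenarios above are plain 
comparisons, which is what makes manual prediction feasible. A hedged sketch 
of the two core tests (class and method names are illustrative, not those of 
the BB server code): 

```java
public class AdmissionCheck {

    // SLA bandwidth test: a request for z kbps fails if z exceeds the
    // y kbps still available under the user's SLA.
    static boolean bandwidthOk(int availableKbps, int requestedKbps) {
        return requestedKbps <= availableKbps;
    }

    // SLA validity window: the request interval must lie inside the
    // SLA interval (times compared as epoch millis for simplicity).
    static boolean windowOk(long slaStart, long slaEnd,
                            long reqStart, long reqEnd) {
        return reqStart >= slaStart && reqEnd <= slaEnd
            && reqStart <= reqEnd;
    }

    public static void main(String[] args) {
        System.out.println(bandwidthOk(9000, 1000));   // true
        System.out.println(bandwidthOk(500, 1000));    // false: z > y
        System.out.println(windowOk(0, 100, 10, 90));  // true
        System.out.println(windowOk(0, 100, 10, 110)); // false: ends late
    }
}
```

The domain-capacity scenario is the same comparison applied per transit 
domain over the requested time interval. 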
 
 
Inter-domain operation had an additional network-related condition which 
needed to be satisfied for successful operation. 


• A neighbouring peer BB must be online for each domain on the 
path from the source domain to the destination domain in order 
to create a logical path from source to destination. An inter-domain 
request is rejected if a logical path cannot be found from the 
source domain to the destination domain. 
 
 
Extensive testing of each of the client interfaces was also undertaken to 
ensure that all input options were accurately conveyed to the BB server. It 
was critical that the clients transmitted a message to the BB server which 
fitted with the message patterns that the BB server could recognise. Testing 
was carried out to ensure that the client would display the predicted result 
from the BB server depending on the input values given to the client initially.  
 
 
Numerous deliberate and accidental error inputs were tested to ensure that 
the BB server was robust enough to handle badly entered data. As a result 
of these extensive tests, many error-catching handlers were introduced into 
the BB implementation to ensure that smooth operation is maintained 
regardless of the data a user attempts to input into the client interfaces. 
This smooth behaviour is helped by the fact that the BB server is 
multi-threaded, so if one client session is corrupted, it will not affect 
the entire BB server. In the unlikely situation that a session becomes 
corrupt, the client can simply reconnect to the BB and start afresh. 
 
 
 
 
 
 
 
 
7.  Future Development  
 
 
Although this thesis attempts to provide complete functionality matching 
the QBone requirements for a bandwidth broker in a DiffServ domain, there 
are a number of possibilities for future development. 
 
 
1. Encoded Messages 
 
 
As discussed in the implementation notes for the BB server, the 
messages exchanged between the BB server, peer BBs and clients 
are plain-text messages (converted to byte streams for 
transmission by Java). Standard encoded messages could be developed with 
uniform headers, possibly matching the message formats specified by the 
QBone Signaling Design Team for SIBBS. This also applies to the 
inter-domain communication messages. 
 
 
2. Incorporation of a Route Discovery Mechanism 
 
 
An advanced route discovery mechanism could be incorporated to improve how 
routes are found for inter-domain requests. At present, this 
implementation utilises a simple mechanism to find the path to a destination 
based on the network topology information stored in the MySQL 
database. This is a static mechanism which does not cater for 
network failure and re-routing. A dynamic routing protocol such as Open 
Shortest Path First (OSPF) [26] could be added to this implementation to 
provide routing information. 
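The kind of static path lookup used at present can be sketched as a breadth-first search over a domain adjacency map. This is an illustration under our own naming (in the actual implementation the topology rows come from the MySQL database rather than an in-memory map):

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.LinkedList;
import java.util.List;
import java.util.Map;

// Hypothetical sketch of static inter-domain path lookup: a breadth-first
// search over an adjacency map of domains.
public class DomainPath {
    // Returns the hop-by-hop domain path from src to dst, or an empty
    // list if dst is unreachable in the stored topology.
    public static List<String> findPath(Map<String, List<String>> topo,
                                        String src, String dst) {
        Map<String, String> prev = new HashMap<>();
        Deque<String> queue = new ArrayDeque<>();
        queue.add(src);
        prev.put(src, null);
        while (!queue.isEmpty()) {
            String cur = queue.poll();
            if (cur.equals(dst)) break;
            for (String next : topo.getOrDefault(cur, List.of()))
                if (!prev.containsKey(next)) {
                    prev.put(next, cur);
                    queue.add(next);
                }
        }
        if (!prev.containsKey(dst)) return List.of();  // unreachable
        LinkedList<String> path = new LinkedList<>();
        for (String hop = dst; hop != null; hop = prev.get(hop))
            path.addFirst(hop);
        return path;
    }
}
```

The limitation noted above is visible here: the map is fixed, so a failed link keeps being returned until an administrator edits the database, which is what a dynamic protocol such as OSPF would avoid.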
 
 
3. Provide Front End Access to MySQL Database 
 
 
At present, network administrators need to add entries to 
the MySQL database manually through the MySQL client console. A front-end client 
could be added to make this process easier and hence improve efficiency. 
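Such a front end might validate an administrator's input and hand it to JDBC as a parameterised statement. The table and column names below are illustrative, not the schema used in this implementation:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

// Hypothetical front-end helper for adding topology entries; the schema
// (table and column names) is an assumption for illustration only.
public class TopologyFrontEnd {
    static final String INSERT_LINK =
        "INSERT INTO topology (src_domain, dst_domain, capacity_kbps) VALUES (?, ?, ?)";

    // Basic validation so badly entered data is rejected before it
    // reaches MySQL, matching the error-catching approach of the BB.
    public static boolean validEntry(String src, String dst, int capacityKbps) {
        return src != null && !src.isEmpty()
            && dst != null && !dst.isEmpty()
            && !src.equals(dst)
            && capacityKbps > 0;
    }

    public static void addLink(Connection conn, String src, String dst, int kbps)
            throws SQLException {
        if (!validEntry(src, dst, kbps))
            throw new IllegalArgumentException("rejected: bad topology entry");
        try (PreparedStatement ps = conn.prepareStatement(INSERT_LINK)) {
            ps.setString(1, src);
            ps.setString(2, dst);
            ps.setInt(3, kbps);
            ps.executeUpdate();
        }
    }
}
```

Using a PreparedStatement rather than concatenated SQL also protects the database against malformed or malicious input from the console.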
 
 
4. Automated Propagation of Changes to MySQL Database 
 
 
For ease of development and testing, a centralised network database was 
used for this implementation, as stated earlier. In order to match the BB 
architecture, each BB should have a local data repository with changes 
propagated to the relevant peer BBs. This requirement can be met with the 
current implementation; however, propagation of changes requires manual 
intervention. Automated propagation of changes, once implemented, would no 
doubt improve the efficiency of the BB servers. 
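One small piece of such a mechanism is deciding which peer BBs need to hear about a change. A possible sketch, under our own naming, is to map each domain to its BB and notify the BBs of the domains a changed link touches:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Hypothetical sketch of change propagation: given a mapping from domain
// to its bandwidth broker, select the peers affected by a link change.
public class ChangePropagator {
    // Returns the peer BBs that should receive a change to link src->dst.
    public static List<String> peersToNotify(Map<String, String> peerByDomain,
                                             String src, String dst) {
        List<String> targets = new ArrayList<>();
        for (Map.Entry<String, String> e : peerByDomain.entrySet())
            if (e.getKey().equals(src) || e.getKey().equals(dst))
                targets.add(e.getValue());
        return targets;
    }
}
```

An automated propagator would then push the update to each selected peer over the inter-domain channel, removing the manual step described above.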
 
 
5. Development of Tunnel Mode Operation 
 
 
The QBone Signaling Design Team includes operation for inter-domain 
requests through core tunnels. A core tunnel is a pipe between a source and a 
destination domain where the destination of the flow is not a specific host but 
a domain. It is a method for aggregating reservations when there are many 
known requests from one particular domain to another. Setting up a tunnel 
requires negotiation between adjacent peer bandwidth brokers. However, 
using the tunnel does not involve any participation by the intermediate 
domains' bandwidth brokers. Once a core tunnel has been created, a request 
destined for a specific host in the tunnel's destination domain can be combined 
with the tunnel, provided the resources allocated to the tunnel can support the new 
request. The advantage of using tunnels is that they reduce the network traffic 
involved in setting up each individual inter-domain request. 
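The aggregation test described above can be sketched as follows. The class and field names are ours, for illustration: a request is folded into an existing tunnel only when the destination domains match and the tunnel's spare capacity covers it.

```java
// Hypothetical sketch of folding a request into a pre-existing core
// tunnel; names and the capacity model are our own illustration.
public class CoreTunnel {
    final String dstDomain;     // tunnel terminates at a domain, not a host
    final int capacityKbps;     // resources negotiated for the tunnel
    int allocatedKbps = 0;      // resources already consumed by requests

    CoreTunnel(String dstDomain, int capacityKbps) {
        this.dstDomain = dstDomain;
        this.capacityKbps = capacityKbps;
    }

    // True if the request was absorbed by the tunnel; false means the BB
    // must fall back to per-request inter-domain signalling.
    public boolean tryAggregate(String requestDstDomain, int requestKbps) {
        if (!dstDomain.equals(requestDstDomain)) return false;
        if (allocatedKbps + requestKbps > capacityKbps) return false;
        allocatedKbps += requestKbps;
        return true;
    }
}
```

Every request absorbed this way avoids a fresh round of negotiation through the intermediate domains, which is the traffic saving tunnels are intended to provide.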
 
 
6. Increased Security 
 
 
In this day and age, online security is of the utmost importance. Unauthorised 
access to the BB server could result in resources being allocated to 
unauthorised users while authorised users are denied access to bandwidth that 
they may have paid to use. The current BB server only employs basic 
authentication through a username and password. This 
minimal security is sufficient for development purposes; however, if a BB 
server is to be deployed across public networks or commercially, stronger 
security needs to be enforced. Encryption and key-based authentication are 
possible options to be considered. 
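One modest first step, sketched below under our own naming, is to store only a salted digest of each password and compare digests in constant time, so that the password itself never sits in the database. A full solution would go further (encrypted channels, key-based authentication), but this illustrates the direction.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

// Hypothetical sketch of salted password hashing for the BB's login step.
public class PasswordCheck {
    public static byte[] digest(String salt, String password) {
        try {
            MessageDigest md = MessageDigest.getInstance("SHA-256");
            return md.digest((salt + password).getBytes(StandardCharsets.UTF_8));
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e); // SHA-256 is always available
        }
    }

    public static boolean verify(String salt, String attempt, byte[] stored) {
        // MessageDigest.isEqual performs a constant-time comparison,
        // avoiding timing leaks during authentication.
        return MessageDigest.isEqual(digest(salt, attempt), stored);
    }
}
```

The salt ensures that two users with the same password produce different stored digests, so a leaked database does not immediately reveal shared credentials.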
 
 
 
 
8.  Conclusion 
 
 
This thesis project has presented a functional bandwidth broker that satisfies 
the core requirements of a DiffServ bandwidth broker. The bandwidth broker 
created can be used to allocate network resources in DiffServ domains for use 
by quality-of-service-sensitive traffic. Coupled with its numerous 
interfaces for accessing the BB, this implementation makes it easy to request, 
view and change resource allocations. The advantages of the platform-
independent Java language, in conjunction with a web-based interface and 
XML compatibility, will allow this implementation to be used in 
many situations. 
 
 
There are a few possible additions to this implementation which 
would further improve its capabilities. However, the work completed 
has laid the foundation for a comprehensive bandwidth broker 
which can be used to successfully allocate and maintain resources within a 
DiffServ environment. 
 
 
 
 
 
 
 
 
 
 
 
 
9.  References 
 
 
[1] R Nielson, J Wheeler, F Reichmeyer, S Hares. A Discussion Of Bandwidth 
Broker Requirements for Internet2 QBone Deployment. August 1999. 
(http://www.merit.edu/working.groups/i2-qbone-bb/requirements.html) 
 
[2] S Jha, M Hassan. Engineering Internet QoS. 2002 Artech House 
 
[3] S Blake, D Black, M Carlson, E Davies, Z Wang, W Weiss. An Architecture 
for Differentiated Services. Request For Comments 2475, Network Working 
Group, December 1998. 
 
[4] K Nichols, S Blake, F Baker, D Black. Definition Of The Differentiated 
Services Field (DS Field) in the IPv4 and IPv6 Headers. Request For 
Comments 2474, Network Working Group, December 1998. 
 
[5] S Sohail and S Jha. The Survey Of Bandwidth Broker. Technical Report 
UNSW CSE TR 0206, School of Computer Science and Engineering, 
University of New South Wales. May 2002. 
 
[6] P Chimento et al. QBone Signaling Design Team – Final Report. 
1/10/2002. (http://www.internet2.edu/qos/wg/documents-
informational/20020709-chimento-etal-qbone-signaling/) 
 
[7] V Jacobson, K Nichols, K Poduri. An Expedited Forwarding PHB. Request 
For Comments 2598, Network Working Group, June 1999. 
 
[8] J Heinanen, F Baker, W Weiss, J Wroclawski. Assured Forwarding PHB 
Group. Request For Comments 2597, Network Working Group , June 1999. 
 
[9] B Teitelbaum et al. QBone Architecture (v1.0), Internet2 QoS Working 
Group Draft, August 1999. (http://qbone.internet2.edu/arch/) 
 
[10] K Nichols, V Jacobson, L Zhang. A Two-Bit Differentiated Services 
Architecture For The Internet. Request For Comments 2638, Network 
Working Group, July 1999. 
 
[11] B Teitelbaum et al. QBone Bandwidth Broker Architecture, Work In 
Progress June 2000. (http://qbone.internet2.edu/bb/bboutline2.html) 
 
[12] Jean Anderson. SQL FAQ : SQL Standard June 1994. 
(http://epoch.cs.berkeley.edu:8000/sequoia/dba/montage/FAQ/SQL.html) 
 
[13] D Durham, J Boyle, R Cohen, S Herzog, R Rajan, A Sastry. The COPS 
(Common Open Policy Service) Protocol. Request For Comments 2748, 
Network Working Group, January 2000. 
 
[14] K Chan, J Seligson, D Durham, S Gaj, K McCloghrie, S Herzog, F 
Reichmeyer, R Yavatkar, A Smith. COPS Usage for Policy Provisioning 
(COPS-PR). Request For Comments 3084, Network Working Group, March 
2001. 
 
[15] Differentiated Services On Linux May 2001. 
(http://diffserv.sourceforge.net/) 
 
[16] B Hubert et al. Linux Advanced Routing & Traffic HOWTO. 
(http://lartc.org/howto/index.html)  
 
[17] W Almesberger, J H Salim, A Kuznetsov. Differentiated Services On 
Linux. Internet Draft, February 1999.  
 
[18] W Almesberger. Linux Traffic Control – Implementation Overview. 
November 1998. 
 
[19] Java 2 Platform Standard Edition, v1.4.1 API Specification. Sun 
Microsystems 2002  (http://java.sun.com/j2se/1.4.1/docs/api/) 
 
[20] Java Web Services Developer Pack v1.1 Sun Microsystems 2003 
(http://java.sun.com/webservices/webservicespack.html).  
 
[21] Java(TM) Web Services Developer Pack (Version 1.1) Combined API 
Specification. (http://java.sun.com/webservices/docs/1.1/api/index.html) 
 
[22] T Bray, J Paoli, C M Sperberg-McQueen, E Maler. Extensible Markup 
Language (XML) 1.0 (Second Edition). W3C Recommendation 6 October 
2000. (http://www.w3.org/TR/REC-xml) 
 
[23] MySQL Reference Manual 
(http://www.mysql.com/documentation/mysql/bychapter/index.html) 
 
[24] D Rao, H Sitaraman. Bandwidth Broker Implementation. University of 
Kansas Information and Telecommunication Technology Center (ITTC), 1999. 
(http://www.ittc.ku.edu/~kdrao/BB/) 
 
[25] D Box, D Ehnebuske, G Kakivaya, A Layman, N Mendelsohn, H F 
Nielsen, S Thatte, D Winer. Simple Object Access Protocol (SOAP) 1.1. W3C 
Note, 8 May 2000. (http://www.w3.org/TR/SOAP/) 
 
[26] J Moy. OSPF Version 2. Request For Comments 1247, Network Working 
Group, July 1991. 
 
[27] H Halim, M Darmadi. Implementation of bandwidth broker using COPS-
PR. Honours thesis report, School of Computer Science and Engineering, 
UNSW, November 2000. 
 
 
 
Appendix A – Source Code 
 
 
 
 
 
NOTE: Source files for the COPS package are NOT included.