How to Implement ODMRP Protocol in NS2

ODMRP (On-Demand Multicast Routing Protocol) is a multicast routing protocol designed for wireless ad hoc networks. It discovers routes and maintains multicast group membership on demand, using a mesh-based approach instead of a tree-based one. Implementing ODMRP in NS2 involves creating the necessary agent classes, managing the route discovery process, and handling group memberships. Given below is a step-by-step procedure for implementing the ODMRP protocol in the NS2 simulation tool:

Step-by-Step Implementation:

Step 1: Understand the ODMRP Protocol

ODMRP works as follows:

  1. Membership Setup: When a source has data to send, it floods a Join Query packet through the network (a possible layout for these control packets is sketched after this list).
  2. Join Table Construction: Nodes that receive the Join Query update their membership information and rebroadcast the packet.
  3. Forwarding Group Setup: Nodes that lie on a route from the source to a receiver of the multicast group become part of the forwarding group.
  4. Data Delivery: Data packets are delivered through the established mesh of forwarding group nodes.
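The control packets carry information such as the source address, a sequence number, the multicast group address, and a hop count. The skeleton in Step 3 simply reuses NS2's hdr_ip and hdr_cmn headers, but a fuller implementation would normally define its own header. The sketch below is purely illustrative; the names hdr_odmrp, src_, seq_, mcast_group_, and hop_count_ are assumptions rather than part of NS2:

// Hypothetical ODMRP control header (illustrative only)
struct hdr_odmrp {
    int src_;          // multicast source that issued the Join Query
    int seq_;          // sequence number, used to suppress duplicate floods
    int mcast_group_;  // multicast group the query/reply refers to
    int hop_count_;    // hops travelled so far

    // NS2 header-access boilerplate, needed if this header is
    // registered with the PacketHeaderManager
    static int offset_;
    inline static hdr_odmrp* access(const Packet* p) {
        return (hdr_odmrp*) p->access(offset_);
    }
};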

Step 2: Set Up NS2

Make sure that NS2 is installed and running correctly on your computer. Because NS2 does not natively support ODMRP, we will need to implement it from scratch or adapt an existing multicast protocol.

Step 3: Implement the ODMRP Protocol in C++

  1. Create the ODMRP Agent Class

Given below is a simple skeleton for the ODMRP agent in NS2:

#include <agent.h>
#include <packet.h>
#include <ip.h>
#include <trace.h>
#include <address.h>

#include <map>
#include <vector>
#include <set>

class ODMRPAgent : public Agent {
public:
    ODMRPAgent();
    void recv(Packet* p, Handler* h);
    void sendJoinQuery();
    void handleJoinQuery(Packet* p);
    void sendJoinReply();
    void handleJoinReply(Packet* p);
    void forwardData(Packet* p);

protected:
    std::set<int> groupMembers_;                     // Multicast group members
    std::map<int, std::vector<int> > routingTable_;  // Multicast group -> forwarding nodes
    std::map<int, int> sequenceNumbers_;             // Last sequence number seen per source (loop prevention)
};

// Constructor (PT_UDP is used as the default packet type of the agent)
ODMRPAgent::ODMRPAgent() : Agent(PT_UDP) {
    // Initialization code here
}

// Packet reception: dispatch on packet type
void ODMRPAgent::recv(Packet* p, Handler* h) {
    hdr_cmn* cmnh = hdr_cmn::access(p);

    if (cmnh->ptype() == PT_ODMRP_JOINQUERY) {
        handleJoinQuery(p);
    } else if (cmnh->ptype() == PT_ODMRP_JOINREPLY) {
        handleJoinReply(p);
    } else {
        forwardData(p);
    }
}

// Flood a Join Query packet
void ODMRPAgent::sendJoinQuery() {
    Packet* p = allocpkt();
    hdr_cmn* cmnh = hdr_cmn::access(p);
    cmnh->ptype() = PT_ODMRP_JOINQUERY;
    cmnh->size() = sizeof(hdr_cmn) + sizeof(hdr_ip);

    // Set the source address and broadcast the query
    hdr_ip* iph = hdr_ip::access(p);
    iph->saddr() = addr();
    iph->daddr() = IP_BROADCAST;

    send(p, 0);
}

// Handle a received Join Query
void ODMRPAgent::handleJoinQuery(Packet* p) {
    hdr_ip* iph = hdr_ip::access(p);
    int src = iph->saddr();
    int seqNum = iph->ttl();   // Simplification: the TTL field stands in for a sequence number

    // Rebroadcast only if this Join Query has not been seen before
    if (sequenceNumbers_[src] < seqNum) {
        sequenceNumbers_[src] = seqNum;
        sendJoinQuery();
    }

    // In full ODMRP only multicast receivers answer; simplified here
    sendJoinReply();
    Packet::free(p);
}

// Send a Join Reply (Join Table) packet
void ODMRPAgent::sendJoinReply() {
    Packet* p = allocpkt();
    hdr_cmn* cmnh = hdr_cmn::access(p);
    cmnh->ptype() = PT_ODMRP_JOINREPLY;
    cmnh->size() = sizeof(hdr_cmn) + sizeof(hdr_ip);

    // Join Replies are broadcast; upstream neighbours on the reverse
    // path recognise themselves and join the forwarding group
    hdr_ip* iph = hdr_ip::access(p);
    iph->saddr() = addr();
    iph->daddr() = IP_BROADCAST;

    send(p, 0);
}

// Handle a received Join Reply
void ODMRPAgent::handleJoinReply(Packet* p) {
    hdr_ip* iph = hdr_ip::access(p);
    int src = iph->saddr();

    // Record this node as part of the forwarding group for that source
    routingTable_[src].push_back(addr());

    // Propagate the reply along the mesh toward the source
    forwardData(p);
}

// Forward data packets using the established mesh
void ODMRPAgent::forwardData(Packet* p) {
    hdr_ip* iph = hdr_ip::access(p);
    int dest = iph->daddr();

    if (routingTable_.find(dest) != routingTable_.end()) {
        std::vector<int>& hops = routingTable_[dest];
        for (std::vector<int>::iterator it = hops.begin(); it != hops.end(); ++it) {
            Packet* copy = p->copy();
            hdr_ip::access(copy)->daddr() = *it;   // address the copy to the next forwarding node
            send(copy, 0);
        }
        Packet::free(p);
    } else {
        // Drop the packet if no route is known
        drop(p);
    }
}
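For the Tcl script in Step 4 to create the agent with new Agent/ODMRPAgent and invoke $odmrp_(0) sendJoinQuery, the class also needs the usual OTcl linkage and a command() handler. Below is a minimal sketch of that glue; the method must also be declared in the class as int command(int argc, const char* const* argv), and this is illustrative rather than a complete integration:

#include <cstring>   // for strcmp

// OTcl linkage: makes "new Agent/ODMRPAgent" available from Tcl
static class ODMRPAgentClass : public TclClass {
public:
    ODMRPAgentClass() : TclClass("Agent/ODMRPAgent") {}
    TclObject* create(int, const char* const*) {
        return (new ODMRPAgent());
    }
} class_odmrp_agent;

// Tcl command dispatcher: lets a script call "$agent sendJoinQuery"
int ODMRPAgent::command(int argc, const char* const* argv) {
    if (argc == 2 && strcmp(argv[1], "sendJoinQuery") == 0) {
        sendJoinQuery();
        return TCL_OK;
    }
    // Anything else is passed to the base Agent command handling
    return Agent::command(argc, argv);
}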

  2. Define Packet Types and Structures

The ODMRP protocol requires custom packet types for the Join Queries and Join Replies:

// In NS2's common/packet.h, add the new types to the packet_t definition,
// keeping PT_NTYPE as the last entry (in older releases packet_t is an enum;
// newer releases use a list of constants):
PT_ODMRP_JOINQUERY,
PT_ODMRP_JOINREPLY,
PT_NTYPE // This MUST be the LAST one

// In the same file, give the new types readable names for the trace output,
// e.g. name_[PT_ODMRP_JOINQUERY] = "ODMRP_JoinQuery"; and
// name_[PT_ODMRP_JOINREPLY] = "ODMRP_JoinReply";

We will also need to update NS2's Tcl support files (typically tcl/lib/ns-packet.tcl, which lists the known packet header names, and ns-default.tcl if new default attributes are introduced) so that the simulator accepts these packet types.

  3. Integrate the Protocol into NS2
  1. Modify the Makefile: Add the new ODMRPAgent object file to the NS2 Makefile (for example, append something like odmrp.o to the OBJ_CC list in Makefile.in) so that it is compiled with the rest of the simulator.
  2. Recompile NS2:

make clean

make

Step 4: Create a Tcl Script to Simulate the ODMRP Protocol

Once the ODMRP protocol is implemented and compiled, create a Tcl script to simulate a network that uses ODMRP.

Example Tcl Script:

# Create a simulator object
set ns [new Simulator]

# Simulation parameters
set val(chan)   Channel/WirelessChannel
set val(prop)   Propagation/TwoRayGround
set val(netif)  Phy/WirelessPhy
set val(mac)    Mac/802_11
set val(ifq)    Queue/DropTail/PriQueue
set val(ant)    Antenna/OmniAntenna
set val(ll)     LL
set val(ifqlen) 50
set val(nn)     10
set val(stop)   100.0

# Open the trace file (used for the analysis in Step 6)
set tracefd [open odmrp_out.tr w]
$ns trace-all $tracefd

# Initialize the topology object
set topo [new Topography]
$topo load_flatgrid 1000 1000

# Create the God object
create-god $val(nn)

# Configure the nodes
# Note: "-adhocRouting ODMRP" assumes ODMRP has been registered in
# tcl/lib/ns-lib.tcl; otherwise use an existing protocol such as DSDV here.
$ns node-config -adhocRouting ODMRP \
        -llType $val(ll) \
        -macType $val(mac) \
        -ifqType $val(ifq) \
        -ifqLen $val(ifqlen) \
        -antType $val(ant) \
        -propType $val(prop) \
        -phyType $val(netif) \
        -channelType $val(chan) \
        -topoInstance $topo \
        -agentTrace ON \
        -routerTrace ON \
        -macTrace ON \
        -movementTrace ON

# Create nodes
for {set i 0} {$i < $val(nn)} {incr i} {
    set node_($i) [$ns node]
}

# Define node positions
$node_(0) set X_ 50.0
$node_(0) set Y_ 50.0
$node_(0) set Z_ 0.0

$node_(1) set X_ 200.0
$node_(1) set Y_ 50.0
$node_(1) set Z_ 0.0

# and so on for all nodes…

# Attach ODMRP agents to nodes
for {set i 0} {$i < $val(nn)} {incr i} {
    set odmrp_($i) [new Agent/ODMRPAgent]
    $ns attach-agent $node_($i) $odmrp_($i)
}

# Have node 0 flood a Join Query (requires the command() handler from Step 3)
$ns at 1.0 "$odmrp_(0) sendJoinQuery"

# Simulation end
$ns at $val(stop) "stop"

proc stop {} {
    global ns tracefd
    $ns flush-trace
    close $tracefd
    exit 0
}

# Run the simulation
$ns run

Step 5: Run the Simulation

  1. Save the Tcl script, for example as odmrp_simulation.tcl.
  2. Open a terminal and navigate to the directory where the script is saved.
  3. Run the simulation with the following command:

ns odmrp_simulation.tcl

Step 6: Analyse the Results

  • Use the trace files and the network animator (NAM) to examine the performance of the ODMRP protocol, concentrating on metrics such as packet delivery ratio, latency, and the effectiveness of multicast group management (a small trace-analysis sketch follows this list).
  • Assess how well the protocol discovers and maintains multicast routes.
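As a rough starting point for the packet delivery ratio, the sketch below counts agent-level sends and receives in an old-format wireless trace. It assumes the trace file odmrp_out.tr produced by the Step 4 script, that CBR data traffic has been added on top of the multicast mesh, and that the old wireless trace format is in use (field positions differ in the new format), so treat it as a template rather than a finished analysis tool:

# pdr.tcl -- rough packet delivery ratio from an old-format wireless trace
set sent 0
set recv 0
set tracefile [open odmrp_out.tr r]
while {[gets $tracefile line] >= 0} {
    # Tokenize on whitespace; old-format fields: event time node layer --- id type size ...
    set f [regexp -all -inline {\S+} $line]
    set event [lindex $f 0]
    set layer [lindex $f 3]
    set ptype [lindex $f 6]
    if {$layer eq "AGT" && $ptype eq "cbr"} {
        if {$event eq "s"} { incr sent }
        if {$event eq "r"} { incr recv }
    }
}
close $tracefile
if {$sent > 0} {
    puts "Sent: $sent  Received: $recv  PDR: [expr {double($recv) / $sent}]"
} else {
    puts "No matching data packets found in the trace"
}

Run it with tclsh pdr.tcl (or ns pdr.tcl) after the simulation finishes, and adjust the packet type and field indices to match your traffic and trace format.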

Additional Considerations

  • Scalability: Investigate ODMRP in larger networks with more multicast groups to evaluate its scalability.
  • Performance Optimization: Consider optimizations such as reducing control overhead or improving the efficiency of forwarding group maintenance.
  • Mobility Scenarios: Simulate node mobility to observe how well ODMRP adapts to dynamic network conditions (see the movement sketch after this list).
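For example, node movements can be scheduled with the standard setdest node method (arguments: destination x, destination y, speed). The times, coordinates, and speeds below are arbitrary illustrative values intended to be added to the Step 4 script; NS2's setdest utility (under indep-utils/cmu-scen-gen/setdest) can generate larger random scenarios:

# Illustrative node movements for the Step 4 script
$ns at 10.0 "$node_(1) setdest 400.0 300.0 5.0"
$ns at 20.0 "$node_(3) setdest 100.0 800.0 10.0"
$ns at 30.0 "$node_(7) setdest 900.0 200.0 15.0"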

This stepwise method implements the ODMRP protocol and executes and evaluates it in the NS2 simulation environment. Further details can be provided if required.

Let us handle your performance analysis! We deliver outstanding results for the ODMRP protocol implemented in NS2. Reach out to ns2project.com for more information, and explore our top-notch project ideas and topics with us. We specialize in implementing the ODMRP protocol using the NS2 simulation tool.