Configuring Channels

You can configure persistent settings for your channels, such as channel parameters, parallelism, and the default device type for backups. The configured settings are stored in the RMAN repository. If you configure channel settings, then you do not have to use ALLOCATE CHANNEL commands with every RMAN backup, restore, recovery or maintenance command. Configuring persistent channel settings greatly simplifies the use of RMAN.

You can always override the configured channels for a particular job by issuing ALLOCATE CHANNEL commands within a RUN block.

By default, RMAN has preconfigured a disk channel so that you can back up to disk without doing any manual configuration. You may, however, want to parallelize the channels for disk or tape devices to improve performance.


See Also:

"About RMAN Channels" for a conceptual overview of configured and allocated channels, and Oracle Database Backup and Recovery Reference for syntax

Configuring Channel Parallelism

Configuring parallelism for a device type specifies the number of server sessions to be used for I/O to that device type. By default, channel parallelism for each configured device is set to 1. As a rule, allocating one channel for each physical device is best. If you are backing up to only one disk location or only one tape drive, then you need only one channel.

The CONFIGURE DEVICE TYPE ... PARALLELISM integer command specifies how many channels (up to 254) RMAN should allocate for jobs on the specified device type. This command allocates three channels for jobs on device type DISK:

RMAN> CONFIGURE DEVICE TYPE DISK PARALLELISM 3;

These commands back up to a media manager using two tape drives in parallel:

RMAN> CONFIGURE DEFAULT DEVICE TYPE TO sbt; # default backup device is tape
RMAN> CONFIGURE DEVICE TYPE sbt PARALLELISM 2; # configure two tape channels
RMAN> BACKUP DATABASE; # backup goes to two tapes, in two parallel streams

Each configured sbt channel will back up roughly half the total data.

Configuring Channel Settings for a Device Type

By default, RMAN allocates a single DISK channel with default options and uses it for backup commands.


Note:

The disk channel that RMAN preconfigures for backups is not the same as the channel that RMAN allocates when it first connects to the target instance. RMAN generally does not use that initial connection channel for activities such as backups and restores, which require large amounts of I/O.

However, you may want to change the default DISK channel settings, for example, to specify a degree of parallelism or output locations for disk backups. Also, if you use a media manager, you must configure any required options for it, such as PARMS, FORMAT, MAXPIECESIZE, and so forth. By configuring channel settings, you define which parameters are used for the channels that RMAN allocates when you use configured channels for a backup job.

Use the CONFIGURE CHANNEL command to configure options for DISK and sbt channels. CONFIGURE CHANNEL takes the same options used to specify one-time options with ALLOCATE CHANNEL.

For example, you can configure default parameters for disk and tape channels as in this example:

RMAN> CONFIGURE CHANNEL DEVICE TYPE DISK FORMAT = '?/bkup_%U';
RMAN> CONFIGURE CHANNEL DEVICE TYPE sbt 
  PARMS='SBT_LIBRARY=/mediavendor/lib/libobk.so ENV=(NSR_SERVER=tape_svr,NSR_CLIENT=oracleclnt,NSR_GROUP=ora_tapes)';

You can configure generic channel settings for a device type, that is, a template that is used for any channel created from the configured settings for that device. If you set the PARALLELISM for a device and then make that device the default device type, then RMAN uses the generic configured channel settings for each parallelized channel.
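
For example, a minimal sketch of this pattern (the /backup output location is illustrative); both parallel disk channels inherit the generic FORMAT setting:

RMAN> CONFIGURE DEVICE TYPE DISK PARALLELISM 2;
RMAN> CONFIGURE DEFAULT DEVICE TYPE TO DISK;
RMAN> CONFIGURE CHANNEL DEVICE TYPE DISK FORMAT '/backup/%U';
RMAN> BACKUP DATABASE;   # both disk channels write backup pieces under /backup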

Note that if you use CONFIGURE CHANNEL to specify generic channel settings for a device, any previous settings are discarded, even if the settings are not in conflict. For example, after the second CONFIGURE CHANNEL command, which specifies only a FORMAT for configured disk channels, the MAXPIECESIZE for the disk channel is returned to its default value:

RMAN> CONFIGURE CHANNEL DEVICE TYPE DISK MAXPIECESIZE 2G;
RMAN> CONFIGURE CHANNEL DEVICE TYPE DISK FORMAT = '/tmp/%U';

You can also configure default settings for individual channels from a group of parallelized channels by specifying a channel number.
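
For instance, a minimal illustration (the /disk2 path is hypothetical); channel 2 gets its own output location, while the remaining parallelized channels fall back to the generic disk channel settings:

RMAN> CONFIGURE CHANNEL 2 DEVICE TYPE DISK FORMAT '/disk2/%U';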

Showing the Configured Channel Settings

The SHOW CHANNEL, SHOW DEVICE TYPE, and SHOW DEFAULT DEVICE TYPE commands are used to display the current configured channel settings.

Showing the Currently Configured Channel Settings

After connecting to the target database and recovery catalog (if you use one), issue the SHOW CHANNEL command to display the currently configured channel settings. For example, connect the RMAN client to the target and, if applicable, the recovery catalog. Then enter:

RMAN> SHOW CHANNEL;  

Sample output for SHOW CHANNEL follows:

RMAN configuration parameters are:
CONFIGURE CHANNEL DEVICE TYPE SBT RATE 1500K;

Showing the Configured Device Types

Issue the SHOW DEVICE TYPE command to display the configured devices and their PARALLELISM and backup type settings.

To show the default device type and currently configured settings for disk and sbt devices:

After connecting to the target database and recovery catalog (if you use one), run the SHOW DEVICE TYPE command. For example, enter:

SHOW DEVICE TYPE;    # shows the CONFIGURE DEVICE TYPE ... PARALLELISM settings

Sample output for SHOW DEVICE TYPE follows:

RMAN configuration parameters are:
CONFIGURE DEVICE TYPE DISK PARALLELISM 2 BACKUP TYPE TO COPY;
CONFIGURE DEVICE TYPE 'SBT_TAPE' PARALLELISM 2 BACKUP TYPE TO BACKUPSET;


Note:

As with all SHOW commands, the output of SHOW DEVICE TYPE is in the form of a valid RMAN CONFIGURE command. You can in fact enter one command, like those shown in the preceding sample output, to configure the backup type and parallelism simultaneously. Refer to the syntax diagrams for CONFIGURE in Oracle Database Backup and Recovery Reference for details on all of the possible ways of combining arguments to the CONFIGURE command.

Showing the Default Device Type

Issue the SHOW DEFAULT DEVICE TYPE command to display the settings for the default device type for backups. When you issue the BACKUP command, RMAN allocates only default channels of the type set by the CONFIGURE DEFAULT DEVICE TYPE command. This default device type setting is not in effect when you use commands other than BACKUP. Note that you cannot disable the default device type: it is always either DISK (the default setting) or sbt.
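
For example, assuming an sbt channel has already been configured for your media manager, you can switch the default backup device with commands such as:

RMAN> CONFIGURE DEFAULT DEVICE TYPE TO sbt;    # BACKUP now goes to tape by default
RMAN> CONFIGURE DEFAULT DEVICE TYPE TO DISK;   # revert to the DISK default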

To show the default device type for backups:

After connecting to the target database and recovery catalog (if you use one), run the SHOW DEFAULT DEVICE TYPE command. For example, enter:

SHOW DEFAULT DEVICE TYPE;    # shows the CONFIGURE DEFAULT DEVICE TYPE setting

Sample output for SHOW DEFAULT DEVICE TYPE follows:

RMAN configuration parameters are:
CONFIGURE DEFAULT DEVICE TYPE TO 'SBT';

Manually Overriding Configured Channels

If you manually allocate a channel during a job, then RMAN disregards any configured channel settings. For example, assume that the default device type is configured to sbt, and you execute this command:

RMAN> RUN 
{
  ALLOCATE CHANNEL c1 DEVICE TYPE DISK;
  BACKUP TABLESPACE users;
}

In this case, RMAN uses only the disk channel that you manually allocated within the RUN block, overriding any defaults set by the CONFIGURE DEVICE TYPE, CONFIGURE DEFAULT DEVICE TYPE, or CONFIGURE CHANNEL settings.


Configuring a Specific Channel for a Device Type

Besides configuring a generic channel for a device, you can also configure one or more specific channels for each device type by manually assigning your own channel numbers to the channels. Run the CONFIGURE CHANNEL n command (where n is a positive integer less than 255) to configure a specific channel. When manually numbering channels, you must specify one or more channel options (for example, MAXPIECESIZE or FORMAT) for each channel. When you use that specific numbered channel in a backup, the configured settings for that channel will be used instead of the configured generic channel settings.
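
For example, a minimal sketch (the channel number and MAXPIECESIZE value are arbitrary) that assigns channel-specific settings to channel 1 of device type sbt:

RMAN> CONFIGURE CHANNEL 1 DEVICE TYPE sbt MAXPIECESIZE 100G;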

Configure specific channels by number when it is necessary to control the parameters set for each channel separately. This could arise in the following situations:

  • When running an Oracle Real Application Clusters (Oracle RAC) configuration in which individual nodes do not have access to the full set of backups, so that different channels must be configured with node-specific connect strings to ensure that every backup is accessible from some node

  • When running Oracle RAC and using a media manager with multiple tape drives that require different PARMS settings

Configuring Specific Channels: Examples

In this example, you want to send disk backups to two different disks. Configure disk channels as follows:

CONFIGURE DEFAULT DEVICE TYPE TO disk;        # backup goes to disk
CONFIGURE DEVICE TYPE DISK PARALLELISM 2;     # two disk channels used in parallel
CONFIGURE CHANNEL 1 DEVICE TYPE DISK FORMAT '/disk1/%U';  # 1st channel to disk1
CONFIGURE CHANNEL 2 DEVICE TYPE DISK FORMAT '/disk2/%U';  # 2nd channel to disk2
BACKUP DATABASE; # backup - first channel goes to disk1 and second to disk2

In this example, assume that you have two tape drives and want each tape drive to use tapes from a different tape pool. Configure your default output device and default sbt channels as follows:

CONFIGURE DEFAULT DEVICE TYPE TO sbt;    # backup goes to sbt
CONFIGURE DEVICE TYPE sbt PARALLELISM 2; # two sbt channels will be allocated by default
# Assume media manager takes NSR_DATA_VOLUME_POOL to
# specify a pool
# Configure channel 1 to pool named first_pool
CONFIGURE CHANNEL 1 DEVICE TYPE sbt 
  PARMS 'SBT_LIBRARY=/mediavendor/lib/libobk.so ENV=(NSR_DATA_VOLUME_POOL=first_pool)'; 
# configure channel 2 to pool named second_pool
CONFIGURE CHANNEL 2 DEVICE TYPE sbt 
  PARMS 'SBT_LIBRARY=/mediavendor/lib/libobk.so ENV=(NSR_DATA_VOLUME_POOL=second_pool)'; 
BACKUP DATABASE; # first stream goes to 'first_pool' and second to 'second_pool'

Mixing Generic and Specific Channels

When parallelizing, RMAN always allocates channels beginning with CHANNEL 1 and ending with channel number equal to the PARALLELISM setting.

If you configure settings for a specific channel using CONFIGURE CHANNEL with a channel number, RMAN uses those specified configured settings. Otherwise, it uses the generic configuration for channels for that device type, as specified by the CONFIGURE CHANNEL command without a channel number.

Assume you enter the following channel configuration:

# disk channel configuration
CONFIGURE DEVICE TYPE DISK PARALLELISM 4;
CONFIGURE CHANNEL DEVICE TYPE DISK FORMAT = '/tmp/backup_%U';
CONFIGURE CHANNEL 2 DEVICE TYPE DISK MAXPIECESIZE = 20M; 
CONFIGURE CHANNEL 4 DEVICE TYPE DISK MAXPIECESIZE = 40M; 

# sbt channel configuration
CONFIGURE DEVICE TYPE sbt PARALLELISM 3;
CONFIGURE CHANNEL DEVICE TYPE sbt 
      PARMS='SBT_LIBRARY=oracle.disksbt, ENV=(BACKUP_DIR=?/oradata)';
CONFIGURE CHANNEL 3 DEVICE TYPE sbt 
      PARMS='SBT_LIBRARY=oracle.disksbt, ENV=(BACKUP_DIR=/tmp)';

The following table illustrates the channel names and channel settings that RMAN allocates when the default device is DISK and PARALLELISM for DISK is set to 4.

Channel Name    Setting
ORA_DISK_1      FORMAT = '/tmp/backup_%U'
ORA_DISK_2      MAXPIECESIZE = 20M
ORA_DISK_3      FORMAT = '/tmp/backup_%U'
ORA_DISK_4      MAXPIECESIZE = 40M

The following table illustrates the channel names and channel settings that RMAN allocates when the default device is sbt and PARALLELISM for sbt is set to 3.

Channel Name      Setting
ORA_SBT_TAPE_1    PARMS='ENV=(BACKUP_DIR=?/oradata)'
ORA_SBT_TAPE_2    PARMS='ENV=(BACKUP_DIR=?/oradata)'
ORA_SBT_TAPE_3    PARMS='ENV=(BACKUP_DIR=/tmp)'

Relationship Between CONFIGURE CHANNEL and Parallelism Setting

The PARALLELISM setting is not constrained by the number of specifically configured channels. For example, if you back up to 20 different tape devices, then you can configure 20 different sbt channels, each with a manually assigned number (from 1 to 20) and each with a different set of channel options. In such a situation, you can set PARALLELISM to any value up to the number of devices, in this instance 20.
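
For example, with 20 specifically configured sbt channels you might use all of them, or only a subset, by adjusting the PARALLELISM setting (the values here are illustrative):

RMAN> CONFIGURE DEVICE TYPE sbt PARALLELISM 20;   # use all 20 configured channels
RMAN> CONFIGURE DEVICE TYPE sbt PARALLELISM 4;    # use only channels 1 through 4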

RMAN always numbers parallel channels starting with 1 and ending with the PARALLELISM setting. For example, if the default device is sbt and PARALLELISM for sbt is set to 3, then RMAN names the channels as follows:

ORA_SBT_TAPE_1
ORA_SBT_TAPE_2
ORA_SBT_TAPE_3

RMAN always uses the name ORA_SBT_TAPE_n even if you configure DEVICE TYPE sbt (not the synonymous sbt_tape). RMAN always allocates the number of channels specified in PARALLELISM, using specifically configured channels if you have configured them and generic channels if you have not.


See Also:

"Automatic Channel-Specific Configurations" for concepts about manually numbered channels, and "Configuring Specific Channels: Examples"

Clearing Channel and Device Settings

To clear a configuration is to return it to its default settings. You can clear channel and device settings by using these commands:

  • CONFIGURE DEVICE TYPE ... CLEAR

  • CONFIGURE DEFAULT DEVICE TYPE CLEAR

  • CONFIGURE CHANNEL DEVICE TYPE ... CLEAR

  • CONFIGURE CHANNEL n DEVICE TYPE ... CLEAR (where n is an integer)

Each CONFIGURE ... CLEAR command clears only itself. For example, CONFIGURE DEVICE TYPE ... CLEAR does not clear CONFIGURE DEFAULT DEVICE TYPE. The CONFIGURE DEVICE TYPE ... CLEAR command removes the configuration for the specified device type and returns it to the default (PARALLELISM 1).


Note:

You cannot specify any other options when clearing a device type.

The CONFIGURE DEFAULT DEVICE TYPE ... CLEAR command clears the configured default device and returns it to DISK (the default setting).

The CONFIGURE CHANNEL DEVICE TYPE ... CLEAR command erases the channel configuration for the specified device type. RMAN does not change the PARALLELISM setting for the device type because PARALLELISM is specified through a separate CONFIGURE command.
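
For example, a minimal sketch of the device-level CLEAR commands (assuming an sbt configuration exists); note that clearing the sbt channel configuration leaves the sbt PARALLELISM setting untouched:

RMAN> CONFIGURE DEVICE TYPE sbt CLEAR;           # sbt device returns to PARALLELISM 1
RMAN> CONFIGURE DEFAULT DEVICE TYPE CLEAR;       # default device returns to DISK
RMAN> CONFIGURE CHANNEL DEVICE TYPE sbt CLEAR;   # removes configured sbt channel options only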

If you have manually assigned options to configured channels, then clear the options for these channels individually by specifying the channel number in CONFIGURE CHANNEL n DEVICE TYPE ... CLEAR. For example, assume that you run the following:

RMAN> CONFIGURE CHANNEL DEVICE TYPE DISK MAXPIECESIZE = 1800K;
RMAN> CONFIGURE CHANNEL 3 DEVICE TYPE DISK FORMAT = '/tmp/%U';
RMAN> CONFIGURE CHANNEL 3 DEVICE TYPE DISK CLEAR;

In this case, RMAN clears the settings for CHANNEL 3, but leaves the settings for the generic DISK channel (the channel with no number manually assigned) intact.

