No result defined for Action and result failure

This post describes the "No result defined for Action and result failure" error encountered while using the Struts framework and how it was resolved: changing the case of the action name in the configuration file fixed the problem.

Without further ado, here is the configuration:

<package name="base" namespace="" extends="struts-default">
    <action name="login" class="action.LoginAction">
        <result name="success">main.jsp</result>
        <result name="failure">login.jsp</result>
    </action>
</package>
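
For context, the result names above ("success" and "failure") are matched against the string returned by the action's execute() method. A minimal sketch of what action.LoginAction might look like (the fields and the credential check below are assumptions for illustration, not the original author's code):

package action;

import com.opensymphony.xwork2.ActionSupport;

public class LoginAction extends ActionSupport {

    private String username;
    private String password;

    @Override
    public String execute() {
        // The string returned here must match a <result name="..."> entry in the
        // configuration: "success" maps to main.jsp, "failure" maps to login.jsp.
        if ("admin".equals(username) && "123456".equals(password)) {   // assumed check
            return SUCCESS;        // the ActionSupport constant "success"
        }
        return "failure";          // custom result name, must be declared in struts.xml
    }

    // Getters/setters so Struts can populate the fields from the request parameters.
    public String getUsername() { return username; }
    public void setUsername(String username) { this.username = username; }
    public String getPassword() { return password; }
    public void setPassword(String password) { this.password = password; }
}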

 

The error reported was: No result defined for Action and result failure

 

When I saw this error, my first thought was that <result name="failure"></result> had not been configured, but it clearly had been.

Then I suspected a typo, so I re-copied each name one by one. Same error.

Next I searched online and saw someone suggest it was because no "input" result was being returned, so I changed failure to input.

Still the same error.

I kept searching and found another post saying that login should not be lowercase, so I capitalized it (note that the front-end page has to be updated to match):

<package name="base" namespace="" extends="struts-default">
    <action name="Login" class="action.LoginAction">
        <result name="success">main.jsp</result>
        <result name="failure">login.jsp</result>
    </action>
</package>
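
The front-end page must reference the new action name as well. A minimal sketch of the matching change in login.jsp, assuming the page submits through the Struts form tag (the exact page content is an assumption):

<%@ taglib prefix="s" uri="/struts-tags" %>
<%-- The action attribute must match the (now capitalized) action name in struts.xml. --%>
<s:form action="Login" method="post">
    <s:textfield name="username" label="Username"/>
    <s:password name="password" label="Password"/>
    <s:submit value="Login"/>
</s:form>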

And that fixed it O(∩_∩)O~

 

As for the exact cause, I searched around but could not find a definitive answer. My guess is that the lowercase name collides with some keyword or reserved name.
