Utilizing an Action Mechanism in Web Test Automation to Save Effort on Frequent Code Changes

This article discusses a problem encountered in Web UI test automation and its solution. By adopting the Action mechanism, internal page behaviors can be effectively separated from assertions, which greatly reduces the amount of code that has to change when test cases change. The article explains how the Action mechanism is implemented, including how the Java reflection feature is applied in the Action mechanism system and how assertions and customized steps are designed and encapsulated into different Action classes.


Authors: Zhou Hongfei, Han Jun

1. Background: the challenge we met in test automation based on the page object design pattern.

Currently the page object design pattern is commonly used in Web UI automation. A page object is considered the atomic element of UI test cases; it contains the static web elements on the page, along with the internal page behavior.

As shown in Figure 1, a UI TestPlan (test suite) is composed of three different test methods. Each test method contains flows and internal page behaviors. An internal page behavior is part of a page object and defines the concise steps the user performs on that specific page. Since these steps can be reused, they are encapsulated in the internal page behavior.

A flow contains a series of internal page behaviors plus some other steps that are triggered directly by web element methods.
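For illustration only, here is a minimal sketch of this structure, assuming Selenium WebDriver and hypothetical page and flow names (LoginPage, CheckoutFlow) that are not from the original framework:

```java
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;

// Page object: static web elements on the page plus an internal page behavior.
class LoginPage {
    private final WebDriver driver;

    LoginPage(WebDriver driver) {
        this.driver = driver;
    }

    // Static web elements of the page.
    private WebElement userNameInput() { return driver.findElement(By.id("username")); }
    private WebElement passwordInput() { return driver.findElement(By.id("password")); }
    private WebElement signInButton()  { return driver.findElement(By.id("sign-in")); }

    // Internal page behavior: concise, reusable steps on this page, with no assertions.
    public void signIn(String user, String password) {
        userNameInput().sendKeys(user);
        passwordInput().sendKeys(password);
        signInButton().click();
    }
}

// Flow: a series of internal page behaviors plus some steps triggered
// directly by web element methods.
class CheckoutFlow {
    private final WebDriver driver;

    CheckoutFlow(WebDriver driver) {
        this.driver = driver;
    }

    public void buyFirstItem(String user, String password) {
        new LoginPage(driver).signIn(user, password);          // internal page behavior
        driver.findElement(By.cssSelector(".item")).click();   // direct web element step
        driver.findElement(By.id("buy-now")).click();          // direct web element step
    }
}
```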

Figure 1 Test Plan components in Web Automation

No assertions are written in flows or internal page behaviors, in order to keep them reusable. Assertions are written according to the verification points mentioned in the test case document, and they need to change whenever the test case changes.

If assertions were put into flows or internal page behaviors, maintaining the automation code would require a large amount of effort later on. If they are not, the challenge becomes how to handle the situation where QA wants to verify some points during the steps of a flow. The Action mechanism is designed to address this challenge.

2. Solution: utilize the Action mechanism to address the challenge.

As described in Figure 2, a single flow covers the steps through Page 1, Page 2, and Page 3. Some assertions need to be made on each page. For the sake of code reuse, however, assertions are provided neither in the internal page behaviors nor in the flow itself.

To address this requirement, two Actions are created with the relevant assertions and customized steps for specific pages. When the flow reaches Page 1, the assertions and customized steps in Action 1 are invoked. When it reaches Page 3, the extra actions in both Action 1 and Action 2 are invoked automatically.

When the verification points (VPs) in a test case change, all QA needs to modify is the Action classes (Action 1, Action 2, etc.). The flow and the internal page behaviors in the page objects stay the same. This dramatically reduces the code change effort when VPs change in a test case.

Figure 2 Internal Page Behavior and External Action in a Single Flow

3. How to implement the Action mechanism?

The Action mechanism is implemented by adopting the Java Reflection feature. As described in Figure 3, the Action class and the Action Invoker are the two important components of the Action mechanism system.

Figure 3 Implementation of the Action Mechanism

Assertions and customized steps are designed and encapsulated into different Action classes, which are defined by QA. All of them extend the superclass 'Action'.
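The article does not show the superclass itself. A minimal sketch, assuming 'Action' is nothing more than a common base type that the invoker can collect, might be:

```java
// Assumed shape of the Action superclass; the real framework may carry more state.
public abstract class Action {
    // Concrete Action classes extend this type and add one public
    // 'void action(SomePageObject page)' method per page they cover.
}
```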

As for the definition of assertions and customized steps, take the following test scenario as an example.

The user needs to click a button, which triggers a pop-up window. Then the text field and dropdown list in the pop-up window need to be verified.

In this case, the assertions refer to the verification of the text field and the dropdown list in the pop-up window.

The customized steps refer to the steps the user takes to click the button.

Different action methods are written by QA for different pages, and the input parameter (PageObject) defines which page the assertions and customized steps happen on.
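Building on the Action base class sketched above, a hypothetical Action class for the pop-up scenario might look like the following; OrderPage, ConfirmPage, and the JUnit assertions are all assumptions for illustration, not the original framework's code:

```java
import static org.junit.Assert.assertTrue;

// Hypothetical page objects exposing only what the assertions need.
class OrderPage {
    public void clickDetailButton()            { /* customized step: opens the pop-up window */ }
    public boolean isPopupTextFieldDisplayed() { return true; /* query the real element here */ }
    public boolean isPopupDropdownDisplayed()  { return true; /* query the real element here */ }
}

class ConfirmPage {
    public boolean isTotalPriceDisplayed()     { return true; /* query the real element here */ }
}

// One Action class can hold several action methods; the parameter type of each
// method decides which page it applies to.
class CheckoutAction extends Action {

    // Invoked when the flow reaches OrderPage.
    public void action(OrderPage page) {
        page.clickDetailButton();                        // customized step: click the button
        assertTrue(page.isPopupTextFieldDisplayed());    // assertion: text field in the pop-up
        assertTrue(page.isPopupDropdownDisplayed());     // assertion: dropdown list in the pop-up
    }

    // Invoked when the flow reaches ConfirmPage.
    public void action(ConfirmPage page) {
        assertTrue(page.isTotalPriceDisplayed());        // assertion on another page
    }
}
```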

The Action Invoker is the other important component of the Action mechanism system. Java reflection is applied to implement the listener feature. In this way, the Action Invoker knows which action method to invoke according to the PageObject type of the input parameter.

The detailed algorithm is described below; a minimal code sketch of the invoker follows the list.

  1. First, the Action Invoker gets the class type of the input page object.
  2. The Action Invoker takes an array of Action classes and iterates over them in a 'for' loop.
  3. During the iteration over all Action classes, it looks for methods named 'action' whose parameter type matches the class of the current page object.
  4. If a matching 'void action(PageObject)' method is found, it is invoked to make the proper assertions together with the customized steps.
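Here is a minimal sketch of such an invoker, using the Action base class sketched earlier and standard Java reflection; the class and method names are assumptions, not the original framework's code:

```java
import java.lang.reflect.Method;

public class ActionInvoker {

    // Invokes every matching 'action' method on the given Action instances
    // for the current page object.
    public static void invoke(Object pageObject, Action... actions) {
        Class<?> pageType = pageObject.getClass();              // step 1: get the input class type
        for (Action action : actions) {                         // step 2: iterate over the Action classes
            for (Method method : action.getClass().getMethods()) {
                // step 3: look for a method named 'action' whose single
                // parameter can accept the current page object.
                if (method.getName().equals("action")
                        && method.getParameterCount() == 1
                        && method.getParameterTypes()[0].isAssignableFrom(pageType)) {
                    try {
                        method.invoke(action, pageObject);      // step 4: run assertions and customized steps
                    } catch (ReflectiveOperationException e) {
                        throw new RuntimeException(
                                "Failed to invoke action for " + pageType.getName(), e);
                    }
                }
            }
        }
    }
}
```

A flow could then call something like `ActionInvoker.invoke(currentPage, new CheckoutAction())` after each internal page behavior, so that the flows and page objects themselves stay assertion-free.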

4. Conclusion: future benefits of adopting the Action mechanism in test automation.

Web UI automation is commonly used in many online commerce companies to reduce the effort of manual testing. With frequent test case changes, how to handle the resulting automation code changes is a constant challenge.

The Action mechanism is an innovative approach to address this challenge. By adopting it, the stable steps on the pages (internal page behaviors) can be separated from the assertions and customized steps (Action classes), which dramatically reduces the code change effort.

 

* The copyright and/or intellectual property of this article belongs to eBay Inc. For citation, please contact us at DL-eBay-CCOE-Tech@ebay.com. This article is intended for academic discussion and exchange. If you believe any information in it infringes your legitimate rights and interests, please contact us at DL-eBay-CCOE-Tech@ebay.com and include in your notice the information required by applicable national laws and regulations; upon receiving your notice, we will take measures as soon as possible in accordance with those laws and regulations.
