Getting Started with ConnectX-5 100Gb/s Adapters for Linux


 

Dec 5, 2018 • Knowledge Article


This post describes how to configure and set basic parameters for the Mellanox ConnectX-5 100Gb/s adapter.

This post is meant for beginners. The procedure is very similar to that for the ConnectX-4 adapter; in fact, both use the same mlx5 driver.

 

Note: ConnectX-5 adapters require MLNX_OFED rel. 4.0 or later.

 

 

 


Setup

The basic setup consists of:

  • Two servers equipped with PCIe Gen3 x16 slots
  • Two Mellanox ConnectX-5 adapter cards
  • One 100Gb/s cable

 

In this specific setup, CentOS 7.2 was installed on the servers.

 

Prerequisites

If you plan to run performance tests, we recommend that you tune the BIOS for high performance.

Refer to the Mellanox Tuning Guide and see this example: BIOS Performance Tuning Example.
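
Beyond BIOS settings, the operating system's CPU frequency governor also affects benchmark results. As a minimal sketch (assuming the cpupower utility from the CentOS kernel-tools package is installed), you can switch all cores to the performance governor:

# yum install -y kernel-tools
# cpupower frequency-set -g performance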

 

Configuration

1. Install the latest MLNX_OFED (rel. 4.0 or later).
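
For reference, a typical installation flow is shown below. The tarball name is a placeholder; use the exact file for your MLNX_OFED version and distribution as downloaded from the Mellanox site.

# tar -xzf MLNX_OFED_LINUX-4.0-2.0.0.1-rhel7.2-x86_64.tgz
# cd MLNX_OFED_LINUX-4.0-2.0.0.1-rhel7.2-x86_64
# ./mlnxofedinstall
# /etc/init.d/openibd restart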

 

2. Check that the adapters are "recognized" by running the lspci command:

# lspci | grep Mellanox
82:00.0 Infiniband controller: Mellanox Technologies MT27800 Family [ConnectX-5]
82:00.1 Infiniband controller: Mellanox Technologies MT27800 Family [ConnectX-5]

 

Note: On ConnectX-5, each port appears as a separate PCI function with its own number (82:00.0 and 82:00.1 in this example).

 

3. Change the link protocol to Ethernet using the MFT mlxconfig tool.

Note: The default link protocol for ConnectX-5 is InfiniBand.

 

a. Start MFT.

# mst start

Starting MST (Mellanox Software Tools) driver set

Loading MST PCI module - Success

Loading MST PCI configuration module - Success

Create devices

Unloading MST PCI module (unused) - Success

 

b. Extract the vendor_part_id parameter. Note: ConnectX-5's ID is 4119.

# ibv_devinfo | grep vendor_part_id

vendor_part_id: 4119

vendor_part_id: 4119
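
To find the MST device path used in the next step, list the devices with the mst status command; note that the 4119 in the device name (/dev/mst/mt4119_pciconf0) matches the vendor_part_id above.

# mst status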

 

c. Query the adapter configuration on the host:

# mlxconfig -d /dev/mst/mt4119_pciconf0 q

 

Device #1:

----------

 

Device type: ConnectX5

PCI device: /dev/mst/mt4119_pciconf0

 

 

Configurations: Current

...

LINK_TYPE_P1 1

LINK_TYPE_P2 1

....

 

Note that LINK_TYPE_P1 and LINK_TYPE_P2 are set to 1 (InfiniBand) by default.

 

d. Change the port type to Ethernet (LINK_TYPE = 2):

# mlxconfig -d /dev/mst/mt4119_pciconf0 set LINK_TYPE_P1=2 LINK_TYPE_P2=2

 

Device #1:

----------

 

Device type: ConnectX5

PCI device: /dev/mst/mt4119_pciconf0

 

Configurations: Current New

LINK_TYPE_P1 1 2

LINK_TYPE_P2 1 2

 

Apply new Configuration? (y/n) [n] : y

Applying... Done!

-I- Please reboot machine to load new configurations.

 

e. Reboot the server.
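
After the reboot, you can also confirm the new link type directly in firmware by repeating the query from step (c); both LINK_TYPE parameters should now read 2 (Ethernet). You may need to run mst start again first.

# mlxconfig -d /dev/mst/mt4119_pciconf0 q | grep LINK_TYPE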

 

4. Configure IPs and MTUs on both servers.

 

For Server S5:

# ifconfig ens801f0 15.15.15.5/24 up
# ifconfig ens801f0 mtu 9000

 

For Server S6:

# ifconfig ens801f0 15.15.15.6/24 up
# ifconfig ens801f0 mtu 9000
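
Note that ifconfig settings do not persist across reboots. As a sketch for CentOS 7 (using Server S5's address and the interface name ens801f0 from this setup), the same configuration can be made persistent with an ifcfg file:

# cat /etc/sysconfig/network-scripts/ifcfg-ens801f0
DEVICE=ens801f0
BOOTPROTO=static
IPADDR=15.15.15.5
PREFIX=24
MTU=9000
ONBOOT=yes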

 

5. After the reboot, check that the port type was changed to Ethernet for each port:

# ibdev2netdev

mlx5_0 port 1 ==> ens801f0 (Up)

mlx5_1 port 1 ==> ens801f1 (Up)
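
You can also verify the negotiated link speed with ethtool; a healthy 100Gb/s link should report Speed: 100000Mb/s.

# ethtool ens801f0 | grep Speed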

 

6. Make sure that you disable the firewall, iptables, SELinux, and any other security services that might block the traffic.

# service firewalld stop

# systemctl disable firewalld

# service iptables stop

 

Disable SELinux in the config file located at /etc/selinux/config.
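
For example, set the mode to disabled (effective after the next reboot) and switch to permissive mode immediately with setenforce:

# sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
# setenforce 0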

 

7. Run a basic iperf test.

 

The following output was generated using the iperf automation script described in HowTo Install iperf and Test Mellanox Adapters Performance.

 

Run the iperf server process on one host:

# iperf -s -P8

 

Run the iperf client process on the other host:

# iperf -c 15.15.15.6 -P8

------------------------------------------------------------

Client connecting to 15.15.15.6, TCP port 5001

TCP window size: 325 KByte (default)

------------------------------------------------------------

[ 10] local 15.15.15.5 port 57522 connected with 15.15.15.6 port 5001

[ 4] local 15.15.15.5 port 57508 connected with 15.15.15.6 port 5001

[ 3] local 15.15.15.5 port 57510 connected with 15.15.15.6 port 5001

[ 6] local 15.15.15.5 port 57512 connected with 15.15.15.6 port 5001

[ 7] local 15.15.15.5 port 57514 connected with 15.15.15.6 port 5001

[ 5] local 15.15.15.5 port 57516 connected with 15.15.15.6 port 5001

[ 8] local 15.15.15.5 port 57518 connected with 15.15.15.6 port 5001

[ 9] local 15.15.15.5 port 57520 connected with 15.15.15.6 port 5001

[ ID] Interval Transfer Bandwidth

[ 10] 0.0-10.0 sec 13.6 GBytes 11.7 Gbits/sec

[ 3] 0.0-10.0 sec 13.9 GBytes 12.0 Gbits/sec

[ 6] 0.0-10.0 sec 18.6 GBytes 16.0 Gbits/sec

[ 7] 0.0-10.0 sec 10.9 GBytes 9.38 Gbits/sec

[ 5] 0.0-10.0 sec 14.7 GBytes 12.6 Gbits/sec

[ 8] 0.0-10.0 sec 16.0 GBytes 13.7 Gbits/sec

[ 9] 0.0-10.0 sec 17.2 GBytes 14.8 Gbits/sec

[ 4] 0.0-10.0 sec 9.92 GBytes 8.52 Gbits/sec

[SUM] 0.0-10.0 sec 115 GBytes 98.6 Gbits/sec
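
If the aggregate result falls well short of line rate, CPU locality is a common culprit. As a hedged example (assuming the adapter is attached to NUMA node 1; check /sys/class/net/ens801f0/device/numa_node), pin the iperf processes to local cores with numactl:

# numactl --cpunodebind=1 --membind=1 iperf -c 15.15.15.6 -P8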

 

Troubleshooting

1. If MLNX_OFED rel. 4.0 or later is not used, the card will be identified as a ConnectX-4 adapter by default.

# ofed_info -s

MLNX_OFED_LINUX-3.4-2.0.0.0:

 

# lspci | grep Mellanox

81:00.0 Infiniband controller: Mellanox Technologies MT28800 Family [ConnectX-4]

81:00.1 Infiniband controller: Mellanox Technologies MT28800 Family [ConnectX-4]

 

 

To correct this, install MLNX_OFED rel. 4.0 or later.

# ofed_info -s

MLNX_OFED_LINUX-4.0-0.1.5.0:

 

# lspci | grep Mel

81:00.0 Infiniband controller: Mellanox Technologies MT27800 Family [ConnectX-5]

81:00.1 Infiniband controller: Mellanox Technologies MT27800 Family [ConnectX-5]

 

2. Make sure that you run the iperf process from the root "/" folder.
