OS Configuration/Tuning
| Setting | Recommended Value | Rationale |
|---|---|---|
| fs.file-max | 2097152 | Maximum number of open file handles allowed system-wide. |
| vm.swappiness | 0 | Minimize swapping; the kernel will swap only to avoid an out-of-memory condition. |
| net.core.somaxconn | 1024 | Maximum number of pending connections that can be queued for a listening socket (the listen backlog). |
| net.core.netdev_max_backlog | 30000 | Maximum number of packets queued on the INPUT side when an interface receives packets faster than the kernel can process them. The recommended setting is for 10GbE links; for 1GbE links use 8000. |
| net.core.wmem_max | 67108864 | Maximum socket send buffer size. Set to 16 MB (16777216) for 1GbE links and 64 MB (67108864) for 10GbE links. |
| net.core.rmem_max | 67108864 | Maximum socket receive buffer size. Set to 16 MB (16777216) for 1GbE links and 64 MB (67108864) for 10GbE links. |
| net.ipv4.tcp_congestion_control | htcp | Selects the TCP congestion-control algorithm; options include htcp, bbr, cubic, and bic. Both bic and cubic (the default) appear to have bugs in Linux kernels up to version 2.6.33. The kernel version for Red Hat 5.x is 2.6.18-x, and 2.6.32-x for Red Hat 6.x. |
| net.ipv4.tcp_congestion_window | 10 | Initial congestion window size. This is the default for Linux operating systems based on Linux kernel 2.6.39 or later. |
| net.ipv4.tcp_fin_timeout | 10 | Determines the time that must elapse before TCP/IP can release a closed connection and reuse its resources. During this TIME_WAIT state, reopening the connection to the client costs less than establishing a new connection. By reducing this value, TCP/IP can release closed connections faster, making more resources available for new connections. The default value is 60; the recommended setting lowers it to 10. You can lower it further, but if it is too low you may run into socket-close errors on networks with a lot of jitter. |
| net.ipv4.tcp_keepalive_intvl | 30 | Wait time between keepalive probes. The default value is 75; the recommended value reduces this in keeping with the reduction of the overall keepalive time. (Sometimes listed as tcp_keepalive_interval; the kernel sysctl name is tcp_keepalive_intvl.) |
| net.ipv4.tcp_keepalive_probes | 5 | Number of keepalive probes sent before the socket is timed out. The default value is 9; the recommended value reduces this to 5 so that retry attempts take 2.5 minutes. |
| net.ipv4.tcp_keepalive_time | 600 | Sets the TCP socket keepalive timeout to 10 minutes instead of the 2-hour default. On an idle socket, the system waits tcp_keepalive_time seconds, then sends up to tcp_keepalive_probes TCP KEEPALIVE probes at intervals of tcp_keepalive_intvl seconds. If all retry attempts fail, the socket times out. |
| net.ipv4.tcp_low_latency | 1 | Configures TCP for low latency, favoring low latency over throughput. |
| net.ipv4.tcp_max_orphans | 16384 | Limits the number of orphaned sockets; each orphan can consume up to 16 MB (max wmem) of unswappable memory. |
| net.ipv4.tcp_max_tw_buckets | 1440000 | Maximum number of TIME_WAIT sockets held by the system simultaneously. If this number is exceeded, the TIME_WAIT socket is immediately destroyed and a warning is printed. This limit exists to help prevent simple DoS attacks. |
| net.ipv4.tcp_no_metrics_save | 1 | Disables caching of TCP metrics on connection close. |
| net.ipv4.tcp_orphan_retries | 0 | Number of retries before an orphaned (locally closed) connection is killed; 0 tells the kernel to use its default. Together with tcp_max_orphans, this limits the unswappable memory (up to 16 MB, max wmem, per orphan) that orphaned connections can consume. |
| net.ipv4.tcp_rfc1337 | 1 | Enables a fix for RFC 1337 "time-wait assassination" hazards in TCP. |
| net.ipv4.tcp_rmem | 10240 131072 33554432 | Values are min/default/max. The recommendation increases the Linux autotuning TCP receive-buffer limit to 32 MB. |
| net.ipv4.tcp_sack | 1 | Enables selective acknowledgments (SACK). |
| net.ipv4.tcp_slow_start_after_idle | 0 | Disables slow start after idle. By default, TCP restarts an idle connection with a single small segment, gradually increasing the window one segment at a time; this causes unnecessary slowness that impacts the start of every request. |
| net.ipv4.tcp_syncookies | 0 | Many default Linux installations use SYN cookies to protect the system against malicious attacks that flood TCP SYN packets. The use of SYN cookies dramatically reduces network bandwidth, and can be triggered by a running Geode cluster. If your Geode cluster is otherwise protected against such attacks, disable SYN cookies to ensure that Geode network throughput is not affected. NOTE: if SYN floods are an issue and SYN cookies cannot be disabled, try the following: net.ipv4.tcp_max_syn_backlog="16384", net.ipv4.tcp_synack_retries="1", net.ipv4.tcp_max_orphans="400000". |
| net.ipv4.tcp_timestamps | 1 | Enables timestamps as defined in RFC 1323. |
| net.ipv4.tcp_tw_recycle | 1 | Enables fast recycling of TIME_WAIT sockets. The default value is 0 (disabled). Should be used with caution with load balancers and NAT'd clients; note that this option was removed in Linux kernel 4.12. |
| net.ipv4.tcp_tw_reuse | 1 | Allows reusing sockets in TIME_WAIT state for new connections when it is safe from the protocol viewpoint. The default value is 0 (disabled). It is generally a safer alternative to tcp_tw_recycle. The tcp_tw_reuse setting is particularly useful in environments where numerous short connections are opened and left in TIME_WAIT state, such as web servers and load balancers. |
| net.ipv4.tcp_window_scaling | 1 | Enables TCP window scaling (RFC 1323), which allows transfer windows larger than 64 KB. |
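
The settings above can be collected into a sysctl configuration fragment. A sketch is shown below; the file name `99-tuning.conf` is illustrative (any `.conf` file under `/etc/sysctl.d/` works), and values assume a 10GbE link per the table:

```
# /etc/sysctl.d/99-tuning.conf -- sketch based on the table above
fs.file-max = 2097152
vm.swappiness = 0
net.core.somaxconn = 1024
net.core.netdev_max_backlog = 30000    # 8000 for 1GbE links
net.core.wmem_max = 67108864           # 16777216 for 1GbE links
net.core.rmem_max = 67108864           # 16777216 for 1GbE links
net.ipv4.tcp_congestion_control = htcp
net.ipv4.tcp_fin_timeout = 10
net.ipv4.tcp_keepalive_intvl = 30
net.ipv4.tcp_keepalive_probes = 5
net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_low_latency = 1
net.ipv4.tcp_max_orphans = 16384
net.ipv4.tcp_max_tw_buckets = 1440000
net.ipv4.tcp_no_metrics_save = 1
net.ipv4.tcp_orphan_retries = 0
net.ipv4.tcp_rfc1337 = 1
net.ipv4.tcp_rmem = 10240 131072 33554432
net.ipv4.tcp_sack = 1
net.ipv4.tcp_slow_start_after_idle = 0
net.ipv4.tcp_syncookies = 0
net.ipv4.tcp_timestamps = 1
# tcp_tw_recycle was removed in Linux 4.12; omit on newer kernels
# net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_window_scaling = 1
```

Apply with `sudo sysctl --system` (or `sudo sysctl -p /etc/sysctl.d/99-tuning.conf`), and spot-check a value with, for example, `sysctl net.ipv4.tcp_fin_timeout`.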