Tuning some Linux kernel parameters

This post walks through configuring sysctl parameters on a Linux system, including ip_forward, rp_filter, tcp_syncookies, and others, to improve performance and security. Tuning these parameters can noticeably improve network connection stability and reduce system resource consumption.
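
Before overwriting /etc/sysctl.conf it is worth recording the current values of the keys you plan to change, so the defaults can be restored if a tuning turns out to be too aggressive. A minimal sketch; the keys and grep pattern below are only examples:

# print the current values of a few of the keys touched below
sysctl net.ipv4.ip_forward net.ipv4.tcp_syncookies net.core.somaxconn

# or dump everything and filter for the keys of interest
sysctl -a 2>/dev/null | grep -E 'tcp_(rmem|wmem|tw_reuse)'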

# set sysctl: truncate /etc/sysctl.conf and write the tuning values below
true > /etc/sysctl.conf
cat >> /etc/sysctl.conf << EOF
# basic hardening: no IP forwarding, reverse-path filtering, no source routing
net.ipv4.ip_forward = 0
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
kernel.sysrq = 0
kernel.core_uses_pid = 1
net.ipv4.tcp_syncookies = 1
# System V IPC limits (message queues and shared memory)
kernel.msgmnb = 65536
kernel.msgmax = 65536
kernel.shmmax = 68719476736
kernel.shmall = 4294967296
# connection-tracking table size; on newer kernels this key is usually
# net.netfilter.nf_conntrack_max, and the old name below may be rejected
net.ipv4.ip_conntrack_max = 1048576

# TIME_WAIT bucket limit and TCP feature flags
net.ipv4.tcp_max_tw_buckets = 6000
net.ipv4.tcp_sack = 1
net.ipv4.tcp_window_scaling = 1
# per-socket TCP buffers: min, default, max (bytes)
net.ipv4.tcp_rmem = 4096 87380 4194304
net.ipv4.tcp_wmem = 4096 16384 4194304
net.core.wmem_default = 8388608
net.core.rmem_default = 8388608
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
# queue lengths for incoming packets and pending connections
net.core.netdev_max_backlog = 262144
net.core.somaxconn = 262144
net.ipv4.tcp_max_orphans = 3276800
net.ipv4.tcp_max_syn_backlog = 262144
net.ipv4.tcp_timestamps = 0
net.ipv4.tcp_synack_retries = 1
net.ipv4.tcp_syn_retries = 1
# note: tcp_tw_recycle depends on TCP timestamps (disabled above), breaks
# clients behind NAT, and was removed entirely in kernel 4.12
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_tw_reuse = 1
# TCP memory pressure thresholds in pages: low, pressure, high
net.ipv4.tcp_mem = 94500000 915000000 927000000
net.ipv4.tcp_fin_timeout = 1
net.ipv4.tcp_keepalive_time = 1200
net.ipv4.ip_local_port_range = 1024 65535
EOF

/sbin/sysctl -p
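
To try a single setting at runtime before committing it to /etc/sysctl.conf, sysctl -w changes it immediately (not persistent across reboot), and the value can be read back either with sysctl or from /proc/sys. A short sketch, using tcp_syncookies from the list above as the example key:

# change one key at runtime
/sbin/sysctl -w net.ipv4.tcp_syncookies=1

# read it back, two equivalent ways
/sbin/sysctl net.ipv4.tcp_syncookies
cat /proc/sys/net/ipv4/tcp_syncookies

# re-apply the whole file after editing it
/sbin/sysctl -p /etc/sysctl.conf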
