## Introduction

During the past year, we have seen the rapid development of video generation models, with the release of several open-source models such as HunyuanVideo, CogVideoX and Mochi. It is very exciting to see open-source video models on track to beat closed-source ones. However, the inference speed of these models is still a bottleneck for real-time applications and deployment.

In this article, we use ParaAttention, a library that implements Context Parallelism and First Block Cache, together with other techniques such as torch.compile and FP8 dynamic quantization, to achieve the fastest inference speed for HunyuanVideo. If you want to speed up other models such as CogVideoX, Mochi or FLUX, you can follow the same steps.

We set up our experiments on NVIDIA L20 GPUs, which only have PCIe support. If you have NVIDIA A100 or H100 GPUs with NVLink support, you can achieve a better speedup with context parallelism, especially when the number of GPUs is large.

## HunyuanVideo Inference with diffusers

Like many other generative AI models, HunyuanVideo has its official code repository and is supported by other frameworks like diffusers and ComfyUI. In this article, we focus on optimizing the inference speed of HunyuanVideo with diffusers. First, install the latest version of diffusers:

```bash
pip3 install -U diffusers
```

Then we can load the model and generate video frames with the following code:

```python
import time

import torch
from diffusers import HunyuanVideoPipeline, HunyuanVideoTransformer3DModel
from diffusers.utils import export_to_video

model_id = "tencent/HunyuanVideo"
transformer = HunyuanVideoTransformer3DModel.from_pretrained(
    model_id,
    subfolder="transformer",
    torch_dtype=torch.bfloat16,
    revision="refs/pr/18",
)
pipe = HunyuanVideoPipeline.from_pretrained(
    model_id,
    transformer=transformer,
    torch_dtype=torch.float16,
    revision="refs/pr/18",
).to("cuda")

pipe.vae.enable_tiling()

begin = time.time()
output = pipe(
    prompt="A cat walks on the grass, realistic",
    height=720,
    width=1280,
    num_frames=129,
    num_inference_steps=30,
).frames[0]
end = time.time()
print(f"Time: {end - begin:.2f}s")

print("Saving video to hunyuan_video.mp4")
export_to_video(output, "hunyuan_video.mp4", fps=15)
```

However, most people will hit OOM (Out of Memory) errors when running the above code. The HunyuanVideo transformer is relatively large, and the model also ships with a very large text encoder. Besides, HunyuanVideo supports variable-length text conditions, and diffusers implements this feature with an attn_mask in scaled_dot_product_attention. The size of attn_mask is proportional to the square of the input sequence length, which explodes as we increase the resolution and the number of frames.

Luckily, we can use ParaAttention to solve this problem. In ParaAttention, we patch the original implementation in diffusers to cut the text conditions before calling scaled_dot_product_attention. We implement this in our apply_cache_on_pipe function, which we call after loading the model:

```bash
pip3 install -U para-attn
```

```python
pipe = HunyuanVideoPipeline.from_pretrained(
    model_id,
    transformer=transformer,
    torch_dtype=torch.float16,
    revision="refs/pr/18",
).to("cuda")

from para_attn.first_block_cache.diffusers_adapters import apply_cache_on_pipe

apply_cache_on_pipe(pipe, residual_diff_threshold=0.0)
```

We pass residual_diff_threshold=0.0 to apply_cache_on_pipe to disable the cache mechanism for now, because we will enable it later. Here, we only want it to cut the text conditions to avoid OOM errors. If you still experience OOM errors, you can try calling pipe.enable_model_cpu_offload or pipe.enable_sequential_cpu_offload after calling apply_cache_on_pipe, as sketched below.
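For reference, the fallback uses diffusers' standard memory-saving APIs. Model offloading keeps whole sub-models on the CPU between uses, while sequential offloading streams individual layers and is slower but saves even more memory:

```python
# After apply_cache_on_pipe(pipe, residual_diff_threshold=0.0):
pipe.enable_model_cpu_offload()          # keep only the active sub-model on the GPU
# pipe.enable_sequential_cpu_offload()   # stream layer by layer: slowest, lowest memory
```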
This is our baseline. On a single NVIDIA L20 GPU, we can generate 129 frames at 720p resolution in 30 inference steps in 3675.71 seconds.

## Apply First Block Cache on HunyuanVideo

By caching the output of the transformer blocks and reusing it in subsequent inference steps, we can reduce the computation cost and make the inference faster. However, it is hard to decide when it is safe to reuse the cache while preserving the quality of the generated video. Recently, TeaCache suggested that we can use the timestep embedding to approximate the difference between model outputs, and AdaCache showed that caching can deliver significant inference speedups without sacrificing generation quality across multiple video DiT baselines. TeaCache, however, is still a bit complex, as it needs a rescaling strategy to ensure the accuracy of the cache.

In ParaAttention, we find that we can directly use the residual difference of the first transformer block's output to approximate the difference between full model outputs. When the difference is small enough, we reuse the residual difference from the previous inference step, which in effect skips this denoising step (a simplified sketch of this logic follows at the end of this section). This has proven effective in our experiments, and we can achieve up to a 2x speedup on HunyuanVideo inference with very good quality.

*Figure: caching in a Diffusion Transformer. How AdaCache works; First Block Cache is a variant of it.*

To apply First Block Cache on HunyuanVideo, we call apply_cache_on_pipe with residual_diff_threshold=0.06, which is the default value for HunyuanVideo:

```python
apply_cache_on_pipe(pipe, residual_diff_threshold=0.06)
```

HunyuanVideo without FBCache: hunyuan_video_original.mp4

HunyuanVideo with FBCache: hunyuan_video_fbc.mp4

We observe that First Block Cache is very effective at speeding up the inference while causing nearly no quality loss in the generated video. Now, on a single NVIDIA L20 GPU, we can generate 129 frames at 720p resolution in 30 inference steps in 2271.06 seconds. This is a 1.62x speedup compared to the baseline.
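To make the heuristic concrete, here is a minimal sketch of the decision logic. It is illustrative only, not ParaAttention's actual implementation: the block interface and the distance metric are simplified assumptions.

```python
import torch

def cached_transformer_forward(blocks, hidden_states, cache, threshold=0.06):
    # Run only the first block and measure how much its residual changed
    # compared to the previous denoising step.
    first_out = blocks[0](hidden_states)
    first_residual = first_out - hidden_states

    prev = cache.get("first_residual")
    if prev is not None:
        rel_diff = (first_residual - prev).abs().mean() / prev.abs().mean()
        if rel_diff < threshold:
            # The step barely changed: skip the remaining blocks and reuse
            # the cached residual of the whole transformer.
            return hidden_states + cache["model_residual"]

    # Otherwise run the full stack and refresh both cache entries.
    out = first_out
    for block in blocks[1:]:
        out = block(out)
    cache["first_residual"] = first_residual
    cache["model_residual"] = out - hidden_states
    return out
```

Running the first block is cheap relative to the full stack, so whenever the check succeeds the remaining denoising work for that step is avoided almost entirely.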
## Quantize the model into FP8

To further speed up the inference and reduce memory usage, we can quantize the model into FP8 with dynamic quantization. We must quantize both the activations and the weights of the transformer model to utilize the 8-bit Tensor Cores on NVIDIA GPUs. Here, we use float8_weight_only and float8_dynamic_activation_float8_weight to quantize the text encoder and the transformer model, respectively. The default quantization method is per-tensor quantization; if your GPU supports row-wise quantization, you can also try it for better accuracy (see the sketch after this section). diffusers-torchao provides a really good tutorial on how to quantize models in diffusers and achieve a good speedup. Here, we simply install the latest torchao, which is capable of quantizing HunyuanVideo. If you are not familiar with torchao quantization, you can refer to its documentation.

```bash
pip3 install -U torch torchao
```

We also need to pass the model through torch.compile to gain an actual speedup. torch.compile with mode="max-autotune-no-cudagraphs" or mode="max-autotune" can achieve the best performance by generating and selecting the best kernels for the model inference. Compilation can take a long time, but it is worth it. If you are not familiar with torch.compile, you can refer to the official tutorial.

In this example, we only quantize the transformer model, but you can also quantize the text encoder to reduce memory usage further. Note that the actual compilation happens the first time the model is called, so we need to warm up the model to measure the speedup correctly.

Note: we find that dynamic quantization can significantly change the distribution of the model output, so you might need to raise residual_diff_threshold for the cache to take effect.

```python
import time

import torch
from diffusers import HunyuanVideoPipeline, HunyuanVideoTransformer3DModel
from diffusers.utils import export_to_video

model_id = "tencent/HunyuanVideo"
transformer = HunyuanVideoTransformer3DModel.from_pretrained(
    model_id,
    subfolder="transformer",
    torch_dtype=torch.bfloat16,
    revision="refs/pr/18",
)
pipe = HunyuanVideoPipeline.from_pretrained(
    model_id,
    transformer=transformer,
    torch_dtype=torch.float16,
    revision="refs/pr/18",
).to("cuda")

from para_attn.first_block_cache.diffusers_adapters import apply_cache_on_pipe

apply_cache_on_pipe(pipe)

from torchao.quantization import quantize_, float8_dynamic_activation_float8_weight, float8_weight_only

quantize_(pipe.text_encoder, float8_weight_only())
quantize_(pipe.transformer, float8_dynamic_activation_float8_weight())
pipe.transformer = torch.compile(
    pipe.transformer, mode="max-autotune-no-cudagraphs",
)

# Enable memory savings
pipe.vae.enable_tiling()
# pipe.enable_model_cpu_offload()
# pipe.enable_sequential_cpu_offload()

for i in range(2):
    begin = time.time()
    output = pipe(
        prompt="A cat walks on the grass, realistic",
        height=720,
        width=1280,
        num_frames=129,
        num_inference_steps=1 if i == 0 else 30,
    ).frames[0]
    end = time.time()
    if i == 0:
        print(f"Warm up time: {end - begin:.2f}s")
    else:
        print(f"Time: {end - begin:.2f}s")

print("Saving video to hunyuan_video.mp4")
export_to_video(output, "hunyuan_video.mp4", fps=15)
```

The NVIDIA L20 GPU only has 48GB of memory and can face OOM errors after compiling the model if enable_model_cpu_offload is not called, because HunyuanVideo produces very large activation tensors when running at high resolution with a large number of frames. So here we skip measuring the speedup of quantization and compilation on a single NVIDIA L20 GPU, and instead use context parallelism to relieve the memory pressure. If you want to run HunyuanVideo with torch.compile on GPUs with less than 80GB of memory, try reducing the resolution and the number of frames to avoid OOM errors.

Because large video generation models are usually bottlenecked by the attention computation rather than the fully connected layers, we do not observe a significant speedup from quantization and compilation here. However, models like FLUX and SD3 can benefit a lot from quantization and compilation, and it is worth trying for those models.
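If your GPU supports row-wise scaling, switching to it is a one-line change. The sketch below assumes your installed torchao version exports the PerRow granularity; check your version before relying on it:

```python
from torchao.quantization import quantize_, float8_dynamic_activation_float8_weight
from torchao.quantization.granularity import PerRow  # availability depends on the torchao version

# Row-wise scales are finer-grained than the default per-tensor scales,
# which usually improves accuracy at a small cost in overhead.
quantize_(
    pipe.transformer,
    float8_dynamic_activation_float8_weight(granularity=PerRow()),
)
```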
## Parallelize the inference with Context Parallelism

A lot faster than before, right? But we are not satisfied with the speedup achieved so far. To accelerate the inference further, we can use context parallelism. Libraries like xDiT and our ParaAttention provide ways to scale up the inference with multiple GPUs. In ParaAttention, we design our API in a compositional way, so that context parallelism can be combined with First Block Cache and dynamic quantization all together.

We provide very detailed instructions and examples of how to scale up the inference with multiple GPUs in our ParaAttention repository. Users can easily launch multi-GPU inference by calling torchrun. If you need the inference process to be persistent and serviceable, it is suggested to use torch.multiprocessing to write your own inference processor, which eliminates the overhead of launching the process and of loading and recompiling the model (a minimal skeleton is sketched at the end of this article).

Below is our ultimate code to achieve the fastest HunyuanVideo inference:

```python
import time

import torch
import torch.distributed as dist
from diffusers import HunyuanVideoPipeline, HunyuanVideoTransformer3DModel
from diffusers.utils import export_to_video

dist.init_process_group()

torch.cuda.set_device(dist.get_rank())

# [rank1]: RuntimeError: Expected mha_graph->execute(handle, variant_pack, workspace_ptr.get()).is_good() to be true, but got false. (Could this error message be improved? If so, please report an enhancement request to PyTorch.)
# torch.backends.cuda.enable_cudnn_sdp(False)

model_id = "tencent/HunyuanVideo"
transformer = HunyuanVideoTransformer3DModel.from_pretrained(
    model_id,
    subfolder="transformer",
    torch_dtype=torch.bfloat16,
    revision="refs/pr/18",
)
pipe = HunyuanVideoPipeline.from_pretrained(
    model_id,
    transformer=transformer,
    torch_dtype=torch.float16,
    revision="refs/pr/18",
).to("cuda")

from para_attn.context_parallel import init_context_parallel_mesh
from para_attn.context_parallel.diffusers_adapters import parallelize_pipe
from para_attn.parallel_vae.diffusers_adapters import parallelize_vae

mesh = init_context_parallel_mesh(
    pipe.device.type,
)
parallelize_pipe(
    pipe,
    mesh=mesh,
)
parallelize_vae(pipe.vae, mesh=mesh._flatten())

from para_attn.first_block_cache.diffusers_adapters import apply_cache_on_pipe

apply_cache_on_pipe(pipe)

# from torchao.quantization import quantize_, float8_dynamic_activation_float8_weight, float8_weight_only
#
# torch._inductor.config.reorder_for_compute_comm_overlap = True
#
# quantize_(pipe.text_encoder, float8_weight_only())
# quantize_(pipe.transformer, float8_dynamic_activation_float8_weight())
# pipe.transformer = torch.compile(
#     pipe.transformer, mode="max-autotune-no-cudagraphs",
# )

# Enable memory savings
pipe.vae.enable_tiling()
# pipe.enable_model_cpu_offload(gpu_id=dist.get_rank())
# pipe.enable_sequential_cpu_offload(gpu_id=dist.get_rank())

for i in range(2):
    begin = time.time()
    output = pipe(
        prompt="A cat walks on the grass, realistic",
        height=720,
        width=1280,
        num_frames=129,
        num_inference_steps=1 if i == 0 else 30,
        output_type="pil" if dist.get_rank() == 0 else "pt",
    ).frames[0]
    end = time.time()
    if dist.get_rank() == 0:
        if i == 0:
            print(f"Warm up time: {end - begin:.2f}s")
        else:
            print(f"Time: {end - begin:.2f}s")

if dist.get_rank() == 0:
    print("Saving video to hunyuan_video.mp4")
    export_to_video(output, "hunyuan_video.mp4", fps=15)

dist.destroy_process_group()
```

We save the above code to run_hunyuan_video.py and run it with torchrun:

```bash
torchrun --nproc_per_node=8 run_hunyuan_video.py
```

With 8 NVIDIA L20 GPUs, we can generate 129 frames at 720p resolution in 30 inference steps in 649.23 seconds. This is a 5.66x speedup compared to the baseline!
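As promised above, here is a minimal torch.multiprocessing skeleton for a persistent, serviceable worker pool. It is a generic sketch under stated assumptions: load_and_parallelize_pipe is a hypothetical helper standing in for the setup portion of run_hunyuan_video.py, and the master address and port are placeholders.

```python
import os

import torch
import torch.distributed as dist
import torch.multiprocessing as mp


def load_and_parallelize_pipe():
    # Hypothetical helper: run the setup portion of run_hunyuan_video.py
    # (load the pipeline, parallelize it, apply the cache) and return it.
    raise NotImplementedError


def worker(rank, world_size, task_queue, result_queue):
    # Each worker owns one GPU and joins the process group once, so the
    # model is loaded (and, if enabled, compiled) a single time.
    os.environ["MASTER_ADDR"] = "127.0.0.1"  # placeholder
    os.environ["MASTER_PORT"] = "29500"      # placeholder
    dist.init_process_group("nccl", rank=rank, world_size=world_size)
    torch.cuda.set_device(rank)
    pipe = load_and_parallelize_pipe()

    while True:
        prompt = task_queue.get()  # every rank receives the same prompt
        if prompt is None:         # sentinel: shut down
            break
        frames = pipe(
            prompt=prompt,
            height=720,
            width=1280,
            num_frames=129,
            num_inference_steps=30,
            output_type="pil" if rank == 0 else "pt",
        ).frames[0]
        if rank == 0:
            result_queue.put(frames)

    dist.destroy_process_group()


if __name__ == "__main__":
    world_size = torch.cuda.device_count()
    ctx = mp.get_context("spawn")
    # One task queue per rank, because each get() consumes the item.
    task_queues = [ctx.Queue() for _ in range(world_size)]
    result_queue = ctx.Queue()
    procs = [
        ctx.Process(target=worker, args=(r, world_size, task_queues[r], result_queue))
        for r in range(world_size)
    ]
    for p in procs:
        p.start()

    # The model stays resident: each request only pays for the denoising loop.
    for q in task_queues:
        q.put("A cat walks on the grass, realistic")
    frames = result_queue.get()

    for q in task_queues:
        q.put(None)
    for p in procs:
        p.join()
```

With this structure, the heavy one-time setup cost is paid once per service lifetime instead of once per request.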