Fixing the "which must be escaped when used within the value" error

This post covers three fixes for a problem that appears after upgrading to Tomcat 5.5.28 or later: modifying the JSP code, configuring catalina.sh or catalina.bat, or adjusting Tomcat's Java Options.

I found the following solution online (source: http://baolongchina.iteye.com/blog/585043 ; see also: http://www.docin.com/p-43502449.html ).

The problem is caused by stricter quote handling in the Jasper JSP compiler, introduced in Tomcat 5.5.28 and later: a double quote inside a double-quoted attribute value must now be escaped.

1. Modify the JSP code, wrapping the attribute value in single quotes so the double quotes inside the scriptlet no longer need escaping:

<jsp:include page="fastpost.jsp">
    <jsp:param name="returl" value='<%= Url.encode("***") %>' />
</jsp:include>
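`Url.encode` above is presumably a project-specific helper; in plain Java the equivalent is `java.net.URLEncoder`. The sketch below (names are illustrative, not from the original post) shows why encoding helps: it replaces the double quotes and other reserved characters in the parameter value, so nothing inside the attribute needs escaping.

```java
import java.net.URLEncoder;

public class EncodeDemo {
    public static void main(String[] args) throws Exception {
        // A return URL containing double quotes, which would trip
        // STRICT_QUOTE_ESCAPING if embedded verbatim in a JSP attribute.
        String returl = "/forum/view.jsp?title=\"hot\"";

        // URL-encoding turns each '"' into %22 (and ' ' into '+', etc.),
        // leaving only characters that are safe inside the attribute.
        String encoded = URLEncoder.encode(returl, "UTF-8");
        System.out.println(encoded);
    }
}
```

The receiving page (`fastpost.jsp` here) must then decode the parameter, e.g. with `java.net.URLDecoder.decode(...)`, before using it.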

2. Add the following to catalina.sh (Linux/Unix):

JAVA_OPTS="-Dorg.apache.jasper.compiler.Parser.STRICT_QUOTE_ESCAPING=false"

3. Alternatively, add the following to Tomcat's Java Options:

-Dorg.apache.jasper.compiler.Parser.STRICT_QUOTE_ESCAPING=false


Option 1 means changing every place that uses this pattern, which is tedious, so I went with option 2. Since I am on Windows, the file to edit is catalina.bat instead. Open it and find the line

if "%OS%" == "Windows_NT" setlocal
(line 17 in my copy of the file), and add the following directly below it:

set JAVA_OPTS="-Dorg.apache.jasper.compiler.Parser.STRICT_QUOTE_ESCAPING=false"
Then restart Tomcat and the error is gone.
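After the restart you can confirm that the flag actually reached the JVM. A minimal sketch of the check (run it as a scriptlet in a scratch JSP, or standalone with the same `-D` flag on the command line; the class name is illustrative):

```java
public class FlagCheck {
    public static void main(String[] args) {
        // When the JVM is launched with
        // -Dorg.apache.jasper.compiler.Parser.STRICT_QUOTE_ESCAPING=false
        // this prints "false"; without the flag it prints "null".
        String v = System.getProperty(
                "org.apache.jasper.compiler.Parser.STRICT_QUOTE_ESCAPING");
        System.out.println(v);
    }
}
```

Note that Jasper reads this property when it compiles a JSP, so already-compiled pages in the work directory may need to be regenerated (or the work directory cleared) for the change to take effect.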

{ // DHCPv4 configuration starts here. This section will be read by DHCPv4 server // and will be ignored by other components. "Control-agent": { "http-host": "localhost", "http-port": 8000 }, "Dhcp4": { "interfaces-config": { "interfaces": [ "enp3s0f0" ] }, "control-socket": { "socket-type": "unix", "socket-name": "/path/to/kea4-ctrl-socket" }, } "Dhcp4": { // Add names of your network interfaces to listen on. "interfaces-config": { // See section 8.2.4 for more details. You probably want to add just // interface name (e.g. "eth0" or specific IPv4 address on that // interface name (e.g. "eth0/192.0.2.1"). "interfaces": ["enp3s0f1/192.168.100.1"] // Kea DHCPv4 server by default listens using raw sockets. This ensures // all packets, including those sent by directly connected clients // that don't have IPv4 address yet, are received. However, if your // traffic is always relayed, it is often better to use regular // UDP sockets. If you want to do that, uncomment this line: // "dhcp-socket-type": "udp" }, // Kea supports control channel, which is a way to receive management // commands while the server is running. This is a Unix domain socket that // receives commands formatted in JSON, e.g. config-set (which sets new // configuration), config-reload (which tells Kea to reload its // configuration from file), statistic-get (to retrieve statistics) and many // more. For detailed description, see Sections 8.8, 16 and 15. "control-socket": { "socket-type": "unix", "socket-name": "kea4-ctrl-socket" }, // Use Memfile lease database backend to store leases in a CSV file. // Depending on how Kea was compiled, it may also support SQL databases // (MySQL and/or PostgreSQL). Those database backends require more // parameters, like name, host and possibly user and password. // There are dedicated examples for each backend. See Section 7.2.2 "Lease // Storage" for details. "lease-database": { // Memfile is the simplest and easiest backend to use. 
It's an in-memory // C++ database that stores its state in CSV file. "type": "memfile", "lfc-interval": 3600 }, // Kea allows storing host reservations in a database. If your network is // small or you have few reservations, it's probably easier to keep them // in the configuration file. If your network is large, it's usually better // to use database for it. To enable it, uncomment the following: // "hosts-database": { // "type": "mysql", // "name": "kea", // "user": "kea", // "password": "1234", // "host": "localhost", // "port": 3306 // }, // See Section 7.2.3 "Hosts storage" for details. // Setup reclamation of the expired leases and leases affinity. // Expired leases will be reclaimed every 10 seconds. Every 25 // seconds reclaimed leases, which have expired more than 3600 // seconds ago, will be removed. The limits for leases reclamation // are 100 leases or 250 ms for a single cycle. A warning message // will be logged if there are still expired leases in the // database after 5 consecutive reclamation cycles. // If both "flush-reclaimed-timer-wait-time" and "hold-reclaimed-time" are // not 0, when the client sends a release message the lease is expired // instead of being deleted from the lease storage. "expired-leases-processing": { "reclaim-timer-wait-time": 10, "flush-reclaimed-timer-wait-time": 25, "hold-reclaimed-time": 3600, "max-reclaim-leases": 100, "max-reclaim-time": 250, "unwarned-reclaim-cycles": 5 }, // Global timers specified here apply to all subnets, unless there are // subnet specific values defined in particular subnets. 
"renew-timer": 900, "rebind-timer": 60, "valid-lifetime": 3600, // Many additional parameters can be specified here: // - option definitions (if you want to define vendor options, your own // custom options or perhaps handle standard options // that Kea does not support out of the box yet) // - client classes // - hooks // - ddns information (how the DHCPv4 component can reach a DDNS daemon) // // Some of them have examples below, but there are other parameters. // Consult Kea User's Guide to find out about them. // These are global options. They are going to be sent when a client // requests them, unless overwritten with values in more specific scopes. // The scope hierarchy is: // - global (most generic, can be overwritten by class, subnet or host) // - class (can be overwritten by subnet or host) // - subnet (can be overwritten by host) // - host (most specific, overwrites any other scopes) // // Not all of those options make sense. Please configure only those that // are actually useful in your network. // // For a complete list of options currently supported by Kea, see // Section 7.2.8 "Standard DHCPv4 Options". Kea also supports // vendor options (see Section 7.2.10) and allows users to define their // own custom options (see Section 7.2.9). "option-data": [ // When specifying options, you typically need to specify // one of (name or code) and data. The full option specification // covers name, code, space, csv-format and data. // space defaults to "dhcp4" which is usually correct, unless you // use encapsulate options. csv-format defaults to "true", so // this is also correct, unless you want to specify the whole // option value as long hex string. 
For example, to specify // domain-name-servers you could do this: // { // "name": "domain-name-servers", // "code": 6, // "csv-format": "true", // "space": "dhcp4", // "data": "192.0.2.1, 192.0.2.2" // } // but it's a lot of writing, so it's easier to do this instead: { "name": "domain-name-servers", "data": "192.0.2.1, 192.0.2.2" }, // Typically people prefer to refer to options by their names, so they // don't need to remember the code names. However, some people like // to use numerical values. For example, option "domain-name" uses // option code 15, so you can reference to it either by // "name": "domain-name" or "code": 15. { "code": 15, "data": "example.org" }, // Domain search is also a popular option. It tells the client to // attempt to resolve names within those specified domains. For // example, name "foo" would be attempted to be resolved as // foo.mydomain.example.com and if it fails, then as foo.example.com { "name": "domain-search", "data": "mydomain.example.com, example.com" }, // String options that have a comma in their values need to have // it escaped (i.e. each comma is preceded by two backslashes). // That's because commas are reserved for separating fields in // compound options. At the same time, we need to be conformant // with JSON spec, that does not allow "\,". Therefore the // slightly uncommon double backslashes notation is needed. // Legal JSON escapes are \ followed by "\/bfnrt character // or \u followed by 4 hexadecimal numbers (currently Kea // supports only \u0000 to \u00ff code points). // CSV processing translates '\\' into '\' and '\,' into ',' // only so for instance '\x' is translated into '\x'. But // as it works on a JSON string value each of these '\' // characters must be doubled on JSON input. { "name": "boot-file-name", "data": "EST5EDT4\\,M3.2.0/02:00\\,M11.1.0/02:00" }, // Options that take integer values can either be specified in // dec or hex format. Hex format could be either plain (e.g. 
abcd) // or prefixed with 0x (e.g. 0xabcd). { "name": "default-ip-ttl", "data": "0xf0" } // Note that Kea provides some of the options on its own. In particular, // it sends IP Address lease type (code 51, based on valid-lifetime // parameter, Subnet mask (code 1, based on subnet definition), Renewal // time (code 58, based on renew-timer parameter), Rebind time (code 59, // based on rebind-timer parameter). ], // Other global parameters that can be defined here are option definitions // (this is useful if you want to use vendor options, your own custom // options or perhaps handle options that Kea does not handle out of the box // yet). // You can also define classes. If classes are defined, incoming packets // may be assigned to specific classes. A client class can represent any // group of devices that share some common characteristic, e.g. Windows // devices, iphones, broken printers that require special options, etc. // Based on the class information, you can then allow or reject clients // to use certain subnets, add special options for them or change values // of some fixed fields. "client-classes": [ { // This specifies a name of this class. It's useful if you need to // reference this class. "name": "voip", // This is a test. It is an expression that is being evaluated on // each incoming packet. It is supposed to evaluate to either // true or false. If it's true, the packet is added to specified // class. See Section 12 for a list of available expressions. There // are several dozens. Section 8.2.14 for more details for DHCPv4 // classification and Section 9.2.19 for DHCPv6. "test": "substring(option[60].hex,0,6) == 'Aastra'", // If a client belongs to this class, you can define extra behavior. // For example, certain fields in DHCPv4 packet will be set to // certain values. 
            "next-server": "192.0.2.254",
            "server-hostname": "hal9000",
            "boot-file-name": "/dev/null"

            // You can also define option values here if you want devices from
            // this class to receive special options.
        }
    ],

    // Another feature available here is hooks. Kea supports a powerful mechanism
    // that allows loading external libraries that can extract information and
    // even influence how the server processes packets. Those libraries include
    // additional forensic logging capabilities, the ability to reserve hosts in
    // more flexible ways, and even extra commands. For a list of available
    // hook libraries, see https://gitlab.isc.org/isc-projects/kea/wikis/Hooks-available.
    "hooks-libraries": [
        {
            "library": "/usr/local/lib64/kea/hooks/libdhcp_macauth.so",
            "parameters": {
                "server_ip": "10.10.10.1",
                "ac_ip": "10.10.10.102",
                "port": 5001,
                "shared_secret": "7a5b8c3e9f"
            }
        },
        {
            "library": "/usr/local/lib64/kea/hooks/libdhcp_lease_cmds.so"
        }
        // {
        //     "library": "/usr/local/lib64/kea/hooks/libdhcp_lease_query.so"
        // }
    ],

    // "hooks-libraries": [
    //     {
    //         // The Forensic Logging library generates a forensic type of audit trail
    //         // of all devices serviced by Kea, including their identifiers
    //         // (like MAC address), their location in the network, times
    //         // when they were active, etc.
    //         "library": "/usr/local/lib64/kea/hooks/libdhcp_legal_log.so",
    //         "parameters": {
    //             "base-name": "kea-forensic4"
    //         }
    //     },
    //     {
    //         // Flexible identifier (flex-id). Kea software provides a way to
    //         // handle host reservations that include addresses, prefixes,
    //         // options, client classes and other features. The reservation can
    //         // be based on hardware address, DUID, circuit-id or client-id in
    //         // DHCPv4, and on hardware address or DUID in DHCPv6. However,
    //         // there are sometimes scenarios where the reservation is more
    //         // complex, e.g. it uses options other than those mentioned above,
    //         // uses parts of specific options, or perhaps even a combination of
    //         // several options and fields to uniquely identify a client. Those
    //         // scenarios are addressed by the Flexible Identifier hook application.
    //         "library": "/usr/local/lib64/kea/hooks/libdhcp_flex_id.so",
    //         "parameters": {
    //             "identifier-expression": "relay4[2].hex"
    //         }
    //     },
    //     {
    //         // The MySQL host backend hook library required for host storage.
    //         "library": "/usr/local/lib64/kea/hooks/libdhcp_mysql.so"
    //     }
    // ],

    // Below is an example of a simple IPv4 subnet declaration. This is a
    // list, denoted with [ ], of structures, each denoted with { }. Each
    // structure describes a single subnet and may have several parameters.
    // One of those parameters is "pools", which is also a list of structures.
    "subnet4": [
        {
            // This defines the whole subnet. Kea will use this information to
            // determine where the clients are connected. This is the whole
            // subnet in your network.

            // The subnet identifier should be unique for each subnet.
            "id": 1,

            // This is a mandatory parameter for each subnet.
            "subnet": "192.168.30.0/24",

            // Pools define the actual part of your subnet that is governed
            // by Kea. Technically this is an optional parameter, but it's
            // almost always needed for DHCP to do its job. If you omit it,
            // clients won't be able to get addresses, unless there are
            // host reservations defined for them.
            "pools": [ { "pool": "192.168.30.10 - 192.168.30.200" } ],

            // This is one of the subnet selectors. Uncomment the "interface"
            // parameter and specify the appropriate interface name if the DHCPv4
            // server will receive requests from local clients (connected to the
            // same subnet as the server). This subnet will be selected for the
            // requests received by the server over the specified interface.
            // This rule applies to the DORA exchanges and rebinding clients.
            // Renewing clients unicast their messages, and the renewed addresses
            // are used by the server to determine the subnet they belong to.
            // When this parameter is used, the "relay" parameter is typically
            // unused.
            // "interface": "eth0",

            // This is another subnet selector. Uncomment the "relay" parameter
            // and specify a list of relay addresses. The server will select
            // this subnet for lease assignments when it receives queries over one
            // of these relays. When this parameter is used, the "interface"
            // parameter is typically unused.
            // "relay": {
            //     "ip-addresses": [ "10.0.0.1" ]
            // },

            // These are options that are subnet specific. In most cases,
            // you need to define at least the routers option, as without this
            // option your clients will not be able to reach their default
            // gateway and will not have Internet connectivity.
            "option-data": [
                {
                    // For each IPv4 subnet you most likely need to specify at
                    // least one router.
                    "name": "routers",
                    "data": "192.0.2.1"
                }
            ],

            // Kea offers a host reservations mechanism. Kea supports reservations
            // by several different types of identifiers: hw-address
            // (hardware/MAC address of the client), duid (DUID inserted by the
            // client), client-id (client identifier inserted by the client) and
            // circuit-id (circuit identifier inserted by the relay agent).
            //
            // Kea also supports a flexible identifier (flex-id), which lets you
            // specify an expression that is evaluated for each incoming packet.
            // The resulting value is then used as an identifier.
            //
            // Note that reservations are subnet-specific in Kea. This is
            // different from ISC DHCP. Keep that in mind when migrating
            // your configurations.
            "reservations": [
                // This is a reservation for a specific hardware/MAC address.
                // It's a rather simple reservation: just an address and nothing
                // else.
                // {
                //     "hw-address": "1a:1b:1c:1d:1e:1f",
                //     "ip-address": "192.0.2.201"
                // },

                // This is a reservation for a specific client-id. It also shows
                // that this client will get a reserved hostname. A hostname can
                // be defined for any identifier type, not just client-id.
                {
                    "client-id": "01:11:22:33:44:55:66",
                    "ip-address": "192.168.30.202",
                    "hostname": "special-snowflake"
                },

                // The third reservation is based on DUID. This reservation defines
                // special option values for this particular client. If the
                // domain-name-servers option had been defined at the global,
                // subnet or class level, the host-specific values would take
                // precedence.
                {
                    "duid": "01:02:03:04:05",
                    "ip-address": "192.168.30.203",
                    "option-data": [
                        {
                            "name": "domain-name-servers",
                            "data": "10.1.1.202, 10.1.1.203"
                        }
                    ]
                },

                // The fourth reservation is based on circuit-id. This is an option
                // inserted by the relay agent that forwards the packet from the
                // client to the server. In this example the host is also assigned
                // vendor specific options.
                //
                // When using reservations, it is useful to configure
                // reservations-global, reservations-in-subnet,
                // reservations-out-of-pool (subnet-specific parameters)
                // and host-reservation-identifiers (a global parameter).
                {
                    "client-id": "01:12:23:34:45:56:67",
                    "ip-address": "192.168.30.204",
                    "option-data": [
                        {
                            "name": "vivso-suboptions",
                            "data": "4491"
                        },
                        {
                            "name": "tftp-servers",
                            "space": "vendor-4491",
                            "data": "10.1.1.202, 10.1.1.203"
                        }
                    ]
                },

                // This reservation is for a client that needs specific DHCPv4
                // fields to be set. The three supported fields are next-server,
                // server-hostname and boot-file-name.
                {
                    "client-id": "01:0a:0b:0c:0d:0e:0f",
                    "ip-address": "192.168.30.205",
                    "next-server": "192.168.30.1",
                    "server-hostname": "hal9000",
                    "boot-file-name": "/dev/null"
                },

                // This reservation uses a flexible identifier. Instead of
                // relying on a specific field, the sysadmin can define an
                // expression similar to what is used for client classification,
                // e.g. substring(relay[0].option[17],0,6). Then, based on the
                // value of that expression for the incoming packet, the
                // reservation is matched. The expression can be specified either
                // as hex or as plain text using single quotes.
                //
                // Note: the flexible identifier requires the flex_id hook
                // library to be loaded to work.
                {
                    "flex-id": "'s0mEVaLue'",
                    "ip-address": "192.168.30.206"
                }

                // You can add more reservations here.
            ]

            // You can add more subnets there.
        },
        {
            "subnet": "192.168.100.0/24",
            "id": 100,
            "pools": [ { "pool": "192.168.100.100 - 192.168.100.200" } ],
            "option-data": [
                { "name": "routers", "data": "192.168.100.2" },
                { "name": "domain-name-servers", "data": "8.8.8.8, 8.8.4.4" }
            ]
        },
        {
            "subnet": "192.168.10.0/24",
            "id": 10,
            "pools": [ { "pool": "192.168.10.100 - 192.168.10.200" } ],
            "relay": { "ip-addresses": [ "192.168.10.1" ] },
            "option-data": [
                { "name": "routers", "data": "192.168.10.1" },
                { "name": "domain-name-servers", "data": "114.114.114.114, 8.8.8.8" }
            ]
        },
        {
            "id": 20,
            "subnet": "192.168.20.0/24",
            "pools": [ { "pool": "192.168.20.100 - 192.168.20.200" } ],
            "relay": { "ip-addresses": [ "192.168.20.1" ] },
            "option-data": [
                { "name": "routers", "data": "192.168.20.1" },
                { "name": "domain-name-servers", "data": "114.114.114.114, 8.8.4.4" }
            ]
        }
    ],

    // There are many, many more parameters that the DHCPv4 server is able to
    // use. They were not added here so as not to overwhelm people with too
    // much information at once.

    // Logging configuration starts here. Kea uses different loggers to log
    // various activities. For details (e.g. names of loggers), see Chapter 18.
    "loggers": [
        {
            // This section affects kea-dhcp4, which is the base logger for the
            // DHCPv4 component. It tells the DHCPv4 server to write all log
            // messages (of severity INFO or higher) to a file.
            "name": "kea-dhcp4",
            "output-options": [
                {
                    // Specifies the output file. There are several special
                    // values supported:
                    // - stdout (prints on standard output)
                    // - stderr (prints on standard error)
                    // - syslog (logs to syslog)
                    // - syslog:name (logs to syslog using the specified name)
                    // Any other value is considered a file name.
                    "output": "kea-dhcp4.log"

                    // Shorter log pattern suitable for use with systemd;
                    // avoids redundant information:
                    // "pattern": "%-5p %m\n",

                    // This governs whether the log output is flushed to disk
                    // after every write.
                    // "flush": false,

                    // This specifies the maximum size of the file before it is
                    // rotated.
                    // "maxsize": 1048576,

                    // This specifies the maximum number of rotated files to keep.
                    // "maxver": 8
                }
            ],

            // This specifies the severity of log messages to keep. Supported
            // values are: FATAL, ERROR, WARN, INFO, DEBUG.
            "severity": "INFO",

            // If DEBUG level is specified, this value is used. 0 is least
            // verbose, 99 is most verbose. Be cautious: Kea can generate lots
            // and lots of logs if told to do so.
            "debuglevel": 0
        }
    ]
}
}

Review the configuration file above and check the DHCP interface settings for any problems or syntax errors, then fix them.
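If Kea is installed, the authoritative syntax check is the server's own dry run, `kea-dhcp4 -t /path/to/kea-dhcp4.conf`. As a rough offline sketch, you can also strip Kea-style `//` line comments and feed the rest to a plain JSON parser. This is only a heuristic: it knows nothing of Kea's schema, and it ignores the `#` and `/* */` comment styles Kea also accepts.

```python
import json

def check_kea_config(text: str):
    """Strip Kea-style '//' line comments, then parse as JSON.
    Returns the parsed dict, or raises json.JSONDecodeError
    pointing at the offending line/column on a syntax error."""
    cleaned_lines = []
    for line in text.splitlines():
        # Drop '//' and everything after it, but only when the slashes
        # appear outside a double-quoted string (so URLs inside values
        # survive). Not a full JSON tokenizer; good enough for a sketch.
        in_string = False
        i = 0
        while i < len(line):
            ch = line[i]
            if ch == '"' and (i == 0 or line[i - 1] != '\\'):
                in_string = not in_string
            elif ch == '/' and not in_string and line[i:i + 2] == '//':
                line = line[:i]
                break
            i += 1
        cleaned_lines.append(line)
    return json.loads('\n'.join(cleaned_lines))

fragment = '''
{
    // a Kea-style line comment
    "Dhcp4": {
        "subnet4": [ { "id": 1, "subnet": "192.168.30.0/24" } ]
    }
}
'''
config = check_kea_config(fragment)
print(config["Dhcp4"]["subnet4"][0]["subnet"])  # 192.168.30.0/24
```

Run against the full file above, a `json.JSONDecodeError` would pinpoint any stray comma or unbalanced brace before the server is even restarted.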
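The double-backslash comma escaping described in the boot-file-name comment can be traced with a short sketch: plain Python, nothing Kea-specific, and the final `replace` calls only approximate Kea's CSV unescaping, which maps '\\' to '\' and '\,' to ','.

```python
import json

# In the JSON text, each comma inside the option value is written as "\\,".
json_text = '{ "name": "boot-file-name", "data": "EST5EDT4\\\\,M3.2.0/02:00\\\\,M11.1.0/02:00" }'

option = json.loads(json_text)
# JSON parsing consumes one level of escaping: "\\," becomes "\,".
assert option["data"] == r"EST5EDT4\,M3.2.0/02:00\,M11.1.0/02:00"

# CSV processing then turns "\," into a literal comma (and "\\" into "\"),
# so the value the client finally receives contains real commas:
wire_value = option["data"].replace(r"\\", "\\").replace(r"\,", ",")
print(wire_value)  # EST5EDT4,M3.2.0/02:00,M11.1.0/02:00
```

The two levels of unescaping are why the config must double every backslash: one level is eaten by JSON, the other by Kea's CSV field splitting.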
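Beyond raw syntax, the subnet declarations can be sanity-checked with the standard library: each pool range should sit inside its subnet, and the advertised router should be reachable from it. This is just a stdlib sketch over the values declared above, not a Kea feature.

```python
import ipaddress

# (subnet, pool start, pool end, routers option) as declared in the config above.
subnets = [
    ("192.168.30.0/24",  "192.168.30.10",   "192.168.30.200",  "192.0.2.1"),
    ("192.168.100.0/24", "192.168.100.100", "192.168.100.200", "192.168.100.2"),
    ("192.168.10.0/24",  "192.168.10.100",  "192.168.10.200",  "192.168.10.1"),
    ("192.168.20.0/24",  "192.168.20.100",  "192.168.20.200",  "192.168.20.1"),
]

for cidr, lo, hi, router in subnets:
    net = ipaddress.ip_network(cidr)
    lo_ip, hi_ip = ipaddress.ip_address(lo), ipaddress.ip_address(hi)
    # A pool is sane when both ends fall inside the subnet and are ordered.
    pool_ok = lo_ip in net and hi_ip in net and lo_ip <= hi_ip
    router_ok = ipaddress.ip_address(router) in net
    print(f"{cidr}: pool in subnet={pool_ok}, router in subnet={router_ok}")
```

Run on these values, the check flags the first subnet: its routers option (192.0.2.1, left over from the Kea example) lies outside 192.168.30.0/24, so clients there would get an unreachable default gateway. Replacing it with an address inside the subnet is one of the fixes the closing question is asking for.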