The two parameter related to the streams performance.

The two parameter may related to the streams performance are

_job_queue_interval and _ _spin_count.

Usually it is suggested to set _job_queue_interva=1 and _spin_count=5000 to get the better performance for streams.

We need to modify the parameter from spfile.

But how to know what current value it is.

Let us see to the query to get the current value:

select a.ksppinm "Parameter", b.ksppstvl "Session Value", c.ksppstvl "Instance Value",
a.KSPPDESC Description
from x$ksppi a, x$ksppcv b, x$ksppsv c
where a.indx = b.indx and a.indx = c.indx
and substr(ksppinm,1,1)='_'
and ksppinm like '%job%' ;

===================================

select a.ksppinm "Parameter", b.ksppstvl "Session Value", c.ksppstvl "Instance Value",
a.KSPPDESC Description
from x$ksppi a, x$ksppcv b, x$ksppsv c
where a.indx = b.indx and a.indx = c.indx
and substr(ksppinm,1,1)='_'
and ksppinm like '%spin%' ;

protected-mode no port 6379 tcp-backlog 511 timeout 0 tcp-keepalive 300 daemonize no pidfile /var/run/redis_6379.pid loglevel notice logfile "" databases 16 always-show-logo no set-proc-title yes proc-title-template "{title} {listen-addr} {server-mode}" stop-writes-on-bgsave-error yes rdbcompression yes rdbchecksum yes dbfilename dump.rdb rdb-del-sync-files no dir ./ replica-serve-stale-data yes replica-read-only yes repl-diskless-sync no repl-diskless-sync-delay 5 repl-diskless-load disabled repl-disable-tcp-nodelay no replica-priority 100 acllog-max-len 128 requirepass Guyuan@2021 # New users are initialized with restrictive permissions by default, via the # equivalent of this ACL rule 'off resetkeys -@all'. Starting with Redis 6.2, it # is possible to manage access to Pub/Sub channels with ACL rules as well. The # default Pub/Sub channels permission if new users is controlled by the # acl-pubsub-default configuration directive, which accepts one of these values: # # allchannels: grants access to all Pub/Sub channels # resetchannels: revokes access to all Pub/Sub channels # # To ensure backward compatibility while upgrading Redis 6.0, acl-pubsub-default # defaults to the 'allchannels' permission. # # Future compatibility note: it is very likely that in a future version of Redis # the directive's default of 'allchannels' will be changed to 'resetchannels' in # order to provide better out-of-the-box Pub/Sub security. Therefore, it is # recommended that you explicitly define Pub/Sub permissions for all users # rather then rely on implicit default values. Once you've set explicit # Pub/Sub for all existing users, you should uncomment the following line. # # acl-pubsub-default resetchannels # Command renaming (DEPRECATED). # # ------------------------------------------------------------------------ # WARNING: avoid using this option if possible. Instead use ACLs to remove # commands from the default user, and put them only in some admin user you # create for administrative purposes. # ------------------------------------------------------------------------ # # It is possible to change the name of dangerous commands in a shared # environment. For instance the CONFIG command may be renamed into something # hard to guess so that it will still be available for internal-use tools # but not available for general clients. # # Example: # # rename-command CONFIG b840fc02d524045429941cc15f59e41cb7be6c52 # # It is also possible to completely kill a command by renaming it into # an empty string: # # rename-command CONFIG "" # # Please note that changing the name of commands that are logged into the # AOF file or transmitted to replicas may cause problems. ################################### CLIENTS #################################### # Set the max number of connected clients at the same time. By default # this limit is set to 10000 clients, however if the Redis server is not # able to configure the process file limit to allow for the specified limit # the max number of allowed clients is set to the current file limit # minus 32 (as Redis reserves a few file descriptors for internal uses). # # Once the limit is reached Redis will close all the new connections sending # an error 'max number of clients reached'. # # IMPORTANT: When Redis Cluster is used, the max number of connections is also # shared with the cluster bus: every node in the cluster will use two # connections, one incoming and another outgoing. It is important to size the # limit accordingly in case of very large clusters. # # maxclients 10000 ############################## MEMORY MANAGEMENT ################################ # Set a memory usage limit to the specified amount of bytes. # When the memory limit is reached Redis will try to remove keys # according to the eviction policy selected (see maxmemory-policy). # # If Redis can't remove keys according to the policy, or if the policy is # set to 'noeviction', Redis will start to reply with errors to commands # that would use more memory, like SET, LPUSH, and so on, and will continue # to reply to read-only commands like GET. # # This option is usually useful when using Redis as an LRU or LFU cache, or to # set a hard memory limit for an instance (using the 'noeviction' policy). # # WARNING: If you have replicas attached to an instance with maxmemory on, # the size of the output buffers needed to feed the replicas are subtracted # from the used memory count, so that network problems / resyncs will # not trigger a loop where keys are evicted, and in turn the output # buffer of replicas is full with DELs of keys evicted triggering the deletion # of more keys, and so forth until the database is completely emptied. # # In short... if you have replicas attached it is suggested that you set a lower # limit for maxmemory so that there is some free RAM on the system for replica # output buffers (but this is not needed if the policy is 'noeviction'). # # maxmemory <bytes> # MAXMEMORY POLICY: how Redis will select what to remove when maxmemory # is reached. You can select one from the following behaviors: # # volatile-lru -> Evict using approximated LRU, only keys with an expire set. # allkeys-lru -> Evict any key using approximated LRU. # volatile-lfu -> Evict using approximated LFU, only keys with an expire set. # allkeys-lfu -> Evict any key using approximated LFU. # volatile-random -> Remove a random key having an expire set. # allkeys-random -> Remove a random key, any key. # volatile-ttl -> Remove the key with the nearest expire time (minor TTL) # noeviction -> Don't evict anything, just return an error on write operations. # # LRU means Least Recently Used # LFU means Least Frequently Used # # Both LRU, LFU and volatile-ttl are implemented using approximated # randomized algorithms. # # Note: with any of the above policies, when there are no suitable keys for # eviction, Redis will return an error on write operations that require # more memory. These are usually commands that create new keys, add data or # modify existing keys. A few examples are: SET, INCR, HSET, LPUSH, SUNIONSTORE, # SORT (due to the STORE argument), and EXEC (if the transaction includes any # command that requires memory). # # The default is: # # maxmemory-policy noeviction # LRU, LFU and minimal TTL algorithms are not precise algorithms but approximated # algorithms (in order to save memory), so you can tune it for speed or # accuracy. By default Redis will check five keys and pick the one that was # used least recently, you can change the sample size using the following # configuration directive. # # The default of 5 produces good enough results. 10 Approximates very closely # true LRU but costs more CPU. 3 is faster but not very accurate. # # maxmemory-samples 5 # Eviction processing is designed to function well with the default setting. # If there is an unusually large amount of write traffic, this value may need to # be increased. Decreasing this value may reduce latency at the risk of # eviction processing effectiveness # 0 = minimum latency, 10 = default, 100 = process without regard to latency # # maxmemory-eviction-tenacity 10 # Starting from Redis 5, by default a replica will ignore its maxmemory setting # (unless it is promoted to master after a failover or manually). It means # that the eviction of keys will be just handled by the master, sending the # DEL commands to the replica as keys evict in the master side. # # This behavior ensures that masters and replicas stay consistent, and is usually # what you want, however if your replica is writable, or you want the replica # to have a different memory setting, and you are sure all the writes performed # to the replica are idempotent, then you may change this default (but be sure # to understand what you are doing). # # Note that since the replica by default does not evict, it may end using more # memory than the one set via maxmemory (there are certain buffers that may # be larger on the replica, or data structures may sometimes take more memory # and so forth). So make sure you monitor your replicas and make sure they # have enough memory to never hit a real out-of-memory condition before the # master hits the configured maxmemory setting. # # replica-ignore-maxmemory yes # Redis reclaims expired keys in two ways: upon access when those keys are # found to be expired, and also in background, in what is called the # "active expire key". The key space is slowly and interactively scanned # looking for expired keys to reclaim, so that it is possible to free memory # of keys that are expired and will never be accessed again in a short time. # # The default effort of the expire cycle will try to avoid having more than # ten percent of expired keys still in memory, and will try to avoid consuming # more than 25% of total memory and to add latency to the system. However # it is possible to increase the expire "effort" that is normally set to # "1", to a greater value, up to the value "10". At its maximum value the # system will use more CPU, longer cycles (and technically may introduce # more latency), and will tolerate less already expired keys still present # in the system. It's a tradeoff between memory, CPU and latency. # # active-expire-effort 1 ############################# LAZY FREEING #################################### # Redis has two primitives to delete keys. One is called DEL and is a blocking # deletion of the object. It means that the server stops processing new commands # in order to reclaim all the memory associated with an object in a synchronous # way. If the key deleted is associated with a small object, the time needed # in order to execute the DEL command is very small and comparable to most other # O(1) or O(log_N) commands in Redis. However if the key is associated with an # aggregated value containing millions of elements, the server can block for # a long time (even seconds) in order to complete the operation. # # For the above reasons Redis also offers non blocking deletion primitives # such as UNLINK (non blocking DEL) and the ASYNC option of FLUSHALL and # FLUSHDB commands, in order to reclaim memory in background. Those commands # are executed in constant time. Another thread will incrementally free the # object in the background as fast as possible. # # DEL, UNLINK and ASYNC option of FLUSHALL and FLUSHDB are user-controlled. # It's up to the design of the application to understand when it is a good # idea to use one or the other. However the Redis server sometimes has to # delete keys or flush the whole database as a side effect of other operations. # Specifically Redis deletes objects independently of a user call in the # following scenarios: # # 1) On eviction, because of the maxmemory and maxmemory policy configurations, # in order to make room for new data, without going over the specified # memory limit. # 2) Because of expire: when a key with an associated time to live (see the # EXPIRE command) must be deleted from memory. # 3) Because of a side effect of a command that stores data on a key that may # already exist. For example the RENAME command may delete the old key # content when it is replaced with another one. Similarly SUNIONSTORE # or SORT with STORE option may delete existing keys. The SET command # itself removes any old content of the specified key in order to replace # it with the specified string. # 4) During replication, when a replica performs a full resynchronization with # its master, the content of the whole database is removed in order to # load the RDB file just transferred. # # In all the above cases the default is to delete objects in a blocking way, # like if DEL was called. However you can configure each case specifically # in order to instead release memory in a non-blocking way like if UNLINK # was called, using the following configuration directives. lazyfree-lazy-eviction no lazyfree-lazy-expire no lazyfree-lazy-server-del no replica-lazy-flush no # It is also possible, for the case when to replace the user code DEL calls # with UNLINK calls is not easy, to modify the default behavior of the DEL # command to act exactly like UNLINK, using the following configuration # directive: lazyfree-lazy-user-del no # FLUSHDB, FLUSHALL, and SCRIPT FLUSH support both asynchronous and synchronous # deletion, which can be controlled by passing the [SYNC|ASYNC] flags into the # commands. When neither flag is passed, this directive will be used to determine # if the data should be deleted asynchronously. lazyfree-lazy-user-flush no ################################ THREADED I/O ################################# # Redis is mostly single threaded, however there are certain threaded # operations such as UNLINK, slow I/O accesses and other things that are # performed on side threads. # # Now it is also possible to handle Redis clients socket reads and writes # in different I/O threads. Since especially writing is so slow, normally # Redis users use pipelining in order to speed up the Redis performances per # core, and spawn multiple instances in order to scale more. Using I/O # threads it is possible to easily speedup two times Redis without resorting # to pipelining nor sharding of the instance. # # By default threading is disabled, we suggest enabling it only in machines # that have at least 4 or more cores, leaving at least one spare core. # Using more than 8 threads is unlikely to help much. We also recommend using # threaded I/O only if you actually have performance problems, with Redis # instances being able to use a quite big percentage of CPU time, otherwise # there is no point in using this feature. # # So for instance if you have a four cores boxes, try to use 2 or 3 I/O # threads, if you have a 8 cores, try to use 6 threads. In order to # enable I/O threads use the following configuration directive: # # io-threads 4 # # Setting io-threads to 1 will just use the main thread as usual. # When I/O threads are enabled, we only use threads for writes, that is # to thread the write(2) syscall and transfer the client buffers to the # socket. However it is also possible to enable threading of reads and # protocol parsing using the following configuration directive, by setting # it to yes: # # io-threads-do-reads no # # Usually threading reads doesn't help much. # # NOTE 1: This configuration directive cannot be changed at runtime via # CONFIG SET. Aso this feature currently does not work when SSL is # enabled. # # NOTE 2: If you want to test the Redis speedup using redis-benchmark, make # sure you also run the benchmark itself in threaded mode, using the # --threads option to match the number of Redis threads, otherwise you'll not # be able to notice the improvements. ############################ KERNEL OOM CONTROL ############################## # On Linux, it is possible to hint the kernel OOM killer on what processes # should be killed first when out of memory. # # Enabling this feature makes Redis actively control the oom_score_adj value # for all its processes, depending on their role. The default scores will # attempt to have background child processes killed before all others, and # replicas killed before masters. # # Redis supports three options: # # no: Don't make changes to oom-score-adj (default). # yes: Alias to "relative" see below. # absolute: Values in oom-score-adj-values are written as is to the kernel. # relative: Values are used relative to the initial value of oom_score_adj when # the server starts and are then clamped to a range of -1000 to 1000. # Because typically the initial value is 0, they will often match the # absolute values. oom-score-adj no # When oom-score-adj is used, this directive controls the specific values used # for master, replica and background child processes. Values range -2000 to # 2000 (higher means more likely to be killed). # # Unprivileged processes (not root, and without CAP_SYS_RESOURCE capabilities) # can freely increase their value, but not decrease it below its initial # settings. This means that setting oom-score-adj to "relative" and setting the # oom-score-adj-values to positive values will always succeed. oom-score-adj-values 0 200 800 #################### KERNEL transparent hugepage CONTROL ###################### # Usually the kernel Transparent Huge Pages control is set to "madvise" or # or "never" by default (/sys/kernel/mm/transparent_hugepage/enabled), in which # case this config has no effect. On systems in which it is set to "always", # redis will attempt to disable it specifically for the redis process in order # to avoid latency problems specifically with fork(2) and CoW. # If for some reason you prefer to keep it enabled, you can set this config to # "no" and the kernel global to "always". disable-thp yes ############################## APPEND ONLY MODE ############################### # By default Redis asynchronously dumps the dataset on disk. This mode is # good enough in many applications, but an issue with the Redis process or # a power outage may result into a few minutes of writes lost (depending on # the configured save points). # # The Append Only File is an alternative persistence mode that provides # much better durability. For instance using the default data fsync policy # (see later in the config file) Redis can lose just one second of writes in a # dramatic event like a server power outage, or a single write if something # wrong with the Redis process itself happens, but the operating system is # still running correctly. # # AOF and RDB persistence can be enabled at the same time without problems. # If the AOF is enabled on startup Redis will load the AOF, that is the file # with the better durability guarantees. # # Please check https://redis.io/topics/persistence for more information. appendonly yes # The name of the append only file (default: "appendonly.aof") appendfilename "appendonly.aof" # The fsync() call tells the Operating System to actually write data on disk # instead of waiting for more data in the output buffer. Some OS will really flush # data on disk, some other OS will just try to do it ASAP. # # Redis supports three different modes: # # no: don't fsync, just let the OS flush the data when it wants. Faster. # always: fsync after every write to the append only log. Slow, Safest. # everysec: fsync only one time every second. Compromise. # # The default is "everysec", as that's usually the right compromise between # speed and data safety. It's up to you to understand if you can relax this to # "no" that will let the operating system flush the output buffer when # it wants, for better performances (but if you can live with the idea of # some data loss consider the default persistence mode that's snapshotting), # or on the contrary, use "always" that's very slow but a bit safer than # everysec. # # More details please check the following article: # http://antirez.com/post/redis-persistence-demystified.html # # If unsure, use "everysec". # appendfsync always appendfsync everysec # appendfsync no # When the AOF fsync policy is set to always or everysec, and a background # saving process (a background save or AOF log background rewriting) is # performing a lot of I/O against the disk, in some Linux configurations # Redis may block too long on the fsync() call. Note that there is no fix for # this currently, as even performing fsync in a different thread will block # our synchronous write(2) call. # # In order to mitigate this problem it's possible to use the following option # that will prevent fsync() from being called in the main process while a # BGSAVE or BGREWRITEAOF is in progress. # # This means that while another child is saving, the durability of Redis is # the same as "appendfsync none". In practical terms, this means that it is # possible to lose up to 30 seconds of log in the worst scenario (with the # default Linux settings). # # If you have latency problems turn this to "yes". Otherwise leave it as # "no" that is the safest pick from the point of view of durability. no-appendfsync-on-rewrite no # Automatic rewrite of the append only file. # Redis is able to automatically rewrite the log file implicitly calling # BGREWRITEAOF when the AOF log size grows by the specified percentage. # # This is how it works: Redis remembers the size of the AOF file after the # latest rewrite (if no rewrite has happened since the restart, the size of # the AOF at startup is used). # # This base size is compared to the current size. If the current size is # bigger than the specified percentage, the rewrite is triggered. Also # you need to specify a minimal size for the AOF file to be rewritten, this # is useful to avoid rewriting the AOF file even if the percentage increase # is reached but it is still pretty small. # # Specify a percentage of zero in order to disable the automatic AOF # rewrite feature. auto-aof-rewrite-percentage 100 auto-aof-rewrite-min-size 64mb # An AOF file may be found to be truncated at the end during the Redis # startup process, when the AOF data gets loaded back into memory. # This may happen when the system where Redis is running # crashes, especially when an ext4 filesystem is mounted without the # data=ordered option (however this can't happen when Redis itself # crashes or aborts but the operating system still works correctly). # # Redis can either exit with an error when this happens, or load as much # data as possible (the default now) and start if the AOF file is found # to be truncated at the end. The following option controls this behavior. # # If aof-load-truncated is set to yes, a truncated AOF file is loaded and # the Redis server starts emitting a log to inform the user of the event. # Otherwise if the option is set to no, the server aborts with an error # and refuses to start. When the option is set to no, the user requires # to fix the AOF file using the "redis-check-aof" utility before to restart # the server. # # Note that if the AOF file will be found to be corrupted in the middle # the server will still exit with an error. This option only applies when # Redis will try to read more data from the AOF file but not enough bytes # will be found. aof-load-truncated yes # When rewriting the AOF file, Redis is able to use an RDB preamble in the # AOF file for faster rewrites and recoveries. When this option is turned # on the rewritten AOF file is composed of two different stanzas: # # [RDB file][AOF tail] # # When loading, Redis recognizes that the AOF file starts with the "REDIS" # string and loads the prefixed RDB file, then continues loading the AOF # tail. aof-use-rdb-preamble yes ################################ LUA SCRIPTING ############################### # Max execution time of a Lua script in milliseconds. # # If the maximum execution time is reached Redis will log that a script is # still in execution after the maximum allowed time and will start to # reply to queries with an error. # # When a long running script exceeds the maximum execution time only the # SCRIPT KILL and SHUTDOWN NOSAVE commands are available. The first can be # used to stop a script that did not yet call any write commands. The second # is the only way to shut down the server in the case a write command was # already issued by the script but the user doesn't want to wait for the natural # termination of the script. # # Set it to 0 or a negative value for unlimited execution without warnings. lua-time-limit 5000 ################################ REDIS CLUSTER ############################### # Normal Redis instances can't be part of a Redis Cluster; only nodes that are # started as cluster nodes can. In order to start a Redis instance as a # cluster node enable the cluster support uncommenting the following: # # cluster-enabled yes # Every cluster node has a cluster configuration file. This file is not # intended to be edited by hand. It is created and updated by Redis nodes. # Every Redis Cluster node requires a different cluster configuration file. # Make sure that instances running in the same system do not have # overlapping cluster configuration file names. # # cluster-config-file nodes-6379.conf # Cluster node timeout is the amount of milliseconds a node must be unreachable # for it to be considered in failure state. # Most other internal time limits are a multiple of the node timeout. # # cluster-node-timeout 15000 # A replica of a failing master will avoid to start a failover if its data # looks too old. # # There is no simple way for a replica to actually have an exact measure of # its "data age", so the following two checks are performed: # # 1) If there are multiple replicas able to failover, they exchange messages # in order to try to give an advantage to the replica with the best # replication offset (more data from the master processed). # Replicas will try to get their rank by offset, and apply to the start # of the failover a delay proportional to their rank. # # 2) Every single replica computes the time of the last interaction with # its master. This can be the last ping or command received (if the master # is still in the "connected" state), or the time that elapsed since the # disconnection with the master (if the replication link is currently down). # If the last interaction is too old, the replica will not try to failover # at all. # # The point "2" can be tuned by user. Specifically a replica will not perform # the failover if, since the last interaction with the master, the time # elapsed is greater than: # # (node-timeout * cluster-replica-validity-factor) + repl-ping-replica-period # # So for example if node-timeout is 30 seconds, and the cluster-replica-validity-factor # is 10, and assuming a default repl-ping-replica-period of 10 seconds, the # replica will not try to failover if it was not able to talk with the master # for longer than 310 seconds. # # A large cluster-replica-validity-factor may allow replicas with too old data to failover # a master, while a too small value may prevent the cluster from being able to # elect a replica at all. # # For maximum availability, it is possible to set the cluster-replica-validity-factor # to a value of 0, which means, that replicas will always try to failover the # master regardless of the last time they interacted with the master. # (However they'll always try to apply a delay proportional to their # offset rank). # # Zero is the only value able to guarantee that when all the partitions heal # the cluster will always be able to continue. # # cluster-replica-validity-factor 10 # Cluster replicas are able to migrate to orphaned masters, that are masters # that are left without working replicas. This improves the cluster ability # to resist to failures as otherwise an orphaned master can't be failed over # in case of failure if it has no working replicas. # # Replicas migrate to orphaned masters only if there are still at least a # given number of other working replicas for their old master. This number # is the "migration barrier". A migration barrier of 1 means that a replica # will migrate only if there is at least 1 other working replica for its master # and so forth. It usually reflects the number of replicas you want for every # master in your cluster. # # Default is 1 (replicas migrate only if their masters remain with at least # one replica). To disable migration just set it to a very large value or # set cluster-allow-replica-migration to 'no'. # A value of 0 can be set but is useful only for debugging and dangerous # in production. # # cluster-migration-barrier 1 # Turning off this option allows to use less automatic cluster configuration. # It both disables migration to orphaned masters and migration from masters # that became empty. # # Default is 'yes' (allow automatic migrations). # # cluster-allow-replica-migration yes # By default Redis Cluster nodes stop accepting queries if they detect there # is at least a hash slot uncovered (no available node is serving it). # This way if the cluster is partially down (for example a range of hash slots # are no longer covered) all the cluster becomes, eventually, unavailable. # It automatically returns available as soon as all the slots are covered again. # # However sometimes you want the subset of the cluster which is working, # to continue to accept queries for the part of the key space that is still # covered. In order to do so, just set the cluster-require-full-coverage # option to no. # # cluster-require-full-coverage yes # This option, when set to yes, prevents replicas from trying to failover its # master during master failures. However the replica can still perform a # manual failover, if forced to do so. # # This is useful in different scenarios, especially in the case of multiple # data center operations, where we want one side to never be promoted if not # in the case of a total DC failure. # # cluster-replica-no-failover no # This option, when set to yes, allows nodes to serve read traffic while the # the cluster is in a down state, as long as it believes it owns the slots. # # This is useful for two cases. The first case is for when an application # doesn't require consistency of data during node failures or network partitions. # One example of this is a cache, where as long as the node has the data it # should be able to serve it. # # The second use case is for configurations that don't meet the recommended # three shards but want to enable cluster mode and scale later. A # master outage in a 1 or 2 shard configuration causes a read/write outage to the # entire cluster without this option set, with it set there is only a write outage. # Without a quorum of masters, slot ownership will not change automatically. # # cluster-allow-reads-when-down no # In order to setup your cluster make sure to read the documentation # available at https://redis.io web site. ########################## CLUSTER DOCKER/NAT support ######################## # In certain deployments, Redis Cluster nodes address discovery fails, because # addresses are NAT-ted or because ports are forwarded (the typical case is # Docker and other containers). # # In order to make Redis Cluster working in such environments, a static # configuration where each node knows its public address is needed. The # following four options are used for this scope, and are: # # * cluster-announce-ip # * cluster-announce-port # * cluster-announce-tls-port # * cluster-announce-bus-port # # Each instructs the node about its address, client ports (for connections # without and with TLS) and cluster message bus port. The information is then # published in the header of the bus packets so that other nodes will be able to # correctly map the address of the node publishing the information. # # If cluster-tls is set to yes and cluster-announce-tls-port is omitted or set # to zero, then cluster-announce-port refers to the TLS port. Note also that # cluster-announce-tls-port has no effect if cluster-tls is set to no. # # If the above options are not used, the normal Redis Cluster auto-detection # will be used instead. # # Note that when remapped, the bus port may not be at the fixed offset of # clients port + 10000, so you can specify any port and bus-port depending # on how they get remapped. If the bus-port is not set, a fixed offset of # 10000 will be used as usual. # # Example: # # cluster-announce-ip 10.1.1.5 # cluster-announce-tls-port 6379 # cluster-announce-port 0 # cluster-announce-bus-port 6380 ################################## SLOW LOG ################################### # The Redis Slow Log is a system to log queries that exceeded a specified # execution time. The execution time does not include the I/O operations # like talking with the client, sending the reply and so forth, # but just the time needed to actually execute the command (this is the only # stage of command execution where the thread is blocked and can not serve # other requests in the meantime). # # You can configure the slow log with two parameters: one tells Redis # what is the execution time, in microseconds, to exceed in order for the # command to get logged, and the other parameter is the length of the # slow log. When a new command is logged the oldest one is removed from the # queue of logged commands. # The following time is expressed in microseconds, so 1000000 is equivalent # to one second. Note that a negative number disables the slow log, while # a value of zero forces the logging of every command. slowlog-log-slower-than 10000 # There is no limit to this length. Just be aware that it will consume memory. # You can reclaim memory used by the slow log with SLOWLOG RESET. slowlog-max-len 128 ################################ LATENCY MONITOR ############################## # The Redis latency monitoring subsystem samples different operations # at runtime in order to collect data related to possible sources of # latency of a Redis instance. # # Via the LATENCY command this information is available to the user that can # print graphs and obtain reports. # # The system only logs operations that were performed in a time equal or # greater than the amount of milliseconds specified via the # latency-monitor-threshold configuration directive. When its value is set # to zero, the latency monitor is turned off. # # By default latency monitoring is disabled since it is mostly not needed # if you don't have latency issues, and collecting data has a performance # impact, that while very small, can be measured under big load. Latency # monitoring can easily be enabled at runtime using the command # "CONFIG SET latency-monitor-threshold <milliseconds>" if needed. latency-monitor-threshold 0 ############################# EVENT NOTIFICATION ############################## # Redis can notify Pub/Sub clients about events happening in the key space. # This feature is documented at https://redis.io/topics/notifications # # For instance if keyspace events notification is enabled, and a client # performs a DEL operation on key "foo" stored in the Database 0, two # messages will be published via Pub/Sub: # # PUBLISH __keyspace@0__:foo del # PUBLISH __keyevent@0__:del foo # # It is possible to select the events that Redis will notify among a set # of classes. Every class is identified by a single character: # # K Keyspace events, published with __keyspace@<db>__ prefix. # E Keyevent events, published with __keyevent@<db>__ prefix. # g Generic commands (non-type specific) like DEL, EXPIRE, RENAME, ... # $ String commands # l List commands # s Set commands # h Hash commands # z Sorted set commands # x Expired events (events generated every time a key expires) # e Evicted events (events generated when a key is evicted for maxmemory) # t Stream commands # d Module key type events # m Key-miss events (Note: It is not included in the 'A' class) # A Alias for g$lshzxetd, so that the "AKE" string means all the events # (Except key-miss events which are excluded from 'A' due to their # unique nature). # # The "notify-keyspace-events" takes as argument a string that is composed # of zero or multiple characters. The empty string means that notifications # are disabled. # # Example: to enable list and generic events, from the point of view of the # event name, use: # # notify-keyspace-events Elg # # Example 2: to get the stream of the expired keys subscribing to channel # name __keyevent@0__:expired use: # # notify-keyspace-events Ex # # By default all notifications are disabled because most users don't need # this feature and the feature has some overhead. Note that if you don't # specify at least one of K or E, no events will be delivered. notify-keyspace-events "" ############################### GOPHER SERVER ################################# # Redis contains an implementation of the Gopher protocol, as specified in # the RFC 1436 (https://www.ietf.org/rfc/rfc1436.txt). # # The Gopher protocol was very popular in the late '90s. It is an alternative # to the web, and the implementation both server and client side is so simple # that the Redis server has just 100 lines of code in order to implement this # support. # # What do you do with Gopher nowadays? Well Gopher never *really* died, and # lately there is a movement in order for the Gopher more hierarchical content # composed of just plain text documents to be resurrected. Some want a simpler # internet, others believe that the mainstream internet became too much # controlled, and it's cool to create an alternative space for people that # want a bit of fresh air. # # Anyway for the 10nth birthday of the Redis, we gave it the Gopher protocol # as a gift. # # --- HOW IT WORKS? --- # # The Redis Gopher support uses the inline protocol of Redis, and specifically # two kind of inline requests that were anyway illegal: an empty request # or any request that starts with "/" (there are no Redis commands starting # with such a slash). Normal RESP2/RESP3 requests are completely out of the # path of the Gopher protocol implementation and are served as usual as well. # # If you open a connection to Redis when Gopher is enabled and send it # a string like "/foo", if there is a key named "/foo" it is served via the # Gopher protocol. # # In order to create a real Gopher "hole" (the name of a Gopher site in Gopher # talking), you likely need a script like the following: # # https://github.com/antirez/gopher2redis # # --- SECURITY WARNING --- # # If you plan to put Redis on the internet in a publicly accessible address # to server Gopher pages MAKE SURE TO SET A PASSWORD to the instance. # Once a password is set: # # 1. The Gopher server (when enabled, not by default) will still serve # content via Gopher. # 2. However other commands cannot be called before the client will # authenticate. # # So use the 'requirepass' option to protect your instance. # # Note that Gopher is not currently supported when 'io-threads-do-reads' # is enabled. # # To enable Gopher support, uncomment the following line and set the option # from no (the default) to yes. # # gopher-enabled no ############################### ADVANCED CONFIG ############################### # Hashes are encoded using a memory efficient data structure when they have a # small number of entries, and the biggest entry does not exceed a given # threshold. These thresholds can be configured using the following directives. hash-max-ziplist-entries 512 hash-max-ziplist-value 64 # Lists are also encoded in a special way to save a lot of space. # The number of entries allowed per internal list node can be specified # as a fixed maximum size or a maximum number of elements. # For a fixed maximum size, use -5 through -1, meaning: # -5: max size: 64 Kb <-- not recommended for normal workloads # -4: max size: 32 Kb <-- not recommended # -3: max size: 16 Kb <-- probably not recommended # -2: max size: 8 Kb <-- good # -1: max size: 4 Kb <-- good # Positive numbers mean store up to _exactly_ that number of elements # per list node. # The highest performing option is usually -2 (8 Kb size) or -1 (4 Kb size), # but if your use case is unique, adjust the settings as necessary. list-max-ziplist-size -2 # Lists may also be compressed. # Compress depth is the number of quicklist ziplist nodes from *each* side of # the list to *exclude* from compression. The head and tail of the list # are always uncompressed for fast push/pop operations. Settings are: # 0: disable all list compression # 1: depth 1 means "don't start compressing until after 1 node into the list, # going from either the head or tail" # So: [head]->node->node->...->node->[tail] # [head], [tail] will always be uncompressed; inner nodes will compress. # 2: [head]->[next]->node->node->...->node->[prev]->[tail] # 2 here means: don't compress head or head->next or tail->prev or tail, # but compress all nodes between them. # 3: [head]->[next]->[next]->node->node->...->node->[prev]->[prev]->[tail] # etc. list-compress-depth 0 # Sets have a special encoding in just one case: when a set is composed # of just strings that happen to be integers in radix 10 in the range # of 64 bit signed integers. # The following configuration setting sets the limit in the size of the # set in order to use this special memory saving encoding. set-max-intset-entries 512 # Similarly to hashes and lists, sorted sets are also specially encoded in # order to save a lot of space. This encoding is only used when the length and # elements of a sorted set are below the following limits: zset-max-ziplist-entries 128 zset-max-ziplist-value 64 # HyperLogLog sparse representation bytes limit. The limit includes the # 16 bytes header. When an HyperLogLog using the sparse representation crosses # this limit, it is converted into the dense representation. # # A value greater than 16000 is totally useless, since at that point the # dense representation is more memory efficient. # # The suggested value is ~ 3000 in order to have the benefits of # the space efficient encoding without slowing down too much PFADD, # which is O(N) with the sparse encoding. The value can be raised to # ~ 10000 when CPU is not a concern, but space is, and the data set is # composed of many HyperLogLogs with cardinality in the 0 - 15000 range. hll-sparse-max-bytes 3000 # Streams macro node max size / items. The stream data structure is a radix # tree of big nodes that encode multiple items inside. Using this configuration # it is possible to configure how big a single node can be in bytes, and the # maximum number of items it may contain before switching to a new node when # appending new stream entries. If any of the following settings are set to # zero, the limit is ignored, so for instance it is possible to set just a # max entries limit by setting max-bytes to 0 and max-entries to the desired # value. stream-node-max-bytes 4096 stream-node-max-entries 100 # Active rehashing uses 1 millisecond every 100 milliseconds of CPU time in # order to help rehashing the main Redis hash table (the one mapping top-level # keys to values). The hash table implementation Redis uses (see dict.c) # performs a lazy rehashing: the more operation you run into a hash table # that is rehashing, the more rehashing "steps" are performed, so if the # server is idle the rehashing is never complete and some more memory is used # by the hash table. # # The default is to use this millisecond 10 times every second in order to # actively rehash the main dictionaries, freeing memory when possible. # # If unsure: # use "activerehashing no" if you have hard latency requirements and it is # not a good thing in your environment that Redis can reply from time to time # to queries with 2 milliseconds delay. # # use "activerehashing yes" if you don't have such hard requirements but # want to free memory asap when possible. activerehashing yes # The client output buffer limits can be used to force disconnection of clients # that are not reading data from the server fast enough for some reason (a # common reason is that a Pub/Sub client can't consume messages as fast as the # publisher can produce them). # # The limit can be set differently for the three different classes of clients: # # normal -> normal clients including MONITOR clients # replica -> replica clients # pubsub -> clients subscribed to at least one pubsub channel or pattern # # The syntax of every client-output-buffer-limit directive is the following: # # client-output-buffer-limit <class> <hard limit> <soft limit> <soft seconds> # # A client is immediately disconnected once the hard limit is reached, or if # the soft limit is reached and remains reached for the specified number of # seconds (continuously). # So for instance if the hard limit is 32 megabytes and the soft limit is # 16 megabytes / 10 seconds, the client will get disconnected immediately # if the size of the output buffers reach 32 megabytes, but will also get # disconnected if the client reaches 16 megabytes and continuously overcomes # the limit for 10 seconds. # # By default normal clients are not limited because they don't receive data # without asking (in a push way), but just after a request, so only # asynchronous clients may create a scenario where data is requested faster # than it can read. # # Instead there is a default limit for pubsub and replica clients, since # subscribers and replicas receive data in a push fashion. # # Both the hard or the soft limit can be disabled by setting them to zero. client-output-buffer-limit normal 0 0 0 client-output-buffer-limit replica 256mb 64mb 60 client-output-buffer-limit pubsub 32mb 8mb 60 # Client query buffers accumulate new commands. They are limited to a fixed # amount by default in order to avoid that a protocol desynchronization (for # instance due to a bug in the client) will lead to unbound memory usage in # the query buffer. However you can configure it here if you have very special # needs, such us huge multi/exec requests or alike. # # client-query-buffer-limit 1gb # In the Redis protocol, bulk requests, that are, elements representing single # strings, are normally limited to 512 mb. However you can change this limit # here, but must be 1mb or greater # # proto-max-bulk-len 512mb # Redis calls an internal function to perform many background tasks, like # closing connections of clients in timeout, purging expired keys that are # never requested, and so forth. # # Not all tasks are performed with the same frequency, but Redis checks for # tasks to perform according to the specified "hz" value. # # By default "hz" is set to 10. Raising the value will use more CPU when # Redis is idle, but at the same time will make Redis more responsive when # there are many keys expiring at the same time, and timeouts may be # handled with more precision. # # The range is between 1 and 500, however a value over 100 is usually not # a good idea. Most users should use the default of 10 and raise this up to # 100 only in environments where very low latency is required. hz 10 # Normally it is useful to have an HZ value which is proportional to the # number of clients connected. This is useful in order, for instance, to # avoid too many clients are processed for each background task invocation # in order to avoid latency spikes. # # Since the default HZ value by default is conservatively set to 10, Redis # offers, and enables by default, the ability to use an adaptive HZ value # which will temporarily raise when there are many connected clients. # # When dynamic HZ is enabled, the actual configured HZ will be used # as a baseline, but multiples of the configured HZ value will be actually # used as needed once more clients are connected. In this way an idle # instance will use very little CPU time while a busy instance will be # more responsive. dynamic-hz yes # When a child rewrites the AOF file, if the following option is enabled # the file will be fsync-ed every 32 MB of data generated. This is useful # in order to commit the file to the disk more incrementally and avoid # big latency spikes. aof-rewrite-incremental-fsync yes # When redis saves RDB file, if the following option is enabled # the file will be fsync-ed every 32 MB of data generated. This is useful # in order to commit the file to the disk more incrementally and avoid # big latency spikes. rdb-save-incremental-fsync yes # Redis LFU eviction (see maxmemory setting) can be tuned. However it is a good # idea to start with the default settings and only change them after investigating # how to improve the performances and how the keys LFU change over time, which # is possible to inspect via the OBJECT FREQ command. # # There are two tunable parameters in the Redis LFU implementation: the # counter logarithm factor and the counter decay time. It is important to # understand what the two parameters mean before changing them. # # The LFU counter is just 8 bits per key, it's maximum value is 255, so Redis # uses a probabilistic increment with logarithmic behavior. Given the value # of the old counter, when a key is accessed, the counter is incremented in # this way: # # 1. A random number R between 0 and 1 is extracted. # 2. A probability P is calculated as 1/(old_value*lfu_log_factor+1). # 3. The counter is incremented only if R < P. # # The default lfu-log-factor is 10. This is a table of how the frequency # counter changes with a different number of accesses with different # logarithmic factors: # # +--------+------------+------------+------------+------------+------------+ # | factor | 100 hits | 1000 hits | 100K hits | 1M hits | 10M hits | # +--------+------------+------------+------------+------------+------------+ # | 0 | 104 | 255 | 255 | 255 | 255 | # +--------+------------+------------+------------+------------+------------+ # | 1 | 18 | 49 | 255 | 255 | 255 | # +--------+------------+------------+------------+------------+------------+ # | 10 | 10 | 18 | 142 | 255 | 255 | # +--------+------------+------------+------------+------------+------------+ # | 100 | 8 | 11 | 49 | 143 | 255 | # +--------+------------+------------+------------+------------+------------+ # # NOTE: The above table was obtained by running the following commands: # # redis-benchmark -n 1000000 incr foo # redis-cli object freq foo # # NOTE 2: The counter initial value is 5 in order to give new objects a chance # to accumulate hits. # # The counter decay time is the time, in minutes, that must elapse in order # for the key counter to be divided by two (or decremented if it has a value # less <= 10). # # The default value for the lfu-decay-time is 1. A special value of 0 means to # decay the counter every time it happens to be scanned. # # lfu-log-factor 10 # lfu-decay-time 1 ########################### ACTIVE DEFRAGMENTATION ####################### # # What is active defragmentation? # ------------------------------- # # Active (online) defragmentation allows a Redis server to compact the # spaces left between small allocations and deallocations of data in memory, # thus allowing to reclaim back memory. # # Fragmentation is a natural process that happens with every allocator (but # less so with Jemalloc, fortunately) and certain workloads. Normally a server # restart is needed in order to lower the fragmentation, or at least to flush # away all the data and create it again. However thanks to this feature # implemented by Oran Agra for Redis 4.0 this process can happen at runtime # in a "hot" way, while the server is running. # # Basically when the fragmentation is over a certain level (see the # configuration options below) Redis will start to create new copies of the # values in contiguous memory regions by exploiting certain specific Jemalloc # features (in order to understand if an allocation is causing fragmentation # and to allocate it in a better place), and at the same time, will release the # old copies of the data. This process, repeated incrementally for all the keys # will cause the fragmentation to drop back to normal values. # # Important things to understand: # # 1. This feature is disabled by default, and only works if you compiled Redis # to use the copy of Jemalloc we ship with the source code of Redis. # This is the default with Linux builds. # # 2. You never need to enable this feature if you don't have fragmentation # issues. # # 3. Once you experience fragmentation, you can enable this feature when # needed with the command "CONFIG SET activedefrag yes". # # The configuration parameters are able to fine tune the behavior of the # defragmentation process. If you are not sure about what they mean it is # a good idea to leave the defaults untouched. # Enabled active defragmentation # activedefrag no # Minimum amount of fragmentation waste to start active defrag # active-defrag-ignore-bytes 100mb # Minimum percentage of fragmentation to start active defrag # active-defrag-threshold-lower 10 # Maximum percentage of fragmentation at which we use maximum effort # active-defrag-threshold-upper 100 # Minimal effort for defrag in CPU percentage, to be used when the lower # threshold is reached # active-defrag-cycle-min 1 # Maximal effort for defrag in CPU percentage, to be used when the upper # threshold is reached # active-defrag-cycle-max 25 # Maximum number of set/hash/zset/list fields that will be processed from # the main dictionary scan # active-defrag-max-scan-fields 1000 # Jemalloc background thread for purging will be enabled by default jemalloc-bg-thread yes # It is possible to pin different threads and processes of Redis to specific # CPUs in your system, in order to maximize the performances of the server. # This is useful both in order to pin different Redis threads in different # CPUs, but also in order to make sure that multiple Redis instances running # in the same host will be pinned to different CPUs. # # Normally you can do this using the "taskset" command, however it is also # possible to this via Redis configuration directly, both in Linux and FreeBSD. # # You can pin the server/IO threads, bio threads, aof rewrite child process, and # the bgsave child process. The syntax to specify the cpu list is the same as # the taskset command: # # Set redis server/io threads to cpu affinity 0,2,4,6: # server_cpulist 0-7:2 # # Set bio threads to cpu affinity 1,3: # bio_cpulist 1,3 # # Set aof rewrite child process to cpu affinity 8,9,10,11: # aof_rewrite_cpulist 8-11 # # Set bgsave child process to cpu affinity 1,10,11 # bgsave_cpulist 1,10-11 # In some cases redis will emit warnings and even refuse to start if it detects # that the system is in bad state, it is possible to suppress these warnings # by setting the following config which takes a space delimited list of warnings # to suppress # # ignore-warnings ARM64-COW-BUG 在里面那边加上bind 0.0.0.0
05-24
请翻译下面内容为中文, ====================================== INSTALLING SUBVERSION A Quick Guide ====================================== $LastChangedDate$ Contents: I. INTRODUCTION A. Audience B. Dependency Overview C. Dependencies in Detail D. Documentation II. INSTALLATION A. Building from a Tarball B. Building the Latest Source under Unix C. Building under Unix in Different Directories D. Installing from a Zip or Installer File under Windows E. Building the Latest Source under Windows F. Building using CMake III. BUILDING A SUBVERSION SERVER A. Setting Up Apache Httpd B. Making and Installing the Subversion Apache Server Module C. Configuring Apache Httpd for Subversion D. Running and Testing E. Alternative: 'svnserve' and ra_svn IV. PROGRAMMING LANGUAGE BINDINGS (PYTHON, PERL, RUBY, JAVA) I. INTRODUCTION ============ A. Audience This document is written for people who intend to build Subversion from source code. Normally, the only people who do this are Subversion developers and package maintainers. If neither of these labels fits you, we recommend you find an appropriate binary package of Subversion and install that. While the Subversion project doesn't officially release binary packages, a number of volunteers have made such packages available for different operating systems. Most Linux and BSD distributions already have Subversion packages ready to go via standard packaging channels, and other volunteers have built 'installers' for both Windows and OS X. Visit this page for package links: https://subversion.apache.org/packages.html For those of you who still wish to build from source, Subversion follows the Unix convention of "./configure && make", but it has a number of dependencies. B. Dependency Overview You'll need the following build tools to compile Subversion: * autoconf 2.59 or later (Unix only) * libtool 1.4 or later (Unix only) * a reasonable C compiler (gcc, Visual Studio, etc.) Subversion also depends on the following third-party libraries: * libapr and libapr-util (REQUIRED for client and server) The Apache Portable Runtime (APR) library provides an abstraction of operating-system level services such as file and network I/O, memory management, and so on. It also provides convenience routines for things like hashtables, checksums, and argument processing. While it was originally developed for the Apache HTTP server, APR is a standalone library used by Subversion and other products. It is a critical dependency for all of Subversion; it's the layer that allows Subversion clients and servers to run on different operating systems. * SQLite (REQUIRED for client and server) Subversion uses SQLite to manage some internal databases. * libz (REQUIRED for client and server) Subversion uses zlib for compressing binary differences. These diff streams are used everywhere -- over the network, in the repository, and in the client's working copy. * utf8proc (REQUIRED for client and server) Subversion uses utf8proc for UTF-8 support, including Unicode normalization. * Apache Serf (OPTIONAL for client) The Apache Serf library allows the Subversion client to send HTTP requests. This is necessary if you want your client to access a repository served by the Apache HTTP server. There is an alternate 'svnserve' server as well, though, and clients automatically know how to speak the svnserve protocol. Thus it's not strictly necessary for your client to be able to speak HTTP... though we still recommend that your client be built to speak both HTTP and svnserve protocols. * OpenSSL (OPTIONAL for client and server) OpenSSL enables your client to access SSL-encrypted https:// URLs (using Apache Serf) in addition to unencrypted http:// URLs. To use SSL with Subversion's WebDAV server, Apache needs to be compiled with OpenSSL as well. * Netwide Assembler (OPTIONAL for client and server) The Netwide Assembler (NASM) is used to build the (optional) assembler modules of OpenSSL. As of OpenSSL 1.1.0 NASM is the only supported assembler. * Berkeley DB (DEPRECATED and OPTIONAL for client and server) When you create a repository, you have the option of specifying a storage 'back-end' implementation. Currently, there are two options. The newer and recommended one, known as FSFS, does not require Berkeley DB. FSFS stores data in a flat filesystem. The older implementation, known as BDB, has been deprecated and is not recommended for new repositories, but is still available. BDB stores data in a Berkeley DB database. This back-end will only be available if the BDB libraries are discovered at compile time. * libsasl (OPTIONAL for client and server) If the Cyrus SASL library is detected at compile time, then the svn client (and svnserve server) will be able to utilize SASL to do various forms of authentication when speaking the svnserve protocol. * Python, Perl, Java, Ruby (OPTIONAL) Subversion is mostly a collection of C libraries with well-defined APIs, with a small collection of programs that use the APIs. If you want to build Subversion API bindings for other languages, you need to have those languages available at build time. * py3c (OPTIONAL, but REQUIRED for Python bindings) The Python 3 Compatibility Layer for C Extensions is required to build the Python language bindings. * KDE Framework 5, libsecret, GNOME Keyring (OPTIONAL for client) Subversion contains optional support for storing passwords in KWallet via KDE Framework 5 libraries (preferred) or kdelibs4, and GNOME Keyring via libsecret (preferred) or GNOME APIs. * libmagic (OPTIONAL) If the libmagic library is detected at compile time, it will be used to determine mime-types of binary files which are added to version control. Note that mime-types configured via auto-props or the mime-types-file option take precedence. C. Dependencies in Detail Subversion depends on a number of third party tools and libraries. Some of them are only required to run a Subversion server; others are necessary just for a Subversion client. This section explains what other tools and libraries will be required so that Subversion can be built with the set of features you want. On Unix systems, the './configure' script will tell you if you are missing the correct version of any of the required libraries or tools, so if you are in a real hurry to get building, you can skip straight to section II. If you want to gather the pieces you will need before starting out, however, you should read the following. If you're just installing a Subversion client, the Subversion team has created a script that downloads the minimal prerequisite libraries (Apache Portable Runtime, Sqlite, and Zlib). The script, 'get-deps.sh', is available in the same directory as this file. When run, it will place 'apr', 'apr-util', 'serf', 'zlib', and 'sqlite-amalgamation' directories directly into your unpacked Subversion distribution. With the exception of sqlite-amalgamation, they will still need to be configured, built and installed explicitly, and Subversion's own configure script may need to be told where to find them, if they were not installed in standard system locations. Note: there are optional dependencies (such as OpenSSL, swig, and httpd) which get-deps.sh does not download. Note: Because previous builds of Subversion may have installed older versions of these libraries, you may want to run some of the cleanup commands described in section II.B before installing the following. 1. Apache Portable Runtime 1.4 or newer (REQUIRED) Whenever you want to build any part of Subversion, you need the Apache Portable Runtime (APR) and the APR Utility (APR-util) libraries. If you do not have a pre-installed APR and APR-util, you will need to get these yourself: https://apr.apache.org/download.cgi On Unix systems, if you already have the APR libraries compiled and do not wish to regenerate them from source code, then Subversion needs to be able to find them. There are a couple of options to "./configure" that tell it where to look for the APR and APR-util libraries. By default it will try to locate the libraries using apr-config and apu-config scripts. These scripts provide all the relevant information for the APR and APR-util installations. If you want to specify the location of the APR library, you can use the "--with-apr=" option of "./configure". It should be able to find the apr-config script in the standard location under that directory (e.g. ${prefix}/bin). Similarly, you can specify the location of APR-util using the "--with-apr-util=" option to "./configure". It will look for the apu-config script relative to that directory. For example, if you want to use the APR libraries you built with the Apache httpd server, you could run: $ ./configure --with-apr=/usr/local/apache2 \ --with-apr-util=/usr/local/apache2 ... Notes on Windows platforms: * Do not use APR version 1.7.3 as that release contains a bug that makes it impossible for Subversion to use it properly. This issue only affects APR builds on Windows. This issue was fixed in APR version 1.7.4. See: https://lists.apache.org/thread/xd5t922jvb9423ph4j84rsp5fxks1k0z * If you check out APR and APR-util sources from their Subversion repository, be sure to use a native Windows SVN client (as opposed to Cygwin's version) so that the .dsp files get carriage-returns at the ends of their lines. Otherwise Visual Studio will complain that it doesn't recognize the .dsp files. Notes on Unix platforms: * If you check out APR and APR-util sources from their Subversion repository, you need to run the 'buildconf' script in each library's directory to regenerate the configure scripts and other files required for compiling the libraries. Afterwards, configure, build, and install both libraries before running Subversion's configure script. For example: $ cd apr $ ./buildconf $ ./configure <options...> $ make $ make install $ cd .. $ cd apr-util $ ./buildconf $ ./configure <options...> $ make $ make install $ cd .. 2. SQLite (REQUIRED) Subversion requires SQLite version 3.24.0 or above. You can meet this dependency several ways: * Use an SQLite amalgamation file. * Specify an SQLite installation to use. * Let Subversion find an installed SQLite. To use an SQLite-provided amalgamation, just drop sqlite3.c into Subversion's sqlite-amalgamation/ directory, or point to it with the --with-sqlite configure option. This file also ships with the Subversion dependencies distribution, or you can download it from SQLite: https://www.sqlite.org/download.html 3. Zlib (REQUIRED) Subversion's binary-differencing engine depends on zlib for compression. Most Unix systems have libz pre-installed, but if you need it, you can get it from http://www.zlib.net/ 4. utf8proc (REQUIRED) Subversion uses utf8proc for UTF-8 support. Configure will attempt to locate utf8proc by default using pkg-config and known paths. If it is installed in a non-standard location, then use: --with-utf8proc=/path/to/libutf8proc Alternatively, a copy of utf8proc comes bundled with the Subversion sources. If configure should use the bundled copy, use: --with-utf8proc=internal 5. autoconf 2.59 or newer (Unix only) This is required only if you plan to build from the latest source (see section II.B). Generally only developers would be doing this. 6. libtool 1.4 or newer (Unix only) This is required only if you plan to build from the latest source (see section II.B). Note: Some systems (Solaris, for example) require libtool 1.4.3 or newer. The autogen.sh script knows about that. 7. Apache Serf library 1.3.4 or newer (OPTIONAL) If you want your client to be able to speak to an Apache server (via a http:// or https:// URL), you must link against Apache Serf. Though optional, we strongly recommend this. In order to use ra_serf, you must install serf, and run Subversion's ./configure with the argument --with-serf. If serf is installed in a non-standard place, you should use --with-serf=/path/to/serf/install instead. Apache Serf can be obtained via your system's package distribution system or directly from https://serf.apache.org/. For more information on Apache Serf and Subversion's ra_serf, see the file subversion/libsvn_ra_serf/README. 8. OpenSSL (OPTIONAL) ### needs some updates. I think Apache Serf automagically handles ### finding OpenSSL, but we may need more docco here. and w.r.t ### zlib. The Apache Serf library has support for SSL encryption by relying on the OpenSSL library. a. Using OpenSSL on the client through Apache Serf On Unix systems, to build Apache Serf with OpenSSL, you need OpenSSL installed on your system, and you must add "--with-ssl" as a "./configure" parameter. If your OpenSSL installation is hard for Apache Serf to find, you may need to use "--with-libs=/path/to/lib" in addition. In particular, on Red Hat (but not Fedora Core) it is necessary to specify "--with-libs=/usr/kerberos" for OpenSSL to be found. You can also specify a path to the zlib library using "--with-libs". Under Windows, you can specify the paths to these libraries by passing the options --with-zlib and --with-openssl to gen-make.py. b. Using OpenSSL on the Apache server You can also add support for these features to an Apache httpd server to be used for Subversion using the same support libraries. The Subversion build system will not provide them, however. You add them by specifying parameters to the "./configure" script of the Apache Server instead. For getting SSL on your server, you would add the "--enable-ssl" or "--with-ssl=/path/to/lib" option to Apache's "./configure" script. Apache enables zlib support by default, but you can specify a nonstandard location for the library with the "--with-z=/path/to/dir" option. Consult the Apache documentation for more details, and for other modules you may wish to install to enhance your Subversion server. If you don't already have it, you can get a copy of OpenSSL, including instructions for building and packaging on both Unix systems and Windows, at: https://www.openssl.org/ 9. Berkeley DB 4.X (DEPRECATED and OPTIONAL) You need the Berkeley DB libraries only if you are building a Subversion server that supports the older BDB repository storage back-end, or a Subversion client that can access local BDB repositories via the file:// URI scheme. The BDB back-end has been deprecated and is not recommended for new repositories. BDB may be removed in Subversion 2.0. We recommend the newer FSFS back-end for all new repositories. FSFS does not require the Berkeley DB libraries. If in doubt, the 'svnadmin info' command, added in Subversion 1.9, can identify whether an existing repository uses BDB or FSFS. The current recommended version of Berkeley DB is 4.4.20 or newer, which brings auto-recovery functionality to the Berkeley DB database environment. If you must use an older version of Berkeley DB, we *strongly* recommend using 4.3 or 4.2 over the 4.1 or 4.0 versions. Not only are these significantly faster and more stable, but they also enable Subversion repositories to automatically clean up database journal files to save disk space. You'll need Berkeley DB installed on your system. You can get it from: http://www.oracle.com/technetwork/database/database-technologies/berkeleydb/overview/index.html If you have Berkeley DB installed in a place not searched by default for includes and libraries, add something like this: --with-berkeley-db=db.h:/usr/local/include/db4.7:/usr/local/lib/db4.7:db-4.7 to your `configure' switches, and the build process will use the Berkeley DB header and library in the named directories. You may need to use a different path, of course. Note that in order for the detection to succeed, the dynamic linker must be able to find the libraries at configure time. 10. Cyrus SASL library (OPTIONAL) If the Simple Authentication and Security Layer (SASL) library is detected on your system, then the Subversion client and svnserve server can utilize its abilities for various forms of authentication. To learn more about SASL or to get the source code, visit: http://freshmeat.net/projects/cyrussasl/ 11. Apache Web Server 2.2.X or newer (OPTIONAL) (https://httpd.apache.org/download.cgi) The Apache httpd server is one of two methods to make your Subversion repository available over a network - the other is a custom server program called svnserve, which requires no extra software packages. Building Subversion, the Apache server, and the modules that Apache needs to communicate with Subversion are complicated enough that there is a whole section at the end of this document that describes how it is done: See section III for details. 12. Python 3.x or newer (https://www.python.org/) (OPTIONAL) Subversion does not require Python for its basic operation. However, Python is required for building and testing Subversion and for using Subversion's SWIG Python bindings or hook scripts coded in Python. The majority of Subversion's test suite is written in Python, as is part of Subversion's build system. In more detail, Python is required to do any of the following: * Use the SWIG Python bindings. * Use the ctypes Python bindings. * Use hook scripts coded in Python. * Build Subversion from a tarball on Unix-like systems and run Subversion's test suite as described in section II.B. * Build Subversion on Windows as described in section II.E. * Build Subversion from a working copy checked out from Subversion's own repository (whether or not running the test suite). * Build the SWIG Python bindings. * Build the ctypes Python bindings. * Testing as described in section III.D. The Python bindings are used by: * Third-party programs (e.g., ViewVC) * Scripts distributed with Subversion itself in the tools/ subdirectory. * Any in-house scripts you may have. Python is NOT required to do any of the following: * Use the core command-line binaries (svn, svnadmin, svnsync, etc.) * Use Subversion's C libraries. * Use any of Subversion's other language bindings. * Build Subversion from a tarball on Unix-like systems without running Subversion's test suite Although this section calls for Python 3.x, Subversion still technically works with Python 2.7. However, Support for Python 2.7 is being phased out. As of 1 January 2020, Python 2.7 has reached end of life. All users are strongly encouraged to move to Python 3. Note: If you are using a Subversion distribution tarball and want to build the Python bindings for Python 2, you should rebuild the build environment in non-release mode by running 'sh autogen.sh' before running the ./configure script; see section II.B for more about autogen.sh. 13. Perl 5.8 or newer (Windows only) (OPTIONAL) To build Subversion under any of the MS Windows platforms, you will also need Perl 5.8 or newer to run apr-util's w32locatedb.pl script. 14. pkg-config (Unix only, OPTIONAL) Subversion uses pkg-config to find appropriate options used at build time. 15. D-Bus (Unix only, OPTIONAL) D-Bus is a message bus system. D-Bus is required for support for KWallet and GNOME Keyring. pkg-config is needed to find D-Bus headers and library. 16. Qt 5 or Qt 4 (Unix only, OPTIONAL) Qt is a cross-platform application framework. QtCore, QtDBus and QtGui modules are required for support for KWallet. pkg-config is needed to find Qt headers and libraries. 17. KDE 5 Framework libraries or KDELibs 4 (Unix only, OPTIONAL) Subversion contains optional support for storing passwords in KWallet. Subversion will look for KF5Wallet, KF5CoreAddons, KF5I18n APIs by default, and needs kf5-config to find them. The KDELibs 4 api is also supported. KDELibs contains core KDE libraries. Subversion uses libkdecore and libkdeui libraries when support for KWallet is enabled. kde4-config is used to get some necessary options. pkg-config, D-Bus and Qt 4 are also required. If you want to build support for KWallet, then pass the '--with-kwallet' option to `configure`. If KDE is installed in a non-standard prefix, then use: --with-kwallet=/path/to/KDE/prefix 18. GLib 2 (Unix only, OPTIONAL) GLib is a general-purpose utility library. GLib is required for support for GNOME Keyring. pkg-config is needed to find GLib headers and library. 19. GNOME Keyring (Unix only, OPTIONAL) Subversion contains optional support for storing passwords in GNOME Keyring. pkg-config is needed to find GNOME Keyring headers and library. D-Bus and GLib are also required. If you want to build support for GNOME Keyring, then pass the '--with-gnome-keyring' option to `configure`. 20. Ctypesgen (OPTIONAL) Ctypesgen is Python wrapper generator for ctypes. It is used to generate a part of Subversion Ctypes Python bindings (CSVN). If you want to build CSVN, then pass the '--with-ctypesgen' option to `configure`. If ctypesgen.py is installed in a non-standard place, then use: --with-ctypesgen=/path/to/ctypesgen.py For more information on CSVN, see subversion/bindings/ctypes-python/README. 21. libmagic (OPTIONAL) Subversion's configure script attempts to find libmagic automatically. If it is installed in a non-standard location, then use: --with-libmagic=/path/to/libmagic/prefix The files include/magic.h and lib/libmagic.so.1.0 (or similar) are expected beneath this prefix directory. If they cannot be found Subversion will be compiled without support for libmagic. If libmagic is installed but support for it should not be compiled in, then use: --with-libmagic=no If configure should fail when libmagic is not present, but only the default locations should be searched, then use: --with-libmagic 22. LZ4 (OPTIONAL) Subversion uses LZ4 compression library version r129 or above. Configure will attempt to locate the system library by default using pkg-config and known paths. If it is installed in a non-standard location, then use: --with-lz4=/path/to/liblz4 If configure should use the version bundled with the sources, use: --with-lz4=internal 23. py3c (OPTIONAL) Subversion uses the Python 3 Compatibility Layer for C Extensions (py3c) library when building the Python language bindings. As py3c is a header-only library, it is needed only to build the bindings, not to use them. Configure will attempt to locate py3c by default using pkg-config and known paths. If it is installed in a non-standard location, then use: --with-py3c=/path/to/py3c/prefix The library can be downloaded from GitHub: https://github.com/encukou/py3c On Unix systems, you can also use the provided get-deps.sh script to download py3c and several other dependencies; see the top of section I.C for more about get-deps.sh. D. Documentation The primary documentation for Subversion is the free book "Version Control with Subversion", a.k.a. "The Subversion Book", obtainable from https://svnbook.red-bean.com/. Various additional documentation exists in the doc/ subdirectory of the Subversion source. See the file doc/README for more information. II. INSTALLATION ============ Subversion support three different build systems: - Autoconf/make, for Unix builds - Visual Studio vcproj, for Windows builds - CMake, for both Unix and Windows The first two have been in use since 2001. Sections A-E below describe the classic build system. The CMake build system was created in 2024 and is still under development. It will be included in Subversion 1.15 and is expected to be the default build system starting with Subversion 1.16. Section F below describes the CMake build system. A. Building from a Tarball ------------------------------ 1. Building from a Tarball Download the most recent distribution tarball from: https://subversion.apache.org/download/ Unpack it, and use the standard GNU procedure to compile: $ ./configure $ make # make install You can also run the full test suite by running 'make check'. Even in successful runs, some tests will report XFAIL; that is normal. Failed runs are indicated by FAIL or XPASS results, or a non-zero exit code from "make check". B. Building the Latest Source under Unix ------------------------------------- These instructions assume you have already installed Subversion and checked out a working copy of Subversion's own code -- either the latest /trunk code, or some branch or tag. You also need to have already installed whatever prerequisites that version of Subversion requires (if you haven't, the ./configure step should complain). You can discard the directory created by the tarball; you're about to build the latest, greatest Subversion client. This is the procedure Subversion developers use. First off, if you have any Subversion libraries lying around from previous 'make installs', clean them up first! # rm -f /usr/local/lib/libsvn* # rm -f /usr/local/lib/libapr* # rm -f /usr/local/lib/libserf* Start the process by running "autogen.sh": $ sh ./autogen.sh This script will make sure you have all the necessary components available to build Subversion. If any are missing, you will be told where to get them from. (See the 'Dependency Overview' in section I.) Note: if the command "autoconf" on your machine does not run autoconf 2.59 or later, but you do have a new enough autoconf available, then you can specify the correct one with the AUTOCONF variable. (The AUTOHEADER variable is similar.) This may be required on Debian GNU/Linux, where "autoconf" is actually a Perl script that attempts to guess which version is required -- because of the interaction between Subversion's and APR's configuration systems, the Perl script may get it wrong. So for example, you might need to do: $ AUTOCONF=autoconf2.59 sh ./autogen.sh Once you've prepared the working copy by running autogen.sh, just follow the usual configuration and build procedure: $ ./configure $ make # make install (Optionally, you might want to pass --enable-maintainer-mode to the ./configure script. This enables debugging symbols in your binaries (among other things) and most Subversion developers use it.) Since the resulting binary depends on shared libraries, the destination library directory must be identified in your operating system's library search path. That is in either /etc/ld.so.conf or $LD_LIBRARY_PATH for Linux systems and in /etc/rc.conf for FreeBSD, followed by a run of the 'ldconfig' program. Check your system documentation for details. By identifying the destination directory, Subversion will be able to dynamically load repository access plugins. If you try to do a checkout and see an error like: subversion/libsvn_ra/ra_loader.c:209: (apr_err=170000) svn: Unrecognized URL scheme 'https://svn.apache.org/repos/asf/subversion/trunk' It probably means that the dynamic loader/linker can't find all of the libsvn_* libraries. C. Building under Unix in Different Directories -------------------------------------------- It is possible to configure and build Subversion on Unix in a directory other than the working copy. For example $ svn co https://svn.apache.org/repos/asf/subversion/trunk svn $ cd svn $ # get SQLite amalgamation if required $ chmod +x autogen.sh $ ./autogen.sh $ mkdir ../obj $ cd ../obj $ ../svn/configure [...with options as appropriate...] $ make puts the Subversion working copy in the directory svn and builds it in a separate, parallel directory obj. Why would you want to do this? Well there are a number of reasons... * You may prefer to avoid "polluting" the working copy with files generated during the build. * You may want to put the build directory and the working copy on different physical disks to improve performance. * You may want to separate source and object code and only backup the source. * You may want to remote mount the working copy on multiple machines, and build for different machines from the same working copy. * You may want to build multiple configurations from the same working copy. The last reason above is possibly the most useful. For instance you can have separate debug and optimized builds each using the same working copy. Or you may want a client-only build and a client-server build. Using multiple build directories you can rebuild any or all configurations after an edit without the need to either clean and reconfigure, or identify and copy changes into another working copy. D. Installing from a Zip or Installer File under Windows ----------------------------------------------------- Of all the ways of getting a Subversion client, this is the easiest. Download a Zip or self-extracting installer via: https://subversion.apache.org/packages.html#windows For a Zip file extract the DLLs and EXEs to a directory of your choice. Included in the download are among other tools the SVN client, the SVNADMIN administration tool and the SVNLOOK reporting tool. You may want to add the bin directory in the Subversion folder to your PATH environment variable so as to not have to use the full path when running Subversion commands. To test the installation, open a DOS box (run either "cmd" or "command" from the Start menu's "Run..." menu option), change to the directory you installed the executables into, and run: C:\test>svn co https://svn.apache.org/repos/asf/subversion/trunk svn This will get the latest Subversion sources and put them into the "svn" subdirectory. If using a self-extracting .exe file, just run it instead of unzipping it, to install Subversion. E. Building the Latest Source under Windows ---------------------------------------- E.1 Prerequisites * Microsoft Visual Studio. Any recent (2005+) version containing the Visual C++ component will work (E.g. Professional, Express, Community Edition). Make sure you enable C++ support during setup. * Python 2.7 or higher, downloaded from https://www.python.org/ which is used to generate the project files. * Perl 5.8 or higher from https://www.perl.org/get.html * Awk is needed to compile Apache. Source code is available in tools\dev\awk, run the buildwin.bat program to compile. * Apache apr, apr-util, and optionally apr-iconv libraries, version 1.4 or later (1.2 for apr-iconv). If you are building from a Subversion checkout and have not downloaded Apache 2, then get these 3 libraries from https://www.apache.org/dist/apr/. * SQLite 3.24.0 or higher from https://www.sqlite.org/download.html (3.39.4 or higher recommended) * ZLib 1.2 or higher is required and can be obtained from http://www.zlib.net/ * Either a Subversion client binary from https://subversion.apache.org/packages.html to do the initial checkout of the Subversion source or the zip file source distribution. Additional Options * [Optional] Apache Httpd 2 source, downloaded from https://httpd.apache.org/download.cgi, these instructions assume version 2.0.58. This is only needed for building the Subversion server Apache modules. ### FIXME Apache 2.2 or greater required. * [Optional] Berkeley DB for backend support of the server components are available from http://www.oracle.com/technetwork/database/database-technologies/berkeleydb/downloads/index-082944.html (Version 4.4.20 or in specific cases some higher version recommended) For more information see Section I.C.9. * [Optional] Openssl can be obtained from https://www.openssl.org/source/ * [Optional] NASM can be obtained from http://www.nasm.us/ * [Optional] A modified version of GNU libintl, called svn-win32-libintl.zip, can be used for displaying localized messages. Available at: http://subversion.tigris.org/servlets/ProjectDocumentList?folderID=2627 * [Optional] GNU gettext for generating message catalog (.mo) files from message translations. You can get the latest binaries from http://gnuwin32.sourceforge.net/. You'll need the binaries (gettext-0.14.1-bin.zip) and dependencies (gettext-0.14.1-dep.zip). E.2 Notes The Apache Serf library supports secure connections with OpenSSL and on-the-wire compression with zlib. If you want to use the secure connections feature, you should pass the option "--with-openssl" to the gen-make.py script. See Section I.C.7 for more details. E.3 Preparation This section describes how to unpack the files to make a build tree. * Make a directory SVN and cd into it. * Either checkout Subversion: svn co https://svn.apache.org/repos/asf/subversion/trunk src-trunk or unpack the zip file distribution and rename the directory to src-trunk. * Install Visual Studio Environment. You either have to tell the installer to register environment variables or run VCVARS32.BAT before building anything. If you are using a newer Visual Studio, use the 'Visual Studio 20xx Command Prompt' on the Start menu. * Install Python and add it to your path * Install Perl (it should add itself to the path) ### Subversion doesn't need perl. Only some dependencies need it (OpenSSL and some apr scripts) * Copy AWK (awk95.exe) to awk.exe (e.g. SVN\awk\awk.exe) and add the directory containing it (e.g. SVN\awk) to the path. ### Subversion doesn't need awk. Only some dependencies need it (some apr scripts) * [Optional] Install NASM and add it to your path ### Subversion doesn't need NASM. Only some dependencies need it optionally (OpenSSL) * [Optional] If you checked out Subversion from the repository and want to build Subversion with http/https access support then install the Apache Serf sources into SVN\src-trunk\serf. * [Optional] If you want BDB backend support, extract the Berkeley DB files into SVN\src-trunk\db4-win32. It's a good idea to add SVN\src-trunk\db4-win32\bin to your PATH, so that Subversion can find the Berkeley DB DLLs. [NOTE: This binary package of Berkeley DB is provided for convenience only. Please don't address questions about Berkeley DB that aren't directly related to using Subversion to the project mailing list.] If you build Berkeley DB from the source, you will have to copy the file db-x.x.x\build_win32\db.h to SVN\src-trunk\db4-win32\include, and all the import libraries to SVN\src-trunk\db4-win32\lib. Again, the DLLs should be somewhere in your path. ### Just use --with-serf instead of the hardcoded path * [Optional] If you want to build the server modules, extract Apache source into SVN\httpd-2.x.x. * If you are building from a checkout of Subversion, and you are NOT building Apache, then you will need the APR libraries. Depending on how you got your version of APR, either: - Extract the APR, APR-util and APR-iconv source distributions into SVN\apr, SVN\apr-util, and SVN\apr-iconv respectively. Or: - Extract the apr, apr-util and apr-iconv directories from the srclib folder in the Apache httpd source into SVN\apr, SVN\apr-util, and SVN\apr-iconv respectively. ### Just use --with-apr, etc. instead of the hardcoded paths * Extract the ZLib sources into SVN\zlib if you are not using the zlib included in the dependencies zip file. ### Just use --with-zlib instead of the hardcoded path * [Optional] If you want secure connection (https) client support extract OpenSSL into SVN\openssl ### And pass the path to both serf and gen-make.py * [Optional] If you want localized message support, extract svn-win32-libintl.zip into SVN\svn-win32-libintl and extract gettext-x.x.x-bin.zip and gettext-x.x.x-dep.zip into SVN\gettext-x.x.x-bin. Add SVN\gettext-x.x.x-bin\bin to your path. * Download the SQLite amalgamation from https://www.sqlite.org/download.html and extract it into SVN\sqlite-amalgamation. See I.C.12 for alternatives to using the amalgamation package. E.4 Building the Binaries To build the binaries either follow these instructions. Start in the SVN directory you created. Set up the environment (commands should be one line even if wrapped here). C:>set VER=trunk C:>set DIR=trunk C:>set BUILD_ROOT=C:\SVN C:>set PYTHONDIR=C:\Python27 C:>set AWKDIR=C:\SVN\Awk C:>set ASMDIR=C:\SVN\asm C:>set SDKINC="C:\Program Files\Microsoft SDK\include" C:>set SDKLIB="C:\Program Files\Microsoft SDK\lib" C:>set GETTEXTBIN=C:\SVN\gettext-0.14.1-bin\bin C:>PATH=%PATH%;%BUILD_ROOT%\src-%DIR%\db4-win32;%ASMDIR%; %PYTHONDIR%;%AWKDIR%;%GETTEXTBIN% C:>set INCLUDE=%SDKINC%;%INCLUDE% C:>set LIB=%SDKLIB%;%LIB% OpenSSL < 1.1.0 C:>cd openssl C:>perl Configure VC-WIN32 [*] C:>call ms\do_masm C:>nmake -f ms\ntdll.mak C:>cd out32dll C:>call ..\ms\test C:>cd ..\.. *Note: Use "call ms\do_nasm" if you have nasm instead of MASM, or "call ms\do_ms" if you don't have an assembler. Also if you are using OpenSSL >= 1.0.0 masm is no longer supported. You will have to use do_nasm or do_ms in this case. OpenSSL >= 1.1.0 C:>cd openssl C:>perl Configure VC-WIN32 C:>nmake C:>nmake test C:>cd .. Apache 2 This step is only required for building the server dso modules. ### FIXME Apache 2.2 or greater required. Old build instructions for VC6. C:>set APACHEDIR=C:\Program Files\Apache Group\Apache2 C:>msdev httpd-2.0.58\apache.dsw /MAKE "BuildBin - Win32 Release" APR If you downloaded APR / APR-UTIL / APR_ICONV by source, you will have to build these libraries first. Building these libraries on Windows is straight forward and in most cases as simple as issuing these two commands: C:>nmake -f Makefile.win C:>nmake -f Makefile.win install Please refer to the build instructions provided by the library source for actual build instructions. ZLib If you downloaded the zlib source, you will have to build ZLib first. Building ZLib using Visual Studio should be quite simple. Just open the appropriate solution and build the project zlibstat using the IDE. Please refer to the build instructions provided by the library source for actual build instructions. Note that you'd make sure to define ZLIB_WINAPI in the ZLib config header and move the lib-file into the zlib root-directory. Please note that you MUST NOT build ZLib with the included assembler optimized code. It is known to be buggy, see for example the discussion https://svn.haxx.se/dev/archive-2013-10/0109.shtml. This means that you must not define ASMV or ASMINF. Note that the VS projects in contrib\visualstudio define these in the Debug configuration. Apache Serf ### Section about Apache Serf might be required/useful to add. ### scons is required too and Apache Serf needs to be configured prior to ### be able to build Subversion using: ### scons APR=[PATH_TO_APR] APU=[PATH_TO_APU] OPENSSL=[PATH_TO_OPENSSL] ### ZLIB=[PATH_TO_ZLIB] PREFIX=[PATH_TO_SERF_DEST] ### scons check ### scons install Subversion Things to note: * If you don't want to build mod_dav_svn, omit the --with-httpd option. The zip file source distribution contains apr, apr-util and apr-iconv in the default build location. If you have downloaded the apr files yourself you will have to tell the generator where to find the APR libraries; the options are --with-apr, --with-apr-util and --with-apr-iconv. * If you would like a debug build substitute Debug for Release in the msbuild command. * There have been rumors that Subversion on Win32 can be built using the latest cygwin, you probably don't want the zip file source distribution though. ymmv. * You will also have to distribute the C runtime dll with the binaries. Also, since Apache/APR do not provide .vcproj files, you will need to convert the Apache/APR .dsp files to .vcproj files with Visual Studio before building -- just open the Apache .dsw file and answer 'Yes To All' when the conversion dialog pops up, or you can open the individual .dsp files and convert them one at a time. The Apache/APR projects required by Subversion are: apr-util\libaprutil.dsp, apr\libapr.dsp, apr-iconv\libapriconv.dsp, apr-util\xml\expat\lib\xml.dsp, apr-iconv\ccs\libapriconv_ccs_modules.dsp, and apr-iconv\ces\libapriconv_ces_modules.dsp. * If the server dso modules are being built and tested Apache must not be running or the copy of the dso modules will fail. C:>cd src-%DIR% If Apache 2 has been built and the server modules are required then gen-make.py will already have been run. If the source is from the zip file, Apache 2 has not been built so gen-make.py must be run: C:>python gen-make.py --vsnet-version=20xx --with-berkeley-db=db4-win32 --with-openssl=..\openssl --with-zlib=..\zlib --with-libintl=..\svn-win32-libintl Then build subversion: C:>msbuild subversion_vcnet.sln /t:__MORE__ /p:Configuration=Release C:>cd .. The binaries have now been built. E.5 Packaging the binaries You now need to copy the binaries ready to make the release zip file. You also need to do this to run the tests as the new binaries need to be in your path. You can use the build/win32/make_dist.py script in the Subversion source directory to do that. [TBD: Describe how to do this. Note dependencies on zip, jar, doxygen.] E.6 Testing the Binaries [TBD: It's been a long, long while since it was necessary to move binaries around for testing. win-tests.py does that automagically. Fix this section accordingly, and probably reorder, putting the packaging at the end.] The build process creates the binary test programs but it does not copy the client tests into the release test area. C:>cd src-%DIR% C:>mkdir Release\subversion\tests\cmdline C:>xcopy /S /Y subversion\tests\cmdline Release\subversion\tests\cmdline If the server dso modules have been built then copy the dso files and dlls into the Apache modules directory. C:>copy Release\subversion\mod_dav_svn\mod_dav_svn.so "%APACHEDIR%"\modules C:>copy Release\subversion\mod_authz_svn\mod_authz_svn.so "%APACHEDIR%"\modules C:>copy svn-win32-%VER%\bin\intl.dll "%APACHEDIR%\bin" C:>copy svn-win32-%VER%\bin\iconv.dll "%APACHEDIR%\bin" C:>copy svn-win32-%VER%\bin\libdb42.dll "%APACHEDIR%\bin" C:>cd .. Put the svn-win32-trunk\bin directory at the start of your path so you run the newly built binaries and not another version you might have installed. Then run the client tests: C:>PATH=%BUILD_ROOT%\svn-win32-%VER%\bin;%PATH% C:>cd src-%DIR% C:>python win-tests.py -c -r -v If the server dso modules were built configure Apache to use the mod_dav_svn and mod_authz_svn modules by making sure these lines appear uncommented in httpd.conf: LoadModule dav_module modules/mod_dav.so LoadModule dav_fs_module modules/mod_dav_fs.so LoadModule dav_svn_module modules/mod_dav_svn.so LoadModule authz_svn_module modules/mod_authz_svn.so And further down the file add location directives to point to the test repositories. Change the paths to the SVN directory you created (paths should be on one line even if wrapped here): <Location /svn-test-work/repositories> DAV svn SVNParentPath C:/SVN/src-trunk/Release/subversion/tests/cmdline/ svn-test-work/repositories </Location> <Location /svn-test-work/local_tmp/repos> DAV svn SVNPath c:/SVN/src-trunk/Release/subversion/tests/cmdline/ svn-test-work/local_tmp/repos </Location> Then restart Apache and run the tests: C:>python win-tests.py -c -r -v -u http://localhost C:>cd .. F. Building using CMake -------------------- Get the sources, either a release tarball or by checking out the official repository. The CMake build system currently only exists in /trunk and it will be included in the 1.15 release. The process for building on Unix and Windows is the same. $ python gen-make.py -t cmake $ cmake -B out [build options] $ cmake --build out "out" in the commands above is the build directory used by CMake. Build options can be added, for example: $ cmake -B out -DCMAKE_INSTALL_PREFIX=/usr/local/subversion -DSVN_ENABLE_RA_SERF=ON Build options can be listed using: $ cmake -LH Windows tricks: - Modern versions of Microsoft Visual Studio provide support for CMake projects out-of-box, including intellisense, integrated options editor, test explorer, and more. In order to use it for Subversion, open the source directory with Visual Studio, and the configuration should start automatically. For editing the cache (options), do right-click to the CMakeLists.txt file and clicking `CMake Settings for Subversion` will open the editor. After the required settings are configured, hit `F7` in order to build. For more info, check the article bellow: https://learn.microsoft.com/en-us/cpp/build/cmake-projects-in-visual-studio - There is a useful tool for bootstrapping the dependencies, vcpkg. It provides ports for the most of the Subversion's dependencies, which then could be installed via a single command. To start using it, download the registry from GitHub, bootstrap vcpkg, and install the dependencies: $ git clone https://github.com/microsoft/vcpkg $ cd vcpkg && .\bootstrap-vcpkg.bat -disableMetrics $ .\vcpkg install apr apr-util expat zlib sqlite3 [any other dependency] After this is done, vcpkg can be integrated into CMake by passing the vcpkg toolchain to CMAKE_TOOLCHAIN_FILE option. In order to do it with Visual Studio, open the CMake cache editor as explained in the previous step, and put the following into `CMake toolchain file` field, where VCPKG_ROOT is the path to vcpkg registry: <VCPKG_ROOT>/scripts/buildsystems/vcpkg.cmake III. BUILDING A SUBVERSION SERVER ============================ Subversion has two servers you can choose from: svnserve and Apache. svnserve is a small, lightweight server program that is automatically compiled when you build Subversion's source. Apache is a more heavyweight HTTP server, but tends to have more features. This section primarily focuses on how to build Apache and the accompanying mod_dav_svn server module for it. If you plan to use svnserve instead, jump right to section E for a quick explanation. A. Setting Up Apache Httpd ----------------------- 1. Obtaining and Installing Apache Httpd 2 Subversion tries to compile against the latest released version of Apache httpd 2.2+. The easiest thing for you to do is download a source tarball of the latest release and unpack that. If you have questions about the Apache httpd 2.2 build, please consult the httpd install documentation: https://httpd.apache.org/docs-2.2/install.html At the top of the httpd tree: $ ./buildconf $ ./configure --enable-dav --enable-so --enable-maintainer-mode The first arg says to build mod_dav. The second arg says to enable shared module support which is needed for a typical compile of mod_dav_svn (see below). The third arg says to include debugging information. If you built Subversion with --enable-maintainer-mode, then you should do the same for Apache; there can be problems if one was compiled with debugging and the other without. Note: if you have multiple db versions installed on your system, Apache might link to a different one than Subversion, causing failures when accessing the repository through Apache. To prevent this from happening, you have to tell Apache which db version to use and where to find db. Add --with-dbm=db4 and --with-berkeley-db=/usr/local/BerkeleyDB.4.2 to the configure line. Make sure this is the same db as the one Subversion uses. This note assumes you have installed Berkeley DB 4.2.52 at its default locations. For more info about the db requirement, see section I.C.9. You may also want to include other modules in your build. Add --enable-ssl to turn on SSL support, and --enable-deflate to turn on compression support, for example. Consult the Apache documentation for more details. All instructions below assume you configured Apache to install in its default location, /usr/local/apache2/; substitute appropriately if you chose some other location. Compile and install apache: $ make && make install B. Making and Installing the Subversion Apache Server Module --------------------------------------------------------- Go back into your subversion working copy and run ./autogen.sh if you need to. Then, assuming Apache httpd 2.2 is installed in the standard location, run: $ ./configure Note: do *not* configure subversion with "--disable-shared"! mod_dav_svn *must* be built as a shared library, and it will look for other libsvn_*.so libraries on your system. If you see a warning message that the build of mod_dav_svn is being skipped, this may be because you have Apache httpd 2.x installed in a non-standard location. You can use the "--with-apxs=" option to locate the apxs script: $ ./configure --with-apxs=/usr/local/apache2/bin/apxs Note: it *is* possible to build mod_dav_svn as a static library and link it directly into Apache. Possible, but painful. Stick with the shared library for now; if you can't, then ask. $ rm /usr/local/lib/libsvn* If you have old subversion libraries sitting on your system, libtool will link them instead of the `fresh' ones in your tree. Remove them before building subversion. $ make clean && make && make install After the make install, the Subversion shared libraries are in /usr/local/lib/. mod_dav_svn.so should be installed in /usr/local/libexec/ (or elsewhere, such as /usr/local/apache2/modules/, if you passed --with-apache-libexecdir to configure). Section II.E explains how to build the server on Windows. C. Configuring Apache Httpd for Subversion --------------------------------------- The following section is an abbreviated version of the information in the Subversion Book (https://svnbook.red-bean.com). Please read chapter 6 for more details. The following assumes you have already created a repository. For documentation on how to do that, see README. The following also assumes that you have modified /usr/local/apache2/conf/httpd.conf to reflect your setup. At a minimum you should look at the User, Group and ServerName directives. Full details on setting up apache can be found at: https://httpd.apache.org/docs-2.2/ First, your httpd.conf needs to load the mod_dav_svn module. If you pass --enable-mod-activation to Subversion's configure, 'make install' target should automatically add this line for you. In any case, if Apache HTTPD gives you an error like "Unknown DAV provider: svn", then you may want to verify that this line exists in your httpd.conf: LoadModule dav_svn_module modules/mod_dav_svn.so NOTE: if you built mod_dav as a dynamic module as well, make sure the above line appears after the one that loads mod_dav.so. Next, add this to the *bottom* of your httpd.conf: <Location /svn/repos> DAV svn SVNPath /absolute/path/to/repository </Location> This will give anyone unrestricted access to the repository. If you want limited access, read or write, you add these lines to the Location block: AuthType Basic AuthName "Subversion repository" AuthUserFile /my/svn/user/passwd/file And: a) For a read/write restricted repository: Require valid-user b) For a write restricted repository: <LimitExcept GET PROPFIND OPTIONS REPORT> Require valid-user </LimitExcept> c) For separate restricted read and write access: AuthGroupFile /my/svn/group/file <LimitExcept GET PROPFIND OPTIONS REPORT> Require group svn_committers </LimitExcept> <Limit GET PROPFIND OPTIONS REPORT> Require group svn_committers Require group svn_readers </Limit> ### FIXME Tutorials section refers to old 2.0 docs These are only a few simple examples. For a complete tutorial on Apache access control, please consider taking a look at the tutorials found under "Security" on the following page: https://httpd.apache.org/docs-2.0/misc/tutorials.html In order for 'svn cp' to work (which is actually implemented as a DAV COPY command), mod_dav needs to be able to determine the hostname of the server. A standard way of doing this is to use Apache's ServerName directive to set the server's hostname. Edit your /usr/local/apache2/conf/httpd.conf to include: ServerName svn.myserver.org If you are using virtual hosting through Apache's NameVirtualHost directive, you may need to use the ServerAlias directive to specify additional names that your server is known by. If you have configured mod_deflate to be in the server, you can enable compression support for your repository by adding the following line to your Location block: SetOutputFilter DEFLATE NOTE: If you are unfamiliar with an Apache directive, or not exactly sure about what it does, don't hesitate to look it up in the documentation: https://httpd.apache.org/docs-2.2/mod/directives.html. NOTE: Make sure that the user 'nobody' (or whatever UID the httpd process runs as) has permission to read and write the Berkeley DB files! This is a very common problem. D. Running and Testing ------------------- Fire up apache 2: $ /usr/local/apache2/bin/apachectl stop $ /usr/local/apache2/bin/apachectl start Check /usr/local/apache2/logs/error_log to make sure it started up okay. Try doing a network checkout from the repository: $ svn co http://localhost/svn/repos wc The most common reason this might fail is permission problems reading the repository db files. If the checkout fails, make sure that the httpd process has permission to read and write to the repository. You can see all of mod_dav_svn's complaints in the Apache error logfile, /usr/local/apache2/logs/error_log. To run the regression test suite for networked Subversion, see the instructions in subversion/tests/cmdline/README. For advice about tracing problems, see "Debugging the server" in https://subversion.apache.org/docs/community-guide/. E. Alternative: 'svnserve' and ra_svn ----------------------------------- An alternative network layer is libsvn_ra_svn (on the client side) and the 'svnserve' process on the server. This is a simple network layer that speaks a custom protocol over plain TCP (documented in libsvn_ra_svn/protocol): $ svnserve -d # becomes a background daemon $ svn checkout svn://localhost/usr/local/svn/repository You can use the "-r" option to svnserve to set a logical root for repositories, and the "-R" option to restrict connections to read-only access. ("Read-only" is a logical term here; svnserve still needs write access to the database in this mode, but will not allow commits or revprop changes.) 'svnserve' has built-in CRAM-MD5 authentication (so you can use non-system accounts), and can also be tunneled over SSH (so you can use existing system accounts). It's also capable of using Cyrus SASL if libsasl2 is detected at ./configure time. Please read chapter 6 in the Subversion Book (https://svnbook.red-bean.com) for details on these features. IV. PROGRAMMING LANGUAGE BINDINGS (PYTHON, PERL, RUBY, JAVA) ======================================================== For Python, Perl and Ruby bindings, see the file ./subversion/bindings/swig/INSTALL For Java bindings, see the file ./subversion/bindings/javahl/README
最新发布
06-24
评论
成就一亿技术人!
拼手气红包6.0元
还能输入1000个字符
 
红包 添加红包
表情包 插入表情
 条评论被折叠 查看
添加红包

请填写红包祝福语或标题

红包个数最小为10个

红包金额最低5元

当前余额3.43前往充值 >
需支付:10.00
成就一亿技术人!
领取后你会自动成为博主和红包主的粉丝 规则
hope_wisdom
发出的红包
实付
使用余额支付
点击重新获取
扫码支付
钱包余额 0

抵扣说明:

1.余额是钱包充值的虚拟货币,按照1:1的比例进行支付金额的抵扣。
2.余额无法直接购买下载,可以购买VIP、付费专栏及课程。

余额充值