There is no source code available for the current location.

This post describes a breakpoint problem hit while using the Japanese 32-bit Windows 7 OS with the English edition of VS2005; it was solved by deleting the project's .suo file. The likely cause is that the .suo file stores all breakpoint information.

Environment: Japanese Windows 7 32-bit, English VS2005, VC project

Problem: I set a breakpoint on a line, and VS warned that it would never be hit. So I right-clicked the breakpoint, chose Location, and checked "Allow the source code to be different from the original version".

The breakpoint was indeed hit after that, but every single F10 step popped up "There is no source code available for the current location.", which was unbearable.

Solution: searched Baidu.

1. First, following http://blog.youkuaiyun.com/lsg32/article/details/7830771, I deleted the Debug directory. That did not fix it.

2. Then I deleted the project's .pdb file. Still not fixed.

3. Finally, following stackoverflow.com/questions/314329/getting-rid-of-there-is-no-source-code-available-for-the-current-location, I deleted the project's .suo file, and that actually fixed it.


Cause (probably):

One way, that also works for Express Editions of Visual Studio (say, Visual Basic 2005 Express Edition), is to rename the .suo file. It is in the same folder as the solution file, .sln. Exit Visual Studio before renaming the file.

The .suo file contains non-critical settings, like window positions, etc. However, it also contains all the breakpoints, which is why it is probably better to rename it than delete it in case this action is regretted.

OAM Engine Configuration

The OAM engine configuration requires common infrastructure settings that affect all OAM flows. For each OAM flow, the application must configure the OAM Table attributes that define the flow behavior. This is achieved by setting the fields of the OAM Engine table. This table has 2K rules and must be partitioned between the Ingress and Egress OAM engines. The OAM engine table record is described in the CPSS_DXCH_OAM_ENTRY_STC structure. The flow configuration is described in detail in OAM Engine Single Flow Configuration.

The OAM engine detects various exceptions. The device also maintains special counters and indications of the exceptions. Exception handling configuration is described in OAM Exception – Configuration, Indications, Counters, and Recovery. Exception recovery is described in Exception Recovery.

Using the stage Parameter in OAM APIs

Most of the CPSS APIs described in this section have a parameter called stage that defines whether the API applies to Ingress or Egress OAM processing. The Ingress and Egress processing is defined by the CPSS_DXCH_OAM_STAGE_TYPE_ENT type. To set the OAM processing to the Ingress stage, use the constant CPSS_DXCH_OAM_STAGE_TYPE_INGRESS_E. To set the OAM processing to the Egress stage, use the constant CPSS_DXCH_OAM_STAGE_TYPE_EGRESS_E. If the stage parameter is omitted, the API is applicable to both Ingress and Egress stages.

OAM Engine Initialization

To enable Ingress or Egress OAM processing, call cpssDxChOamEnableSet. The OAM Engine table has 2K flow entries. The application may need to allocate contiguous areas for OAM Ingress and Egress flows in the OAM table. To set the base flow offset for each stage, call cpssDxChOamTableBaseFlowIdSet. All other OAM APIs rely on this setting for accessing the OAM table.

Keepalive Functionality Configuration

The OAM engine uses the keepalive daemon for monitoring connectivity with a peer device. Each flow in the OAM table defines keepalive attributes; the built-in aging daemon applies them. To detect LOC, the daemon uses up to 8 configurable timers. Each timer measures the time between successful keepalive message arrivals. The LOC timeout for a single flow is defined as the number of times the timer elapsed. A keepalive exception is raised if no packet arrived within the configured time. Each timer can be set to a different period, and each flow can use any of the 8 timers.

To enable keepalive detection on the device, call cpssDxChOamAgingDaemonEnableSet. Set the enable parameter to GT_TRUE to enable the aging daemon. If the daemon is enabled, the periodic keepalive check is performed on entries according to the aging settings in the OAM Engine table. Otherwise, the Ingress or Egress keepalive check is globally disabled.

The device supports 8 different aging timers per stage to provide greater granularity. To configure each of the 8 aging timers, call cpssDxChOamAgingPeriodEntrySet. The timers are configured in units of 40 ns. The applicable range of time units is 0 to 0x3FFFFFFFF, so the maximal time that can be set is roughly 10 minutes. The timers are referenced by the OAM Table entry field agingPeriodIndex described in LOC Detection Configuration.

An application may configure the keepalive engine to process dropped keepalive packets. There is a separate configuration for soft-dropped and hard-dropped packets. To enable processing of dropped packets, call cpssDxChOamKeepaliveForPacketCommandEnableSet.
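As a concrete starting point, the sequence below sketches the initialization calls named above in C. The function names come from this section; the header path, the exact parameter order, and the chosen timer value are assumptions, so treat it as an illustration rather than a verified call sequence.

```c
/* Sketch only: prototypes are assumed from the description above --
 * check the CPSS API reference for the exact signatures. */
#include <cpss/dxCh/dxChxGen/oam/cpssDxChOam.h>  /* assumed header path */

GT_STATUS oamEngineInit(GT_U8 devNum)
{
    GT_STATUS rc;

    /* Enable Ingress OAM processing. */
    rc = cpssDxChOamEnableSet(devNum, CPSS_DXCH_OAM_STAGE_TYPE_INGRESS_E, GT_TRUE);
    if (rc != GT_OK) return rc;

    /* Partition the 2K OAM table: Ingress flows start at index 0 (assumed split). */
    rc = cpssDxChOamTableBaseFlowIdSet(devNum, CPSS_DXCH_OAM_STAGE_TYPE_INGRESS_E, 0);
    if (rc != GT_OK) return rc;

    /* Enable the keepalive aging daemon for the Ingress stage. */
    rc = cpssDxChOamAgingDaemonEnableSet(devNum, CPSS_DXCH_OAM_STAGE_TYPE_INGRESS_E, GT_TRUE);
    if (rc != GT_OK) return rc;

    /* Aging timer 0: 1 ms expressed in 40 ns units (1 ms / 40 ns = 25000). */
    return cpssDxChOamAgingPeriodEntrySet(devNum, CPSS_DXCH_OAM_STAGE_TYPE_INGRESS_E,
                                          0 /* timer index */, 25000 /* 40 ns units */);
}
```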
Reporting LOC Event

Set the OAM engine to report LOC events by calling cpssDxChOamAgingBitmapUpdateModeSet with mode set to CPSS_DXCH_OAM_AGING_BITMAP_UPDATE_MODE_ONLY_FAILURES_E. This ensures the aging bitmap is updated only upon flow failure. Setting mode to CPSS_DXCH_OAM_AGING_BITMAP_UPDATE_MODE_ALL_E allows updating the aging bitmap on OK as well as on failure.

Enabling Protection LOC

The OAM Engine can trigger protection switching upon a LOC event. To enable a protection switching update, set the corresponding field of CPSS_DXCH_OAM_ENTRY_STC when calling cpssDxChOamEntrySet or cpssDxChOamPortGroupEntrySet. The protection switching configuration is described in Protection Switching. Note that the protection LOC update must be configured in the OAM Engine table at the same row as the row of the LOC table that implements the protection switch.

Monitoring Payload

In some cases, it is desirable to validate the packet payload beyond verifying that the message arrived with the correct header. The OAM engine provides the ability to monitor the packet payload for correctness. This is implemented by comparing the hash value calculated over the monitored packet fields with the configured one. The OAM engine can optionally report changes in the monitored packet data fields. To configure a contiguous area of up to 12 bits to be monitored by the hash mechanism, call cpssDxChOamHashBitSelectionSet. This setting is used by the OAM engine as described in Packet Header Correctness Detection.

OAM Table Related Configuration

For a TCAM action to assign a flow ID to an OAM packet, the respective entry in the OAM table must be configured using the cpssDxChOamEntrySet API. In addition, further configuration is required for proper processing of OAM packets, as described below.

Packet Command Profile Configuration

The OAM engine uses the Packet Opcode table to apply commands and set CPU codes for packets trapped to the CPU. The Opcode to Packet Command table is a lookup table accessed with the following two indexes:

- The 8-bit opcode from the CFM packet header
- The profile ID – the packetCommandProfile field of CPSS_DXCH_OAM_ENTRY_STC, set by the cpssDxChOamEntrySet API

Call cpssDxChOamEntrySet with the opcodeParsingEnable field of CPSS_DXCH_OAM_ENTRY_STC set to GT_TRUE in order to enable access to the Opcode to Packet Command table (a sketch is shown below). The contents of the table is a packet command of the CPSS_PACKET_CMD_ENT type, including CPSS_PACKET_CMD_LOOPBACK_E as a possible command. It is recommended to configure the table prior to enabling the OAM functionality. To configure the profile table, call cpssDxChOamOpcodeProfilePacketCommandEntrySet. If the packet command is drop or forward to CPU, cpssDxChOamOpcodeProfilePacketCommandEntrySet is also used to configure the CPU/DROP code to be sent to the CPU.

Multicast packets can be automatically assigned (profile ID + 1) for accessing the Packet Opcode table. In this way, an application can enable different handling for Multicast and Unicast flows. To enable a dedicated profile for Multicast traffic, use cpssDxChOamOpcodeProfileDedicatedMcProfileEnableSet.

Dual-Ended Loss Measurement Command

To define a packet command for Dual-Ended Loss Measurement packets, call cpssDxChOamDualEndedLmPacketCommandSet. The command types are described by CPSS_PACKET_CMD_ENT.
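The following sketch shows how the packetCommandProfile and opcodeParsingEnable fields described under Packet Command Profile Configuration might be written into an OAM table entry. The field names are taken from the text; the cpssDxChOamEntrySet prototype (devNum, stage, entryIndex, entryPtr), the profile value, and zero-initializing the unused fields are assumptions.

```c
#include <string.h>
#include <cpss/dxCh/dxChxGen/oam/cpssDxChOam.h>  /* assumed header path */

GT_STATUS oamPacketCommandProfileSet(GT_U8 devNum, GT_U32 flowId)
{
    CPSS_DXCH_OAM_ENTRY_STC entry;

    memset(&entry, 0, sizeof(entry));
    entry.opcodeParsingEnable  = GT_TRUE;  /* resolve the packet command via the Opcode table   */
    entry.packetCommandProfile = 2;        /* second index into the Opcode to Packet Command table */

    /* flowId is the OAM table row that the TCAM action binds the packet to. */
    return cpssDxChOamEntrySet(devNum, CPSS_DXCH_OAM_STAGE_TYPE_INGRESS_E, flowId, &entry);
}
```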
CPU Code Configuration for Trapped Packets

All trapped packets contain a CPU code that can be used by the application for further processing. The CPU code is constructed dynamically for each packet from 3 configured values as follows:

    CPU_result_code = OAM_CPU_Code_Base + (OAM_Table_Flow_Cpu_Code_Offset << 2) + Opcode_Packet_Command_Table_CPU_Code_Offset

where:

- OAM_CPU_Code_Base is the value configured by cpssDxChOamCpuCodeBaseSet.
- OAM_Table_Flow_Cpu_Code_Offset is the value configured for a specific flow in the OAM Engine table. For more details, see OAM Engine Single Flow Configuration.
- Opcode_Packet_Command_Table_CPU_Code_Offset is the value from the Opcode to Packet Command table, set by calling cpssDxChOamOpcodeProfilePacketCommandEntrySet.

The available CPU code offset constants are defined by the CPSS_NET_RX_CPU_CODE_ENT enumeration type.

Timestamp Configuration

CPSS provides APIs that enable time stamping in OAM frames and configure the offset where the time stamp must be inserted. To enable time stamp parsing for incoming frames, call cpssDxChOamTimeStampParsingEnableSet. To configure the Ethertype to be inserted into outgoing DM frames, call cpssDxChOamTimeStampEtherTypeSet.

Timestamping can be done anywhere within OAM packets using the PTP Timestamp table. To insert a timestamp:

1. Call cpssDxChOamEntrySet to set the timestampEnable and oamPtpOffsetIndex fields of CPSS_DXCH_OAM_ENTRY_STC. If the packet is not DM, turn off (set to GT_FALSE) opcodeParsingEnable.
2. Call cpssDxChPtpTsCfgTableSet to configure the entry of index oamPtpOffsetIndex from Step 1. Set the entry of type CPSS_DXCH_PTP_TS_CFG_ENTRY_STC, used as a parameter of cpssDxChPtpTsCfgTableSet, as follows:
   - tsMode = CPSS_DXCH_PTP_TS_TIMESTAMPING_MODE_DO_ACTION_E
   - Set tsAction of type CPSS_DXCH_PTP_TS_ACTION_ENT to the required timestamp type, for example CPSS_DXCH_PTP_TS_ACTION_ADD_INGRESS_TIME_E
   - packetFormat = CPSS_DXCH_PTP_TS_PACKET_TYPE_Y1731_E
   - ptpTransport = CPSS_DXCH_PTP_TRANSPORT_TYPE_ETHERNET_E
   - Set the L3 offset of the timestamp insertion

Packet to Opcode Table Usage

Some OAM packets are processed as known types of OAM messages (LM, DM, CCM Keep Alive). OAM types with dedicated processing are listed in CPSS_DXCH_OAM_OPCODE_TYPE_ENT. Packets are classified by matching their opcode with the predefined OAM opcode types listed in the Opcode table. Upon finding an opcode match, an internal OAM process (not an OAM Action) is triggered. Call cpssDxChOamOpcodeSet to set the table per stage and per OAM opcode type (keepalive message, LM, DM).

The following figure illustrates the common format for all OAM PDUs.

Figure 299: Common OAM PDU Format

- Set opcodeType to CPSS_DXCH_OAM_OPCODE_TYPE_LM_SINGLE_ENDED_E to configure an opcode for single-ended LM.
- Set opcodeType to CPSS_DXCH_OAM_OPCODE_TYPE_LM_DUAL_ENDED_E to define an opcode for dual-ended loss measurement.
- Set opcodeType to CPSS_DXCH_OAM_OPCODE_TYPE_KEEPALIVE_E to define an opcode for keepalive monitoring.

Note that if the opcode does not match CPSS_DXCH_OAM_OPCODE_TYPE_DM_E, no timestamp is added to the packet, even though opcode parsing is enabled and timestampEnable is set. Each flow in the OAM table is configured to either attempt opcode matching or skip it. To enable OAM Engine matching of the packet opcode to a configured one, call cpssDxChOamEntrySet and set the field opcodeParsingEnable in CPSS_DXCH_OAM_ENTRY_STC.
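As an illustration of the opcode classification described under Packet to Opcode Table Usage, the sketch below registers the Y.1731 CCM opcode (1) as a keepalive opcode. The cpssDxChOamOpcodeSet prototype shown here is an assumption based on the parameter names in the text.

```c
#include <cpss/dxCh/dxChxGen/oam/cpssDxChOam.h>  /* assumed header path */

GT_STATUS oamKeepaliveOpcodeSet(GT_U8 devNum)
{
    /* Y.1731 CCM messages use opcode 1; store it at index 0 of the keepalive
     * opcodes so the OAM engine treats matching packets as keepalive messages. */
    return cpssDxChOamOpcodeSet(devNum,
                                CPSS_DXCH_OAM_OPCODE_TYPE_KEEPALIVE_E,
                                0 /* opcodeIndex */,
                                1 /* opcodeValue: CCM */);
}
```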
Loss Measurements Configuration – Destination Offset

There is a special LM Offset table that contains a packet destination offset. The OAM engine accesses the LM Offset table to determine the offset in the packet at which to insert the LM counter data. This table is accessed according to the index configured in the OAM Engine table, as described in Loss Measurements (LM) Configuration. To configure the LM Offset table, call cpssDxChOamLmOffsetTableSet. The parameter entryIndex defines the table row. The parameter offset contains the offset value.

IETF MPLS-TP OAM Support

The OAM engine determines the packet command according to 8-bit opcode values retrieved from OAM packets. However, in MPLS-TP, the OAM is represented by a 16-bit MPLS Control Word value. The device provides a flexible way of mapping the MPLS-TP Control Word to the 8-bit opcode values used by the OAM engine. This is done using 16 profiles. To map an MPLS Channel Type to a profile, call cpssDxChOamMplsCwChannelTypeProfileSet. To configure mapping profiles, call cpssDxChPclOamChannelTypeProfileToOpcodeMappingSet.

OAM Exception – Configuration, Indications, Counters, and Recovery

Exception Overview

There are 7 OAM exceptions that may occur during OAM processing:

- Keepalive Aging Exception – Occurs when an OAM flow ages out and Loss of Continuity occurs.
- Excess Keepalive Exception – Occurs when an excess number of keepalive messages is received in one of the flows.
- RDI Status Exception – Occurs when an OAM message is received with an RDI value that differs from the current RDI status of the corresponding OAM Table entry.
- Tx Period Exception – Occurs when the transmission period of an OAM message differs from the configured transmission period in the corresponding OAM Table entry.
- Invalid Keepalive Exception – Occurs when the hash verification of a received OAM packet fails.
- MEG Level Exception – Occurs when the MEG Level of the received OAM message is lower than expected.
- Source Interface Exception – Occurs when the source interface of the OAM message is different from the one expected.

The device also maintains a summary exception indication. It is set if any of the above exceptions occurs. The CPSS_DXCH_OAM_EXCEPTION_TYPE_ENT type must be used to define the exception type in any of the exception-related APIs described in this section.

Exception Action Configuration

CPSS provides an API that defines the command to apply to a packet upon exception and the data to forward to the CPU if CPU TRAP was asserted upon exception. To bind a command and a CPU code to an exception, call cpssDxChOamExceptionConfigSet. The structure CPSS_DXCH_OAM_EXCEPTION_CONFIG_STC defines the command and CPU data for each exception. The commands to apply to the packet upon exception are listed by CPSS_PACKET_CMD_ENT. The codes to pass to the CPU are listed by CPSS_NET_RX_CPU_CODE_ENT.

Exception Counters Access

The device maintains counters for each exception type at the device level (a cumulative counter for exceptions that occurred in all 2K flows). Call cpssDxChOamExceptionCounterGet to obtain the current value of the device-level exception counter for the specified exception type. Note that the exception counters are not cleared on read and wrap around upon reaching the maximal value (2^32 - 1). Counter types are listed by CPSS_DXCH_OAM_EXCEPTION_TYPE_ENT.
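To tie these pieces together, here is a hedged sketch that binds a TRAP command and a CPU code to the MEG Level exception (see Exception Action Configuration above) and then reads the device-level counter for that exception. The structure field names, the specific exception-type and CPU-code enum members, and both prototypes are assumptions to be verified against the CPSS reference.

```c
#include <string.h>
#include <cpss/dxCh/dxChxGen/oam/cpssDxChOam.h>  /* assumed header path */

GT_STATUS oamMegLevelExceptionTrapSet(GT_U8 devNum)
{
    CPSS_DXCH_OAM_EXCEPTION_CONFIG_STC cfg;
    GT_STATUS rc;
    GT_U32    counter;

    memset(&cfg, 0, sizeof(cfg));
    cfg.command = CPSS_PACKET_CMD_TRAP_TO_CPU_E;  /* send violating packets to the CPU    */
    cfg.cpuCode = CPSS_NET_FIRST_USER_DEFINED_E;  /* assumed user-defined CPU code        */

    rc = cpssDxChOamExceptionConfigSet(devNum, CPSS_DXCH_OAM_STAGE_TYPE_INGRESS_E,
                                       CPSS_DXCH_OAM_EXCEPTION_TYPE_MEG_LEVEL_E, &cfg);
    if (rc != GT_OK) return rc;

    /* Read back the cumulative device-level counter for the same exception type. */
    return cpssDxChOamExceptionCounterGet(devNum, CPSS_DXCH_OAM_STAGE_TYPE_INGRESS_E,
                                          CPSS_DXCH_OAM_EXCEPTION_TYPE_MEG_LEVEL_E, &counter);
}
```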
Exception Recovery

At times, the exception state toggles from Fail to Pass. In such cases, it is possible to assign a pre-configured Recovery Packet Command and CPU/drop code to the packet that triggered the state change. This allows notifying the application of flow recovery, for example by assigning a MIRROR command to the packet. To achieve that, call cpssDxChOamExceptionRecoveryConfigSet with exceptionCommandPtr (CPSS_DXCH_OAM_EXCEPTION_COMMAND_CONFIG_STC) set to the desired exception recovery configuration, per the specified exception type and OAM direction/stage (Ingress/Egress).

Exception Storm Suppression

This section is applicable to the Falcon family of devices. CPSS allows suppressing exception storms for OAM exceptions, while it is still possible to assign a command and a CPU code (the latter for packets marked as TO CPU) to the respective packets. To suppress exception storms:

1. Enable exception suppression for the desired exception type in the relevant OAM table entry (CPSS_DXCH_OAM_ENTRY_STC). The following fields are available:
   - Keepalive aging – keepaliveAgingStormSuppressEnable
   - Invalid keepalive hash – invalidHashKeepaliveStormSuppressEnable
   - MEG level – megLevelStormSuppressEnable
   - Source interface – sourceInterfaceStormSuppressEnable
   - Tx period – txPeriodStormSuppressEnable
   NOTE: For an explanation of each of these exception types, see OAM Exception – Configuration, Indications, Counters, and Recovery.
2. Call cpssDxChOamExceptionSuppressConfigSet with exceptionCommandPtr (CPSS_DXCH_OAM_EXCEPTION_COMMAND_CONFIG_STC) set to the desired OAM packet handling configuration, per the specified exception type and OAM direction/stage (Ingress/Egress).

Exception Status Indication

The device maintains 2 structures per exception type: the device exception status vector and the flow exception status table.

Device Exception Status Access

The device exception status vector has 64 bits, where each bit represents the cumulative exception status of 32 consecutive flows. For example, if bit 3 is set to 1, there is an exception in one of the flows from flow 96 up to flow 127. To read the device exception status vector of all 2K flows, call cpssDxChOamExceptionGroupStatusGet. Set the exceptionType parameter to indicate the required exception type.

Single-Flow Exception Status Access

For each of the above exceptions, the device maintains an exception status indication table. The exception status indication table has 64 rows. Each row has 32 bits, one bit per OAM flow. When an exception occurs for flow i, the OAM engine sets bit i in the corresponding exception table row.

Figure 300: Calculation of Flow ID with Exception

To get the status of 32 flow exceptions, call cpssDxChOamExceptionStatusGet and provide the exception type and the row index that contains the required flow exception. The cpssDxChOamExceptionGroupStatusGet API provides the row IDs to be used as inputs to cpssDxChOamExceptionStatusGet. In Falcon devices, obtain the exception status by calling cpssDxChOamPortGroupEntryGet. To detect which flow caused the exception, call cpssDxChOamExceptionGroupStatusGet. The indexes of the set bits in the returned vector groupStatusArr must be used as input parameters to cpssDxChOamExceptionStatusGet. The example shown in the previous figure explains how to calculate the flow ID that caused the exception; see the sketch below.
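The calculation illustrated by Figure 300 reduces to a few lines of arithmetic. The helper below only illustrates the index math; it is not a CPSS API.

```c
/* groupIndex is the index of a set bit in the 64-bit group vector returned by
 * cpssDxChOamExceptionGroupStatusGet (one bit per group of 32 flows); rowStatus
 * is the 32-bit row returned by cpssDxChOamExceptionStatusGet for that group. */
GT_U32 oamExceptionFlowId(GT_U32 groupIndex, GT_U32 rowStatus)
{
    GT_U32 bit;

    for (bit = 0; bit < 32; bit++)
    {
        if (rowStatus & (1U << bit))
        {
            /* Each group covers 32 consecutive flows, so the failing flow is: */
            return groupIndex * 32 + bit;
        }
    }
    return 0xFFFFFFFF; /* no exception bit set in this row */
}
```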
OAM Engine Single Flow Configuration

The OAM engine provides building blocks to implement any of the CFM protocols defined by the Ethernet OAM standards 802.1ag/Y.1731, the MPLS OAM ITU-T Y.1711 standard, and others. CFM supports 3 protocols with 3 message types:

- Linktrace Protocol with the Linktrace Message (LTM)
- Continuity Check Protocol with the Continuity Check Message (CCM)
- Loopback Protocol with the Loopback Message (LBM)

The standards also introduce requirements for filtering CFM messages, Delay Measurements (DM), and Loss Measurements (LM), as well as for sending and detecting indications of local alarms (RDI). The above requirements can be supported by configuring the entry in the OAM Engine table. To configure an OAM Engine table entry, call cpssDxChOamEntrySet or cpssDxChOamPortGroupEntrySet. All the settings are configured through the fields of the CPSS_DXCH_OAM_ENTRY_STC structure; the fields described in this section are assumed to be members of this structure. The OAM Engine table is configured for each OAM flow and covers the following:

- OAM Packet Parsing
- MEG Level Filtering Configuration
- Source Interface Filtering Configuration
- Keepalive Monitoring Configuration
- Delay Measurement (DM) Configuration
- Loss Measurements (LM) Configuration

OAM Packet Parsing

Set opcodeParsingEnable to GT_TRUE to use the packet opcode to determine the packet command. This field is typically enabled for OAM flows of 802.1ag / Y.1731 / MPLS-TP OAM, and is typically disabled for flows of other OAM protocols, such as BFD or Y.1711. If set, the packet command is determined using the Opcode-to-packet-command table. For LM and DM processing, set this field to apply the LM and DM actions only to packets whose opcode matches the configured opcodes. If opcodeParsingEnable is not set, the DM or LM action is applied to any packet that passes the TTI or PCL classification and is referred to OAM processing. For details on the DM processing, see Delay Measurement (DM) Configuration. For details on the LM processing, see Loss Measurements (LM) Configuration.

MEG Level Filtering Configuration

The IEEE 802.1ag standard specifies that OAM messages below the configured MEG level must be dropped. In the following example, the device is configured to process OAM packets for MEG level = 3 and VID = 10 on a given port. Packets with MEG levels 0, 1, and 2 must be dropped, while packets with levels above 3 must be forwarded. Set the megLevelCheckEnable parameter to GT_TRUE to enable MEG filtering and set megLevel = 3. CFM packets of any MEG level for that port and VID = 10 will be classified to the OAM engine. The OAM engine will drop all packets below level 3, while CFM frames above level 3 will be forwarded. The CFM packets of MEG level 3 will undergo OAM processing according to the Opcode to Packet Command mapping table configuration. The MEG Level exception occurs when the MEG level of the received OAM message is lower than expected. A configuration sketch is shown below.
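A minimal sketch of the MEG level filtering entry from the example above. Field names come from the text; the cpssDxChOamEntrySet prototype and the zero-initialization of the remaining fields are assumptions.

```c
#include <string.h>
#include <cpss/dxCh/dxChxGen/oam/cpssDxChOam.h>  /* assumed header path */

GT_STATUS oamMegLevelFilterSet(GT_U8 devNum, GT_U32 flowId)
{
    CPSS_DXCH_OAM_ENTRY_STC entry;

    memset(&entry, 0, sizeof(entry));
    entry.opcodeParsingEnable = GT_TRUE; /* resolve the command via the Opcode table */
    entry.megLevelCheckEnable = GT_TRUE; /* drop CFM messages below the MEG level    */
    entry.megLevel            = 3;       /* MEG level of this Maintenance Point      */

    return cpssDxChOamEntrySet(devNum, CPSS_DXCH_OAM_STAGE_TYPE_INGRESS_E, flowId, &entry);
}
```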
Multiple MEG Level Filtering

The same IEEE 802.1ag standard specifies that multiple MEG levels may be defined for a single interface. The following example explains how to configure 2 separate Maintenance Points (MP). There are 2 MPs for the same service, one at level 3 and another at level 5:

- Port=0, VID=7, MEG Level=3
- Port=0, VID=7, MEG Level=5

In this case, 2 separate OAM Table entries are created, one for each of these MPs:

- The first entry must not perform MEG filtering: megLevelCheckEnable = GT_FALSE.
- The second entry has filtering enabled for MEG Level = 5: megLevelCheckEnable = GT_TRUE; megLevel = 5.

Two corresponding TCAM rules are created for these flows (either in the TTI or in the PCL):

- First rule – EtherType=CFM, Port=0, VID=7, MEG Level=3.
- Second rule (must appear after the first one) – EtherType=CFM, Port=0, VID=7, MEG Level=*.

The first rule binds the OAM flow with MEG Level=3 to the corresponding OAM entry. The second rule binds the OAM flow to the second OAM entry, resulting in MEG Level filtering. OAM packets with MEG level 3 are matched by the first TCAM rule and processed by the first OAM entry. OAM packets with MEG levels other than 3 are matched by the second TCAM rule and processed by the second OAM entry. Thus, the following MEG Levels are dropped: 0, 1, 2, 4, while all packets with MEG Levels above 5 are forwarded.

Source Interface Filtering Configuration

Source interface filtering is defined in IEEE 802.1ag. The device can be configured to detect source interface violations. The Source Interface exception occurs when the source interface of the OAM message is different from the one configured, as explained further. If the classification rules do not use the source interface as a classification parameter, the OAM frames may arrive from different interfaces. Set sourceInterfaceCheckEnable to enable source interface filtering. Set sourceInterface to define the filtering interface. To enable packet filtering from any port except for the configured one, set sourceInterfaceCheckMode to CPSS_DXCH_OAM_SOURCE_INTERFACE_CHECK_MODE_MATCH_E. Set sourceInterfaceCheckMode to CPSS_DXCH_OAM_SOURCE_INTERFACE_CHECK_MODE_NO_MATCH_E to raise an exception if an OAM packet arrives from an interface other than the one set in the sourceInterface field.

Multiple Interface Filtering

It is possible to configure filtering of multiple interfaces on the same device. Multiple MEPs can be defined within a single switch with the same VID and MEG level, but with different interfaces. The following example shows how to configure processing of OAM packets from 2 different interfaces, while dropping OAM packets from any other interface. For example, 2 separate Down Maintenance Points (MP) may be defined as follows:

- ePort=0, VID=7, MEG Level=3
- ePort=1, VID=7, MEG Level=3

In this case, 2 separate OAM Table entries are created, one for each of these MPs:

- First entry – source interface filtering is disabled: sourceInterfaceCheckEnable = GT_FALSE.
- Second entry – source interface filtering is enabled as follows: sourceInterfaceCheckEnable = GT_TRUE; sourceInterface.portNum = 1; sourceInterfaceCheckMode = CPSS_DXCH_OAM_SOURCE_INTERFACE_CHECK_MODE_MATCH_E.

Two corresponding TCAM rules are created for these flows (either in the TTI or in the PCL):

- First rule – EtherType=CFM, ePort=0, VID=7.
- Second rule (must appear after the first one) – EtherType=CFM, ePort=*, VID=7.

The first rule binds the OAM flow with ePort=0 to the corresponding OAM entry. The second rule binds the OAM flow to the second OAM entry, resulting in source interface filtering. Thus, OAM packets with VID=7 from ePorts 0 or 1 are not dropped, while packets from other ports are dropped. A configuration sketch for the second entry is shown below.
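The second entry of the multiple-interface example might be written as follows. Field names follow the text; the cpssDxChOamEntrySet prototype and the exact layout of the sourceInterface member are assumptions.

```c
#include <string.h>
#include <cpss/dxCh/dxChxGen/oam/cpssDxChOam.h>  /* assumed header path */

GT_STATUS oamSourceIfFilterSet(GT_U8 devNum, GT_U32 flowId)
{
    CPSS_DXCH_OAM_ENTRY_STC entry;

    memset(&entry, 0, sizeof(entry));
    entry.sourceInterfaceCheckEnable = GT_TRUE;
    entry.sourceInterface.portNum    = 1;   /* expected ingress ePort (member layout assumed) */
    entry.sourceInterfaceCheckMode   = CPSS_DXCH_OAM_SOURCE_INTERFACE_CHECK_MODE_MATCH_E;

    return cpssDxChOamEntrySet(devNum, CPSS_DXCH_OAM_STAGE_TYPE_INGRESS_E, flowId, &entry);
}
```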
Keepalive Monitoring Configuration

Keepalive monitoring provides the following configurable functionalities:

- LOC Detection Configuration
- Packet Header Correctness Detection
- Excess Keepalive Message Detection

LOC Detection Configuration

To define the keepalive timeout for the flow, set the agingPeriodIndex field to point to one of the 8 aging timers described in Keepalive Functionality Configuration. Set the agingThreshold field to configure the number of periods of the selected aging timer. LOC is detected if no CCM packet arrives during the time period defined by agingThreshold. The Keepalive Aging exception occurs when an OAM flow ages out and LOC occurs. For example, to configure a LOC timeout of 100 ms using an aging timer of 1 ms, set agingThreshold = 100. The Keepalive Aging exception then occurs if a message does not arrive within 100 ms.

Packet Header Correctness Detection

The device can be configured to detect the correctness of a packet header. Set the hashVerifyEnable field to enable detection. If enabled, the packet header is verified against the hash value set in the flowHash field. This field can either be configured by an API or be set dynamically by the device according to the first OAM packet. To use the configured value, set the lockHashValueEnable field to GT_TRUE. Otherwise, the OAM engine controls this field. The packet header correctness check is based on monitoring a 12-bit hash value out of the 32-bit hash value computed by the hash generator. To select packet fields and a hash method, see Hash Modes and Mechanism. The configuration of a contiguous area of up to 12 bits to be monitored by the hash mechanism is described in Monitoring Payload.

Excess Keepalive Message Detection

The OAM engine can be configured to detect excess keepalive messages. The excess keepalive detection algorithm raises the exception if, during the configured detection time, the number of received keepalive messages exceeds the threshold. Set excessKeepaliveDetectionEnable to detect excess keepalive messages. To configure the detection time, set excessKeepalivePeriodThreshold to the number of aging timer periods and excessKeepaliveMessageThreshold to the minimal number of messages expected during the configured period. The Excess Keepalive exception occurs when an excess number of keepalive messages is received. For example, to detect excess keepalive frames within 100 ms with a minimal number of messages of 4, when the aging timer is configured to a period of 1 ms, set: excessKeepalivePeriodThreshold = 100; excessKeepaliveMessageThreshold = 4.

The OAM engine may also be set to compare the period of received keepalive packets with the configured one. To enable this check, set the periodCheckEnable field and set the expected period in the keepaliveTxPeriod field. The Tx Period exception occurs when the transmission period of an OAM message differs from the configured transmission period in the corresponding OAM Table entry.

RDI Check Configuration

The OAM engine can be configured to compare the RDI field in the packet with the one configured in the OAM Engine table. To enable this check, set the rdiCheckEnable field. The RDI check is performed only if the keepaliveAgingEnable field is set. The OAM Engine monitors the RDI bit that was extracted into a UDB according to the profile. The expected RDI location must be set by calling cpssDxChPclOamRdiMatchingSet. The RDI Status exception occurs when an OAM message is received with an RDI value that differs from the current RDI status of the corresponding OAM Table entry. A keepalive monitoring sketch is shown below.
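The keepalive monitoring fields described above (LOC timeout, excess keepalive detection, RDI check) could be combined in one entry as sketched below. Field names come from the text; the prototype and the specific threshold values (matching the 1 ms timer example) are assumptions.

```c
#include <string.h>
#include <cpss/dxCh/dxChxGen/oam/cpssDxChOam.h>  /* assumed header path */

GT_STATUS oamKeepaliveMonitorSet(GT_U8 devNum, GT_U32 flowId)
{
    CPSS_DXCH_OAM_ENTRY_STC entry;

    memset(&entry, 0, sizeof(entry));
    entry.keepaliveAgingEnable            = GT_TRUE;
    entry.agingPeriodIndex                = 0;    /* aging timer 0, e.g. configured to 1 ms   */
    entry.agingThreshold                  = 100;  /* LOC after 100 timer periods (100 ms)     */
    entry.excessKeepaliveDetectionEnable  = GT_TRUE;
    entry.excessKeepalivePeriodThreshold  = 100;  /* detection window: 100 timer periods      */
    entry.excessKeepaliveMessageThreshold = 4;    /* more than 4 CCMs in the window = excess  */
    entry.rdiCheckEnable                  = GT_TRUE;

    return cpssDxChOamEntrySet(devNum, CPSS_DXCH_OAM_STAGE_TYPE_INGRESS_E, flowId, &entry);
}
```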
Delay Measurement (DM) Configuration

The OAM Engine provides a convenient way to configure time stamping for implementing accurate delay measurement functionality. The device maintains an internal Time of Day (ToD) counter that is used for time stamping. This ToD counter can be synchronized to a network Grandmaster clock using the Precision Time Protocol (PTP) or to a time server using the Network Time Protocol (NTP). For details on synchronizing the ToD, see Time Synchronization and Timestamping.

The OAM engine uses the offset table defined in Time Synchronization and Timestamping to read the offset in the packet at which the time stamp must be inserted. The OAM Engine entry is configured with an index into the offset table. To enable time stamping in the OAM packets serviced by a flowId entry of the OAM engine, call cpssDxChOamEntrySet and set the following fields:

- Set the opcodeParsingEnable field to GT_TRUE.
- Set the timestampEnable field to GT_TRUE.
- Configure the offset of the packet where the time stamp is copied by setting the offsetIndex field to point to the offset table entry with the configured offset.

Time stamping is performed only for packets whose opcode matches one of the 16 opcodes available in the DM Opcodes table. To configure the 16 DM opcodes, call cpssDxChOamOpcodeSet. Set the opcodeType parameter to CPSS_DXCH_OAM_OPCODE_TYPE_DM_E and set the 16 DM opcodes in opcodeValue. The opcodeIndex parameter defines the required index in the DM opcodes table. If opcodeParsingEnable is set to GT_FALSE, timestamps are applied to any packet classified to the OAM flow.

Loss Measurements (LM) Configuration

Loss Measurement (LM) is performed by reading billing and policy counters and inserting them into OAM frames. All the service counters are assigned using the TTI or PCL classification rules. The TTI, IPCL, or EPCL engine rules must be set to bind the traffic to counters. Only the green conforming counter out of the 3 billing counters is used for LM. For more details on configuring counters in the TTI engine, see TTI Rules and Actions. For more details on configuring counters in a PCL lookup, see Policy Action. An OAM Engine Table rule defines where to insert LM counters into a frame. The OAM engine maintains a table that allows setting LM values at a different offset depending on the packet opcode. The LM configuration is explained in detail further in this section.

The OAM packets are identified and classified into flows in the TTI (see TTI Rules and Actions). The relevant rule action must have the following fields set:

- oamProcessEnable + flowId – Bind the packet to a specific entry in the OAM table.
- bindToPolicer – This field must be enabled in the action entry if LM counting is enabled for this flow.
- policerIndex – Specifies the index of the LM counting entry when Bind To Policer Counter is set.

To bind the Policer counter to the OAM, call cpssDxChPclRuleSet as defined in Policy Action. To define LM counting, call cpssDxChOamEntrySet and set the following fields in the structure CPSS_DXCH_OAM_ENTRY_STC (see the sketch after this section):

- To enable counting of OAM packets in LM, set lmCountingMode = CPSS_DXCH_OAM_LM_COUNTING_MODE_ENABLE_E.
- To insert an Egress counter into the packet as defined in the LM table, set lmCounterCaptureEnable to GT_TRUE.
- To define an offset for inserting the LM data, set offsetIndex to point to the LM Offset table (see Loss Measurements Configuration – Destination Offset).

CPU Code Offset Configuration

To configure the value to be added to the CPU code value for packets trapped or mirrored to the CPU, configure the cpuCodeOffset field.
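Finally, a hedged sketch of the LM-related fields listed above together with the CPU code offset. Field names are from the text; the cpssDxChOamEntrySet prototype and the index values are assumptions.

```c
#include <string.h>
#include <cpss/dxCh/dxChxGen/oam/cpssDxChOam.h>  /* assumed header path */

GT_STATUS oamLmFlowSet(GT_U8 devNum, GT_U32 flowId)
{
    CPSS_DXCH_OAM_ENTRY_STC entry;

    memset(&entry, 0, sizeof(entry));
    entry.opcodeParsingEnable    = GT_TRUE;  /* apply LM only to matching opcodes         */
    entry.lmCountingMode         = CPSS_DXCH_OAM_LM_COUNTING_MODE_ENABLE_E;
    entry.lmCounterCaptureEnable = GT_TRUE;  /* insert the egress counter into the frame  */
    entry.offsetIndex            = 0;        /* row in the LM Offset table (assumed)      */
    entry.cpuCodeOffset          = 1;        /* added to the CPU code for trapped copies  */

    return cpssDxChOamEntrySet(devNum, CPSS_DXCH_OAM_STAGE_TYPE_INGRESS_E, flowId, &entry);
}
```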
======================================
INSTALLING SUBVERSION
   A Quick Guide
======================================

$LastChangedDate$

Contents:

I.   INTRODUCTION
     A. Audience
     B. Dependency Overview
     C. Dependencies in Detail
     D. Documentation

II.  INSTALLATION
     A. Building from a Tarball
     B. Building the Latest Source under Unix
     C. Building under Unix in Different Directories
     D. Installing from a Zip or Installer File under Windows
     E. Building the Latest Source under Windows
     F. Building using CMake

III. BUILDING A SUBVERSION SERVER
     A. Setting Up Apache Httpd
     B. Making and Installing the Subversion Apache Server Module
     C. Configuring Apache Httpd for Subversion
     D. Running and Testing
     E. Alternative: 'svnserve' and ra_svn

IV.  PROGRAMMING LANGUAGE BINDINGS (PYTHON, PERL, RUBY, JAVA)


I. INTRODUCTION
   ============

A. Audience

This document is written for people who intend to build Subversion from source code. Normally, the only people who do this are Subversion developers and package maintainers. If neither of these labels fits you, we recommend you find an appropriate binary package of Subversion and install that.

While the Subversion project doesn't officially release binary packages, a number of volunteers have made such packages available for different operating systems. Most Linux and BSD distributions already have Subversion packages ready to go via standard packaging channels, and other volunteers have built 'installers' for both Windows and OS X. Visit this page for package links:

   https://subversion.apache.org/packages.html

For those of you who still wish to build from source, Subversion follows the Unix convention of "./configure && make", but it has a number of dependencies.

B. Dependency Overview

You'll need the following build tools to compile Subversion:

* autoconf 2.59 or later (Unix only)
* libtool 1.4 or later (Unix only)
* a reasonable C compiler (gcc, Visual Studio, etc.)

Subversion also depends on the following third-party libraries:

* libapr and libapr-util (REQUIRED for client and server)

  The Apache Portable Runtime (APR) library provides an abstraction of operating-system level services such as file and network I/O, memory management, and so on. It also provides convenience routines for things like hashtables, checksums, and argument processing. While it was originally developed for the Apache HTTP server, APR is a standalone library used by Subversion and other products. It is a critical dependency for all of Subversion; it's the layer that allows Subversion clients and servers to run on different operating systems.

* SQLite (REQUIRED for client and server)

  Subversion uses SQLite to manage some internal databases.

* libz (REQUIRED for client and server)

  Subversion uses zlib for compressing binary differences. These diff streams are used everywhere -- over the network, in the repository, and in the client's working copy.

* utf8proc (REQUIRED for client and server)

  Subversion uses utf8proc for UTF-8 support, including Unicode normalization.

* Apache Serf (OPTIONAL for client)

  The Apache Serf library allows the Subversion client to send HTTP requests. This is necessary if you want your client to access a repository served by the Apache HTTP server. There is an alternate 'svnserve' server as well, though, and clients automatically know how to speak the svnserve protocol. Thus it's not strictly necessary for your client to be able to speak HTTP... though we still recommend that your client be built to speak both HTTP and svnserve protocols.
* OpenSSL (OPTIONAL for client and server)

  OpenSSL enables your client to access SSL-encrypted https:// URLs (using Apache Serf) in addition to unencrypted http:// URLs. To use SSL with Subversion's WebDAV server, Apache needs to be compiled with OpenSSL as well.

* Netwide Assembler (OPTIONAL for client and server)

  The Netwide Assembler (NASM) is used to build the (optional) assembler modules of OpenSSL. As of OpenSSL 1.1.0 NASM is the only supported assembler.

* Berkeley DB (DEPRECATED and OPTIONAL for client and server)

  When you create a repository, you have the option of specifying a storage 'back-end' implementation. Currently, there are two options. The newer and recommended one, known as FSFS, does not require Berkeley DB. FSFS stores data in a flat filesystem. The older implementation, known as BDB, has been deprecated and is not recommended for new repositories, but is still available. BDB stores data in a Berkeley DB database. This back-end will only be available if the BDB libraries are discovered at compile time.

* libsasl (OPTIONAL for client and server)

  If the Cyrus SASL library is detected at compile time, then the svn client (and svnserve server) will be able to utilize SASL to do various forms of authentication when speaking the svnserve protocol.

* Python, Perl, Java, Ruby (OPTIONAL)

  Subversion is mostly a collection of C libraries with well-defined APIs, with a small collection of programs that use the APIs. If you want to build Subversion API bindings for other languages, you need to have those languages available at build time.

* py3c (OPTIONAL, but REQUIRED for Python bindings)

  The Python 3 Compatibility Layer for C Extensions is required to build the Python language bindings.

* KDE Framework 5, libsecret, GNOME Keyring (OPTIONAL for client)

  Subversion contains optional support for storing passwords in KWallet via KDE Framework 5 libraries (preferred) or kdelibs4, and GNOME Keyring via libsecret (preferred) or GNOME APIs.

* libmagic (OPTIONAL)

  If the libmagic library is detected at compile time, it will be used to determine mime-types of binary files which are added to version control. Note that mime-types configured via auto-props or the mime-types-file option take precedence.

C. Dependencies in Detail

Subversion depends on a number of third party tools and libraries. Some of them are only required to run a Subversion server; others are necessary just for a Subversion client. This section explains what other tools and libraries will be required so that Subversion can be built with the set of features you want.

On Unix systems, the './configure' script will tell you if you are missing the correct version of any of the required libraries or tools, so if you are in a real hurry to get building, you can skip straight to section II. If you want to gather the pieces you will need before starting out, however, you should read the following.

If you're just installing a Subversion client, the Subversion team has created a script that downloads the minimal prerequisite libraries (Apache Portable Runtime, Sqlite, and Zlib). The script, 'get-deps.sh', is available in the same directory as this file. When run, it will place 'apr', 'apr-util', 'serf', 'zlib', and 'sqlite-amalgamation' directories directly into your unpacked Subversion distribution.
With the exception of sqlite-amalgamation, they will still need to be configured, built and installed explicitly, and Subversion's own configure script may need to be told where to find them, if they were not installed in standard system locations.

Note: there are optional dependencies (such as OpenSSL, swig, and httpd) which get-deps.sh does not download.

Note: Because previous builds of Subversion may have installed older versions of these libraries, you may want to run some of the cleanup commands described in section II.B before installing the following.

1. Apache Portable Runtime 1.4 or newer (REQUIRED)

Whenever you want to build any part of Subversion, you need the Apache Portable Runtime (APR) and the APR Utility (APR-util) libraries. If you do not have a pre-installed APR and APR-util, you will need to get these yourself:

   https://apr.apache.org/download.cgi

On Unix systems, if you already have the APR libraries compiled and do not wish to regenerate them from source code, then Subversion needs to be able to find them. There are a couple of options to "./configure" that tell it where to look for the APR and APR-util libraries. By default it will try to locate the libraries using the apr-config and apu-config scripts. These scripts provide all the relevant information for the APR and APR-util installations.

If you want to specify the location of the APR library, you can use the "--with-apr=" option of "./configure". It should be able to find the apr-config script in the standard location under that directory (e.g. ${prefix}/bin). Similarly, you can specify the location of APR-util using the "--with-apr-util=" option to "./configure". It will look for the apu-config script relative to that directory.

For example, if you want to use the APR libraries you built with the Apache httpd server, you could run:

   $ ./configure --with-apr=/usr/local/apache2 \
                 --with-apr-util=/usr/local/apache2 ...

Notes on Windows platforms:

* Do not use APR version 1.7.3 as that release contains a bug that makes it impossible for Subversion to use it properly. This issue only affects APR builds on Windows. This issue was fixed in APR version 1.7.4. See:
  https://lists.apache.org/thread/xd5t922jvb9423ph4j84rsp5fxks1k0z

* If you check out APR and APR-util sources from their Subversion repository, be sure to use a native Windows SVN client (as opposed to Cygwin's version) so that the .dsp files get carriage-returns at the ends of their lines. Otherwise Visual Studio will complain that it doesn't recognize the .dsp files.

Notes on Unix platforms:

* If you check out APR and APR-util sources from their Subversion repository, you need to run the 'buildconf' script in each library's directory to regenerate the configure scripts and other files required for compiling the libraries. Afterwards, configure, build, and install both libraries before running Subversion's configure script. For example:

     $ cd apr
     $ ./buildconf
     $ ./configure <options...>
     $ make
     $ make install
     $ cd ..

     $ cd apr-util
     $ ./buildconf
     $ ./configure <options...>
     $ make
     $ make install
     $ cd ..

2. SQLite (REQUIRED)

Subversion requires SQLite version 3.24.0 or above. You can meet this dependency several ways:

* Use an SQLite amalgamation file.
* Specify an SQLite installation to use.
* Let Subversion find an installed SQLite.

To use an SQLite-provided amalgamation, just drop sqlite3.c into Subversion's sqlite-amalgamation/ directory, or point to it with the --with-sqlite configure option.
This file also ships with the Subversion dependencies distribution, or you can download it from SQLite:

   https://www.sqlite.org/download.html

3. Zlib (REQUIRED)

Subversion's binary-differencing engine depends on zlib for compression. Most Unix systems have libz pre-installed, but if you need it, you can get it from

   http://www.zlib.net/

4. utf8proc (REQUIRED)

Subversion uses utf8proc for UTF-8 support. Configure will attempt to locate utf8proc by default using pkg-config and known paths. If it is installed in a non-standard location, then use:

   --with-utf8proc=/path/to/libutf8proc

Alternatively, a copy of utf8proc comes bundled with the Subversion sources. If configure should use the bundled copy, use:

   --with-utf8proc=internal

5. autoconf 2.59 or newer (Unix only)

This is required only if you plan to build from the latest source (see section II.B). Generally only developers would be doing this.

6. libtool 1.4 or newer (Unix only)

This is required only if you plan to build from the latest source (see section II.B).

Note: Some systems (Solaris, for example) require libtool 1.4.3 or newer. The autogen.sh script knows about that.

7. Apache Serf library 1.3.4 or newer (OPTIONAL)

If you want your client to be able to speak to an Apache server (via a http:// or https:// URL), you must link against Apache Serf. Though optional, we strongly recommend this.

In order to use ra_serf, you must install serf, and run Subversion's ./configure with the argument --with-serf. If serf is installed in a non-standard place, you should use --with-serf=/path/to/serf/install instead.

Apache Serf can be obtained via your system's package distribution system or directly from https://serf.apache.org/.

For more information on Apache Serf and Subversion's ra_serf, see the file subversion/libsvn_ra_serf/README.

8. OpenSSL (OPTIONAL)

   ### needs some updates. I think Apache Serf automagically handles
   ### finding OpenSSL, but we may need more docco here. and w.r.t.
   ### zlib.

The Apache Serf library has support for SSL encryption by relying on the OpenSSL library.

a. Using OpenSSL on the client through Apache Serf

On Unix systems, to build Apache Serf with OpenSSL, you need OpenSSL installed on your system, and you must add "--with-ssl" as a "./configure" parameter. If your OpenSSL installation is hard for Apache Serf to find, you may need to use "--with-libs=/path/to/lib" in addition. In particular, on Red Hat (but not Fedora Core) it is necessary to specify "--with-libs=/usr/kerberos" for OpenSSL to be found. You can also specify a path to the zlib library using "--with-libs".

Under Windows, you can specify the paths to these libraries by passing the options --with-zlib and --with-openssl to gen-make.py.

b. Using OpenSSL on the Apache server

You can also add support for these features to an Apache httpd server to be used for Subversion using the same support libraries. The Subversion build system will not provide them, however. You add them by specifying parameters to the "./configure" script of the Apache Server instead.

For getting SSL on your server, you would add the "--enable-ssl" or "--with-ssl=/path/to/lib" option to Apache's "./configure" script. Apache enables zlib support by default, but you can specify a nonstandard location for the library with the "--with-z=/path/to/dir" option. Consult the Apache documentation for more details, and for other modules you may wish to install to enhance your Subversion server.
If you don't already have it, you can get a copy of OpenSSL, including instructions for building and packaging on both Unix systems and Windows, at:

   https://www.openssl.org/

9. Berkeley DB 4.X (DEPRECATED and OPTIONAL)

You need the Berkeley DB libraries only if you are building a Subversion server that supports the older BDB repository storage back-end, or a Subversion client that can access local BDB repositories via the file:// URI scheme. The BDB back-end has been deprecated and is not recommended for new repositories. BDB may be removed in Subversion 2.0. We recommend the newer FSFS back-end for all new repositories. FSFS does not require the Berkeley DB libraries. If in doubt, the 'svnadmin info' command, added in Subversion 1.9, can identify whether an existing repository uses BDB or FSFS.

The current recommended version of Berkeley DB is 4.4.20 or newer, which brings auto-recovery functionality to the Berkeley DB database environment. If you must use an older version of Berkeley DB, we *strongly* recommend using 4.3 or 4.2 over the 4.1 or 4.0 versions. Not only are these significantly faster and more stable, but they also enable Subversion repositories to automatically clean up database journal files to save disk space.

You'll need Berkeley DB installed on your system. You can get it from:

   http://www.oracle.com/technetwork/database/database-technologies/berkeleydb/overview/index.html

If you have Berkeley DB installed in a place not searched by default for includes and libraries, add something like this:

   --with-berkeley-db=db.h:/usr/local/include/db4.7:/usr/local/lib/db4.7:db-4.7

to your `configure' switches, and the build process will use the Berkeley DB header and library in the named directories. You may need to use a different path, of course. Note that in order for the detection to succeed, the dynamic linker must be able to find the libraries at configure time.

10. Cyrus SASL library (OPTIONAL)

If the Simple Authentication and Security Layer (SASL) library is detected on your system, then the Subversion client and svnserve server can utilize its abilities for various forms of authentication. To learn more about SASL or to get the source code, visit:

   http://freshmeat.net/projects/cyrussasl/

11. Apache Web Server 2.2.X or newer (OPTIONAL)

   (https://httpd.apache.org/download.cgi)

The Apache httpd server is one of two methods to make your Subversion repository available over a network - the other is a custom server program called svnserve, which requires no extra software packages. Building Subversion, the Apache server, and the modules that Apache needs to communicate with Subversion are complicated enough that there is a whole section at the end of this document that describes how it is done: See section III for details.

12. Python 3.x or newer (https://www.python.org/) (OPTIONAL)

Subversion does not require Python for its basic operation. However, Python is required for building and testing Subversion and for using Subversion's SWIG Python bindings or hook scripts coded in Python. The majority of Subversion's test suite is written in Python, as is part of Subversion's build system.

In more detail, Python is required to do any of the following:

* Use the SWIG Python bindings.
* Use the ctypes Python bindings.
* Use hook scripts coded in Python.
* Build Subversion from a tarball on Unix-like systems and run Subversion's test suite as described in section II.B.
* Build Subversion on Windows as described in section II.E.
* Build Subversion from a working copy checked out from Subversion's own repository (whether or not running the test suite).
* Build the SWIG Python bindings.
* Build the ctypes Python bindings.
* Testing as described in section III.D.

The Python bindings are used by:

* Third-party programs (e.g., ViewVC)
* Scripts distributed with Subversion itself in the tools/ subdirectory.
* Any in-house scripts you may have.

Python is NOT required to do any of the following:

* Use the core command-line binaries (svn, svnadmin, svnsync, etc.)
* Use Subversion's C libraries.
* Use any of Subversion's other language bindings.
* Build Subversion from a tarball on Unix-like systems without running Subversion's test suite

Although this section calls for Python 3.x, Subversion still technically works with Python 2.7. However, support for Python 2.7 is being phased out. As of 1 January 2020, Python 2.7 has reached end of life. All users are strongly encouraged to move to Python 3.

Note: If you are using a Subversion distribution tarball and want to build the Python bindings for Python 2, you should rebuild the build environment in non-release mode by running 'sh autogen.sh' before running the ./configure script; see section II.B for more about autogen.sh.

13. Perl 5.8 or newer (Windows only) (OPTIONAL)

To build Subversion under any of the MS Windows platforms, you will also need Perl 5.8 or newer to run apr-util's w32locatedb.pl script.

14. pkg-config (Unix only, OPTIONAL)

Subversion uses pkg-config to find appropriate options used at build time.

15. D-Bus (Unix only, OPTIONAL)

D-Bus is a message bus system. D-Bus is required for support for KWallet and GNOME Keyring. pkg-config is needed to find D-Bus headers and library.

16. Qt 5 or Qt 4 (Unix only, OPTIONAL)

Qt is a cross-platform application framework. The QtCore, QtDBus and QtGui modules are required for support for KWallet. pkg-config is needed to find Qt headers and libraries.

17. KDE 5 Framework libraries or KDELibs 4 (Unix only, OPTIONAL)

Subversion contains optional support for storing passwords in KWallet. Subversion will look for the KF5Wallet, KF5CoreAddons, KF5I18n APIs by default, and needs kf5-config to find them. The KDELibs 4 API is also supported. KDELibs contains core KDE libraries. Subversion uses the libkdecore and libkdeui libraries when support for KWallet is enabled. kde4-config is used to get some necessary options. pkg-config, D-Bus and Qt 4 are also required. If you want to build support for KWallet, then pass the '--with-kwallet' option to `configure`. If KDE is installed in a non-standard prefix, then use:

   --with-kwallet=/path/to/KDE/prefix

18. GLib 2 (Unix only, OPTIONAL)

GLib is a general-purpose utility library. GLib is required for support for GNOME Keyring. pkg-config is needed to find GLib headers and library.

19. GNOME Keyring (Unix only, OPTIONAL)

Subversion contains optional support for storing passwords in GNOME Keyring. pkg-config is needed to find GNOME Keyring headers and library. D-Bus and GLib are also required. If you want to build support for GNOME Keyring, then pass the '--with-gnome-keyring' option to `configure`.

20. Ctypesgen (OPTIONAL)

Ctypesgen is a Python wrapper generator for ctypes. It is used to generate a part of the Subversion Ctypes Python bindings (CSVN). If you want to build CSVN, then pass the '--with-ctypesgen' option to `configure`. If ctypesgen.py is installed in a non-standard place, then use:

   --with-ctypesgen=/path/to/ctypesgen.py

For more information on CSVN, see subversion/bindings/ctypes-python/README.
21. libmagic (OPTIONAL)

Subversion's configure script attempts to find libmagic automatically. If it is installed in a non-standard location, then use:

   --with-libmagic=/path/to/libmagic/prefix

The files include/magic.h and lib/libmagic.so.1.0 (or similar) are expected beneath this prefix directory. If they cannot be found, Subversion will be compiled without support for libmagic.

If libmagic is installed but support for it should not be compiled in, then use:

   --with-libmagic=no

If configure should fail when libmagic is not present, but only the default locations should be searched, then use:

   --with-libmagic

22. LZ4 (OPTIONAL)

Subversion uses the LZ4 compression library version r129 or above. Configure will attempt to locate the system library by default using pkg-config and known paths. If it is installed in a non-standard location, then use:

   --with-lz4=/path/to/liblz4

If configure should use the version bundled with the sources, use:

   --with-lz4=internal

23. py3c (OPTIONAL)

Subversion uses the Python 3 Compatibility Layer for C Extensions (py3c) library when building the Python language bindings. As py3c is a header-only library, it is needed only to build the bindings, not to use them. Configure will attempt to locate py3c by default using pkg-config and known paths. If it is installed in a non-standard location, then use:

   --with-py3c=/path/to/py3c/prefix

The library can be downloaded from GitHub:

   https://github.com/encukou/py3c

On Unix systems, you can also use the provided get-deps.sh script to download py3c and several other dependencies; see the top of section I.C for more about get-deps.sh.

D. Documentation

The primary documentation for Subversion is the free book "Version Control with Subversion", a.k.a. "The Subversion Book", obtainable from https://svnbook.red-bean.com/.

Various additional documentation exists in the doc/ subdirectory of the Subversion source. See the file doc/README for more information.

II. INSTALLATION
    ============

Subversion supports three different build systems:

- Autoconf/make, for Unix builds
- Visual Studio vcproj, for Windows builds
- CMake, for both Unix and Windows

The first two have been in use since 2001. Sections A-E below describe the classic build system. The CMake build system was created in 2024 and is still under development. It will be included in Subversion 1.15 and is expected to be the default build system starting with Subversion 1.16. Section F below describes the CMake build system.

A. Building from a Tarball
------------------------------

1. Building from a Tarball

Download the most recent distribution tarball from:

   https://subversion.apache.org/download/

Unpack it, and use the standard GNU procedure to compile:

   $ ./configure
   $ make
   # make install

You can also run the full test suite by running 'make check'. Even in successful runs, some tests will report XFAIL; that is normal. Failed runs are indicated by FAIL or XPASS results, or a non-zero exit code from "make check".

B. Building the Latest Source under Unix
-------------------------------------

These instructions assume you have already installed Subversion and checked out a working copy of Subversion's own code -- either the latest /trunk code, or some branch or tag. You also need to have already installed whatever prerequisites that version of Subversion requires (if you haven't, the ./configure step should complain).

You can discard the directory created by the tarball; you're about to build the latest, greatest Subversion client. This is the procedure Subversion developers use.
First off, if you have any Subversion libraries lying around from previous 'make installs', clean them up first!

   # rm -f /usr/local/lib/libsvn*
   # rm -f /usr/local/lib/libapr*
   # rm -f /usr/local/lib/libserf*

Start the process by running "autogen.sh":

   $ sh ./autogen.sh

This script will make sure you have all the necessary components available to build Subversion. If any are missing, you will be told where to get them from. (See the 'Dependency Overview' in section I.)

Note: if the command "autoconf" on your machine does not run autoconf 2.59 or later, but you do have a new enough autoconf available, then you can specify the correct one with the AUTOCONF variable. (The AUTOHEADER variable is similar.) This may be required on Debian GNU/Linux, where "autoconf" is actually a Perl script that attempts to guess which version is required -- because of the interaction between Subversion's and APR's configuration systems, the Perl script may get it wrong. So for example, you might need to do:

   $ AUTOCONF=autoconf2.59 sh ./autogen.sh

Once you've prepared the working copy by running autogen.sh, just follow the usual configuration and build procedure:

   $ ./configure
   $ make
   # make install

(Optionally, you might want to pass --enable-maintainer-mode to the ./configure script. This enables debugging symbols in your binaries (among other things) and most Subversion developers use it.)

Since the resulting binary depends on shared libraries, the destination library directory must be identified in your operating system's library search path. That is in either /etc/ld.so.conf or $LD_LIBRARY_PATH for Linux systems and in /etc/rc.conf for FreeBSD, followed by a run of the 'ldconfig' program. Check your system documentation for details. By identifying the destination directory, Subversion will be able to dynamically load repository access plugins. If you try to do a checkout and see an error like:

   subversion/libsvn_ra/ra_loader.c:209: (apr_err=170000)
   svn: Unrecognized URL scheme 'https://svn.apache.org/repos/asf/subversion/trunk'

it probably means that the dynamic loader/linker can't find all of the libsvn_* libraries.

C. Building under Unix in Different Directories
--------------------------------------------

It is possible to configure and build Subversion on Unix in a directory other than the working copy. For example:

   $ svn co https://svn.apache.org/repos/asf/subversion/trunk svn
   $ cd svn
   $ # get SQLite amalgamation if required
   $ chmod +x autogen.sh
   $ ./autogen.sh
   $ mkdir ../obj
   $ cd ../obj
   $ ../svn/configure [...with options as appropriate...]
   $ make

puts the Subversion working copy in the directory svn and builds it in a separate, parallel directory obj.

Why would you want to do this? Well there are a number of reasons...

* You may prefer to avoid "polluting" the working copy with files generated during the build.
* You may want to put the build directory and the working copy on different physical disks to improve performance.
* You may want to separate source and object code and only backup the source.
* You may want to remote mount the working copy on multiple machines, and build for different machines from the same working copy.
* You may want to build multiple configurations from the same working copy.

The last reason above is possibly the most useful. For instance you can have separate debug and optimized builds each using the same working copy. Or you may want a client-only build and a client-server build.
  Using multiple build directories, you can rebuild any or all
  configurations after an edit without the need to either clean and
  reconfigure, or identify and copy changes into another working copy.

D. Installing from a Zip or Installer File under Windows
---------------------------------------------------------

  Of all the ways of getting a Subversion client, this is the easiest.
  Download a Zip or self-extracting installer via:

      https://subversion.apache.org/packages.html#windows

  For a Zip file, extract the DLLs and EXEs to a directory of your
  choice.  Included in the download are, among other tools, the SVN
  client, the SVNADMIN administration tool and the SVNLOOK reporting
  tool.  You may want to add the bin directory in the Subversion
  folder to your PATH environment variable so that you do not have to
  use the full path when running Subversion commands.

  To test the installation, open a DOS box (run either "cmd" or
  "command" from the Start menu's "Run..." menu option), change to the
  directory you installed the executables into, and run:

      C:\test>svn co https://svn.apache.org/repos/asf/subversion/trunk svn

  This will get the latest Subversion sources and put them into the
  "svn" subdirectory.

  If using a self-extracting .exe file, just run it instead of
  unzipping it, to install Subversion.

E. Building the Latest Source under Windows
-------------------------------------------

  E.1 Prerequisites

  * Microsoft Visual Studio.  Any recent (2005+) version containing
    the Visual C++ component will work (e.g. Professional, Express,
    Community Edition).  Make sure you enable C++ support during
    setup.
  * Python 2.7 or higher, downloaded from https://www.python.org/,
    which is used to generate the project files.
  * Perl 5.8 or higher from https://www.perl.org/get.html
  * Awk is needed to compile Apache.  Source code is available in
    tools\dev\awk; run the buildwin.bat program to compile it.
  * Apache apr, apr-util, and optionally apr-iconv libraries, version
    1.4 or later (1.2 for apr-iconv).  If you are building from a
    Subversion checkout and have not downloaded Apache 2, then get
    these 3 libraries from https://www.apache.org/dist/apr/.
  * SQLite 3.24.0 or higher from https://www.sqlite.org/download.html
    (3.39.4 or higher recommended)
  * ZLib 1.2 or higher is required and can be obtained from
    http://www.zlib.net/
  * Either a Subversion client binary from
    https://subversion.apache.org/packages.html to do the initial
    checkout of the Subversion source, or the zip file source
    distribution.

  Additional Options

  * [Optional] Apache Httpd 2 source, downloaded from
    https://httpd.apache.org/download.cgi; these instructions assume
    version 2.0.58.  This is only needed for building the Subversion
    server Apache modules.
    ### FIXME Apache 2.2 or greater required.
  * [Optional] Berkeley DB, for backend support of the server
    components, is available from
    http://www.oracle.com/technetwork/database/database-technologies/berkeleydb/downloads/index-082944.html
    (version 4.4.20, or in specific cases some higher version,
    recommended).  For more information see Section I.C.9.
  * [Optional] OpenSSL can be obtained from
    https://www.openssl.org/source/
  * [Optional] NASM can be obtained from http://www.nasm.us/
  * [Optional] A modified version of GNU libintl, called
    svn-win32-libintl.zip, can be used for displaying localized
    messages.  Available at:
    http://subversion.tigris.org/servlets/ProjectDocumentList?folderID=2627
  * [Optional] GNU gettext for generating message catalog (.mo) files
    from message translations.  You can get the latest binaries from
    http://gnuwin32.sourceforge.net/.
    You'll need the binaries (gettext-0.14.1-bin.zip) and dependencies
    (gettext-0.14.1-dep.zip).

  E.2 Notes

  The Apache Serf library supports secure connections with OpenSSL and
  on-the-wire compression with zlib.  If you want to use the secure
  connections feature, you should pass the option "--with-openssl" to
  the gen-make.py script.  See Section I.C.7 for more details.

  E.3 Preparation

  This section describes how to unpack the files to make a build tree.

  * Make a directory SVN and cd into it.
  * Either check out Subversion:

        svn co https://svn.apache.org/repos/asf/subversion/trunk src-trunk

    or unpack the zip file distribution and rename the directory to
    src-trunk.
  * Install the Visual Studio environment.  You either have to tell
    the installer to register environment variables or run
    VCVARS32.BAT before building anything.  If you are using a newer
    Visual Studio, use the 'Visual Studio 20xx Command Prompt' on the
    Start menu.
  * Install Python and add it to your path.
  * Install Perl (it should add itself to the path).
    ### Subversion doesn't need perl. Only some dependencies need it
    ### (OpenSSL and some apr scripts)
  * Copy AWK (awk95.exe) to awk.exe (e.g. SVN\awk\awk.exe) and add the
    directory containing it (e.g. SVN\awk) to the path.
    ### Subversion doesn't need awk. Only some dependencies need it
    ### (some apr scripts)
  * [Optional] Install NASM and add it to your path.
    ### Subversion doesn't need NASM. Only some dependencies need it
    ### optionally (OpenSSL)
  * [Optional] If you checked out Subversion from the repository and
    want to build Subversion with http/https access support, then
    install the Apache Serf sources into SVN\src-trunk\serf.
  * [Optional] If you want BDB backend support, extract the Berkeley
    DB files into SVN\src-trunk\db4-win32.  It's a good idea to add
    SVN\src-trunk\db4-win32\bin to your PATH, so that Subversion can
    find the Berkeley DB DLLs.

    [NOTE: This binary package of Berkeley DB is provided for
    convenience only.  Please don't address questions about Berkeley
    DB that aren't directly related to using Subversion to the project
    mailing list.]

    If you build Berkeley DB from the source, you will have to copy
    the file db-x.x.x\build_win32\db.h to
    SVN\src-trunk\db4-win32\include, and all the import libraries to
    SVN\src-trunk\db4-win32\lib.  Again, the DLLs should be somewhere
    in your path.
    ### Just use --with-serf instead of the hardcoded path
  * [Optional] If you want to build the server modules, extract the
    Apache source into SVN\httpd-2.x.x.
  * If you are building from a checkout of Subversion, and you are NOT
    building Apache, then you will need the APR libraries.  Depending
    on how you got your version of APR, either:
    - Extract the APR, APR-util and APR-iconv source distributions
      into SVN\apr, SVN\apr-util, and SVN\apr-iconv respectively.
    Or:
    - Extract the apr, apr-util and apr-iconv directories from the
      srclib folder in the Apache httpd source into SVN\apr,
      SVN\apr-util, and SVN\apr-iconv respectively.
    ### Just use --with-apr, etc. instead of the hardcoded paths
  * Extract the ZLib sources into SVN\zlib if you are not using the
    zlib included in the dependencies zip file.
    ### Just use --with-zlib instead of the hardcoded path
  * [Optional] If you want secure connection (https) client support,
    extract OpenSSL into SVN\openssl.
    ### And pass the path to both serf and gen-make.py
  * [Optional] If you want localized message support, extract
    svn-win32-libintl.zip into SVN\svn-win32-libintl, and extract
    gettext-x.x.x-bin.zip and gettext-x.x.x-dep.zip into
    SVN\gettext-x.x.x-bin.  Add SVN\gettext-x.x.x-bin\bin to your
    path.
  * Download the SQLite amalgamation from
    https://www.sqlite.org/download.html and extract it into
    SVN\sqlite-amalgamation.
    See I.C.12 for alternatives to using the amalgamation package.

  E.4 Building the Binaries

  To build the binaries, follow these instructions.

  Start in the SVN directory you created.

  Set up the environment (commands should be one line even if wrapped
  here):

      C:>set VER=trunk
      C:>set DIR=trunk
      C:>set BUILD_ROOT=C:\SVN
      C:>set PYTHONDIR=C:\Python27
      C:>set AWKDIR=C:\SVN\Awk
      C:>set ASMDIR=C:\SVN\asm
      C:>set SDKINC="C:\Program Files\Microsoft SDK\include"
      C:>set SDKLIB="C:\Program Files\Microsoft SDK\lib"
      C:>set GETTEXTBIN=C:\SVN\gettext-0.14.1-bin\bin
      C:>PATH=%PATH%;%BUILD_ROOT%\src-%DIR%\db4-win32;%ASMDIR%;
          %PYTHONDIR%;%AWKDIR%;%GETTEXTBIN%
      C:>set INCLUDE=%SDKINC%;%INCLUDE%
      C:>set LIB=%SDKLIB%;%LIB%

  OpenSSL < 1.1.0

      C:>cd openssl
      C:>perl Configure VC-WIN32 [*]
      C:>call ms\do_masm
      C:>nmake -f ms\ntdll.mak
      C:>cd out32dll
      C:>call ..\ms\test
      C:>cd ..\..

      *Note: Use "call ms\do_nasm" if you have nasm instead of MASM,
      or "call ms\do_ms" if you don't have an assembler.  Also, if you
      are using OpenSSL >= 1.0.0, MASM is no longer supported; you
      will have to use do_nasm or do_ms in this case.

  OpenSSL >= 1.1.0

      C:>cd openssl
      C:>perl Configure VC-WIN32
      C:>nmake
      C:>nmake test
      C:>cd ..

  Apache 2

  This step is only required for building the server dso modules.
  ### FIXME Apache 2.2 or greater required.
  Old build instructions for VC6:

      C:>set APACHEDIR=C:\Program Files\Apache Group\Apache2
      C:>msdev httpd-2.0.58\apache.dsw /MAKE "BuildBin - Win32 Release"

  APR

  If you downloaded APR / APR-UTIL / APR-ICONV as source, you will
  have to build these libraries first.  Building these libraries on
  Windows is straightforward and in most cases as simple as issuing
  these two commands:

      C:>nmake -f Makefile.win
      C:>nmake -f Makefile.win install

  Please refer to the build instructions provided by the library
  source for actual build instructions.

  ZLib

  If you downloaded the zlib source, you will have to build ZLib
  first.  Building ZLib using Visual Studio should be quite simple:
  just open the appropriate solution and build the project zlibstat
  using the IDE.  Please refer to the build instructions provided by
  the library source for actual build instructions.  Note that you
  should make sure to define ZLIB_WINAPI in the ZLib config header and
  move the lib file into the zlib root directory.

  Please note that you MUST NOT build ZLib with the included
  assembler-optimized code.  It is known to be buggy; see for example
  the discussion https://svn.haxx.se/dev/archive-2013-10/0109.shtml.
  This means that you must not define ASMV or ASMINF.  Note that the
  VS projects in contrib\visualstudio define these in the Debug
  configuration.

  Apache Serf

  ### Section about Apache Serf might be required/useful to add.
  ### scons is required too and Apache Serf needs to be configured prior to
  ### be able to build Subversion using:
  ### scons APR=[PATH_TO_APR] APU=[PATH_TO_APU] OPENSSL=[PATH_TO_OPENSSL]
  ###       ZLIB=[PATH_TO_ZLIB] PREFIX=[PATH_TO_SERF_DEST]
  ### scons check
  ### scons install

  Subversion

  Things to note:

  * If you don't want to build mod_dav_svn, omit the --with-httpd
    option.  The zip file source distribution contains apr, apr-util
    and apr-iconv in the default build location.  If you have
    downloaded the apr files yourself, you will have to tell the
    generator where to find the APR libraries; the options are
    --with-apr, --with-apr-util and --with-apr-iconv.
  * If you would like a debug build, substitute Debug for Release in
    the msbuild command.
  * There have been rumors that Subversion on Win32 can be built using
    the latest Cygwin; you probably don't want the zip file source
    distribution in that case, though.  Your mileage may vary.
  * You will also have to distribute the C runtime dll with the
    binaries.  Also, since Apache/APR do not provide .vcproj files,
    you will need to convert the Apache/APR .dsp files to .vcproj
    files with Visual Studio before building -- just open the Apache
    .dsw file and answer 'Yes To All' when the conversion dialog pops
    up, or you can open the individual .dsp files and convert them one
    at a time.  The Apache/APR projects required by Subversion are:
    apr-util\libaprutil.dsp, apr\libapr.dsp,
    apr-iconv\libapriconv.dsp, apr-util\xml\expat\lib\xml.dsp,
    apr-iconv\ccs\libapriconv_ccs_modules.dsp, and
    apr-iconv\ces\libapriconv_ces_modules.dsp.
  * If the server dso modules are being built and tested, Apache must
    not be running, or copying the dso modules will fail.

      C:>cd src-%DIR%

  If Apache 2 has been built and the server modules are required, then
  gen-make.py will already have been run.  If the source is from the
  zip file, Apache 2 has not been built, so gen-make.py must be run:

      C:>python gen-make.py --vsnet-version=20xx --with-berkeley-db=db4-win32 --with-openssl=..\openssl --with-zlib=..\zlib --with-libintl=..\svn-win32-libintl

  Then build subversion:

      C:>msbuild subversion_vcnet.sln /t:__MORE__ /p:Configuration=Release
      C:>cd ..

  The binaries have now been built.

  E.5 Packaging the binaries

  You now need to copy the binaries ready to make the release zip
  file.  You also need to do this to run the tests, as the new
  binaries need to be in your path.  You can use the
  build/win32/make_dist.py script in the Subversion source directory
  to do that.

  [TBD: Describe how to do this.  Note dependencies on zip, jar,
  doxygen.]

  E.6 Testing the Binaries

  [TBD: It's been a long, long while since it was necessary to move
  binaries around for testing.  win-tests.py does that automagically.
  Fix this section accordingly, and probably reorder, putting the
  packaging at the end.]

  The build process creates the binary test programs, but it does not
  copy the client tests into the release test area.

      C:>cd src-%DIR%
      C:>mkdir Release\subversion\tests\cmdline
      C:>xcopy /S /Y subversion\tests\cmdline Release\subversion\tests\cmdline

  If the server dso modules have been built, then copy the dso files
  and dlls into the Apache modules directory:

      C:>copy Release\subversion\mod_dav_svn\mod_dav_svn.so "%APACHEDIR%"\modules
      C:>copy Release\subversion\mod_authz_svn\mod_authz_svn.so "%APACHEDIR%"\modules
      C:>copy svn-win32-%VER%\bin\intl.dll "%APACHEDIR%\bin"
      C:>copy svn-win32-%VER%\bin\iconv.dll "%APACHEDIR%\bin"
      C:>copy svn-win32-%VER%\bin\libdb42.dll "%APACHEDIR%\bin"
      C:>cd ..

  Put the svn-win32-trunk\bin directory at the start of your path so
  you run the newly built binaries and not another version you might
  have installed.  Then run the client tests:

      C:>PATH=%BUILD_ROOT%\svn-win32-%VER%\bin;%PATH%
      C:>cd src-%DIR%
      C:>python win-tests.py -c -r -v

  If the server dso modules were built, configure Apache to use the
  mod_dav_svn and mod_authz_svn modules by making sure these lines
  appear uncommented in httpd.conf:

      LoadModule dav_module modules/mod_dav.so
      LoadModule dav_fs_module modules/mod_dav_fs.so
      LoadModule dav_svn_module modules/mod_dav_svn.so
      LoadModule authz_svn_module modules/mod_authz_svn.so

  And further down the file add location directives to point to the
  test repositories.
  Change the paths to the SVN directory you created (paths should be
  on one line even if wrapped here):

      <Location /svn-test-work/repositories>
        DAV svn
        SVNParentPath C:/SVN/src-trunk/Release/subversion/tests/cmdline/
                      svn-test-work/repositories
      </Location>
      <Location /svn-test-work/local_tmp/repos>
        DAV svn
        SVNPath c:/SVN/src-trunk/Release/subversion/tests/cmdline/
                svn-test-work/local_tmp/repos
      </Location>

  Then restart Apache and run the tests:

      C:>python win-tests.py -c -r -v -u http://localhost
      C:>cd ..

F. Building using CMake
-----------------------

  Get the sources, either a release tarball or by checking out the
  official repository.  The CMake build system currently only exists
  in /trunk and it will be included in the 1.15 release.

  The process for building on Unix and Windows is the same:

      $ python gen-make.py -t cmake
      $ cmake -B out [build options]
      $ cmake --build out

  "out" in the commands above is the build directory used by CMake.
  Build options can be added, for example:

      $ cmake -B out -DCMAKE_INSTALL_PREFIX=/usr/local/subversion -DSVN_ENABLE_RA_SERF=ON

  Build options can be listed using:

      $ cmake -LH

  Windows tricks:

  - Modern versions of Microsoft Visual Studio provide support for
    CMake projects out of the box, including IntelliSense, an
    integrated options editor, a test explorer, and more.  In order to
    use it for Subversion, open the source directory with Visual
    Studio, and the configuration should start automatically.  To edit
    the cache (options), right-click the CMakeLists.txt file and click
    `CMake Settings for Subversion` to open the editor.  After the
    required settings are configured, hit `F7` to build.  For more
    info, check the article below:
    https://learn.microsoft.com/en-us/cpp/build/cmake-projects-in-visual-studio

  - There is a useful tool for bootstrapping the dependencies, vcpkg.
    It provides ports for most of Subversion's dependencies, which can
    then be installed with a single command.  To start using it,
    download the registry from GitHub, bootstrap vcpkg, and install
    the dependencies:

        $ git clone https://github.com/microsoft/vcpkg
        $ cd vcpkg && .\bootstrap-vcpkg.bat -disableMetrics
        $ .\vcpkg install apr apr-util expat zlib sqlite3 [any other dependency]

    After this is done, vcpkg can be integrated into CMake by passing
    the vcpkg toolchain to the CMAKE_TOOLCHAIN_FILE option.  To do
    this with Visual Studio, open the CMake cache editor as explained
    in the previous step, and put the following into the `CMake
    toolchain file` field, where VCPKG_ROOT is the path to the vcpkg
    registry:

        <VCPKG_ROOT>/scripts/buildsystems/vcpkg.cmake


III. BUILDING A SUBVERSION SERVER
     ============================

  Subversion has two servers you can choose from: svnserve and Apache.
  svnserve is a small, lightweight server program that is
  automatically compiled when you build Subversion's source.  Apache
  is a more heavyweight HTTP server, but tends to have more features.

  This section primarily focuses on how to build Apache and the
  accompanying mod_dav_svn server module for it.  If you plan to use
  svnserve instead, jump right to section E for a quick explanation.

A. Setting Up Apache Httpd
--------------------------

  1. Obtaining and Installing Apache Httpd 2

     Subversion tries to compile against the latest released version
     of Apache httpd 2.2+.  The easiest thing for you to do is
     download a source tarball of the latest release and unpack that.
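     For example (the version number below is only illustrative; pick
     the current release shown on the httpd download page and your
     preferred mirror):

         $ wget https://downloads.apache.org/httpd/httpd-2.4.62.tar.gz
         $ tar xzf httpd-2.4.62.tar.gz
         $ cd httpd-2.4.62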
     If you have questions about the Apache httpd 2.2 build, please
     consult the httpd install documentation:

         https://httpd.apache.org/docs-2.2/install.html

     At the top of the httpd tree:

         $ ./buildconf
         $ ./configure --enable-dav --enable-so --enable-maintainer-mode

     The first arg says to build mod_dav.  The second arg says to
     enable shared module support, which is needed for a typical
     compile of mod_dav_svn (see below).  The third arg says to
     include debugging information.  If you built Subversion with
     --enable-maintainer-mode, then you should do the same for Apache;
     there can be problems if one was compiled with debugging and the
     other without.

     Note: if you have multiple db versions installed on your system,
     Apache might link to a different one than Subversion, causing
     failures when accessing the repository through Apache.  To
     prevent this from happening, you have to tell Apache which db
     version to use and where to find db.  Add --with-dbm=db4 and
     --with-berkeley-db=/usr/local/BerkeleyDB.4.2 to the configure
     line.  Make sure this is the same db as the one Subversion uses.
     This note assumes you have installed Berkeley DB 4.2.52 at its
     default locations.  For more info about the db requirement, see
     section I.C.9.

     You may also want to include other modules in your build.  Add
     --enable-ssl to turn on SSL support, and --enable-deflate to turn
     on compression support, for example.  Consult the Apache
     documentation for more details.

     All instructions below assume you configured Apache to install in
     its default location, /usr/local/apache2/; substitute
     appropriately if you chose some other location.

     Compile and install Apache:

         $ make && make install

B. Making and Installing the Subversion Apache Server Module
-------------------------------------------------------------

  Go back into your Subversion working copy and run ./autogen.sh if
  you need to.  Then, assuming Apache httpd 2.2 is installed in the
  standard location, run:

      $ ./configure

  Note: do *not* configure subversion with "--disable-shared"!
  mod_dav_svn *must* be built as a shared library, and it will look
  for other libsvn_*.so libraries on your system.

  If you see a warning message that the build of mod_dav_svn is being
  skipped, this may be because you have Apache httpd 2.x installed in
  a non-standard location.  You can use the "--with-apxs=" option to
  locate the apxs script:

      $ ./configure --with-apxs=/usr/local/apache2/bin/apxs

  Note: it *is* possible to build mod_dav_svn as a static library and
  link it directly into Apache.  Possible, but painful.  Stick with
  the shared library for now; if you can't, then ask.

      $ rm /usr/local/lib/libsvn*

  If you have old subversion libraries sitting on your system, libtool
  will link them instead of the `fresh' ones in your tree.  Remove
  them before building subversion.

      $ make clean && make && make install

  After the make install, the Subversion shared libraries are in
  /usr/local/lib/.  mod_dav_svn.so should be installed in
  /usr/local/libexec/ (or elsewhere, such as
  /usr/local/apache2/modules/, if you passed --with-apache-libexecdir
  to configure).

  Section II.E explains how to build the server on Windows.

C. Configuring Apache Httpd for Subversion
------------------------------------------

  The following section is an abbreviated version of the information
  in the Subversion Book (https://svnbook.red-bean.com).  Please read
  chapter 6 for more details.

  The following assumes you have already created a repository.  For
  documentation on how to do that, see README.
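  (As a quick reminder, a repository can be created with the svnadmin
  tool built alongside the client; the path below is only an example
  and should match the SVNPath used further down:

      $ svnadmin create /absolute/path/to/repository

  See README for the full details.)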
  The following also assumes that you have modified
  /usr/local/apache2/conf/httpd.conf to reflect your setup.  At a
  minimum you should look at the User, Group and ServerName
  directives.  Full details on setting up Apache can be found at:
  https://httpd.apache.org/docs-2.2/

  First, your httpd.conf needs to load the mod_dav_svn module.  If you
  pass --enable-mod-activation to Subversion's configure, the 'make
  install' target should automatically add this line for you.  In any
  case, if Apache HTTPD gives you an error like "Unknown DAV provider:
  svn", then you may want to verify that this line exists in your
  httpd.conf:

      LoadModule dav_svn_module modules/mod_dav_svn.so

  NOTE: if you built mod_dav as a dynamic module as well, make sure
  the above line appears after the one that loads mod_dav.so.

  Next, add this to the *bottom* of your httpd.conf:

      <Location /svn/repos>
        DAV svn
        SVNPath /absolute/path/to/repository
      </Location>

  This will give anyone unrestricted access to the repository.  If you
  want limited access, read or write, add these lines to the Location
  block:

      AuthType Basic
      AuthName "Subversion repository"
      AuthUserFile /my/svn/user/passwd/file

  And:

    a) For a read/write restricted repository:

         Require valid-user

    b) For a write restricted repository:

         <LimitExcept GET PROPFIND OPTIONS REPORT>
           Require valid-user
         </LimitExcept>

    c) For separate restricted read and write access:

         AuthGroupFile /my/svn/group/file

         <LimitExcept GET PROPFIND OPTIONS REPORT>
           Require group svn_committers
         </LimitExcept>

         <Limit GET PROPFIND OPTIONS REPORT>
           Require group svn_committers
           Require group svn_readers
         </Limit>

  ### FIXME Tutorials section refers to old 2.0 docs

  These are only a few simple examples.  For a complete tutorial on
  Apache access control, please consider taking a look at the
  tutorials found under "Security" on the following page:
  https://httpd.apache.org/docs-2.0/misc/tutorials.html

  In order for 'svn cp' to work (which is actually implemented as a
  DAV COPY command), mod_dav needs to be able to determine the
  hostname of the server.  A standard way of doing this is to use
  Apache's ServerName directive to set the server's hostname.  Edit
  your /usr/local/apache2/conf/httpd.conf to include:

      ServerName svn.myserver.org

  If you are using virtual hosting through Apache's NameVirtualHost
  directive, you may need to use the ServerAlias directive to specify
  additional names that your server is known by.

  If you have configured mod_deflate into the server, you can enable
  compression support for your repository by adding the following line
  to your Location block:

      SetOutputFilter DEFLATE

  NOTE: If you are unfamiliar with an Apache directive, or not exactly
  sure about what it does, don't hesitate to look it up in the
  documentation:
  https://httpd.apache.org/docs-2.2/mod/directives.html.

  NOTE: Make sure that the user 'nobody' (or whatever UID the httpd
  process runs as) has permission to read and write the Berkeley DB
  files!  This is a very common problem.

D. Running and Testing
----------------------

  Fire up Apache 2:

      $ /usr/local/apache2/bin/apachectl stop
      $ /usr/local/apache2/bin/apachectl start

  Check /usr/local/apache2/logs/error_log to make sure it started up
  okay.

  Try doing a network checkout from the repository:

      $ svn co http://localhost/svn/repos wc

  The most common reason this might fail is permission problems
  reading the repository db files.  If the checkout fails, make sure
  that the httpd process has permission to read and write to the
  repository.
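  For example (the 'nobody' user and the repository path are only
  illustrative; use whatever UID your httpd process actually runs as),
  ownership of the repository can be handed to the httpd user with
  something like:

      # chown -R nobody /absolute/path/to/repository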
  You can see all of mod_dav_svn's complaints in the Apache error
  logfile, /usr/local/apache2/logs/error_log.

  To run the regression test suite for networked Subversion, see the
  instructions in subversion/tests/cmdline/README.  For advice about
  tracing problems, see "Debugging the server" in
  https://subversion.apache.org/docs/community-guide/.

E. Alternative: 'svnserve' and ra_svn
-------------------------------------

  An alternative network layer is libsvn_ra_svn (on the client side)
  and the 'svnserve' process on the server.  This is a simple network
  layer that speaks a custom protocol over plain TCP (documented in
  libsvn_ra_svn/protocol):

      $ svnserve -d     # becomes a background daemon
      $ svn checkout svn://localhost/usr/local/svn/repository

  You can use the "-r" option to svnserve to set a logical root for
  repositories, and the "-R" option to restrict connections to
  read-only access.  ("Read-only" is a logical term here; svnserve
  still needs write access to the database in this mode, but will not
  allow commits or revprop changes.)

  'svnserve' has built-in CRAM-MD5 authentication (so you can use
  non-system accounts), and can also be tunneled over SSH (so you can
  use existing system accounts).  It's also capable of using Cyrus
  SASL if libsasl2 is detected at ./configure time.  Please read
  chapter 6 in the Subversion Book (https://svnbook.red-bean.com) for
  details on these features.


IV. PROGRAMMING LANGUAGE BINDINGS (PYTHON, PERL, RUBY, JAVA)
    ========================================================

  For Python, Perl and Ruby bindings, see the file

      ./subversion/bindings/swig/INSTALL

  For Java bindings, see the file

      ./subversion/bindings/javahl/README