SSL connection error: SSL is required but the server doesn‘t support it

本文指导如何在MySQL Workbench中修复新建连接时强制使用SSL的问题,通过编辑connections.xml文件并调整'useSSL'值来重新连接。

错误原因

最新版本的mysql-workbench,新建连接时,强制使用SSL。

解决方案

C:\Users\xxxx\AppData\Roaming\MySQL\Workbench目录下找到onnections.xml文件。
编辑改文件,找到:

        <value type="string" key="sslCipher"></value>
        <value type="string" key="sslKey"></value>
        <value type="int" key="useSSL">2</value>
        <value type="string" key="userName">root</value>

<value type="int" key="useSSL">2</value>改为<value type="int" key="useSSL">1</value>

重新启动mysql-workbench。重新链接。

C:\Users\20685\AppData\Local\Programs\Python\Python310\python.exe D:\pythonProject_request\run.py ============================= test session starts ============================= platform win32 -- Python 3.10.11, pytest-8.3.5, pluggy-1.5.0 -- C:\Users\20685\AppData\Local\Programs\Python\Python310\python.exe cachedir: .pytest_cache metadata: {'Python': '3.10.11', 'Platform': 'Windows-10-10.0.26100-SP0', 'Packages': {'pytest': '8.3.5', 'pluggy': '1.5.0'}, 'Plugins': {'allure-pytest': '2.14.0', 'base-url': '2.1.0', 'html': '4.1.1', 'metadata': '3.1.1', 'order': '1.3.0', 'ordering': '0.6', 'rerunfailures': '15.1', 'xdist': '3.8.0'}, 'JAVA_HOME': 'C:\\Program Files\\Java\\jdk1.8.0_131', 'Base URL': ''} rootdir: D:\pythonProject_request configfile: pytest.ini plugins: allure-pytest-2.14.0, base-url-2.1.0, html-4.1.1, metadata-3.1.1, order-1.3.0, ordering-0.6, rerunfailures-15.1, xdist-3.8.0 collecting ... collected 1 item testcaes/test_all_case.py::TestAllCase::test_login[caseinfo0] FAILED ================================== FAILURES =================================== ______________________ TestAllCase.test_login[caseinfo0] ______________________ self = <urllib3.connectionpool.HTTPSConnectionPool object at 0x000002DA1B54E380> conn = <urllib3.connection.HTTPSConnection object at 0x000002DA1B54E980> method = 'POST' url = '/adminapi/login?Content_Type=application%2Fx-www-from-urlencoded' body = 'account=admin&pwd=123456&imgcode=' headers = {'User-Agent': 'python-requests/2.32.3', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive', 'Content-Length': '33', 'Content-Type': 'application/x-www-form-urlencoded'} retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) timeout = Timeout(connect=None, read=None, total=None), chunked = False response_conn = <urllib3.connection.HTTPSConnection object at 0x000002DA1B54E980> preload_content = False, decode_content = False, enforce_content_length = True def _make_request( self, conn: BaseHTTPConnection, method: str, url: str, body: _TYPE_BODY | None = None, headers: typing.Mapping[str, str] | None = None, retries: Retry | None = None, timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, chunked: bool = False, response_conn: BaseHTTPConnection | None = None, preload_content: bool = True, decode_content: bool = True, enforce_content_length: bool = True, ) -> BaseHTTPResponse: """ Perform a request on a given urllib connection object taken from our pool. :param conn: a connection from one of our connection pools :param method: HTTP request method (such as GET, POST, PUT, etc.) :param url: The URL to perform the request on. :param body: Data to send in the request body, either :class:`str`, :class:`bytes`, an iterable of :class:`str`/:class:`bytes`, or a file-like object. :param headers: Dictionary of custom headers to send, such as User-Agent, If-None-Match, etc. If None, pool headers are used. If provided, these headers completely replace any pool-specific headers. :param retries: Configure the number of retries to allow before raising a :class:`~urllib3.exceptions.MaxRetryError` exception. Pass ``None`` to retry until you receive a response. Pass a :class:`~urllib3.util.retry.Retry` object for fine-grained control over different types of retries. Pass an integer number to retry connection errors that many times, but no other types of errors. Pass zero to never retry. If ``False``, then retries are disabled and any exception is raised immediately. Also, instead of raising a MaxRetryError on redirects, the redirect response will be returned. :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. :param timeout: If specified, overrides the default timeout for this one request. It may be a float (in seconds) or an instance of :class:`urllib3.util.Timeout`. :param chunked: If True, urllib3 will send the body using chunked transfer encoding. Otherwise, urllib3 will send the body using the standard content-length form. Defaults to False. :param response_conn: Set this to ``None`` if you will handle releasing the connection or set the connection to have the response release it. :param preload_content: If True, the response's body will be preloaded during construction. :param decode_content: If True, will attempt to decode the body based on the 'content-encoding' header. :param enforce_content_length: Enforce content length checking. Body returned by server must match value of Content-Length header, if present. Otherwise, raise error. """ self.num_requests += 1 timeout_obj = self._get_timeout(timeout) timeout_obj.start_connect() conn.timeout = Timeout.resolve_default_timeout(timeout_obj.connect_timeout) try: # Trigger any extra validation we need to do. try: > self._validate_conn(conn) C:\Users\20685\AppData\Local\Programs\Python\Python310\lib\site-packages\urllib3\connectionpool.py:468: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ C:\Users\20685\AppData\Local\Programs\Python\Python310\lib\site-packages\urllib3\connectionpool.py:1097: in _validate_conn conn.connect() C:\Users\20685\AppData\Local\Programs\Python\Python310\lib\site-packages\urllib3\connection.py:642: in connect sock_and_verified = _ssl_wrap_socket_and_match_hostname( C:\Users\20685\AppData\Local\Programs\Python\Python310\lib\site-packages\urllib3\connection.py:783: in _ssl_wrap_socket_and_match_hostname ssl_sock = ssl_wrap_socket( C:\Users\20685\AppData\Local\Programs\Python\Python310\lib\site-packages\urllib3\util\ssl_.py:471: in ssl_wrap_socket ssl_sock = _ssl_wrap_socket_impl(sock, context, tls_in_tls, server_hostname) C:\Users\20685\AppData\Local\Programs\Python\Python310\lib\site-packages\urllib3\util\ssl_.py:515: in _ssl_wrap_socket_impl return ssl_context.wrap_socket(sock, server_hostname=server_hostname) C:\Users\20685\AppData\Local\Programs\Python\Python310\lib\ssl.py:513: in wrap_socket return self.sslsocket_class._create( C:\Users\20685\AppData\Local\Programs\Python\Python310\lib\ssl.py:1071: in _create self.do_handshake() _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <ssl.SSLSocket [closed] fd=-1, family=AddressFamily.AF_INET, type=SocketKind.SOCK_STREAM, proto=0> block = False @_sslcopydoc def do_handshake(self, block=False): self._check_connected() timeout = self.gettimeout() try: if timeout == 0.0 and block: self.settimeout(None) > self._sslobj.do_handshake() E ssl.SSLCertVerificationError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1007) C:\Users\20685\AppData\Local\Programs\Python\Python310\lib\ssl.py:1342: SSLCertVerificationError During handling of the above exception, another exception occurred: self = <urllib3.connectionpool.HTTPSConnectionPool object at 0x000002DA1B54E380> method = 'POST' url = '/adminapi/login?Content_Type=application%2Fx-www-from-urlencoded' body = 'account=admin&pwd=123456&imgcode=' headers = {'User-Agent': 'python-requests/2.32.3', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive', 'Content-Length': '33', 'Content-Type': 'application/x-www-form-urlencoded'} retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) redirect = False, assert_same_host = False timeout = Timeout(connect=None, read=None, total=None), pool_timeout = None release_conn = False, chunked = False, body_pos = None, preload_content = False decode_content = False, response_kw = {} parsed_url = Url(scheme=None, auth=None, host=None, port=None, path='/adminapi/login', query='Content_Type=application%2Fx-www-from-urlencoded', fragment=None) destination_scheme = None, conn = None, release_this_conn = True http_tunnel_required = False, err = None, clean_exit = False def urlopen( # type: ignore[override] self, method: str, url: str, body: _TYPE_BODY | None = None, headers: typing.Mapping[str, str] | None = None, retries: Retry | bool | int | None = None, redirect: bool = True, assert_same_host: bool = True, timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, pool_timeout: int | None = None, release_conn: bool | None = None, chunked: bool = False, body_pos: _TYPE_BODY_POSITION | None = None, preload_content: bool = True, decode_content: bool = True, **response_kw: typing.Any, ) -> BaseHTTPResponse: """ Get a connection from the pool and perform an HTTP request. This is the lowest level call for making a request, so you'll need to specify all the raw details. .. note:: More commonly, it's appropriate to use a convenience method such as :meth:`request`. .. note:: `release_conn` will only behave as expected if `preload_content=False` because we want to make `preload_content=False` the default behaviour someday soon without breaking backwards compatibility. :param method: HTTP request method (such as GET, POST, PUT, etc.) :param url: The URL to perform the request on. :param body: Data to send in the request body, either :class:`str`, :class:`bytes`, an iterable of :class:`str`/:class:`bytes`, or a file-like object. :param headers: Dictionary of custom headers to send, such as User-Agent, If-None-Match, etc. If None, pool headers are used. If provided, these headers completely replace any pool-specific headers. :param retries: Configure the number of retries to allow before raising a :class:`~urllib3.exceptions.MaxRetryError` exception. Pass ``None`` to retry until you receive a response. Pass a :class:`~urllib3.util.retry.Retry` object for fine-grained control over different types of retries. Pass an integer number to retry connection errors that many times, but no other types of errors. Pass zero to never retry. If ``False``, then retries are disabled and any exception is raised immediately. Also, instead of raising a MaxRetryError on redirects, the redirect response will be returned. :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. :param redirect: If True, automatically handle redirects (status codes 301, 302, 303, 307, 308). Each redirect counts as a retry. Disabling retries will disable redirect, too. :param assert_same_host: If ``True``, will make sure that the host of the pool requests is consistent else will raise HostChangedError. When ``False``, you can use the pool on an HTTP proxy and request foreign hosts. :param timeout: If specified, overrides the default timeout for this one request. It may be a float (in seconds) or an instance of :class:`urllib3.util.Timeout`. :param pool_timeout: If set and the pool is set to block=True, then this method will block for ``pool_timeout`` seconds and raise EmptyPoolError if no connection is available within the time period. :param bool preload_content: If True, the response's body will be preloaded into memory. :param bool decode_content: If True, will attempt to decode the body based on the 'content-encoding' header. :param release_conn: If False, then the urlopen call will not release the connection back into the pool once a response is received (but will release if you read the entire contents of the response such as when `preload_content=True`). This is useful if you're not preloading the response's content immediately. You will need to call ``r.release_conn()`` on the response ``r`` to return the connection back into the pool. If None, it takes the value of ``preload_content`` which defaults to ``True``. :param bool chunked: If True, urllib3 will send the body using chunked transfer encoding. Otherwise, urllib3 will send the body using the standard content-length form. Defaults to False. :param int body_pos: Position to seek to in file-like body in the event of a retry or redirect. Typically this won't need to be set because urllib3 will auto-populate the value when needed. """ parsed_url = parse_url(url) destination_scheme = parsed_url.scheme if headers is None: headers = self.headers if not isinstance(retries, Retry): retries = Retry.from_int(retries, redirect=redirect, default=self.retries) if release_conn is None: release_conn = preload_content # Check host if assert_same_host and not self.is_same_host(url): raise HostChangedError(self, url, retries) # Ensure that the URL we're connecting to is properly encoded if url.startswith("/"): url = to_str(_encode_target(url)) else: url = to_str(parsed_url.url) conn = None # Track whether `conn` needs to be released before # returning/raising/recursing. Update this variable if necessary, and # leave `release_conn` constant throughout the function. That way, if # the function recurses, the original value of `release_conn` will be # passed down into the recursive call, and its value will be respected. # # See issue #651 [1] for details. # # [1] <https://github.com/urllib3/urllib3/issues/651> release_this_conn = release_conn http_tunnel_required = connection_requires_http_tunnel( self.proxy, self.proxy_config, destination_scheme ) # Merge the proxy headers. Only done when not using HTTP CONNECT. We # have to copy the headers dict so we can safely change it without those # changes being reflected in anyone else's copy. if not http_tunnel_required: headers = headers.copy() # type: ignore[attr-defined] headers.update(self.proxy_headers) # type: ignore[union-attr] # Must keep the exception bound to a separate variable or else Python 3 # complains about UnboundLocalError. err = None # Keep track of whether we cleanly exited the except block. This # ensures we do proper cleanup in finally. clean_exit = False # Rewind body position, if needed. Record current position # for future rewinds in the event of a redirect/retry. body_pos = set_file_position(body, body_pos) try: # Request a connection from the queue. timeout_obj = self._get_timeout(timeout) conn = self._get_conn(timeout=pool_timeout) conn.timeout = timeout_obj.connect_timeout # type: ignore[assignment] # Is this a closed/new connection that requires CONNECT tunnelling? if self.proxy is not None and http_tunnel_required and conn.is_closed: try: self._prepare_proxy(conn) except (BaseSSLError, OSError, SocketTimeout) as e: self._raise_timeout( err=e, url=self.proxy.url, timeout_value=conn.timeout ) raise # If we're going to release the connection in ``finally:``, then # the response doesn't need to know about the connection. Otherwise # it will also try to release it and we'll have a double-release # mess. response_conn = conn if not release_conn else None # Make the request on the HTTPConnection object > response = self._make_request( conn, method, url, timeout=timeout_obj, body=body, headers=headers, chunked=chunked, retries=retries, response_conn=response_conn, preload_content=preload_content, decode_content=decode_content, **response_kw, ) C:\Users\20685\AppData\Local\Programs\Python\Python310\lib\site-packages\urllib3\connectionpool.py:791: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <urllib3.connectionpool.HTTPSConnectionPool object at 0x000002DA1B54E380> conn = <urllib3.connection.HTTPSConnection object at 0x000002DA1B54E980> method = 'POST' url = '/adminapi/login?Content_Type=application%2Fx-www-from-urlencoded' body = 'account=admin&pwd=123456&imgcode=' headers = {'User-Agent': 'python-requests/2.32.3', 'Accept-Encoding': 'gzip, deflate', 'Accept': '*/*', 'Connection': 'keep-alive', 'Content-Length': '33', 'Content-Type': 'application/x-www-form-urlencoded'} retries = Retry(total=0, connect=None, read=False, redirect=None, status=None) timeout = Timeout(connect=None, read=None, total=None), chunked = False response_conn = <urllib3.connection.HTTPSConnection object at 0x000002DA1B54E980> preload_content = False, decode_content = False, enforce_content_length = True def _make_request( self, conn: BaseHTTPConnection, method: str, url: str, body: _TYPE_BODY | None = None, headers: typing.Mapping[str, str] | None = None, retries: Retry | None = None, timeout: _TYPE_TIMEOUT = _DEFAULT_TIMEOUT, chunked: bool = False, response_conn: BaseHTTPConnection | None = None, preload_content: bool = True, decode_content: bool = True, enforce_content_length: bool = True, ) -> BaseHTTPResponse: """ Perform a request on a given urllib connection object taken from our pool. :param conn: a connection from one of our connection pools :param method: HTTP request method (such as GET, POST, PUT, etc.) :param url: The URL to perform the request on. :param body: Data to send in the request body, either :class:`str`, :class:`bytes`, an iterable of :class:`str`/:class:`bytes`, or a file-like object. :param headers: Dictionary of custom headers to send, such as User-Agent, If-None-Match, etc. If None, pool headers are used. If provided, these headers completely replace any pool-specific headers. :param retries: Configure the number of retries to allow before raising a :class:`~urllib3.exceptions.MaxRetryError` exception. Pass ``None`` to retry until you receive a response. Pass a :class:`~urllib3.util.retry.Retry` object for fine-grained control over different types of retries. Pass an integer number to retry connection errors that many times, but no other types of errors. Pass zero to never retry. If ``False``, then retries are disabled and any exception is raised immediately. Also, instead of raising a MaxRetryError on redirects, the redirect response will be returned. :type retries: :class:`~urllib3.util.retry.Retry`, False, or an int. :param timeout: If specified, overrides the default timeout for this one request. It may be a float (in seconds) or an instance of :class:`urllib3.util.Timeout`. :param chunked: If True, urllib3 will send the body using chunked transfer encoding. Otherwise, urllib3 will send the body using the standard content-length form. Defaults to False. :param response_conn: Set this to ``None`` if you will handle releasing the connection or set the connection to have the response release it. :param preload_content: If True, the response's body will be preloaded during construction. :param decode_content: If True, will attempt to decode the body based on the 'content-encoding' header. :param enforce_content_length: Enforce content length checking. Body returned by server must match value of Content-Length header, if present. Otherwise, raise error. """ self.num_requests += 1 timeout_obj = self._get_timeout(timeout) timeout_obj.start_connect() conn.timeout = Timeout.resolve_default_timeout(timeout_obj.connect_timeout) try: # Trigger any extra validation we need to do. try: self._validate_conn(conn) except (SocketTimeout, BaseSSLError) as e: self._raise_timeout(err=e, url=url, timeout_value=conn.timeout) raise # _validate_conn() starts the connection to an HTTPS proxy # so we need to wrap errors with 'ProxyError' here too. except ( OSError, NewConnectionError, TimeoutError, BaseSSLError, CertificateError, SSLError, ) as e: new_e: Exception = e if isinstance(e, (BaseSSLError, CertificateError)): new_e = SSLError(e) # If the connection didn't successfully connect to it's proxy # then there if isinstance( new_e, (OSError, NewConnectionError, TimeoutError, SSLError) ) and (conn and conn.proxy and not conn.has_connected_to_proxy): new_e = _wrap_proxy_error(new_e, conn.proxy.scheme) > raise new_e E urllib3.exceptions.SSLError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1007) C:\Users\20685\AppData\Local\Programs\Python\Python310\lib\site-packages\urllib3\connectionpool.py:492: SSLError The above exception was the direct cause of the following exception: self = <requests.adapters.HTTPAdapter object at 0x000002DA1B51A710> request = <PreparedRequest [POST]>, stream = False timeout = Timeout(connect=None, read=None, total=None), verify = True cert = None, proxies = OrderedDict() def send( self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None ): """Sends PreparedRequest object. Returns Response object. :param request: The :class:`PreparedRequest <PreparedRequest>` being sent. :param stream: (optional) Whether to stream the request content. :param timeout: (optional) How long to wait for the server to send data before giving up, as a float, or a :ref:`(connect timeout, read timeout) <timeouts>` tuple. :type timeout: float or tuple or urllib3 Timeout object :param verify: (optional) Either a boolean, in which case it controls whether we verify the server's TLS certificate, or a string, in which case it must be a path to a CA bundle to use :param cert: (optional) Any user-provided SSL certificate to be trusted. :param proxies: (optional) The proxies dictionary to apply to the request. :rtype: requests.Response """ try: conn = self.get_connection_with_tls_context( request, verify, proxies=proxies, cert=cert ) except LocationValueError as e: raise InvalidURL(e, request=request) self.cert_verify(conn, request.url, verify, cert) url = self.request_url(request, proxies) self.add_headers( request, stream=stream, timeout=timeout, verify=verify, cert=cert, proxies=proxies, ) chunked = not (request.body is None or "Content-Length" in request.headers) if isinstance(timeout, tuple): try: connect, read = timeout timeout = TimeoutSauce(connect=connect, read=read) except ValueError: raise ValueError( f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " f"or a single float to set both timeouts to the same value." ) elif isinstance(timeout, TimeoutSauce): pass else: timeout = TimeoutSauce(connect=timeout, read=timeout) try: > resp = conn.urlopen( method=request.method, url=url, body=request.body, headers=request.headers, redirect=False, assert_same_host=False, preload_content=False, decode_content=False, retries=self.max_retries, timeout=timeout, chunked=chunked, ) C:\Users\20685\AppData\Local\Programs\Python\Python310\lib\site-packages\requests\adapters.py:667: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ C:\Users\20685\AppData\Local\Programs\Python\Python310\lib\site-packages\urllib3\connectionpool.py:845: in urlopen retries = retries.increment( _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = Retry(total=0, connect=None, read=False, redirect=None, status=None) method = 'POST' url = '/adminapi/login?Content_Type=application%2Fx-www-from-urlencoded' response = None error = SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1007)')) _pool = <urllib3.connectionpool.HTTPSConnectionPool object at 0x000002DA1B54E380> _stacktrace = <traceback object at 0x000002DA1B279980> def increment( self, method: str | None = None, url: str | None = None, response: BaseHTTPResponse | None = None, error: Exception | None = None, _pool: ConnectionPool | None = None, _stacktrace: TracebackType | None = None, ) -> Retry: """Return a new Retry object with incremented retry counters. :param response: A response object, or None, if the server did not return a response. :type response: :class:`~urllib3.response.BaseHTTPResponse` :param Exception error: An error encountered during the request, or None if the response was received successfully. :return: A new ``Retry`` object. """ if self.total is False and error: # Disabled, indicate to re-raise the error. raise reraise(type(error), error, _stacktrace) total = self.total if total is not None: total -= 1 connect = self.connect read = self.read redirect = self.redirect status_count = self.status other = self.other cause = "unknown" status = None redirect_location = None if error and self._is_connection_error(error): # Connect retry? if connect is False: raise reraise(type(error), error, _stacktrace) elif connect is not None: connect -= 1 elif error and self._is_read_error(error): # Read retry? if read is False or method is None or not self._is_method_retryable(method): raise reraise(type(error), error, _stacktrace) elif read is not None: read -= 1 elif error: # Other retry? if other is not None: other -= 1 elif response and response.get_redirect_location(): # Redirect retry? if redirect is not None: redirect -= 1 cause = "too many redirects" response_redirect_location = response.get_redirect_location() if response_redirect_location: redirect_location = response_redirect_location status = response.status else: # Incrementing because of a server error like a 500 in # status_forcelist and the given method is in the allowed_methods cause = ResponseError.GENERIC_ERROR if response and response.status: if status_count is not None: status_count -= 1 cause = ResponseError.SPECIFIC_ERROR.format(status_code=response.status) status = response.status history = self.history + ( RequestHistory(method, url, error, status, redirect_location), ) new_retry = self.new( total=total, connect=connect, read=read, redirect=redirect, status=status_count, other=other, history=history, ) if new_retry.is_exhausted(): reason = error or ResponseError(cause) > raise MaxRetryError(_pool, url, reason) from reason # type: ignore[arg-type] E urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='192.168.116.137', port=443): Max retries exceeded with url: /adminapi/login?Content_Type=application%2Fx-www-from-urlencoded (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1007)'))) C:\Users\20685\AppData\Local\Programs\Python\Python310\lib\site-packages\urllib3\util\retry.py:515: MaxRetryError During handling of the above exception, another exception occurred: self = <testcaes.test_all_case.TestAllCase object at 0x000002DA1B54E1D0> caseinfo = {'request': {'data': {'account': 'admin', 'imgcode': '', 'pwd': 123456}, 'method': 'post', 'params': {'Content_Type': 'application/x-www-from-urlencoded'}, 'url': 'https://192.168.116.137/adminapi/login'}, 'validate': None, 'verify': False} @pytest.mark.parametrize("caseinfo",read_yaml(yaml_path)) def duyo(self,caseinfo): new_caseinfo = verify_yaml(caseinfo) #在读取yaml内容后调取用例模板方法验证用例是否足够 #发送请求 > RequestUtil().send_all_request(**new_caseinfo.request) testcaes\test_all_case.py:17: _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ common\requests_util.py:27: in send_all_request res = RequestUtil.sess.request(**kwargs) C:\Users\20685\AppData\Local\Programs\Python\Python310\lib\site-packages\requests\sessions.py:589: in request resp = self.send(prep, **send_kwargs) C:\Users\20685\AppData\Local\Programs\Python\Python310\lib\site-packages\requests\sessions.py:703: in send r = adapter.send(request, **kwargs) _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ self = <requests.adapters.HTTPAdapter object at 0x000002DA1B51A710> request = <PreparedRequest [POST]>, stream = False timeout = Timeout(connect=None, read=None, total=None), verify = True cert = None, proxies = OrderedDict() def send( self, request, stream=False, timeout=None, verify=True, cert=None, proxies=None ): """Sends PreparedRequest object. Returns Response object. :param request: The :class:`PreparedRequest <PreparedRequest>` being sent. :param stream: (optional) Whether to stream the request content. :param timeout: (optional) How long to wait for the server to send data before giving up, as a float, or a :ref:`(connect timeout, read timeout) <timeouts>` tuple. :type timeout: float or tuple or urllib3 Timeout object :param verify: (optional) Either a boolean, in which case it controls whether we verify the server's TLS certificate, or a string, in which case it must be a path to a CA bundle to use :param cert: (optional) Any user-provided SSL certificate to be trusted. :param proxies: (optional) The proxies dictionary to apply to the request. :rtype: requests.Response """ try: conn = self.get_connection_with_tls_context( request, verify, proxies=proxies, cert=cert ) except LocationValueError as e: raise InvalidURL(e, request=request) self.cert_verify(conn, request.url, verify, cert) url = self.request_url(request, proxies) self.add_headers( request, stream=stream, timeout=timeout, verify=verify, cert=cert, proxies=proxies, ) chunked = not (request.body is None or "Content-Length" in request.headers) if isinstance(timeout, tuple): try: connect, read = timeout timeout = TimeoutSauce(connect=connect, read=read) except ValueError: raise ValueError( f"Invalid timeout {timeout}. Pass a (connect, read) timeout tuple, " f"or a single float to set both timeouts to the same value." ) elif isinstance(timeout, TimeoutSauce): pass else: timeout = TimeoutSauce(connect=timeout, read=timeout) try: resp = conn.urlopen( method=request.method, url=url, body=request.body, headers=request.headers, redirect=False, assert_same_host=False, preload_content=False, decode_content=False, retries=self.max_retries, timeout=timeout, chunked=chunked, ) except (ProtocolError, OSError) as err: raise ConnectionError(err, request=request) except MaxRetryError as e: if isinstance(e.reason, ConnectTimeoutError): # TODO: Remove this in 3.0.0: see #2811 if not isinstance(e.reason, NewConnectionError): raise ConnectTimeout(e, request=request) if isinstance(e.reason, ResponseError): raise RetryError(e, request=request) if isinstance(e.reason, _ProxyError): raise ProxyError(e, request=request) if isinstance(e.reason, _SSLError): # This branch is for urllib3 v1.22 and later. > raise SSLError(e, request=request) E requests.exceptions.SSLError: HTTPSConnectionPool(host='192.168.116.137', port=443): Max retries exceeded with url: /adminapi/login?Content_Type=application%2Fx-www-from-urlencoded (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1007)'))) C:\Users\20685\AppData\Local\Programs\Python\Python310\lib\site-packages\requests\adapters.py:698: SSLError ============================== warnings summary =============================== C:\Users\20685\AppData\Local\Programs\Python\Python310\lib\site-packages\_pytest\config\__init__.py:1277 C:\Users\20685\AppData\Local\Programs\Python\Python310\lib\site-packages\_pytest\config\__init__.py:1277: PytestAssertRewriteWarning: Module already imported so cannot be rewritten: allure_pytest self._mark_plugins_for_rewrite(hook) C:\Users\20685\AppData\Local\Programs\Python\Python310\lib\site-packages\_pytest\config\__init__.py:1500 C:\Users\20685\AppData\Local\Programs\Python\Python310\lib\site-packages\_pytest\config\__init__.py:1500: PytestConfigWarning: No files were found in testpaths; consider removing or adjusting your testpaths configuration. Searching recursively from the current directory instead. self.args, self.args_source = self._decide_args( C:\Users\20685\AppData\Local\Programs\Python\Python310\lib\site-packages\_pytest\config\__init__.py:1441 C:\Users\20685\AppData\Local\Programs\Python\Python310\lib\site-packages\_pytest\config\__init__.py:1441: PytestConfigWarning: Unknown config option: Python_classes self._warn_or_fail_if_strict(f"Unknown config option: {key}\n") -- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html =========================== short test summary info =========================== FAILED testcaes/test_all_case.py::TestAllCase::test_login[caseinfo0] - reques... ======================== 1 failed, 3 warnings in 0.25s ======================== 进程已结束,退出代码为 0
最新发布
08-01
r"""HTTP/1.1 client library <intro stuff goes here> <other stuff, too> HTTPConnection goes through a number of "states", which define when a client may legally make another request or fetch the response for a particular request. This diagram details these state transitions: (null) | | HTTPConnection() v Idle | | putrequest() v Request-started | | ( putheader() )* endheaders() v Request-sent |\_____________________________ | | getresponse() raises | response = getresponse() | ConnectionError v v Unread-response Idle [Response-headers-read] |\____________________ | | | response.read() | putrequest() v v Idle Req-started-unread-response ______/| / | response.read() | | ( putheader() )* endheaders() v v Request-started Req-sent-unread-response | | response.read() v Request-sent This diagram presents the following rules: -- a second request may not be started until {response-headers-read} -- a response [object] cannot be retrieved until {request-sent} -- there is no differentiation between an unread response body and a partially read response body Note: this enforcement is applied by the HTTPConnection class. The HTTPResponse class does not enforce this state machine, which implies sophisticated clients may accelerate the request/response pipeline. Caution should be taken, though: accelerating the states beyond the above pattern may imply knowledge of the server's connection-close behavior for certain requests. For example, it is impossible to tell whether the server will close the connection UNTIL the response headers have been read; this means that further requests cannot be placed into the pipeline until it is known that the server will NOT be closing the connection. Logical State __state __response ------------- ------- ---------- Idle _CS_IDLE None Request-started _CS_REQ_STARTED None Request-sent _CS_REQ_SENT None Unread-response _CS_IDLE <response_class> Req-started-unread-response _CS_REQ_STARTED <response_class> Req-sent-unread-response _CS_REQ_SENT <response_class> """ import email.parser import email.message import errno import http import io import re import socket import sys import collections.abc from urllib.parse import urlsplit # HTTPMessage, parse_headers(), and the HTTP status code constants are # intentionally omitted for simplicity __all__ = ["HTTPResponse", "HTTPConnection", "HTTPException", "NotConnected", "UnknownProtocol", "UnknownTransferEncoding", "UnimplementedFileMode", "IncompleteRead", "InvalidURL", "ImproperConnectionState", "CannotSendRequest", "CannotSendHeader", "ResponseNotReady", "BadStatusLine", "LineTooLong", "RemoteDisconnected", "error", "responses"] HTTP_PORT = 80 HTTPS_PORT = 443 _UNKNOWN = 'UNKNOWN' # connection states _CS_IDLE = 'Idle' _CS_REQ_STARTED = 'Request-started' _CS_REQ_SENT = 'Request-sent' # hack to maintain backwards compatibility globals().update(http.HTTPStatus.__members__) # another hack to maintain backwards compatibility # Mapping status codes to official W3C names responses = {v: v.phrase for v in http.HTTPStatus.__members__.values()} # maximal line length when calling readline(). _MAXLINE = 65536 _MAXHEADERS = 100 # Header name/value ABNF (http://tools.ietf.org/html/rfc7230#section-3.2) # # VCHAR = %x21-7E # obs-text = %x80-FF # header-field = field-name ":" OWS field-value OWS # field-name = token # field-value = *( field-content / obs-fold ) # field-content = field-vchar [ 1*( SP / HTAB ) field-vchar ] # field-vchar = VCHAR / obs-text # # obs-fold = CRLF 1*( SP / HTAB ) # ; obsolete line folding # ; see Section 3.2.4 # token = 1*tchar # # tchar = "!" / "#" / "$" / "%" / "&" / "'" / "*" # / "+" / "-" / "." / "^" / "_" / "`" / "|" / "~" # / DIGIT / ALPHA # ; any VCHAR, except delimiters # # VCHAR defined in http://tools.ietf.org/html/rfc5234#appendix-B.1 # the patterns for both name and value are more lenient than RFC # definitions to allow for backwards compatibility _is_legal_header_name = re.compile(rb'[^:\s][^:\r\n]*').fullmatch _is_illegal_header_value = re.compile(rb'\n(?![ \t])|\r(?![ \t\n])').search # These characters are not allowed within HTTP URL paths. # See https://tools.ietf.org/html/rfc3986#section-3.3 and the # https://tools.ietf.org/html/rfc3986#appendix-A pchar definition. # Prevents CVE-2019-9740. Includes control characters such as \r\n. # We don't restrict chars above \x7f as putrequest() limits us to ASCII. _contains_disallowed_url_pchar_re = re.compile('[\x00-\x20\x7f]') # Arguably only these _should_ allowed: # _is_allowed_url_pchars_re = re.compile(r"^[/!$&'()*+,;=:@%a-zA-Z0-9._~-]+$") # We are more lenient for assumed real world compatibility purposes. # These characters are not allowed within HTTP method names # to prevent http header injection. _contains_disallowed_method_pchar_re = re.compile('[\x00-\x1f]') # We always set the Content-Length header for these methods because some # servers will otherwise respond with a 411 _METHODS_EXPECTING_BODY = {'PATCH', 'POST', 'PUT'} def _encode(data, name='data'): """Call data.encode("latin-1") but show a better error message.""" try: return data.encode("latin-1") except UnicodeEncodeError as err: raise UnicodeEncodeError( err.encoding, err.object, err.start, err.end, "%s (%.20r) is not valid Latin-1. Use %s.encode('utf-8') " "if you want to send it encoded in UTF-8." % (name.title(), data[err.start:err.end], name)) from None class HTTPMessage(email.message.Message): # XXX The only usage of this method is in # http.server.CGIHTTPRequestHandler. Maybe move the code there so # that it doesn't need to be part of the public API. The API has # never been defined so this could cause backwards compatibility # issues. def getallmatchingheaders(self, name): """Find all header lines matching a given header name. Look through the list of headers and find all lines matching a given header name (and their continuation lines). A list of the lines is returned, without interpretation. If the header does not occur, an empty list is returned. If the header occurs multiple times, all occurrences are returned. Case is not important in the header name. """ name = name.lower() + ':' n = len(name) lst = [] hit = 0 for line in self.keys(): if line[:n].lower() == name: hit = 1 elif not line[:1].isspace(): hit = 0 if hit: lst.append(line) return lst def _read_headers(fp): """Reads potential header lines into a list from a file pointer. Length of line is limited by _MAXLINE, and number of headers is limited by _MAXHEADERS. """ headers = [] while True: line = fp.readline(_MAXLINE + 1) if len(line) > _MAXLINE: raise LineTooLong("header line") headers.append(line) if len(headers) > _MAXHEADERS: raise HTTPException("got more than %d headers" % _MAXHEADERS) if line in (b'\r\n', b'\n', b''): break return headers def parse_headers(fp, _class=HTTPMessage): """Parses only RFC2822 headers from a file pointer. email Parser wants to see strings rather than bytes. But a TextIOWrapper around self.rfile would buffer too many bytes from the stream, bytes which we later need to read as bytes. So we read the correct bytes here, as bytes, for email Parser to parse. """ headers = _read_headers(fp) hstring = b''.join(headers).decode('iso-8859-1') return email.parser.Parser(_class=_class).parsestr(hstring) class HTTPResponse(io.BufferedIOBase): # See RFC 2616 sec 19.6 and RFC 1945 sec 6 for details. # The bytes from the socket object are iso-8859-1 strings. # See RFC 2616 sec 2.2 which notes an exception for MIME-encoded # text following RFC 2047. The basic status line parsing only # accepts iso-8859-1. def __init__(self, sock, debuglevel=0, method=None, url=None): # If the response includes a content-length header, we need to # make sure that the client doesn't read more than the # specified number of bytes. If it does, it will block until # the server times out and closes the connection. This will # happen if a self.fp.read() is done (without a size) whether # self.fp is buffered or not. So, no self.fp.read() by # clients unless they know what they are doing. self.fp = sock.makefile("rb") self.debuglevel = debuglevel self._method = method # The HTTPResponse object is returned via urllib. The clients # of http and urllib expect different attributes for the # headers. headers is used here and supports urllib. msg is # provided as a backwards compatibility layer for http # clients. self.headers = self.msg = None # from the Status-Line of the response self.version = _UNKNOWN # HTTP-Version self.status = _UNKNOWN # Status-Code self.reason = _UNKNOWN # Reason-Phrase self.chunked = _UNKNOWN # is "chunked" being used? self.chunk_left = _UNKNOWN # bytes left to read in current chunk self.length = _UNKNOWN # number of bytes left in response self.will_close = _UNKNOWN # conn will close at end of response def _read_status(self): line = str(self.fp.readline(_MAXLINE + 1), "iso-8859-1") if len(line) > _MAXLINE: raise LineTooLong("status line") if self.debuglevel > 0: print("reply:", repr(line)) if not line: # Presumably, the server closed the connection before # sending a valid response. raise RemoteDisconnected("Remote end closed connection without" " response") try: version, status, reason = line.split(None, 2) except ValueError: try: version, status = line.split(None, 1) reason = "" except ValueError: # empty version will cause next test to fail. version = "" if not version.startswith("HTTP/"): self._close_conn() raise BadStatusLine(line) # The status code is a three-digit number try: status = int(status) if status < 100 or status > 999: raise BadStatusLine(line) except ValueError: raise BadStatusLine(line) return version, status, reason def begin(self): if self.headers is not None: # we've already started reading the response return # read until we get a non-100 response while True: version, status, reason = self._read_status() if status != CONTINUE: break # skip the header from the 100 response skipped_headers = _read_headers(self.fp) if self.debuglevel > 0: print("headers:", skipped_headers) del skipped_headers self.code = self.status = status self.reason = reason.strip() if version in ("HTTP/1.0", "HTTP/0.9"): # Some servers might still return "0.9", treat it as 1.0 anyway self.version = 10 elif version.startswith("HTTP/1."): self.version = 11 # use HTTP/1.1 code for HTTP/1.x where x>=1 else: raise UnknownProtocol(version) self.headers = self.msg = parse_headers(self.fp) if self.debuglevel > 0: for hdr, val in self.headers.items(): print("header:", hdr + ":", val) # are we using the chunked-style of transfer encoding? tr_enc = self.headers.get("transfer-encoding") if tr_enc and tr_enc.lower() == "chunked": self.chunked = True self.chunk_left = None else: self.chunked = False # will the connection close at the end of the response? self.will_close = self._check_close() # do we have a Content-Length? # NOTE: RFC 2616, S4.4, #3 says we ignore this if tr_enc is "chunked" self.length = None length = self.headers.get("content-length") if length and not self.chunked: try: self.length = int(length) except ValueError: self.length = None else: if self.length < 0: # ignore nonsensical negative lengths self.length = None else: self.length = None # does the body have a fixed length? (of zero) if (status == NO_CONTENT or status == NOT_MODIFIED or 100 <= status < 200 or # 1xx codes self._method == "HEAD"): self.length = 0 # if the connection remains open, and we aren't using chunked, and # a content-length was not provided, then assume that the connection # WILL close. if (not self.will_close and not self.chunked and self.length is None): self.will_close = True def _check_close(self): conn = self.headers.get("connection") if self.version == 11: # An HTTP/1.1 proxy is assumed to stay open unless # explicitly closed. if conn and "close" in conn.lower(): return True return False # Some HTTP/1.0 implementations have support for persistent # connections, using rules different than HTTP/1.1. # For older HTTP, Keep-Alive indicates persistent connection. if self.headers.get("keep-alive"): return False # At least Akamai returns a "Connection: Keep-Alive" header, # which was supposed to be sent by the client. if conn and "keep-alive" in conn.lower(): return False # Proxy-Connection is a netscape hack. pconn = self.headers.get("proxy-connection") if pconn and "keep-alive" in pconn.lower(): return False # otherwise, assume it will close return True def _close_conn(self): fp = self.fp self.fp = None fp.close() def close(self): try: super().close() # set "closed" flag finally: if self.fp: self._close_conn() # These implementations are for the benefit of io.BufferedReader. # XXX This class should probably be revised to act more like # the "raw stream" that BufferedReader expects. def flush(self): super().flush() if self.fp: self.fp.flush() def readable(self): """Always returns True""" return True # End of "raw stream" methods def isclosed(self): """True if the connection is closed.""" # NOTE: it is possible that we will not ever call self.close(). This # case occurs when will_close is TRUE, length is None, and we # read up to the last byte, but NOT past it. # # IMPLIES: if will_close is FALSE, then self.close() will ALWAYS be # called, meaning self.isclosed() is meaningful. return self.fp is None def read(self, amt=None): if self.fp is None: return b"" if self._method == "HEAD": self._close_conn() return b"" if self.chunked: return self._read_chunked(amt) if amt is not None: if self.length is not None and amt > self.length: # clip the read to the "end of response" amt = self.length s = self.fp.read(amt) if not s and amt: # Ideally, we would raise IncompleteRead if the content-length # wasn't satisfied, but it might break compatibility. self._close_conn() elif self.length is not None: self.length -= len(s) if not self.length: self._close_conn() return s else: # Amount is not given (unbounded read) so we must check self.length if self.length is None: s = self.fp.read() else: try: s = self._safe_read(self.length) except IncompleteRead: self._close_conn() raise self.length = 0 self._close_conn() # we read everything return s def readinto(self, b): """Read up to len(b) bytes into bytearray b and return the number of bytes read. """ if self.fp is None: return 0 if self._method == "HEAD": self._close_conn() return 0 if self.chunked: return self._readinto_chunked(b) if self.length is not None: if len(b) > self.length: # clip the read to the "end of response" b = memoryview(b)[0:self.length] # we do not use _safe_read() here because this may be a .will_close # connection, and the user is reading more bytes than will be provided # (for example, reading in 1k chunks) n = self.fp.readinto(b) if not n and b: # Ideally, we would raise IncompleteRead if the content-length # wasn't satisfied, but it might break compatibility. self._close_conn() elif self.length is not None: self.length -= n if not self.length: self._close_conn() return n def _read_next_chunk_size(self): # Read the next chunk size from the file line = self.fp.readline(_MAXLINE + 1) if len(line) > _MAXLINE: raise LineTooLong("chunk size") i = line.find(b";") if i >= 0: line = line[:i] # strip chunk-extensions try: return int(line, 16) except ValueError: # close the connection as protocol synchronisation is # probably lost self._close_conn() raise def _read_and_discard_trailer(self): # read and discard trailer up to the CRLF terminator ### note: we shouldn't have any trailers! while True: line = self.fp.readline(_MAXLINE + 1) if len(line) > _MAXLINE: raise LineTooLong("trailer line") if not line: # a vanishingly small number of sites EOF without # sending the trailer break if line in (b'\r\n', b'\n', b''): break def _get_chunk_left(self): # return self.chunk_left, reading a new chunk if necessary. # chunk_left == 0: at the end of the current chunk, need to close it # chunk_left == None: No current chunk, should read next. # This function returns non-zero or None if the last chunk has # been read. chunk_left = self.chunk_left if not chunk_left: # Can be 0 or None if chunk_left is not None: # We are at the end of chunk, discard chunk end self._safe_read(2) # toss the CRLF at the end of the chunk try: chunk_left = self._read_next_chunk_size() except ValueError: raise IncompleteRead(b'') if chunk_left == 0: # last chunk: 1*("0") [ chunk-extension ] CRLF self._read_and_discard_trailer() # we read everything; close the "file" self._close_conn() chunk_left = None self.chunk_left = chunk_left return chunk_left def _read_chunked(self, amt=None): assert self.chunked != _UNKNOWN value = [] try: while True: chunk_left = self._get_chunk_left() if chunk_left is None: break if amt is not None and amt <= chunk_left: value.append(self._safe_read(amt)) self.chunk_left = chunk_left - amt break value.append(self._safe_read(chunk_left)) if amt is not None: amt -= chunk_left self.chunk_left = 0 return b''.join(value) except IncompleteRead as exc: raise IncompleteRead(b''.join(value)) from exc def _readinto_chunked(self, b): assert self.chunked != _UNKNOWN total_bytes = 0 mvb = memoryview(b) try: while True: chunk_left = self._get_chunk_left() if chunk_left is None: return total_bytes if len(mvb) <= chunk_left: n = self._safe_readinto(mvb) self.chunk_left = chunk_left - n return total_bytes + n temp_mvb = mvb[:chunk_left] n = self._safe_readinto(temp_mvb) mvb = mvb[n:] total_bytes += n self.chunk_left = 0 except IncompleteRead: raise IncompleteRead(bytes(b[0:total_bytes])) def _safe_read(self, amt): """Read the number of bytes requested. This function should be used when <amt> bytes "should" be present for reading. If the bytes are truly not available (due to EOF), then the IncompleteRead exception can be used to detect the problem. """ data = self.fp.read(amt) if len(data) < amt: raise IncompleteRead(data, amt-len(data)) return data def _safe_readinto(self, b): """Same as _safe_read, but for reading into a buffer.""" amt = len(b) n = self.fp.readinto(b) if n < amt: raise IncompleteRead(bytes(b[:n]), amt-n) return n def read1(self, n=-1): """Read with at most one underlying system call. If at least one byte is buffered, return that instead. """ if self.fp is None or self._method == "HEAD": return b"" if self.chunked: return self._read1_chunked(n) if self.length is not None and (n < 0 or n > self.length): n = self.length result = self.fp.read1(n) if not result and n: self._close_conn() elif self.length is not None: self.length -= len(result) return result def peek(self, n=-1): # Having this enables IOBase.readline() to read more than one # byte at a time if self.fp is None or self._method == "HEAD": return b"" if self.chunked: return self._peek_chunked(n) return self.fp.peek(n) def readline(self, limit=-1): if self.fp is None or self._method == "HEAD": return b"" if self.chunked: # Fallback to IOBase readline which uses peek() and read() return super().readline(limit) if self.length is not None and (limit < 0 or limit > self.length): limit = self.length result = self.fp.readline(limit) if not result and limit: self._close_conn() elif self.length is not None: self.length -= len(result) return result def _read1_chunked(self, n): # Strictly speaking, _get_chunk_left() may cause more than one read, # but that is ok, since that is to satisfy the chunked protocol. chunk_left = self._get_chunk_left() if chunk_left is None or n == 0: return b'' if not (0 <= n <= chunk_left): n = chunk_left # if n is negative or larger than chunk_left read = self.fp.read1(n) self.chunk_left -= len(read) if not read: raise IncompleteRead(b"") return read def _peek_chunked(self, n): # Strictly speaking, _get_chunk_left() may cause more than one read, # but that is ok, since that is to satisfy the chunked protocol. try: chunk_left = self._get_chunk_left() except IncompleteRead: return b'' # peek doesn't worry about protocol if chunk_left is None: return b'' # eof # peek is allowed to return more than requested. Just request the # entire chunk, and truncate what we get. return self.fp.peek(chunk_left)[:chunk_left] def fileno(self): return self.fp.fileno() def getheader(self, name, default=None): '''Returns the value of the header matching *name*. If there are multiple matching headers, the values are combined into a single string separated by commas and spaces. If no matching header is found, returns *default* or None if the *default* is not specified. If the headers are unknown, raises http.client.ResponseNotReady. ''' if self.headers is None: raise ResponseNotReady() headers = self.headers.get_all(name) or default if isinstance(headers, str) or not hasattr(headers, '__iter__'): return headers else: return ', '.join(headers) def getheaders(self): """Return list of (header, value) tuples.""" if self.headers is None: raise ResponseNotReady() return list(self.headers.items()) # We override IOBase.__iter__ so that it doesn't check for closed-ness def __iter__(self): return self # For compatibility with old-style urllib responses. def info(self): '''Returns an instance of the class mimetools.Message containing meta-information associated with the URL. When the method is HTTP, these headers are those returned by the server at the head of the retrieved HTML page (including Content-Length and Content-Type). When the method is FTP, a Content-Length header will be present if (as is now usual) the server passed back a file length in response to the FTP retrieval request. A Content-Type header will be present if the MIME type can be guessed. When the method is local-file, returned headers will include a Date representing the file's last-modified time, a Content-Length giving file size, and a Content-Type containing a guess at the file's type. See also the description of the mimetools module. ''' return self.headers def geturl(self): '''Return the real URL of the page. In some cases, the HTTP server redirects a client to another URL. The urlopen() function handles this transparently, but in some cases the caller needs to know which URL the client was redirected to. The geturl() method can be used to get at this redirected URL. ''' return self.url def getcode(self): '''Return the HTTP status code that was sent with the response, or None if the URL is not an HTTP URL. ''' return self.status class HTTPConnection: _http_vsn = 11 _http_vsn_str = 'HTTP/1.1' response_class = HTTPResponse default_port = HTTP_PORT auto_open = 1 debuglevel = 0 @staticmethod def _is_textIO(stream): """Test whether a file-like object is a text or a binary stream. """ return isinstance(stream, io.TextIOBase) @staticmethod def _get_content_length(body, method): """Get the content-length based on the body. If the body is None, we set Content-Length: 0 for methods that expect a body (RFC 7230, Section 3.3.2). We also set the Content-Length for any method if the body is a str or bytes-like object and not a file. """ if body is None: # do an explicit check for not None here to distinguish # between unset and set but empty if method.upper() in _METHODS_EXPECTING_BODY: return 0 else: return None if hasattr(body, 'read'): # file-like object. return None try: # does it implement the buffer protocol (bytes, bytearray, array)? mv = memoryview(body) return mv.nbytes except TypeError: pass if isinstance(body, str): return len(body) return None def __init__(self, host, port=None, timeout=socket._GLOBAL_DEFAULT_TIMEOUT, source_address=None, blocksize=8192): self.timeout = timeout self.source_address = source_address self.blocksize = blocksize self.sock = None self._buffer = [] self.__response = None self.__state = _CS_IDLE self._method = None self._tunnel_host = None self._tunnel_port = None self._tunnel_headers = {} (self.host, self.port) = self._get_hostport(host, port) self._validate_host(self.host) # This is stored as an instance variable to allow unit # tests to replace it with a suitable mockup self._create_connection = socket.create_connection def set_tunnel(self, host, port=None, headers=None): """Set up host and port for HTTP CONNECT tunnelling. In a connection that uses HTTP CONNECT tunneling, the host passed to the constructor is used as a proxy server that relays all communication to the endpoint passed to `set_tunnel`. This done by sending an HTTP CONNECT request to the proxy server when the connection is established. This method must be called before the HTTP connection has been established. The headers argument should be a mapping of extra HTTP headers to send with the CONNECT request. """ if self.sock: raise RuntimeError("Can't set up tunnel for established connection") self._tunnel_host, self._tunnel_port = self._get_hostport(host, port) if headers: self._tunnel_headers = headers else: self._tunnel_headers.clear() def _get_hostport(self, host, port): if port is None: i = host.rfind(':') j = host.rfind(']') # ipv6 addresses have [...] if i > j: try: port = int(host[i+1:]) except ValueError: if host[i+1:] == "": # http://foo.com:/ == http://foo.com/ port = self.default_port else: raise InvalidURL("nonnumeric port: '%s'" % host[i+1:]) host = host[:i] else: port = self.default_port if host and host[0] == '[' and host[-1] == ']': host = host[1:-1] return (host, port) def set_debuglevel(self, level): self.debuglevel = level def _tunnel(self): connect = b"CONNECT %s:%d HTTP/1.0\r\n" % ( self._tunnel_host.encode("ascii"), self._tunnel_port) headers = [connect] for header, value in self._tunnel_headers.items(): headers.append(f"{header}: {value}\r\n".encode("latin-1")) headers.append(b"\r\n") # Making a single send() call instead of one per line encourages # the host OS to use a more optimal packet size instead of # potentially emitting a series of small packets. self.send(b"".join(headers)) del headers response = self.response_class(self.sock, method=self._method) (version, code, message) = response._read_status() if code != http.HTTPStatus.OK: self.close() raise OSError(f"Tunnel connection failed: {code} {message.strip()}") while True: line = response.fp.readline(_MAXLINE + 1) if len(line) > _MAXLINE: raise LineTooLong("header line") if not line: # for sites which EOF without sending a trailer break if line in (b'\r\n', b'\n', b''): break if self.debuglevel > 0: print('header:', line.decode()) def connect(self): """Connect to the host and port specified in __init__.""" sys.audit("http.client.connect", self, self.host, self.port) self.sock = self._create_connection( (self.host,self.port), self.timeout, self.source_address) # Might fail in OSs that don't implement TCP_NODELAY try: self.sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1) except OSError as e: if e.errno != errno.ENOPROTOOPT: raise if self._tunnel_host: self._tunnel() def close(self): """Close the connection to the HTTP server.""" self.__state = _CS_IDLE try: sock = self.sock if sock: self.sock = None sock.close() # close it manually... there may be other refs finally: response = self.__response if response: self.__response = None response.close() def send(self, data): """Send `data' to the server. ``data`` can be a string object, a bytes object, an array object, a file-like object that supports a .read() method, or an iterable object. """ if self.sock is None: if self.auto_open: self.connect() else: raise NotConnected() if self.debuglevel > 0: print("send:", repr(data)) if hasattr(data, "read") : if self.debuglevel > 0: print("sendIng a read()able") encode = self._is_textIO(data) if encode and self.debuglevel > 0: print("encoding file using iso-8859-1") while 1: datablock = data.read(self.blocksize) if not datablock: break if encode: datablock = datablock.encode("iso-8859-1") sys.audit("http.client.send", self, datablock) self.sock.sendall(datablock) return sys.audit("http.client.send", self, data) try: self.sock.sendall(data) except TypeError: if isinstance(data, collections.abc.Iterable): for d in data: self.sock.sendall(d) else: raise TypeError("data should be a bytes-like object " "or an iterable, got %r" % type(data)) def _output(self, s): """Add a line of output to the current request buffer. Assumes that the line does *not* end with \\r\\n. """ self._buffer.append(s) def _read_readable(self, readable): if self.debuglevel > 0: print("sendIng a read()able") encode = self._is_textIO(readable) if encode and self.debuglevel > 0: print("encoding file using iso-8859-1") while True: datablock = readable.read(self.blocksize) if not datablock: break if encode: datablock = datablock.encode("iso-8859-1") yield datablock def _send_output(self, message_body=None, encode_chunked=False): """Send the currently buffered request and clear the buffer. Appends an extra \\r\\n to the buffer. A message_body may be specified, to be appended to the request. """ self._buffer.extend((b"", b"")) msg = b"\r\n".join(self._buffer) del self._buffer[:] self.send(msg) if message_body is not None: # create a consistent interface to message_body if hasattr(message_body, 'read'): # Let file-like take precedence over byte-like. This # is needed to allow the current position of mmap'ed # files to be taken into account. chunks = self._read_readable(message_body) else: try: # this is solely to check to see if message_body # implements the buffer API. it /would/ be easier # to capture if PyObject_CheckBuffer was exposed # to Python. memoryview(message_body) except TypeError: try: chunks = iter(message_body) except TypeError: raise TypeError("message_body should be a bytes-like " "object or an iterable, got %r" % type(message_body)) else: # the object implements the buffer interface and # can be passed directly into socket methods chunks = (message_body,) for chunk in chunks: if not chunk: if self.debuglevel > 0: print('Zero length chunk ignored') continue if encode_chunked and self._http_vsn == 11: # chunked encoding chunk = f'{len(chunk):X}\r\n'.encode('ascii') + chunk \ + b'\r\n' self.send(chunk) if encode_chunked and self._http_vsn == 11: # end chunked transfer self.send(b'0\r\n\r\n') def putrequest(self, method, url, skip_host=False, skip_accept_encoding=False): """Send a request to the server. `method' specifies an HTTP request method, e.g. 'GET'. `url' specifies the object being requested, e.g. '/index.html'. `skip_host' if True does not add automatically a 'Host:' header `skip_accept_encoding' if True does not add automatically an 'Accept-Encoding:' header """ # if a prior response has been completed, then forget about it. if self.__response and self.__response.isclosed(): self.__response = None # in certain cases, we cannot issue another request on this connection. # this occurs when: # 1) we are in the process of sending a request. (_CS_REQ_STARTED) # 2) a response to a previous request has signalled that it is going # to close the connection upon completion. # 3) the headers for the previous response have not been read, thus # we cannot determine whether point (2) is true. (_CS_REQ_SENT) # # if there is no prior response, then we can request at will. # # if point (2) is true, then we will have passed the socket to the # response (effectively meaning, "there is no prior response"), and # will open a new one when a new request is made. # # Note: if a prior response exists, then we *can* start a new request. # We are not allowed to begin fetching the response to this new # request, however, until that prior response is complete. # if self.__state == _CS_IDLE: self.__state = _CS_REQ_STARTED else: raise CannotSendRequest(self.__state) self._validate_method(method) # Save the method for use later in the response phase self._method = method url = url or '/' self._validate_path(url) request = '%s %s %s' % (method, url, self._http_vsn_str) self._output(self._encode_request(request)) if self._http_vsn == 11: # Issue some standard headers for better HTTP/1.1 compliance if not skip_host: # this header is issued *only* for HTTP/1.1 # connections. more specifically, this means it is # only issued when the client uses the new # HTTPConnection() class. backwards-compat clients # will be using HTTP/1.0 and those clients may be # issuing this header themselves. we should NOT issue # it twice; some web servers (such as Apache) barf # when they see two Host: headers # If we need a non-standard port,include it in the # header. If the request is going through a proxy, # but the host of the actual URL, not the host of the # proxy. netloc = '' if url.startswith('http'): nil, netloc, nil, nil, nil = urlsplit(url) if netloc: try: netloc_enc = netloc.encode("ascii") except UnicodeEncodeError: netloc_enc = netloc.encode("idna") self.putheader('Host', netloc_enc) else: if self._tunnel_host: host = self._tunnel_host port = self._tunnel_port else: host = self.host port = self.port try: host_enc = host.encode("ascii") except UnicodeEncodeError: host_enc = host.encode("idna") # As per RFC 273, IPv6 address should be wrapped with [] # when used as Host header if host.find(':') >= 0: host_enc = b'[' + host_enc + b']' if port == self.default_port: self.putheader('Host', host_enc) else: host_enc = host_enc.decode("ascii") self.putheader('Host', "%s:%s" % (host_enc, port)) # note: we are assuming that clients will not attempt to set these # headers since *this* library must deal with the # consequences. this also means that when the supporting # libraries are updated to recognize other forms, then this # code should be changed (removed or updated). # we only want a Content-Encoding of "identity" since we don't # support encodings such as x-gzip or x-deflate. if not skip_accept_encoding: self.putheader('Accept-Encoding', 'identity') # we can accept "chunked" Transfer-Encodings, but no others # NOTE: no TE header implies *only* "chunked" #self.putheader('TE', 'chunked') # if TE is supplied in the header, then it must appear in a # Connection header. #self.putheader('Connection', 'TE') else: # For HTTP/1.0, the server will assume "not chunked" pass def _encode_request(self, request): # ASCII also helps prevent CVE-2019-9740. return request.encode('ascii') def _validate_method(self, method): """Validate a method name for putrequest.""" # prevent http header injection match = _contains_disallowed_method_pchar_re.search(method) if match: raise ValueError( f"method can't contain control characters. {method!r} " f"(found at least {match.group()!r})") def _validate_path(self, url): """Validate a url for putrequest.""" # Prevent CVE-2019-9740. match = _contains_disallowed_url_pchar_re.search(url) if match: raise InvalidURL(f"URL can't contain control characters. {url!r} " f"(found at least {match.group()!r})") def _validate_host(self, host): """Validate a host so it doesn't contain control characters.""" # Prevent CVE-2019-18348. match = _contains_disallowed_url_pchar_re.search(host) if match: raise InvalidURL(f"URL can't contain control characters. {host!r} " f"(found at least {match.group()!r})") def putheader(self, header, *values): """Send a request header line to the server. For example: h.putheader('Accept', 'text/html') """ if self.__state != _CS_REQ_STARTED: raise CannotSendHeader() if hasattr(header, 'encode'): header = header.encode('ascii') if not _is_legal_header_name(header): raise ValueError('Invalid header name %r' % (header,)) values = list(values) for i, one_value in enumerate(values): if hasattr(one_value, 'encode'): values[i] = one_value.encode('latin-1') elif isinstance(one_value, int): values[i] = str(one_value).encode('ascii') if _is_illegal_header_value(values[i]): raise ValueError('Invalid header value %r' % (values[i],)) value = b'\r\n\t'.join(values) header = header + b': ' + value self._output(header) def endheaders(self, message_body=None, *, encode_chunked=False): """Indicate that the last header line has been sent to the server. This method sends the request to the server. The optional message_body argument can be used to pass a message body associated with the request. """ if self.__state == _CS_REQ_STARTED: self.__state = _CS_REQ_SENT else: raise CannotSendHeader() self._send_output(message_body, encode_chunked=encode_chunked) def request(self, method, url, body=None, headers={}, *, encode_chunked=False): """Send a complete request to the server.""" self._send_request(method, url, body, headers, encode_chunked) def _send_request(self, method, url, body, headers, encode_chunked): # Honor explicitly requested Host: and Accept-Encoding: headers. header_names = frozenset(k.lower() for k in headers) skips = {} if 'host' in header_names: skips['skip_host'] = 1 if 'accept-encoding' in header_names: skips['skip_accept_encoding'] = 1 self.putrequest(method, url, **skips) # chunked encoding will happen if HTTP/1.1 is used and either # the caller passes encode_chunked=True or the following # conditions hold: # 1. content-length has not been explicitly set # 2. the body is a file or iterable, but not a str or bytes-like # 3. Transfer-Encoding has NOT been explicitly set by the caller if 'content-length' not in header_names: # only chunk body if not explicitly set for backwards # compatibility, assuming the client code is already handling the # chunking if 'transfer-encoding' not in header_names: # if content-length cannot be automatically determined, fall # back to chunked encoding encode_chunked = False content_length = self._get_content_length(body, method) if content_length is None: if body is not None: if self.debuglevel > 0: print('Unable to determine size of %r' % body) encode_chunked = True self.putheader('Transfer-Encoding', 'chunked') else: self.putheader('Content-Length', str(content_length)) else: encode_chunked = False for hdr, value in headers.items(): self.putheader(hdr, value) if isinstance(body, str): # RFC 2616 Section 3.7.1 says that text default has a # default charset of iso-8859-1. body = _encode(body, 'body') self.endheaders(body, encode_chunked=encode_chunked) def getresponse(self): """Get the response from the server. If the HTTPConnection is in the correct state, returns an instance of HTTPResponse or of whatever object is returned by the response_class variable. If a request has not been sent or if a previous response has not be handled, ResponseNotReady is raised. If the HTTP response indicates that the connection should be closed, then it will be closed before the response is returned. When the connection is closed, the underlying socket is closed. """ # if a prior response has been completed, then forget about it. if self.__response and self.__response.isclosed(): self.__response = None # if a prior response exists, then it must be completed (otherwise, we # cannot read this response's header to determine the connection-close # behavior) # # note: if a prior response existed, but was connection-close, then the # socket and response were made independent of this HTTPConnection # object since a new request requires that we open a whole new # connection # # this means the prior response had one of two states: # 1) will_close: this connection was reset and the prior socket and # response operate independently # 2) persistent: the response was retained and we await its # isclosed() status to become true. # if self.__state != _CS_REQ_SENT or self.__response: raise ResponseNotReady(self.__state) if self.debuglevel > 0: response = self.response_class(self.sock, self.debuglevel, method=self._method) else: response = self.response_class(self.sock, method=self._method) try: try: response.begin() except ConnectionError: self.close() raise assert response.will_close != _UNKNOWN self.__state = _CS_IDLE if response.will_close: # this effectively passes the connection to the response self.close() else: # remember this, so we can tell when it is complete self.__response = response return response except: response.close() raise try: import ssl except ImportError: pass else: class HTTPSConnection(HTTPConnection): "This class allows communication via SSL." default_port = HTTPS_PORT # XXX Should key_file and cert_file be deprecated in favour of context? def __init__(self, host, port=None, key_file=None, cert_file=None, timeout=socket._GLOBAL_DEFAULT_TIMEOUT, source_address=None, *, context=None, check_hostname=None, blocksize=8192): super(HTTPSConnection, self).__init__(host, port, timeout, source_address, blocksize=blocksize) if (key_file is not None or cert_file is not None or check_hostname is not None): import warnings warnings.warn("key_file, cert_file and check_hostname are " "deprecated, use a custom context instead.", DeprecationWarning, 2) self.key_file = key_file self.cert_file = cert_file if context is None: context = ssl._create_default_https_context() # send ALPN extension to indicate HTTP/1.1 protocol if self._http_vsn == 11: context.set_alpn_protocols(['http/1.1']) # enable PHA for TLS 1.3 connections if available if context.post_handshake_auth is not None: context.post_handshake_auth = True will_verify = context.verify_mode != ssl.CERT_NONE if check_hostname is None: check_hostname = context.check_hostname if check_hostname and not will_verify: raise ValueError("check_hostname needs a SSL context with " "either CERT_OPTIONAL or CERT_REQUIRED") if key_file or cert_file: context.load_cert_chain(cert_file, key_file) # cert and key file means the user wants to authenticate. # enable TLS 1.3 PHA implicitly even for custom contexts. if context.post_handshake_auth is not None: context.post_handshake_auth = True self._context = context if check_hostname is not None: self._context.check_hostname = check_hostname def connect(self): "Connect to a host on a given (SSL) port." super().connect() if self._tunnel_host: server_hostname = self._tunnel_host else: server_hostname = self.host self.sock = self._context.wrap_socket(self.sock, server_hostname=server_hostname) __all__.append("HTTPSConnection") class HTTPException(Exception): # Subclasses that define an __init__ must call Exception.__init__ # or define self.args. Otherwise, str() will fail. pass class NotConnected(HTTPException): pass class InvalidURL(HTTPException): pass class UnknownProtocol(HTTPException): def __init__(self, version): self.args = version, self.version = version class UnknownTransferEncoding(HTTPException): pass class UnimplementedFileMode(HTTPException): pass class IncompleteRead(HTTPException): def __init__(self, partial, expected=None): self.args = partial, self.partial = partial self.expected = expected def __repr__(self): if self.expected is not None: e = ', %i more expected' % self.expected else: e = '' return '%s(%i bytes read%s)' % (self.__class__.__name__, len(self.partial), e) __str__ = object.__str__ class ImproperConnectionState(HTTPException): pass class CannotSendRequest(ImproperConnectionState): pass class CannotSendHeader(ImproperConnectionState): pass class ResponseNotReady(ImproperConnectionState): pass class BadStatusLine(HTTPException): def __init__(self, line): if not line: line = repr(line) self.args = line, self.line = line class LineTooLong(HTTPException): def __init__(self, line_type): HTTPException.__init__(self, "got more than %d bytes when reading %s" % (_MAXLINE, line_type)) class RemoteDisconnected(ConnectionResetError, BadStatusLine): def __init__(self, *pos, **kw): BadStatusLine.__init__(self, "") ConnectionResetError.__init__(self, *pos, **kw) # for backwards compatibility error = HTTPException 解析这些代码 怎么联接这个服务器
06-19
请翻译下面内容为中文, ====================================== INSTALLING SUBVERSION A Quick Guide ====================================== $LastChangedDate$ Contents: I. INTRODUCTION A. Audience B. Dependency Overview C. Dependencies in Detail D. Documentation II. INSTALLATION A. Building from a Tarball B. Building the Latest Source under Unix C. Building under Unix in Different Directories D. Installing from a Zip or Installer File under Windows E. Building the Latest Source under Windows F. Building using CMake III. BUILDING A SUBVERSION SERVER A. Setting Up Apache Httpd B. Making and Installing the Subversion Apache Server Module C. Configuring Apache Httpd for Subversion D. Running and Testing E. Alternative: 'svnserve' and ra_svn IV. PROGRAMMING LANGUAGE BINDINGS (PYTHON, PERL, RUBY, JAVA) I. INTRODUCTION ============ A. Audience This document is written for people who intend to build Subversion from source code. Normally, the only people who do this are Subversion developers and package maintainers. If neither of these labels fits you, we recommend you find an appropriate binary package of Subversion and install that. While the Subversion project doesn't officially release binary packages, a number of volunteers have made such packages available for different operating systems. Most Linux and BSD distributions already have Subversion packages ready to go via standard packaging channels, and other volunteers have built 'installers' for both Windows and OS X. Visit this page for package links: https://subversion.apache.org/packages.html For those of you who still wish to build from source, Subversion follows the Unix convention of "./configure && make", but it has a number of dependencies. B. Dependency Overview You'll need the following build tools to compile Subversion: * autoconf 2.59 or later (Unix only) * libtool 1.4 or later (Unix only) * a reasonable C compiler (gcc, Visual Studio, etc.) Subversion also depends on the following third-party libraries: * libapr and libapr-util (REQUIRED for client and server) The Apache Portable Runtime (APR) library provides an abstraction of operating-system level services such as file and network I/O, memory management, and so on. It also provides convenience routines for things like hashtables, checksums, and argument processing. While it was originally developed for the Apache HTTP server, APR is a standalone library used by Subversion and other products. It is a critical dependency for all of Subversion; it's the layer that allows Subversion clients and servers to run on different operating systems. * SQLite (REQUIRED for client and server) Subversion uses SQLite to manage some internal databases. * libz (REQUIRED for client and server) Subversion uses zlib for compressing binary differences. These diff streams are used everywhere -- over the network, in the repository, and in the client's working copy. * utf8proc (REQUIRED for client and server) Subversion uses utf8proc for UTF-8 support, including Unicode normalization. * Apache Serf (OPTIONAL for client) The Apache Serf library allows the Subversion client to send HTTP requests. This is necessary if you want your client to access a repository served by the Apache HTTP server. There is an alternate 'svnserve' server as well, though, and clients automatically know how to speak the svnserve protocol. Thus it's not strictly necessary for your client to be able to speak HTTP... though we still recommend that your client be built to speak both HTTP and svnserve protocols. * OpenSSL (OPTIONAL for client and server) OpenSSL enables your client to access SSL-encrypted https:// URLs (using Apache Serf) in addition to unencrypted http:// URLs. To use SSL with Subversion's WebDAV server, Apache needs to be compiled with OpenSSL as well. * Netwide Assembler (OPTIONAL for client and server) The Netwide Assembler (NASM) is used to build the (optional) assembler modules of OpenSSL. As of OpenSSL 1.1.0 NASM is the only supported assembler. * Berkeley DB (DEPRECATED and OPTIONAL for client and server) When you create a repository, you have the option of specifying a storage 'back-end' implementation. Currently, there are two options. The newer and recommended one, known as FSFS, does not require Berkeley DB. FSFS stores data in a flat filesystem. The older implementation, known as BDB, has been deprecated and is not recommended for new repositories, but is still available. BDB stores data in a Berkeley DB database. This back-end will only be available if the BDB libraries are discovered at compile time. * libsasl (OPTIONAL for client and server) If the Cyrus SASL library is detected at compile time, then the svn client (and svnserve server) will be able to utilize SASL to do various forms of authentication when speaking the svnserve protocol. * Python, Perl, Java, Ruby (OPTIONAL) Subversion is mostly a collection of C libraries with well-defined APIs, with a small collection of programs that use the APIs. If you want to build Subversion API bindings for other languages, you need to have those languages available at build time. * py3c (OPTIONAL, but REQUIRED for Python bindings) The Python 3 Compatibility Layer for C Extensions is required to build the Python language bindings. * KDE Framework 5, libsecret, GNOME Keyring (OPTIONAL for client) Subversion contains optional support for storing passwords in KWallet via KDE Framework 5 libraries (preferred) or kdelibs4, and GNOME Keyring via libsecret (preferred) or GNOME APIs. * libmagic (OPTIONAL) If the libmagic library is detected at compile time, it will be used to determine mime-types of binary files which are added to version control. Note that mime-types configured via auto-props or the mime-types-file option take precedence. C. Dependencies in Detail Subversion depends on a number of third party tools and libraries. Some of them are only required to run a Subversion server; others are necessary just for a Subversion client. This section explains what other tools and libraries will be required so that Subversion can be built with the set of features you want. On Unix systems, the './configure' script will tell you if you are missing the correct version of any of the required libraries or tools, so if you are in a real hurry to get building, you can skip straight to section II. If you want to gather the pieces you will need before starting out, however, you should read the following. If you're just installing a Subversion client, the Subversion team has created a script that downloads the minimal prerequisite libraries (Apache Portable Runtime, Sqlite, and Zlib). The script, 'get-deps.sh', is available in the same directory as this file. When run, it will place 'apr', 'apr-util', 'serf', 'zlib', and 'sqlite-amalgamation' directories directly into your unpacked Subversion distribution. With the exception of sqlite-amalgamation, they will still need to be configured, built and installed explicitly, and Subversion's own configure script may need to be told where to find them, if they were not installed in standard system locations. Note: there are optional dependencies (such as OpenSSL, swig, and httpd) which get-deps.sh does not download. Note: Because previous builds of Subversion may have installed older versions of these libraries, you may want to run some of the cleanup commands described in section II.B before installing the following. 1. Apache Portable Runtime 1.4 or newer (REQUIRED) Whenever you want to build any part of Subversion, you need the Apache Portable Runtime (APR) and the APR Utility (APR-util) libraries. If you do not have a pre-installed APR and APR-util, you will need to get these yourself: https://apr.apache.org/download.cgi On Unix systems, if you already have the APR libraries compiled and do not wish to regenerate them from source code, then Subversion needs to be able to find them. There are a couple of options to "./configure" that tell it where to look for the APR and APR-util libraries. By default it will try to locate the libraries using apr-config and apu-config scripts. These scripts provide all the relevant information for the APR and APR-util installations. If you want to specify the location of the APR library, you can use the "--with-apr=" option of "./configure". It should be able to find the apr-config script in the standard location under that directory (e.g. ${prefix}/bin). Similarly, you can specify the location of APR-util using the "--with-apr-util=" option to "./configure". It will look for the apu-config script relative to that directory. For example, if you want to use the APR libraries you built with the Apache httpd server, you could run: $ ./configure --with-apr=/usr/local/apache2 \ --with-apr-util=/usr/local/apache2 ... Notes on Windows platforms: * Do not use APR version 1.7.3 as that release contains a bug that makes it impossible for Subversion to use it properly. This issue only affects APR builds on Windows. This issue was fixed in APR version 1.7.4. See: https://lists.apache.org/thread/xd5t922jvb9423ph4j84rsp5fxks1k0z * If you check out APR and APR-util sources from their Subversion repository, be sure to use a native Windows SVN client (as opposed to Cygwin's version) so that the .dsp files get carriage-returns at the ends of their lines. Otherwise Visual Studio will complain that it doesn't recognize the .dsp files. Notes on Unix platforms: * If you check out APR and APR-util sources from their Subversion repository, you need to run the 'buildconf' script in each library's directory to regenerate the configure scripts and other files required for compiling the libraries. Afterwards, configure, build, and install both libraries before running Subversion's configure script. For example: $ cd apr $ ./buildconf $ ./configure <options...> $ make $ make install $ cd .. $ cd apr-util $ ./buildconf $ ./configure <options...> $ make $ make install $ cd .. 2. SQLite (REQUIRED) Subversion requires SQLite version 3.24.0 or above. You can meet this dependency several ways: * Use an SQLite amalgamation file. * Specify an SQLite installation to use. * Let Subversion find an installed SQLite. To use an SQLite-provided amalgamation, just drop sqlite3.c into Subversion's sqlite-amalgamation/ directory, or point to it with the --with-sqlite configure option. This file also ships with the Subversion dependencies distribution, or you can download it from SQLite: https://www.sqlite.org/download.html 3. Zlib (REQUIRED) Subversion's binary-differencing engine depends on zlib for compression. Most Unix systems have libz pre-installed, but if you need it, you can get it from http://www.zlib.net/ 4. utf8proc (REQUIRED) Subversion uses utf8proc for UTF-8 support. Configure will attempt to locate utf8proc by default using pkg-config and known paths. If it is installed in a non-standard location, then use: --with-utf8proc=/path/to/libutf8proc Alternatively, a copy of utf8proc comes bundled with the Subversion sources. If configure should use the bundled copy, use: --with-utf8proc=internal 5. autoconf 2.59 or newer (Unix only) This is required only if you plan to build from the latest source (see section II.B). Generally only developers would be doing this. 6. libtool 1.4 or newer (Unix only) This is required only if you plan to build from the latest source (see section II.B). Note: Some systems (Solaris, for example) require libtool 1.4.3 or newer. The autogen.sh script knows about that. 7. Apache Serf library 1.3.4 or newer (OPTIONAL) If you want your client to be able to speak to an Apache server (via a http:// or https:// URL), you must link against Apache Serf. Though optional, we strongly recommend this. In order to use ra_serf, you must install serf, and run Subversion's ./configure with the argument --with-serf. If serf is installed in a non-standard place, you should use --with-serf=/path/to/serf/install instead. Apache Serf can be obtained via your system's package distribution system or directly from https://serf.apache.org/. For more information on Apache Serf and Subversion's ra_serf, see the file subversion/libsvn_ra_serf/README. 8. OpenSSL (OPTIONAL) ### needs some updates. I think Apache Serf automagically handles ### finding OpenSSL, but we may need more docco here. and w.r.t ### zlib. The Apache Serf library has support for SSL encryption by relying on the OpenSSL library. a. Using OpenSSL on the client through Apache Serf On Unix systems, to build Apache Serf with OpenSSL, you need OpenSSL installed on your system, and you must add "--with-ssl" as a "./configure" parameter. If your OpenSSL installation is hard for Apache Serf to find, you may need to use "--with-libs=/path/to/lib" in addition. In particular, on Red Hat (but not Fedora Core) it is necessary to specify "--with-libs=/usr/kerberos" for OpenSSL to be found. You can also specify a path to the zlib library using "--with-libs". Under Windows, you can specify the paths to these libraries by passing the options --with-zlib and --with-openssl to gen-make.py. b. Using OpenSSL on the Apache server You can also add support for these features to an Apache httpd server to be used for Subversion using the same support libraries. The Subversion build system will not provide them, however. You add them by specifying parameters to the "./configure" script of the Apache Server instead. For getting SSL on your server, you would add the "--enable-ssl" or "--with-ssl=/path/to/lib" option to Apache's "./configure" script. Apache enables zlib support by default, but you can specify a nonstandard location for the library with the "--with-z=/path/to/dir" option. Consult the Apache documentation for more details, and for other modules you may wish to install to enhance your Subversion server. If you don't already have it, you can get a copy of OpenSSL, including instructions for building and packaging on both Unix systems and Windows, at: https://www.openssl.org/ 9. Berkeley DB 4.X (DEPRECATED and OPTIONAL) You need the Berkeley DB libraries only if you are building a Subversion server that supports the older BDB repository storage back-end, or a Subversion client that can access local BDB repositories via the file:// URI scheme. The BDB back-end has been deprecated and is not recommended for new repositories. BDB may be removed in Subversion 2.0. We recommend the newer FSFS back-end for all new repositories. FSFS does not require the Berkeley DB libraries. If in doubt, the 'svnadmin info' command, added in Subversion 1.9, can identify whether an existing repository uses BDB or FSFS. The current recommended version of Berkeley DB is 4.4.20 or newer, which brings auto-recovery functionality to the Berkeley DB database environment. If you must use an older version of Berkeley DB, we *strongly* recommend using 4.3 or 4.2 over the 4.1 or 4.0 versions. Not only are these significantly faster and more stable, but they also enable Subversion repositories to automatically clean up database journal files to save disk space. You'll need Berkeley DB installed on your system. You can get it from: http://www.oracle.com/technetwork/database/database-technologies/berkeleydb/overview/index.html If you have Berkeley DB installed in a place not searched by default for includes and libraries, add something like this: --with-berkeley-db=db.h:/usr/local/include/db4.7:/usr/local/lib/db4.7:db-4.7 to your `configure' switches, and the build process will use the Berkeley DB header and library in the named directories. You may need to use a different path, of course. Note that in order for the detection to succeed, the dynamic linker must be able to find the libraries at configure time. 10. Cyrus SASL library (OPTIONAL) If the Simple Authentication and Security Layer (SASL) library is detected on your system, then the Subversion client and svnserve server can utilize its abilities for various forms of authentication. To learn more about SASL or to get the source code, visit: http://freshmeat.net/projects/cyrussasl/ 11. Apache Web Server 2.2.X or newer (OPTIONAL) (https://httpd.apache.org/download.cgi) The Apache httpd server is one of two methods to make your Subversion repository available over a network - the other is a custom server program called svnserve, which requires no extra software packages. Building Subversion, the Apache server, and the modules that Apache needs to communicate with Subversion are complicated enough that there is a whole section at the end of this document that describes how it is done: See section III for details. 12. Python 3.x or newer (https://www.python.org/) (OPTIONAL) Subversion does not require Python for its basic operation. However, Python is required for building and testing Subversion and for using Subversion's SWIG Python bindings or hook scripts coded in Python. The majority of Subversion's test suite is written in Python, as is part of Subversion's build system. In more detail, Python is required to do any of the following: * Use the SWIG Python bindings. * Use the ctypes Python bindings. * Use hook scripts coded in Python. * Build Subversion from a tarball on Unix-like systems and run Subversion's test suite as described in section II.B. * Build Subversion on Windows as described in section II.E. * Build Subversion from a working copy checked out from Subversion's own repository (whether or not running the test suite). * Build the SWIG Python bindings. * Build the ctypes Python bindings. * Testing as described in section III.D. The Python bindings are used by: * Third-party programs (e.g., ViewVC) * Scripts distributed with Subversion itself in the tools/ subdirectory. * Any in-house scripts you may have. Python is NOT required to do any of the following: * Use the core command-line binaries (svn, svnadmin, svnsync, etc.) * Use Subversion's C libraries. * Use any of Subversion's other language bindings. * Build Subversion from a tarball on Unix-like systems without running Subversion's test suite Although this section calls for Python 3.x, Subversion still technically works with Python 2.7. However, Support for Python 2.7 is being phased out. As of 1 January 2020, Python 2.7 has reached end of life. All users are strongly encouraged to move to Python 3. Note: If you are using a Subversion distribution tarball and want to build the Python bindings for Python 2, you should rebuild the build environment in non-release mode by running 'sh autogen.sh' before running the ./configure script; see section II.B for more about autogen.sh. 13. Perl 5.8 or newer (Windows only) (OPTIONAL) To build Subversion under any of the MS Windows platforms, you will also need Perl 5.8 or newer to run apr-util's w32locatedb.pl script. 14. pkg-config (Unix only, OPTIONAL) Subversion uses pkg-config to find appropriate options used at build time. 15. D-Bus (Unix only, OPTIONAL) D-Bus is a message bus system. D-Bus is required for support for KWallet and GNOME Keyring. pkg-config is needed to find D-Bus headers and library. 16. Qt 5 or Qt 4 (Unix only, OPTIONAL) Qt is a cross-platform application framework. QtCore, QtDBus and QtGui modules are required for support for KWallet. pkg-config is needed to find Qt headers and libraries. 17. KDE 5 Framework libraries or KDELibs 4 (Unix only, OPTIONAL) Subversion contains optional support for storing passwords in KWallet. Subversion will look for KF5Wallet, KF5CoreAddons, KF5I18n APIs by default, and needs kf5-config to find them. The KDELibs 4 api is also supported. KDELibs contains core KDE libraries. Subversion uses libkdecore and libkdeui libraries when support for KWallet is enabled. kde4-config is used to get some necessary options. pkg-config, D-Bus and Qt 4 are also required. If you want to build support for KWallet, then pass the '--with-kwallet' option to `configure`. If KDE is installed in a non-standard prefix, then use: --with-kwallet=/path/to/KDE/prefix 18. GLib 2 (Unix only, OPTIONAL) GLib is a general-purpose utility library. GLib is required for support for GNOME Keyring. pkg-config is needed to find GLib headers and library. 19. GNOME Keyring (Unix only, OPTIONAL) Subversion contains optional support for storing passwords in GNOME Keyring. pkg-config is needed to find GNOME Keyring headers and library. D-Bus and GLib are also required. If you want to build support for GNOME Keyring, then pass the '--with-gnome-keyring' option to `configure`. 20. Ctypesgen (OPTIONAL) Ctypesgen is Python wrapper generator for ctypes. It is used to generate a part of Subversion Ctypes Python bindings (CSVN). If you want to build CSVN, then pass the '--with-ctypesgen' option to `configure`. If ctypesgen.py is installed in a non-standard place, then use: --with-ctypesgen=/path/to/ctypesgen.py For more information on CSVN, see subversion/bindings/ctypes-python/README. 21. libmagic (OPTIONAL) Subversion's configure script attempts to find libmagic automatically. If it is installed in a non-standard location, then use: --with-libmagic=/path/to/libmagic/prefix The files include/magic.h and lib/libmagic.so.1.0 (or similar) are expected beneath this prefix directory. If they cannot be found Subversion will be compiled without support for libmagic. If libmagic is installed but support for it should not be compiled in, then use: --with-libmagic=no If configure should fail when libmagic is not present, but only the default locations should be searched, then use: --with-libmagic 22. LZ4 (OPTIONAL) Subversion uses LZ4 compression library version r129 or above. Configure will attempt to locate the system library by default using pkg-config and known paths. If it is installed in a non-standard location, then use: --with-lz4=/path/to/liblz4 If configure should use the version bundled with the sources, use: --with-lz4=internal 23. py3c (OPTIONAL) Subversion uses the Python 3 Compatibility Layer for C Extensions (py3c) library when building the Python language bindings. As py3c is a header-only library, it is needed only to build the bindings, not to use them. Configure will attempt to locate py3c by default using pkg-config and known paths. If it is installed in a non-standard location, then use: --with-py3c=/path/to/py3c/prefix The library can be downloaded from GitHub: https://github.com/encukou/py3c On Unix systems, you can also use the provided get-deps.sh script to download py3c and several other dependencies; see the top of section I.C for more about get-deps.sh. D. Documentation The primary documentation for Subversion is the free book "Version Control with Subversion", a.k.a. "The Subversion Book", obtainable from https://svnbook.red-bean.com/. Various additional documentation exists in the doc/ subdirectory of the Subversion source. See the file doc/README for more information. II. INSTALLATION ============ Subversion support three different build systems: - Autoconf/make, for Unix builds - Visual Studio vcproj, for Windows builds - CMake, for both Unix and Windows The first two have been in use since 2001. Sections A-E below describe the classic build system. The CMake build system was created in 2024 and is still under development. It will be included in Subversion 1.15 and is expected to be the default build system starting with Subversion 1.16. Section F below describes the CMake build system. A. Building from a Tarball ------------------------------ 1. Building from a Tarball Download the most recent distribution tarball from: https://subversion.apache.org/download/ Unpack it, and use the standard GNU procedure to compile: $ ./configure $ make # make install You can also run the full test suite by running 'make check'. Even in successful runs, some tests will report XFAIL; that is normal. Failed runs are indicated by FAIL or XPASS results, or a non-zero exit code from "make check". B. Building the Latest Source under Unix ------------------------------------- These instructions assume you have already installed Subversion and checked out a working copy of Subversion's own code -- either the latest /trunk code, or some branch or tag. You also need to have already installed whatever prerequisites that version of Subversion requires (if you haven't, the ./configure step should complain). You can discard the directory created by the tarball; you're about to build the latest, greatest Subversion client. This is the procedure Subversion developers use. First off, if you have any Subversion libraries lying around from previous 'make installs', clean them up first! # rm -f /usr/local/lib/libsvn* # rm -f /usr/local/lib/libapr* # rm -f /usr/local/lib/libserf* Start the process by running "autogen.sh": $ sh ./autogen.sh This script will make sure you have all the necessary components available to build Subversion. If any are missing, you will be told where to get them from. (See the 'Dependency Overview' in section I.) Note: if the command "autoconf" on your machine does not run autoconf 2.59 or later, but you do have a new enough autoconf available, then you can specify the correct one with the AUTOCONF variable. (The AUTOHEADER variable is similar.) This may be required on Debian GNU/Linux, where "autoconf" is actually a Perl script that attempts to guess which version is required -- because of the interaction between Subversion's and APR's configuration systems, the Perl script may get it wrong. So for example, you might need to do: $ AUTOCONF=autoconf2.59 sh ./autogen.sh Once you've prepared the working copy by running autogen.sh, just follow the usual configuration and build procedure: $ ./configure $ make # make install (Optionally, you might want to pass --enable-maintainer-mode to the ./configure script. This enables debugging symbols in your binaries (among other things) and most Subversion developers use it.) Since the resulting binary depends on shared libraries, the destination library directory must be identified in your operating system's library search path. That is in either /etc/ld.so.conf or $LD_LIBRARY_PATH for Linux systems and in /etc/rc.conf for FreeBSD, followed by a run of the 'ldconfig' program. Check your system documentation for details. By identifying the destination directory, Subversion will be able to dynamically load repository access plugins. If you try to do a checkout and see an error like: subversion/libsvn_ra/ra_loader.c:209: (apr_err=170000) svn: Unrecognized URL scheme 'https://svn.apache.org/repos/asf/subversion/trunk' It probably means that the dynamic loader/linker can't find all of the libsvn_* libraries. C. Building under Unix in Different Directories -------------------------------------------- It is possible to configure and build Subversion on Unix in a directory other than the working copy. For example $ svn co https://svn.apache.org/repos/asf/subversion/trunk svn $ cd svn $ # get SQLite amalgamation if required $ chmod +x autogen.sh $ ./autogen.sh $ mkdir ../obj $ cd ../obj $ ../svn/configure [...with options as appropriate...] $ make puts the Subversion working copy in the directory svn and builds it in a separate, parallel directory obj. Why would you want to do this? Well there are a number of reasons... * You may prefer to avoid "polluting" the working copy with files generated during the build. * You may want to put the build directory and the working copy on different physical disks to improve performance. * You may want to separate source and object code and only backup the source. * You may want to remote mount the working copy on multiple machines, and build for different machines from the same working copy. * You may want to build multiple configurations from the same working copy. The last reason above is possibly the most useful. For instance you can have separate debug and optimized builds each using the same working copy. Or you may want a client-only build and a client-server build. Using multiple build directories you can rebuild any or all configurations after an edit without the need to either clean and reconfigure, or identify and copy changes into another working copy. D. Installing from a Zip or Installer File under Windows ----------------------------------------------------- Of all the ways of getting a Subversion client, this is the easiest. Download a Zip or self-extracting installer via: https://subversion.apache.org/packages.html#windows For a Zip file extract the DLLs and EXEs to a directory of your choice. Included in the download are among other tools the SVN client, the SVNADMIN administration tool and the SVNLOOK reporting tool. You may want to add the bin directory in the Subversion folder to your PATH environment variable so as to not have to use the full path when running Subversion commands. To test the installation, open a DOS box (run either "cmd" or "command" from the Start menu's "Run..." menu option), change to the directory you installed the executables into, and run: C:\test>svn co https://svn.apache.org/repos/asf/subversion/trunk svn This will get the latest Subversion sources and put them into the "svn" subdirectory. If using a self-extracting .exe file, just run it instead of unzipping it, to install Subversion. E. Building the Latest Source under Windows ---------------------------------------- E.1 Prerequisites * Microsoft Visual Studio. Any recent (2005+) version containing the Visual C++ component will work (E.g. Professional, Express, Community Edition). Make sure you enable C++ support during setup. * Python 2.7 or higher, downloaded from https://www.python.org/ which is used to generate the project files. * Perl 5.8 or higher from https://www.perl.org/get.html * Awk is needed to compile Apache. Source code is available in tools\dev\awk, run the buildwin.bat program to compile. * Apache apr, apr-util, and optionally apr-iconv libraries, version 1.4 or later (1.2 for apr-iconv). If you are building from a Subversion checkout and have not downloaded Apache 2, then get these 3 libraries from https://www.apache.org/dist/apr/. * SQLite 3.24.0 or higher from https://www.sqlite.org/download.html (3.39.4 or higher recommended) * ZLib 1.2 or higher is required and can be obtained from http://www.zlib.net/ * Either a Subversion client binary from https://subversion.apache.org/packages.html to do the initial checkout of the Subversion source or the zip file source distribution. Additional Options * [Optional] Apache Httpd 2 source, downloaded from https://httpd.apache.org/download.cgi, these instructions assume version 2.0.58. This is only needed for building the Subversion server Apache modules. ### FIXME Apache 2.2 or greater required. * [Optional] Berkeley DB for backend support of the server components are available from http://www.oracle.com/technetwork/database/database-technologies/berkeleydb/downloads/index-082944.html (Version 4.4.20 or in specific cases some higher version recommended) For more information see Section I.C.9. * [Optional] Openssl can be obtained from https://www.openssl.org/source/ * [Optional] NASM can be obtained from http://www.nasm.us/ * [Optional] A modified version of GNU libintl, called svn-win32-libintl.zip, can be used for displaying localized messages. Available at: http://subversion.tigris.org/servlets/ProjectDocumentList?folderID=2627 * [Optional] GNU gettext for generating message catalog (.mo) files from message translations. You can get the latest binaries from http://gnuwin32.sourceforge.net/. You'll need the binaries (gettext-0.14.1-bin.zip) and dependencies (gettext-0.14.1-dep.zip). E.2 Notes The Apache Serf library supports secure connections with OpenSSL and on-the-wire compression with zlib. If you want to use the secure connections feature, you should pass the option "--with-openssl" to the gen-make.py script. See Section I.C.7 for more details. E.3 Preparation This section describes how to unpack the files to make a build tree. * Make a directory SVN and cd into it. * Either checkout Subversion: svn co https://svn.apache.org/repos/asf/subversion/trunk src-trunk or unpack the zip file distribution and rename the directory to src-trunk. * Install Visual Studio Environment. You either have to tell the installer to register environment variables or run VCVARS32.BAT before building anything. If you are using a newer Visual Studio, use the 'Visual Studio 20xx Command Prompt' on the Start menu. * Install Python and add it to your path * Install Perl (it should add itself to the path) ### Subversion doesn't need perl. Only some dependencies need it (OpenSSL and some apr scripts) * Copy AWK (awk95.exe) to awk.exe (e.g. SVN\awk\awk.exe) and add the directory containing it (e.g. SVN\awk) to the path. ### Subversion doesn't need awk. Only some dependencies need it (some apr scripts) * [Optional] Install NASM and add it to your path ### Subversion doesn't need NASM. Only some dependencies need it optionally (OpenSSL) * [Optional] If you checked out Subversion from the repository and want to build Subversion with http/https access support then install the Apache Serf sources into SVN\src-trunk\serf. * [Optional] If you want BDB backend support, extract the Berkeley DB files into SVN\src-trunk\db4-win32. It's a good idea to add SVN\src-trunk\db4-win32\bin to your PATH, so that Subversion can find the Berkeley DB DLLs. [NOTE: This binary package of Berkeley DB is provided for convenience only. Please don't address questions about Berkeley DB that aren't directly related to using Subversion to the project mailing list.] If you build Berkeley DB from the source, you will have to copy the file db-x.x.x\build_win32\db.h to SVN\src-trunk\db4-win32\include, and all the import libraries to SVN\src-trunk\db4-win32\lib. Again, the DLLs should be somewhere in your path. ### Just use --with-serf instead of the hardcoded path * [Optional] If you want to build the server modules, extract Apache source into SVN\httpd-2.x.x. * If you are building from a checkout of Subversion, and you are NOT building Apache, then you will need the APR libraries. Depending on how you got your version of APR, either: - Extract the APR, APR-util and APR-iconv source distributions into SVN\apr, SVN\apr-util, and SVN\apr-iconv respectively. Or: - Extract the apr, apr-util and apr-iconv directories from the srclib folder in the Apache httpd source into SVN\apr, SVN\apr-util, and SVN\apr-iconv respectively. ### Just use --with-apr, etc. instead of the hardcoded paths * Extract the ZLib sources into SVN\zlib if you are not using the zlib included in the dependencies zip file. ### Just use --with-zlib instead of the hardcoded path * [Optional] If you want secure connection (https) client support extract OpenSSL into SVN\openssl ### And pass the path to both serf and gen-make.py * [Optional] If you want localized message support, extract svn-win32-libintl.zip into SVN\svn-win32-libintl and extract gettext-x.x.x-bin.zip and gettext-x.x.x-dep.zip into SVN\gettext-x.x.x-bin. Add SVN\gettext-x.x.x-bin\bin to your path. * Download the SQLite amalgamation from https://www.sqlite.org/download.html and extract it into SVN\sqlite-amalgamation. See I.C.12 for alternatives to using the amalgamation package. E.4 Building the Binaries To build the binaries either follow these instructions. Start in the SVN directory you created. Set up the environment (commands should be one line even if wrapped here). C:>set VER=trunk C:>set DIR=trunk C:>set BUILD_ROOT=C:\SVN C:>set PYTHONDIR=C:\Python27 C:>set AWKDIR=C:\SVN\Awk C:>set ASMDIR=C:\SVN\asm C:>set SDKINC="C:\Program Files\Microsoft SDK\include" C:>set SDKLIB="C:\Program Files\Microsoft SDK\lib" C:>set GETTEXTBIN=C:\SVN\gettext-0.14.1-bin\bin C:>PATH=%PATH%;%BUILD_ROOT%\src-%DIR%\db4-win32;%ASMDIR%; %PYTHONDIR%;%AWKDIR%;%GETTEXTBIN% C:>set INCLUDE=%SDKINC%;%INCLUDE% C:>set LIB=%SDKLIB%;%LIB% OpenSSL < 1.1.0 C:>cd openssl C:>perl Configure VC-WIN32 [*] C:>call ms\do_masm C:>nmake -f ms\ntdll.mak C:>cd out32dll C:>call ..\ms\test C:>cd ..\.. *Note: Use "call ms\do_nasm" if you have nasm instead of MASM, or "call ms\do_ms" if you don't have an assembler. Also if you are using OpenSSL >= 1.0.0 masm is no longer supported. You will have to use do_nasm or do_ms in this case. OpenSSL >= 1.1.0 C:>cd openssl C:>perl Configure VC-WIN32 C:>nmake C:>nmake test C:>cd .. Apache 2 This step is only required for building the server dso modules. ### FIXME Apache 2.2 or greater required. Old build instructions for VC6. C:>set APACHEDIR=C:\Program Files\Apache Group\Apache2 C:>msdev httpd-2.0.58\apache.dsw /MAKE "BuildBin - Win32 Release" APR If you downloaded APR / APR-UTIL / APR_ICONV by source, you will have to build these libraries first. Building these libraries on Windows is straight forward and in most cases as simple as issuing these two commands: C:>nmake -f Makefile.win C:>nmake -f Makefile.win install Please refer to the build instructions provided by the library source for actual build instructions. ZLib If you downloaded the zlib source, you will have to build ZLib first. Building ZLib using Visual Studio should be quite simple. Just open the appropriate solution and build the project zlibstat using the IDE. Please refer to the build instructions provided by the library source for actual build instructions. Note that you'd make sure to define ZLIB_WINAPI in the ZLib config header and move the lib-file into the zlib root-directory. Please note that you MUST NOT build ZLib with the included assembler optimized code. It is known to be buggy, see for example the discussion https://svn.haxx.se/dev/archive-2013-10/0109.shtml. This means that you must not define ASMV or ASMINF. Note that the VS projects in contrib\visualstudio define these in the Debug configuration. Apache Serf ### Section about Apache Serf might be required/useful to add. ### scons is required too and Apache Serf needs to be configured prior to ### be able to build Subversion using: ### scons APR=[PATH_TO_APR] APU=[PATH_TO_APU] OPENSSL=[PATH_TO_OPENSSL] ### ZLIB=[PATH_TO_ZLIB] PREFIX=[PATH_TO_SERF_DEST] ### scons check ### scons install Subversion Things to note: * If you don't want to build mod_dav_svn, omit the --with-httpd option. The zip file source distribution contains apr, apr-util and apr-iconv in the default build location. If you have downloaded the apr files yourself you will have to tell the generator where to find the APR libraries; the options are --with-apr, --with-apr-util and --with-apr-iconv. * If you would like a debug build substitute Debug for Release in the msbuild command. * There have been rumors that Subversion on Win32 can be built using the latest cygwin, you probably don't want the zip file source distribution though. ymmv. * You will also have to distribute the C runtime dll with the binaries. Also, since Apache/APR do not provide .vcproj files, you will need to convert the Apache/APR .dsp files to .vcproj files with Visual Studio before building -- just open the Apache .dsw file and answer 'Yes To All' when the conversion dialog pops up, or you can open the individual .dsp files and convert them one at a time. The Apache/APR projects required by Subversion are: apr-util\libaprutil.dsp, apr\libapr.dsp, apr-iconv\libapriconv.dsp, apr-util\xml\expat\lib\xml.dsp, apr-iconv\ccs\libapriconv_ccs_modules.dsp, and apr-iconv\ces\libapriconv_ces_modules.dsp. * If the server dso modules are being built and tested Apache must not be running or the copy of the dso modules will fail. C:>cd src-%DIR% If Apache 2 has been built and the server modules are required then gen-make.py will already have been run. If the source is from the zip file, Apache 2 has not been built so gen-make.py must be run: C:>python gen-make.py --vsnet-version=20xx --with-berkeley-db=db4-win32 --with-openssl=..\openssl --with-zlib=..\zlib --with-libintl=..\svn-win32-libintl Then build subversion: C:>msbuild subversion_vcnet.sln /t:__MORE__ /p:Configuration=Release C:>cd .. The binaries have now been built. E.5 Packaging the binaries You now need to copy the binaries ready to make the release zip file. You also need to do this to run the tests as the new binaries need to be in your path. You can use the build/win32/make_dist.py script in the Subversion source directory to do that. [TBD: Describe how to do this. Note dependencies on zip, jar, doxygen.] E.6 Testing the Binaries [TBD: It's been a long, long while since it was necessary to move binaries around for testing. win-tests.py does that automagically. Fix this section accordingly, and probably reorder, putting the packaging at the end.] The build process creates the binary test programs but it does not copy the client tests into the release test area. C:>cd src-%DIR% C:>mkdir Release\subversion\tests\cmdline C:>xcopy /S /Y subversion\tests\cmdline Release\subversion\tests\cmdline If the server dso modules have been built then copy the dso files and dlls into the Apache modules directory. C:>copy Release\subversion\mod_dav_svn\mod_dav_svn.so "%APACHEDIR%"\modules C:>copy Release\subversion\mod_authz_svn\mod_authz_svn.so "%APACHEDIR%"\modules C:>copy svn-win32-%VER%\bin\intl.dll "%APACHEDIR%\bin" C:>copy svn-win32-%VER%\bin\iconv.dll "%APACHEDIR%\bin" C:>copy svn-win32-%VER%\bin\libdb42.dll "%APACHEDIR%\bin" C:>cd .. Put the svn-win32-trunk\bin directory at the start of your path so you run the newly built binaries and not another version you might have installed. Then run the client tests: C:>PATH=%BUILD_ROOT%\svn-win32-%VER%\bin;%PATH% C:>cd src-%DIR% C:>python win-tests.py -c -r -v If the server dso modules were built configure Apache to use the mod_dav_svn and mod_authz_svn modules by making sure these lines appear uncommented in httpd.conf: LoadModule dav_module modules/mod_dav.so LoadModule dav_fs_module modules/mod_dav_fs.so LoadModule dav_svn_module modules/mod_dav_svn.so LoadModule authz_svn_module modules/mod_authz_svn.so And further down the file add location directives to point to the test repositories. Change the paths to the SVN directory you created (paths should be on one line even if wrapped here): <Location /svn-test-work/repositories> DAV svn SVNParentPath C:/SVN/src-trunk/Release/subversion/tests/cmdline/ svn-test-work/repositories </Location> <Location /svn-test-work/local_tmp/repos> DAV svn SVNPath c:/SVN/src-trunk/Release/subversion/tests/cmdline/ svn-test-work/local_tmp/repos </Location> Then restart Apache and run the tests: C:>python win-tests.py -c -r -v -u http://localhost C:>cd .. F. Building using CMake -------------------- Get the sources, either a release tarball or by checking out the official repository. The CMake build system currently only exists in /trunk and it will be included in the 1.15 release. The process for building on Unix and Windows is the same. $ python gen-make.py -t cmake $ cmake -B out [build options] $ cmake --build out "out" in the commands above is the build directory used by CMake. Build options can be added, for example: $ cmake -B out -DCMAKE_INSTALL_PREFIX=/usr/local/subversion -DSVN_ENABLE_RA_SERF=ON Build options can be listed using: $ cmake -LH Windows tricks: - Modern versions of Microsoft Visual Studio provide support for CMake projects out-of-box, including intellisense, integrated options editor, test explorer, and more. In order to use it for Subversion, open the source directory with Visual Studio, and the configuration should start automatically. For editing the cache (options), do right-click to the CMakeLists.txt file and clicking `CMake Settings for Subversion` will open the editor. After the required settings are configured, hit `F7` in order to build. For more info, check the article bellow: https://learn.microsoft.com/en-us/cpp/build/cmake-projects-in-visual-studio - There is a useful tool for bootstrapping the dependencies, vcpkg. It provides ports for the most of the Subversion's dependencies, which then could be installed via a single command. To start using it, download the registry from GitHub, bootstrap vcpkg, and install the dependencies: $ git clone https://github.com/microsoft/vcpkg $ cd vcpkg && .\bootstrap-vcpkg.bat -disableMetrics $ .\vcpkg install apr apr-util expat zlib sqlite3 [any other dependency] After this is done, vcpkg can be integrated into CMake by passing the vcpkg toolchain to CMAKE_TOOLCHAIN_FILE option. In order to do it with Visual Studio, open the CMake cache editor as explained in the previous step, and put the following into `CMake toolchain file` field, where VCPKG_ROOT is the path to vcpkg registry: <VCPKG_ROOT>/scripts/buildsystems/vcpkg.cmake III. BUILDING A SUBVERSION SERVER ============================ Subversion has two servers you can choose from: svnserve and Apache. svnserve is a small, lightweight server program that is automatically compiled when you build Subversion's source. Apache is a more heavyweight HTTP server, but tends to have more features. This section primarily focuses on how to build Apache and the accompanying mod_dav_svn server module for it. If you plan to use svnserve instead, jump right to section E for a quick explanation. A. Setting Up Apache Httpd ----------------------- 1. Obtaining and Installing Apache Httpd 2 Subversion tries to compile against the latest released version of Apache httpd 2.2+. The easiest thing for you to do is download a source tarball of the latest release and unpack that. If you have questions about the Apache httpd 2.2 build, please consult the httpd install documentation: https://httpd.apache.org/docs-2.2/install.html At the top of the httpd tree: $ ./buildconf $ ./configure --enable-dav --enable-so --enable-maintainer-mode The first arg says to build mod_dav. The second arg says to enable shared module support which is needed for a typical compile of mod_dav_svn (see below). The third arg says to include debugging information. If you built Subversion with --enable-maintainer-mode, then you should do the same for Apache; there can be problems if one was compiled with debugging and the other without. Note: if you have multiple db versions installed on your system, Apache might link to a different one than Subversion, causing failures when accessing the repository through Apache. To prevent this from happening, you have to tell Apache which db version to use and where to find db. Add --with-dbm=db4 and --with-berkeley-db=/usr/local/BerkeleyDB.4.2 to the configure line. Make sure this is the same db as the one Subversion uses. This note assumes you have installed Berkeley DB 4.2.52 at its default locations. For more info about the db requirement, see section I.C.9. You may also want to include other modules in your build. Add --enable-ssl to turn on SSL support, and --enable-deflate to turn on compression support, for example. Consult the Apache documentation for more details. All instructions below assume you configured Apache to install in its default location, /usr/local/apache2/; substitute appropriately if you chose some other location. Compile and install apache: $ make && make install B. Making and Installing the Subversion Apache Server Module --------------------------------------------------------- Go back into your subversion working copy and run ./autogen.sh if you need to. Then, assuming Apache httpd 2.2 is installed in the standard location, run: $ ./configure Note: do *not* configure subversion with "--disable-shared"! mod_dav_svn *must* be built as a shared library, and it will look for other libsvn_*.so libraries on your system. If you see a warning message that the build of mod_dav_svn is being skipped, this may be because you have Apache httpd 2.x installed in a non-standard location. You can use the "--with-apxs=" option to locate the apxs script: $ ./configure --with-apxs=/usr/local/apache2/bin/apxs Note: it *is* possible to build mod_dav_svn as a static library and link it directly into Apache. Possible, but painful. Stick with the shared library for now; if you can't, then ask. $ rm /usr/local/lib/libsvn* If you have old subversion libraries sitting on your system, libtool will link them instead of the `fresh' ones in your tree. Remove them before building subversion. $ make clean && make && make install After the make install, the Subversion shared libraries are in /usr/local/lib/. mod_dav_svn.so should be installed in /usr/local/libexec/ (or elsewhere, such as /usr/local/apache2/modules/, if you passed --with-apache-libexecdir to configure). Section II.E explains how to build the server on Windows. C. Configuring Apache Httpd for Subversion --------------------------------------- The following section is an abbreviated version of the information in the Subversion Book (https://svnbook.red-bean.com). Please read chapter 6 for more details. The following assumes you have already created a repository. For documentation on how to do that, see README. The following also assumes that you have modified /usr/local/apache2/conf/httpd.conf to reflect your setup. At a minimum you should look at the User, Group and ServerName directives. Full details on setting up apache can be found at: https://httpd.apache.org/docs-2.2/ First, your httpd.conf needs to load the mod_dav_svn module. If you pass --enable-mod-activation to Subversion's configure, 'make install' target should automatically add this line for you. In any case, if Apache HTTPD gives you an error like "Unknown DAV provider: svn", then you may want to verify that this line exists in your httpd.conf: LoadModule dav_svn_module modules/mod_dav_svn.so NOTE: if you built mod_dav as a dynamic module as well, make sure the above line appears after the one that loads mod_dav.so. Next, add this to the *bottom* of your httpd.conf: <Location /svn/repos> DAV svn SVNPath /absolute/path/to/repository </Location> This will give anyone unrestricted access to the repository. If you want limited access, read or write, you add these lines to the Location block: AuthType Basic AuthName "Subversion repository" AuthUserFile /my/svn/user/passwd/file And: a) For a read/write restricted repository: Require valid-user b) For a write restricted repository: <LimitExcept GET PROPFIND OPTIONS REPORT> Require valid-user </LimitExcept> c) For separate restricted read and write access: AuthGroupFile /my/svn/group/file <LimitExcept GET PROPFIND OPTIONS REPORT> Require group svn_committers </LimitExcept> <Limit GET PROPFIND OPTIONS REPORT> Require group svn_committers Require group svn_readers </Limit> ### FIXME Tutorials section refers to old 2.0 docs These are only a few simple examples. For a complete tutorial on Apache access control, please consider taking a look at the tutorials found under "Security" on the following page: https://httpd.apache.org/docs-2.0/misc/tutorials.html In order for 'svn cp' to work (which is actually implemented as a DAV COPY command), mod_dav needs to be able to determine the hostname of the server. A standard way of doing this is to use Apache's ServerName directive to set the server's hostname. Edit your /usr/local/apache2/conf/httpd.conf to include: ServerName svn.myserver.org If you are using virtual hosting through Apache's NameVirtualHost directive, you may need to use the ServerAlias directive to specify additional names that your server is known by. If you have configured mod_deflate to be in the server, you can enable compression support for your repository by adding the following line to your Location block: SetOutputFilter DEFLATE NOTE: If you are unfamiliar with an Apache directive, or not exactly sure about what it does, don't hesitate to look it up in the documentation: https://httpd.apache.org/docs-2.2/mod/directives.html. NOTE: Make sure that the user 'nobody' (or whatever UID the httpd process runs as) has permission to read and write the Berkeley DB files! This is a very common problem. D. Running and Testing ------------------- Fire up apache 2: $ /usr/local/apache2/bin/apachectl stop $ /usr/local/apache2/bin/apachectl start Check /usr/local/apache2/logs/error_log to make sure it started up okay. Try doing a network checkout from the repository: $ svn co http://localhost/svn/repos wc The most common reason this might fail is permission problems reading the repository db files. If the checkout fails, make sure that the httpd process has permission to read and write to the repository. You can see all of mod_dav_svn's complaints in the Apache error logfile, /usr/local/apache2/logs/error_log. To run the regression test suite for networked Subversion, see the instructions in subversion/tests/cmdline/README. For advice about tracing problems, see "Debugging the server" in https://subversion.apache.org/docs/community-guide/. E. Alternative: 'svnserve' and ra_svn ----------------------------------- An alternative network layer is libsvn_ra_svn (on the client side) and the 'svnserve' process on the server. This is a simple network layer that speaks a custom protocol over plain TCP (documented in libsvn_ra_svn/protocol): $ svnserve -d # becomes a background daemon $ svn checkout svn://localhost/usr/local/svn/repository You can use the "-r" option to svnserve to set a logical root for repositories, and the "-R" option to restrict connections to read-only access. ("Read-only" is a logical term here; svnserve still needs write access to the database in this mode, but will not allow commits or revprop changes.) 'svnserve' has built-in CRAM-MD5 authentication (so you can use non-system accounts), and can also be tunneled over SSH (so you can use existing system accounts). It's also capable of using Cyrus SASL if libsasl2 is detected at ./configure time. Please read chapter 6 in the Subversion Book (https://svnbook.red-bean.com) for details on these features. IV. PROGRAMMING LANGUAGE BINDINGS (PYTHON, PERL, RUBY, JAVA) ======================================================== For Python, Perl and Ruby bindings, see the file ./subversion/bindings/swig/INSTALL For Java bindings, see the file ./subversion/bindings/javahl/README
06-24
评论
成就一亿技术人!
拼手气红包6.0元
还能输入1000个字符
 
红包 添加红包
表情包 插入表情
 条评论被折叠 查看
添加红包

请填写红包祝福语或标题

红包个数最小为10个

红包金额最低5元

当前余额3.43前往充值 >
需支付:10.00
成就一亿技术人!
领取后你会自动成为博主和红包主的粉丝 规则
hope_wisdom
发出的红包

打赏作者

夏 克

你的鼓励将是我创作的最大动力

¥1 ¥2 ¥4 ¥6 ¥10 ¥20
扫码支付:¥1
获取中
扫码支付

您的余额不足,请更换扫码支付或充值

打赏作者

实付
使用余额支付
点击重新获取
扫码支付
钱包余额 0

抵扣说明:

1.余额是钱包充值的虚拟货币,按照1:1的比例进行支付金额的抵扣。
2.余额无法直接购买下载,可以购买VIP、付费专栏及课程。

余额充值