The Inherent Flaws of SOAP over HTTP - Where HTTP Fails SOAP


This post focuses on the fact that HTTP itself has no request identifier mechanism, so it cannot support connection sharing under high concurrency, which limits its use in critical domains.
The article suggests that, when using SOAP, other protocols such as IIOP, MQ, or JMS can be used alongside HTTP. This preserves interoperability while still meeting the needs of enterprise-class applications, and it is also the core idea of the ESB (Enterprise Service Bus). One way to judge whether a product is a true ESB is whether it supports multiple transport protocols; the breadth of transport support directly reflects the grade of an ESB product.


Web services allow for the delivery of SOAP messages over any protocol. A common misconception is that all SOAP messages must be transmitted over HTTP. While that approach is useful in many cases, there are situations where it makes sense to use alternatives. This paper investigates situations where HTTP does not scale sufficiently for enterprise Web service deployments and looks at available alternatives.

HTTP and Scalability

HTTP was designed for serving Web pages under the assumption that the protocol would only be required to send a request and receive a response. This paradigm has worked very well for the World Wide Web and has been ubiquitously accepted as its standard protocol. When a person makes a request on an interactive Web site, they are typically interacting with an application server tier (J2EE, .NET, or scripting languages such as Python and Perl). Each person is running a browser client and doing only one thing at a time on that particular site. In a typical enterprise, however, an application server fronts a number of back-office systems that provide critical business services. The application server usually supports a number of users concurrently. This implies that it typically needs to make a number of concurrent requests on those back-office systems.

Many people believe that a move to service-oriented architecture (SOA) implies a move to SOAP/HTTP as the ubiquitous protocol throughout the enterprise. What few seem to realize, however, is that the SOAP/HTTP approach has inherent scalability limitations under certain circumstances. Simple Web browsers have been the de facto HTTP client to date, and they are in essence single-threaded clients as far as the server is concerned, making only one request at a time over a given connection. This has created a perception that HTTP can be scaled as needed. To date, it has been scaled only for communication between browsers and application/Web servers, typically through clustering, replication, and the use of hardware load balancers. Unfortunately, scaling communications between an application server and back-office services cannot be solved satisfactorily using the same techniques.

For example, assume we have an online banking system with support for up to 4000 concurrent users. The Web tier comprises a cluster of application server instances behind a hardware HTTP load balancer. In order to fulfill the online banking business function, there are three Unix-hosted services and a mainframe-hosted service utilized by the application server. In a world where SOAP/HTTP is the only protocol, the application server will have to support an incoming connection from the browser, and one additional connection out to each of the four back-office services for each concurrent user. This is because HTTP demands that you wait for a response before you send your next request over that same connection. It has no concept of a request identifier, which is a core requirement to enable connection sharing.

One could of course just serialize the requests, awaiting each response before sending the next request on each given connection. However, this is a waste of resources because the back-office server is not doing anything with this connection until the response is sent back to the client application, and most back-office systems have the capacity to handle multiple concurrent requests.

Interleaving requests over a single connection would be the ideal. It would allow an enterprise to achieve the same level of concurrency while using fewer resources. One would send a number of requests to a server over a single connection and receive responses as they become available. The client can correlate responses based on a request identifier. This would allow responses to be returned as soon as they are ready (which may differ from the order in which requests were sent). Unfortunately, the HTTP specification forbids such interleaving.
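
To make the correlation idea concrete, here is a minimal, protocol-agnostic sketch in Java (the framing of requests onto the wire is left as a stub, and all names are illustrative): a single shared connection assigns each outgoing request an identifier, and pending callers are completed by whichever response carries that identifier, in whatever order responses arrive.

```java
import java.util.Map;
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicLong;

// Illustrative sketch: one shared connection multiplexing many in-flight requests.
// Wire-level framing is omitted; the point is correlation by request identifier.
class MultiplexedConnection {
    private final AtomicLong nextId = new AtomicLong(1);
    private final Map<Long, CompletableFuture<String>> pending = new ConcurrentHashMap<>();

    // Caller threads send concurrently; none of them blocks the connection for the others.
    public CompletableFuture<String> send(String requestBody) {
        long id = nextId.getAndIncrement();
        CompletableFuture<String> reply = new CompletableFuture<>();
        pending.put(id, reply);
        writeFrame(id, requestBody);   // push the framed request onto the single connection
        return reply;
    }

    // A single reader thread calls this as responses arrive, in whatever order the server
    // produced them; the request identifier routes each response back to its caller.
    void onResponse(long id, String responseBody) {
        CompletableFuture<String> reply = pending.remove(id);
        if (reply != null) {
            reply.complete(responseBody);
        }
    }

    private void writeFrame(long id, String body) {
        /* serialize (id, body) onto the shared connection -- transport-specific stub */
    }
}
```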

The obvious conclusion is that a standards-based protocol that allows for request interleaving is needed. This would allow the sharing of a single connection between the application server and each back-office system. In the example previously outlined, if the application server has an upper limit of 1000 connections it can hold open at once (its file descriptor limit), then in our SOAP/HTTP world each application server is limited to concurrently supporting only 1000/5, or 200, clients. A typical workaround for this problem is to add application servers. If enough are added to support 1000 clients, the problem propagates into the back-office servers, which are now maxed out on the number of connections they can keep open. Creating pools of back-office server instances is prohibitively expensive, especially if they are hosted on a mainframe.
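
The arithmetic behind that figure is simple enough to state directly; the small calculation below just restates the numbers assumed in this example.

```java
public class ConnectionMath {
    public static void main(String[] args) {
        int fdLimit = 1000;          // connections one application server can hold open
        int backOfficeServices = 4;  // three Unix-hosted services plus one mainframe service
        int perClient = 1 + backOfficeServices; // one browser connection plus one per service

        int clientsPerServer = fdLimit / perClient;
        System.out.println("Clients per application server: " + clientsPerServer);      // 200
        System.out.println("Servers needed for 4000 users:  " + 4000 / clientsPerServer); // 20
    }
}
```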

This problem has been solved in the past with connection concentrators, but because we cannot interleave HTTP POST requests, HTTP-based communication cannot be concentrated. Clearly, HTTP is not capable of scaling in such an environment.

HTTP 1.1 supports a feature known as "request pipelining." Pipelining allows a client to send multiple requests over a given connection without having to wait for each response. However, it is not as useful as interleaving, as the HTTP/1.1 specification (see the first entry in the References section) mandates: "A client that supports persistent connections MAY "pipeline" its requests (i.e., send multiple requests without waiting for each response). A server MUST send its responses to those requests in the same order that the requests were received."

Pipelining was designed to streamline the downloading of elements within Web pages over the Internet, supporting only HTTP request types that may be reissued without any change to the server state (idempotent requests). The HTTP/1.1 specification is very clear about this point: "Clients SHOULD NOT pipeline requests using non-idempotent methods or non-idempotent sequences of methods (see section 9.1.2)."

Web services, on the other hand, typically use the HTTP POST request type, which can be non-idempotent. Therefore, pipelining cannot be used.
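
For illustration only, pipelining can be observed with nothing more than a raw socket: several idempotent GET requests are written back-to-back, and the server's responses come back strictly in request order. The host below is a placeholder. It is exactly this in-order constraint, combined with the restriction to idempotent methods, that rules pipelining out for SOAP over HTTP POST.

```java
import java.io.*;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

// Minimal pipelining demonstration: two GETs written without waiting for the first
// response. HTTP/1.1 obliges the server to answer them in the order received.
public class PipelineDemo {
    public static void main(String[] args) throws IOException {
        try (Socket socket = new Socket("www.example.com", 80)) {   // placeholder host
            OutputStream out = socket.getOutputStream();
            String requests =
                "GET /a.html HTTP/1.1\r\nHost: www.example.com\r\n\r\n" +
                "GET /b.html HTTP/1.1\r\nHost: www.example.com\r\nConnection: close\r\n\r\n";
            out.write(requests.getBytes(StandardCharsets.US_ASCII));
            out.flush();

            // Responses arrive concatenated, first /a.html then /b.html -- never interleaved.
            BufferedReader in = new BufferedReader(
                new InputStreamReader(socket.getInputStream(), StandardCharsets.US_ASCII));
            for (String line; (line = in.readLine()) != null; ) {
                System.out.println(line);
            }
        }
    }
}
```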

Roy T. Fielding (the primary architect of the HTTP/1.1 protocol) spoke at ApacheCon in November 2002 about a new protocol he is working on, called "Waka." In his presentation (see the second entry in the References section) he described Waka as "…a new protocol to solve HTTP's current problems in a generic way."

He went on to mention support for interleaved data and metadata delivery. Waka has not yet been fully specified, so the details on how Waka intends to support interleaving are not yet available. You can track the progress of Waka at the project Web site (see the third entry in the References section). At this time there are no implementations of Waka available.

Scaling Web Services (SOAP) in the Back Office
Clearly, a protocol is needed that allows the interleaving of requests over a single connection. HTTP could be extended to support request identifiers, but modifications to this standard have taken years to be accepted because of the sheer number of deployments. Solving this problem within the bounds of a SOAP-based specification - WS-ReliableMessaging, for example - will always be subject to the limitations that HTTP imposes. A variety of alternatives to HTTP exist. Some are described below.

MQSeries

MQSeries is a widely deployed enterprise messaging system from IBM. It has been in production for many years and has proven its robustness and scalability in enterprise deployments. It has traditionally been used in single-threaded applications (based on the age of many deployments), but there is no reason why an application could not have multiple threads posting to the same queue, even before responses are read back from a reply queue. This would solve the problem described, but MQSeries is proprietary and expensive. It is much better suited to asynchronous communication, and demands a pair of queues for pseudo-synchronous communication.

JMS
JMS is a standard messaging interface designed as part of the J2EE specification. It does not specify any details about wire-level implementation, so two separate JMS implementations are unlikely to interoperate. With a JMS-based solution, therefore, all communication must take place using the same implementation; however, the API is widely accepted and adopted, and it's very friendly to the J2EE domain. Just like MQSeries, JMS is asynchronous, thereby allowing interleaving of requests. JMS is completely Java-centric, but many of the core back-office systems in production today are not, which means that successful integration of these systems with JMS can prove to be a challenge.
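
As a sketch of how interleaving falls out of the messaging model, the standard JMS request/reply idiom below sends a request to a queue, uses the provider-assigned message ID as a correlation identifier, and waits on a reply queue only for the matching response. The connection factory and queue names are illustrative. The same pair-of-queues pattern is what MQSeries demands for pseudo-synchronous communication, and a JMS provider is frequently backed by MQ. Many such exchanges can be in flight over the same queues at once.

```java
import javax.jms.*;

// Sketch of pseudo-synchronous request/reply over a pair of queues, correlated by
// message ID. The ConnectionFactory and queue names are illustrative placeholders.
public class JmsRequestReply {
    public static String call(ConnectionFactory factory, String soapEnvelope) throws JMSException {
        Connection connection = factory.createConnection();
        try {
            connection.start();
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            Queue requestQueue = session.createQueue("BACKOFFICE.REQUEST");
            Queue replyQueue   = session.createQueue("BACKOFFICE.REPLY");

            // Send the request; the provider-assigned message ID becomes the correlation key.
            TextMessage request = session.createTextMessage(soapEnvelope);
            request.setJMSReplyTo(replyQueue);
            session.createProducer(requestQueue).send(request);

            // Receive only the reply whose correlation ID matches this request; replies
            // belonging to other in-flight requests on the same queue are left untouched.
            String selector = "JMSCorrelationID = '" + request.getJMSMessageID() + "'";
            MessageConsumer consumer = session.createConsumer(replyQueue, selector);
            TextMessage reply = (TextMessage) consumer.receive(30000);
            return reply == null ? null : reply.getText();
        } finally {
            connection.close();
        }
    }
}
```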

IIOP
The Object Management Group (OMG) adopted the Internet Inter-ORB Protocol (IIOP) as part of the CORBA 2.0 specification. A number of groups have adopted IIOP as their standard protocol, not least of which was Sun Microsystems, who adopted it as the standard protocol for Java RMI. Given its CORBA heritage, a number of IIOP implementations exist in a variety of languages. The variety of available IIOP implementations covers the range of commercial, free, and open source software. Most implementations of IIOP have matured to a point where they interoperate seamlessly with each other, and IIOP has proven itself in some of the most demanding environments such as telecommunications provisioning and network management. IIOP offers support for multiple qualities of service, including optimal delivery of large messages over TCP/IP (which would be ideal for SOAP). Just like HTTP it natively supports a request-reply paradigm, but in addition it allows for the interleaving of requests, replies, and fragments thereof, all over a single connection.
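
To make the connection-sharing point concrete, the sketch below shows several threads invoking the same CORBA object reference concurrently. A typical ORB multiplexes all of these calls over a single IIOP connection, tagging each GIOP request with a request ID and matching replies as they return. The object URL and the IDL-generated AccountService and AccountServiceHelper types (and the getBalance operation) are hypothetical; they stand in for whatever back-office interface is actually deployed.

```java
import org.omg.CORBA.ORB;

// Hypothetical IDL-generated stub types are assumed: AccountService, AccountServiceHelper.
public class IiopClient {
    public static void main(String[] args) throws Exception {
        ORB orb = ORB.init(args, null);

        // Resolve a reference to the back-office service (the address is illustrative).
        org.omg.CORBA.Object ref =
            orb.string_to_object("corbaname:iiop:backoffice.example.com:2809#AccountService");
        final AccountService account = AccountServiceHelper.narrow(ref);

        // Many application-server worker threads can call the same stub at once.
        // The ORB shares one IIOP connection and correlates replies by GIOP request ID.
        for (int i = 0; i < 20; i++) {
            final int customer = i;
            new Thread(new Runnable() {
                public void run() {
                    System.out.println(account.getBalance(customer));
                }
            }).start();
        }
    }
}
```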

SOAP over IIOP

IIOP presents a very strong case for adoption as the protocol of choice within the back office. It's one of the few standards-based protocols (thus offering a wire-level interoperable transport) proven to scale in the enterprise. If one wanted to build a Web services framework that supported SOAP messages over IIOP, integrating open-source projects such as Apache AXIS with the JDK IIOP stack might do it.

The Need for an Enterprise Service Bus
A key difference between a Web services toolkit and an Enterprise Service Bus (ESB) is the ability to switch message format and protocol as necessary. For the sake of this discussion, we are talking only about switching the underlying protocol used to deliver SOAP messages, and we're working under the assumption that there is no need to integrate with legacy systems that expose endpoints with other message formats. The programming model should insulate developers from the protocol and transport, which should instead be a deployment option as opposed to a decision made at development time.


Most ESBs, with minimal additional development effort, allow you to build distributed systems that communicate using any of the following:
· SOAP over HTTP
· SOAP over MQSeries
· SOAP over JMS


However, SOAP is only one among many protocols with which enterprise applications need to deal.

An ESB must also allow you to expose your business logic over more than one protocol/transport. In particular, it should be capable of exposing the endpoint over an enterprise-strength transport without sacrificing support for SOAP/HTTP. The vast majority of data (in the back office) should be transmitted over an enterprise-strength transport while still allowing for use of SOAP/HTTP where applicable. The forthcoming WS-Addressing standard from W3C will support metadata within its endpoint reference (EPR) construct, thus making it possible to describe and reference endpoints regardless of the protocol and transport used.

In some cases, however, you will not be able to control the volume of requests coming from SOAP/HTTP-based clients. In such situations the ESB should provide you with a relay that accepts SOAP messages over the HTTP transport and sends them over the enterprise-strength transport to the ESB-enabled back-office server. This is effectively a concentrator implementation: instances of inexpensive relays are pooled (typically behind a hardware HTTP load balancer) rather than attempting to create pools of expensive and complex back-office systems.
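
A minimal relay of the kind described could be sketched as a servlet: it accepts the SOAP envelope over HTTP and hands it to the enterprise transport (JMS is used here purely as a stand-in), returning the correlated reply to the HTTP client. The servlet reuses the hypothetical JmsRequestReply helper sketched earlier; none of this reflects any particular ESB product's API.

```java
import java.io.*;
import javax.jms.ConnectionFactory;
import javax.servlet.http.*;

// Sketch of an HTTP-to-enterprise-transport relay (the "concentrator" role described above).
// SOAP envelopes arrive over HTTP POST and are forwarded over JMS; the correlated reply
// is written back to the waiting HTTP client.
public class SoapRelayServlet extends HttpServlet {
    private ConnectionFactory factory; // looked up or injected at deployment time (illustrative)

    protected void doPost(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        // Read the incoming SOAP envelope verbatim.
        StringBuilder envelope = new StringBuilder();
        BufferedReader reader = req.getReader();
        for (String line; (line = reader.readLine()) != null; ) {
            envelope.append(line).append('\n');
        }
        try {
            // Forward over the enterprise transport, reusing the request/reply helper above.
            String reply = JmsRequestReply.call(factory, envelope.toString());
            if (reply == null) {
                resp.sendError(HttpServletResponse.SC_GATEWAY_TIMEOUT);
                return;
            }
            resp.setContentType("text/xml; charset=utf-8");
            resp.getWriter().write(reply);
        } catch (Exception e) {
            resp.sendError(HttpServletResponse.SC_BAD_GATEWAY, e.getMessage());
        }
    }
}
```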

In the past, the learning curve associated with the middleware technologies involved was a barrier to accessing back-office systems. With the advent of Web services, this barrier has been lowered substantially; this is a step in the right direction and is essential for effective deployment of an SOA. An ESB insulates the developer from the middleware used in deployment. Given this insulation, the traditional vendor lock-in in the back office can be removed. To be truly service-oriented you should not be beholden to any individual middleware vendor, regardless of the scalability requirements. Most ESB vendors will provide proprietary alternatives to HTTP where scalability requirements demand it; for example, IBM encourages the use of MQ to deliver SOAP messages. Alas, all direct consumers of the service then need that vendor's technology installed. Your ESB should provide you with a higher-quality, standards-based alternative, such as SOAP/IIOP.

Conclusion

SOAP/HTTP has one major thing going for it: its ubiquity and widespread support. While HTTP does a great job at serving Web pages, it is not an enterprise-strength protocol, and does not scale well in the back office. Clearly an open, interoperable, standards-based, enterprise-strength protocol is needed here. The most widely deployed protocol that fulfills all of these criteria today is the OMG's IIOP.

One thing an ESB allows you to do that a Web services toolkit cannot is adopt an SOA using SOAP without sacrificing the qualities of service required in the back office. In other words, an ESB allows you to apply SOAP where it fits best without forcing you to also apply it to problems for which it's a poor solution. An ESB should allow the developer to preserve the loose coupling that SOAP affords us, while taking advantage of the qualities of service demanded in the back office that are available with IIOP.

References
· R. T. Fielding, et al. "Hypertext Transfer Protocol -- HTTP/1.1", Internet RFC 2616, June 1999: www.w3.org/Protocols/rfc2616/rfc2616-sec8.html#sec8.1.2
· R. Fielding, ApacheCon presentation, November 2002: http://gbiv.com/protocols/waka/200211_fielding_apachecon.ppt
· Waka protocol progress page: www.apache.org/~fielding/waka/


By Frank Lynch and Mark Fynes

Source: http://webservices.sys-con.com/read/114115.htm

HTTP-NG did support a connection-sharing (multiplexing) mechanism, but it was never adopted.
