Message Driven Bean Proxy

Message Driven Beans consume messages from predefined JMS Topics or Queues. An application that needs several Message Driven Beans to handle various messages faces the problem of defining a number of JMS destinations and implementing a different MDB to listen to each of them. This can clutter the Application Server with a variety of JMS resources and MDB deployments that perform potentially similar operations.

A solution to this issue can be the Message Driven Bean Proxy pattern. It should be applied in situations where multiple message types need to be processed by an application and the MDBs consuming them may or may not access shared business logic components to complete their work. MDBs performing highly specialized tasks, processing large amounts of data, or desiring to meet very high performance criteria should be excluded. The solution establishes a single application-wide MDB, listening to a JMS Topic or Queue, as a proxy for the specific operations that need to be performed in response to message arrival. In this scenario, a single MDB is responsible for handling all the messages for an application. The decision about which functionality to execute is based on the message properties assigned by the sender.

A factory pattern can be used to encapsulate the decision-making process. When a message is received by the MDB, a factory mechanism is called to create a specific class that will perform all the necessary message handling. The factory class -- the Message Handling Factory -- is responsible for instantiating the correct processing class -- the Message Handling Class. Each message type should have a corresponding Message Handling Class that performs the necessary operations.

When implementing the MDB Proxy pattern, the client sending a JMS message needs to set message properties with specific data uniquely identifying the message. The Message Handling Factory can decide which Message Handling Class to create based on these properties. Message properties are set and extracted through methods exposed by the JMS Message interface. Each Message Handling Class should contain the logic specific to the particular message type it processes. The handling classes, however, should not implement any business logic themselves but should instead call the appropriate methods of business logic components (see the Message Driven Bean Strategy pattern for more information). This approach limits the proxy MDB's job to creating a Message Handling Factory, instantiating a new Message Handling Class through it, and passing the received message to that class for processing.
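
A minimal sketch of how the pieces described above might fit together (EJB 2.0-style javax.ejb and javax.jms APIs). The "messageType" property name and the OrderHandler/InventoryHandler classes are only illustrative placeholders; a real application would supply its own property names and handlers:

import javax.ejb.EJBException;
import javax.ejb.MessageDrivenBean;
import javax.ejb.MessageDrivenContext;
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageListener;

// Each message type gets its own Message Handling Class. Handlers should
// delegate to business logic components rather than implement the logic.
interface MessageHandler {
    void handle(Message message) throws JMSException;
}

class OrderHandler implements MessageHandler {
    public void handle(Message message) throws JMSException {
        // extract order data from the message and call the order components
    }
}

class InventoryHandler implements MessageHandler {
    public void handle(Message message) throws JMSException {
        // extract inventory data from the message and call the inventory components
    }
}

// The Message Handling Factory maps the sender-assigned property to a handler.
class MessageHandlerFactory {
    MessageHandler createHandler(Message message) throws JMSException {
        String type = message.getStringProperty("messageType"); // set by the sender
        if ("ORDER".equals(type)) {
            return new OrderHandler();
        } else if ("INVENTORY".equals(type)) {
            return new InventoryHandler();
        }
        throw new IllegalArgumentException("Unknown message type: " + type);
    }
}

// The single application-wide proxy MDB: create the factory, obtain a handler,
// and pass the message on for processing.
public class ProxyMessageBean implements MessageDrivenBean, MessageListener {
    private MessageDrivenContext context;
    private final MessageHandlerFactory factory = new MessageHandlerFactory();

    public void onMessage(Message message) {
        try {
            factory.createHandler(message).handle(message);
        } catch (JMSException e) {
            // a real bean would log and/or mark the transaction for rollback
            throw new EJBException(e);
        }
    }

    public void setMessageDrivenContext(MessageDrivenContext ctx) { this.context = ctx; }
    public void ejbCreate() {}
    public void ejbRemove() {}
}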

This pattern eliminates the need to create multiple JMS destinations and MDB implementations for an application and establishes a common message handling framework. When a new message type is introduced, the only thing that needs to be done is to develop a new Message Handling Class and update the Message Handling Factory logic. This does introduce a need to re-deploy and re-test the proxy Bean, but that operation should be more efficient than deploying and testing a new Message Driven Bean. The single-JMS-destination-and-single-MDB-per-application concept may create performance issues, but a good load balancing or clustering technique can remedy the situation.

Message Driven Bean Proxy
Posted By: Gal Binyamini on December 14, 2001
As usual, Leo posts and I argue :)

First of all, I'll make a note about sharable business logic (SBL from here on) which sets the context of my argument.
Suppose I'm developing a bogus bookstore application. I would consider the following to be SBL (the list is not exhaustive):
- update order status
- update inventory status
- find pending orders for a specific item (for items we don't have in the inventory).
Suppose I'm implementing the bogus backend using MDBs. When a message comes in from the inventory saying a new item is available again, I may want to:
1. update the inventory status
2. find all the orders for this item that were pending for it.
3. update as many of them as I can (as now the item is available).
I do not view this chain of operations as SBL. It is the algorithm of a very specific application, my bogus backend application. I see no sense in putting this code in a component. Components should be used to compose applications such as this, not to contain the application in themselves. So in this case, I would put this code in the MDB (perhaps by delegating to a separate class, but not to a separate component).

My argument is that the scope of the pattern is ill-defined, and that the pattern fails to address it.
I see two elements in the scope:
- The first, and more heavily discussed, element involves the problem of having "too many" topics and MDB deployments. The pattern claims its own implementation would be more efficient, put less load on the app server, etc. It does this by using JMS as a "flat namespace" and nesting its own namespace inside JMS using message properties. If this approach were so efficient, why wouldn't the JMS provider use it? And how is it that the message handlers created by the factory will be more efficient than MDBs? Why do MDBs strike you as something so heavy and difficult to maintain? The message handlers in the pattern are just MDBs stripped of the pooling capabilities. The application server pools the MDBs and uses LRU caching, hit-counting schemes, etc. to manage them efficiently. Why do you think you will do a better job?
I think that in fact, no matter how well we implement the caching of these message handlers, we will still end up with a less efficient and less scalable application. Here are just a couple of reasons (there are more, but I think these two make the point):
1. The JMS server will handle the routing better than you possibly can. For instance, in a cluster, if you used JMS facilities, only the nodes that need a specific message would get it. With your implementation, all nodes will receive all messages, because the JMS server doesn't know anything about who needs what.
2. If just one of your MDBs needs a durable subscription, it will have to subscribe durably to the big topic. This means that all messages will be stored up for your MDB, instead of just the important durable ones. It also means that a failure in a single MDB can cause all the system's messages to stack up.
Aside from this, I also doubt even incredibly experienced programmers could implement the caching as well as the app server does (see the note below about specific cases). There are also threading issues that an MDB programmer cannot control that may cause additional efficiency problems (I can list specifics if anyone is interested).
Reasons such as the ones above are among the motivations behind the current design of JMS, which provides its own namespace rather than letting you make your own (although it may seem simple enough to do that with message properties). You can find some good arguments in the JMS specs.

- The second element involves the creation of a "common message handling framework". I argue that in any case where JMS selectors suffice, you don't gain anything. When JMS selectors suffice, you can think of the app server as your central MDB and factory. It will create "message handlers" (in this case simply MDBs) according to your deployment settings. This factory can be just as good as yours, and maybe even better. It supports changing the configuration "on-the-fly" (in every app server I know) and allows you to add new message handlers without shutting down the central MDB for redeployment (which you may need with this pattern, unless you get around it with dynamic code loading... I can't think of a "by-the-spec" way to do that, because you can't change the classes accessible to your MDB without redeploying its JAR).

I could go on about how this pattern is non-standard, will be less compatible with "normal systems", and will be harder to learn for developers who already know JMS and EJB, but I don't think these are the key problems (and this post is getting too big).

I think the only place where you should use this pattern is where JMS topic separation and selectors just aren't good enough to route your messages to handlers. You can almost always get around this by adding message properties, but I can't argue that this is always the case. I would try my best shot at using selectors, and only if that's impossible fall back to this pattern. Even when that happens, the "central" MDB should only be responsible for those specific hard-to-route messages.

Note: I mentioned efficiency and scalability a couple of times. I know these are not the intent of the pattern. I tried to show that the pattern doesn't give you any real advantage, and that it also hurts in these areas, which are always important to some degree.

Regards
Gal

Message Driven Bean Proxy
Posted By: Leo Shuster on December 14, 2001
As always, Gal makes excellent points. I do not disagree with them at all, but it seems the arguments stem from a slight misunderstanding of what the pattern tries to achieve.

First of all, this pattern does not propose to implement MDB’s specific message handling logic in separate components -- simply classes (see Message Handling Classes description). Second of all, the pattern does not offer its own implementation of an MDB mechanism -- just an extension on top of the existing one. The proxy MDB will still be executed in the container, so MDB pooling and caching will be handled by the app server. And so will JMS routing. This pattern does not even attempt to provide an alternative implementation of these mechanisms.

The pattern stated in one of the first paragraphs: “MDBs performing highly specialized tasks, processing large amounts of data, or desiring to meet very high performance criteria should be excluded.” The example of an MDB subscribing to a durable topic would constitute an exclusion condition.

This pattern is based on the simple fact that most often MDBs' differences lie in the onMessage method. The bulk of the functionality is performed there. If you wanted to handle another message, you would need to write the same skeleton for a new Bean and place the specific logic in the onMessage method. Knowing that a JMS message can be handled generically through the Message interface, a single proxy MDB can handle most of an application's messages. In essence, this is what this pattern is about.

The only difference between this and a normal MDB implementation is an extra call to the factory class. The performance of an individual transaction should remain virtually unchanged. Arguably, efficiency may suffer with this pattern since one Bean will now be responsible for handling multiple messages, but if the proxy MDB pool size is increased, what is the difference between instantiating 100 proxy Beans as opposed to 50 Beans X and 50 Beans Y? Furthermore, this pattern does not introduce any non-standard features but is rather based on standard J2EE and JMS functionality.

Message Driven Bean Proxy
Posted By: Gal Binyamini on December 14, 2001
Leo makes good points as well :) but I too think that he didn't understand exactly what I mean...

"First of all, this pattern does not propose to implement MDB’s specific message handling logic in separate components -- simply classes (see Message Handling Classes description)."
OK. I was just making sure that's what we both mean, because if you meant to put all logic in components I would have to word my arguments differently.

"Second of all, the pattern does not offer its own implementation of an MDB mechanism -- just an extension on top of the existing one. The proxy MDB will still be executed in the container, so MDB pooling and caching will be handled by the app server. And so will JMS routing. This pattern does not even attempt to provide an alternative implementation of these mechanisms."
I understood that's what you meant. However, a large part of JMS's routing capabilities comes from its non-flat namespace. By flattening the namespace down, you damage some of these features. For instance:
1. A smart JMS server will only send messages to listeners that want them. For instance, if you have four nodes in a cluster and each takes care of different tasks (maybe some tasks are shared), messages will only get sent to the listeners that want them. While this is still true with your pattern, the JMS server doesn't "know" which listeners want what. All the messages go to a single topic, and everybody listens on it. So in effect, the message filtering JMS provides won't be able to help you.
I gave some other examples (also involving the pooling of instances); maybe they will be easier to understand in this context, or am I still missing your point?

"The pattern stated in one of the first paragraphs: “MDBs performing highly specialized tasks, processing large amounts of data, or desiring to meet very high performance criteria should be excluded.” The example of an MDB subscribing to a durable topic would constitute an exclusion condition."
I wouldn't call durable subscriptions a "specialized task", but I guess that depends on what kind of systems you are working on. Anyway, this was just one example.

"This pattern is based on a simple fact that most often MDBs’ differences lie in the onMessage method. Bulk of functionality is performed there. If you wanted to handle another message, you would need to write the same skeleton for a new Bean and place the specific logic in the onMessage method."
This is indeed the case, IMHO. My question is, what about the "obvious" solution of building a superclass for MDBs that implements the MDB-related methods? After all, there are just two of them: set context and remove. Both can be implemented to provide basic functionality (save the context, no-op on remove) and be overridden when needed. This seems to me just as easy as what you describe. What gains does the pattern give you over this?
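
A rough sketch of what such a superclass could look like (EJB 2.0-style; the NewItemAvailableBean subclass is only a made-up example):

import javax.ejb.MessageDrivenBean;
import javax.ejb.MessageDrivenContext;
import javax.jms.Message;
import javax.jms.MessageListener;

public abstract class AbstractMessageBean implements MessageDrivenBean, MessageListener {
    protected MessageDrivenContext context;

    // save the context; override if a subclass needs something fancier
    public void setMessageDrivenContext(MessageDrivenContext ctx) {
        this.context = ctx;
    }

    // no-op on remove; override when cleanup is required
    public void ejbRemove() {}

    public void ejbCreate() {}

    // the only method a concrete MDB still has to write
    public abstract void onMessage(Message message);
}

// Hypothetical subclass: all of the message-specific logic lives here.
class NewItemAvailableBean extends AbstractMessageBean {
    public void onMessage(Message message) {
        // update inventory status, find pending orders, update them, etc.
    }
}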

"The only difference between this and a normal MDB implementation is an extra call to the factory class. The performance of an individual transaction should remain virtually unchanged. Arguably, efficiency may suffer with this pattern since one Bean will now be responsible for handling multiple messages but if the proxy MDB pool setting is increased what is the difference between instantiating 100 proxy Beans as opposed to 50 Beans X and 50 Beans Y? Furthermore, this pattern does not introduce any non-standard features but is rather based on standard J2EE and JMS functionality."
If you don't implement caching, then this pattern adds an extra instantiation to every message. But perhaps more importantly, it may hurt the app server's optimization attempts. The app server is given less knowledge. This may interfere with the LRU implementation and make hit counting irrelevant. It's not that the cache won't work, it's just that the optimization will not help as much as it normally would. It's not critical, but I still don't understand why you should agree to this when other approaches are just as good and don't have this de-optimizing effect. I guess I'm still missing the intent of the pattern.

As for the non-standard comment, I can explain in more detail what I meant, but as I said it's less important, so I will wait until after I've understood the issues above.

Gal

Message Driven Bean Proxy
Posted By: Leo Shuster on December 15, 2001
I think I understand now where our thinking diverges. All of the examples you provide deal with JMS topics, while this pattern is most suitable for queues. After reviewing the original post, it is clear that this limitation was not explicitly stated. Topics represent a significantly more complex mechanism and, as well articulated in Gal’s messages, are not suitable for this pattern in most situations.

The intent of this pattern is to minimize the number of JMS destinations and MDB deployments. I like the idea of a superclass, but it does not resolve the above issue. On the other hand, it presents a nice implementation mechanism and minimizes the amount of code that needs to be written. This approach would be great for all MDB implementations that fall outside this pattern's scope. Gal, how about writing this up and posting it on the site? This way, I can ask you to clarify your statements for a change. :-)

Message Driven Bean Proxy
Posted By: Gal Binyamini on December 15, 2001
Now that I understand the pattern addresses queues (at least mostly), I can more easily understand Leo's logic.
I think that even in the case of queues, there is no reason to use just one queue. The pattern doesn't say you should use just one, but it doesn't provide guidelines as to which types of messages should go to the same queue. I would use this separation as a rule of thumb:
- group the commands your server provides using async invocation in normal Java interfaces.
- make a queue for each interface.
- make messages that invoke operations from the same interface go in the same queue.
Note that this is just a theoretical way to create the destination separation. You don't actually need Java interfaces. However, I do actually create these Java interfaces and use them in the client. Their implementation can be made generic using Java dynamic proxies (I can describe my implementation if anybody is interested; the actual code is copyright :( ).
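
A generic sketch of the dynamic-proxy idea (this is not the copyrighted implementation referred to above; the "operation" property and the JNDI names are made up):

import java.io.Serializable;
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;
import javax.jms.ObjectMessage;
import javax.jms.Queue;
import javax.jms.QueueConnection;
import javax.jms.QueueConnectionFactory;
import javax.jms.QueueSender;
import javax.jms.QueueSession;
import javax.jms.Session;
import javax.naming.InitialContext;

public class QueueInvocationHandler implements InvocationHandler {
    private final QueueConnectionFactory factory;
    private final Queue queue;

    public QueueInvocationHandler(QueueConnectionFactory factory, Queue queue) {
        this.factory = factory;
        this.queue = queue;
    }

    // Every interface call becomes one message: the method name goes into a
    // property, the arguments into the body (assumes the arguments are
    // Serializable, or null for no-argument methods).
    public Object invoke(Object proxy, Method method, Object[] args) throws Exception {
        QueueConnection connection = factory.createQueueConnection();
        try {
            QueueSession session =
                    connection.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);
            ObjectMessage message = session.createObjectMessage((Serializable) args);
            message.setStringProperty("operation", method.getName());
            QueueSender sender = session.createSender(queue);
            sender.send(message);
        } finally {
            connection.close();
        }
        return null; // asynchronous invocation: nothing to return
    }

    // Creates a client-side proxy for any "command" interface.
    public static Object createProxy(Class iface, String factoryJndi, String queueJndi)
            throws Exception {
        InitialContext ctx = new InitialContext();
        QueueConnectionFactory qcf = (QueueConnectionFactory) ctx.lookup(factoryJndi);
        Queue q = (Queue) ctx.lookup(queueJndi);
        return Proxy.newProxyInstance(iface.getClassLoader(), new Class[] { iface },
                new QueueInvocationHandler(qcf, q));
    }
}

A client could then obtain, say, a hypothetical OrderService proxy with createProxy(OrderService.class, "ConnectionFactory", "queue/orders") and invoke it like an ordinary Java interface.
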
If we both agree on this kind of separation scheme (not necessarily this precise one), our differences were just semantic. I got the idea that you meant to send all the messages in your system to just one queue, which I think is a stronger requirement than the pattern's goal (avoid cluttering the server with many queues). If you meant something like what I described above, i.e. simply avoiding creating a queue for every single "method" of the system, I completely agree. I also think that, for this kind of separation scheme, you can also use topics rather than queues. The problems with topics arise because *all* messages go to the same topic. If only messages to the same logical object went to the same topic, I think the harmful side effects would be too small to notice.

As for the idea of using an MDB superclass, I don't know if it can be called a "pattern". Actually, many of the so-called "patterns" in the Patterns section of TSS are not design patterns, IMHO. That isn't to say they do not provide good advice, cool implementations, etc. But a high/low key generation algorithm isn't a design pattern :) Pardon my drifting.

Besides, if I were to post a pattern you would comment on my vague writing and I would have to clarify... Where is the fun in that? :)

Regards
Gal

Message Driven Bean Proxy
Posted By: Leo Shuster on December 15, 2001
The actual implementation of this pattern depends on your needs. You can separate destinations by using the interface approach you described (which is quite interesting) or by other means. However, you could also handle all the messages through a single MDB. The pattern simply offers a solution to consolidate JMS destinations (primarily queues) and minimize the number of MDBs that need to be deployed. In the case of topics, "if only messages to the same logical object went to the same topic", that would make them virtually identical to queues. Thus, the pattern would be applicable in this situation.

If you read the pattern submission page, it states that it is "looking for J2EE patterns, design strategies, tips and tricks". So, the MDB superclass approach would definitely fit into these requirements.

Message Driven Bean Proxy
Posted By: Gal Binyamini on December 15, 2001
"However, you could also handle all the messages through a single MDB"
I think that this is a fundamental problem.
I've listed some scalability/efficiency issues, but now I'd like to view this problem from a plain design perspective.
One of the key advantages of using MOM is that the set of message handlers is dynamic and heterogeneous. You can use different boxes to do different things, and the client doesn't have to know about it. For example, the parts of my system that did JPEG manipulation ran on proper hardware that could apply affine transforms using a hardware card. We didn't have these to begin with, but we easily added them without stopping the system.
If you use this pattern, the computational set that implements your message handling must be homogeneous. That is, all the handlers need to be able to handle all the messages. I find this unacceptable.
The only way around this is to use JMS selectors to get the right messages to the right consumers. As I mentioned in my first post, if you do that then you might as well use different queues. It may seem like having multiple queues would be harder on the messaging server, but it isn't. If the best way to go were to hold a single queue with a property stating which logical queue each message belongs to, the messaging server would have done that itself. A decent messaging server won't do it that way, because there are much better alternatives in terms of efficiency.
Anyway, my point is that if you use selectors to imitate "logical queues", you're doing something wrong. This pattern forces you to either do that or bind yourself to homogeneous computational sets (i.e., each message handler can handle all messages). As I mentioned above, heterogeneity is (for me) one of the key reasons for using MOM in the first place.

My solution to this problem is to create distinct queues, each representing a homogeneous set. The queues are, amongst themselves, heterogeneous. This is the approach I described above. You can make an analogy to RMI (CORBA, etc.): you don't want to bind your entire system to running on the same box (in our case, the same type of box). So you create different remote interfaces (queues). Each interface represents functions that will run on the same box (type of box), but the system as a whole can span different boxes.


About the Patterns section, I didn't know it should contain anything other than design patterns. I don't know what fooled me; I think it was the name...
Anyway, it seems to me that the MDB superclass idea is too trivial to post. It's like posting a message that says "when you have a class that logically extends another class, use inheritance".
If you think this idea is worth posting, you have my blessing :)

Gal

Message Driven Bean Proxy
Posted By: Leo Shuster on December 16, 2001
Gal, as always, makes excellent points. However, the discussion has been taken a little bit out of context. The solution of handling all messages through a single MDB is a hypothetical one. It is often hard to group messages due to the issues pointed out in Gal's post. The pattern only offers a generic solution to a problem where a number of messages are handled by different MDBs that require similar resources to perform the processing. These MDBs can be consolidated into a single proxy MDB. How the messages are grouped, and whether or not they can be grouped, is left for the developers to decide. The solution offered by Gal represents one of the potential examples.

MDB Dispatcher Pattern
Posted By: Dmitri Colebatch on December 17, 2001
Coming in a bit late here, but would like to hear arguments based on the following input. I've thought through a similar pattern, and in fact posted something a while back (http://www.theserverside.com/discussion/thread.jsp?thread_id=9713). The post could be clearer, as I'm no expert in communicating patterns. In essence, it has some similar goals to what Leo describes.

Minimal number of queues: The key advantage of this is that the sender doesn't need to know what happens to the message next. To use multiple queues, the sender must have knowledge of where it should send the message - I called my pattern the Message Dispatcher Pattern, which hopefully communicates this objective. This allows a message to be processed multiple times, mutating as it goes, with the dispatcher deciding, based on message state, what should happen next.

I do however agree with many of the points that Gal makes about performance and pooling, and about implementing something that really is container logic. But if one has a requirement for a system with loosely coupled messages, then what I (and now Leo) suggested is one option to do it in a spec compliant manner. The alternative - using separate queues - restricts the system in that the sender of the message must know what should be done with it. This means that you gain asynchronous processing, but still have relatively tight coupling.

I didn't get any feedback on what I suggested when I posted it, so if either of you have any comments on it I'd be glad to hear them.

cheers
dim

MDB Dispatcher and Proxy Patterns
Posted By: Leo Shuster on December 17, 2001
The MDB Dispatcher Pattern proposed by Dmitri seems to be similar to the MDB Proxy approach, although it suggests a much more definitive implementation. The Proxy pattern is more general and does not offer a specific solution. It also does not assume that the whole message will be given to SLSBs for processing, but rather allows it to be handled by a Message Handling Class. The main difference here lies in the fact that message parsing and business logic should be separated. The intent of both patterns, however, seems to be the same, and the approach similar. I think these patterns can be considered variations on the same theme.

I did not see Dmitri's pattern when I was submitting my post, even though I looked for similar submissions. I think somehow it got hidden from the pattern page. Maybe that's why it did not receive any comments.

MDB Dispatcher Pattern
Posted By: Gal Binyamini on December 18, 2001
I was unaware of Dmitri's pattern, and now, having taken a look at it, I agree with Leo that it is a variation on the same theme.

I would like to comment on Dmitri's rationale for the pattern, which I think is different from Leo's.
First of all, I think before using this pattern you must ask yourself if you really need another indirection. Queues and Topics (as used in MOM) are essentially objects that reduce coupling amongst components. By using a Queue, you achieve a certain level of indirection, which is commonly enough. That's, IMHO, why objects such as Queues have been so strongly standardized, while multiplexed queues (which are the logical equivalent of this pattern, implemented at the messaging server level) have not been as widely adopted.
I think it is also useful in this context to view this issue through the "object analogy" I listed earlier. The issue you are describing, in analogous terms, is that you have a client which shouldn't know exactly what object it is talking to, only make sure the job is done. Here are a couple of well-known patterns you can use in that case:
1. Indirect the call to another object. In this case, you can indirect the call to a SLSB that has information about the specific objects (queues) responsible for each type of request. The SLSB can place the message in the correct queue (a rough sketch follows after this list).
2. Implement something similar to interfaces, by predefining queues for each type of request and having the servants (message listeners) serve messages on these queues. By defining "logical" queues for each type of request, you rid the client of having to know about the implementation of the servant. Servants are decoupled simply by the use of queues.
3. Use chain of responsibility. Here you pass messages around from queue to queue until they can be properly handled. This pattern is less common and is not applicable when messages are not lightweight.
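
A rough sketch of option 1, the routing SLSB (EJB 2.0-style, with the home and remote interfaces omitted; the request types and JNDI names are made up):

import javax.ejb.SessionBean;
import javax.ejb.SessionContext;
import javax.jms.Queue;
import javax.jms.QueueConnection;
import javax.jms.QueueConnectionFactory;
import javax.jms.QueueSession;
import javax.jms.Session;
import javax.jms.TextMessage;
import javax.naming.InitialContext;

public class MessageRouterBean implements SessionBean {

    // Business method: the client only knows the request type; the bean knows
    // which queue handles it.
    public void route(String requestType, String payload) throws Exception {
        String queueJndi = "ORDER".equals(requestType)
                ? "queue/orders"        // hypothetical JNDI name
                : "queue/inventory";    // hypothetical JNDI name

        InitialContext ctx = new InitialContext();
        QueueConnectionFactory qcf =
                (QueueConnectionFactory) ctx.lookup("ConnectionFactory");
        Queue queue = (Queue) ctx.lookup(queueJndi);

        QueueConnection connection = qcf.createQueueConnection();
        try {
            QueueSession session =
                    connection.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);
            TextMessage message = session.createTextMessage(payload);
            session.createSender(queue).send(message);
        } finally {
            connection.close();
        }
    }

    // EJB lifecycle plumbing
    public void ejbCreate() {}
    public void setSessionContext(SessionContext ctx) {}
    public void ejbRemove() {}
    public void ejbActivate() {}
    public void ejbPassivate() {}
}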

I think that in almost every case, using these patterns will achieve the same degree of decoupling without the problems listed in earlier posts. Of course, you came up with this pattern given a specific problem. Perhaps if you describe that problem, I can more easily understand the usefulness of this pattern.

Regards
Gal
