XML Publisher and Out of Memory Errors

This post looks at the ‘Out of Memory’ errors people are hitting with XML Publisher and rounds up several possible resolutions, including applying a patch, changing a parameter, and creating a configuration file that specifies a temporary directory.

I’m getting a lot of comments on the XML Publisher posts about ‘Out of Memory’ errors. I’ve not experienced this myself so can’t give a fix or workaround, but wondered whether anyone else had hit this issue and found a resolution?

Doing a bit of digging, I found a number of suggestions; however, none of them relate to the PeopleSoft implementation of XMLP, so I’m not convinced by any particular solution.

There’s a Tools patch out for this issue (8.48.06 and later contain the fix), so check your version of Tools:
http://www.peoplesoft.com/psp/portprd/CUSTOMER/CRM/c/C1C_MENU.C1_SOLN_SUMMARY.GBL?page=C1_SOLN_SUMMARY&SETID=SHARE&SOLUTION_ID=201046967

There’s a suggestion to change a parameter for XMLP (I guess this would be on the Process Defn or more likely the Process Type in PeopleSoft):
http://forums.oracle.com/forums/thread.jspa?threadID=568832&tstart=0&messageID=2116283
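
I haven’t verified exactly which parameter that thread recommends, so treat the following purely as an illustrative guess: if the report runs under a Java-enabled PeopleSoft server process, one commonly tweaked setting is the JVM heap ceiling in the Process Scheduler configuration. The file, section, and -Xmx value below are assumptions, not the documented fix:

```ini
; Illustrative sketch only: raising the Java heap available to the Process Scheduler JVM.
; Assumed location: the [PSTOOLS] section of psprcs.cfg; 512m is an arbitrary example.
; Append -Xmx to any options already on this line rather than replacing them.
[PSTOOLS]
JavaVM Options=-Xmx512m
```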

There’s a suggestion to create a config file (it is not delivered) to specify a temporary directory for processing large files:
http://asun.ifmo.ru/docs/XMLP/help/en_US/htmfiles/B25951_01/T421739T422152.htm
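
The linked page documents the XML Publisher runtime configuration file (commonly named xdo.cfg). As a hedged sketch only, assuming the standard file format and the system-temp-dir property described there, pointing temporary processing at a directory with plenty of space might look like this (the path is an arbitrary example; check the file location and namespace against the documentation for your release):

```xml
<!-- Illustrative xdo.cfg sketch; property name and namespace assumed from the XML Publisher docs -->
<config version="1.0.0" xmlns="http://xmlns.oracle.com/oxp/config/">
  <properties>
    <!-- Directory used for temporary files when processing large documents -->
    <property name="system-temp-dir">/data/xmlp_temp</property>
  </properties>
</config>
```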

And there’s also a suggestion that a patch will fix it:
http://www.oracle.com/technology/products/xml-publisher/docs/AboutXMLP562.htm

If I were troubleshooting this and was already on 8.48.06 or greater, I’d probably try the parameter change first, followed by the config file.

If anyone has encountered this error and fixed it, it’d be great if you posted a comment to let us know the resolution.

### Dealing with memory buffering when streamed data has no content length

When processing streamed data, if no content length (Content-Length) is specified, the receiver has no way of knowing when to stop reading. This commonly leads to memory buffering problems, because the entire stream may be loaded into memory without any proper boundary control.

One common solution is to introduce delimiters or a frame-delimiting mechanism at the protocol design stage in place of a fixed content-length field. For example, HTTP/1.1 lets you omit the `Content-Length` header; with Transfer-Encoding: chunked enabled, the data is transferred block by block using chunked encoding[^4]:

```http
HTTP/1.1 200 OK
Transfer-Encoding: chunked

<length_of_chunk_1>\r\n
data_for_first_chunk\r\n
<length_of_chunk_2>\r\n
data_for_second_chunk\r\n
...
0\r\n
```

A custom application-layer protocol can use a similar strategy: announce the size of each block before sending it, so the receiver can process the stream incrementally instead of loading all of the data into memory at once[^5] (a minimal sketch of this length-prefix framing follows at the end of this section).

Another workable approach is to combine a timeout with a maximum allowed packet size to manage misbehaving connections: restart the clock each time a new piece of data arrives, and if no further payload turns up within the allotted time, terminate the session and release its resources[^6].

You can also consider a sliding-window scheme that adjusts the buffer capacity limit dynamically and varies the read/write rate according to actual network conditions, which improves performance while reducing the risk that a burst of traffic brings the system down[^7].

Finally, whichever measure you adopt, keep compatibility and extensibility in mind so that future maintenance and upgrades stay straightforward[^8].

A simple Python sketch combining the timeout and maximum-size checks described above:

```python
import time


def process_stream_with_timeout(stream, max_size=1_000_000, timeout=30):
    """Read a stream of unknown length, capping both idle time and total size."""
    last_data_time = time.time()
    total_data = b""
    while True:
        if time.time() - last_data_time >= timeout:
            raise TimeoutError("Stream idle for longer than the allowed duration.")
        chunk = stream.read(1024)  # read in small chunks rather than all at once
        if not chunk:  # end of stream
            break
        last_data_time = time.time()  # restart the clock whenever data arrives
        total_data += chunk
        if len(total_data) > max_size:
            raise MemoryError("Exceeded the maximum allowed memory for this stream.")
    return total_data.decode("utf-8")
```
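
As referenced above, here is a minimal sketch of length-prefix framing for a custom application protocol. It is illustrative only: the 4-byte big-endian length header, the size cap, and the helper names are assumptions rather than part of any particular protocol.

```python
import struct


def send_frame(sock, payload: bytes) -> None:
    # Prefix each block with a 4-byte big-endian length so the receiver
    # knows exactly how much to read (assumed, illustrative framing).
    sock.sendall(struct.pack(">I", len(payload)) + payload)


def recv_frame(sock, max_size=1_000_000) -> bytes:
    header = _recv_exact(sock, 4)
    (length,) = struct.unpack(">I", header)
    if length > max_size:
        raise MemoryError("Frame larger than the allowed buffer size.")
    return _recv_exact(sock, length)


def _recv_exact(sock, n: int) -> bytes:
    # Keep reading until exactly n bytes have arrived; only one frame is
    # ever held in memory at a time, so the stream is processed incrementally.
    data = b""
    while len(data) < n:
        chunk = sock.recv(n - len(data))
        if not chunk:
            raise ConnectionError("Stream ended before the full frame arrived.")
        data += chunk
    return data
```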