Can I access non-RT shared memory objects and mutexes from a Xenomai real time task?

This post asks how a real-time process can communicate with an ordinary pthread-based process in a larger piece of software built on a Xenomai-patched kernel. It covers the characteristics of shared memory, semaphores, and pipes as communication mechanisms, and recommends creating named pipes with the Xenomai native API.



I am programming a slightly complex piece of software with multiple multi-threaded processes. Since in one of them I need real-time capabilities (for robustness, basically), I patched my target kernel for Xenomai and programmed it using Xenomai's native skin.

Now I need two processes to communicate: one running real-time tasks and another running plain pthreads (the latter compiled without Xenomai's real-time libraries/skin).

My question is: can they communicate somehow? For instance, can I create a shared memory object (shm_open) and share mutexes even if one of them is in the RT environment?

  • If the answer is yes, should I use the POSIX skin in the Xenomai process?
  • If the answer is no, how can I safely share data between them or have them communicate? Named pipes are the only approach I can think of...

 

 

 


I would suggest you use the Xenomai native API to create named pipes, with rt_pipe_create() and so on.
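As a rough illustration, here is a minimal sketch of both ends of such a pipe. It assumes Xenomai 2.x with the native skin; the pipe name, minor number and message contents are made up, so check your Xenomai version's documentation for the exact prototypes and device paths.

    /* Real-time side (Xenomai native skin) -- illustrative sketch only. */
    #include <native/pipe.h>
    #include <string.h>

    #define PIPE_MINOR 0                 /* non-RT side opens /dev/rtp0 */

    static RT_PIPE msg_pipe;

    int setup_pipe(void)
    {
        /* poolsize 0: allocate messages from Xenomai's core heap */
        return rt_pipe_create(&msg_pipe, "my_pipe", PIPE_MINOR, 0);
    }

    void send_sample(const char *text)
    {
        /* P_NORMAL appends the message to the pipe in FIFO order */
        rt_pipe_write(&msg_pipe, text, strlen(text) + 1, P_NORMAL);
    }

The non-real-time process then sees the pipe as an ordinary character device and can use plain open()/read()/write(), without linking against any Xenomai library:

    /* Plain Linux side -- no Xenomai libraries needed. */
    #include <fcntl.h>
    #include <unistd.h>
    #include <stdio.h>

    int main(void)
    {
        char buf[128];
        int fd = open("/dev/rtp0", O_RDONLY);   /* minor 0 chosen on the RT side */
        if (fd < 0) { perror("open"); return 1; }

        ssize_t n = read(fd, buf, sizeof(buf)); /* blocks until a message arrives */
        if (n > 0)
            printf("got %zd bytes: %s\n", n, buf);

        close(fd);
        return 0;
    }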

There is one more thing you can use: message queues. However, I have always chosen named pipes over message queues.

Both shared memory and message queues can be used to exchange information between processes. The difference is in how they are used.

Shared memory is exactly what you'd think: it's an area of storage that can be read and written by more than one process. It provides no inherent synchronization; in other words, it's up to the programmer to ensure that one process doesn't clobber another's data. But it's efficient in terms of throughput: reading and writing are relatively fast operations.
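For reference, this is roughly what the plain POSIX shared-memory route looks like between ordinary Linux processes (the object name and size are made up; link with -lrt on older glibc). Note that nothing here synchronizes readers and writers:

    /* Writer side of a POSIX shared-memory sketch -- no synchronization. */
    #include <fcntl.h>
    #include <sys/mman.h>
    #include <unistd.h>
    #include <string.h>

    #define SHM_NAME "/demo_shm"
    #define SHM_SIZE 4096

    int main(void)
    {
        int fd = shm_open(SHM_NAME, O_CREAT | O_RDWR, 0666);
        if (fd < 0)
            return 1;
        if (ftruncate(fd, SHM_SIZE) < 0)        /* set the object's size */
            return 1;

        char *p = mmap(NULL, SHM_SIZE, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if (p == MAP_FAILED)
            return 1;

        strcpy(p, "hello from the writer");     /* readers must coordinate separately */

        munmap(p, SHM_SIZE);
        close(fd);
        return 0;
    }

A reader would shm_open() the same name, mmap() it, and read the region; whether it sees a consistent snapshot is entirely up to the synchronization you add yourself.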

A message queue is a one-way pipe: one process writes to the queue, and another reads the data in the order it was written until an end-of-data condition occurs. When the queue is created, the message size (bytes per message, usually fairly small) and queue length (maximum number of pending messages) are set. Access is slower than with shared memory because each read/write operation transfers a single message. But the queue guarantees that each operation either processes an entire message successfully or fails without altering the queue, so the writer can never fail after writing only a partial message, and the reader will retrieve either a complete message or nothing at all.
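A plain-Linux sketch of a POSIX message queue, shown only to illustrate the message-by-message semantics described above (queue name and sizes are made up; error handling kept minimal):

    /* POSIX message queue sketch -- each send/receive moves one whole message. */
    #include <mqueue.h>
    #include <fcntl.h>
    #include <string.h>
    #include <stdio.h>

    int main(void)
    {
        struct mq_attr attr = { .mq_maxmsg = 8, .mq_msgsize = 64 };
        mqd_t q = mq_open("/demo_queue", O_CREAT | O_RDWR, 0666, &attr);
        if (q == (mqd_t)-1) { perror("mq_open"); return 1; }

        const char *msg = "sensor sample 42";
        mq_send(q, msg, strlen(msg) + 1, 0);        /* whole message or failure */

        char buf[64];                               /* must hold mq_msgsize bytes */
        ssize_t n = mq_receive(q, buf, sizeof(buf), NULL);
        if (n >= 0)
            printf("received: %s\n", buf);

        mq_close(q);
        mq_unlink("/demo_queue");
        return 0;
    }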

Essentially, pipes - whether named or anonymous - are used like message passing: someone sends a piece of information to the recipient, and the recipient receives it. Shared memory is more like publishing data: someone puts data in shared memory, and the readers (potentially many) must use synchronization, e.g. via semaphores, to learn that there is new data, and must know how to read the memory region to find the information.

With pipes the synchronization is simple and built into the pipe mechanism itself: your reads and writes will block and unblock the app when something interesting happens. With shared memory it is easier to work asynchronously and check for new data only once in a while - but at the cost of much more complex code. You can also get many-to-many communication, but that again requires more work. And because of all this, debugging pipe-based communication is easier than debugging shared memory.
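Building on the shared-memory sketch above, a named semaphore is one way to implement the "publish, then tell the readers" pattern just described (names are made up; error handling omitted for brevity):

    /* Writer: publish into shared memory, then signal readers via a semaphore. */
    #include <fcntl.h>
    #include <semaphore.h>
    #include <sys/mman.h>
    #include <unistd.h>
    #include <string.h>

    int main(void)
    {
        int fd = shm_open("/demo_shm", O_CREAT | O_RDWR, 0666);
        ftruncate(fd, 4096);
        char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);

        sem_t *ready = sem_open("/demo_ready", O_CREAT, 0666, 0);

        strcpy(p, "new data");      /* 1. publish into the shared region */
        sem_post(ready);            /* 2. wake a reader blocked in sem_wait() */

        sem_close(ready);
        munmap(p, 4096);
        close(fd);
        return 0;
    }

A reader does the mirror image: it calls sem_wait() on the same semaphore and only then touches the shared region.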

A minor difference is that fifos are visible directly in the file system while shared memory regions need special tools like ipcs for their management in case you e.g. create a shared memory segment but your app dies and doesn't clean up after itself (same goes for semaphores and many other synchronization mechanisms which you might need to use together with shared memory).

Shared memory also gives you more control over buffering and resource use: within the limits allowed by the OS, it's you who decides how much memory to allocate and how to use it. With pipes, the OS controls things automatically, so once again you lose some flexibility but are relieved of much work.

Summary of the most important points: pipes for one-to-one communication, less coding, and letting the OS handle things; shared memory for many-to-many communication and more manual control, but at the cost of more work and harder debugging.

 
