ELEC230 Robotic Systems

Department of Electrical Engineering and Electronics

ELEC230 Robotic Systems - Assignment 3

Roomba Vacuum Cleaner Gazebo Simulation in C++

Module: ELEC230

Coursework name: Assignment 3

Component weight: 25%

Semester: 2

HE Level: 5

Lab location: Bld 502 PC rooms B and C

Work: Individual

Timetabled time: 28 hours, including all lab time and lectures

Suggested private study: 10 hours, including report-writing

Time spent: Please record how long it took you at https://bit.ly/EEECARES

Assessment method: Individual, formal, word-processed report in the format instructed in the Marking Criteria, along with the other files listed in "What to Hand In".

Submission format: Online via CANVAS.

Plagiarism / collusion: Standard University penalties and procedures apply for plagiarism and collusion.

Submission deadline: Sunday 24th March 2024, 23:59

Late submission: Standard University penalties apply.

Resit opportunity: August resit period (if the total module is failed).

Marking policy: Marked and moderated independently.

Anonymous marking: Yes

Feedback: Via comments on your CANVAS submission, online.

Learning outcomes

•    (BH1) Understanding Linux and the mechanisms provided for multi-tasking.

•    (BH2) Understanding the features of an Object-Oriented Programming language and the ability to code in C++.

•    (BH4) Understanding the ROS system.

•    More specifically: understanding the package management system in ROS, and understanding and implementing robot simulation using ROS, Turtlesim and Gazebo.

Marking Criteria

The marking criteria against which submissions will be marked are given below. A scale from Fail through Good and Very Good to Excellent will be applied to each.

Component: Files submitted

•    Report: Title page present? (1 pt)

•    Report: Are figures captioned? (1 pt)

•    Report: Table of contents present? (1 pt)

•    Code: CMakeLists.txt file submitted and compiles? (2 pts)

•    Code: Launch file submitted and runs without error? (2 pts)

•    Code: package.xml file submitted, with description? (2 pts)

Component: Report

•    Background section (focusing on ROS, Gazebo, etc.) (5 pts)

•    Screenshots provided? (Do they show something useful, and are they presented systematically?) (15 pts)

•    Discussion (showing understanding of what the commands are doing, of the different files, of the logic the robot follows, etc.) (20 pts)

Component: Code

•    Code commented (more than the original comments) (5 pts)

•    Does the robot move? (3 pts)

•    Does the robot stop when it detects an obstacle? (5 pts)

•    Does the robot rotate intelligently after it stops? (turning both ways will result in higher pts) (30 pts)

•    Does the robot navigate for a prolonged time without crashing? (8 pts)

Penalty points, e.g. going over the page count, or amendments having to be made to get the code to compile and run (0 pts)

Total: 100 pts

Aim of this assignment

The aim of this assignment is to give you a way into writing software for controlling a (simulated) robot using the Robot Operating System (ROS).

In this assignment you will create a ROS node to drive the robot around with a simple wanderer algorithm, much like a ‘Roomba’ robot vacuum cleaner. The robot should move forward until it reaches an obstacle, then rotate on the spot until the way ahead is clear, then move forward again and repeat. While this can be achieved with a robot that rotates in one direction only, e.g. left, higher marks will be awarded for submissions that rotate in both directions. The robot in the simulation should be able to move around its environment for a prolonged period (3-5 minutes) without colliding with an object.

Rules and Requirements of your Assignment

Your code must be executable in a Linux environment without any modification. Code submitted which requires modification to run will result in penalty points. To work on and execute the assignment, make sure you have a ROS 2 distribution (i.e., Foxy) with Gazebo installed. For more information, please have a look at the lecture notes and lab sessions on CANVAS.

Before you start this assignment, you should make sure that you understand basic ROS concepts such as compiling and running a node, both from the command line and through launch files. Finally, make sure that the turtlebot_gazebo packages are installed on your machine; these were presented in the Week 4 lecture notes.
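A quick way to check this (assuming your ROS 2 environment is already sourced; the exact package names depend on your installation) is to list the installed packages and filter for TurtleBot:

$ ros2 pkg list | grep turtlebot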

The Assignment

1.   Pull the code from GitHub into the src folder of your ROS 2 workspace:

$ cd src

$ git clone https://github.com/LJDevlin/Assignment3_ELEC230.git

2.  Your task is now to adapt the supplied code to achieve the aim described above. The code should implement a simple algorithm (a minimal sketch of this logic is given after the list below):

•    If the robot is sufficiently close to an obstacle in front of it, rotate it in the direction that is freer of obstacles (i.e., if there is an obstacle on the robot’s right, it should turn left) until the way ahead is clear;

•    If there is no obstacle blocking its path, move forward as a default.
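For illustration only, the sketch below shows one possible shape of this decision logic. It is not the supplied starter code: the node and topic names ("wanderer", "scan", "cmd_vel"), the 0.5 m threshold and the assumed LiDAR index layout (index 0 straight ahead, indices increasing anticlockwise, one beam per degree) are all assumptions that you should check against the code you are given and adapt.

// Minimal sketch only - names, topics, threshold and index layout are assumptions.
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <memory>

#include "geometry_msgs/msg/twist.hpp"
#include "rclcpp/rclcpp.hpp"
#include "sensor_msgs/msg/laser_scan.hpp"

class Wanderer : public rclcpp::Node
{
public:
  Wanderer() : Node("wanderer")
  {
    cmd_pub_ = create_publisher<geometry_msgs::msg::Twist>("cmd_vel", 10);
    scan_sub_ = create_subscription<sensor_msgs::msg::LaserScan>(
      "scan", rclcpp::SensorDataQoS(),
      [this](sensor_msgs::msg::LaserScan::SharedPtr msg) { on_scan(*msg); });
  }

private:
  // Smallest valid reading in the index range [begin, end) of the scan.
  static float sector_min(const sensor_msgs::msg::LaserScan & scan,
                          std::size_t begin, std::size_t end)
  {
    float m = scan.range_max;
    for (std::size_t i = begin; i < end && i < scan.ranges.size(); ++i) {
      const float r = scan.ranges[i];
      if (std::isfinite(r) && r > scan.range_min) {
        m = std::min(m, r);
      }
    }
    return m;
  }

  void on_scan(const sensor_msgs::msg::LaserScan & scan)
  {
    // Assumed layout: index 0 points straight ahead, indices increase
    // anticlockwise, one beam per degree (360 beams in total).
    const float front = std::min(sector_min(scan, 0, 20), sector_min(scan, 340, 360));
    const float left  = sector_min(scan, 20, 90);
    const float right = sector_min(scan, 270, 340);

    geometry_msgs::msg::Twist cmd;
    if (front > kObstacleDistance) {
      cmd.linear.x = 0.2;                           // way ahead is clear: drive forward
    } else {
      cmd.angular.z = (left > right) ? 0.5 : -0.5;  // stop and turn towards the freer side
    }
    cmd_pub_->publish(cmd);
  }

  static constexpr float kObstacleDistance = 0.5F;  // metres; tune in simulation

  rclcpp::Publisher<geometry_msgs::msg::Twist>::SharedPtr cmd_pub_;
  rclcpp::Subscription<sensor_msgs::msg::LaserScan>::SharedPtr scan_sub_;
};

int main(int argc, char ** argv)
{
  rclcpp::init(argc, argv);
  rclcpp::spin(std::make_shared<Wanderer>());
  rclcpp::shutdown();
  return 0;
}

Turning towards the side whose closest reading is further away is what distinguishes an intelligent rotation from a fixed, left-only turn; a left-only version would simply use a constant sign for angular.z.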

3.  Verify that your implementation works by running a launch file (one has been supplied with the code on CANVAS) and watching the robot in the simulator to check that it operates as intended.

4.  Once you are happy with your code, change the package.xml file to give a brief description of what the package does.

5.  Write a short document which introduces the assignment, describes what you did, what worked and what did not work, and explains the logic your robot follows as it navigates the environment. The full requirements for the report are given below.

The Rules

1.   Make sure that your code is tidy and well-commented. Comments should go beyond what has already been supplied.

2.  This goes without saying: you should do this lab work on your own. All the work you turn in should be yours, and not done in collaboration with anyone else. If you use any external sources of inspiration, other than https://docs.ros.org/en/foxy/index.html, then let us know in a README file.

3.   If you have taken a VirtualBox from a demonstrator, then make sure that screenshots include a unique username. ELEC230 is not a valid username. Screenshots should clearly show the time and date.

4.   ROS1 submissions will not be accepted.

5.   Source code must be written in C++.

6.   Page limits must be adhered to, as specified in “What to hand in” below. Penalty points will be applied for going over the limit.

What to Hand In

You should hand in everything that someone else needs to run your code. For this assignment, that means:

•    Your source code. This should be adequately commented, so that each distinct part of the code is clearly explained.

•    the manifest file (package.xml),

•    CMakeLists.txt,

•    launch files.

The structure of the code should be the one that exists in your Linux environment, such that the marker will be able to run your code without moving your files around. For that reason, please compress the files as they exist in your <ROS_WS>/src folder.
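For example (one possible way to do this, assuming your workspace directory is ~/ros2_ws and the zip utility is installed):

$ cd ~/ros2_ws

$ zip -r assignment3_src.zip src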

You should also include (alongside your compressed file and not inside it):

•    A Word document including:

o    A cover sheet with title, background, and academic integrity declaration.

o   A discussion covering: an introduction; the procedures followed; what worked and what did not; the testing carried out; and the logic your robot follows.

This should be 4 pages maximum. Penalty points will be applied if it goes beyond this.

o   The rest of your Word document should include screenshots as described in the Marking Criteria. These should be presented as numbered figures, referred to (where relevant) in your discussion.

You should not hand in executable files, e.g. from your install or build folders, or any other files that can be regenerated.

Your code should be able to run after writing in a terminal:

$ colcon build
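Depending on your setup, you will typically also need to source the workspace overlay before launching (this assumes a standard colcon workspace layout):

$ source install/setup.bash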

Then, after navigating to your launch file:

$ ros2 launch elec230.launch.py

This is assuming you haven’t changed the name of the launch file we have already supplied to you.

The two things you need to submit are:

1) Your ROS package in a compressed format.

2) A report in the form of a Word document.
