The boot.img file in Rising

This article walks through the boot configuration files inside the Rising Linux boot image and their contents, including the boot prompts in boot.msg, the hard disk type hints in f2, and the kernel boot parameters in syslinux.cfg.


  boot.img
  |----boot.msg      prompt messages shown during boot
  |----f2            hard disk type hint file
  |----german.kbd    German keyboard layout file
  |----ldlinux.sys   the SYSLINUX loader file that boots Linux
  |----logo.16       boot splash image file
  |----miniroot.gz   a compressed image containing linuxrc, used to build the RamDisk
  |----syslinux.cfg  Linux kernel boot parameter configuration file
  |----vmlinuz       the Linux kernel
------------------------------------------------------------------------------
boot.msg:
//----------------------------------------------------------------------------
//RISING LINUX BOOT V1.0 RELEASE: 2005-07-04
//Copyright (c) 1998-2006 Beijing Rising Technology Corp.,Ltd. All rights reserved
//Press Enter to boot without SCSI disk support, or type scsi to boot with SCSI
//disk support.
//The computer will boot without SCSI disk support in 30 seconds.
//BOOT MODE:
//normal boot without SCSI support
//scsi boot with SCSI support
//-----------------------------------------------------------------------------
These are Rising's boot prompts. boot.msg is a plain ASCII file and can be edited directly with vi. If your logo.16 is 640x400 pixels, you can fit four lines of text in boot.msg.
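For reference, SYSLINUX message files such as boot.msg support a few control codes: a literal Ctrl-L (0x0C) clears the screen, and a literal Ctrl-X (0x18) followed by a filename displays an LSS16 graphic such as logo.16. Assuming Rising's floppy uses stock SYSLINUX behavior, a minimal boot.msg could look like the sketch below (`^L` and `^X` stand for the literal control characters, not two printable characters):

```
^L^Xlogo.16
RISING LINUX BOOT V1.0 RELEASE: 2005-07-04
Press Enter to boot without SCSI disk support, or type scsi to boot with SCSI
disk support.
```

This matches the note above: with a 640x400 logo.16 there is room for roughly four lines of text under the image.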


F2:
//------------------------------------------------------------------------------
//normal boot without SCSI support
//scsi boot with SCSI support
//hard disk type information
//------------------------------------------------------------------------------

syslinux.cfg:
DEFAULT vmlinuz
APPEND ramdisk_size=100000 init=/etc/init lang=us apm=power-off hda=scsi hdb=scsi hdc=scsi hdd=scsi hde=scsi hdf=scsi hdg=scsi hdh=scsi vga=0x314 initrd=miniroot.gz nomce quiet BOOT_IMAGE=rising pnpbios=off console=/dev/null
TIMEOUT 300

PROMPT 1
DISPLAY boot.msg
F1 boot.msg
F2 f2
LABEL normal
KERNEL vmlinuz
APPEND ramdisk_size=100000 init=/etc/init lang=us apm=power-off hda=scsi hdb=scsi hdc=scsi hdd=scsi hde=scsi hdf=scsi hdg=scsi hdh=scsi vga=0x314 initrd=miniroot.gz nomce quiet BOOT_IMAGE=rising pnpbios=off console=/dev/null scsi=no
LABEL scsi
KERNEL vmlinuz
APPEND ramdisk_size=100000 init=/etc/init lang=us apm=power-off hda=scsi hdb=scsi hdc=scsi hdd=scsi hde=scsi hdf=scsi hdg=scsi hdh=scsi vga=0x314 initrd=miniroot.gz nomce quiet BOOT_IMAGE=rising pnpbios=off console=/dev/null scsi=yes
//----------------------------------------------------------------------
//default vmlinuz
//append init=/etc/init lang=us boot_image=rising initrd=miniroot.gz
//"rising" (the BOOT_IMAGE value) is probably the antivirus program on the CD
//-----------------------------------------------------------------------
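To read the configuration above: DEFAULT names the entry booted when the user just presses Enter; TIMEOUT is in tenths of a second, so 300 is the 30 seconds mentioned in boot.msg; PROMPT 1 forces the boot: prompt to appear; DISPLAY and F1/F2 bind the message files to the screen and function keys; and each LABEL defines an entry the user can type at the prompt. A hypothetical third entry could be added like this (the `rescue` label and the `single` parameter are illustrative, not part of the original image):

```
LABEL rescue
KERNEL vmlinuz
APPEND ramdisk_size=100000 init=/etc/init initrd=miniroot.gz single
```

Typing `rescue` at the boot: prompt would then boot the same kernel and ramdisk with the extra parameter appended.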


german.kbd:
//German keyboard layout (keymap) file

ldlinux.sys:
//the SYSLINUX loader file that boots Linux

vmlinuz:
//the Linux kernel

logo.16:
The logo.16 file is the boot splash image.

miniroot.gz:
A gzip-compressed ramdisk image containing the mini root filesystem (including linuxrc).
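A sketch of how miniroot.gz can be inspected, assuming it is a plain gzip-compressed ramdisk image (typical for syslinux-era boot floppies). The first two lines fabricate a stand-in file so the sketch runs anywhere; on the real floppy, skip them and use the actual miniroot.gz:

```shell
# Stand-in for the real miniroot.gz (remove these two lines on a real system)
printf 'linuxrc placeholder\n' > miniroot_demo
gzip -c miniroot_demo > miniroot.gz

# Decompress the ramdisk image
gunzip -c miniroot.gz > miniroot.img
echo "uncompressed size: $(wc -c < miniroot.img) bytes"

# On a real system (as root), the image can then be loop-mounted to see
# linuxrc and the rest of the mini root filesystem:
#   mount -o loop miniroot.img /mnt && ls /mnt
```

The loop-mount step requires root and a real image; the gunzip step works anywhere.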

Copyright notice: this is the author's original article and may not be reproduced without permission.
