YOLO Face Detection
The example for this section is located at 百度云盘资料\野火K210 AI视觉相机\1-教程文档_例程源码\例程\10-KPU\yolo_face_detect\yolo_face_detect.py
Example
```python
import sensor, image, time, lcd
from maix import KPU
import gc

lcd.init()
sensor.reset(dual_buff=True)            # Reset and initialize the sensor. It will
                                        # run automatically, call sensor.run(0) to stop
sensor.set_pixformat(sensor.RGB565)     # Set pixel format to RGB565 (or GRAYSCALE)
sensor.set_framesize(sensor.QVGA)       # Set frame size to QVGA (320x240)
sensor.skip_frames(time=1000)           # Wait for settings to take effect.
clock = time.clock()                    # Create a clock object to track the FPS.

od_img = image.Image(size=(320, 256))

anchor = (0.893, 1.463, 0.245, 0.389, 1.55, 2.58, 0.375, 0.594, 3.099, 5.038, 0.057, 0.090, 0.567, 0.904, 0.101, 0.160, 0.159, 0.255)
kpu = KPU()
kpu.load_kmodel("/sd/KPU/yolo_face_detect/yolo_face_detect.kmodel")
kpu.init_yolo2(anchor, anchor_num=9, img_w=320, img_h=240, net_w=320, net_h=256, layer_w=10, layer_h=8, threshold=0.7, nms_value=0.3, classes=1)

while True:
    # print("mem free:", gc.mem_free())
    clock.tick()                        # Update the FPS clock.
    img = sensor.snapshot()
    a = od_img.draw_image(img, 0, 0)
    od_img.pix_to_ai()
    kpu.run_with_output(od_img)
    dect = kpu.regionlayer_yolo2()
    fps = clock.fps()
    if len(dect) > 0:
        print("dect:", dect)
        for l in dect:
            a = img.draw_rectangle(l[0], l[1], l[2], l[3], color=(0, 255, 0))
    a = img.draw_string(0, 0, "%2.1ffps" % (fps), color=(0, 60, 128), scale=2.0)
    lcd.display(img)
    gc.collect()

kpu.deinit()
```
Example Explanation
```python
import sensor, image, time, lcd
from maix import KPU
import gc
```
These modules provide support for the camera (sensor), image processing, timing, the LCD display, KPU inference, and memory management (garbage collection).
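The commented-out print("mem free:", gc.mem_free()) line in the main loop hints at why gc is imported: the KPU model and image buffers put pressure on the MicroPython heap. The snippet below is a minimal, standalone sketch of that memory check using standard MicroPython gc calls; it is not part of the example itself.

```python
import gc

print("free before:", gc.mem_free())   # free MicroPython heap, in bytes
gc.collect()                           # force a garbage-collection pass
print("free after:", gc.mem_free())    # typically higher after collecting
```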
```python
lcd.init()
sensor.reset(dual_buff=True)            # Reset and initialize the sensor. It will
                                        # run automatically, call sensor.run(0) to stop
sensor.set_pixformat(sensor.RGB565)     # Set pixel format to RGB565 (or GRAYSCALE)
sensor.set_framesize(sensor.QVGA)       # Set frame size to QVGA (320x240)
sensor.skip_frames(time=1000)           # Wait for settings to take effect.
clock = time.clock()                    # Create a clock object to track the FPS.
```
Initialize the LCD and configure the camera: enable dual buffering, set the pixel format and frame size, and skip a few frames so the settings take effect. A clock object is also created to track the frame rate (FPS).
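If the camera or LCD setup is in doubt, it can help to run just this front end without the KPU. The following is a minimal preview loop assembled only from calls already used in this example, shown here as an optional sanity check rather than part of the tutorial code:

```python
# Minimal camera-to-LCD preview (no KPU inference), useful to verify the
# sensor and display path before adding the face-detection model.
import sensor, image, time, lcd

lcd.init()
sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.QVGA)       # 320x240
sensor.skip_frames(time=1000)
clock = time.clock()

while True:
    clock.tick()
    img = sensor.snapshot()
    img.draw_string(0, 0, "%2.1ffps" % clock.fps(), color=(0, 60, 128), scale=2.0)
    lcd.display(img)
```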
```python
od_img = image.Image(size=(320, 256))

anchor = (0.893, 1.463, 0.245, 0.389, 1.55, 2.58, 0.375, 0.594, 3.099, 5.038, 0.057, 0.090, 0.567, 0.904, 0.101, 0.160, 0.159, 0.255)
kpu = KPU()
kpu.load_kmodel("/sd/KPU/yolo_face_detect/yolo_face_detect.kmodel")
kpu.init_yolo2(anchor, anchor_num=9, img_w=320, img_h=240, net_w=320, net_h=256, layer_w=10, layer_h=8, threshold=0.7, nms_value=0.3, classes=1)
```
Create an image object od_img to serve as the network input (320x256, matching the model's input size), load the pre-trained face-detection kmodel from the SD card, and initialize the YOLOv2 network with the anchors, detection threshold, NMS value and output-layer dimensions.
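A note on the init_yolo2() arguments: anchor holds 9 (width, height) pairs in YOLOv2 grid-cell units, and layer_w/layer_h are the output grid size, i.e. the network input divided by the backbone's 32x downsampling (assumed here, as is standard for YOLOv2). The sketch below only re-derives those numbers for illustration; it is not part of the example:

```python
net_w, net_h = 320, 256
stride = 32                              # assumed YOLOv2 downsampling factor
print(net_w // stride, net_h // stride)  # 10 8  -> layer_w, layer_h

anchor = (0.893, 1.463, 0.245, 0.389, 1.55, 2.58, 0.375, 0.594, 3.099,
          5.038, 0.057, 0.090, 0.567, 0.904, 0.101, 0.160, 0.159, 0.255)
pairs = [(anchor[i], anchor[i + 1]) for i in range(0, len(anchor), 2)]
print(len(pairs), "anchor (w, h) pairs")  # 9 -> anchor_num=9
```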
```python
while True:
    # print("mem free:", gc.mem_free())
    clock.tick()                        # Update the FPS clock.
    img = sensor.snapshot()
    a = od_img.draw_image(img, 0, 0)
    od_img.pix_to_ai()
    kpu.run_with_output(od_img)
    dect = kpu.regionlayer_yolo2()
    fps = clock.fps()
    if len(dect) > 0:
        print("dect:", dect)
        for l in dect:
            a = img.draw_rectangle(l[0], l[1], l[2], l[3], color=(0, 255, 0))
    a = img.draw_string(0, 0, "%2.1ffps" % (fps), color=(0, 60, 128), scale=2.0)
    lcd.display(img)
    gc.collect()
```
Capture a frame from the camera.
Draw the captured frame onto od_img and call pix_to_ai() to convert it into the format the neural network expects.
Run the KPU model to perform face detection.
If any faces are detected, draw a green rectangle around each one on the original image.
Draw the current FPS on the image.
Display the image on the LCD.
Run garbage collection to free memory. A sketch of how the returned boxes can be used follows below.
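The drawing code only relies on indices 0-3 of each detection, i.e. the box's x, y, width and height in the 320x240 frame. As a hedged extension (not in the original example), the largest face could be cropped out for further processing; img.cut() is the crop call used in other MaixPy KPU examples and is treated as an assumption here:

```python
# Sketch: inside the main loop, after dect = kpu.regionlayer_yolo2()
if len(dect) > 0:
    largest = max(dect, key=lambda l: l[2] * l[3])   # pick the biggest box by area
    x, y, w, h = largest[0], largest[1], largest[2], largest[3]
    face_img = img.cut(x, y, w, h)                   # crop the face region (assumed API)
    print("largest face at", x, y, "size", w, "x", h)
    del face_img                                     # release the crop before the next frame
    gc.collect()
```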
```python
kpu.deinit()
```
After the loop ends, release the KPU resources.
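Because the loop above runs forever, kpu.deinit() is only reached when the script is stopped or an exception is raised. A hedged variant of the same loop wraps it in try/finally so the loaded model is always released:

```python
# Sketch: same loop, restructured so deinit() always runs on exit/interrupt.
try:
    while True:
        clock.tick()
        img = sensor.snapshot()
        od_img.draw_image(img, 0, 0)
        od_img.pix_to_ai()
        kpu.run_with_output(od_img)
        for l in kpu.regionlayer_yolo2():
            img.draw_rectangle(l[0], l[1], l[2], l[3], color=(0, 255, 0))
        lcd.display(img)
        gc.collect()
finally:
    kpu.deinit()    # release the loaded kmodel and KPU buffers
```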