Rubik Pi3 YOLO tracking

Hi, I'm working on YOLO tracking with the Rubik Pi3.
I have written a Python script that uses an appsink to extract the labels along with the bounding boxes and puts them in a list in the format: . Now I am wondering how I can get the live video output with the bounding boxes from my Python script drawn on it, because currently my pipeline only gives me text. I can't get the pipeline to produce both the video output and the text output from qtimlvdetection simultaneously. I was thinking of doing the tracking entirely in Python code, unless there is a plugin I can insert after detection in the pipeline?

I’ve attached my pipeline below:

pipeline_str = """
qtiqmmfsrc name=camsrc !
video/x-raw(memory:GBM),format=NV12,width=640,height=480,framerate=30/1,compression=ubwc !
queue ! qtimlvconverter ! queue !
qtimlsnpe delegate=dsp model=/opt/yolov8n_quantized.dlc layers="</model.22/Mul_2, /model.22/Sigmoid>" !
queue ! qtimlvdetection threshold=51.0 results=10 module=yolov8 labels=/opt/yolov8_v2.labels !
capsfilter caps="text/x-raw, format=utf8" !
appsink name=myappsink emit-signals=true sync=true
"""
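For context, a simplified sketch of the appsink side of the script (the real script parses the text into (label, box) tuples; the exact text payload that qtimlvdetection emits can vary between SDK versions, so the print here is just illustrative):

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst, GLib

Gst.init(None)

pipeline = Gst.parse_launch(pipeline_str)
appsink = pipeline.get_by_name("myappsink")

def on_new_sample(sink):
    # Pull the text/x-raw buffer that qtimlvdetection produced for this frame
    sample = sink.emit("pull-sample")
    buf = sample.get_buffer()
    ok, info = buf.map(Gst.MapFlags.READ)
    if not ok:
        return Gst.FlowReturn.ERROR
    try:
        text = info.data.decode("utf-8")
        print(text)  # one text blob per frame: labels plus bounding boxes
    finally:
        buf.unmap(info)
    return Gst.FlowReturn.OK

appsink.connect("new-sample", on_new_sample)
pipeline.set_state(Gst.State.PLAYING)
GLib.MainLoop().run()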

You should try qtimetamux and qtioverlay: tee the camera stream, run detection on one branch, and feed the detection results into qtimetamux so qtioverlay can draw them on the video branch.
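Something with roughly this shape, reusing your elements (an untested sketch following the layout of the Qualcomm sample pipelines; the sink element and exact properties may need adjusting for your board):

pipeline_str = """
qtiqmmfsrc name=camsrc !
video/x-raw(memory:GBM),format=NV12,width=640,height=480,framerate=30/1,compression=ubwc !
tee name=split
split. ! queue ! qtimetamux name=metamux ! queue ! qtioverlay ! queue ! waylandsink sync=false
split. ! queue ! qtimlvconverter ! queue !
qtimlsnpe delegate=dsp model=/opt/yolov8n_quantized.dlc layers="</model.22/Mul_2, /model.22/Sigmoid>" !
queue ! qtimlvdetection threshold=51.0 results=10 module=yolov8 labels=/opt/yolov8_v2.labels !
queue ! metamux.
"""

The idea: the tee splits the camera stream, the detection branch feeds its results into qtimetamux, and the muxer reattaches them to the untouched video branch so qtioverlay can draw the boxes before display.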

Okay, thanks. So far I've managed to reprocess the detections in a Python script: the terminal prints the coordinates of the tracked objects, while the output video shows the detections coming from the qtimlvdetection plugin. Did you manage to override the output video so it displays your own bounding boxes?

After the detection branch, only one of text/x-raw or video/x-raw is possible, so I overlaid the text/x-raw results onto the original raw image myself.
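In essence, once the text results are parsed into pixel-coordinate boxes, the drawing is only a couple of OpenCV calls. A minimal sketch, assuming the frame is already available as a numpy array (e.g. pulled from a second appsink on a tee'd video branch) and the detections come as (label, x, y, w, h) tuples:

import cv2

def draw_detections(frame, detections):
    # Draw each detection as a green rectangle with its label above it.
    # detections: iterable of (label, x, y, w, h) in pixel coordinates.
    for label, x, y, w, h in detections:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.putText(frame, label, (x, max(y - 5, 12)),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 1)
    return frame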

I’m curious what AI model you used.

I'm using a YOLOv8 model exported to the .dlc format. Could you send me your pipeline so I can get a deeper understanding?