Hi, I want to benchmark the inference time of various YOLO models on the Rubik Pi 3, but I'm having trouble isolating the actual inference time. I send detections via appsink to a Python script that estimates inference time from the time difference between consecutive detection buffers. The problem is that without setting a framerate in the pipeline I always measure 33 ms, regardless of the model size; and when I do increase the framerate, the measured time only grows by a few ms. Is there a way to get the actual inference time from the buffer information so I can calculate FPS?
My pipeline is below:
```
qtimetamux name=metamux ! queue ! qtioverlay ! queue ! waylandsink sync=true fullscreen=false
qtiqmmfsrc name=qmmf camera=0 !
video/x-raw(memory:GBM),format=NV12 !
tee name=split ! queue ! metamux.
split. ! queue name=detect_q ! qtimlvconverter ! queue !
qtimlsnpe delegate=dsp model={args.model_path} layers="</model.22/Mul_2, /model.22/Sigmoid>" !
queue ! qtimlvdetection name=detection threshold={args.detection_threshold * 100} results={args.max_detections} module=yolov8 labels={args.labels_path} !
text/x-raw, format=utf8 !
queue ! tee name=split2
split2. ! queue ! metamux.
split2. ! queue ! appsink name=sink emit-signals=true sync=true drop=false
```
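For context, the per-buffer timing in my Python script boils down to the calculation below (a simplified, self-contained sketch; in the real script the nanosecond timestamps come from `sample.get_buffer().pts` inside the appsink `new-sample` callback, and the function names here are just illustrative):

```python
def intervals_ms(pts_ns):
    """Inter-buffer intervals in milliseconds from a list of buffer PTS values (ns)."""
    return [(b - a) / 1e6 for a, b in zip(pts_ns, pts_ns[1:])]

def fps_from_intervals(intervals):
    """Average FPS implied by a list of inter-buffer intervals (ms)."""
    avg_ms = sum(intervals) / len(intervals)
    return 1000.0 / avg_ms

# Example: buffers arriving every 33 ms, i.e. paced by a 30 fps camera clock,
# which is exactly the constant 33 ms I see regardless of model size.
pts = [0, 33_000_000, 66_000_000, 99_000_000]
print(intervals_ms(pts))        # [33.0, 33.0, 33.0]
```

As the example shows, this measures the buffer arrival cadence, which in my setup seems to be dominated by the camera/clock pacing rather than by the model's compute time.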
Greetings