Can't use QNN NPU

I'm trying to use the RUBIK Pi's NPU.

So I am following this page:

  • CPU
    • It works well:
      INFO: Created TensorFlow Lite XNNPACK delegate for CPU.
      Found face: x=120, y=186, w=62, h=79, score=0.8306506276130676
      Found face: x=311, y=125, w=66, h=81, score=0.8148472309112549
      Found face: x=424, y=173, w=64, h=86, score=0.8093323111534119

Inference took (on average): 32.0ms. per image
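As a side note, the detector lines above follow a simple key=value pattern, so they are easy to post-process. A minimal sketch (the line format is taken directly from the log above; the parser itself is my own illustration, not part of any tutorial):

```python
import re

# Pattern matching the detector's log lines shown above, e.g.
# "Found face: x=120, y=186, w=62, h=79, score=0.8306506276130676"
LINE_RE = re.compile(
    r"Found face: x=(\d+), y=(\d+), w=(\d+), h=(\d+), score=([0-9.]+)"
)

def parse_faces(log_text):
    """Return a list of (x, y, w, h, score) tuples from detector log output."""
    faces = []
    for m in LINE_RE.finditer(log_text):
        x, y, w, h = (int(g) for g in m.groups()[:4])
        faces.append((x, y, w, h, float(m.group(5))))
    return faces

log = """INFO: Created TensorFlow Lite XNNPACK delegate for CPU.
Found face: x=120, y=186, w=62, h=79, score=0.8306506276130676
Found face: x=311, y=125, w=66, h=81, score=0.8148472309112549
Found face: x=424, y=173, w=64, h=86, score=0.8093323111534119"""

faces = parse_faces(log)
print(len(faces))    # 3 detections
print(faces[0][:4])  # (120, 186, 62, 79)
```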

  • --use-qnn

It fails with a lot of logs:

Initializing HtpProvider
Specified config SOC, ignoring on real target
/prj/qct/webtech_scratch20/mlg_user_admin/qaisw_source_repo/rel/qairt-2.39.0/release/snpe_src/avante-tools/prebuilt/dsp/hexagon-sdk-5.4.0/ipc/fastrpc/rpcmem/src/rpcmem_android.c:38:dummy call to rpcmem_init, rpcmem APIs will be used from libxdsprpc
[1] [4294967295] has incorrect Value 1895298384, expected equal to 0.
QnnBackend_validateOpConfig failed 3110
Failed to validate op node_id_0_op_type_Conv2d_op_count_0 with error 0xc26
Logs will be sent to the system’s default channel
[1] [4294967295] has incorrect Value 1895298384, expected equal to 0.
QnnBackend_validateOpConfig failed 3110
Failed to validate op node_id_0_op_type_Conv2d_op_count_0 with error 0xc26
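One small observation about the log above: the two error numbers are the same value in different bases, so `QnnBackend_validateOpConfig failed 3110` and `error 0xc26` are reporting a single underlying validation error code, not two different failures. A quick check:

```python
# The log prints the same QNN validation error code twice:
# once in decimal ("failed 3110") and once in hex ("error 0xc26").
assert 0xc26 == 3110
print(hex(3110))  # 0xc26
```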

It's not working even though I just followed the tutorial.
Does anyone know about this?

May I ask whether you are using Ubuntu Desktop or Ubuntu Server?

Here are the specifications of my RUBIK Pi 3, still on its original Ubuntu 24.04 Server image, with the issue unsolved:

  • Linux ubuntu 6.8.0-1056-qcom #57-Ubuntu SMP PREEMPT_DYNAMIC Tue Oct 21 15:36:21 UTC 2025 aarch64 aarch64 aarch64 GNU/Linux
  • PRETTY_NAME="Ubuntu 24.04.3 LTS"
    NAME="Ubuntu"
    VERSION_ID="24.04"
    VERSION="24.04.3 LTS (Noble Numbat)"
    VERSION_CODENAME=noble
    ID=ubuntu
    ID_LIKE=debian
    UBUNTU_CODENAME=noble
    LOGO=ubuntu-logo
  • gcc (Ubuntu 13.3.0-6ubuntu2~24.04) 13.3.0

(.venv) ubuntu@ubuntu:~/yolov11$ qnn-platform-validator --backend dsp --testBackend
PF_VALIDATOR: DEBUG: Calling PlatformValidator->setBackend
PF_VALIDATOR: DEBUG: Calling PlatformValidator->isBackendHardwarePresent
PF_VALIDATOR: DEBUG: Calling PlatformValidator->isBackendAvailable
PF_VALIDATOR: DEBUG: Should be able to access atleast one of libraries from : libc.so.6
PF_VALIDATOR: DEBUG: dlOpen successfull for library : libc.so.6
PF_VALIDATOR: DEBUG: Should be able to access atleast one of libraries from : libcdsprpc.so
PF_VALIDATOR: DEBUG: dlOpen successfull for library : libcdsprpc.so
Backend DSP Prerequisites: Present.
PF_VALIDATOR: DEBUG: Calling PlatformValidator->backendCheck
PF_VALIDATOR: DEBUG: Should be able to access atleast one of libraries from : libc.so.6
PF_VALIDATOR: DEBUG: dlOpen successfull for library : libc.so.6
PF_VALIDATOR: DEBUG: Should be able to access atleast one of libraries from : libcdsprpc.so
PF_VALIDATOR: DEBUG: dlOpen successfull for library : libcdsprpc.so
PF_VALIDATOR: DEBUG: Starting calculator test
PF_VALIDATOR: DEBUG: Loading sample stub: libQnnHtpV68CalculatorStub.so
PF_VALIDATOR: DEBUG: Successfully loaded DSP library - 'libQnnHtpV68CalculatorStub.so'. Setting up pointers.
PF_VALIDATOR: DEBUG: Success in executing the sum function
Unit Test on the backend DSP: Passed.
QNN is supported for backend DSP on the device.
*********** Results Summary ***********
Backend = DSP
{
Backend Hardware : Supported
Backend Libraries : Found
Library Version : Not Queried
Core Version : Not Queried
Unit Test : Passed
}
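The validator's "dlOpen successfull" lines are essentially `dlopen()` probes of the listed libraries. The same check can be sketched in a few lines of Python (library names are taken from the log above; this is my own illustration, not part of the QNN tooling, and `libcdsprpc.so` will only resolve on a device with the Qualcomm FastRPC runtime installed):

```python
import ctypes

def can_dlopen(name):
    """Return True if the shared library loads, mirroring the dlopen()
    probe that qnn-platform-validator performs for each prerequisite."""
    try:
        ctypes.CDLL(name)
        return True
    except OSError:
        return False

# libc should load on any Linux system; libcdsprpc.so is the Qualcomm
# FastRPC library and is only present where the DSP runtime is installed.
for lib in ("libc.so.6", "libcdsprpc.so"):
    print(lib, "->", "found" if can_dlopen(lib) else "missing")
```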

And I don't know if this is the right way to run the NPU, but when I use the Impulse platform it seems to work with the tutorial.

We have tested that the model runs successfully on our side. Could you kindly share the contents of the ~/aihub-npu directory on your device?
May I also ask which Python version you are using when running the model?

It doesn't have an ~/aihub-npu folder.
And the Python version is 3.12.3.

Should I flash a new version of the OS?

Double check that you have run sudo apt install libqnn-dev in order to make sure all QNN dependencies are installed.

Yes, it is installed.

May I confirm whether you followed the steps in the link below?

If not, please share the exact commands you ran or screenshots of the document you used for testing.
Alternatively, you may flash the latest release and repeat the test.

Flashing a new image and trying again worked! Thank you!

Good news! Thank you for your response. :smiling_face_with_three_hearts:
