Thank you for your response.
I also believe the issue is unlikely to be hardware-related. However, could you please confirm whether the CCI I2C on the RubikPi operates at 1.8 V? Additionally, there seems to be a device tree mismatch: on investigation I found that qcm6490-addons-idp.dts is currently being used, but given our setup one of the qcs6490 variants may be more appropriate. Could you advise which of the following should be used?
vsdall@vsdall-LOQ-15IRX9:~/work/rubik_pi3/kernel/arch/arm64/boot/dts/qcom$ ls qcs6490*
qcs6490-addons-rb3gen2-hsp.dts
qcs6490-addons-rb3gen2-ia-mezz.dts
qcs6490-addons-rb3gen2-ptz-mezz.dts
qcs6490-addons-rb3gen2-video-mezz.dts
qcs6490-addons-rb3gen2-vision-mezz-hsp.dts
qcs6490-addons-rb3gen2-vision-mezz.dts
qcs6490-addons-rb3gen2.dts
qcs6490-addons-rb3gen2.dtsi
qcs6490-rb3gen2.dts
Regarding your question about the necessity of using V4L2:
Our company designs image sensors. Although Qualcomm's built-in ISP is technically available, the cost of support makes it impractical for us, so we are limited to receiving raw Bayer data without any ISP post-processing, which leads to issues such as dark image corners and color inaccuracies. To resolve this, we have integrated our own ISP with the sensor; it supports MIPI continuous mode, hence my earlier question about that feature.
Furthermore, there are many existing camera modules on the market that already combine sensors with ISP, often implemented via FPGA, with outputs that are not always MIPI. As you may know, MIPI and LVDS interfaces have limitations in cable length, making them unsuitable for long-distance transmission. On the other hand, while Ethernet can support long-distance connections, it generally requires lossy compression even for resolutions like Full HD at 30fps to ensure reliable transmission. This is why analog interfaces such as AHD, TVI, and CVI are often used in such scenarios.
We provide a bridge chip (ASIC) that converts analog video (NTSC, PAL, AHD, TVI, CVI) into MIPI, and this is a frequent topic among Raspberry Pi users (not specifically related to our products). Typical issues such as "the board doesn't receive NTSC input properly" are usually due to mismatches in pixel formats like RGB24, RGB565, or YUV422.
In cases where the input video has already been color converted through an external ISP, we would like to ask:
Is V4L2 strictly required for handling the camera interface?
Or is there a feasible way to write a driver or process video at the application layer, directly displaying or saving frames without V4L2?
I am aware of tools like yavta, but I am not sure whether yavta works independently or still depends on V4L2 APIs under the hood. This uncertainty is one of the reasons we default to using V4L2.
Lastly, our company is working with partners on an AI-based data acquisition and analysis platform that requires input from at least five cameras, with OpenGL and OpenCL support for real-time parallel processing. Among several platforms, RubikPi is the best solution for its balanced combination of CPU, NPU, and 3D acceleration.
The camera modules for this project deliver pre-processed ISP output, which the RubikPi must receive. Unfortunately, as of now, we haven't been able to get even a single camera stream working, let alone five.
If the RubikPi is to be adopted more widely in smaller-scale video processing projects, such as those often handled with the Raspberry Pi, I strongly believe it will be essential to provide a path for receiving and processing camera input that does not rely on the internal Qualcomm ISP.
I apologize if this message was overly detailed or if I’ve overstepped in any way. Thank you very much for your time and consideration.