megi's PinePhone Development Log


2022-06-23: Further Pinephone Pro camera development

I am continuing to work on tuning the cameras, which is quite a complicated process. I've bought some specialized tools to help me with the tuning, like a calibrated color card with 24 common colors, but the regular tuning procedure would require specialized, expensive and bulky tooling, like light boxes with standardized lights and opal-based light diffusors, plus knowledge of color theory. I'm only interested in acquiring the last thing in that list.

Instead, I am trying to avoid specialized equipment where I can, using life hacks like an unfocused lens position to get some amount of diffusion, using the screen of another phone as an evenly distributed source of light of a particular „temperature", figuring out ways to calibrate against other calibrated cameras instead of against standardized light sources, etc. Throwing creativity at the problem instead of resources and money. The results may turn out to be subpar in general; this is not really something properly solvable in a cheap home lab, I think, now that I have some sense of how the real calibration is supposed to be performed. OTOH, I may be able to perfectly calibrate the phone cameras to my particular real-life indoor lighting, lol. This is all a big experiment and it remains to be seen what's achievable without specialized tools.

The output of the tuning process is lens shading correction (LSC) and shielded pixel calibration (SPC) data for the sensor, plus a lot of different data and parameters for the RK3399 ISP (image signal processor) for various scene types. Some data for the ISP is available to the Android camera stack in an XML file present inside the Android factory image. I very much doubt this data is useful on the Pinephone Pro as is, because when I use some parts of it, like LSC data for the ISP (yes, LSC can also be done at the ISP level), the resulting image still has very visible vignetting. Applying DPCC parameters as present in the XML file does not remove dead pixels at all, and so on. It's almost certain that no camera calibration was done for the factory image, so the data in the XML file is not terribly helpful, and the calibration process will need to be done again.

Normally, phone manufacturers include an EEPROM on the board and store calibration data for the sensor there. There's no such thing on the Pinephone Pro, so the calibration needs to be done at both the sensor and ISP levels from scratch. It doesn't help that, for example, lens shading effects differ based on the spectral characteristics of the light present in the scene. So it may be necessary to upload different parameters for shooting in low light, under daylight, under tungsten lighting, etc. LSC is applied early in the pipeline, so the rest of the calibration also needs to be re-done for each light source type.
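For illustration, LSC itself boils down to multiplying each pixel by a position-dependent gain, usually stored as a coarse grid that gets interpolated across the full frame. Here's a minimal sketch of that idea; the grid dimensions, fixed-point format and 10-bit raw assumption are mine, not the actual RK3399 ISP register layout:

```c
#include <stdint.h>

/* Hypothetical per-illuminant LSC table: a coarse grid of gains
 * (x1024 fixed point), bilinearly interpolated over the frame. */
#define LSC_GRID_W 17
#define LSC_GRID_H 13

struct lsc_table {
	uint16_t gain[LSC_GRID_H][LSC_GRID_W]; /* 1024 == gain of 1.0 */
};

static uint16_t lsc_gain_at(const struct lsc_table *t,
			    unsigned x, unsigned y,
			    unsigned width, unsigned height)
{
	/* Map the pixel position to grid coordinates (a real
	 * implementation would use fixed point, floats keep
	 * the sketch short). */
	float gx = (float)x / (width - 1) * (LSC_GRID_W - 1);
	float gy = (float)y / (height - 1) * (LSC_GRID_H - 1);
	unsigned x0 = (unsigned)gx, y0 = (unsigned)gy;
	unsigned x1 = x0 + 1 < LSC_GRID_W ? x0 + 1 : x0;
	unsigned y1 = y0 + 1 < LSC_GRID_H ? y0 + 1 : y0;
	float fx = gx - x0, fy = gy - y0;

	/* Bilinear interpolation between the four surrounding gains. */
	float top = t->gain[y0][x0] * (1 - fx) + t->gain[y0][x1] * fx;
	float bot = t->gain[y1][x0] * (1 - fx) + t->gain[y1][x1] * fx;
	return (uint16_t)(top * (1 - fy) + bot * fy);
}

/* Apply LSC to one channel of a 10-bit raw frame, with saturation. */
void lsc_apply(uint16_t *pix, unsigned width, unsigned height,
	       const struct lsc_table *t)
{
	for (unsigned y = 0; y < height; y++)
		for (unsigned x = 0; x < width; x++) {
			uint32_t v = pix[y * width + x];
			v = v * lsc_gain_at(t, x, y, width, height) / 1024;
			pix[y * width + x] = v > 1023 ? 1023 : v;
		}
}
```

A real setup would keep one such table per illuminant and pick between them based on the estimated light source, which is why the per-light-source calibration described above multiplies the amount of work.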

Rockchip has a special application that allows tuning the camera ISP for dark current subtraction, lens shading correction, gamma correction, color correction, dead/stuck pixel removal, noise filtering, etc. This application is not available publicly, and wouldn't be very useful outside the Android stack anyway.

I started writing a GTK4 based app that connects to the Pinephone Pro over WiFi and allows modifying parameters inside the sensor and the ISP while monitoring the effects of the various corrections in real time, inspecting histograms of the individual color components, and in general experimenting with the cameras and the ISP fairly painlessly. This should help the calibration process as much as possible. It will still be a very tedious manual process, though. To give an idea of the scale: the incomplete PDF guide for the official calibration app from Rockchip for the Android stack has about 60 pages of steps to perform, with many steps undocumented and hardcoded as algorithms in the calibration application itself, which is not available anywhere. Many of the steps need to be re-done for each standardized light source, individually. It sounds like a ton of work even if you know what you're doing and have access to Rockchip's calibration app.
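To give a taste of the kind of real-time monitoring involved, per-channel histograms are cheap to compute from an RGB frame. A minimal sketch, assuming a packed RGB24 buffer; the real app may well read statistics straight from the ISP's measurement blocks instead of computing them on the CPU:

```c
#include <stdint.h>
#include <string.h>

/* Compute 256-bin histograms for R, G and B from a packed RGB24
 * frame. hist points to three arrays of 256 counters. */
void rgb24_histogram(const uint8_t *rgb, size_t n_pixels,
		     uint32_t hist[3][256])
{
	memset(hist, 0, 3 * 256 * sizeof(uint32_t));
	for (size_t i = 0; i < n_pixels; i++) {
		hist[0][rgb[i * 3 + 0]]++; /* R */
		hist[1][rgb[i * 3 + 1]]++; /* G */
		hist[2][rgb[i * 3 + 2]]++; /* B */
	}
}
```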

Documentation for the ISP used on the RK3399 is somewhat lacking, so this process requires quite a bit of experimentation. Sensor level documentation is also quite lacking in some key areas; SPC in particular doesn't work as advertised. This means that the grid of tens of thousands of artificially darkened pixels present across the entire sensor cannot be removed at the sensor level in real time. Newer Rockchip ISP variants allow handling this at the ISP level, but not the SoC inside the Pinephone Pro. I use several workarounds (one is sketched below), but it's a very annoying issue specific to the PDAF variant of the IMX258.
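One workaround of this general kind, my own illustration rather than necessarily what ppp-cam does, is to patch the affected pixels in software after capture: since the shielded PDAF pixel positions form a regular, known grid, each one can be replaced by an average of the nearest same-color neighbors in the Bayer pattern:

```c
#include <stdint.h>

/* Replace one shielded PDAF pixel in a Bayer raw frame by averaging
 * the nearest same-color neighbors (2 pixels away in the Bayer grid).
 * The PDAF pixel coordinates would come from the sensor datasheet. */
static void patch_pdaf_pixel(uint16_t *raw, unsigned width,
			     unsigned height, unsigned x, unsigned y)
{
	uint32_t sum = 0, n = 0;

	if (x >= 2)         { sum += raw[y * width + (x - 2)]; n++; }
	if (x + 2 < width)  { sum += raw[y * width + (x + 2)]; n++; }
	if (y >= 2)         { sum += raw[(y - 2) * width + x]; n++; }
	if (y + 2 < height) { sum += raw[(y + 2) * width + x]; n++; }

	if (n)
		raw[y * width + x] = sum / n;
}
```

Doing this for tens of thousands of pixels on every frame on the CPU is exactly the per-frame cost that the sensor-level SPC correction was supposed to avoid.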

Aside from calibration efforts, I've extended my ppp-cam app to allow highly optimized live mjpeg video streaming over HTTP, started designing the UI for my app, thinking very much about the ergonomics of manual controls, etc. I'm a bit of an optimization nut, so the app is designed to be able to run on a single little CPU core clocked to almost the minimum, and still perform all of its features without any UI lag, by utilizing HW offload where possible.

It's possible to have a live preview of both cameras on screen at once, encoding video to JPEG via the Hantro VPU and streaming it off over HTTP, with negligible CPU load. There's CPU-offload processing HW on the SoC for pretty much every processing step. It's a lot of fun tying it all together. :)
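The HTTP side of mjpeg streaming is a very simple protocol: the server sends a multipart/x-mixed-replace response and each JPEG frame becomes one body part. A minimal sketch of the per-frame writer, assuming a connected socket and a complete JPEG buffer coming out of the encoder (error handling and partial writes are glossed over for brevity):

```c
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Send the HTTP response header, once per client connection. */
int mjpeg_send_header(int fd)
{
	const char *hdr =
		"HTTP/1.1 200 OK\r\n"
		"Content-Type: multipart/x-mixed-replace; boundary=frame\r\n"
		"\r\n";
	return write(fd, hdr, strlen(hdr)) < 0 ? -1 : 0;
}

/* Send one JPEG-compressed frame as a multipart body part. */
int mjpeg_send_frame(int fd, const void *jpeg, size_t len)
{
	char part_hdr[128];
	int n = snprintf(part_hdr, sizeof part_hdr,
			 "--frame\r\n"
			 "Content-Type: image/jpeg\r\n"
			 "Content-Length: %zu\r\n"
			 "\r\n", len);

	if (write(fd, part_hdr, n) < 0 ||
	    write(fd, jpeg, len) < 0 ||
	    write(fd, "\r\n", 2) < 0)
		return -1;
	return 0;
}
```

Since the VPU produces the JPEG data and the sending is just a few writes per frame, the CPU barely does anything per frame.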

Here's some video from my experiments a few weeks ago https://megous.com/dl/tmp/ppp-wificam2.mp4 streaming mjpeg video from the phone over WiFi (still without calibration).

And here's an early screenshot from the calibration app: https://megous.com/dl/tmp/a42fbb7180f34729.png if you're curious about that one. :)

My current plan is to complete enough features of the calibration app to get past the LSC stage and on to the actual color calibration via my 24 color card. I've added an overlay to the app https://megous.com/dl/tmp/2ef7ab8564619185.png to line up the color boxes for readout from known positions, and some algorithm needs to be devised to derive the color transform matrix parameters from the expected and captured color values. This will likely need an optimization algorithm that finds the parameters minimizing the overall error across all the color boxes.
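A natural starting point for such a fit, my illustration rather than a decided-on algorithm, is plain linear least squares: stack the 24 captured RGB triplets into a matrix A and the 24 reference values into B, then the 3x3 color matrix M minimizing the squared error of A*M against B comes from the normal equations, M = (A^T A)^-1 A^T B. A sketch of that closed-form step; a constrained or perceptually weighted optimizer would refine it:

```c
#define N_PATCHES 24

/* Invert a 3x3 matrix; returns 0 on success, -1 if singular. */
static int inv3(const double a[3][3], double out[3][3])
{
	double det =
		a[0][0] * (a[1][1] * a[2][2] - a[1][2] * a[2][1]) -
		a[0][1] * (a[1][0] * a[2][2] - a[1][2] * a[2][0]) +
		a[0][2] * (a[1][0] * a[2][1] - a[1][1] * a[2][0]);
	if (det == 0.0)
		return -1;

	for (int i = 0; i < 3; i++)
		for (int j = 0; j < 3; j++) {
			int i1 = (i + 1) % 3, i2 = (i + 2) % 3;
			int j1 = (j + 1) % 3, j2 = (j + 2) % 3;
			/* Transposed cofactor matrix over the determinant. */
			out[j][i] = (a[i1][j1] * a[i2][j2] -
				     a[i1][j2] * a[i2][j1]) / det;
		}
	return 0;
}

/* Least-squares fit of ccm so that captured * ccm ~= reference,
 * via the normal equations: ccm = (AtA)^-1 * AtB. */
int fit_ccm(const double captured[N_PATCHES][3],
	    const double reference[N_PATCHES][3],
	    double ccm[3][3])
{
	double ata[3][3] = {{0}}, atb[3][3] = {{0}}, inv[3][3];

	for (int k = 0; k < N_PATCHES; k++)
		for (int i = 0; i < 3; i++)
			for (int j = 0; j < 3; j++) {
				ata[i][j] += captured[k][i] * captured[k][j];
				atb[i][j] += captured[k][i] * reference[k][j];
			}

	if (inv3(ata, inv))
		return -1;

	for (int i = 0; i < 3; i++)
		for (int j = 0; j < 3; j++) {
			ccm[i][j] = 0;
			for (int k = 0; k < 3; k++)
				ccm[i][j] += inv[i][k] * atb[k][j];
		}
	return 0;
}
```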

Afterwards it will be possible to turn back to playing with AE/AWB/AF algorithms, and to finally start shooting pictures of reasonable quality out of the box.
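As a concrete example of the simplest member of that family, gray-world AWB assumes the scene averages out to neutral gray and scales the R and B channels so their averages match green. A minimal sketch on an RGB24 frame; a real AWB would work on the ISP's statistics blocks and be considerably smarter:

```c
#include <stdint.h>
#include <stddef.h>

/* Gray-world white balance: compute per-channel averages over an
 * RGB24 frame and derive R/B gains that make them match green. */
void grayworld_awb(const uint8_t *rgb, size_t n_pixels,
		   double *r_gain, double *b_gain)
{
	uint64_t sum[3] = {0, 0, 0};

	for (size_t i = 0; i < n_pixels; i++) {
		sum[0] += rgb[i * 3 + 0];
		sum[1] += rgb[i * 3 + 1];
		sum[2] += rgb[i * 3 + 2];
	}

	/* Guard against an all-black frame. */
	*r_gain = sum[0] ? (double)sum[1] / sum[0] : 1.0;
	*b_gain = sum[2] ? (double)sum[1] / sum[2] : 1.0;
}
```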

This is still some months in the future. In the meantime, I'm thinking of doing another binary release of my ppp-cam app that would allow others to experiment with the calibration process, too. The calibration app will be open source and will connect to the ppp-cam app over the TCP socket interface, which is already implemented. The ppp-cam app itself will stay closed source.