Open sourcing the calibration process

Hi,
Is it possible to share how you do the calibration so we can do it ourselves, without having to send you the calibration videos and wait for an answer? I want to test multiple settings with multiple cameras over time.

I have been able to get an intrinsic matrix and distortion coefficients for my cameras, but they do not yield a perfectly calibrated image in ActionStitch.

For now, I filmed using a GoPro 9 and a GoPro 11 Mini, both in 2.7K SuperView.

Thanks!

Stitching with my own calibration gave me this result:

All that matters is this line:

ret, intrinsic_matrix, distCoeff, rvecs, tvecs = cv2.calibrateCamera(opts, ipts, grey_image.shape[::-1], None, None, None, None, cv2.CALIB_RATIONAL_MODEL)

cv2.CALIB_RATIONAL_MODEL enables the rational 8-parameter distortion model (k1–k6 plus the two tangential coefficients p1, p2).

If the total reprojection error is smaller than 0.1 pixels, the calibration should be good enough.

Thank you, I was missing the cv2.CALIB_RATIONAL_MODEL. How would you compute the reprojection error?

I used the standard one from https://docs.opencv.org/4.x/dc/dbb/tutorial_py_calibration.html:

# `cv` is OpenCV imported as `import cv2 as cv`; objpoints/imgpoints,
# rvecs/tvecs, mtx (intrinsic matrix) and dist come from cv.calibrateCamera.
mean_error = 0
for i in range(len(objpoints)):
    # Re-project the board points with the estimated pose and intrinsics,
    # then compare against the detected corners.
    imgpoints2, _ = cv.projectPoints(objpoints[i], rvecs[i], tvecs[i], mtx, dist)
    error = cv.norm(imgpoints[i], imgpoints2, cv.NORM_L2) / len(imgpoints2)
    mean_error += error

print("total error: {}".format(mean_error / len(objpoints)))