Project Maturity & Getting Started Question #12
Hi.

This project seems to have great documentation. I see that it is tagged with a 1.1 release. However, I don't see many issues or much activity on the project.

Is the software mature? Is it something that I should pick up and experiment with, or is it better for me to look elsewhere for a library to take me from camera into geodetic coordinates?

If the author (@rgerum) still supports this, then I am very excited to use this. I simply love the effort that I see here.

Thanks,
Dave
Hi Dave,
Thanks very much Richard. I will absolutely use it. I appreciate the effort that has gone into this project.
Hi Richard @rgerum I'm finally about to integrate. Do you, by any chance, provide the ability to install via conda? We use conda-forge as our repository for all of our Python dependencies. I did find this clickpoints link. It talks about conda installation; however, it seems to suggest a specialized repository (rgerum). I've been consistent about using only conda-forge to minimize dependency nightmares. I don't see a similar reference to conda here.
Well, I haven't worked with conda-forge yet. Since it became easier to install with pip, I don't see much point in using conda when pip can also supply wheels. So for clickpoints too, the newer versions are mostly just on pip. Ah, the conda-forge channel was needed for some dependency of clickpoints.
@rgerum Ok, not a problem at all, and thanks for the fast response. It should be easy to layer packages into our conda env using pip.
That was easy: as described here, the only magic was to install pip into the conda virtual env and use that, as reflected in the conda environment.
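For readers following along, a minimal sketch of that setup (the env name and Python version are arbitrary assumptions):

```bash
# Install pip into the conda env itself, then use that pip for the package.
conda create -n camtest python=3.9 pip
conda activate camtest
pip install cameratransform
```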
Now to implement my cameratransform hello-world... After that, my guess is that within just an hour or two we should have the base capability in place. What is the minimal code that I need for a "hello-world" where I have a fixed camera position and a coordinate (or a rectangle) in the camera field of view? I'm assuming that the fixed camera position would be in latitude, longitude, and height above sea level. I'm also assuming that I'll provide an azimuth angle (relative to true, or magnetic, north) and an elevation angle relative to the plane out to the horizon. Lastly, I'm assuming that I'll provide a field of view in horizontal degrees.
I am glad that the installation went well!
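A minimal sketch of the kind of hello-world being discussed (all parameter values below are illustrative assumptions, not values from this thread):

```python
import cameratransform as ct

cam = ct.Camera(
    # Intrinsics: focal length, sensor size (mm) and image size (px) set the field of view.
    ct.RectilinearProjection(focallength_mm=6.2, sensor=(6.17, 4.55), image=(1920, 1080)),
    # Extrinsics: height above the ground plane, tilt (90 = at the horizon, 0 = straight
    # down), heading (compass azimuth) and roll.
    ct.SpatialOrientation(elevation_m=15, tilt_deg=85, heading_deg=45, roll_deg=0),
)

# Fixed camera position in geodetic coordinates: latitude, longitude, height (m).
cam.setGPSpos(53.631495, 10.005011, 15)

# Project an image pixel onto the ground plane and report it in GPS coordinates.
print(cam.gpsFromImage([[960, 540]]))
```

Here tilt_deg carries the elevation angle (90 is the horizon, per the discussion below), heading_deg the azimuth, and the horizontal field of view follows from the focal length and sensor width.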
I'll implement that. I had seen this before. I wasn't quite sure how the following were defined:
If I am interpreting this correctly, I'd set tilt_deg to 90 if the camera sits on the ground (or on a building) and points out across the surface at the horizon. If it's on the roof of a building looking slightly down at the surface, then perhaps tilt_deg would be about 89. In other words, tilt_deg is relative to the plane stretching out to the horizon. Am I interpreting tilt_deg properly? If so, I'll see if I can articulate a similar interpretation of the other params.
Yes, you interpreted this correctly. You can find a more detailed description here:
Ty sir!! Will keep you abreast of our progress using the package that you've put so much work into. :)
The hello world is working. However, I'd like to see if I can set some known values. The values returned when calling spaceFromImage don't seem to match the values in the documentation (though I am setting the input parameters in exactly the fashion described in the link you provided). Is there a page that shows a few test scenarios that I can implement to verify that I get the appropriate values?
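One self-check scenario with a hand-computable answer (an illustrative sketch; these values are not from the thread):

```python
import cameratransform as ct

# Camera 10 m above the ground, tilted halfway between straight down (tilt 0)
# and the horizon (tilt 90).
cam = ct.Camera(
    ct.RectilinearProjection(focallength_mm=6.2, sensor=(6.17, 4.55), image=(1920, 1080)),
    ct.SpatialOrientation(elevation_m=10, tilt_deg=45, heading_deg=0, roll_deg=0),
)

# The ray through the image center is depressed 45 degrees, so it should hit
# the ground plane 10 m in front of the camera: roughly (0, 10, 0).
print(cam.spaceFromImage([[960, 540]]))  # image center of a 1920x1080 frame
```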
The space coordinate system is just assumed to be a flat Euclidean system. For sufficiently small environments, it is a reasonable assumption to use a flat plane.
What is sufficiently small? My application will use cameras that are roughly 10 to 500 or 1000 yards from the objects of interest.
Well, this is considerably small.
@rgerum I figured. :) But what is your recommendation on a maximum distance to target, given that you are assuming a "flat earth" relative to the camera position?
We have used it in our applications at distances of up to about 8 km. But at this distance the uncertainty of the pixels is normally already bigger than the uncertainty introduced by the flat-earth assumption. So unless you are matching cameras at lots of different locations, there should not be any measurable errors from this assumption.
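For a rough sense of scale, the height error of the flat-earth assumption grows with the square of distance; a small sketch of the standard approximation (an editorial addition, not from the thread):

```python
# Curvature drop below the tangent plane: drop ~ d^2 / (2 * R_earth), for d << R.
R_EARTH_M = 6_371_000

for d_m in (100, 1000, 8000):
    drop_m = d_m ** 2 / (2 * R_EARTH_M)
    print(f"{d_m:>5} m -> drop ~ {drop_m:.3f} m")
# ~0.001 m at 100 m, ~0.08 m at 1 km, ~5 m at 8 km
```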
Got it. That makes a lot of sense.
Good afternoon Richard @rgerum. The documentation suggests that the parameter to use is heading_deg. Yet unlike tilt_deg and roll_deg (within Getting Started), I cannot set it as follows:
Our cameras (in our launch application) are in fixed locations, with fixed orientations. Each camera will point in a given direction (e.g. north, east, south, west, and other points on the compass). The cameras won't move or rotate.
Btw, if your suggestion is that we find the latitude/longitude of landmarks visible within the image and then use the fitting methods, we can do that. However, I would like to know how to set heading_deg in the case when we know it already. In that case we would not need to specify the position of visible landmarks.
Ah, I looked at this and missed it earlier, but clearly this is how to set heading_deg:
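A hedged sketch of one way to do this: pass heading_deg to the SpatialOrientation constructor instead of assigning the attribute afterwards (parameter values are illustrative):

```python
import cameratransform as ct

orientation = ct.SpatialOrientation(elevation_m=10, tilt_deg=85, heading_deg=90)
cam = ct.Camera(
    ct.RectilinearProjection(focallength_mm=6.2, sensor=(6.17, 4.55), image=(1920, 1080)),
    orientation,
)
# cam.heading_deg = 90  # per the maintainer below, this should work too
```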
Hmm, strange. cam.heading_deg = 90 should also work. I will look into this.
Ya, I looked at the code thinking the same, but it doesn't seem to work. If this truly is a bug, then I'm glad I can help isolate such things!
Can you give an example where you run into problems?
@rgerum, after experimenting with fitting (see #13), both manually and using the fitting, I am now wondering if I am doing something wrong in the software. @davesargrad and I are working together with your software; I am more focused on its configuration. I have decided, for the moment, to fix my focal length and sensor size to something reasonable and not too far off from actual. Using Google Maps, I have identified where the camera is located to get an initial lat/lon, and I have estimated the height of the camera (in meters), as well as the other orientation parameters. I have initialized the camera as follows:
results: -163.98, 106.37, 161.5

Likewise, I converted an XY point to LLA:
results: 53.63814, 10.00132, 39.01

As I was writing this, I found that I had a couple of problems in my code: I was using the wrong methods to convert between GPS and image data, and I forgot to compensate for AMSL information. My results are much better now. So I now feel that I am understanding how to use the libraries better. I will continue testing with other points to see if my implementation is working.
So you are using the "space" to "gps" transform, which relates the flat Euclidean space to the geocoordinates.
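For reference, a hedged sketch of that pair of transforms (the camera setup is illustrative, loosely echoing values quoted elsewhere in this thread):

```python
import cameratransform as ct

cam = ct.Camera(
    ct.RectilinearProjection(focallength_mm=6, sensor=(6.17, 4.55), image=(1920, 1080)),
    ct.SpatialOrientation(elevation_m=39.01, tilt_deg=85),
)
cam.setGPSpos(53.631495, 10.005011, 39.01)

space = cam.spaceFromGPS([[53.63814, 10.00132, 39.01]])  # GPS -> flat space (m)
gps_back = cam.gpsFromSpace(space)                       # flat space -> GPS
print(space, gps_back)
```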
I added a page to the documentation to elaborate more on the different coordinate systems:
@rgerum That is simply awesome. Ty, sir. It's a real pleasure to use this software and to get your help. We will be looking to demo some of our capability in the November timeframe, and your transformation library will be a key part of that demonstration. :)
You can access the fitted parameters with cam.elevation_m, cam.tilt_deg, etc. I guess a wrong focal length can mess up your setup quite a lot: if your focal length is off by a substantial factor, all the distances will be off by the same factor. But yes, an RMS of 187 pixels sounds quite far off.
I added the convenience function camera.printTraceSummary() that prints the mean and std of the fit parameters obtained by the Metropolis sampling. I also added the parameter focallength_px to simultaneously set the x and y focal length in pixels to be used in fitting, e.g.:
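A hedged sketch of what such a fit call might look like (parameter ranges and values here are made up):

```python
import cameratransform as ct

# Fit the focal length (x and y together, via focallength_px) along with the pose,
# using the landmark/horizon information already attached to the camera.
trace = cam.metropolis([
    ct.FitParameter("focallength_px", lower=500, upper=5000, value=1400),
    ct.FitParameter("elevation_m", lower=5, upper=50, value=20),
    ct.FitParameter("tilt_deg", lower=50, upper=100, value=85),
], iterations=10000)

cam.printTraceSummary()  # the convenience function mentioned above
```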
@rgerum - thanks for your answers above. I did an iteration around various focal lengths and sensor sizes in my code, and found that the combination I have is, more or less, the one that produces the lowest RMSE. This leads me to think something else is wrong in my setup. I used the plotFitInformation method you suggested, and the points are not coming up where I expect (I assume this is plotting the XY as a circle and the altitude as the '+'). I am reevaluating this. With respect to fitting, the results of the fitting do not all seem to obey the lower/upper constraints on the parameter values. Using the following in the fit parameters:

I get the following results: elevation_m: 5.20244 (NOK), heading_deg: 337.4910 (OK), tilt_deg: 114.5446 (NOK). While the initial values are estimates, they are based on a combination of looking at the images and Google Maps, so they are not too far off. However, two of the three computed values do not obey the lower/upper limits. The outputted plots don't show the first values of the two not-OK parameters being near the initial values. I will try to incorporate the updates you made yesterday and see if that helps me at all. Thanks for those.
What you could also do is look at plotFitInformation before running the fit, just setting the values manually. This way you might be able to find rough estimates for the values more quickly. It currently looks as though the algorithm finds a solution where the camera is placed below the landmarks and looks up at them.
@rgerum - thanks once again for your answers. I have been trying to understand what could be going on with my implementation. I have, among other things, re-read this thread, as well as looked at my implementation. So, I have a series of questions:
@rgerum - Thanks for yesterday's information. It was very helpful. Following your idea of plotting fit information, I plotted my calculated XY, as follows:

```python
self.cam.addHorizonInformation(np.array([[...XY pairs of horizon...]]), uncertainty=10)
```

This resulted in the following image (sorry about the black image). Blue is horizon, orange is landmarks, and red is the additional XY plots. The pluses line up pretty well with the horizon and landmarks. My expectation was that the conversion of the lm_points_gps to XY would line up closely with each other. However, as you can see below, the plotted XY data is right on the GPS points (orange circles). So, is my expectation wrong? Should the XY points line up with the lm_points_px? Or is this a projection issue onto the 2D frame?
It looks like your horizon is not a straight line (the horizon in the image, the blue +). Are you using a GoPro-style camera or a camera with a strong fisheye lens? Then you should make a calibration for a lens correction before trying to use the camera. That the red x and the orange o are on top of each other is expected: in one case you go gps -> space -> image and in the other gps -> image, which has to result in the same point.
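A hedged sketch of attaching such a lens correction in cameratransform (the Brown-model coefficients below are placeholders; in practice they would come from a calibration procedure, e.g. an OpenCV chessboard calibration):

```python
import cameratransform as ct

# Radial Brown-model distortion; k1/k2/k3 are illustrative placeholders.
lens = ct.BrownLensDistortion(k1=-0.28, k2=0.07, k3=0.0)

cam = ct.Camera(
    ct.RectilinearProjection(focallength_mm=6.2, sensor=(6.17, 4.55), image=(1920, 1080)),
    ct.SpatialOrientation(elevation_m=20, tilt_deg=85),
    lens,
)
```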
Thanks, this is really awesome and helps us understand some critical concepts. For example, you are exactly right! Now that we are looking at the image, there is a distinct curvature in the horizon. It did not even dawn on us that this could be an issue. Our first look at this particular camera was largely arbitrary. We recognized from the beginning that once we had a proper camera, we would be in a far better position with our results. Our initial look was really to understand how to use your camera software, and you have now helped us over that hurdle. Part of what we are now doing is specifying the required cameras. In fact, we hope to get one purchased and set up in an experimental fashion. We have learned that the following parameters are critical in our camera selection:

• Focal length

We think we now know how to use the cameratransform software properly. We are increasingly excited to use it. We hope to have a specified camera online in the near future. We would love your thoughts on camera specification. In a subsequent post, we will summarize our observability requirements that will impact camera selection. Thanks very much for your diligence and responsiveness. We will continue to keep you in the loop with our progress and additional questions.
I would simply like to echo @ralphflat's sentiment. You have really helped us get to a point where we have a systems-engineering perspective on how to select a proper camera and how to leverage your software. Thanks so much for getting us to this point. We will hopefully take the next steps that Ralph has described in the next few weeks.
I am happy that I could help you.
Hi Richard @rgerum. Are the landmarks, object height information, and horizon information only used in the fitting process? Or are they always used in the basic conversion from image coordinates to GPS coordinates? In the documentation they are only described in the context of fitting.
They are only used for the fitting process. The transforms just depend on the camera parameters.
I see. Ty.
Hi Richard @rgerum. How easy would it be to support the tangential (prism) component of the Brown model? It would seem that in cases where a lower-end camera is being used, misalignments in lens components could lead to distortions that might benefit from inclusion of the tangential term.
About the projection: the parameter is currently unused; I should just remove it. About the tangential components: I did not implement those because then the transform is not invertible. This means you cannot transform from the space directly to the distorted image.
@rgerum Ty, on both points!
@rgerum Hi Richard. Silly question for you: in my call to gpsFromImage I am seeing only 2 decimal places of precision in the returned latitude/longitude. I don't think this is just a printing issue, since I see the same in the debugger (within the numpy array). What do I need to do to extract the full floating-point precision of the latitude/longitude?
Well, normally you should just get the value with full float precision:

So maybe it just depends on your coordinate inputs. You could try print(c) to print the parameters of your camera, and print the pixel point you want to transform. Maybe there is something strange with your parameters; maybe some are accidentally stored as integers.
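A hedged sketch of those checks (numpy truncates what it prints by default, not what it stores; cam is assumed to be a configured Camera and the pixel values are arbitrary):

```python
import numpy as np

pts = cam.gpsFromImage([[476.3, 166.6]])
print(pts.dtype)                    # expect float64; dtype=object hints at a config problem
np.set_printoptions(precision=10)   # widen numpy's *printed* precision
print(pts)
print(float(pts[0, 0]))             # full repr of a single value
```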
@rgerum - @davesargrad and I have been playing with the lat/lon precision related to the question @davesargrad asked today. We have two code snippets that are doing the same gpsFromImage call with the same camera initialization data, and we are getting two different structures back from the gpsFromImage call. The screenshot below shows the code we are developing (left-hand side) and the test code I had developed earlier (right-hand side). The call to gpsFromImage results in an ndarray of shape (1, 3) on the right-hand side (red circle), while the code on the left-hand side results in an ndarray of shape (3,) (blue circle). One key thing to point out is that the code being developed is using the 1.1 code from the GitHub tag, while the test code used some of the enhancements you made during our discussions (i.e. trunk). Is it possible that you fixed a bug in trunk that is still in the 1.1 code? Also, below is new test code that replicates the camera calls from our developed code. This works with the trunk version; that is, it returns an ndarray of shape (1, 3).

```python
import numpy as np
import cameratransform.cameratransform as ct

c = {"focal_length": 6, "latitude": 53.631495, "longitude": 10.005011, "altitude": 39.01}
# The remaining constructor arguments were truncated in the original post:
spatial_orientation = ct.SpatialOrientation(elevation_m=vso["elevation"])  # ... more args
rectilinear_projection = ct.RectilinearProjection(focallength_mm=c['focal_length'])  # ... more args
lens_distortion = ct.NoDistortion()
cam = ct.Camera(rectilinear_projection, orientation=spatial_orientation, lens=lens_distortion)
cam_pts = np.array([[476.31516456604004, 166.61448150873184]])
```
Hmm, sounds strange. But to investigate it more, I would need a minimal example of the code that does not work. One thing to consider: if you input just one point, e.g. a (2,) array, the output should be a (3,) array. If you input a (1, 2) array, the output should be a (1, 3) array.
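In other words (a small sketch; cam is assumed to be a configured Camera and the pixel values are arbitrary):

```python
pt  = cam.gpsFromImage([476.3, 166.6])     # input shape (2,)   -> output shape (3,)
pts = cam.gpsFromImage([[476.3, 166.6]])   # input shape (1, 2) -> output shape (1, 3)
```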
@rgerum - I just tried our working test code (the code from above) with the 1.1 distribution and reproduced the same problem we were seeing. Specifically, geo_pts is an ndarray of shape (3,), while the code using the "enhanced" functionality produces an ndarray of shape (1, 3). This would seem to indicate that something changed between the 1.1 and trunk versions. If you would like, we can write up a separate issue to handle this apparent bug. Please let me know if there is more information you would like. Outside of the cameratransform version, the environment is the same in both situations.
Ok, if it is gone in the trunk version, I should maybe just make a new release to fix this issue.
@rgerum and @ralphflat, thanks both for helping to resolve this. It does seem that there is a bug in the 1.1 version, assuming that you did not expect the output of gpsFromImage to change form from 1.1 to trunk. As soon as you tag a new release, we will try it out. I think this relates to the lat/lon precision issue, but really I think it will fix a bigger bug. Part of what Ralph and I saw was that the numpy array coming from gpsFromImage had some odd values (for example, the dtype was an object type that we didn't expect).
@rgerum - thanks for the 1.2 release. This works consistently for the two bits of code we have.
I just upgraded in this fashion:
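Presumably an in-place pip upgrade within the conda env, along the lines of:

```bash
# A guess at the upgrade command; the thread installs cameratransform via pip.
pip install --upgrade cameratransform
```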
I'll do some testing today.
Yay, 1.2 is up and working, and I'm getting more sensible numbers.