Feature Proposal: Camera Shift #13302
Comments
Not against it, but I'm not sure this is a very requested feature.
It'd probably be cool as a module... but I'm not sure about it being a standard feature.
Looks like a cool feature to me, even though I can't find any use right now. I don't think it would bloat the engine to have it though, and importing a module just for this sounds like a waste. (On the other hand, if it can be done via <10 lines of GDScript, then it might make less sense to have it in C++)
I would suggest implementing proper control over the View and Projection matrices of the camera (working on a PR for that) - it would immediately enable lots of AR/VR things that are currently not possible. I suspect this is related to #7499
@seichter you are aware we added VR and AR capabilities to Godot over the past year? http://docs.godotengine.org/en/3.0/classes/class_arvrserver.html The topic of changing the camera properties has come up many times; the problem is that for the vast majority of Godot users it just ends up being confusing. I did PRs for both Godot 2 (#7777 and #7778) and 3 (#7121) last year to get complete frustum control of the camera, but after talking with the team at length about this, we went down a different route altogether, mainly because of the long-term wish to properly handle effective stereoscopic rendering within the core (we're about halfway to where we wish to end up) and not leave it up to the person creating a game to reinvent the wheel and understand the concepts behind this to the nth degree. There are methods added to the internal CameraMatrix code, so there are a few helper methods for those who work on the core engine, but the logic is used by the AR or VR interfaces directly. Most of the AR and VR platforms will actually provide you with the projection matrices you should use, which are perfectly aligned with the optics used.
@BastiaanOlij - well, I am fully aware of the AR/VR server. However, it is far too limited to do many of the serious VR/AR things. I agree with you that you want to have a sleek API for a common user, but for an AR/VR professional this is too limiting. If we could discuss again having proper control of the transformation stack and layering the AR/VR server on top of it, this would suit both use cases: the casual AR/VR app developer and the professional. Right now I have shelved my work on integrating Vuforia or my own SSTT tracking engine, as there would be too many swamps to wade through. I was also planning to run our PowerWall (a projector-based VR system) with Godot, but again there are too many hurdles right now. edit: I looked at your PR - setting the projection matrix via a frustum is still limiting - I basically need full control over all four base vectors to apply a correct projection matrix. The view matrix is easily adaptable with the current transform method, though even there I would prefer to have full control.
@seichter I'd like to know more about the limits you perceive with the AR/VR Server. You would be correct that not enough is exposed to GDScript, and that is not the direction we would want to go. For new platforms you would either implement a direct subclass of ARVRInterface in the core or, preferably, implement a GDNative module for a new ARVRInterface class. That gives you full control over the projection matrix used. The advantage of the approach we took is that it sits far closer to the rendering pipeline than would ever be possible if we brought it into GDScript. Case in point: we have a fully working ARKit implementation that does exactly what you're after. Granted, I need to polish it up as I haven't touched it in a while, and there are some other issues preventing us from merging it in, but still. My old PRs were merely to show we went down that road before. They will never be merged into the product and have long since been closed.
@BastiaanOlij - is the ARKit implementation in a PR, or is there a branch to look at? I found it mentioned, but it seems not to be in master. Anyways... some considerations: I think tightly coupling Godot with a specific API might be interesting from a user's point of view but not from a developer's perspective. For instance, I need a specific way to render the video background when doing video-based AR. It comes down to using an undistortion mesh to match the actual distortion in the image with the capabilities the GL/Vulkan pipeline (pin-hole camera) provides. Just rendering an image into a sprite or an OrthoCamera doesn't cut it for proper visual registration. Shameless self-plug here: I was a contributor to and maintainer of ARToolKit, co-author of osgART, worked on stuff in Vuforia (when it still was StudierStube), and showed how to do AR in Unity long before such things became mainstream.
@seichter owh, I totally agree with your "user point of view" remark; it's just that we come to different conclusions on how to deal with it. What you do in the Godot editor is targeted at a user base where we want to assume they know as little about the underlying complexity as possible. So we don't want to bother that person with the intricacies of optics and camera positions and such; we want Godot to take care of all of that in the background and present to them a working end result they can just enable. We platform developers, on the other hand, do need that level of control, and we have it once you start implementing an ARVR interface for your platform, either as a core module or as a GDNative module. The ARVR approach is built so that when you move the device (HMD, phone, etc.) through the real world, it moves the camera in the virtual world. The ARVROrigin point gives you a reference point so that whatever you see as the centre in the real world is mapped to a place in your virtual world. This is a bit more tricky in AR, because often this centre starts off as whatever location your phone is in when the app starts, and everything is tracked in relation to that, while with VR this is often a point calculated by the VR platform that represents a centre point on the ground of your real-world room. The end result, however, is that the movement of the camera in virtual space is in sync with the real movement of the phone, and therefore in sync with what the real camera is recording. Add to that getting the proper projection matrix from the SDK, and it works well. This is an old video of it in action for ARKit: https://youtu.be/7yiY8IB0cro
There are two PRs that are important here, and yes, they are not merged into master (and won't be for some time), mainly because ARKit only works on iOS 11, and the moment we force Godot to be compiled for iOS 11 we leave anything older than an iPhone 6 behind. Still too many people with iPhone 5's making 2D games with Godot :) The first PR is one that purely looks at getting the camera image into the background of the viewport. I do have some ideas to do this very differently though; I'll be experimenting with a different approach soon for doing something similar, but using camera input to do green-screen VR/real-world stuff. Anyway, it's #10643. The other PR is the ARKit work itself. I haven't touched this for some time, so I don't know if it still runs, but it should give you a good idea of how it works: #9967 Mind you, my expertise mostly lies with the VR part. Other than ARKit, which only does pretty basic AR, I've not played around with other AR platforms, so I may be missing a very obvious gap here :)
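The origin/camera relationship described above can be sketched as plain transform composition. This is an illustrative Python/NumPy example with made-up poses, not Godot's actual ARVR code: the virtual camera transform is the ARVROrigin's world transform composed with the pose the tracking platform reports relative to its real-world reference point.

```python
import numpy as np

def translation(x, y, z):
    """Build a 4x4 homogeneous translation matrix."""
    m = np.eye(4)
    m[:3, 3] = [x, y, z]
    return m

# Hypothetical poses: the ARVROrigin placed somewhere in the virtual
# world, and the HMD/phone pose reported by the tracking platform,
# expressed relative to the real-world reference point.
origin_world = translation(10.0, 0.0, 5.0)
hmd_tracked = translation(0.5, 1.7, -0.2)  # e.g. head 1.7 m above the floor

# The virtual camera follows the real device: compose the two transforms.
camera_world = origin_world @ hmd_tracked

print(camera_world[:3, 3])  # origin position plus the tracked offset
```

Moving the origin node moves the whole tracked space, while the tracked pose keeps the camera in sync with the real device.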
@seichter note btw these two parts that may interest you: first, getting the camera view matrices and camera projection matrices from ARKit in the process loop: and then returning resp. the view and projection matrices to the render engine when requested:
I believe this has been implemented to an extent by #26064.
CC @JFonS, is the new feature sufficient to close this issue?
I'm not sure about the tilt-shift terminology, but it does seem like the same effect can be achieved with the current "Frustum" camera mode. Or at least it should be possible to write a script that takes tilt and shift parameters and modifies the Frustum parameters accordingly. Just as a reference, the camera mode I implemented works the same way as Blender cameras and their "shift" property.
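Such a script could convert the polar parameters from this proposal into the (x, y) near-plane offset that a frustum-style camera takes. A minimal Python sketch of the math (the function name and parameter conventions are assumptions based on the proposal, not an actual Godot API):

```python
import math

def shift_to_offset(rotation_deg, tilt_deg, z_near):
    """Convert polar shift parameters into an (x, y) near-plane offset.

    rotation_deg: rotation about the view (z) axis, 0 = up, clockwise.
    tilt_deg:     how far the frustum is tilted in that direction.
    z_near:       near plane distance; the offset is applied there.
    """
    # Displacement of the near-plane window at distance z_near.
    d = math.tan(math.radians(tilt_deg)) * z_near
    rot = math.radians(rotation_deg)
    # Zero degrees points up (+y); increasing values rotate clockwise
    # toward +x, matching the convention described in the proposal.
    return d * math.sin(rot), d * math.cos(rot)

# A 45-degree tilt straight up shifts the near plane by z_near in +y.
x, y = shift_to_offset(0.0, 45.0, 0.05)
```

With this helper, the tilt/rotation pair and the x/y pair become interchangeable representations of the same shift.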
Thanks. It does sound quite similar, so I'll close it as fixed by #26064 for now. If any part of the original request is not fulfilled by what is currently implemented, it would likely be best discussed in a new issue specific to how the Frustum camera mode should evolve to handle that use case.
I propose adding new functionality to the 3D perspective camera: camera shift. This acts the same way as tilt-shift lenses for real-world cameras. The tilt feature would have to be implemented as a shader to achieve the out-of-focus areas and is not part of this proposal. The shift effect is achieved by changing the frustum shape for the 3D projection matrix. See the pictures below for the shift effect in action.
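"Changing the frustum shape" here means an off-center perspective projection, the same construction as OpenGL's glFrustum. A generic Python/NumPy sketch of that math (not Godot's CameraMatrix code): sliding the near-plane window changes only the off-center terms of the matrix, so straight lines that are vertical in the world stay vertical in the image, which is the classic tilt-shift use case.

```python
import numpy as np

def frustum(left, right, bottom, top, near, far):
    """OpenGL-style perspective projection from explicit frustum planes
    (same construction as glFrustum)."""
    m = np.zeros((4, 4))
    m[0, 0] = 2 * near / (right - left)
    m[1, 1] = 2 * near / (top - bottom)
    m[0, 2] = (right + left) / (right - left)    # horizontal off-center term
    m[1, 2] = (top + bottom) / (top - bottom)    # vertical off-center term
    m[2, 2] = -(far + near) / (far - near)
    m[2, 3] = -2 * far * near / (far - near)
    m[3, 2] = -1.0
    return m

near, far, half = 0.05, 100.0, 0.05
symmetric = frustum(-half, half, -half, half, near, far)

# Shift: slide the near-plane window upward without rotating the camera.
# The field-of-view terms (m[0,0], m[1,1]) are unchanged; only the
# off-center terms move, which is exactly the shift effect.
shift_y = 0.03
shifted = frustum(-half, half, -half + shift_y, half + shift_y, near, far)
```

Setting the shift back to zero reproduces the symmetric matrix, matching the proposal's note that zero shift tilt behaves exactly like the normal perspective mode.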
I have it functioning, so included are a few visual samples of it running. I won't make a pull request unless I get approval.
Currently there are two 3D camera modes: perspective and orthogonal. I've added a third and called it perspective shift. Two parameters are added in perspective shift mode: shift rotation and shift tilt. This mimics the functionality of real lenses. Rotation is about the z axis, with zero degrees at the top and increasing values rotating clockwise. Shift tilt is how far the frustum near plane is shifted in the rotation direction.
Current perspective mode. I've not changed anything in this mode.
Perspective shift mode. Addition of two new parameters to control the shift effect. Note that if shift tilt is at zero this mode behaves exactly like the normal perspective mode.
Normal perspective view of objects. The camera is rotated 45 degrees on the x axis and sits at the origin. The tops of the objects are 45 degrees from the camera on the x axis.
Perspective shift mode with camera still at origin. Camera rotation zeroed out to point down z axis. Shift tilt is set to 45 degrees.
The gizmo also changes shape to match the shift rotation and angle, while retaining the same geometry as the regular perspective gizmo.
Feedback appreciated. This feature could easily be added to the current perspective mode without requiring a third mode but I am hesitant to do that.
The variables could also be presented as x and y values, as seen in the camera shift implementation in Blender. X and y values would make it easier to lerp animation of the camera shift, but I think they would make it harder for users to reason about where the camera frustum is aimed. Each has its advantages and drawbacks.
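The lerp trade-off above can be illustrated with a small Python sketch (the polar-to-x/y conversion is hypothetical, following the conventions proposed in this issue): interpolating x/y values moves the shift along a straight chord, while interpolating the polar (rotation, tilt) parameters sweeps an arc of constant tilt, so the two animations pass through different midpoints.

```python
import math

def polar_to_xy(rotation_deg, tilt_deg):
    """Hypothetical conversion: 0 deg = up, clockwise, unit near plane."""
    d = math.tan(math.radians(tilt_deg))
    r = math.radians(rotation_deg)
    return d * math.sin(r), d * math.cos(r)

def lerp(a, b, t):
    return a + (b - a) * t

# Endpoint A: shifted straight up; endpoint B: shifted right. Same tilt.
a_rot, b_rot, tilt = 0.0, 90.0, 30.0
ax, ay = polar_to_xy(a_rot, tilt)
bx, by = polar_to_xy(b_rot, tilt)

t = 0.5
# Lerping in x/y moves along the straight chord between A and B,
# so the shift magnitude dips below tan(tilt) at the midpoint...
mid_xy = (lerp(ax, bx, t), lerp(ay, by, t))
# ...while lerping the polar parameters sweeps an arc of constant
# tilt, keeping the magnitude fixed at tan(tilt) throughout.
mid_polar = polar_to_xy(lerp(a_rot, b_rot, t), tilt)
```

Neither behavior is wrong; it depends on whether the animator wants constant tilt or a straight path, which is the trade-off the proposal points at.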