At the end of the last post I alluded to 'perspective effects' that might be throwing things off a little, meaning that pictures aren't going to line up as well as the algorithm expects them to.
If you followed my last post, you will remember that the maths considers a line coming out of the camera and places the dome on this line. The problem is that constellation cam does not look along a thin beam, or in straight lines of any kind. It looks in a wide, splaying cone/pyramid.
As the diagram shows, even though the centre of the camera's field of view does go through the centre point of the dome door, there is going to be asymmetric obstruction of the image.
The edge of the image on the left leaves the dome with plenty of clearance, but the edge of the field on the right is blocked by the dome. This is because the area that needs to be clear gets larger and larger the further you get from the apex of the view cone, which the original algorithm cannot account for.
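To make the asymmetry concrete: the clearance the view cone needs grows linearly with distance from the camera. A quick sketch of that idea (the function and the distances here are mine, purely for illustration, not anything from the actual algorithm):

```python
import math

def cone_half_width(distance, fov):
    """Half-width of the camera's view cone at `distance` along the
    optical axis, for a full angular field of view `fov` (radians)."""
    return distance * math.tan(fov / 2)

# The near wall of the dome opening sits closer to the camera than the
# far wall, so the cone is narrower there and needs less clearance --
# hence one edge of the image clears the door while the other is blocked.
fov = math.radians(60)
near_clearance = cone_half_width(1.0, fov)  # illustrative distances
far_clearance = cone_half_width(3.0, fov)
```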
One evening on the train home I took the original algorithm and began to generalize it. (I do this kind of thing for fun, believe it or not!) Originally, it didn't take into account the position of the camera up and down the optical axis (as may have been implied in my last post). For the sake of simplicity it merely worked out a point somewhere on the optical axis of the camera and called that the camera position (m). That was fine before, but it needed doing now.
The next thing I did was make it able to consider light paths in any direction from the camera position (d). I did this in a particular way, such that the direction offset is relative to the camera itself and not to the dome. That makes it possible to map any point in the image onto the dome: if you want to know where on the dome the top-left corner of the image is, you don't need to worry about which way up the camera is or where in the sky it's pointing. You just say 'up a bit, left a bit' and it all works nicely. This is not as straightforward to do as it might sound.
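A sketch of how a camera-relative offset might be turned into a dome-frame direction, assuming the camera's pointing is given as azimuth and altitude. All the names, axes and conventions here are mine for illustration; the post's actual maths may well differ:

```python
import math

def camera_ray(az, alt, dx, dy):
    """Direction (east, north, up) of a ray leaving the camera.

    az, alt : camera pointing (radians); azimuth measured east of north,
              altitude above the horizon.
    dx, dy  : angular offset of the image point from the optical axis
              (radians), right and up in the camera's own frame.
    """
    # The pixel's direction in the camera frame:
    # x right, y up, z forward along the optical axis.
    x = math.sin(dx)
    y = math.sin(dy)
    z = math.cos(dx) * math.cos(dy)
    # Tilt the whole camera frame up by the altitude
    # (rotation about the camera's x axis).
    y, z = y * math.cos(alt) + z * math.sin(alt), z * math.cos(alt) - y * math.sin(alt)
    # Swing it round to the camera's azimuth (rotation about 'up').
    east = x * math.cos(az) + z * math.sin(az)
    north = z * math.cos(az) - x * math.sin(az)
    return east, north, y
```

The point of doing it this way round is the one made above: `dx` and `dy` are 'up a bit, left a bit' relative to the camera, and the same offsets work wherever the camera is pointed.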
I had a go at a new algorithm which calculated the azimuths of the 4 corners of the image on both the inside and outside of the dome, did a bit of selective averaging, and put the dome there. This improved the dome position in some parts of the sky but made things much worse near the zenith, where azimuths start wrapping round the pole and throwing off the averages. After trying to fudge it in various ways, I gave up on that.
The next thing I did was to unpick all the maths and reassemble it in reverse, so that I could also take a position on the dome and say exactly which pixel of the camera/image would see that position. The idea behind this was to centre the dome door in the image, rather than the image in the dome door. The new algorithm starts by doing exactly what the old one has always done, to get a rough estimate of where the door should be. It then finds the point on each side of the dome door that will be closest to the centre of the field of view (more fun maths), doing this for both the inside and the outside of the dome. It maps these points onto the image and works out how far the visible edge of the dome sits from the centre of the image on each side. It then takes a rough guess at how far it needs to move the dome to get the door centred. Its guess is always good but never quite perfect, so it simply does the same thing again a few more times, each time starting from the improved dome position, getting closer and closer to the optimal location.
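That 'guess, check, repeat' refinement can be illustrated with a toy one-dimensional model. Everything here is a stand-in: the tan() plays the role of real perspective projection, and the real algorithm balances inner and outer door edges on both sides of the image, which this sketch doesn't attempt:

```python
import math

DOOR_HALF_WIDTH = 15.0  # degrees; purely illustrative

def edge_offsets(pos):
    """Toy projection: where the two door edges appear relative to the
    image centre, for a trial dome rotation `pos` (degrees).  The tan()
    makes the mapping nonlinear, like real perspective."""
    left = math.tan(math.radians(pos - DOOR_HALF_WIDTH))
    right = math.tan(math.radians(pos + DOOR_HALF_WIDTH))
    return left, right

def centre_door(pos, iterations=8):
    """Nudge the dome until the door edges sit symmetrically about the
    image centre, re-running from each improved position."""
    for _ in range(iterations):
        left, right = edge_offsets(pos)
        # A rough correction: shift by (about) half the imbalance.
        # Each guess is good but never quite perfect, so go round again.
        pos -= math.degrees((left + right) / 2)
    return pos
```

Starting a few degrees off, this converges to (effectively) zero within a handful of passes, each pass starting from the previous one's improved position.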
This algorithm is much better: it takes perspective into account, using the camera's field of view and the width of the dome, and it treats the dome as a thick shell, choosing whether to use the inside or the outside of the dome depending on which is more significant on each side of the image.
As a result, this algorithm provides a noticeable improvement in almost all areas of the sky, as shown by the image below. Again, hover over it to see the effect.
|hover to see the improvement with the new algorithm|
This algorithm isn't live yet. Whether and when it goes live will depend on Chris and Dan having the desire/time to add it to the control server.
Another upshot of these mathematical forays, as you might already have guessed, is that I have finally been able to ditch the 3D mount model as the basis of all my work. Now that I can take any point on the camera and say exactly where on the dome that point lands, and with clear mathematical representations of the geometry of the dome door, I can say which pixels are and are not looking through the dome door, without the nasty business of drawing a whole load of polygons, calculating surface normals, texture maps and lighting, and all the other faff that necessarily goes on in the background when using a huge 3D library. It has all been boiled down to just the mathematics needed to solve the problem.
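The core of the 'which pixels look through the door' question is just a ray–sphere intersection, which needs no 3D library at all. A minimal sketch in my own formulation, assuming dome-centred coordinates and a unit-length ray direction:

```python
import math

def ray_sphere(origin, direction, radius):
    """Distance along a unit-length ray from `origin` to where it exits
    a sphere of `radius` centred on (0, 0, 0), or None if it misses.
    With the camera inside the dome, this is where a pixel's line of
    sight meets the dome shell."""
    o_dot_d = sum(o * d for o, d in zip(origin, direction))
    o_dot_o = sum(o * o for o in origin)
    # Solve |origin + t*direction|^2 = radius^2, a quadratic in t.
    disc = o_dot_d ** 2 - (o_dot_o - radius ** 2)
    if disc < 0:
        return None  # the ray never meets the sphere
    # From inside the sphere, the exit point is the positive root.
    return -o_dot_d + math.sqrt(disc)
```

Run it once with the inner radius and once with the outer radius and you have the thick shell from earlier; comparing each hit point with the extent of the door opening then decides dome versus sky for that pixel.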
This means that simulations which previously took over 6 hours on my work laptop now take 30 minutes (even though they still run in Flash, which isn't the most efficient language under the sun), and considerably less on my much faster home PC.
The other advantage of the solution being entirely mathematical, rather than built on top of large language-specific libraries, is that the whole thing can now be transferred to different languages and applications with very little effort. It would be trivially simple now to automatically generate a mask of which parts of a constellation image are dome and which are sky, which could be used when stacking images, for instance.
For those still following, this concludes my adventures with domes. I have learnt a lot about the dome algorithm and how it works, and maybe you have too. I hope you will be pleased with the improved images that constellation cam will now be returning to you all!