Question: Best way to process a multi-beam observation? #210
I'm interested in attempting to use Factor on some old data where the observation consists of 6 beams observed simultaneously. These 6 beams overlap, with the aim of mosaicking them together so that I end up with a final image of a large patch of sky.

What is the best way to organise the data to do this? Do I process each beam individually and join the images at the end? Or is there a better way?

I've been reading the docs of both prefactor and Factor, and I was unclear on what the best method would be, or even whether processing different directions is really supported - so I just wanted to clarify.

Of course, being a multi-beam dataset, this also means the frequency coverage is not continuous but rather 4 x 10 SBs at pre-determined frequencies. So the bands are the usual size, but instead of covering the full frequency range they are spread out - will this also cause problems with Factor? prefactor manages this OK.

Comments
Hmmm...
So if the 6 beams were indeed observed at the same time, then I'm afraid you'll have to process them separately. What you can do is use the same directions for the different beams so that you get identical facet boundaries. Then you can combine the measurement sets from the different beams (and thus Factor runs) for the common facets and image those together. (Not sure if that is useful, though.)
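For the "combine the measurement sets for the common facets" step, something like the following could work. This is a minimal sketch, not Factor functionality: it assumes WSClean is installed, and the per-beam directory layout and facet name are hypothetical.

```python
# Image one overlapping facet using the calibrated data from all beams.
import glob
import subprocess

facet = "facet_patch_12"  # hypothetical name of a facet common to all beams

# Collect the calibrated MSs for this facet from each beam's Factor run
# (hypothetical layout: one Factor working directory per beam).
ms_list = sorted(glob.glob("beam*/results/%s/*.ms" % facet))

# WSClean accepts multiple MSs on the command line and images them together
subprocess.check_call(
    ["wsclean", "-name", facet + "_allbeams",
     "-size", "4096", "4096", "-scale", "1.5asec",
     "-niter", "10000", "-mgain", "0.8"] + ms_list)
```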
About prefactor: if the different beams were taken at different times, each with their own calibrator observations, then you obviously need to process them separately. If they were taken at the same time, then you should still process the target beams separately: prefactor uses the same logic as Factor to group MSs into sets that can be concatenated in frequency, and this code just doesn't consider the possibility that there might be different datasets at the same time and the same frequency (see the sketch below). You could run initial-subtract on all data together, but then you would run into the "hole or not subtracted" issue that I mentioned above.
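To illustrate the grouping problem, here is a hedged sketch of the kind of logic described above (not the actual prefactor/Factor code; assumes python-casacore is available):

```python
# Bin MSs by (start time, reference frequency). Two beams observed
# simultaneously at the same frequency collide in the same bin, which
# is the case the grouping code does not handle.
import glob
import casacore.tables as pt

def group_key(ms):
    spw = pt.table(ms + "::SPECTRAL_WINDOW", ack=False)
    ref_freq = spw.getcol("REF_FREQUENCY")[0]
    spw.close()
    obs = pt.table(ms + "::OBSERVATION", ack=False)
    start_time = obs.getcol("TIME_RANGE")[0][0]
    obs.close()
    return (round(start_time), round(ref_freq))

groups = {}
for ms in sorted(glob.glob("*.MS")):
    key = group_key(ms)
    if key in groups:
        # Different datasets at the same time and frequency: the
        # multi-beam case, which must be processed separately.
        raise ValueError("%s and %s overlap in time and frequency"
                         % (groups[key], ms))
    groups[key] = ms
```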
Ok, thanks for all that - I thought running separately was going to be the answer. And yes, I see what you mean about defining the same directions, but it may still be better to process separately and combine everything at the end, I guess.

Playing around with one beam, I encounter the following, which suggests that the frequencies are going to make things difficult. There are only four bands of 10 SBs per beam (centred on 124, 149, 156 and 185 MHz - the 149 was intended to be 142, human error), but these are not evenly spaced, so the dummy bands in between cannot be created as Factor is attempting below. So is this likely to be a no-go? Or is there a way to get around the dummy bands? As I said, the data is years old, so it was never planned with Factor in mind. Perhaps it's possible to force only the four bands, but I would guess the lack of frequency coverage might cause problems further on.

I should say that in prefactor I hard-coded the banding structure into the frequency-groups script to force the 4 bands of 10 SBs each, with no dummies. From the first facet patch log:
You need to concatenate the MSs in prefactor into bands that Factor can then concatenate together. You can also have partially filled bands, e.g. bands of 12 SBs (or so) where the first or last SBs are filled with dummy data (see the sketch below). What are the actual subbands that you observed? (Did you observe some of the subbands twice?)
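As an illustration of the partially filled bands, here is a hedged sketch (not the prefactor code itself; if I remember the options correctly, prefactor's concat step runs DPPP with msin.missingdata=True and msin.orderms=False so that nonexistent "dummy.ms" entries are filled with flagged data):

```python
# Slot the observed subbands into a regular frequency grid, pad the
# gaps with "dummy.ms" placeholders, then cut the grid into bands.
def make_bands(ms_by_freq, band_size, sb_width_hz):
    """ms_by_freq: {reference frequency in Hz: MS path}."""
    freqs = sorted(ms_by_freq)
    f0 = freqs[0]
    nslots = int(round((freqs[-1] - f0) / sb_width_hz)) + 1
    slots = ["dummy.ms"] * nslots
    for f in freqs:
        slots[int(round((f - f0) / sb_width_hz))] = ms_by_freq[f]
    # Each band is a DPPP msin list; the dummies become flagged data
    # when DPPP runs with msin.missingdata=True and msin.orderms=False.
    return [slots[i:i + band_size] for i in range(0, nslots, band_size)]
```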
The frequencies observed were (no repeats):
It's the two middle ones that are the problem, as they are closer together because of the 149 mistake. From playing around with it, to avoid having datasets where only 1 or 2 of the SBs are actually real, I could band into groups of 18. This keeps all the subbands together, if you see what I mean. If I combine 12, for example, it gets a bit messy, as it splits the two middle frequency ranges over a few banded datasets. The low and high bands are indeed each kept in one band. But perhaps it is more advisable not to go higher than 12, and it's OK to have datasets with a lot of dummies? Do the dummies count towards the flagged data? I'd then be careful with the percentage-flagged parameter. (A sketch for checking the flagged fraction is below.)
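On the flagged-data question, the flagged fraction of a concatenated band can be measured directly. A minimal sketch, assuming python-casacore is available; the band name is hypothetical:

```python
import numpy as np
import casacore.tables as pt

def flagged_fraction(ms):
    t = pt.table(ms, ack=False)
    # Shape (rows, channels, polarisations); loads all flags into memory,
    # which is fine for small snapshot datasets like these.
    flags = t.getcol("FLAG")
    t.close()
    return float(np.mean(flags))

print("flagged: %.1f%%" % (100.0 * flagged_fraction("band_0.ms")))
```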
Firstly, I have also neglected to mention the snapshot nature of the data - it is 2 x 11-minute snapshots (which I imagine might also create issues).

I tried running Factor with bands of 18 and got past the original error above. Now I'm running into the issue below, where it seems that "all pixels in the image are blanked". I have a feeling this time it might be something to do with the lack of data, either in time or frequency. When I attempt to open the MFS-image.fits in question in kvis, I get the bad-data notification and it fails to open. Full log: https://www.dropbox.com/s/g0ef1z7tcgry5xx/facet_patch_167.out.log?dl=0
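A quick way to confirm that the image really is fully blanked, rather than merely unreadable by kvis (a minimal sketch, assuming astropy; the filename is a guess based on the log above):

```python
import numpy as np
from astropy.io import fits

hdul = fits.open("facet_patch_167-MFS-image.fits")  # hypothetical filename
data = hdul[0].data
hdul.close()
print("NaN pixels: %.1f%%" % (100.0 * np.mean(np.isnan(data))))
```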
Usually this problem (fully blanked image) is due to the input data being completely flagged. Indeed, I see that DPPP is flagging a lot of data during apply:
When this happens, it's usually due to the solutions all being NaNs. Can you check one of the solution plots?
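The solutions can also be checked for NaNs directly in the parmdb. A hedged sketch using the LOFAR python bindings (assumes lofar.parmdb is available; the parmdb path is hypothetical):

```python
import numpy as np
import lofar.parmdb as lp

pdb = lp.parmdb("facet_patch_167/instrument")  # hypothetical parmdb path
for name in pdb.getNames():
    # getValuesGrid returns {name: {"values": ..., "times": ..., "freqs": ...}}
    values = pdb.getValuesGrid(name)[name]["values"]
    n_nan = np.count_nonzero(np.isnan(values))
    if n_nan:
        print("%s: %d of %d values are NaN" % (name, n_nan, values.size))
```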
I've had a look, and I only have one. This was run on 1.2, so it may not have the fixes mentioned. I upgraded to 1.3 on Friday - I will perform a clean run on this version and see what I get.
OK -- I believe the plotting script does plot ones for the amplitudes and zeros for the phases when NaNs are found, so I think we're on the right track. Oh, and there should indeed be just one.
Unfortunately, v1.3 gives the same error. Here's the log file, but it's the same as before, with very high flagging in the apply-cal step. I'm still working my way around Factor, but this seems to be in the prepare_imaging_data step? Though, from what you said before, this comes about due to a failed calibration earlier on?
The problem seems to occur when the parmdbs are merged. I suspect it has something to do with the fact that you have only two short observations, but I'm not sure. Can you send me (or post somewhere) all the parmdbs?
I've copied all the parmdbs.
Well, it appears that the dTEC solve produced NaNs for all solutions. I see that it is solving in small frequency blocks (once per band), perhaps because of the default frequency-block size used for the TEC fit. Try increasing that value in the parset.
Yep, thanks for that - changing the TEC block size did the trick.

So now everything seems to be running OK, but it results in a failed self-calibration. I also tried it with the other option, with the same result.

I'm guessing that while it seems to improve, it's not improving enough, or as expected, due to the limited data. I think it is checking the residuals? (From what I saw in #181.) So yes, perhaps the limited data is not able to reach some defined threshold?
Yes, it checks that the max peak residual after selfcal is below 0.75 Jy/beam. You can change this value in the last line of the verify_subtract step.
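For reference, this check can be reproduced by hand on the verification images. A minimal sketch, assuming astropy; the filename is hypothetical:

```python
import numpy as np
from astropy.io import fits

def max_abs(image):
    """Peak absolute residual in the image, ignoring blanked pixels."""
    hdul = fits.open(image)
    peak = float(np.nanmax(np.abs(hdul[0].data)))
    hdul.close()
    return peak

post = max_abs("verify_subtract_post.fits")  # hypothetical filename
print("max residual after selfcal: %.2f Jy/beam (limit 0.75)" % post)
```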
Thanks for pointing out where the limit is set. I looked at the mapfiles of verify_subtract, and it gives:

verify_subtract.maxvalpost.mapfile:
verify_subtract.maxvalpre.mapfile:

And setting the limit to 0.95 still results in a failed verification.
Ah -- it also checks that the residuals don't get a lot worse (which they did here). The question is why they are so much worse after selfcal, as from your images it looks like selfcal didn't go too badly. How do the solutions look?
Did you have a look at the pre and post images that Factor made? (E.g. with key "v" in checkfactor.)
Ok, yes, looking at the verify images, something doesn't seem quite right. The negative spot in the post image is quite strong, along with some structure being introduced into the image. Odd, because, as you say, the selfcal doesn't look too bad. Is it possible that the imaging settings are not ideal for this amount of data?

Looking through the solution plots, nothing jumps out at me immediately, though the amplitude plots are very flat (and at 1). But given the short timescale of the observations, perhaps slightly flat is to be expected. Plots: https://www.dropbox.com/sh/rbil44a42rfon73/AAD97COHQXIA2Pgh2jbxzKt9a?dl=0