Hello,
I was trying to run Factor, but it keeps failing and I don't understand what went wrong. Among other things, it gives me a warning that there is a frequency gap of 12 bands. Does that mean I did something wrong somewhere during prefactor, or do I have to change some of the values in the Factor parset? The Factor output is the following:
```
/opt/cep/lofar/lofar_versions/LOFAR-Release-2_21_5/lofar_build/install/gnucxx11_opt/lib64/python2.7/site-packages/lofarpipe/support/utilities.pyc : Using default subprocess module!
/opt/cep/lofar/lofar_versions/LOFAR-Release-2_21_5/lofar_build/install/gnucxx11_opt/lib64/python2.7/site-packages/lofar/parmdb/__init__.py:22: RuntimeWarning: to-Python converter for std::vector<double, std::allocator<double> > already registered; second conversion method ignored.
from _parmdb import ParmDB
/opt/cep/lofar/lofar_versions/LOFAR-Release-2_21_5/lofar_build/install/gnucxx11_opt/lib64/python2.7/site-packages/lofar/parmdb/__init__.py:22: RuntimeWarning: to-Python converter for std::vector<std::string, std::allocator<std::string> > already registered; second conversion method ignored.
from _parmdb import ParmDB
INFO - factor:parset - Reading parset file: new_factor.parset
INFO - factor:parset - Using the following groupings for directions: 1:0
INFO - factor:parset - Using up to 40 CPU(s) per node
INFO - factor:parset - Using up to 90.0% of the memory per node for WSClean jobs
INFO - factor:parset - Processing up to 1 direction(s) in parallel per node
INFO - factor:parset - Running up to 7 IO-intensive job(s) in parallel per node
INFO - factor:parset - =========================================================
INFO - factor:parset - Working directory is /data/scratch/maria/factor
INFO - factor:parset - Input MS directory is /data/scratch/maria/target_L530789/results
INFO - factor:parset - Working on 17 input files.
INFO - factor - Setting up cluster/node parameters...
INFO - factor - Using cluster setting: "local" (Single node).
INFO - factor - Checking input bands...
INFO - factor - Building local sky model for source avoidance and DDE calibrator selection (if desired)...
INFO - factor:directions - Found 6 sources through thresholding
INFO - factor - Setting up directions...
INFO - factor:directions - Reading directions file: /data/scratch/maria/factor/factor_directions.txt
Trying CDLL(libgeos_c.so.1)
Library path: 'libgeos_c.so.1'
DLL: <CDLL 'libgeos_c.so.1', handle 5b8c6a0 at 5bbe8d0>
Trying CDLL(libc.so.6)
Library path: 'libc.so.6'
DLL: <CDLL 'libc.so.6', handle 7effc8de34c8 at 5bbe910>
INFO - factor:directions - Adjusting facets to avoid sources...
/home/rafferty/LOFAR/LSMTool/lsmtool/skymodel.py:1457: FutureWarning: np.average currently does not preserve subclasses, but will do so in the future to match the behavior of most other numpy functions such as np.mean. In particular, this means calls which returned a scalar may return a 0-d subclass object instead.
return np.average(c, axis=0)
INFO - factor:directions - Including target (20h18m03.92, +28d39m55.2) in facet adjustment
/home/rafferty/LOFAR/factor/factor/process.py:653: RuntimeWarning: invalid value encountered in double_scalars
effective_flux_jy = peak_flux_jy_bm * (total_flux_jy / peak_flux_jy_bm)**0.667
INFO - factor - Self calibrating 1 direction(s) in Group 1
INFO - factor:facet_patch_3 - Detected a frequency gap of 12 bands; setting wsclean_nchannels_factor to 13 to avoid having a fully flagged WSClean channel
INFO - factor:scheduler - <-- Operation facetselfcal started (direction: facet_patch_3)
log4cplus:ERROR No appenders could be found for logger (LCS.Common.EXCEPTION).
log4cplus:ERROR Please initialize the log4cplus system properly.
ERROR - factor:scheduler - Operation facetselfcal failed due to an error (direction: facet_patch_3)
ERROR - factor:scheduler - Caught an (Keyboard-)Interrupt, stopping all pipelines.
ERROR - factor:scheduler - One or more operations failed due to an error. Exiting...
```
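In case it helps with the diagnosis, below is a minimal sketch (not part of Factor itself) of how one could list the reference frequencies of the input bands to see where the reported 12-band gap sits. It assumes python-casacore is available and that the glob pattern matches the prefactor output naming, which may need adjusting.

```python
# Minimal sketch (assumption: python-casacore is installed and the input
# bands are MeasurementSets matching the pattern below). Lists each band's
# reference frequency and the step to the next band, to locate the gap.
import glob
import casacore.tables as pt

# Pattern is a guess; adjust to the actual prefactor output names.
ms_list = sorted(glob.glob('/data/scratch/maria/target_L530789/results/*.ms'))

freqs = []
for ms in ms_list:
    # Each band normally has a single spectral window.
    spw = pt.table(ms + '::SPECTRAL_WINDOW', ack=False)
    freqs.append(spw.getcol('REF_FREQUENCY')[0])
    spw.close()

freqs.sort()
print('Band reference frequencies (MHz): %s'
      % [round(f / 1e6, 3) for f in freqs])
print('Steps between adjacent bands (MHz): %s'
      % [round((f2 - f1) / 1e6, 3) for f1, f2 in zip(freqs, freqs[1:])])
```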