ALMA TP data are generally reduced with the TP data reduction pipeline, which performs both calibration and imaging; only in exceptional cases are TP data reduced manually using standard scripts. The following sections describe the calibration and imaging of TP data with both the pipeline and manual reduction.
Pipeline Calibrated TP Data
The TP calibration with the ALMA pipeline is described in the ALMA Science Pipeline User's Guide.
You can determine whether your data were reduced by the pipeline by checking for a file "PPR...xml" in the "script" subdirectory (below the directory containing this README). If such a file is present, your data were pipeline-reduced.
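The check above is easy to script. A minimal sketch (the helper name is illustrative, not part of the delivery):

```python
import glob
import os

def was_pipeline_reduced(script_dir):
    """True if a pipeline processing request (PPR*.xml) sits in script_dir."""
    return len(glob.glob(os.path.join(script_dir, "PPR*.xml"))) > 0
```

Point it at the "script" subdirectory of your delivery, e.g. `was_pipeline_reduced("member_ouss_.../script")`.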
TP datasets often consist of several execution blocks (EBs). Each EB is calibrated individually; imaging is then performed on all calibrated EBs together.
The images included in the delivery have the native frequency resolution and a cell size of 1/9 of the beam size. If you prefer a different frequency resolution or cell size, import the delivered FITS cubes into CASA and regrid them using the task imregrid.
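For reference, the delivered cell size follows directly from the beam: cell = beam / 9. A sketch of that relation, followed by what the regridding step could look like inside CASA (the image and template names below are placeholders; the CASA calls must be run in a CASA environment):

```python
def cell_from_beam(beam_arcsec, fraction=9.0):
    """Delivered TP images use a cell size of 1/9 of the beam (arcsec)."""
    return beam_arcsec / fraction

# Example: a 28.7" beam corresponds to a cell of 28.7/9 arcsec.
cell = cell_from_beam(28.7)

# Inside CASA (not runnable outside it), regridding could look like:
# from casatasks import importfits, imregrid
# importfits(fitsimage="delivered_cube.fits", imagename="delivered_cube.im")
# imregrid(imagename="delivered_cube.im", template="my_target_grid.im",
#          output="delivered_cube.regrid.im")
```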
One important aspect of the calibration is the conversion of the data from K to Jy/beam. This is done per spectral window, antenna, and EB. For pipeline-reduced data, the Jy per K conversion factors can be found via the pipeline weblog in the "qa" directory, or in the file "jyperk.csv" under the "calibration" directory of this package. The conversion factors were derived from an observatory database: the observatory conducts regular observations of standard single-dish calibrators and stores the measurements, which are analyzed using standard scripts. Those underlying data are not provided to users by default, but if you are interested in them, you are welcome to contact the helpdesk of your region.
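The conversion itself is a multiplication: S [Jy/beam] = factor x T [K], applied per spectral window, antenna, and EB. A sketch of reading such factors from a CSV and applying one (the column names and sample values below are assumptions for illustration; check the header of your own jyperk.csv):

```python
import csv
import io

def read_jyperk(csv_text):
    """Parse Jy/K factors keyed by (MS, antenna, spw); column names assumed."""
    factors = {}
    for row in csv.DictReader(io.StringIO(csv_text)):
        key = (row["MS"], row["Antenna"], row["Spwid"])
        factors[key] = float(row["Factor"])
    return factors

# Hypothetical two-line file in the assumed format:
sample = "MS,Antenna,Spwid,Polarization,Factor\nuid_A.ms,PM02,17,I,43.5\n"
factors = read_jyperk(sample)

# Converting a brightness temperature of 0.10 K in that spectral window:
flux_jy = 0.10 * factors[("uid_A.ms", "PM02", "17")]
```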
In order to reprocess your data, you first need to obtain the raw data in ASDM format from the request handler. If you downloaded and untarred all available files for this delivery as described in the notification email, then you will already see (in addition to the directories shown in the tree listing above) a directory "raw" containing your raw data in subdirectories named "uid*.asdm.sdm", and no further action is necessary. If you do not have a "raw" directory, you will need to download and untar the tarballs of the raw data belonging to this delivery. If you untar the raw-data tarballs in the same directory where you untarred the products tarball, they should appear in the "raw" directory inside your "member_ouss_..." directory.
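As a sketch, the untar step might look like the following (run it in the same directory where you untarred the products tarball; the tarball name in the usage line is a placeholder):

```shell
# Extract each raw-data tarball given as an argument; the extracted
# member_ouss_... tree (with raw/uid*.asdm.sdm inside) lands in the
# current directory.
extract_raw() {
    for f in "$@"; do
        tar -xf "$f"
    done
}

# Usage (placeholder name):
# extract_raw my_delivery_raw.tar
```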
If you wish to rerun the pipeline from scratch, you will need the correct version of CASA installed. Please find the line starting with "CASA version used for reduction:" in your QA2 report or README; the version indicated there is the one you need for running the scriptForPI.
Once the raw data are in place, cd into the directory "script", start the CASA version noted above, and run the scriptForPI.
Please note that processing may take a significant amount of time and may need significant computing resources. For an estimate of the run time, see the "Execution Duration" shown on the top page of the weblog. As a guideline, the pipeline can process at most about 31 GB of raw data on a reduction machine with 64 GB of RAM.
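A quick way to check your raw-data volume against the 31 GB guideline before launching the pipeline (the directory name matches the delivery layout described above):

```python
import os

RAW_LIMIT_GB = 31  # guideline for a machine with 64 GB of RAM

def dir_size_gb(path):
    """Total size of all files below path, in GB (1 GB = 1e9 bytes assumed)."""
    total = 0
    for root, _, files in os.walk(path):
        for name in files:
            total += os.path.getsize(os.path.join(root, name))
    return total / 1e9

# Example: warn if the "raw" directory exceeds the guideline.
# if dir_size_gb("raw") > RAW_LIMIT_GB:
#     print("Raw data exceed the 31 GB guideline for a 64 GB RAM machine.")
```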
If you want to perform baseline subtraction using your preferred mask range rather than the range chosen by the pipeline, we recommend either doing it on the delivered images using the task imcontsub, or using the task tsdbaseline during your own manual calibration (see https://casaguides.nrao.edu/index.php/M100_Band3_SingleDish_5.1 ).
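To illustrate what a mask-based baseline subtraction does, here is a minimal zeroth-order sketch in plain Python: the baseline level is estimated only from line-free channels and subtracted from the whole spectrum. This is only an illustration of the concept; the CASA tasks imcontsub and tsdbaseline operate on images and measurement sets and support higher-order fits and many more options. All values below are made up:

```python
from statistics import mean

def subtract_constant_baseline(spectrum, line_free):
    """Subtract the mean of the line-free channels (order-0 baseline fit)."""
    base = mean(v for v, keep in zip(spectrum, line_free) if keep)
    return [v - base for v in spectrum]

# Hypothetical spectrum: flat baseline at 2.0 with a "line" in channels 40-59.
spec = [2.0] * 100
for i in range(40, 60):
    spec[i] += 5.0
mask = [True] * 100
for i in range(40, 60):
    mask[i] = False  # exclude the line channels from the fit

resid = subtract_constant_baseline(spec, mask)
```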