I have noticed slight differences in object-oriented plotting on mirzam (IDL v8.6) vs.
on deneb or canopus (IDL v8.3)
–joel
On thuban2, the following directories map onto paths on the other machines:
/var/share/data/ ≡ /data/psrdata/ on other machines.
/var/share/astro ≡ /usr/share/astro/ on other machines.
Taken roughly from https://www.cyberciti.biz/faq/bash-for-loop/ .
Either chmod this file to make it executable and run it by typing ./filename, or else
type bash ./filename :
#!/bin/bash
for i in {1..9}
do
ls -als puppi_blah$i.fits
done
On the other hand, an infinite loop (straight from the above link):
for (( ; ; ))
do
   echo "infinite loops [ hit CTRL+C to stop]"
done
Loop with break on error (straight from the above link):
for I in 1 2 3 4 5
do
   statements1              # Executed for all values of I, up to a disaster-condition if any.
   statements2
   if (disaster-condition)
   then
      break                 # Abandon the loop.
   fi
   statements3              # While good and no disaster-condition.
done
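As a concrete, runnable version of that skeleton (the "disaster" condition here is simply reaching the value 3, chosen only for illustration):

```shell
#!/bin/bash
# loop over values; bail out when the "disaster" value (here, 3) is seen
for i in 1 2 3 4 5
do
    echo "processing $i"
    if [ "$i" -eq 3 ]
    then
        break   # abandon the loop
    fi
done
```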
sudo su -
thanks Bruce!
Carl Thomas got DIALOG_PICKFILE running in IDL, which lets one point and click one's way to selecting a desired file, say for processing. I (Joel) embellished it a little so that it allows choice of both file AND subdirectory. See spectraoverlay.pro in /data/psrdata/arecibowappHI/overlayLAB-EBHIS/
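A minimal sketch of the sort of call involved (the path, filter, and variable names here are illustrative, not taken from spectraoverlay.pro):

```idl
; hypothetical example: point-and-click selection of a .fits file,
; with GET_PATH returning the subdirectory the user navigated to
file = DIALOG_PICKFILE(PATH='/data/psrdata/', FILTER='*.fits', $
                       GET_PATH=chosen_dir, TITLE='Select a file to process')
print, 'chose ', file, ' in ', chosen_dir
```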
IDL 8.3 is on deneb and canopus. I no longer have a maintenance contract on them, so they are frozen at that version. They reside on local disks on each machine at /usr/local/exelis. . .
I dropped maintenance on mirzam for about 10 months in 2016(?). When I picked it up again, we loaded IDL 8.6 into /usr/local/harris/ . . .
–Joel
We (Bruce, mainly) installed PSRFITS file read/write routines. They require the regular FITS file read/write routines (cfitsio).
cfitsio is in /usr/share/astro/cfitsio/
psrfits routines are in /usr/share/astro/psrfits_utils-master/
Inside the latter are two executables:
psrfits_singlepulse and psrfits_subband
psrfits_singlepulse -h : Usage: psrfits_singlepulse [options] input_filename_base or filenames
psrfits_subband: Partially de-disperse and subband PSRFITS search-mode data.
Note: also check http://www.atnf.csiro.au/research/pulsar/index.html?n=Main.Psrfits for psrfits definition.
At least for now, function wmeanandsig.pro is in /data/psrdata/rmgalprojs
If you create an array of RM and an equal-sized array of sig_RM (with no infinite sig_RM's), then
wmean_and_sdm = wmeanandsig(RM, sig_RM)
will put the weighted mean and weighted SDM into a two-element array called wmean_and_sdm.
(And, the way functions work, you can replace the variable that I call "wmean_and_sdm" above with
a variable of any name you wish.)
Right now there is a "driver" program in the same directory, called wmeanandsigDRIVER.pro. It
is a module that tests wmeanandsig.pro. I used it extensively to do so, to the point where I can now
call wmeanandsig.pro "ready to go." (Luckily, the NIST web pages give sample input/output for the weighted
mean and for the weighted SDM, which I endeavored to match.)
Caution: In 2018 April, I changed the calculation of the weighted SDM. Previously, we had used the NIST definition, which rescaled the data weights so as to force the reduced χ^2 to 1. (The NIST webpage gave a formula for the weighted SD but did not note that it included this readjustment.) In 2018 April, I changed the calculation to be
σ^2_wgtdmean = 1 / [ Σ_j 1/σ^2_j ].
Note that this calculation depends entirely on the σ_j's of the data points, and on nothing else (such as deviation from the mean).
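A sketch of what the 2018 April convention amounts to, in IDL (variable names are mine, not those used inside wmeanandsig.pro):

```idl
; given a data array x and an equal-sized uncertainty array sig:
w        = 1.0 / sig^2                 ; weights
wmean    = TOTAL(w * x) / TOTAL(w)     ; weighted mean
sig_mean = SQRT(1.0 / TOTAL(w))        ; sigma of the weighted mean; depends only on the sig_j's
```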
Here’s how to figure out the contents of a save set.
1. This is the classic way:
restore, 'filename.sav', /verbose   will list the variable names in the saveset.
help, variable_name   will then give details on any such variable.
2. This is the modern (object-oriented) way. Note that it does not require a restore:
[After http://northstar-www.dartmouth.edu/doc/idl/html_6.2/IDL_Savefile.html#wp1034474 ] :
Use the IDL_Savefile object to query the contents of a SAVE file containing data and selectively restore variables.
So first, create a savefile object out of the .sav file: sObj = obj_new('IDL_Savefile', 'p2206.B1913+16.wapp4.53958.0004_shifted.sav')
Typically, IDL_Savefile::Contents is the first method called on a savefile object; the information retrieved indicates the number of different types of items contained in the file (variables, routines, etc.).
sContents = sObj->Contents()
one type of query of sContents is: print, sContents.N_var
20
Next, a call to IDL_Savefile::Names is used to obtain the names of items of a specific type.
sNames = sObj ->Names()
print,snames
ACTUAL_NRECS BINSHIFT BW DELFREQ IDLFILE LSRK NBIN NCHAN OBSJULDATE OBSTIME PROFILE_VERSION ROUNDBINSHIFT SCANNO SHIFTONOFFA SHIFTONOFFB
SHIFTTOTA SHIFTTOTB SRC VEL WAPPHOST
Then, IDL_Savefile::Size can be used to determine the size of a given (named) variable in the file:
size_of_shiftonoffa = sObj->Size('shiftonoffA')
print, size_of_shiftonoffA
1 512 4 512, which means:
If no keywords are set, SIZE returns a vector of integer type. The first element is equal to the number of dimensions of Expression; this value is zero if Expression is scalar or undefined. The next elements contain the size of each dimension, one element per dimension (none if Expression is scalar or undefined). After the dimension sizes, the last two elements contain the type code (zero if undefined) and the number of elements in Expression, respectively. The type codes are listed at https://www.harrisgeospatial.com/docs/size.html. (The 4 here means floating point.)
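For example, a plain 1-D float array reproduces the vector shown above (the array name is illustrative):

```idl
a = FLTARR(512)      ; 512-element floating-point array
print, SIZE(a)
;  1  512  4  512   -> 1 dimension, of length 512, type code 4 (float), 512 elements total
```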
–Joel
It is possible to access the ATNF catalog from the command line. Then various unix commands may be used on the output to create tailored data. The catalog software is now in /data/psrdata/psrcat. There is a README there that gives directions on how to use it.
Note that it accesses a database file on disk which may or may not be the latest version!
Also note that you need not supply the db_file parameter if you desire that it default to the location given in the environment variable PSRCAT_FILE (you may find the value of that variable by typing
printenv PSRCAT_FILE
If you got your .cshrc from me (Joel) relatively recently, you will probably get
/usr/share/astro/psrhome/psrcat/psrcat.db
I have linked that location to the psrcat.db file that "actually" lives in /data/psrdata/psrcat,
so when I type psrcat -v from that directory, I indeed get the current catalogue version, v1.57:
"Software version: 1.49"
"Catalogue version number = 1.57"
I may or may not update the catalog in the future.
Note also that the psrchive suite now accesses the same catalog through a link. That is, the only file in
/usr/share/astro/psrhome/psrcat is a link to the above /data/psrdata/psrcat/psrcat.db
(OK I also did the same with glitch.db)
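As a sketch of the kind of tailored data one can extract from the command line (the -c flag selects catalogue parameters, per the psrcat documentation; see the README in /data/psrdata/psrcat for the specifics):

```
# hypothetical query: name, period, and DM for every pulsar,
# with ordinary unix tools used to trim the output
psrcat -c "jname p0 dm" > allpsrs.txt
grep 1913 allpsrs.txt
```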