desihub / desisurvey
Code for DESI survey planning and implementation
License: BSD 3-Clause "New" or "Revised" License
Write new tools for determining which tiles are eligible for fiber assignment, i.e., tiles for which all of the lower-pass tiles that cover them have been observed and processed. Integrate the output of that with the inputs to fiber assignment. Another output of these tools is the list of tiles eligible for observing, used as an input to the afternoon planning described in #17.
See DESI-2728 for a discussion of when fiber assignment should be rerun on what tiles and how that interacts with the survey strategy.
I'm assigning this ticket in desisurvey (not fiberassign) since this is a survey strategy choice, not the choice of fiber assignment itself.
This issue came up as a corner case when using a non-standard tile file (as part of the SV data challenge).
As I understand it, the issue actually arises in desimodel.footprint.pass2program, and should maybe be raised there as well, but I think the only practical consequence is here, in desisurvey. Here in Progress.get_exposures(), by calling pass2program, we use the nominal tile file to map pass number to program name, which may not be what we want with an SV-like tile file. No big deal, but in my case this issue didn't cause any problems until much later in the simulations, which took a while to track down.
Specifically, my tile file had only one program name, DARK, for a handful of tiles. Then, here, when the program column gets added on the fly:
output[name.upper()] = astropy.table.Column(program, description='Program name')
that column ends up inheriting an str4 data type. Then, when calibration exposures get added by surveysim.util.add_calibration_exposures, which have a nominal program name of CALIB, the program name for those gets clipped to CALI. Consequently, when desisim/bin/wrap-newexp is called, in this line the calibration exposures get ignored. Phew!
So one possible fix is to ensure that the program column in the exposures table has enough bytes, although I'm sure there are other more elegant solutions.
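The truncation is easy to reproduce with plain numpy (illustrative values; the silent clipping is standard numpy behavior for fixed-width string dtypes):

```python
import numpy as np

# A string column built from 4-character program names inherits dtype '<U4'
# (str4), so any longer value assigned later is silently truncated.
program = np.array(['DARK', 'DARK'])
assert program.dtype == np.dtype('<U4')

extended = np.empty(3, dtype=program.dtype)
extended[:2] = program
extended[2] = 'CALIB'        # silently clipped to fit 4 characters: 'CALI'

# One possible fix: allocate a wide enough dtype up front.
safe = np.array(['DARK', 'DARK'], dtype='U20')
```

This is the "enough bytes" fix suggested above; an alternative would be to build the column as a Python object list and let astropy size it after all values (including CALIB) are known.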
Add options to surveyplan
to specify the start/stop dates of the plan. These should be arbitrary, i.e. could be outside of the bounds of the nominal 5-year DESI survey defined in the config file.
I think it probably would be ok to raise an error if the requested dates are not a subset of the dates used for the initial surveyplan --create ...
call (which itself should allow specifying start/stop dates). Alternatively, that could trigger an update of the scheduler.fits
file to include the new dates.
Add options to afternoon planning to allow overriding tile priorities, either high or low. This could be used by the survey coordinator to prioritize fields to fill in gaps in the coverage, or to repeat calibration fields each year.
This could be as simple as afternoon planning not updating priorities for tiles with priority 0-2 or 8-10, while shuffling priorities in the 3-7 range.
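A minimal sketch of that rule (the function name and clamping behavior are illustrative, not an existing desisurvey API): pin the extreme priorities set by the coordinator and let the planner reshuffle only the middle band.

```python
def update_priority(current, proposed):
    """Hypothetical afternoon-planning rule: priorities 0-2 and 8-10 are
    coordinator overrides and never change; automatic updates stay in 3-7."""
    if current <= 2 or current >= 8:
        return current                    # pinned by the coordinator
    return min(max(proposed, 3), 7)       # clamp automatic updates to 3-7
```

For example, a tile pinned at 9 keeps priority 9 regardless of what the planner proposes, while a tile at 5 can move anywhere in the 3-7 range.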
In afternoon planning, prioritize earlier passes over later passes, e.g., if an unobserved pass 0 field is visible, it should have a higher priority than any pass 1 field.
Getting this into a ticket for tracking.
I'm trying to follow the instructions at https://github.com/desihub/surveysim/blob/master/doc/tutorial.md to run the survey simulations at NERSC. The loop stopped about halfway through the survey, and I didn't realize until much later that it hadn't completed. By then I no longer had the window history from the exact loop that had run, but when I go back and run surveyplan I get:
[cori04 desisim] surveyplan ${PLAN_ARGS}
INFO:utils.py:94:freeze_iers: Freezing IERS table used by astropy time, coordinates.
INFO:ephemerides.py:94:__init__: Loaded ephemerides from /global/homes/s/sjbailey/desi/dev/end2end/output/ephem_2019-08-28_2024-07-13.fits for 2019-08-28 to 2024-07-13
INFO:progress.py:128:__init__: Loaded progress from /global/homes/s/sjbailey/desi/dev/end2end/output/progress.fits.
INFO:progress.py:262:save: Saved progress to /global/homes/s/sjbailey/desi/dev/end2end/output/progress_2022-05-24.fits.
INFO:surveyplan.py:121:main: Planning observations for 2022-05-24 to 2022-09-01.
INFO:plan.py:266:update: Updating plan for 2022-05-24 to 2022-09-01
INFO:plan.py:237:update_active: Adding 24 active tiles from group 1 priority 5
INFO:plan.py:237:update_active: Adding 7 active tiles from group 2 priority 7
INFO:plan.py:237:update_active: Adding 269 active tiles from group 5 priority 4
INFO:plan.py:237:update_active: Adding 15 active tiles from group 6 priority 6
Optimizing 31 active DARK tiles.
INFO:optimize.py:153:__init__: DARK program: 177.3h to observe 31 tiles (texp_nom 1000.0 s).
Optimizing 0 active GRAY tiles.
INFO:optimize.py:153:__init__: GRAY program: 39.0h to observe 0 tiles (texp_nom 1000.0 s).
Cannot improve MSE.
[cori04 desisim] echo $?
255
[cori04 desisim] echo $PLAN_ARGS
--duration 100 --verbose --plots
In a private email thread, @dkirkby commented:
I suspect the problem is that it tried to optimize zero tiles for the gray program, which it should be able to handle gracefully but didn't. I guess this is relatively rare since I didn't run into this with my tests.
I am in the process of reworking this logic and my dev branch has diverged enough from what you are running that it probably doesn't make sense to track this bug down. Instead, I need to finish up my latest round of changes and get them merged.
After that refactor and prior to the next big tag, we should verify that the tutorial instructions work at NERSC on master.
The function utils.radec2altaz() takes args (ra, dec, lst), which are not sufficient to include the effects of polar motion. This issue is to change the API to take (ra, dec, mjd) and include polar motion. This should have no performance penalty, since lst is currently calculated from mjd.
The new implementation should either use routines from astropy.coordinates or at least be validated against them (if we find performance or accuracy issues with astropy, that would justify maintaining our own version of these transforms).
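For reference, a self-contained sketch of the current (ra, dec, lst)-style transform (textbook spherical astronomy, with no refraction and no polar motion, which is exactly the effect this issue asks to add; the KPNO latitude below is approximate and for illustration only):

```python
import math

KPNO_LAT = math.radians(31.963)  # approximate latitude; illustration only

def radec2altaz(ra_deg, dec_deg, lst_deg):
    """(alt, az) in degrees from (ra, dec, lst), ignoring refraction
    and polar motion. Azimuth is measured east of north in [0, 360)."""
    ha = math.radians(lst_deg - ra_deg)          # hour angle
    dec = math.radians(dec_deg)
    sin_alt = (math.sin(dec) * math.sin(KPNO_LAT)
               + math.cos(dec) * math.cos(KPNO_LAT) * math.cos(ha))
    sin_alt = max(-1.0, min(1.0, sin_alt))       # guard against roundoff
    alt = math.degrees(math.asin(sin_alt))
    az = math.degrees(math.atan2(
        -math.cos(dec) * math.sin(ha),
        math.sin(dec) * math.cos(KPNO_LAT)
        - math.cos(dec) * math.sin(KPNO_LAT) * math.cos(ha))) % 360.0
    return alt, az
```

An mjd-based API would compute lst (plus the polar motion corrections) internally from mjd, so the signature change costs nothing at the call site.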
Originally from desihub/surveysim#45 but bringing it over here since it applies to files written by desisurvey:
In desisim.quickcat we use 'EBMV' (E(B-V)) and LINTRANS to estimate the redshift efficiency. Would it be possible to have those two columns in the progress_* files?
I'm trying to run surveysim as in https://github.com/desihub/quicksurvey_example/blob/master/survey/surveysim_HA.sh. The first call to surveysim after making the initial surveyplan fails with the following:
Traceback (most recent call last):
File "/gpfs/data/DESI/software/modules/surveysim/0.7.1/bin/surveysim", line 4, in <module>
__import__('pkg_resources').run_script('surveysim==0.6.0.dev1', 'surveysim')
File "/gpfs/data/dph3apc/anaconda/3/lib/python3.5/site-packages/setuptools-23.0.0-py3.5.egg/pkg_resources/__init__.py", line 719, in run_script
File "/gpfs/data/dph3apc/anaconda/3/lib/python3.5/site-packages/setuptools-23.0.0-py3.5.egg/pkg_resources/__init__.py", line 1504, in run_script
File "/gpfs/data/DESI/software/modules/surveysim/0.7.1/lib/python3.5/site-packages/surveysim-0.6.0.dev1-py3.5.egg/EGG-INFO/scripts/surveysim", line 15, in <module>
surveysim.scripts.surveysim.main(args)
File "/gpfs/data/DESI/software/modules/surveysim/0.7.1/lib/python3.5/site-packages/surveysim-0.6.0.dev1-py3.5.egg/surveysim/scripts/surveysim.py", line 162, in main
while simulator.next_day():
File "/gpfs/data/DESI/software/modules/surveysim/0.7.1/lib/python3.5/site-packages/surveysim-0.6.0.dev1-py3.5.egg/surveysim/simulator.py", line 148, in next_day
self.plan, self.gen)
File "/gpfs/data/DESI/software/modules/surveysim/0.7.1/lib/python3.5/site-packages/surveysim-0.6.0.dev1-py3.5.egg/surveysim/nightops.py", line 97, in nightOps
strategy, plan)
File "/gpfs/data/DESI/software/modules/desisurvey/penalty_value/lib/python3.5/site-packages/desisurvey-0.8.2.dev415-py3.5.egg/desisurvey/schedule.py", line 449, in next_tile
when, cutoff, seeing, transparency, progress, snr2frac, mask)
File "/gpfs/data/DESI/software/modules/desisurvey/penalty_value/lib/python3.5/site-packages/desisurvey-0.8.2.dev415-py3.5.egg/desisurvey/schedule.py", line 223, in instantaneous_efficiency
eff /= desisurvey.etc.transparency_exposure_factor(transparency)
File "/gpfs/data/DESI/software/modules/desisurvey/penalty_value/lib/python3.5/site-packages/desisurvey-0.8.2.dev415-py3.5.egg/desisurvey/etc.py", line 69, in transparency_exposure_factor
raise ValueError('Got unlikely transparency value < 1e-9.')
ValueError: Got unlikely transparency value < 1e-9.
Not sure if this is a bug -- if not, suggestions for how to proceed would be appreciated.
I'm starting to modify the minitest notebook in two_percent_DESI to use the recent changes to Progress.get_exposures() to eventually create an exposures.fits file that can be loaded into a database. It is called like this:
from desisurvey.progress import Progress
p = Progress(restore='progress.fits')
explist = p.get_exposures()
explist.write(os.path.join(surveydir, 'exposures.fits'), overwrite=True)
With the modified Progress.get_exposures(), I get an AssertionError:
---------------------------------------------------------------------------
AssertionError Traceback (most recent call last)
<ipython-input-32-1af1c9dc1e70> in <module>()
1 from desisurvey.progress import Progress
2 p = Progress(restore='progress.fits')
----> 3 explist = p.get_exposures()
4 explist.write(os.path.join(surveydir, 'exposures.fits'), overwrite=True)
~/software/cori/code/desisurvey/my-master/py/desisurvey/progress.py in get_exposures(self, start, stop, tile_fields, exp_fields)
560 desimodel.io.load_tiles(onlydesi=True, extra=False,
561 tilesfile=config.tiles_file()))
--> 562 assert np.all(tileinfo['TILEID'] == table['tileid'])
563 output[name.upper()] = tileinfo['EBV_MED'][tile_index]
564 else:
AssertionError:
I'm going to guess that the set of tiles in the Progress object is not the same as the set of tiles returned by desimodel.io.load_tiles(). Note also that the loading is somewhat odd (note the comments):
tileinfo = None # tileinfo is None by definition!
output = astropy.table.Table()
for name in tile_fields.split(','):
name = name.lower()
if name == 'index':
output[name.upper()] = tile_index
elif name == 'ebmv':
if tileinfo is None: # this will never be false, so why bother with it?
config = desisurvey.config.Configuration()
tileinfo = astropy.table.Table(
desimodel.io.load_tiles(onlydesi=True, extra=False,
tilesfile=config.tiles_file()))
assert np.all(tileinfo['TILEID'] == table['tileid'])
output[name.upper()] = tileinfo['EBV_MED'][tile_index]
It almost looks like tileinfo could be overridden, but that overriding has never been implemented. Also, tileinfo is only ever used to load EBMV values.
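A more defensive version would look tiles up by TILEID instead of asserting identical ordering. A minimal sketch (tile IDs and E(B-V) values are illustrative, loosely following the example tile table elsewhere in these issues):

```python
# Map TILEID -> EBV_MED once, then index by ID. This works even when the
# progress table and the desimodel tile list differ in order or content.
progress_tileids = [11108, 16870, 28408]                       # progress.fits
tileinfo = {16870: 0.0208, 28408: 0.0197, 11108: 0.0189,
            34170: 0.0217}                                     # tiles file

missing = [t for t in progress_tileids if t not in tileinfo]
if missing:
    raise ValueError('Tiles missing from tile file: {}'.format(missing))
ebv = [tileinfo[t] for t in progress_tileids]
```

The explicit check for missing tiles turns the bare AssertionError above into a message that says which tiles disagree.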
Add a new input to afternoon planning: a list of tiles that are eligible to observe. Only those tiles would be ranked and included in the outputs.
Tiles not on that list don't have fiber assignment finalized yet; see DESI-2728 for background.
Add completing the 10% survey to full depth to the afternoon planning prioritization. Note that this is an exception to the pass prioritization in #15.
There are several places throughout desisurvey that hardcode the known programs to be 'DARK', 'GRAY', or 'BRIGHT'. We may need more flexibility for commissioning and SV. Consider generalizing this so that the set of programs used is derived from the programs listed in the config.yaml file (which should also be a superset of the programs in the input tiles file).
This could get tricky, so if it starts taking a lot of time or complexity, re-evaluate whether this flexibility is really needed.
Related to PR #83 .
Add option to surveyplan to specify the list of input tiles to override the default of $DESIMODEL/data/footprint/desi-tiles.fits. In this case, the plan and HA assignments should be updated as if those were the only tiles available.
NED converts J2000 RA, dec = (15, 20) into l, b = (125.67488266, -42.82605200); see http://ned.ipac.caltech.edu/forms/calculator.html and the full example with inputs (beware: the NED web form expects RA in hours, not degrees), while desisurvey.utils.equ2gal_J2000 gets a very different result:
>>> import desisurvey.utils
>>> desisurvey.utils.equ2gal_J2000(15,20)
(133.13095566600495, -47.305321119006678)
When this is understood, document in desisurvey.utils.equ2gal_J2000 the source of the transformation it uses.
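As a cross-check, the standard J2000 rotation built from the IAU galactic pole, (alphaG, deltaG) = (192.85948, 27.12825) deg with l_NCP = 122.93192 deg, reproduces the NED value. A self-contained sketch:

```python
import math

def equ2gal_j2000(ra_deg, dec_deg):
    """J2000 equatorial -> galactic (l, b) in degrees, using the IAU
    galactic pole and node constants."""
    ag, dg, lncp = map(math.radians, (192.85948, 27.12825, 122.93192))
    ra, dec = math.radians(ra_deg), math.radians(dec_deg)
    b = math.asin(math.sin(dec) * math.sin(dg)
                  + math.cos(dec) * math.cos(dg) * math.cos(ra - ag))
    l = lncp - math.atan2(math.cos(dec) * math.sin(ra - ag),
                          math.cos(dg) * math.sin(dec)
                          - math.sin(dg) * math.cos(dec) * math.cos(ra - ag))
    return math.degrees(l) % 360.0, math.degrees(b)
```

Evaluating this at (15, 20) agrees with NED to well under an arcminute, which suggests the constants (or their epoch) used by equ2gal_J2000 are the problem.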
The normal interaction between the scheduler and fiber assignment is:
Run once after the target catalog is finalized and before the survey starts, and assign fibers to all tiles that could be scheduled on the first night (i.e., tiles in DARK-0, GRAY-4 or any BRIGHT pass).
Run during each full moon break to assign fibers to tiles whose "covering tiles" have been observed. The set of tiles to assign each full moon is not known in advance since it depends on the actual observing conditions and survey progress.
With this scheme, each tile has fibers assigned during exactly one pass, but which tiles are assigned in which pass is stochastic (except for the initial pass).
This issue is to update the surveyinit and afternoonplan entry points to optionally invoke fiber assignment with the appropriate list of tile IDs.
This should wait until desihub/fiberassign#153 is merged.
Do we need to have a real target catalog to test this, or could something simple be mocked up on the fly to exercise the fiberassign algorithms in this mode?
desisurvey doesn't work with the new desimodel desi-tiles.fits that changed the GRAY program to layer 0 instead of layer 4. Tests fail sporadically since one of the symptoms is Scheduler.tnom being initialized with np.empty but then not completely filled, since self.tiles['program'] has values 3 and 4 instead of 1, 2, 3 around line 90 of desisurvey/schedule.py. I haven't chased upstream to see what is going wrong with the program assignment.
We may need to temporarily revert to the old desi-tiles.fits, but this will need to be fixed eventually, preferably in a way that doesn't hardcode which layer is which program (desi-tiles.fits should own that mapping).
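One way to avoid the hardcoding is to build the pass-to-program mapping directly from the PASS and PROGRAM columns of the tiles file, so a GRAY-at-layer-0 file just works. A minimal sketch with made-up rows:

```python
# Derive the layer -> program mapping from the tile data itself, so that
# desi-tiles.fits owns the mapping rather than the code.
rows = [(0, 'GRAY'), (0, 'GRAY'), (1, 'DARK'), (4, 'DARK'), (5, 'BRIGHT')]

pass2program = {}
for layer, program in rows:
    # setdefault returns the existing entry if present, so a mismatch
    # means the tiles file assigns two programs to one layer.
    if pass2program.setdefault(layer, program) != program:
        raise ValueError('Inconsistent program for pass {}'.format(layer))
```

The consistency check catches a malformed tiles file at load time instead of via a half-initialized np.empty array later.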
Several requests for desisurvey.nextobservation.nextFieldSelector inputs:
The conditions dictionary is a required input but it isn't used yet. Add comments to the docstring defining what keys will be required, and add code like assert 'seeing' in conditions in the meantime to ensure that people are calling it with the required inputs, even if using them isn't implemented yet.
moon_alt and moon_az are required inputs, but they are completely derivable from the input mjd and the location of KPNO. Drop them as inputs. Alternatively, if we want to keep them for testing flexibility, they could default to None, implying "calculate them from where the moon really is on that MJD"; if they aren't None, then "use these values regardless of where the moon really is". If keeping moon_alt and moon_az, then moon_frac (illumination fraction) should be added too.
Give previous_ra and previous_dec default values of None so that they can be optional inputs if slew=False (e.g., at the beginning of the night).
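Putting these requests together, the signature might look like the following sketch (not the actual implementation; compute_moon_altaz is a hypothetical helper, stubbed here with placeholder values):

```python
def compute_moon_altaz(mjd):
    """Hypothetical helper: moon (alt, az) at KPNO for this MJD.
    Stubbed with placeholder values for this sketch."""
    return 10.0, 240.0

def nextFieldSelector(mjd, conditions, moon_alt=None, moon_az=None,
                      moon_frac=None, previous_ra=None, previous_dec=None,
                      slew=False):
    # Enforce required condition keys now, even before they are used.
    assert 'seeing' in conditions
    # None means "derive from where the moon really is on this MJD";
    # explicit values override, for testing flexibility.
    if moon_alt is None or moon_az is None:
        moon_alt, moon_az = compute_moon_altaz(mjd)
    # previous_ra/previous_dec are only needed when a slew penalty applies.
    if slew:
        assert previous_ra is not None and previous_dec is not None
    return moon_alt, moon_az   # placeholder return for this sketch
```

The real function would go on to rank tiles; the sketch only shows the argument handling being requested.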
When simulating split exposures (i.e. multiple exposures of the same tile in a row), all exposures receive the same MOONALT and MOONSEP even though the moon has moved from one exposure to another. Additionally, they all have the same MOONFRAC for the full night.
The underlying cached ephemeris is on a 1 hour grid with cubic interpolation in (ra, cos(dec), sin(dec)), but it appears that we are sampling that only once for the beginning of the exposure sequence rather than updating it for each exposure.
I think this is a desisurvey issue rather than a surveysim issue, but feel free to move it if needed.
This issue is to study and implement an optional term in the tile cost function that rewards observing a tile in pass N that allows a tile in pass N+1 to be scheduled (based on discussions at the SLC workshop).
Document how to change Configuration object parameters after they have already been read from the yaml config file, e.g., for changing the start date:
c.first_day._value = desisurvey.utils.get_date(...)
Please update the interface to make that more obvious and not require _private variables. Changing the start/stop dates is a specific case, but in general we should support the ability to algorithmically change any config variable without having to go via a custom yaml file.
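A sketch of what a public setter could look like (the class and method names here are assumptions, not the current desisurvey Configuration API):

```python
class ConfigNode:
    """Hypothetical leaf node of the Configuration tree."""
    def __init__(self, value):
        self._value = value

    def __call__(self):
        return self._value

    def set_value(self, value):
        """Public, documented way to override a config parameter
        programmatically, instead of touching _value directly."""
        self._value = value

first_day = ConfigNode('2019-12-01')
first_day.set_value('2020-03-15')   # instead of first_day._value = ...
```

This keeps the existing call-to-read convention (c.first_day()) while making the override path explicit and documentable.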
Write code to combine next field selector output plus fiber view camera data into fibermap files.
Filing under desisurvey since this is code that will run at the mountain as part of operations to generate the fibermap files to be included in the datastream alongside the spectrograph raw data. The actual code may live somewhere under ICS.
Note: the fibermap format is very fundamental to the spectro pipeline operations, but it also needs to be updated to synchronize with upstream information from target selection and fiber assignment.
desimodel/data/footprint/desi-tiles.fits has been updated, including moving the GRAY tiles to layer 0, which apparently breaks desisurvey unit tests:
======================================================================
FAIL: test_overlap (desisurvey.test.test_tiles.TestTiles)
----------------------------------------------------------------------
Traceback (most recent call last):
File "/global/common/software/desi/cori/desiconda/20190804-1.3.0-spec/code/desisurvey/master/py/desisurvey/test/test_tiles.py", line 44, in test_overlap
self.assertFalse(np.any(tile_over[DARK1]))
AssertionError: True is not false
I think this is also breaking surveyinit in the minitest notebook (and thus blocking our standard reference run for a software release):
surveyinit --config-file /global/cscratch1/sd/sjbailey/minitest-20.3/survey/desisurvey-config.yaml
...
INFO:surveyinit.py:199:calculate_initial_plan: [099] dHA=0.007deg RMSE=280.10% LOSS= 5.67% delta(score)= -0.0%
INFO:surveyinit.py:199:calculate_initial_plan: [100] dHA=0.006deg RMSE=280.12% LOSS= 5.67% delta(score)= +0.0%
INFO:surveyinit.py:211:calculate_initial_plan: DARK plan uses 1.9h with 125.6h avail (6431.7% margin).
INFO:optimize.py:158:__init__: GRAY: 33.3h for 1 tiles (texp_nom 1000.0 s).
INFO:optimize.py:232:__init__: Center flat initial HA assignments at LST -60 deg.
/global/common/software/desi/cori/desiconda/20190804-1.3.0-spec/code/desisurvey/master/py/desisurvey/optimize.py:600: RuntimeWarning: invalid value encountered in true_divide
avg_ha = self.smoothing_weights.dot(self.ha) / self.smoothing_sums
Found invalid plan_tiles in use_plan().
(see /global/cscratch1/sd/sjbailey/minitest-20.3/survey/surveyinit.log for full log)
@dkirkby I thought desisurvey had previously been updated to support the new tile file, but then we (I) waited a long time before merging it and desisurvey may have broken again. Could you check this out? Thanks.
From @weaverba137 in desihub/two_percent_DESI#6:
I'd really like to get back to using full UTC timestamps instead of MJD. Databases like timestamps. Again, not an issue for this notebook.
Years ago we agreed to use UTC as the primary time metric (see https://desi.lbl.gov/trac/wiki/Pipeline/FormatsAndNumbering), but MJD has crept into desisurvey/surveysim as the de facto primary time metric. My understanding is that ICS will provide both UTC timestamps (ISO 8601 format) and MJD floats in the raw data headers, with the UTC timestamp being the canonical reference time and the MJD float provided for convenience.
The primary issue with MJD is the ambiguity of its definition on days with leap seconds. Astropy handles this self-consistently, but it isn't clear that there is any cross-package standard for doing this, or how ICS will handle it. UTC is explicit in how leap seconds are handled.
I'm not convinced that a sub-second ambiguity on the start time of a 1000 second exposure actually matters for anything, and the UTC timestamp will still be available for the rare cases where it really does. Continuing to use MJD may be pragmatic.
If we want to store a UTC timestamp in a FITS binary table, it is unclear to me what the right column format is. A 19-character text string in ISO-8601 format is probably not particularly practical. What's the right solution here? TAI float is unambiguous for time differences, but I also haven't found a canonical standard for what TAI=0 means.
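For what it's worth, the naive UTC-to-MJD conversion (which ignores leap seconds, i.e., exactly the ambiguity discussed above) is a one-liner with the standard library:

```python
from datetime import datetime, timezone

# MJD epoch is 1858-11-17 00:00 UTC.
MJD_EPOCH = datetime(1858, 11, 17, tzinfo=timezone.utc)

def mjd_from_utc(ts):
    """MJD as a float. Ignores leap seconds, so it is ambiguous at the
    sub-second level on days when a leap second is inserted."""
    return (ts - MJD_EPOCH).total_seconds() / 86400.0
```

For example, mjd_from_utc(datetime(2019, 8, 28, tzinfo=timezone.utc)) gives 58723.0, matching the dates in the ephemerides filenames above.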
As pointed out in desihub/surveysim#45, the progress object does not track some fields that are required to model redshift efficiency: the atmospheric transparency and median E(B-V).
freeze_iers() currently has two purposes:
This creates a dependency on desisurvey for code that would not normally need that dependency. If freeze_iers() were moved to desiutil, this extra dependency would be unnecessary.
In the class desisurvey.scheduler.Scheduler, line 264, self.moon_RADEC is set to desisurvey.ephem.get_object_interpolator:
self.moon_RADEC = desisurvey.ephem.get_object_interpolator(self.night_ephem, 'moon', altaz=False)
The interpolator returned by desisurvey.ephem.get_object_interpolator outputs (dec, ra); however, in desisurvey.scheduler.Scheduler.next_tile, when self.moon_RADEC is called (line 362), the RA and Dec order is reversed:
moonRA, moonDEC = self.moon_RADEC(mjd_now)
moon_RADEC is also a misleading name.
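A minimal illustration of the swap (the values are made up; the real interpolator comes from get_object_interpolator):

```python
def moon_interp(mjd):
    # Stand-in for get_object_interpolator(..., altaz=False),
    # which returns (dec, ra) in that order.
    return 45.0, 120.0

# Buggy unpacking as in next_tile: RA and Dec end up swapped.
moonRA, moonDEC = moon_interp(58000.0)   # moonRA is actually the dec!

# Correct unpacking respects the (dec, ra) return order.
moon_dec, moon_ra = moon_interp(58000.0)
```

Renaming the attribute to something like moon_DECRA (or changing the interpolator to return (ra, dec)) would make this kind of swap harder to write.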
@dkirkby @sbailey I'm trying to execute some very simple survey simulations but running into trouble. I'm sure I'm missing something simple---could you advise?
Note that I'm using the desisurvey-config.yaml file from the minitest notebook and the master branches of all the DESI code.
Set one needed environment variable:
export DESISURVEY_OUTPUT=./survey
mkdir -p $DESISURVEY_OUTPUT
Build a simple tile file (here, with four tiles: two DARK, one GRAY, and one BRIGHT).
import os
from astropy.table import Table
import desimodel.io
alltiles = Table(desimodel.io.load_tiles())
ii = (150 < alltiles['RA']) & (alltiles['RA']<152) & (30<alltiles['DEC']) & (alltiles['DEC']<32)
tiles = Table(alltiles[ii])
tiles.write(os.path.join(os.getenv('DESISURVEY_OUTPUT'), 'test-tiles.fits'), overwrite=True)
tiles
<Table length=4>
TILEID RA DEC PASS IN_DESI EBV_MED AIRMASS STAR_DENSITY EXPOSEFAC PROGRAM OBSCONDITIONS
int32 float64 float64 int16 int16 float32 float32 float32 float32 str6 int32
------ ------------------ ------- ----- ------- ----------- --------- ------------ --------- ------- -------------
11108 150.87 31.23 1 1 0.018903468 1.0254319 1451.6671 1.157668 DARK 1
16870 151.96000000000004 31.21 2 1 0.02075856 1.0254421 1403.4232 1.1708232 DARK 1
28408 150.73000000000002 30.52 4 1 0.019678533 1.0258721 1478.4001 1.1637645 GRAY 2
34170 151.82 30.5 5 1 0.021682816 1.0258868 1414.4895 1.1780643 BRIGHT 4
Run surveyinit:
surveyinit --config-file ./desisurvey-config.yaml
INFO:utils.py:76:freeze_iers: Freezing IERS table used by astropy time, coordinates.
INFO:ephem.py:84:get_ephem: Building ephemerides for (2019-01-01,2025-12-31)...
INFO:ephem.py:89:get_ephem: Saved ephemerides for (2019-01-01,2025-12-31) to ./survey/ephem_2019-01-01_2025-12-31.fits
INFO:tiles.py:288:get_tiles: Initialized tiles from "./test-tiles.fits".
INFO:tiles.py:293:get_tiles: DARK passes(tiles): 1(1), 2(1).
INFO:tiles.py:293:get_tiles: GRAY passes(tiles): 4(1).
INFO:tiles.py:293:get_tiles: BRIGHT passes(tiles): 5(1).
INFO:optimize.py:158:__init__: DARK: 125.6h for 2 tiles (texp_nom 1000.0 s).
INFO:optimize.py:232:__init__: Center flat initial HA assignments at LST -60 deg.
INFO:surveyinit.py:199:calculate_initial_plan: [001] dHA=1.000deg RMSE=400.64% LOSS=57.15% delta(score)= -0.9%
INFO:surveyinit.py:199:calculate_initial_plan: [002] dHA=0.950deg RMSE=396.36% LOSS=55.08% delta(score)= -1.4%
INFO:surveyinit.py:199:calculate_initial_plan: [003] dHA=0.902deg RMSE=405.42% LOSS=53.49% delta(score)= +1.7%
INFO:surveyinit.py:199:calculate_initial_plan: [004] dHA=0.857deg RMSE=405.01% LOSS=52.25% delta(score)= -0.4%
INFO:surveyinit.py:199:calculate_initial_plan: [005] dHA=0.815deg RMSE=413.06% LOSS=50.29% delta(score)= +1.3%
INFO:surveyinit.py:199:calculate_initial_plan: [006] dHA=0.774deg RMSE=405.65% LOSS=49.57% delta(score)= -1.8%
INFO:surveyinit.py:199:calculate_initial_plan: [007] dHA=0.735deg RMSE=409.29% LOSS=48.98% delta(score)= +0.7%
[snip]
INFO:surveyinit.py:199:calculate_initial_plan: [100] dHA=0.006deg RMSE=415.78% LOSS=43.91% delta(score)= +0.0%
INFO:surveyinit.py:211:calculate_initial_plan: DARK plan uses 1.1h with 125.6h avail (11470.9% margin).
INFO:optimize.py:158:__init__: GRAY: 33.3h for 1 tiles (texp_nom 1000.0 s).
INFO:optimize.py:232:__init__: Center flat initial HA assignments at LST -60 deg.
/Users/ioannis/repos/desihub/desisurvey/py/desisurvey/optimize.py:600: RuntimeWarning: invalid value encountered in true_divide
avg_ha = self.smoothing_weights.dot(self.ha) / self.smoothing_sums
Found invalid plan_tiles in use_plan().
Unit tests are failing on the IERS download, since http://maia.usno.navy.mil/ser7/finals2000A.all is offline. Check whether this has moved somewhere else or whether we just need to wait for it to come back. I get slightly different errors on my laptop vs. NERSC, but one version is:
======================================================================
ERROR: test_update_iers (desisurvey.test.test_utils.TestUtils)
Test updating the IERS table. Requires a network connection.
----------------------------------------------------------------------
Traceback (most recent call last):
File "/Users/sbailey/anaconda3/envs/desi/lib/python3.6/urllib/request.py", line 1318, in do_open
encode_chunked=req.has_header('Transfer-encoding'))
File "/Users/sbailey/anaconda3/envs/desi/lib/python3.6/http/client.py", line 1239, in request
self._send_request(method, url, body, headers, encode_chunked)
File "/Users/sbailey/anaconda3/envs/desi/lib/python3.6/http/client.py", line 1285, in _send_request
self.endheaders(body, encode_chunked=encode_chunked)
File "/Users/sbailey/anaconda3/envs/desi/lib/python3.6/http/client.py", line 1234, in endheaders
self._send_output(message_body, encode_chunked=encode_chunked)
File "/Users/sbailey/anaconda3/envs/desi/lib/python3.6/http/client.py", line 1026, in _send_output
self.send(msg)
File "/Users/sbailey/anaconda3/envs/desi/lib/python3.6/http/client.py", line 964, in send
self.connect()
File "/Users/sbailey/anaconda3/envs/desi/lib/python3.6/http/client.py", line 936, in connect
(self.host,self.port), self.timeout, self.source_address)
File "/Users/sbailey/anaconda3/envs/desi/lib/python3.6/socket.py", line 704, in create_connection
for res in getaddrinfo(host, port, 0, SOCK_STREAM):
File "/Users/sbailey/anaconda3/envs/desi/lib/python3.6/socket.py", line 745, in getaddrinfo
for res in _socket.getaddrinfo(host, port, family, type, proto, flags):
socket.gaierror: [Errno 8] nodename nor servname provided, or not known. requested URL: http://maia.usno.navy.mil/ser7/finals2000A.all
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/Users/sbailey/desi/git/desisurvey/py/desisurvey/test/test_utils.py", line 55, in test_update_iers
utils.update_iers(save_name)
File "/Users/sbailey/desi/git/desisurvey/py/desisurvey/utils.py", line 165, in update_iers
iers = astropy.utils.iers.IERS_A.open(astropy.utils.iers.IERS_A_URL)
File "/Users/sbailey/anaconda3/envs/desi/lib/python3.6/site-packages/astropy/utils/iers/iers.py", line 159, in open
kwargs.update(file=download_file(file, cache=cache))
File "/Users/sbailey/anaconda3/envs/desi/lib/python3.6/site-packages/astropy/utils/iers/iers.py", line 78, in download_file
return utils.data.download_file(*args, **kwargs)
File "/Users/sbailey/anaconda3/envs/desi/lib/python3.6/site-packages/astropy/utils/data.py", line 1088, in download_file
raise e
File "/Users/sbailey/anaconda3/envs/desi/lib/python3.6/site-packages/astropy/utils/data.py", line 1021, in download_file
with urllib.request.urlopen(remote_url, timeout=timeout) as remote:
File "/Users/sbailey/anaconda3/envs/desi/lib/python3.6/urllib/request.py", line 223, in urlopen
return opener.open(url, data, timeout)
File "/Users/sbailey/anaconda3/envs/desi/lib/python3.6/urllib/request.py", line 526, in open
response = self._open(req, data)
File "/Users/sbailey/anaconda3/envs/desi/lib/python3.6/urllib/request.py", line 544, in _open
'_open', req)
File "/Users/sbailey/anaconda3/envs/desi/lib/python3.6/urllib/request.py", line 504, in _call_chain
result = func(*args)
File "/Users/sbailey/anaconda3/envs/desi/lib/python3.6/urllib/request.py", line 1346, in http_open
return self.do_open(http.client.HTTPConnection, req)
File "/Users/sbailey/anaconda3/envs/desi/lib/python3.6/urllib/request.py", line 1320, in do_open
raise URLError(err)
urllib.error.URLError: <urlopen error [Errno 8] nodename nor servname provided, or not known. requested URL: http://maia.usno.navy.mil/ser7/finals2000A.all>
This is likely because they are tests of old code that has been replaced, but this needs to be cleaned up so that tests are correct and useful.
There is a bug in the implementation of the definition of gray time. Below is a plot of moon altitude vs. moon illumination fraction from a surveysim run for dark (blue), bright (red), and gray (black) observations. The upper panel shows the assignments from obslist_all.fits; the bottom panel shows what they should be based upon the DESI-0311 site alternatives definition:
Note: there are no illum>0.85 points on these plots because of the 1 week engineering/other downtime around full moon.
I haven't tracked down if this definition code is in desisurvey or desisim, but I'm filing the ticket against desisurvey since that is where it should be :)
Guess: a radians vs. degrees bug.
Related to #24 (update next field selector interface) and desihub/desitarget#232 (GFA targets)
Finalize the file format for how next field selector information is passed to ICS. This file should include everything needed to request an exposure, e.g. so that a year later one could re-use the same file to request another observation of a calibration field. Similarly, a test or commissioning program should be able to write a file of this format and use it to request an exposure, completely independently from the NFS.
This might just be an update to the fiber assignment file format, or it might require merging in other sources of data. To do:
Originally we had defined the NFS -> ICS interface as a function call returning a data structure defining the observation request. We still wanted to archive the state of that request in a file, and later moved to the file itself being the interface.
The current fiber assignment format is not yet sufficient, at minimum because it doesn't include GFA targets or proper motion data. Sorting out the NFS -> ICS interface may identify other missing pieces.
Exception to the "everything needed to define an exposure" requirement? We had discussed including the expected exposure time, which depends upon current conditions. I think this is only informational and is not required to be able to take an exposure. Is there anything else that we want to include in NFS -> ICS that couldn't be calculated well in advance?
desisurvey.nightops is simulating the nightly operations, and thus should be moved to surveysim; desisurvey should be reserved for code like afternoon planning and the next field selector that will be used in real operations.
Since several people are working on desisurvey / surveysim right now, assign this ticket to yourself when you start working on it so that others won't do the same refactor at the same time; then fix it and submit a PR quickly.
The survey simulations currently use the same design HAs during the whole survey, but the underlying Optimizer class is designed to support periodic re-optimization, e.g., to adjust to better / worse weather than initially assumed.
This issue is to update desisurvey.plan.Planner.afternoon_plan
to periodically perform this re-optimization (during the annual monsoon shutdown?) and run some simulations to study the impact.
When created, the 'dirName' part of the calendar is of type 'S8' if computed, but 'U8' if read from file. This causes an error at line 281 of afternoonplan.py:
filename = 'obsplan' + day_stats['dirName'].decode('ascii') + '.fits'
When read from file, the line should not include the .decode('ascii').
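A dtype-agnostic helper would sidestep the S8/U8 mismatch regardless of whether the calendar was computed or read from file. This is a sketch, not the desisurvey code:

```python
def decode_if_bytes(value):
    """Return a str whether value arrived as bytes ('S8') or str ('U8')."""
    return value.decode('ascii') if isinstance(value, bytes) else value

# hypothetical use at the afternoonplan.py call site:
# filename = 'obsplan' + decode_if_bytes(day_stats['dirName']) + '.fits'
```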
When surveyplan runs, if $DESISURVEY isn't set, it defaults to writing output into the installed code directory, e.g.
surveyplan --create --duration 100 --verbose --plots
INFO:utils.py:94:freeze_iers: Freezing IERS table used by astropy time, coordinates.
INFO:ephemerides.py:238:__init__: Saving ephemerides to /global/common/cori/contrib/desi/code/desisurvey/0.8.1/ephem_2019-08-28_2024-07-13.fits
INFO:schedule.py:612:initialize: Footprint contains 1154 pixels.
INFO:schedule.py:706:initialize: Starting Sep 2019 (completed 4/1781 nights)
INFO:schedule.py:706:initialize: Starting Oct 2019 (completed 34/1781 nights)
...
INFO:schedule.py:706:initialize: Starting Jul 2024 (completed 1769/1781 nights)
INFO:schedule.py:805:initialize: Plan initialization saved to /global/common/cori/contrib/desi/code/desisurvey/0.8.1/scheduler.fits
INFO:progress.py:262:save: Saved progress to /global/common/cori/contrib/desi/code/desisurvey/0.8.1/progress_2019-08-28.fits.
It is user error that I forgot to set $DESISURVEY before starting, but a better default would be to write into the current working directory rather than the code installation directory (I'm surprised our NERSC installation directory permissions even allowed me to do that...).
Based on a recently-produced exposures.fits file, generated by Progress.get_exposures()
, the RA, Dec columns do not have units assigned to them at all; the units should be degrees. This may be inherited from the progress.fits file, which also does not assign units to RA, Dec.
Originally from desihub/surveysim#45 but bringing over here since it goes with files generated by desisurvey:
If I read the progress_* files with Table.read() I get the columns in lower case, but I need them in upper case to easily process the data through quicksurvey. Is it easy to solve that in astropy, or could this instead be fixed here in surveysim?
Finalize the details of how afternoon planning will be run during operations and write any necessary wrapper code. Example issues to sort out (probably not a complete list):
surveyplan
uses progress.fits to know what was observed or not. How will it get that information in real operations? How will it handle discrepancies between what ETC/ICS thought and what the offline pipeline thought for S/N and overall tile quality?
The default fiberassign cadence is once a lunation, with a delay of 1.
desisurvey/py/desisurvey/data/config.yaml
Line 136 in 055a7b9
Is this what we want as a default? This default results in much worse first-year, full-depth performance, and I need to remember to change it when writing about what I consider to be "baseline" performance. I'd propose updating "delay" to 0.
Relatedly, my impression is that the default plan is now depth-first, so I'd consider renaming rules.yaml to rules-breadth.yaml and renaming rules-depth.yaml to rules.yaml. Or alternatively renaming rules.yaml to rules-breadth.yaml and copying rules-depth.yaml to rules.yaml, so that the default out-of-the-box is closer to what people are now expecting.
The tile file may need to grow or shrink to accommodate changes in programs and weather. The implementation in desisurvey.tiles needs to be able to handle this, as do tile LST assignments, etc.
desisurvey.schedule
assumes that all tiles have IN_DESI==1
(see here), but for Survey Validation mock-ups (and perhaps Survey Validation itself) it would be convenient if there were an option to use non-DESI tiles / pointings.
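Such an option could amount to making the tile selection explicit rather than hard-coded; a sketch, with the `include_non_desi` argument being a hypothetical name:

```python
import numpy as np

def select_tiles(in_desi, include_non_desi=False):
    """Boolean mask over tiles given their IN_DESI flags.

    include_non_desi (hypothetical option) keeps all tiles, for SV-style
    tile files; the default reproduces the current IN_DESI==1 assumption.
    """
    in_desi = np.asarray(in_desi)
    if include_non_desi:
        return np.ones(in_desi.shape, bool)
    return in_desi == 1
```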
(Capturing some recent discussion in desi-data)
From Michael Levi:
There should be a minimum exposure time limit in the code as well. There are thermal
restrictions on how often we can run the robots. This might have an impact on BGS.
The design minimum is 800secs but it might be possible to go faster, at 400secs the
focal plane heats up 9C (delta from ambient) and we have to keep the focal plane temp
under control so that it doesn't disrupt other factors (eg. guiding, seeing, robot lifetime,
thermal interlocks). I think you might set that time to t_min=800s and then see what
impact that has.
I expect this will impact the BRIGHT program, where the shortest exposures are now ~300s. We will need to decide what to do in the scheduler when, e.g., there is a BRIGHT tile that could reach its target SNR in 400s. Should we:
Another impact will be how cosmic splits are implemented. We currently split any exposure that is forecast to last 20 < texp < 40 mins into two exposures of texp/2. For 20 < texp < 27 mins, this would violate the 800s ~ 13 min constraint.
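The current split rule and its conflict with the proposed floor can be sketched as follows; this is illustrative, not the scheduler's implementation:

```python
def cosmic_split(texp_min, tmin_s=800.0):
    """Apply the 20-40 min cosmic-split rule (times in minutes).

    Returns (exposures, respects_tmin): the list of exposure lengths and a
    flag for whether each piece meets the proposed t_min=800 s design minimum.
    """
    if 20.0 < texp_min < 40.0:
        halves = [texp_min / 2.0, texp_min / 2.0]
        return halves, halves[0] * 60.0 >= tmin_s
    return [texp_min], texp_min * 60.0 >= tmin_s
```

For a 24 min forecast exposure the split produces two 12 min pieces, below the ~13.3 min floor, which is exactly the 20 < texp < 27 min conflict described above.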
From a recent email by DJS to desi-data:
The Mayall with the upgraded TCS has overheads that are well-fit with
Overhead = 24.5 + 2.3 * Slew [sec, for slews < 10 deg]
= 27 + 2.3 * Slew [sec, for slews > 10 deg]
where Slew is the maximum of the HA and Declination slews.
I don’t know the reason for the discontinuity at 10 deg slews.
This looks to be ~17 sec more than what’s currently being assumed.
The scatter of some overhead times to larger values may be due to
dome slews; I’ve not dug into those cases.
Dome slews are another item that we should be considering. We ignored
this for the MzLS imaging survey, yet if one’s in a situation of observing
multiple fields right around HA=0, one could find oneself rotating the dome
180 deg between exposures. It may be enough to have the afternoon
planning optimize some of these out.
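The overhead fit quoted above translates directly into code; note this ignores the dome-slew scatter also mentioned:

```python
def slew_overhead(slew_deg):
    """Overhead in seconds from the quoted fit.

    slew_deg is the maximum of the HA and Declination slews; the fit has a
    small discontinuity at 10 deg (reason unknown, per the quote).
    """
    return (24.5 if slew_deg < 10.0 else 27.0) + 2.3 * slew_deg
```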
The pyephem library is now deprecated. This issue is to replace it with either astropy or skyfield, preferring astropy unless it is missing some functionality or too slow.
The use of pyephem is an implementation detail and will not change any public API or the format of the current ephemerides file, so this is not an urgent task (but still worth doing given the DESI timescale).
Add .travis.yml to run tests automatically
Currently, nextfield.py calculates the local apparent sidereal time, which is valid for the current epoch (i.e. JNow), but the RA and Dec of tiles are in J2000. This will affect the determination of which tiles are within 1 hr of right ascension of the meridian, and also the calculation of their altitude.
Proposed change/fix:
This issue is to discuss and converge on a first implementation of a BRIGHT program exposure time model. As a reminder, we currently use the DARK/GRAY model for BRIGHT exposures.
I am following the same approach already used for the DARK/GRAY model here, which has two components:
nominal_conditions:
# Moon below the horizon
seeing: 1.1 arcsec
airmass: 1.0
transparency: 1.0
EBV: 0.0
I believe @moustakas is working now on this component and I hope he can post updates here. We currently assume the answer is 300s, but if the answer is much different we need to know soon.
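The shape of such a model, relative to the nominal conditions listed above, can be sketched as a multiplicative factor on the nominal exposure time. Every exponent and coefficient here is a placeholder for illustration, not the desisurvey DARK/GRAY model:

```python
def exposure_factor(seeing, airmass, transparency, ebv):
    """Multiplier on the nominal exposure time (placeholder exponents)."""
    f = (seeing / 1.1) ** 2                  # nominal seeing 1.1 arcsec
    f *= airmass ** 1.25                     # atmospheric extinction (placeholder power)
    f /= transparency ** 2                   # S/N scales with transparency
    f *= 10.0 ** (2.0 * 2.165 * ebv / 2.5)  # dust; 2.165 is a placeholder coefficient
    return f
```

Under exactly nominal conditions (seeing 1.1 arcsec, airmass 1.0, transparency 1.0, EBV 0.0) the factor is 1, reproducing the nominal exposure time.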
The auto-generated docs have some errors like:
ImportError: No module named 'desiutil'
I suspect this is a simple RTD/sphinx config issue so I am assigning to our resident expert @weaverba137.
The instructions in surveysim/doc/tutorial.md work on my laptop, but when I try to run the first step of surveyplan
using the pre-installed code at NERSC, it crashes. Traceback below. I get the same error with both desisurvey/0.8.1 on cori and desisurvey/master on edison.
NERSC has numpy 1.11.3 while the laptop tests have numpy 1.13.0. On my laptop I also don't get the "FutureWarning: in the future, boolean array-likes will be handled as a boolean array index" prior to the exception.
[cori07 end2end] surveyplan --create --duration 100 --verbose --plots
INFO:utils.py:94:freeze_iers: Freezing IERS table used by astropy time, coordinates.
INFO:ephemerides.py:238:__init__: Saving ephemerides to /global/homes/s/sjbailey/desi/dev/end2end/output/ephem_2019-08-28_2024-07-13.fits
INFO:schedule.py:612:initialize: Footprint contains 1154 pixels.
INFO:schedule.py:706:initialize: Starting Sep 2019 (completed 4/1781 nights)
INFO:schedule.py:706:initialize: Starting Oct 2019 (completed 34/1781 nights)
...
INFO:schedule.py:706:initialize: Starting Jun 2024 (completed 1739/1781 nights)
INFO:schedule.py:706:initialize: Starting Jul 2024 (completed 1769/1781 nights)
INFO:schedule.py:805:initialize: Plan initialization saved to /global/homes/s/sjbailey/desi/dev/end2end/output/scheduler.fits
INFO:progress.py:262:save: Saved progress to /global/homes/s/sjbailey/desi/dev/end2end/output/progress_2019-08-28.fits.
INFO:surveyplan.py:121:main: Planning observations for 2019-08-28 to 2019-12-06.
INFO:plan.py:266:update: Updating plan for 2019-08-28 to 2019-12-06
INFO:plan.py:237:update_active: Adding 324 active tiles from group 1 priority 9
INFO:plan.py:237:update_active: Adding 321 active tiles from group 2 priority 9
INFO:plan.py:237:update_active: Adding 322 active tiles from group 3 priority 9
INFO:plan.py:237:update_active: Adding 319 active tiles from group 4 priority 9
INFO:plan.py:237:update_active: Adding 322 active tiles from group 5 priority 9
INFO:plan.py:237:update_active: Adding 319 active tiles from group 6 priority 9
Optimizing 645 active DARK tiles.
/global/common/cori/contrib/desi/code/desisurvey/0.8.1/lib/python3.5/site-packages/desisurvey-0.8.1-py3.5.egg/desisurvey/optimize.py:107: FutureWarning: in the future, boolean array-likes will be handled as a boolean array index
lst = wrap(e['lst'][sel.flat], origin)
Traceback (most recent call last):
File "/global/common/cori/contrib/desi/code/desisurvey/0.8.1/bin/surveyplan", line 4, in <module>
__import__('pkg_resources').run_script('desisurvey==0.8.1', 'surveyplan')
File "/global/common/cori/contrib/desi/code/desiconda/20170613-1.1.4-spectro_conda/lib/python3.5/site-packages/setuptools-27.2.0-py3.5.egg/pkg_resources/__init__.py", line 744, in run_script
File "/global/common/cori/contrib/desi/code/desiconda/20170613-1.1.4-spectro_conda/lib/python3.5/site-packages/setuptools-27.2.0-py3.5.egg/pkg_resources/__init__.py", line 1499, in run_script
File "/global/common/cori/contrib/desi/code/desisurvey/0.8.1/lib/python3.5/site-packages/desisurvey-0.8.1-py3.5.egg/EGG-INFO/scripts/surveyplan", line 15, in <module>
desisurvey.scripts.surveyplan.main(args)
File "/global/common/cori/contrib/desi/code/desisurvey/0.8.1/lib/python3.5/site-packages/desisurvey-0.8.1-py3.5.egg/desisurvey/scripts/surveyplan.py", line 134, in main
nopts=(args.nopts,), plot_basename=plots)
File "/global/common/cori/contrib/desi/code/desisurvey/0.8.1/lib/python3.5/site-packages/desisurvey-0.8.1-py3.5.egg/desisurvey/plan.py", line 275, in update
popt = get_optimizer(plan, scheduler, program, start, stop, init)
File "/global/common/cori/contrib/desi/code/desisurvey/0.8.1/lib/python3.5/site-packages/desisurvey-0.8.1-py3.5.egg/desisurvey/plan.py", line 254, in get_optimizer
scheduler, program, plan['tileid'][sel], start, stop, init=init)
File "/global/common/cori/contrib/desi/code/desisurvey/0.8.1/lib/python3.5/site-packages/desisurvey-0.8.1-py3.5.egg/desisurvey/optimize.py", line 111, in __init__
lst, bins=nbins, range=(origin, origin + 360), weights=wgt)
File "/global/common/cori/contrib/desi/code/desiconda/20170613-1.1.4-spectro_conda/lib/python3.5/site-packages/numpy/lib/function_base.py", line 487, in histogram
'weights should have the same shape as a.')
ValueError: weights should have the same shape as a.