
jwebbinar_prep's Introduction

JWebbinar material

Welcome to the maintained version of the JWebbinar notebooks. If you are looking for the version that matches the recordings and PDF versions of the notebooks, you should navigate to the appropriate branch (branch "webbinar1" for JWebbinar 1, branch "webbinar2" for JWebbinar 2, and so on).

The majority of the notebooks in this repository run in an environment with the latest released versions of the JWST Calibration pipeline and Jdaviz (which installs astropy, specutils, matplotlib, and other packages required for JWST analysis).

To download the material, you can clone this repository:
git clone https://github.com/spacetelescope/jwebbinar_prep.git ~/my_jwebbinar_prep
cd ~/my_jwebbinar_prep

Alternatively, you can download individual notebooks manually: open the pipeline_products_session folder, click on a file, then right-click on "Raw" and select "Save link as".
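If you prefer to script the download of a single notebook, here is a minimal sketch using the raw.githubusercontent.com URL scheme (the branch name and notebook path below are placeholders; substitute the file you actually want):

import urllib.request

# Placeholders: adjust the branch ("main") and notebook path as needed
base = "https://raw.githubusercontent.com/spacetelescope/jwebbinar_prep/main"
notebook = "pipeline_products_session/jwst-data-products-part1-solutions.ipynb"

# Save the notebook into the current directory under its own filename
urllib.request.urlretrieve(f"{base}/{notebook}", notebook.split("/")[-1])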

To run these notebooks locally, you will need a Python environment capable of running Jupyter notebooks. If you are new to Python, you may want to look at https://www.python.org/about/gettingstarted/ for resources on how to install and learn the basics of Python. Depending on your preferences and system, you may find the install instructions there sufficient, but note that many scientists find it easier to use the Anaconda Python distribution and package manager: https://www.anaconda.com/products/individual.

Using Anaconda or Miniconda, first create an environment with the name of your choice (we use jwebbinar in this example), then install the necessary packages:
conda create -n jwebbinar python=3.9
conda activate jwebbinar
pip install jwst
pip install jdaviz
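To confirm the installation worked, a quick sanity check from inside the activated environment (a minimal sketch; it only verifies that the key packages import):

# Run in a Python session or notebook cell inside the jwebbinar environment
import jwst
import jdaviz
print(jwst.__version__, jdaviz.__version__)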

For JWebbinar 6 (pointsource_imaging), you should also include:
pip install pysynphot
pip install git+https://github.com/spacetelescope/webbpsf.git
Make sure to follow the instructions on where to store the reference files needed by these two packages.
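As a hedged sketch of what that setup typically looks like in a notebook: pysynphot and webbpsf read the PYSYN_CDBS and WEBBPSF_PATH environment variables, respectively. The directories below are placeholders, so follow each package's own instructions for the actual download locations and contents:

import os

# Placeholders: point these at wherever you actually unpacked the reference files
os.environ["PYSYN_CDBS"] = os.path.expanduser("~/data/pysynphot_cdbs")
os.environ["WEBBPSF_PATH"] = os.path.expanduser("~/data/webbpsf-data")

# Set the variables before importing the packages so they are picked up
import pysynphot
import webbpsf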

For JWebbinar 10 (simulators_session) and 15 (pdrs4all), you will need to install the simulation software MIRAGE. Instructions can be found here:
https://mirage-data-simulator.readthedocs.io/en/latest/install.html
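As with the packages above, MIRAGE needs to know where its reference data live; per the linked install instructions it reads the MIRAGE_DATA environment variable. A minimal sketch (the path is a placeholder; verify the variable name and data download against those instructions):

import os

# Placeholder path; see the MIRAGE install instructions for the reference data download
os.environ["MIRAGE_DATA"] = os.path.expanduser("~/data/mirage_data")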

You can now run jupyter notebook from the folder where you downloaded the JWebbinar notebooks.

If you have problems or questions, feel free to reach out to the JWST Help Desk at http://jwsthelp.stsci.edu/.

jwebbinar_prep's People

Contributors

aliciacanipe, astroolivine, bhilbert4, bmorris3, camipacifici, cslocum, drlaw1558, drvdputt, duytnguyendtn, eteq, ewislowski, javerbukh, jaymedina, jaytmiller, jmuzerolle, jotaylor, kecnry, mcorrenti, mlibralato, msanchezst, nespinoza, ojustino, orifox, pahdb, patrickogle, pllim, robelgeda, rosteen, skendrew, ttdu


jwebbinar_prep's Issues

Imaging mode notebooks don't work with the latest version of the pipeline

The simulated data used in the notebooks in the imaging_mode directory cause the latest version of the pipeline to crash. This is due to some schema changes in the datamodels between the time that the notebooks were created and now.

Given that we have working versions of these notebooks in the pipeline_inflight directory, maybe we should remove the imaging_mode notebooks?

specutils issue

In prepping for a JWebbinar I found that the specutils notebook no longer works. It looks like a problem with labelling the axis in microns. A screen grab is attached for context; the long error is below.

Version is 1.4.2.dev37+gd7507aa2 which we got during the setup.


Totally reproducible

/usr/local/lib/python3.8/dist-packages/pyparsing/core.py in parse_string(self, instring, parse_all, parseAll)
1106 # catch and re-raise exception from here, clearing out pyparsing internal stack trace
-> 1107 raise exc.with_traceback(None)
1108 else:

ParseFatalException: Unknown symbol: \mu, found '' (at char 8), (line:1, col:9)

The above exception was the direct cause of the following exception:

ValueError Traceback (most recent call last)
/usr/local/lib/python3.8/dist-packages/matplotlib/pyplot.py in post_execute()
137 def post_execute():
138 if matplotlib.is_interactive():
--> 139 draw_all()
140
141 # IPython >= 2

/usr/local/lib/python3.8/dist-packages/matplotlib/_pylab_helpers.py in draw_all(cls, force)
135 for manager in cls.get_all_fig_managers():
136 if force or manager.canvas.figure.stale:
--> 137 manager.canvas.draw_idle()
138
139

/usr/local/lib/python3.8/dist-packages/matplotlib/backend_bases.py in draw_idle(self, *args, **kwargs)
2053 if not self._is_idle_drawing:
2054 with self._idle_draw_cntx():
-> 2055 self.draw(*args, **kwargs)
2056
2057 def get_width_height(self):

/usr/local/lib/python3.8/dist-packages/matplotlib/backends/backend_agg.py in draw(self)
404 (self.toolbar._wait_cursor_for_draw_cm() if self.toolbar
405 else nullcontext()):
--> 406 self.figure.draw(self.renderer)
407 # A GUI class may be need to update a window using this draw, so
408 # don't forget to call the superclass.

/usr/local/lib/python3.8/dist-packages/matplotlib/artist.py in draw_wrapper(artist, renderer, *args, **kwargs)
72 @wraps(draw)
73 def draw_wrapper(artist, renderer, *args, **kwargs):
---> 74 result = draw(artist, renderer, *args, **kwargs)
75 if renderer._rasterizing:
76 renderer.stop_rasterizing()

/usr/local/lib/python3.8/dist-packages/matplotlib/artist.py in draw_wrapper(artist, renderer, *args, **kwargs)
49 renderer.start_filter()
50
---> 51 return draw(artist, renderer, *args, **kwargs)
52 finally:
53 if artist.get_agg_filter() is not None:

/usr/local/lib/python3.8/dist-packages/matplotlib/figure.py in draw(self, renderer)
2788
2789 self.patch.draw(renderer)
-> 2790 mimage._draw_list_compositing_images(
2791 renderer, self, artists, self.suppressComposite)
2792

/usr/local/lib/python3.8/dist-packages/matplotlib/image.py in _draw_list_compositing_images(renderer, parent, artists, suppress_composite)
130 if not_composite or not has_images:
131 for a in artists:
--> 132 a.draw(renderer)
133 else:
134 # Composite any adjacent images together

/usr/local/lib/python3.8/dist-packages/matplotlib/artist.py in draw_wrapper(artist, renderer, *args, **kwargs)
49 renderer.start_filter()
50
---> 51 return draw(artist, renderer, *args, **kwargs)
52 finally:
53 if artist.get_agg_filter() is not None:

/usr/local/lib/python3.8/dist-packages/matplotlib/_api/deprecation.py in wrapper(*inner_args, **inner_kwargs)
429 else deprecation_addendum,
430 **kwargs)
--> 431 return func(*inner_args, **inner_kwargs)
432
433 return wrapper

/usr/local/lib/python3.8/dist-packages/matplotlib/axes/_base.py in draw(self, renderer, inframe)
2919 renderer.stop_rasterizing()
2920
-> 2921 mimage._draw_list_compositing_images(renderer, self, artists)
2922
2923 renderer.close_group('axes')

/usr/local/lib/python3.8/dist-packages/matplotlib/image.py in _draw_list_compositing_images(renderer, parent, artists, suppress_composite)
130 if not_composite or not has_images:
131 for a in artists:
--> 132 a.draw(renderer)
133 else:
134 # Composite any adjacent images together

/usr/local/lib/python3.8/dist-packages/matplotlib/artist.py in draw_wrapper(artist, renderer, *args, **kwargs)
49 renderer.start_filter()
50
---> 51 return draw(artist, renderer, *args, **kwargs)
52 finally:
53 if artist.get_agg_filter() is not None:

/usr/local/lib/python3.8/dist-packages/matplotlib/axis.py in draw(self, renderer, *args, **kwargs)
1153 self._update_label_position(renderer)
1154
-> 1155 self.label.draw(renderer)
1156
1157 self._update_offset_text_position(ticklabelBoxes, ticklabelBoxes2)

/usr/local/lib/python3.8/dist-packages/matplotlib/artist.py in draw_wrapper(artist, renderer, *args, **kwargs)
49 renderer.start_filter()
50
---> 51 return draw(artist, renderer, *args, **kwargs)
52 finally:
53 if artist.get_agg_filter() is not None:

/usr/local/lib/python3.8/dist-packages/matplotlib/text.py in draw(self, renderer)
677
678 with _wrap_text(self) as textobj:
--> 679 bbox, info, descent = textobj._get_layout(renderer)
680 trans = textobj.get_transform()
681

/usr/local/lib/python3.8/dist-packages/matplotlib/text.py in _get_layout(self, renderer)
312 clean_line, ismath = self._preprocess_math(line)
313 if clean_line:
--> 314 w, h, d = renderer.get_text_width_height_descent(
315 clean_line, self._fontproperties, ismath=ismath)
316 else:

/usr/local/lib/python3.8/dist-packages/matplotlib/backends/backend_agg.py in get_text_width_height_descent(self, s, prop, ismath)
233 if ismath:
234 ox, oy, width, height, descent, fonts, used_characters =
--> 235 self.mathtext_parser.parse(s, self.dpi, prop)
236 return width, height, descent
237

/usr/local/lib/python3.8/dist-packages/matplotlib/mathtext.py in parse(self, s, dpi, prop, _force_standard_ps_fonts)
450 # mathtext.fontset rcParams also affect the parse (e.g. by affecting
451 # the glyph metrics).
--> 452 return self._parse_cached(s, dpi, prop, _force_standard_ps_fonts)
453
454 @functools.lru_cache(50)

/usr/local/lib/python3.8/dist-packages/matplotlib/mathtext.py in _parse_cached(self, s, dpi, prop, force_standard_ps_fonts)
471 self.class._parser = _mathtext.Parser()
472
--> 473 box = self._parser.parse(s, font_output, fontsize, dpi)
474 font_output.set_canvas_size(box.width, box.height, box.depth)
475 return font_output.get_results(box)

/usr/local/lib/python3.8/dist-packages/matplotlib/_mathtext.py in parse(self, s, fonts_object, fontsize, dpi)
2277 result = self._expression.parseString(s)
2278 except ParseBaseException as err:
-> 2279 raise ValueError("\n".join(["",
2280 err.line,
2281 " " * (err.column - 1) + "^",

ValueError:
\mathrm{\mu m}
^
Unknown symbol: \mu, found '' (at char 8), (line:1, col:9)


ParseFatalException Traceback (most recent call last)
/usr/local/lib/python3.8/dist-packages/matplotlib/_mathtext.py in parse(self, s, fonts_object, fontsize, dpi)
2276 try:
-> 2277 result = self._expression.parseString(s)
2278 except ParseBaseException as err:

/usr/local/lib/python3.8/dist-packages/pyparsing/core.py in parse_string(self, instring, parse_all, parseAll)
1106 # catch and re-raise exception from here, clearing out pyparsing internal stack trace
-> 1107 raise exc.with_traceback(None)
1108 else:

ParseFatalException: Unknown symbol: \mu, found '' (at char 8), (line:1, col:9)

The above exception was the direct cause of the following exception:

ValueError Traceback (most recent call last)
/usr/local/lib/python3.8/dist-packages/IPython/core/formatters.py in call(self, obj)
339 pass
340 else:
--> 341 return printer(obj)
342 # Finally look for special method names
343 method = get_real_method(obj, self.print_method)

/usr/local/lib/python3.8/dist-packages/IPython/core/pylabtools.py in (fig)
251
252 if 'png' in formats:
--> 253 png_formatter.for_type(Figure, lambda fig: print_figure(fig, 'png', **kwargs))
254 if 'retina' in formats or 'png2x' in formats:
255 png_formatter.for_type(Figure, lambda fig: retina_figure(fig, **kwargs))

/usr/local/lib/python3.8/dist-packages/IPython/core/pylabtools.py in print_figure(fig, fmt, bbox_inches, **kwargs)
135 FigureCanvasBase(fig)
136
--> 137 fig.canvas.print_figure(bytes_io, **kw)
138 data = bytes_io.getvalue()
139 if fmt == 'svg':

/usr/local/lib/python3.8/dist-packages/matplotlib/backend_bases.py in print_figure(self, filename, dpi, facecolor, edgecolor, orientation, format, bbox_inches, pad_inches, bbox_extra_artists, backend, **kwargs)
2228 else suppress())
2229 with ctx:
-> 2230 self.figure.draw(renderer)
2231
2232 if bbox_inches:

/usr/local/lib/python3.8/dist-packages/matplotlib/artist.py in draw_wrapper(artist, renderer, *args, **kwargs)
72 @wraps(draw)
73 def draw_wrapper(artist, renderer, *args, **kwargs):
---> 74 result = draw(artist, renderer, *args, **kwargs)
75 if renderer._rasterizing:
76 renderer.stop_rasterizing()

/usr/local/lib/python3.8/dist-packages/matplotlib/artist.py in draw_wrapper(artist, renderer, *args, **kwargs)
49 renderer.start_filter()
50
---> 51 return draw(artist, renderer, *args, **kwargs)
52 finally:
53 if artist.get_agg_filter() is not None:

/usr/local/lib/python3.8/dist-packages/matplotlib/figure.py in draw(self, renderer)
2788
2789 self.patch.draw(renderer)
-> 2790 mimage._draw_list_compositing_images(
2791 renderer, self, artists, self.suppressComposite)
2792

/usr/local/lib/python3.8/dist-packages/matplotlib/image.py in _draw_list_compositing_images(renderer, parent, artists, suppress_composite)
130 if not_composite or not has_images:
131 for a in artists:
--> 132 a.draw(renderer)
133 else:
134 # Composite any adjacent images together

/usr/local/lib/python3.8/dist-packages/matplotlib/artist.py in draw_wrapper(artist, renderer, *args, **kwargs)
49 renderer.start_filter()
50
---> 51 return draw(artist, renderer, *args, **kwargs)
52 finally:
53 if artist.get_agg_filter() is not None:

/usr/local/lib/python3.8/dist-packages/matplotlib/_api/deprecation.py in wrapper(*inner_args, **inner_kwargs)
429 else deprecation_addendum,
430 **kwargs)
--> 431 return func(*inner_args, **inner_kwargs)
432
433 return wrapper

/usr/local/lib/python3.8/dist-packages/matplotlib/axes/_base.py in draw(self, renderer, inframe)
2919 renderer.stop_rasterizing()
2920
-> 2921 mimage._draw_list_compositing_images(renderer, self, artists)
2922
2923 renderer.close_group('axes')

/usr/local/lib/python3.8/dist-packages/matplotlib/image.py in _draw_list_compositing_images(renderer, parent, artists, suppress_composite)
130 if not_composite or not has_images:
131 for a in artists:
--> 132 a.draw(renderer)
133 else:
134 # Composite any adjacent images together

/usr/local/lib/python3.8/dist-packages/matplotlib/artist.py in draw_wrapper(artist, renderer, *args, **kwargs)
49 renderer.start_filter()
50
---> 51 return draw(artist, renderer, *args, **kwargs)
52 finally:
53 if artist.get_agg_filter() is not None:

/usr/local/lib/python3.8/dist-packages/matplotlib/axis.py in draw(self, renderer, *args, **kwargs)
1153 self._update_label_position(renderer)
1154
-> 1155 self.label.draw(renderer)
1156
1157 self._update_offset_text_position(ticklabelBoxes, ticklabelBoxes2)

/usr/local/lib/python3.8/dist-packages/matplotlib/artist.py in draw_wrapper(artist, renderer, *args, **kwargs)
49 renderer.start_filter()
50
---> 51 return draw(artist, renderer, *args, **kwargs)
52 finally:
53 if artist.get_agg_filter() is not None:

/usr/local/lib/python3.8/dist-packages/matplotlib/text.py in draw(self, renderer)
677
678 with _wrap_text(self) as textobj:
--> 679 bbox, info, descent = textobj._get_layout(renderer)
680 trans = textobj.get_transform()
681

/usr/local/lib/python3.8/dist-packages/matplotlib/text.py in _get_layout(self, renderer)
312 clean_line, ismath = self._preprocess_math(line)
313 if clean_line:
--> 314 w, h, d = renderer.get_text_width_height_descent(
315 clean_line, self._fontproperties, ismath=ismath)
316 else:

/usr/local/lib/python3.8/dist-packages/matplotlib/backends/backend_agg.py in get_text_width_height_descent(self, s, prop, ismath)
233 if ismath:
234 ox, oy, width, height, descent, fonts, used_characters =
--> 235 self.mathtext_parser.parse(s, self.dpi, prop)
236 return width, height, descent
237

/usr/local/lib/python3.8/dist-packages/matplotlib/mathtext.py in parse(self, s, dpi, prop, _force_standard_ps_fonts)
450 # mathtext.fontset rcParams also affect the parse (e.g. by affecting
451 # the glyph metrics).
--> 452 return self._parse_cached(s, dpi, prop, _force_standard_ps_fonts)
453
454 @functools.lru_cache(50)

/usr/local/lib/python3.8/dist-packages/matplotlib/mathtext.py in _parse_cached(self, s, dpi, prop, force_standard_ps_fonts)
471 self.class._parser = _mathtext.Parser()
472
--> 473 box = self._parser.parse(s, font_output, fontsize, dpi)
474 font_output.set_canvas_size(box.width, box.height, box.depth)
475 return font_output.get_results(box)

/usr/local/lib/python3.8/dist-packages/matplotlib/_mathtext.py in parse(self, s, fonts_object, fontsize, dpi)
2277 result = self._expression.parseString(s)
2278 except ParseBaseException as err:
-> 2279 raise ValueError("\n".join(["",
2280 err.line,
2281 " " * (err.column - 1) + "^",

ValueError:
\mathrm{\mu m}
^
Unknown symbol: \mu, found '' (at char 8), (line:1, col:9)

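The failure can be reproduced outside specutils with any mathtext label containing \mu, which suggests a matplotlib/pyparsing incompatibility in the environment rather than a specutils bug (a minimal sketch, assuming the same package versions as in the traceback; with compatible versions this runs cleanly):

import matplotlib
matplotlib.use("Agg")  # render off-screen so draw() runs without a GUI
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.set_xlabel(r"$\mathrm{\mu m}$")  # same label string as in the traceback above
fig.canvas.draw()  # raises the same "Unknown symbol: \mu" error with the affected versions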

FileExistsError

I'm getting this error while running jwebbinar_prep/pipeline_products_session/jwst-data-products-part1-solutions.ipynb

---------------------------------------------------------------------------
FileExistsError                           Traceback (most recent call last)
<ipython-input-5-5dd118496b0b> in <module>
     11 # Create local links to the cached copy - this is not necessary - you can use the `demo_file`/`demo_ex_file`
     12 # names directly.  But this is a convenient to see what you've downloaded and remind yourself laiter
---> 13 os.symlink(demo_file, uncal_obs)
     14 os.symlink(demo_ex_file, exercise_obs)

FileExistsError: [Errno 17] File exists: '/home/jovyan/.astropy/cache/download/url/97f84d7cd7e7d94a04d4b6526e51b301/contents' -> 'example_nircam_imaging_uncal.fits'

Maybe you can check to see if those symlinks already exist?
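A hedged sketch of that check, assuming the demo_file/uncal_obs and demo_ex_file/exercise_obs names from the notebook cell in the traceback above:

import os

# Only create each link if nothing is already at the destination;
# os.path.lexists also catches broken symlinks left over from earlier runs
for target, link_name in [(demo_file, uncal_obs), (demo_ex_file, exercise_obs)]:
    if not os.path.lexists(link_name):
        os.symlink(target, link_name)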

Who is responsible for which contents here?

We get help desk calls about these notebooks because they are linked to from JDox articles. But how do we know who is responsible for what here? This repo is not well maintained and things keep breaking.

Failed caching mapping files

While running imaging_mode_stage_1.ipynb, an error occurred while creating an instance of the pipeline class:
Failed caching mapping files: Error fetching data for 'jwst_system_datalvl_0002.rmap' at CRDS server 'https://jwst-crds-pub.stsci.edu/' with mode 'http' : [Errno 13] Permission denied: '/path'

Happy to provide more information if required.
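This usually means CRDS is trying to cache reference files somewhere the user cannot write. A hedged workaround sketch is to point the standard CRDS environment variables at a writable directory before creating the pipeline instance (the cache location below is just an example):

import os

# CRDS_PATH: a writable local cache directory (example location)
os.environ["CRDS_PATH"] = os.path.expanduser("~/crds_cache")
# CRDS_SERVER_URL: the server named in the error message above
os.environ["CRDS_SERVER_URL"] = "https://jwst-crds-pub.stsci.edu"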

error in jwst-data-products-part2-solutions datamodels.MultiSpecModel() method

Thanks for putting these webbinars together. You don't really appreciate how much work and complexity goes into space telescope images when you only see the final product; poking through the notebooks here just starts to touch the tip of the iceberg.

Anyway, in the 4.2 Spectroscopy section of jwst-data-products-part2-solutions, I'm getting an error on the following cell:

# Open the same file using a MultiSpecModel (use: spec)
spec = datamodels.MultiSpecModel(wfss_x1d_file[1])

It has been almost a year since the webbinar, and the version numbers of the modules I have loaded are all a fair amount higher. Maybe the data schemas have changed and this is OBE, but I figured I'd point it out in case someone else runs into this problem.

The trace is

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
Input In [36], in <cell line: 2>()
      1 # Open the same file using a MultiSpecModel (use: spec)
----> 2 spec = datamodels.MultiSpecModel(wfss_x1d_file[1])

File ~/miniconda3/envs/jwst/lib/python3.10/site-packages/jwst/datamodels/multispec.py:59, in MultiSpecModel.__init__(self, init, **kwargs)
     56     self.spec[0].spec_table = init.spec_table
     57     return
---> 59 super(MultiSpecModel, self).__init__(init=init, **kwargs)

File ~/miniconda3/envs/jwst/lib/python3.10/site-packages/stdatamodels/model_base.py:244, in DataModel.__init__(self, init, schema, memmap, pass_invalid_values, strict_validation, validate_on_assignment, cast_fits_arrays, validate_arrays, ignore_missing_extensions, **kwargs)
    241         init_fitsopen = init
    243     hdulist = fits.open(init_fitsopen, memmap=memmap)
--> 244     asdffile = fits_support.from_fits(
    245         hdulist, self._schema, self._ctx, **kwargs
    246     )
    247     self._file_references.append(_FileReference(hdulist))
    249 elif file_type == "asdf":

File ~/miniconda3/envs/jwst/lib/python3.10/site-packages/stdatamodels/fits_support.py:628, in from_fits(hdulist, schema, context, skip_fits_update, **kwargs)
    623 # Determine whether skipping the FITS loading can be done.
    624 skip_fits_update = _verify_skip_fits_update(
    625     skip_fits_update, hdulist, ff, context
    626 )
--> 628 known_keywords, known_datas = _load_from_schema(
    629     hdulist, schema, ff.tree, context, skip_fits_update=skip_fits_update
    630 )
    631 if not skip_fits_update:
    632     _load_extra_fits(hdulist, known_keywords, known_datas, ff.tree)

File ~/miniconda3/envs/jwst/lib/python3.10/site-packages/stdatamodels/fits_support.py:548, in _load_from_schema(hdulist, schema, tree, context, skip_fits_update)
    542                 recurse(schema['items'],
    543                         path + [i],
    544                         combiner,
    545                         {'hdu_index': i})
    546             return True
--> 548 mschema.walk_schema(schema, callback)
    549 return known_keywords, known_datas

File ~/miniconda3/envs/jwst/lib/python3.10/site-packages/stdatamodels/schema.py:149, in walk_schema(schema, callback, ctx)
    147 if ctx is None:
    148     ctx = {}
--> 149 recurse(schema, [], None, ctx)

File ~/miniconda3/envs/jwst/lib/python3.10/site-packages/stdatamodels/schema.py:137, in walk_schema.<locals>.recurse(schema, path, combiner, ctx)
    135 if schema.get('type') == 'object':
    136     for key, val in schema.get('properties', {}).items():
--> 137         recurse(val, path + [key], combiner, ctx)
    139 if schema.get('type') == 'array':
    140     items = schema.get('items', {})

File ~/miniconda3/envs/jwst/lib/python3.10/site-packages/stdatamodels/schema.py:124, in walk_schema.<locals>.recurse(schema, path, combiner, ctx)
    123 def recurse(schema, path, combiner, ctx):
--> 124     if callback(schema, path, combiner, ctx, recurse):
    125         return
    127     for c in ['allOf', 'not']:

File ~/miniconda3/envs/jwst/lib/python3.10/site-packages/stdatamodels/fits_support.py:542, in _load_from_schema.<locals>.callback(schema, path, combiner, ctx, recurse)
    540 if has_fits_hdu:
    541     for i in range(max_extver):
--> 542         recurse(schema['items'],
    543                 path + [i],
    544                 combiner,
    545                 {'hdu_index': i})
    546     return True

File ~/miniconda3/envs/jwst/lib/python3.10/site-packages/stdatamodels/schema.py:137, in walk_schema.<locals>.recurse(schema, path, combiner, ctx)
    135 if schema.get('type') == 'object':
    136     for key, val in schema.get('properties', {}).items():
--> 137         recurse(val, path + [key], combiner, ctx)
    139 if schema.get('type') == 'array':
    140     items = schema.get('items', {})

File ~/miniconda3/envs/jwst/lib/python3.10/site-packages/stdatamodels/schema.py:124, in walk_schema.<locals>.recurse(schema, path, combiner, ctx)
    123 def recurse(schema, path, combiner, ctx):
--> 124     if callback(schema, path, combiner, ctx, recurse):
    125         return
    127     for c in ['allOf', 'not']:

File ~/miniconda3/envs/jwst/lib/python3.10/site-packages/stdatamodels/fits_support.py:526, in _load_from_schema.<locals>.callback(schema, path, combiner, ctx, recurse)
    522             properties.put_value(path, result, tree)
    524 elif 'fits_hdu' in schema and (
    525         'max_ndim' in schema or 'ndim' in schema or 'datatype' in schema):
--> 526     result = _fits_array_loader(
    527         hdulist, schema, ctx.get('hdu_index'), known_datas, context)
    529     if result is None and context._validate_on_assignment:
    530         validate.value_change(path, result, schema, context)

File ~/miniconda3/envs/jwst/lib/python3.10/site-packages/stdatamodels/fits_support.py:478, in _fits_array_loader(hdulist, schema, hdu_index, known_datas, context)
    475     return None
    477 known_datas.add(hdu)
--> 478 return from_fits_hdu(hdu, schema, context._cast_fits_arrays)

File ~/miniconda3/envs/jwst/lib/python3.10/site-packages/stdatamodels/fits_support.py:667, in from_fits_hdu(hdu, schema, cast_arrays)
    664     listeners = None
    666 # Cast array to type mentioned in schema
--> 667 data = properties._cast(data, schema)
    669 # Casting a table loses the column listeners, so restore them
    670 if listeners is not None:

File ~/miniconda3/envs/jwst/lib/python3.10/site-packages/stdatamodels/properties.py:84, in _cast(val, schema)
     81             t['shape'] = shape
     83 dtype = ndarray.asdf_datatype_to_numpy_dtype(schema['datatype'])
---> 84 val = util.gentle_asarray(val, dtype)
     86 if dtype.fields is not None:
     87     val = _as_fitsrec(val)

File ~/miniconda3/envs/jwst/lib/python3.10/site-packages/stdatamodels/util.py:59, in gentle_asarray(a, dtype)
     57     out_dtype.names = in_dtype.names
     58 else:
---> 59     raise ValueError(
     60         "Column names don't match schema. "
     61         "Schema has {0}. Data has {1}".format(
     62             str(out_names.difference(in_names)),
     63             str(in_names.difference(out_names))))
     65 new_dtype = []
     66 for i in range(len(out_dtype.fields)):

ValueError: Column names don't match schema. Schema has {'bkgd_var_flat', 'flux_var_rnoise', 'flux_var_poisson', 'sb_var_flat', 'bkgd_var_rnoise', 'sb_var_rnoise', 'bkgd_error', 'flux_var_flat', 'flux_error', 'bkgd_var_poisson', 'sb_var_poisson'}. Data has {'berror', 'error'}
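One thing worth trying (a hedged suggestion, not a confirmed fix): let jwst pick the model class from the file's own metadata instead of forcing MultiSpecModel. If the file was produced under an older schema this may still fail, in which case re-running the extraction with the current pipeline is probably needed.

# Hedged alternative: infer the model type from the file itself
spec = datamodels.open(wfss_x1d_file[1])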

Javascript Error: IPython is not defined

In jwst-data-products-part3-static.ipynb, I get "Javascript Error: IPython is not defined" following plot_spectra(spectrum, median_filter=11).


Note: this is in lab mode.

Remaining edits for MRS_Notebook1.ipynb

Since we needed to merge #46 before the completion of the technical review, I am listing the remaining needed edits in this issue. @drlaw1558 has said that late July could be a possible time to get these done.

  1. Complete the data loading procedure. It's good that we've given the data its own folder on Box and that there's now a note that the notebook will not run without downloading data from Box. The last step is to add a cell that carries out the download without extra input from the user (see the sketch after this list). It's OK to give the user the option to skip it, but there should be functionality to fetch the data if they choose to run all cells consecutively.
    • (If you're not sure how to download the data within the context of the notebook, see the "Download the data" section of nirspec_fs.ipynb from the same session for an example.)
  2. Address all remaining PEP8 issues. @drlaw1558 can either follow the instructions given in this repository's documentation (link forthcoming) or open a new pull request where I can add a commit that shows which cells still have issues. You can also view warnings from a previous commit here.
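For item 1, a rough sketch of what such a cell could look like (the Box URL and filename are placeholders, not the real links for this notebook; see the "Download the data" section of nirspec_fs.ipynb for the pattern actually used in this session):

import os
import urllib.request

# Placeholders: substitute the actual Box shared link and data filename
box_url = "https://stsci.box.com/shared/static/EXAMPLE.fits"
local_file = "mrs_example_data.fits"

# Skip the download if the file is already present, so re-running all cells is cheap
if not os.path.exists(local_file):
    urllib.request.urlretrieve(box_url, local_file)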

TSO notebook: PEP8

The TSO notebook was merged after review without fixing the PEP8 issues because of lack of time (#107).

This issue should be addressed in the next few months to fix what was missing from the review.

Possible error/performance problem with image_catalog.show_in_notebook()

I am seeing possible performance issues with the line image_catalog.show_in_notebook() in pipeline_products_session/jwst-data-products-part3-static.ipynb when running on the JWebbinar platform. This cell worked for me in Lab mode, but appears to cause my Firefox tab to hang indefinitely when running in Notebook mode. The table appears to be attempting to display in its entirety in the output cell. Saving the notebook after this cell (in Lab mode) also makes the notebook difficult and slow to load (in both Notebook and Lab modes) the next time.
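A possible mitigation sketch, assuming image_catalog is an astropy Table as in the notebook: render only a slice of the catalog, or cap the number of rows rendered per page, so the browser is not asked to lay out the full table at once.

# Show only the first rows of the catalog rather than the whole table
image_catalog[:25].show_in_notebook()

# Or keep the full table but limit how many rows are rendered per page
image_catalog.show_in_notebook(display_length=25)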
