digitalearthafrica / deafrica-sandbox-notebooks

Repository for Digital Earth Africa Sandbox, including: Jupyter notebooks, scripts, tools and workflows for geospatial analysis with Open Data Cube and xarray

License: Apache License 2.0



Digital Earth Africa Notebooks


License: The code in this repository is licensed under the Apache License, Version 2.0. Digital Earth Africa data is licensed under the Creative Commons by Attribution 4.0 license.

Contact: If you need assistance with any of the Jupyter Notebooks or Python code in this repository, please post a question on the Open Data Cube Slack channel or on the GIS Stack Exchange using the open-data-cube tag (you can view previously asked questions here). If you would like to report an issue with any of the scripts or notebooks in this repository, you can file one on the GitHub issues page.

Citing DE Africa Notebooks: If you use any of the notebooks, code or tools in this repository in your work, please reference them using the following citation:

Krause, C., Dunn, B., Bishop-Taylor, R., Adams, C., Burton, C., Alger, M., Chua, S., Phillips, C., Newey, V., Kouzoubov, K., Leith, A., Ayers, D., Hicks, A., DEA Notebooks contributors 2021. Digital Earth Australia notebooks and tools repository. Geoscience Australia, Canberra. https://doi.org/10.26186/145234

The Digital Earth Africa Notebooks repository (deafrica-sandbox-notebooks) hosts Jupyter Notebooks, Python scripts and workflows for analysing Digital Earth Africa (DE Africa) satellite data and derived products. This documentation is designed to provide a guide to getting started with DE Africa, and to showcase the wide range of geospatial analyses that can be achieved using DE Africa data and open-source software including Open Data Cube and xarray.

The repository is based around the following directory structure (from simple to increasingly complex applications):

  1. Beginners_guide: Introductory notebooks aimed at introducing Jupyter Notebooks and how to load, plot and interact with DE Africa data.

  2. Datasets: Notebooks introducing DE Africa's satellite datasets and derived products, including how to load each dataset and any special features of the data. Some external datasets that are useful for analysing and interpreting DE Africa products are also covered.

  3. Frequently_used_code: A recipe book of simple code examples demonstrating how to perform common geospatial analysis tasks using DE Africa and open-source software.

  4. Real_world_examples: More complex workflows demonstrating how DE Africa can be used to address real-world problems.

  5. Use_cases: Notebooks in this collection are developed for specific use cases of the Digital Earth Africa platform and may not run as seamlessly as notebooks in the other folders of this repository. They may contain less descriptive markdown and more complicated or bespoke analysis, and may take a long time to run. However, they contain useful analysis procedures and provide further examples for advanced users.

The supporting scripts and data for the notebooks are kept in the following directories:

  • Tools: Python functions and algorithms developed to assist in analysing DE Africa data (e.g. loading data, plotting, spatial analysis, machine learning).

  • Supplementary_data: Supplementary files required for the analyses above (e.g. images, rasters, shapefiles, training data).


Getting started with DE Africa Notebooks

To get started with using deafrica-sandbox-notebooks, visit the DE Africa Notebooks Wiki page. This page includes guides for getting started on the DE Africa Sandbox.

Once you're set up, the main option for interacting with deafrica-sandbox-notebooks and contributing back to the repository is through:

  • DE Africa notebooks using Git: Git is version-control software designed to help track changes to files and collaborate with multiple users on a project. Using Git is the recommended workflow for working with deafrica-sandbox-notebooks, as it makes it easy to stay up to date with the latest versions of functions and code, and makes it much harder to lose your work.

  • Set up Git authentication tokens: GitHub requires token-based authentication when using Git on the command line or via the API. Set up a personal access token by following the instructions in the GitHub Docs.


Contributing to DE Africa Notebooks

Main and working branches

The deafrica-sandbox-notebooks repository uses 'branches' to manage individuals' notebooks, and to allow easy publishing of notebooks ready to be shared. There are two main types of branches:

  • Main branch: The main branch contains DE Africa's collection of publicly available notebooks. The main branch is protected, and is only updated after new commits are reviewed and approved by the DE Africa team.
  • Working branches: All other branches in the repository are working spaces for users of deafrica-sandbox-notebooks. They have a unique name (typically named after the user). The notebooks on these branches can be works-in-progress and do not need to be pretty or complete. By using a working branch, it is easy to use scripts and algorithms from deafrica-sandbox-notebooks in your own work, or share and collaborate on a working version of a notebook or code.

Publishing notebooks to the main branch

Once you have a notebook that is ready to be published on the main branch, you can submit a 'pull request' in the Pull requests tab at the top of the repository. The default pull request template contains a check-list to ensure that all main branch Jupyter notebooks are consistent and well-documented so they can be understood by future users, and rendered correctly. Please ensure that as many of these checklist items are complete as possible, or leave a comment in the pull request asking for help with any remaining checklist items.

Draft pull requests

For pull requests you would like help with or that are a work in progress, consider using Github's draft pull request feature. This indicates that your work is still a draft, allowing you to get feedback from other DE Africa users before it is published on the main branch.


DE Africa Notebooks template notebook

A template notebook has been developed to make it easier to create new notebooks that meet all the pull request checklist requirements. The template notebook contains a simple structure and useful general advice on writing and formatting Jupyter notebooks. The template can be found here: DEAfrica_notebooks_template.ipynb

Using the template is not required for working branch notebooks, but is highly recommended as it will make it much easier to publish any notebooks on main in the future.


Approving pull requests

Anyone with admin access to the deafrica-sandbox-notebooks repository can approve 'pull requests'.

If the notebook meets all the checklist requirements, click the green 'Review' button and click 'Approve' (with an optional comment). You can also 'Request changes' here if any of the checklist items are not complete.

Once the pull request has been approved, you can merge it into the main branch. Select the 'Squash and merge' option from the drop-down menu to the right of the green merge button. Once you have merged the new branch in, delete it: the page will ask whether you would like to delete the now-merged branch; select 'Yes'.


Update: The default branch has been renamed!

October 2021

master is now named main in line with GitHub recommended naming conventions.

If you have a local clone created before 29 October 2021, you can update it by running the following commands.

git branch -m master main
git fetch origin
git branch -u origin/main main
git remote set-head origin -a

deafrica-sandbox-notebooks's People

Contributors

abradley60, alexgleith, andrewdhicks, caitlinadams, cbur24, dlubawy-ama, dunkgray, eefaye, eloise-b, fangfy, fivejjs, jcrattz, jdh-ama, kooie-cate, lavender-liu, lewistrotter, lisarebelo, mickwelli, mpho-sadiki, nanaboamah89, neginmoghaddam, nikhil003, nikitagandhi, nmoghaddam, robbibt, s-m-t-c, tom-butler, uchchwhash, vikineema, vnewey


deafrica-sandbox-notebooks's Issues

ModuleNotFoundError: No module named 'odc'

Hi All,

Please I need some help.

Is there any step-by-step material to solve the following error?

ModuleNotFoundError: No module named 'odc'

Also, all the resources and material on the platforms are very helpful, but I can't find any material on how to set up your computer so that the code works, i.e. making sure all the libraries are installed, environments are set up, etc. Am I missing this?

Additional information

How to set up my computer so that all the libraries below work:

%matplotlib inline

import datacube
import matplotlib.pyplot as plt
import geopandas as gpd
from datacube.utils import geometry

from deafrica_tools.datahandling import load_ard
from deafrica_tools.bandindices import calculate_indices
from deafrica_tools.plotting import rgb, map_shapefile
from deafrica_tools.spatial import xr_rasterize
from deafrica_tools.classification import HiddenPrints

Thanks
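Not part of the original thread, but one way to diagnose this class of error locally is to probe for each required module before importing anything; a minimal stdlib-only sketch (the package names are taken from the imports above, and `deafrica_tools` is assumed to be installable from this repository):

```python
import importlib.util

def report_missing(modules):
    """Return the subset of module names that are not importable here."""
    return [name for name in modules if importlib.util.find_spec(name) is None]

# Top-level packages behind the imports in the question above
required = ["datacube", "matplotlib", "geopandas", "deafrica_tools"]
missing = report_missing(required)
if missing:
    print("Missing packages:", ", ".join(missing))
else:
    print("All required packages are importable")
```

On the DE Africa Sandbox these packages are pre-installed; on a personal machine any names this reports as missing would need to be installed (e.g. via pip or conda) before the notebook code will run.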

Real world examples/Detecting change in urban extent

Looking at the urban change detection notebook, I just noticed the title "Load cloud-masked Sentinel-2 data", while the data used is actually Landsat. If the notebook can use both Landsat and Sentinel-2, should we mention it in the "product used" top header?

Load_ard

I tried loading 'ga_ls8c_gm_2_annual' and 'ga_ls8c_wofs_2_annual_summary' with the load_ard function but I got an error. Below is the code and the error encountered.
Code

    lat_range = (-6.2593, -5.8701) # Small (close fit)
    lon_range = (34.9901, 35.3641)

    query = {
        'x': lon_range,
        'y': lat_range,
        'group_by': 'solar_day',
        'resolution': (-30, 30),
        'align': (15, 15),
        'time': ("2018")
    }

    crs = mostcommon_crs(dc=dc, product='ga_ls8c_gm_2_annual', query=query)

    ds_map = load_ard(dc=dc,
                      products=['ga_ls8c_gm_2_annual'],
                      output_crs=crs,
                      **query)

Error thrown

    UnboundLocalError                         Traceback (most recent call last)
    in
    16 products=['ga_ls8c_gm_2_annual'],
    17 output_crs=crs,
    ---> 18 **query
    19 )
    20

    ~/dev/deafrica-sandbox-notebooks/Scripts/deafrica_datahandling.py in load_ard(dc, products, min_gooddata, pq_categories_s2, pq_categories_ls, mask_pixel_quality, ls7_slc_off, predicate, dtype, **kwargs)
        254 # If measurements are specified but do not include pixel quality bands,
        255 # add these to measurements according to collection
    --> 256 if (product_type == 'c2') or (product_type == 'fc'):
        257 print('Using pixel quality parameters for USGS Collection 2')
        258 fmask_band = 'quality_l2_aerosol'

    UnboundLocalError: local variable 'product_type' referenced before assignment
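The UnboundLocalError above follows a common Python pattern: `product_type` is only assigned inside branches that match known products, so an unrecognised product (here a geomedian) reaches the comparison with the variable never set. A minimal, self-contained sketch of the failure mode and the usual fix (initialise first, then fail with a clear message); the branching logic is a placeholder, not the real load_ard code:

```python
def classify_product_buggy(product):
    # product_type is only set for recognised products
    if product.startswith("ls") and "usgs" in product:
        product_type = "c1"
    elif product.startswith("s2"):
        product_type = "s2"
    # 'ga_ls8c_gm_2_annual' matches neither branch...
    return product_type == "c2"  # ...so this raises UnboundLocalError


def classify_product_fixed(product):
    product_type = None  # initialise, then fail early with a clear message
    if product.startswith("ls") and "usgs" in product:
        product_type = "c1"
    elif product.startswith("s2"):
        product_type = "s2"
    if product_type is None:
        raise ValueError(f"Product {product!r} is not supported by this function")
    return product_type == "c2"
```

With the fix, an unsupported product produces an explicit ValueError instead of a confusing UnboundLocalError deep inside the function.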

502 Bad Gateway File Save Error in Cape Town prod, dev sandboxes

Issue
When attempting to save changes to .ipynb files in the Cape Town dev or prod sandboxes, the browser says "Saving started" then produces the error "File Save Error for Invalid response: 502 Bad Gateway". The browser will then say "Saving failed". The file will not be saved. Any changes since the last save are not captured, and right click>Download only downloads the last saved version of the file.

[screenshot of the File Save Error]

This issue is localised to the Cape Town sandboxes; Oregon does not display it.

Frequency
Most files - intermittent. The error occurs on a daily basis, perhaps for about half an hour at a time; sometimes it resolves itself temporarily.
Some files - consistently refuse to save. See Frequently_used_code/Imagery_on_web_map.ipynb. This is sometimes solved by downloading and re-uploading the file (that solution worked for Burnt_area_mapping.ipynb, but does not exclude the file from the intermittent error).
The error occurs at different times of day and has no discernible pattern for file name or size. So far it has only affected .ipynb notebooks.

Severity
High: it is not possible to reliably save work, which is especially problematic at the end of the work day. Not all files experience the error at the same time.

Steps to recreate (consistent error)

  1. Log into a Cape Town sandbox.
  2. Open Frequently_used_code/Imagery_on_web_map.ipynb.
  3. Make a change (markdown or code, anything).
  4. Press Ctrl+S or the 'Save' button.

Steps to recreate (intermittent error)

  1. Log into a Cape Town sandbox.
  2. Create or open any .ipynb notebook and make a change.
  3. Save the notebook.
  4. Repeat steps 2-3 until the error occurs.

Mangrove analysis tool can be improved and access shapefile from a central repository

Here is the link to the DEA notebook on mangroves https://gist.github.com/mubea/036800765879bb41c9c52f982db14da4

The notebook relies on the user to create their own mangrove file for the notebook to run. However, in the ARDC the users only selected the coordinates of an area of interest (lat and lon extent) to perform analysis on mangroves (see example here https://gist.github.com/mubea/0f46874fc81c9af1e6722dc6ba7557ac). In addition, I have compiled the ARDC mangrove shapefiles here: https://drive.google.com/drive/folders/1yKSlANrVWMXRD-ywptCTMuDBKt80-AUa?usp=sharing

@caitlinadams @andrewdhicks @jcrattz please review

Trouble with Dask distributed for filmstrips notebook

I've started trying to transition the Change_filmstrips.ipynb notebook from DEA to DE Africa, but am running into some errors with Dask. I think it might be an environment issue, since the same code (aside from a few variable changes to work with Africa) is working fine in the DEA Sandbox. I've pasted the error below, and while looking into it, I noticed that this thread also had the same error message as I am currently getting: dask/distributed#3491

After selecting an area from the map, the following occurs as normal:

Starting analysis...
Using pixel quality parameters for USGS Collection 1
Finding datasets
    ls5_usgs_sr_scene
    ls7_usgs_sr_scene
    Ignoring SLC-off observations for ls7
    ls8_usgs_sr_scene
Applying pixel quality/cloud mask
Returning 235 time steps as a dask array

Generating geomedian composites and plotting filmstrips... (click the Dashboard link above for status)

Then I get the error:

distributed.protocol.core - CRITICAL - Failed to deserialize
Traceback (most recent call last):
  File "/env/lib/python3.6/site-packages/distributed/protocol/core.py", line 106, in loads
    header = msgpack.loads(header, use_list=False, **msgpack_opts)
  File "msgpack/_unpacker.pyx", line 195, in msgpack._cmsgpack.unpackb
ValueError: tuple is not allowed for map key
distributed.core - ERROR - tuple is not allowed for map key
Traceback (most recent call last):
  File "/env/lib/python3.6/site-packages/distributed/core.py", line 448, in handle_stream
    msgs = await comm.read()
  File "/env/lib/python3.6/site-packages/distributed/comm/tcp.py", line 208, in read
    frames, deserialize=self.deserialize, deserializers=deserializers
  File "/env/lib/python3.6/site-packages/distributed/comm/utils.py", line 65, in from_frames
    res = _from_frames()
  File "/env/lib/python3.6/site-packages/distributed/comm/utils.py", line 51, in _from_frames
    frames, deserialize=deserialize, deserializers=deserializers
  File "/env/lib/python3.6/site-packages/distributed/protocol/core.py", line 106, in loads
    header = msgpack.loads(header, use_list=False, **msgpack_opts)
  File "msgpack/_unpacker.pyx", line 195, in msgpack._cmsgpack.unpackb
ValueError: tuple is not allowed for map key
distributed.core - ERROR - tuple is not allowed for map key
Traceback (most recent call last):
  File "/env/lib/python3.6/site-packages/distributed/core.py", line 404, in handle_comm
    result = await result
  File "/env/lib/python3.6/site-packages/distributed/scheduler.py", line 2253, in add_client
    await self.handle_stream(comm=comm, extra={"client": client})
  File "/env/lib/python3.6/site-packages/distributed/core.py", line 448, in handle_stream
    msgs = await comm.read()
  File "/env/lib/python3.6/site-packages/distributed/comm/tcp.py", line 208, in read
    frames, deserialize=self.deserialize, deserializers=deserializers
  File "/env/lib/python3.6/site-packages/distributed/comm/utils.py", line 65, in from_frames
    res = _from_frames()
  File "/env/lib/python3.6/site-packages/distributed/comm/utils.py", line 51, in _from_frames
    frames, deserialize=deserialize, deserializers=deserializers
  File "/env/lib/python3.6/site-packages/distributed/protocol/core.py", line 106, in loads
    header = msgpack.loads(header, use_list=False, **msgpack_opts)
  File "msgpack/_unpacker.pyx", line 195, in msgpack._cmsgpack.unpackb
ValueError: tuple is not allowed for map key
---------------------------------------------------------------------------
CancelledError                            Traceback (most recent call last)
<ipython-input-3-89e63d5a63de> in <module>
      5                                 resolution,
      6                                 max_cloud,
----> 7                                 ls7_slc_off)

~/dev/deafrica-sandbox-notebooks/Scripts/notebookapp_changefilmstrips.py in run_filmstrip_app(output_name, time_range, time_step, tide_range, resolution, max_cloud, ls7_slc_off, size_limit)
    239         print('\nGenerating geomedian composites and plotting '
    240               'filmstrips... (click the Dashboard link above for status)')
--> 241         ds_geomedian = ds_geomedian.compute()
    242 
    243         # Reset CRS that is lost during geomedian compositing

/env/lib/python3.6/site-packages/xarray/core/dataset.py in compute(self, **kwargs)
    805         """
    806         new = self.copy(deep=False)
--> 807         return new.load(**kwargs)
    808 
    809     def _persist_inplace(self, **kwargs) -> "Dataset":

/env/lib/python3.6/site-packages/xarray/core/dataset.py in load(self, **kwargs)
    649 
    650             # evaluate all the dask arrays simultaneously
--> 651             evaluated_data = da.compute(*lazy_data.values(), **kwargs)
    652 
    653             for k, data in zip(lazy_data, evaluated_data):

/env/lib/python3.6/site-packages/dask/base.py in compute(*args, **kwargs)
    434     keys = [x.__dask_keys__() for x in collections]
    435     postcomputes = [x.__dask_postcompute__() for x in collections]
--> 436     results = schedule(dsk, keys, **kwargs)
    437     return repack([f(r, *a) for r, (f, a) in zip(results, postcomputes)])
    438 

/env/lib/python3.6/site-packages/distributed/client.py in get(self, dsk, keys, restrictions, loose_restrictions, resources, sync, asynchronous, direct, retries, priority, fifo_timeout, actors, **kwargs)
   2570                     should_rejoin = False
   2571             try:
-> 2572                 results = self.gather(packed, asynchronous=asynchronous, direct=direct)
   2573             finally:
   2574                 for f in futures.values():

/env/lib/python3.6/site-packages/distributed/client.py in gather(self, futures, errors, direct, asynchronous)
   1870                 direct=direct,
   1871                 local_worker=local_worker,
-> 1872                 asynchronous=asynchronous,
   1873             )
   1874 

/env/lib/python3.6/site-packages/distributed/client.py in sync(self, func, asynchronous, callback_timeout, *args, **kwargs)
    765         else:
    766             return sync(
--> 767                 self.loop, func, *args, callback_timeout=callback_timeout, **kwargs
    768             )
    769 

/env/lib/python3.6/site-packages/distributed/utils.py in sync(loop, func, callback_timeout, *args, **kwargs)
    332     if error[0]:
    333         typ, exc, tb = error[0]
--> 334         raise exc.with_traceback(tb)
    335     else:
    336         return result[0]

/env/lib/python3.6/site-packages/distributed/utils.py in f()
    316             if callback_timeout is not None:
    317                 future = gen.with_timeout(timedelta(seconds=callback_timeout), future)
--> 318             result[0] = yield future
    319         except Exception as exc:
    320             error[0] = sys.exc_info()

/env/lib/python3.6/site-packages/tornado/gen.py in run(self)
    733 
    734                     try:
--> 735                         value = future.result()
    736                     except Exception:
    737                         exc_info = sys.exc_info()

CancelledError: 

RGB function evaluates dask multiple times

Passing a dask array to matplotlib functions can result in the dask array being evaluated multiple times. This happens with robust=True, presumably as the function internally calculates the 2 and 98 percentiles.

(PR to follow)
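Not from the PR itself, but the general remedy is to materialise the lazy array once before handing it to plotting code. A stand-alone sketch with a toy lazy container (no dask required) showing how separate reads, mirroring the robust=True percentile pass plus the plot itself, each trigger a fresh evaluation unless the result is loaded first:

```python
class Lazy:
    """Toy stand-in for a dask array: recomputes on every .compute() call."""
    def __init__(self, func):
        self.func = func
        self.evaluations = 0

    def compute(self):
        self.evaluations += 1
        return self.func()

# Anti-pattern: percentile-style stretch limits, then the plot data,
# each access evaluating the lazy input again.
data = Lazy(lambda: [1, 2, 3, 4])
lo, hi = min(data.compute()), max(data.compute())
values = data.compute()
print(data.evaluations)  # 3 evaluations

# Fix: evaluate once, then reuse the in-memory result everywhere.
data2 = Lazy(lambda: [1, 2, 3, 4])
loaded = data2.compute()
lo, hi = min(loaded), max(loaded)
values = loaded
print(data2.evaluations)  # 1 evaluation
```

The same idea applies to a real dask-backed xarray object: calling `.compute()` (or `.load()`) once before plotting avoids the repeated evaluation described above.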

Crop_health notebook for Nyankpala_Ghana RasterioIOError

Crop_health notebook for Nyankpala_Ghana runs into error https://gist.github.com/mubea/e97de92e03ebafbea1eb54b9a399a73a similar error with vegetation change detection for the same area in Ghana https://gist.github.com/mubea/e5ab0645048dc34e008232e980758fc4

Error opening source dataset: s3://deafrica-data/usgs/c1/l8/194/54/2020/01/25/LC08_L1TP_194054_20200125_20200128_01_T1_pixel_qa.tif

RasterioIOError: '/vsis3/deafrica-data/usgs/c1/l8/194/54/2020/01/25/LC08_L1TP_194054_20200125_20200128_01_T1_pixel_qa.tif' not recognized as a supported file format.

Please see screenshots here

Please review @andrewdhicks @caitlinadams @nanaboamah89 @cbur24

convert `load_masked_usgs` to `load_ard`

Sentinel 2 data will soon be included in the DE Africa ODC. Updating the loading data function to the more generic load_ard (by allowing it to load S2 data) will make loading cloud-masked datasets more convenient.

USE CASE: ASSESSMENT OF DEGRADATION OF OUEME BOUKOU FOREST RESERVE IN THE CENTRE OF BENIN REPUBLIC

Title:
ASSESSMENT OF DEGRADATION OF OUEME BOUKOU FOREST RESERVE IN THE CENTRE OF BENIN REPUBLIC

Aim:

The proposed use case aims at assessing the level of degradation of the Oueme Boukou Forest reserve. It will analyse the changes that have occurred over time in that forest in the context of climate change and human pressure. It will help confirm or refute our research findings from the Sudano-Guinean transition zone of Benin Republic over the last 30 years, and serve as a case study for our students, especially in modules on GIS and environmental research management. Other users in Africa will also learn from it, both from the methodological point of view and from the remedies that will be proposed against deforestation.

Study area:

[image: study area map]

Scope:

The Oueme Boukou reserved forest is in the central part of the Republic of Benin, extending between 7° 47′ 00″ and 7° 57′ 00″ north latitude and between 2° 22′ 00″ and 2° 31′ 30″ east longitude. The forest covers an area of 20,763 hectares and has been under threat according to some recent studies.
Though there are studies on the dynamics of this forest, recent Earth observation data should be used to estimate the trend of degradation, highlight the present status of the forest in terms of land cover change rate, assess the level of fragmentation of the land cover units and the degradation of vegetation formations, and rank the drivers of deforestation.

The following features will be needed:

  1. Land cover features (vegetation, farmlands, roads, water bodies, rivers, etc.). These are necessary to analyse the structure of the vegetation and its dynamics
  2. Vegetation indices such as NDVI, NDWI, EVI, SAVI, TSAVI etc. These indices will be used to quantify the biomass, analyse the distribution of soil moisture and the water stress the forest is facing
  3. Fragmentation indices: How fragmented is the vegetation in terms of the number of patches over the years? Ecological landscape indices will be used
  4. DEM of the forest: Needed to analyse the configuration of the terrain

Input data:

  • The shapefile of the forest (extent of the forest)
  • Earth observation data: Landsat TM 1986 or 1990, ETM+ 2000, 2010, OLI-TIRS of 2020, Sentinel-2 data
  • SRTM data, 30 m resolution

Expected outcomes and outputs:

  • Land cover maps of the forest showing the changes in vegetation over time
  • Vegetation formation areas, proportions, rates of change or deforestation (tables and figures will be produced), and level of fragmentation using ecological indices
  • Assessment of the vegetation stress, water stress and moisture of the forest
  • In terms of outputs, we will advise on the trajectory of deforestation, and on actions to put in place to reduce deforestation and its impacts on human and ecological systems, etc.

Additional information:

We hope this use case will culminate in a scientific publication and the methodology can also become a model for teaching.

OTPS Tidal Models not available in Cape Town sandboxes

Issue
Notebooks calling upon tidal_tag from deafrica_tools.coastal in the Cape Town dev and prod sandboxes fail due to the error:

Using user-supplied tide modelling location: -15.99, 11.74

b"At line 114 of file predict_tide.f90 (unit = 1)\nFortran runtime error: Cannot open file '/var/share/TPX08_atlas_compact/Model_atlas': No such file or directory\n\nError termination. Backtrace:\n#0  0x43acda\n#1  0x43aed5\n#2  0x43b69d\n#3  0x43d09d\n#4  0x43d3e4\n#5  0x403d28\n#6  0x4137a5\n#7  0x4c5288\n#8  0x401059"

An exception has occurred, use %tb to see the full traceback.

SystemExit: 1

Importing the required packages does not raise any errors in Cape Town:

from otps import TimePoint
from otps import predict_tide

However, the notebooks run without issue in Oregon.

Steps to recreate:

  1. Log into a Cape Town sandbox
  2. Open Real world examples/Mangrove analysis
  3. Run the notebook

Note: the other tidal notebooks require the full Landsat archive, which is currently unavailable. Only the Mangrove notebook uses Sentinel-2.
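Not a fix, but a way to fail fast: the Fortran error points at a missing model file under /var/share/TPX08_atlas_compact, so a notebook could verify the path before calling predict_tide and print a readable message instead of the opaque backtrace. A minimal sketch; the directory and file name are copied from the error message above, and the helper itself is hypothetical:

```python
import os

def check_tide_model(model_dir="/var/share/TPX08_atlas_compact",
                     required=("Model_atlas",)):
    """Return a list of required tide-model files missing from model_dir."""
    return [name for name in required
            if not os.path.exists(os.path.join(model_dir, name))]

missing = check_tide_model()
if missing:
    print("Tide model files missing on this sandbox:", missing)
```

Run before the tidal_tag call, this would distinguish "the OTPS model data is not mounted on this sandbox" from a genuine bug in the notebook.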

Split and update dataset notebooks

For each DE Africa Dataset notebook: split into one no-code technical description and one ODC how-to notebook. Design for docs building.

calculate indices bug

When 's2' or 'c1' is provided as the collection, the function renames the bands so an index can be calculated. However, if one of the bands to be renamed is not in the dataset (e.g. you load only red and nir_1 to calculate NDVI), the function returns:

ValueError: cannot rename 'swir_1' because it is not a variable or dimension in this dataset

The function needs a check at the beginning so it only tries to rename bands that exist in the xarray's data_vars.
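The suggested guard can be sketched as follows: filter the rename mapping down to variables actually present before renaming. Shown here on a plain dict standing in for ds.data_vars; with a real xarray.Dataset the same filtered dict would be passed to ds.rename(). The band names are illustrative, not the function's actual mapping:

```python
def safe_rename_map(data_vars, rename_map):
    """Keep only rename entries whose source band exists in the dataset."""
    return {old: new for old, new in rename_map.items() if old in data_vars}

# Illustrative collection-specific names -> the names the index formulas expect
rename_map = {"red": "red", "nir_1": "nir", "swir_1": "swir1"}
loaded_bands = {"red": object(), "nir_1": object()}  # swir_1 was never loaded

print(safe_rename_map(loaded_bands, rename_map))
# only 'red' and 'nir_1' survive, so the absent 'swir_1' no longer
# triggers the ValueError when the rename is applied
```

An index that genuinely needs the missing band would still fail later, but with a message about the index's requirements rather than about renaming.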

Use Case: Analyzing Effects of the Drought on Inundation Extent and Vegetation Cover Dynamics

Title:

Analyzing Effects of the 2019 Drought on Inundation Extent and Vegetation Cover Dynamics in the Okavango Delta

Aim:

To assess the effect of 2019 drought on inundation extent and vegetation cover types in the Okavango Delta: with main emphasis on floodplain vegetation, riparian woodland, and dryland vegetation.
The results of the use case will contribute to the development of large-scale low-inundation-extent map products and provide recent baseline maps of vegetation cover dynamics in the Okavango Delta system at a landscape scale, which are essential elements for integrated wetland and climate risk management by relevant decision makers.

Study area:

[image: study area map]

Input data:

  • Okavango delta Vector file
  • Sentinel Data: May - September 2019
  • Water Observation from Space: May - September 2019

Expected outcomes and outputs:

  • The 2019 low flood extent Map (May to September). This will provide a basis for high spatial resolution Earth Observation Derived information on the effects of drought stress in the Okavango Delta wetland water extent, which is essential for improved environmental and climate risks management by decision makers
  • Land cover classification for water, aquatic vegetation, primary floodplains, secondary floodplains, grassland, riparian vegetation, dry grassland/saltpan and dry woodland, for May to September, or extended to April - October/November.

Additional information:

McCarthy, J., Gumbricht, T. & McCarthy, T.S. (2005). Ecoregion classification in the Okavango Delta, Botswana from multitemporal remote sensing. International Journal of Remote Sensing 26 (19): 4339-4357.

Geomedian Composite Error for Landsat 7

I tried to compute a geomedian composite on a Landsat 7 image collection and it throws an error. I tried the same code on a Landsat 8 image and it worked perfectly.
Below is the code:

    ds_dataset = load_ard(dc=dc,
                          products=['ls7_usgs_sr_scene'],
                          output_crs=output_crs,
                          dask_chunks={'time': 1, 'x': 500, 'y': 500},
                          **query)

    sr_max_value = 10000                 # maximum value for SR in the loaded product
    scale, offset = (1/sr_max_value, 0)  # differs per product, aim for 0-1 values in float32

    # scale the values using the to_f32 util function
    ds_scaled = to_f32(ds_dataset, scale=scale, offset=offset)

    # generate a geomedian
    ds_geomedian = xr_geomedian(ds_scaled,
                                num_threads=1,  # disable internal threading, dask will run several concurrently
                                eps=1e-7,
                                nocheck=True)   # disable checks inside library that use too much ram

    # convert SR scaling values back to original values
    ds_geomedian = from_float(ds_geomedian,
                              dtype='float32',
                              nodata=np.nan,
                              scale=1/scale,
                              offset=-offset/scale)
    ds_geomedian = ds_geomedian.compute()

Below is the error I got

---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-22-02704d9aee3f> in <module>
      2 datacollection = {}
      3 for images_year in images_years:
----> 4     datacollection[images_year] =  cal_loaddataset(images_year, product_name, gdf, output_crs)
      5 
      6 datacollection

<ipython-input-21-2380e040c06b> in cal_loaddataset(time_ds, product_name, gdf, output_crs)
     34                             scale=1/scale,
     35                             offset=-offset/scale)
---> 36         ds_geomedian = ds_geomedian.compute()
     37 
     38         ds_geomedian = calculate_indices(ds_geomedian, index=['NDVI', 'NDWI'], collection = 'c1')

/env/lib/python3.6/site-packages/xarray/core/dataset.py in compute(self, **kwargs)
    808         """
    809         new = self.copy(deep=False)
--> 810         return new.load(**kwargs)
    811 
    812     def _persist_inplace(self, **kwargs) -> "Dataset":

/env/lib/python3.6/site-packages/xarray/core/dataset.py in load(self, **kwargs)
    652 
    653             # evaluate all the dask arrays simultaneously
--> 654             evaluated_data = da.compute(*lazy_data.values(), **kwargs)
    655 
    656             for k, data in zip(lazy_data, evaluated_data):

/env/lib/python3.6/site-packages/dask/base.py in compute(*args, **kwargs)
    442         postcomputes.append(x.__dask_postcompute__())
    443 
--> 444     results = schedule(dsk, keys, **kwargs)
    445     return repack([f(r, *a) for r, (f, a) in zip(results, postcomputes)])
    446 

/env/lib/python3.6/site-packages/distributed/client.py in get(self, dsk, keys, restrictions, loose_restrictions, resources, sync, asynchronous, direct, retries, priority, fifo_timeout, actors, **kwargs)
   2664                     should_rejoin = False
   2665             try:
-> 2666                 results = self.gather(packed, asynchronous=asynchronous, direct=direct)
   2667             finally:
   2668                 for f in futures.values():

/env/lib/python3.6/site-packages/distributed/client.py in gather(self, futures, errors, direct, asynchronous)
   1965                 direct=direct,
   1966                 local_worker=local_worker,
-> 1967                 asynchronous=asynchronous,
   1968             )
   1969 

/env/lib/python3.6/site-packages/distributed/client.py in sync(self, func, asynchronous, callback_timeout, *args, **kwargs)
    814         else:
    815             return sync(
--> 816                 self.loop, func, *args, callback_timeout=callback_timeout, **kwargs
    817             )
    818 

/env/lib/python3.6/site-packages/distributed/utils.py in sync(loop, func, callback_timeout, *args, **kwargs)
    345     if error[0]:
    346         typ, exc, tb = error[0]
--> 347         raise exc.with_traceback(tb)
    348     else:
    349         return result[0]

/env/lib/python3.6/site-packages/distributed/utils.py in f()
    329             if callback_timeout is not None:
    330                 future = asyncio.wait_for(future, callback_timeout)
--> 331             result[0] = yield future
    332         except Exception as exc:
    333             error[0] = sys.exc_info()

/env/lib/python3.6/site-packages/tornado/gen.py in run(self)
    733 
    734                     try:
--> 735                         value = future.result()
    736                     except Exception:
    737                         exc_info = sys.exc_info()

/env/lib/python3.6/site-packages/distributed/client.py in _gather(self, futures, errors, direct, local_worker)
   1824                             exc = CancelledError(key)
   1825                         else:
-> 1826                             raise exception.with_traceback(traceback)
   1827                         raise exc
   1828                     if errors == "skip":

/env/lib/python3.6/site-packages/datacube/api/core.py in fuse_lazy()
    687     data = numpy.full(geobox.shape, measurement.nodata, dtype=measurement.dtype)
    688     _fuse_measurement(data, datasets, geobox, measurement,
--> 689                       skip_broken_datasets=skip_broken_datasets)
    690     return data.reshape(prepend_shape + geobox.shape)
    691 

/env/lib/python3.6/site-packages/datacube/api/core.py in _fuse_measurement()
    713                        fuse_func=measurement.get('fuser', None),
    714                        skip_broken_datasets=skip_broken_datasets,
--> 715                        progress_cbk=progress_cbk)
    716 
    717 

/env/lib/python3.6/site-packages/datacube/storage/_load.py in reproject_and_fuse()
     70         with ignore_exceptions_if(skip_broken_datasets):
     71             with datasources[0].open() as rdr:
---> 72                 read_time_slice(rdr, destination, dst_gbox, resampling, dst_nodata)
     73 
     74         if progress_cbk:

/env/lib/python3.6/site-packages/datacube/storage/_read.py in read_time_slice()
    152             np.copyto(dst, pix)
    153         else:
--> 154             np.copyto(dst, pix, where=valid_mask(pix, rdr.nodata))
    155     else:
    156         if rr.is_st:

<__array_function__ internals> in copyto()

TypeError: Cannot cast scalar from dtype('int16') to dtype('uint8') according to the rule 'same_kind'
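The failure is in numpy's `copyto`, whose default `casting='same_kind'` rule forbids crossing a kind boundary such as signed `int16` to unsigned `uint8`. One plausible cause (an assumption, not confirmed by the traceback alone) is mixing products or measurements with mismatched dtype/nodata values, which is worth checking with `dc.list_measurements()` before loading. The underlying numpy behaviour can be reproduced in isolation:

```python
import numpy as np

dst = np.zeros((2, 2), dtype="uint8")
nodata = np.int16(-999)  # e.g. an int16 nodata scalar from a different product

# Default casting='same_kind' forbids signed -> unsigned, mirroring the error above
try:
    np.copyto(dst, nodata)
except TypeError as e:
    print("copyto failed:", e)

# It succeeds once source and destination dtypes agree
np.copyto(dst, np.uint8(0))
```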

Notebook for interactive polygon selection

Create a 'frequently_used_code' notebook for interactively creating polygons and exporting the results as a shapefile. This would let a user generate a shapefile for datacube queries in other notebooks, for creating masks, etc.
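As a rough sketch of the export step: an interactive draw widget (e.g. ipyleaflet's `DrawControl`, an assumption about the eventual tooling) hands back drawn shapes as GeoJSON geometry dicts, which can be assembled into a FeatureCollection and then exported to shapefile with geopandas (`gpd.GeoDataFrame.from_features(...).to_file(...)`). Only the stdlib assembly step is shown here:

```python
import json

def drawn_to_geojson(geometries, path):
    """Assemble drawn geometries (GeoJSON geometry dicts) into a FeatureCollection file."""
    feature_collection = {
        "type": "FeatureCollection",
        "features": [
            {"type": "Feature", "geometry": geom, "properties": {"id": i}}
            for i, geom in enumerate(geometries)
        ],
    }
    with open(path, "w") as f:
        json.dump(feature_collection, f)
    return feature_collection

# Example: one drawn square
square = {"type": "Polygon",
          "coordinates": [[[0, 0], [1, 0], [1, 1], [0, 1], [0, 0]]]}
fc = drawn_to_geojson([square], "drawn_polygons.geojson")
```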

Coastal erosion notebook fail

The deafrica_spatialtools.py library has changed, leading to the Coastal_erosion.ipynb notebook in Real_world_examples failing when trying to load contour_extract.

Sentinel 2: No products match search terms

Sentinel-2 products are not loading, and the code below gives the following error:
Code:
ds = dc.load(product="s2_12a", x=(24.65, 24.75), y=(-20.05, -20.15), time=("2018-01-01", "2018-12-31"))
Error:
ValueError Traceback (most recent call last)
in
2 x = (24.60, 24.80), #Prestea Huni Valley district, Ghana
3 y = (-20.05, -20.25),
----> 4 time=("2018-01-01", "2018-12-31"))
5
6 print(ds)

/env/lib/python3.6/site-packages/datacube/api/core.py in load(self, product, measurements, output_crs, resolution, resampling, skip_broken_datasets, dask_chunks, like, fuse_func, align, datasets, progress_cbk, **query)
290
291 if datasets is None:
--> 292 datasets = self.find_datasets(product=product, like=like, ensure_location=True, **query)
293 elif isinstance(datasets, collections.abc.Iterator):
294 datasets = list(datasets)

/env/lib/python3.6/site-packages/datacube/api/core.py in find_datasets(self, **search_terms)
329 .. seealso:: :meth:group_datasets :meth:load_data :meth:find_datasets_lazy
330 """
--> 331 return list(self.find_datasets_lazy(**search_terms))
332
333 def find_datasets_lazy(self, limit=None, ensure_location=False, **kwargs):

/env/lib/python3.6/site-packages/datacube/api/core.py in (.0)
354
355 if ensure_location:
--> 356 datasets = (dataset for dataset in datasets if dataset.uris)
357
358 return datasets

/env/lib/python3.6/site-packages/datacube/api/core.py in select_datasets_inside_polygon(datasets, polygon)
681 assert polygon is not None
682 query_crs = polygon.crs
--> 683 for dataset in datasets:
684 if intersects(polygon, dataset.extent.to_crs(query_crs)):
685 yield dataset

/env/lib/python3.6/site-packages/datacube/index/_datasets.py in search(self, limit, **query)
510 for product, datasets in self._do_search_by_product(query,
511 source_filter=source_filter,
--> 512 limit=limit):
513 yield from self._make_many(datasets, product)
514

/env/lib/python3.6/site-packages/datacube/index/_datasets.py in _do_search_by_product(self, query, return_fields, select_field_names, with_source_ids, source_filter, limit)
633 product_queries = list(self._get_product_queries(query))
634 if not product_queries:
--> 635 raise ValueError('No products match search terms: %r' % query)
636
637 for q, product in product_queries:

ValueError: No products match search terms: {'time': Range(begin=datetime.datetime(2018, 1, 1, 0, 0, tzinfo=), end=datetime.datetime(2018, 12, 31, 23, 59, 59, 999999, tzinfo=tzutc())), 'lat': Range(begin=-20.25, end=-20.05), 'lon': Range(begin=24.6, end=24.8), 'product': 's2_12a'}
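This error means no indexed product matches the requested name; here `"s2_12a"` (with the digit 1) is likely a typo for `"s2_l2a"` (lowercase L). Listing the available products with `dc.list_products()` and fuzzy-matching the requested name makes such typos obvious. A sketch, with the product list hard-coded instead of queried from a live `Datacube` instance:

```python
from difflib import get_close_matches

# In practice: available = list(dc.list_products()["name"])
available = ["s2_l2a", "ga_ls8c_wofs_2", "ls8_usgs_sr_scene"]

requested = "s2_12a"
if requested not in available:
    suggestions = get_close_matches(requested, available)
    print(f"No product {requested!r}; did you mean {suggestions}?")
```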

WOFS query Error with geopolygon

I am trying to load a WOfS product using the query below. I am using a geopolygon to filter the area of interest because I have a shapefile of the area. The query works for Landsat and Sentinel-2 data, but WOfS throws the error below.

Code
query = {
'geopolygon': geom,
'group_by': 'solar_day',
'resolution': (-30, 30),
'align': (15, 15),
}

query['time'] = ('2019')
ds_wofs_1 = dc.load(product=["ga_ls8c_wofs_2"],
output_crs=crs,
fuse_func=wofs_fuser,
**query
)

Error:

---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
/env/lib/python3.6/site-packages/shapely/ops.py in transform(func, geom)
236 shell = type(geom.exterior)(
--> 237 zip(*func(*izip(*geom.exterior.coords))))
238 holes = list(type(ring)(zip(*func(*izip(*ring.coords))))

TypeError: result() takes 2 positional arguments but 3 were given

During handling of the above exception, another exception occurred:

TypeError Traceback (most recent call last)
in
3 output_crs=crs,
4 fuse_func=wofs_fuser,
----> 5 **query
6 )
7

/env/lib/python3.6/site-packages/datacube/api/core.py in load(self, product, measurements, output_crs, resolution, resampling, skip_broken_datasets, dask_chunks, like, fuse_func, align, datasets, progress_cbk, **query)
302 geobox = output_geobox(like=like, output_crs=output_crs, resolution=resolution, align=align,
303 grid_spec=datacube_product.grid_spec,
--> 304 datasets=datasets, **query)
305
306 group_by = query_group_by(**query)

/env/lib/python3.6/site-packages/datacube/api/core.py in output_geobox(like, output_crs, resolution, align, grid_spec, datasets, geopolygon, **query)
674 geopolygon = get_bounds(datasets, crs)
675
--> 676 return geometry.GeoBox.from_geopolygon(geopolygon, resolution, crs, align)
677
678

/env/lib/python3.6/site-packages/datacube/utils/geometry/_base.py in from_geopolygon(cls, geopolygon, resolution, crs, align)
1007 crs = geopolygon.crs
1008 else:
-> 1009 geopolygon = geopolygon.to_crs(crs)
1010
1011 bounding_box = geopolygon.boundingbox

/env/lib/python3.6/site-packages/datacube/utils/geometry/_base.py in to_crs(self, crs, resolution, wrapdateline)
680 return clip_lon180(chopped_lonlat, eps)
681
--> 682 return geom._to_crs(crs)
683
684 def split(self, splitter: 'Geometry') -> Iterable['Geometry']:

/env/lib/python3.6/site-packages/datacube/utils/geometry/_base.py in _to_crs(self, crs)
642 assert self.crs is not None
643 return Geometry(ops.transform(self.crs.transformer_to_crs(crs),
--> 644 self.geom), crs)
645
646 def to_crs(self, crs: SomeCRS,

/env/lib/python3.6/site-packages/shapely/ops.py in transform(func, geom)
247 elif geom.type == 'Polygon':
248 shell = type(geom.exterior)(
--> 249 [func(*c) for c in geom.exterior.coords])
250 holes = list(type(ring)([func(*c) for c in ring.coords])
251 for ring in geom.interiors)

/env/lib/python3.6/site-packages/shapely/ops.py in (.0)
247 elif geom.type == 'Polygon':
248 shell = type(geom.exterior)(
--> 249 [func(*c) for c in geom.exterior.coords])
250 holes = list(type(ring)([func(*c) for c in ring.coords])
251 for ring in geom.interiors)

TypeError: result() takes 2 positional arguments but 3 were given
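This particular traceback (shapely passing three coordinates to a transformer that accepts only two) typically appears when the shapefile's polygons carry a Z dimension. One workaround, assuming the geometry really is 3D, is to flatten it before building the datacube geopolygon:

```python
from shapely.geometry import Polygon
from shapely.ops import transform

# A polygon as read from a shapefile with a (constant) Z coordinate
poly_3d = Polygon([(0, 0, 1.0), (1, 0, 1.0), (1, 1, 1.0), (0, 0, 1.0)])

# Drop the Z dimension so CRS transforms only ever see (x, y)
poly_2d = transform(lambda x, y, z=None: (x, y), poly_3d)

print(poly_3d.has_z, poly_2d.has_z)  # True False
```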

Remove Tools/tests/ folder

Can we remove deafrica-sandbox-notebooks/Tools/tests/?
It constantly breaks during pytest runs.
Are there any dependencies on it?

RasterioIOError: Read or write failed - Water Extent notebook

RasterioIOError: Read or write failed. /vsis3/deafrica-data/usgs/c1/l8/166/61/2017/08/24/LC08_L1TP_166061_20170824_20170913_01_T1_sr_band2.tif, band 1: IReadBlock failed at X offset 10, Y offset 14: TIFFReadEncodedTile() failed. See more information here: https://gist.github.com/mubea/318847aa49539657031258b030db4311, and the notebook file here: https://drive.google.com/file/d/1CKpSZvmhv7BQvOnu1UXKP_OhS-kqfHKo/view?usp=sharing

@cbur24 @nanaboamah89

# Define the area of interest
lat = -2.4144
lon = 40.6744

lat_buffer = 0.0129
lon_buffer = 0.0189

# Combine central lat/lon with buffer to get the area of interest
lat_range = (lat - lat_buffer, lat + lat_buffer)
lon_range = (lon - lon_buffer, lon + lon_buffer)

# Define the start year and end year
start_year = '2013'
end_year = '2019'

# Define the threshold
threshold_value = 0.75

Water_extent-Volta generates an InvalidIndexError: Reindexing only valid with uniquely valued Index objects

Water_extent-Volta_Sept.ipynb generates the error "InvalidIndexError: Reindexing only valid with uniquely valued Index objects"; see https://gist.github.com/mubea/3d1a973a550621fbce4ec87733326599

This happens at the plotting line, as shown in the attached screenshots.

The same error occurs on water extent for Lake Victoria here https://gist.github.com/mubea/241453ffab4931f40ffbab289a29290d and Lake Baringo here https://gist.github.com/mubea/bc0055017df3ac48f444b350e22d5a8f

Please review @andrewdhicks @caitlinadams @nanaboamah89 @cbur24
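The InvalidIndexError usually points at duplicate timestamps in the dataset's time index (e.g. overlapping scenes that were not grouped into a single solar day), which breaks reindex-based plotting. A minimal sketch of detecting and dropping the duplicates with xarray, using toy data rather than the actual notebook dataset:

```python
import pandas as pd
import xarray as xr

# A toy dataset with a duplicated timestamp
times = pd.to_datetime(["2019-01-01", "2019-01-01", "2019-01-02"])
ds = xr.Dataset({"water_area": ("time", [10.0, 10.5, 12.0])},
                coords={"time": times})

# Keep only the first observation for each duplicated timestamp
dupes = ds.get_index("time").duplicated()
ds_unique = ds.isel(time=~dupes)

print(int(dupes.sum()), len(ds_unique.time))  # 1 2
```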

Beginner's Guide Feedback from Lina Yeboah

Lina Yeboah, a Ghanaian student Edward is working with on the illegal mining use case, has pointed out some small errors and inconsistencies in the Beginner's Guide:

  • The terms elements and objects are both used several times, but what is the difference between the two? If datasets are objects, then what are the elements?
  • Using xarray is much easier and offers more functionality, but are there instances where xarray may not be convenient and where numpy is necessary?
  • In the Products and measurements notebook, in the 'Visualizing available data' section, the dropdown menu is at the top left, not the top right as written in the guide.
  • In the explanation of the task graph in the 8th tutorial (Parallel Processing with Dask), point 2 should read circles, not rectangles.
  • The Recommended Steps section mentions 6 tutorials, but there are actually 8.
  • Are there some additional resources you could share with me to improve my understanding of the platform and how it works? I would appreciate it if you could share them with me.
  • I would also like to know the next steps from here.

Copy (with attribution!) the DEA Sandbox notebook on STAC use

We should bring over the fantastic DEA Sandbox STAC Usage notebook that Robbi created

Aim:
Enhance understanding of the DE Africa STAC API

Study area:
Notebook example exists: https://docs.dea.ga.gov.au/notebooks/Frequently_used_code/Downloading_data_with_STAC.html

Scope:

  1. Copy the notebook
  2. Adapt it to our environment
  3. Enable people to understand how to use the STAC API!

Input data:

  • n/a

Expected outcomes and outputs:

  • Enhanced understanding of the capabilities of the STAC API

Additional information:

Update tags in all notebooks

We should go through each notebook and apply common fixes:

  • Keywords (delete tags, move to top of document. Template available)
  • add last tested field, remove last modified field (see Template)
  • Remove logo from first cell

Some guidance on keywords to use: https://github.com/digitalearthafrica/deafrica-sandbox-notebooks/wiki/List-of-Documentation-Keywords

Tags need to be changed in every notebook, according to the following practices:

  • We're now going to refer to tags as keywords, and you should think of them as such. Importantly: what are the five most important concepts that will help a user (1) understand what the notebook is about, and (2) quickly find the information they want when looking at the Docs page index?
  • Keywords can be hierarchical: for example, data; landsat 8 and data; sentinel 2 would show up as the header "data" followed by the subheaders "landsat 8" and "sentinel 2" (both of which would link back to the pages they were used on).
  • We're going to move the keywords up to the top of the notebook (tags were kept at the bottom). This means that if a user reaches the notebook via a keyword in the index, they'll be taken to the top of the notebook rather than the bottom. The keywords cell should sit just below the first cell (which contains the title and the data used).

Examples of output using hierarchies are shown in the attached screenshots.

Beginner's Guide - Caitlin

  • 01_Jupyter_notebooks.ipynb

Keywords used:
:index:beginner's guide; jupyter notebook, :index:jupyter notebook; beginner's guide, :index:jupyter notebook; markdown cell, :index:jupyter notebook; raw cell, :index:jupyter notebook; code cells

  • 02_Products_and_measurements.ipynb

Keywords used:
:index:beginner's guide; products, :index:beginner's guide; measurements, :index:products; beginner's guide, :index:measurements; beginner's guide, :index:data used; landsat 8

  • 03_Loading_data.ipynb

Keywords used:
:index:beginner's guide; loading data, :index:loading data; beginner's guide, :index:data used; landsat 8 geomedian, :index:data used; WoFS, :index:data methods; resampling :index:data methods; reprojecting, :index:data attributes; coordinate reference system

  • 04_Plotting.ipynb
  • 05_Basic_analysis.ipynb
  • 06_Intro_to_numpy.ipynb
  • 07_Intro_to_xarray.ipynb
  • 08_Parallel_processing_with_dask.ipynb

Datasets - Ee-Faye

  • Climate_Data_ERA5_AWS.ipynb
  • Fractional_cover.ipynb
  • Landsat_collections.ipynb
  • Landsat_collections_quick-guide.ipynb
  • Sentinel_1.ipynb
  • Sentinel_2.ipynb
  • Soil_Moisture.ipynb
  • Water_Observations_from_Space.ipynb

Frequently Used Code - Chad

  • Analyse_multiple_polygons.ipynb

Keywords used:
:index:spatial analysis; polygons, :index:data used; landsat 8, :index:python package; GeoPandas, :index:band index; NDVI

  • Animated_timeseries.ipynb
  • Applying_WOfS_bitmasking.ipynb
  • Calculating_band_indices.ipynb
  • Contour_extraction.ipynb
  • Exporting_GeoTIFFs.ipynb
  • Generating_composites.ipynb
  • Generating_geomedian_composites.ipynb
  • Geomedian_composites.ipynb
  • Image_segmentation.ipynb
  • Imagery_on_web_map.ipynb
  • Integrating_external_data.ipynb
  • Masking_data.ipynb
  • Monitoring_water_quality.ipynb
  • Principal_component_analysis.ipynb
  • Rasterise_vectorise.ipynb
  • Rasterize_vectorize.ipynb
  • Reprojecting_data.ipynb
  • Tidal_modelling.ipynb
  • Urban_index_comparison.ipynb
  • Using_load_ard.ipynb
  • Vegetation_phenology.ipynb
  • Working_with_time.ipynb

Real World Examples

  • Burnt_area_mapping.ipynb
  • Change_filmstrips.ipynb
  • Chlorophyll_monitoring.ipynb
  • Coastal_erosion.ipynb
  • Crop_health.ipynb
  • Intertidal_elevation.ipynb
  • Machine_learning_with_ODC.ipynb
  • Mangrove_analysis.ipynb
  • Radar_water_detection.ipynb
  • Urban_change_detection.ipynb
  • Vegetation_change_detection.ipynb
  • Water_extent.ipynb
  • Wetlands_insight_tool.ipynb

Repeated normalisation during `calculate_indices` function

When more than one index is supplied to the calculate_indices function in the bandindices.py script, the surface reflectance data is repeatedly normalised, resulting in SR and index values that are increasingly deflated as the number of supplied indices grows.
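A hypothetical sketch of the failure pattern (not the actual bandindices.py code): if the 0–10000 to 0–1 normalisation sits inside the per-index loop, the data is divided by 10000 once per requested index, so the fix is to normalise a single time before looping:

```python
import numpy as np

def calc_indices_buggy(sr, indices, scale=1 / 10000):
    for _ in indices:
        sr = sr * scale          # bug: normalised once per index
    return sr

def calc_indices_fixed(sr, indices, scale=1 / 10000):
    sr = sr * scale              # normalise exactly once, up front
    for _ in indices:
        pass                     # per-index calculations would go here
    return sr

sr = np.array([10000.0])
print(calc_indices_buggy(sr, ["NDVI", "NDWI"]))  # deflated by an extra factor of 10000
print(calc_indices_fixed(sr, ["NDVI", "NDWI"]))  # correctly scaled to 1.0
```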

Relabel legacy `collection=` parameter in `calculate_bandindices`

calculate_bandindices currently requires the collection= parameter, which must be specified as either c2 or s2.

c2 refers to Landsat "Collection 2", but DE Africa no longer indexes Collection 1 data, so this association is unintuitive.

s2 refers to "Sentinel-2", which is not a "collection" in the Landsat sense but a different sensor altogether.

Proposed actions

  • Rename c2 to ls
  • Rename collection= to sensor= or similar
  • Edit all notebooks where bandindices is used and make note of the change where appropriate

https://github.com/digitalearthafrica/deafrica-sandbox-notebooks/search?q=collection&type=code
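One way the rename could keep old notebooks working is a small shim that accepts the legacy collection= value, warns, and maps it onto the proposed sensor= value. The names below follow the proposal above and are not an existing API:

```python
import warnings

LEGACY_COLLECTIONS = {"c2": "ls", "s2": "s2"}

def resolve_sensor(sensor=None, collection=None):
    """Map the legacy collection= argument onto the proposed sensor= argument."""
    if collection is not None:
        warnings.warn(
            "'collection=' is deprecated; use 'sensor=' instead",
            DeprecationWarning,
        )
        sensor = LEGACY_COLLECTIONS[collection]
    if sensor is None:
        raise ValueError("a sensor must be specified")
    return sensor

print(resolve_sensor(collection="c2"))  # ls
```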
