
Processing Tool for Heat Demand Data

Home Page: https://pyhd.readthedocs.io/

License: GNU General Public License v3.0

Languages: Python 91.61%, TeX 8.39%

Topics: geospatial-data, geospatial-data-analysis, heatmaps, spatial-data, spatial-data-analysis

pyheatdemand's Introduction

PyHeatDemand - Processing Tool for Heat Demand Data

Project Status: Active – The project has reached a stable, usable state and is being actively developed.

(Fig. 1)

Overview

Knowledge of the heat demand (MWh/area/year) of a building, district, city, state, country, or even an entire continent is crucial for adequate heat demand planning and for planning power plant capacities.

PyHeatDemand is a Python library for processing spatial data containing local, regional, or national heat demand data. The heat demand data can be provided as rasters, gridded polygon shapefiles, building footprint polygon shapefiles, street network linestring shapefiles, point shapefiles representing the heat demand of single buildings or administrative areas, and postal addresses provided as CSV files.

The package is a continuation of the results of Herbst et al. (2021) within the Interreg NWE project DGE Rollout, NWE 892. E. Herbst and E. Khashfe compiled the original heat demand map as part of their respective master's thesis projects at RWTH Aachen University. The final heat demand map is also accessible within the DGE Rollout Webviewer.

Documentation

A documentation page illustrating the functionality of PyHeatDemand is available at https://pyhd.readthedocs.io/en/latest/.

It also features installation instructions (also see below), tutorials on how to calculate heat demands, and the API Reference.

Installation

PyHeatDemand supports Python 3.10 and newer; earlier versions are not officially supported. It is recommended to create a new virtual environment, e.g. using the Anaconda Distribution, before installing PyHeatDemand. The main dependencies of PyHeatDemand are GeoPandas and Rasterio for vector and raster data processing, Matplotlib for plotting, GeoPy for extracting coordinates from addresses, OSMnx for retrieving OpenStreetMap building footprints from coordinates, and Rasterstats for analyzing the resulting heat demand maps, along with secondary dependencies such as Pandas, NumPy, and Shapely.

Installation via PyPI

PyHeatDemand can be installed via PyPI using:

pip install pyheatdemand

Installation via Anaconda

PyHeatDemand is also available from conda-forge:

conda install -c conda-forge pyheatdemand

Installation using YML-file

It is recommended to use the provided environment.yml to ensure that all dependencies are installed correctly:

conda env create -f environment.yml

Make sure you have downloaded the environment.yml file beforehand in that case.

Forking or cloning the repository

The PyHeatDemand repository can be forked or cloned from https://github.com/AlexanderJuestel/pyheatdemand:

git clone https://github.com/AlexanderJuestel/pyheatdemand.git

The requirements.txt and environment.yml files list all dependencies necessary to run PyHeatDemand from source.

General Workflow

The general workflow involves creating a global mask of quadratic polygons (e.g. 10 km x 10 km) covering the entire study area. This is especially useful for larger areas such as states, countries, or the Interreg NWE region, as it subdivides the area into smaller units. Depending on the extent of the input heat demand data, the corresponding underlying global mask polygons are selected and the final polygon grid (e.g. at 100 m x 100 m resolution) is created. This grid, combined with the input heat demand data, is needed to calculate the final heat demand map (Fig. 1).
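The two-level gridding described above can be sketched in plain Python (an illustration only, using coordinate tuples; PyHeatDemand itself builds these grids as GeoPandas GeoDataFrames):

```python
def make_grid(xmin, ymin, xmax, ymax, cell_size):
    """Return bounding boxes (xmin, ymin, xmax, ymax) of a quadratic
    polygon grid covering the area, analogous to the global mask."""
    cells = []
    y = ymin
    while y < ymax:
        x = xmin
        while x < xmax:
            cells.append((x, y, x + cell_size, y + cell_size))
            x += cell_size
        y += cell_size
    return cells

# Coarse 10 km x 10 km global mask over a 30 km x 20 km study area ...
coarse = make_grid(0, 0, 30_000, 20_000, 10_000)
# ... refined to 100 m x 100 m cells inside one selected mask polygon
fine = make_grid(0, 0, 10_000, 10_000, 100)
```

In the real workflow, only the coarse cells intersecting the input heat demand data are refined, which keeps the fine grid manageable for national-scale areas.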

The actual heat demand input data is divided into four data categories:

  • Data Category 1: Heat demand raster data or gridded polygon (vector) data, different scales possible
  • Data Category 2: Heat demand data as vector data; building footprints as polygons, street network as linestrings, single houses as points
  • Data Category 3: Heat demand as points representative for an administrative area
  • Data Category 4: Other forms of Heat Demand data such as addresses with associated heat demand or heat demand provided as usage of other fuels, e.g. gas demand, biomass demand etc.

Processing steps for Data Types 1 + 2 (Fig. 1)

Processing steps for Data Type 3 (Fig. 1)

Contribution Guidelines

Contributing to PyHeatDemand is as easy as opening issues, reporting bugs, suggesting new features or opening Pull Requests to propose changes.

For more information on how to contribute, have a look at the Contribution Guidelines.

Continuous Integration

A CI pipeline is in place to test the current code. The tests can be run locally using pytest --cov within the test folder. After running the tests, coverage report -m can be executed to get a report on the coverage and on which lines are not covered by the tests.

API Reference

For creating the API reference, navigate to the docs folder and execute sphinx-apidoc -o source/ ../pyheatdemand.

References

Jüstel, A., Humm, E., Herbst, E., Strozyk, F., Kukla, P., Bracke, R., 2024. Unveiling the Spatial Distribution of Heat Demand in North-West-Europe Compiled with National Heat Consumption Data. Energies, 17 (2), 481, https://doi.org/10.3390/en17020481.

Herbst, E., Khashfe, E., Jüstel, A., Strozyk, F. & Kukla, P., 2021. A Heat Demand Map of North-West Europe – its impact on supply areas and identification of potential production areas for deep geothermal energy. GeoKarlsruhe 2021, http://dx.doi.org/10.48380/dggv-j2wj-nk88.

pyheatdemand's People

Contributors: alexanderjuestel, kyleniemeyer


pyheatdemand's Issues

[JOSS Review] Use a "numpy.random.Generator" here instead of this legacy function

Why?

See: https://rules.sonarsource.com/python/type/Code%20Smell/RSPEC-6711/

Solution

To fix this issue, replace usages of the legacy numpy.random functions (backed by numpy.random.RandomState) with a numpy.random.Generator.

val = np.random.uniform(
  0, 
  0.001,
  size=data.shape
).astype(np.float32)

becomes:

generator = np.random.default_rng(42)  # set seed number for reproducibility
val = generator.uniform(
  0,
  0.001,
  size=data.shape
).astype(np.float32)
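For reference, a self-contained version of the suggested fix can be run as follows (data here is a placeholder array standing in for the real raster values):

```python
import numpy as np

# Placeholder for the real raster data; only its shape matters here
data = np.zeros((4, 5), dtype=np.float32)

generator = np.random.default_rng(42)  # seeded for reproducibility
val = generator.uniform(0, 0.001, size=data.shape).astype(np.float32)

# Recreating a generator with the same seed reproduces the values exactly,
# which the legacy global numpy.random state does not guarantee
val_again = np.random.default_rng(42).uniform(0, 0.001, size=data.shape).astype(np.float32)
```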

[JOSS Review] Issue with `rasterize_gdf_hd` function in notebooks examples

Describe the bug
TypeError returned in cell 21 "rasterizing vector data": TypeError: rasterize_gdf_hd() got an unexpected keyword argument 'flip_raster'

To Reproduce
Steps to reproduce the behavior:

  1. Go to:
  • docs/source/notebooks/06_Processing_Data_Type_III_Point_Coordinates.ipynb
  • docs/source/notebooks/07_Processing_Data_Type_III_Point_Data_Addresses.ipynb
  • docs/source/notebooks/08_Processing_Data_Type_IV_Vector_Polygons.ipynb
  • docs/source/notebooks/13_Processing_and_merging_heat_demand_data_for_NRW.ipynb
  2. Click "Run all"
  • Note: even when the unsupported flip_raster parameter is omitted, the rendered rasters look flipped.

[JOSS Review] Code and examples

@AlexanderJuestel here are my comments on the code and examples.

openjournals/joss-reviews#6275

Code

I recommend linting processing.py for some suggestions on improving the code style. Here are three suggested changes:

Additionally, these two snippets seem to be identical. Is it possible to define it as a function for use in these two places to avoid code repetition?

# Converting Shapely Polygon to GeoDataFrame
if isinstance(mask_gdf, Polygon):
    mask_gdf = gpd.GeoDataFrame(geometry=[mask_gdf],
                                crs=hd_gdf.crs)
# Checking that the hd_gdf is of type GeoDataFrame
if not isinstance(hd_gdf, gpd.GeoDataFrame):
    raise TypeError('The heat demand gdf must be provided as GeoDataFrame')
# Checking that the mask_gdf is of type GeoDataFrame
if not isinstance(mask_gdf, gpd.GeoDataFrame):
    raise TypeError('The mask gdf must be provided as GeoDataFrame')
# Checking that the Heat Demand Data Column is provided as string
if not isinstance(hd_data_column, str):
    raise TypeError('The heat demand data column must be provided as string')
# Checking that the HD Data Column is in the HD GeoDataFrame
if hd_data_column not in hd_gdf:
    raise ValueError('%s is not a column in the GeoDataFrame' % hd_data_column)
# Reprojecting Data if necessary
if mask_gdf.crs != hd_gdf.crs:
    hd_gdf = hd_gdf.to_crs(mask_gdf.crs)
# Exploding MultiPolygons
if any(shapely.get_type_id(hd_gdf.geometry) == 6):
    hd_gdf = hd_gdf.explode(index_parts=True).reset_index(drop=True)
# Assigning area to Polygons
if all(shapely.get_type_id(hd_gdf.geometry) == 3):
    # Assigning area of original geometries to GeoDataFrame
    hd_gdf['area'] = hd_gdf.area
# Assigning lengths to LineStrings
elif all(shapely.get_type_id(hd_gdf.geometry) == 1):
    # Assigning length of original geometries to GeoDataFrame
    hd_gdf['length'] = hd_gdf.length

# Converting Shapely Polygon to GeoDataFrame
if isinstance(mask_gdf, Polygon):
    mask_gdf = gpd.GeoDataFrame(geometry=[mask_gdf],
                                crs=hd_gdf.crs)
# Checking that the hd_gdf is of type GeoDataFrame
if not isinstance(hd_gdf, gpd.GeoDataFrame):
    raise TypeError('The heat demand gdf must be provided as GeoDataFrame')
# Checking that the mask_gdf is of type GeoDataFrame
if not isinstance(mask_gdf, gpd.GeoDataFrame):
    raise TypeError('The mask gdf must be provided as GeoDataFrame')
# Checking that the Heat Demand Data Column is provided as string
if not isinstance(hd_data_column, str):
    raise TypeError('The heat demand data column must be provided as string')
# Checking that the HD Data Column is in the HD GeoDataFrame
if hd_data_column not in hd_gdf:
    raise ValueError('%s is not a column in the GeoDataFrame' % hd_data_column)
# Reprojecting Data if necessary
if mask_gdf.crs != hd_gdf.crs:
    hd_gdf = hd_gdf.to_crs(mask_gdf.crs)
# Exploding MultiPolygons
if any(shapely.get_type_id(hd_gdf.geometry) == 6):
    hd_gdf = hd_gdf.explode(index_parts=True).reset_index(drop=True)
# Assigning area to Polygons
if all(shapely.get_type_id(hd_gdf.geometry) == 3):
    # Assigning area of original geometries to GeoDataFrame
    hd_gdf['area'] = hd_gdf.area
# Assigning lengths to LineStrings
elif all(shapely.get_type_id(hd_gdf.geometry) == 1):
    # Assigning length of original geometries to GeoDataFrame
    hd_gdf['length'] = hd_gdf.length

Example notebooks

To import the library from the cloned repository, using from pyheatdemand import processing was sufficient. Using sys was not necessary (also did not work for me; I'm not using Windows though). In the example notebooks, the sys path uses both pyhd and pyheatdemand; all instances of pyhd should be fixed.

The data paths used in the notebooks are inconsistent, e.g. ../../../data/, ../data/, etc. I noticed that many of the datasets used are available in the test folder. Could you make use of these in the example notebooks? This will allow potential users to execute the notebooks themselves and reproduce the results.

In some notebooks (for example, 02_Processing_Data_Type_I_Raster.ipynb), the data was reprojected to epsg:3034. However, subsequent code uses to_crs('EPSG:3034'), which is redundant and can be removed.

The semicolon in show(); should be removed.

In the notebook 14_Refining_Polygon_Mask.ipynb, under 'Refining the mask', there's a cell with a TypeError output that should be deleted.

Some other suggestions for improvement are to include axis labels / titles / legends in as many plots as possible (where appropriate) as there are a lot of images in the notebooks and this will help with some context. For example, the histograms could use titles or axis labels, and the heat demand colour bars could use a legend title.

[JOSS Review] Paper

@AlexanderJuestel here are some initial comments on the paper. I have to do a quick check again so I'll comment if I have any further feedback.

openjournals/joss-reviews#6275

General comments

I'm curious how much lower the 100 m x 100 m resolution is compared to that of a building, since the paper states the following:

Evaluating the heat demand (usually in MWh = Mega Watt Hours) on a national or regional scale, including space and water heating for each apartment or each building for every day of a year separately is from a perspective of resolution (spatial and temporal scale) and computing power not feasible. Therefore, heat demand maps summarize the heat demand on a lower spatial resolution (e.g. 100 m x 100 m raster) cumulated for one year (lower temporal resolution) for different sectors such as the residential and tertiary sectors.

MWh was also defined here but it has already been used in the summary twice. I suggest moving the definition to the first instance where MWh is used. You can perhaps define it in the form of a footnote.

The acronym "HD" is used but is not defined anywhere in the paper.

According to my checklist, the statement of need must include the target audience of PyHeatDemand. The target audience should also be included in the documentation.

Formatting

I found some minor formatting issues in the PDF article and docs which I've listed below.

The blank lines in the following code are breaking the sentence, so they should be removed.

**PyHeatDemand** is an open-source Python package for processing and harmonizing multi-scale-multi-type heat demand input data for
constructing local to transnational harmonized heat demand maps (rasters). Knowledge about the heat demand (MWh/area/year) of a respective building,

(points or polygons), with building footprints (polygons), with street segments (lines), or with addresses directly provided in
MWh but also as gas usage, district heating usage, or sources of heat. It is also possible to calculate the heat demand

The table isn't rendered properly. I think you need to remove all of these dashes, except for the one below the header row, to fix it. Also, you have referenced the table as Tab. 1. Since the table doesn't have a caption, you can either add a caption, or remove the reference to Tab. 1 and just refer to it as the "table below".

pyheatdemand/joss/paper.md

Lines 93 to 101 in 73f41fd

|---------------|----------------------------------------------------------------------------------------------------------------------------------|
| 2 | HD data provided as building footprints or street segments |
|---------------|----------------------------------------------------------------------------------------------------------------------------------|
| 3 | HD data provided as a point or polygon layer, which contains the sum of the HD for regions of official administrative units |
|---------------|----------------------------------------------------------------------------------------------------------------------------------|
| 4 | HD data provided in other data formats such as HD data associated with addresses |
|---------------|----------------------------------------------------------------------------------------------------------------------------------|
| 5 | No HD data available for the region |
|---------------|----------------------------------------------------------------------------------------------------------------------------------|
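With the extra separator rows removed, a fixed version of the quoted rows could look like this (the header row text here is an assumption, since the original header is not shown above, and row 1 is omitted as in the quote):

```markdown
| Data category | Description |
|---------------|-------------|
| 2 | HD data provided as building footprints or street segments |
| 3 | HD data provided as a point or polygon layer, which contains the sum of the HD for regions of official administrative units |
| 4 | HD data provided in other data formats such as HD data associated with addresses |
| 5 | No HD data available for the region |
```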

Paper structure

I think the State of the field, PyHeatDemand Outlook, and PyHeatDemand Resources sections could possibly be merged with the previous sections as they are short. The outlook could go in the Summary. The state of the field should probably be in the statement of need. You can remove the GitHub repository and documentation link as the PDF of the paper already has a link to the repo. The DGE Rollout Webviewer and a reference to Herbst, 2021 could go in the summary as well, where you mention in the final sentence that the package was developed during the DGE Rollout project. You can also move Jüstel et al., 2023 here as an application of the package.

References

There are no citations provided in the summary and statement of need. I suggest adding citations to the following:

  • Summary
    • the purpose of heat demand quantification and mapping
  • Statement of need:
    • energy consumption, primary energy, and thermal energy statistics
    • EU energy efficiency directive and emission reduction target
    • example of heat demand input values for the residential and commercial sectors that are easily accessible and assessable

Bibliography

The DOI for Jüstel et al., 2023 is missing in the PDF. Change note= to doi= to fix this:

note={10.3390/en17020481},

Herbst, 2021 seems like it's missing several authors?

Some of the titles are not capitalised properly in the PDF output of the paper. These words/titles (emphasised in bold) should be enclosed in braces in the bib file to preserve the capitalisation:

  • Herbst, K., E. (2021). A heat demand map of north-west europe
  • Esmukov, K., & others. (2023). GeoPy: Geocoding library for python
  • Gillies, S., & others. (2013). Rasterio: Geospatial raster i/o for Python programmers.
  • Meha, D., Novosel, T., & Duić, N. (2020). Bottom-up and top-down heat demand mapping methods for small municipalities, case gllogoc

[JOSS Review] Installation and documentation

Hi @AlexanderJuestel, I'm reviewing pyheatdemand for JOSS at openjournals/joss-reviews#6275. This issue concerns the installation instructions and documentation of your package. Please let me know if anything is unclear. I haven't finished going through the heat demand examples and the paper yet; I'll open more issues for these if necessary.

Documentation

  • For consistency, you could structure the installation instructions in the README to match that of the readthedocs documentation, i.e. split Pip and Conda installation instructions as subsections.
  • Feature the documentation more prominently in the README. Right now, there's a badge for the docs build but a section with a link to the documentation on readthedocs will be useful. You can maybe merge the API reference / local documentation build instructions here.
  • Add more details on installing from source, i.e. using environment_dev.yml and requirements.txt after cloning the repository.
  • Add a link to the contributing guidelines to the README.
  • Although you have included instructions for continuous integration, I think you should add the testing instructions to the documentation, perhaps under the installation instructions, for testing locally.
  • pytest-cov isn't included in environment_dev.yml and requirements.txt, but I think it should be for the purpose of testing locally.
  • I also noted that the packages in environment_dev.yml and requirements.txt do not match. Perhaps you could include the missing sphinx packages in requirements.txt for consistency?

I've included my installation steps below for your reference, and also in case I'm doing something wrong.

Installation with Pip

I suggest updating the Pip install command to python -m pip install pyheatdemand, which is recommended (see https://pip.pypa.io/en/stable/user_guide/#installing-packages) (and maybe also use a virtual environment).

python -m venv .env
source .env/bin/activate
python -m pip install pyheatdemand

Installation from source (including testing; I added pytest-cov to requirements.txt):

git clone https://github.com/AlexanderJuestel/pyheatdemand.git
cd pyheatdemand
python -m venv .env
source .env/bin/activate
python -m pip install -r requirements.txt
cd test
pytest --cov

Installation with Conda

Is there a reason why environment.yml and environment_dev.yml use Pip to install some dependencies when they already exist in conda-forge? Installing from environment.yml failed for me with the following output:

Channels:
 - conda-forge
 - defaults
Platform: linux-64
Collecting package metadata (repodata.json): done
Solving environment: done


==> WARNING: A newer version of conda exists. <==
    current version: 23.11.0
    latest version: 24.1.0

Please update conda by running

    $ conda update -n base -c conda-forge conda



Downloading and Extracting Packages:
                                                                                                                                                                              
Preparing transaction: done                                                                                                                                                   
Verifying transaction: done                                                                                                                                                   
Executing transaction: done                                                                                                                                                   
Installing pip dependencies: \ Ran pip subprocess with arguments:                                                                                                             
['/home/nms/miniconda3/envs/pyheatdemand/bin/python', '-m', 'pip', 'install', '-U', '-r', '/run/media/nms/Backup/Downloads/pyheatdemand/condaenv._ufstax7.requirements.txt', '--exists-action=b']                                                                                                                                                           
Pip subprocess output:                                                                                                                                                        
Collecting p (from -r /run/media/nms/Backup/Downloads/pyheatdemand/condaenv._ufstax7.requirements.txt (line 1))                                                               
  Using cached p-1.4.0-py3-none-any.whl (10 kB)                                                                                                                               
                                                                                                                                                                              
Pip subprocess error:
ERROR: Could not find a version that satisfies the requirement y (from versions: none)
ERROR: No matching distribution found for y

failed

CondaEnvException: Pip failed

I updated the environment files to use packages from conda-forge only as detailed below, which fixed this issue.

Updated environment.yml

I updated environment.yml to the following (I didn't think the other dependencies are needed as this is not the development version):

name: pyheatdemand
channels:
  - conda-forge
dependencies:
  - python>=3.10
  - pyheatdemand

... and then installed the packages as follows, which proceeded without issues:

git clone https://github.com/AlexanderJuestel/pyheatdemand.git
cd pyheatdemand
conda env create
conda activate pyheatdemand

Updated environment_dev.yml

For the development version, I updated environment_dev.yml as follows (I changed the underscores to dashes for the sphinx packages and added pytest-cov):

name: pyheatdemand
channels:
  - conda-forge
dependencies:
  - python>=3.10
  - geopandas
  - rasterstats # also installing rasterio
  - matplotlib
  - tqdm
  - geopy
  - osmnx
  - sphinx-book-theme==0.3.3
  - sphinx-copybutton
  - nbsphinx
  - pytest-cov

... then installed and tested the packages with no issues:

git clone https://github.com/AlexanderJuestel/pyheatdemand.git
cd pyheatdemand
conda env create -f environment_dev.yml
conda activate pyheatdemand
cd test
pytest --cov

Tests

The output of pytest --cov shows 11 warnings including deprecation and CRS mismatch warnings which should ideally be addressed:

============================================================================ test session starts =============================================================================
platform linux -- Python 3.12.1, pytest-8.0.0, pluggy-1.4.0
rootdir: /run/media/nms/Backup/Downloads/pyheatdemand/test
plugins: cov-4.1.0
collected 38 items                                                                                                                                                           

test_processing.py ......................................                                                                                                              [100%]

============================================================================== warnings summary ==============================================================================
../../../../../../../home/nms/miniconda3/envs/pyheatdemand-dev/lib/python3.12/site-packages/dateutil/tz/tz.py:37
  /home/nms/miniconda3/envs/pyheatdemand-dev/lib/python3.12/site-packages/dateutil/tz/tz.py:37: DeprecationWarning: datetime.datetime.utcfromtimestamp() is deprecated and scheduled for removal in a future version. Use timezone-aware objects to represent datetimes in UTC: datetime.datetime.fromtimestamp(timestamp, datetime.UTC).
    EPOCH = datetime.datetime.utcfromtimestamp(0)

test_processing.py:3
  /run/media/nms/Backup/Downloads/pyheatdemand/test/test_processing.py:3: DeprecationWarning: 
  Pyarrow will become a required dependency of pandas in the next major release of pandas (pandas 3.0),
  (to allow more performant data types, such as the Arrow string type, and better interoperability with other libraries)
  but was not found to be installed on your system.
  If this would cause problems for you,
  please provide us feedback at https://github.com/pandas-dev/pandas/issues/54466
          
    import pandas as pd

test_processing.py::test_get_building_footprints
  /home/nms/miniconda3/envs/pyheatdemand-dev/lib/python3.12/site-packages/pyproj/transformer.py:820: DeprecationWarning: Conversion of an array with ndim > 0 to a scalar is deprecated, and will error in future. Ensure you extract a single element from your array before performing this operation. (Deprecated NumPy 1.25.)
    return self._transformer._transform_point(

test_processing.py::test_calculate_hd_sindex[hd_gdf0-mask_gdf0]
test_processing.py::test_calculate_hd_sindex[hd_gdf0-mask_gdf0]
test_processing.py::test_calculate_hd_sindex[hd_gdf0-mask_gdf0]
test_processing.py::test_calculate_hd_sindex_points[hd_gdf0-mask_gdf0]
test_processing.py::test_calculate_hd_sindex_points[hd_gdf0-mask_gdf0]
test_processing.py::test_calculate_hd_sindex_lines[hd_gdf0-mask_gdf0]
test_processing.py::test_calculate_hd_sindex_lines[hd_gdf0-mask_gdf0]
  /home/nms/miniconda3/envs/pyheatdemand-dev/lib/python3.12/site-packages/geopandas/geodataframe.py:1525: SettingWithCopyWarning: 
  A value is trying to be set on a copy of a slice from a DataFrame.
  Try using .loc[row_indexer,col_indexer] = value instead
  
  See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
    super().__setitem__(key, value)

test_processing.py::test_calculate_hd_street_segments[gdf_roads0-gdf_buildings0]
  /run/media/nms/Backup/Downloads/pyheatdemand/pyheatdemand/processing.py:1448: UserWarning: CRS mismatch between the CRS of left geometries and the CRS of right geometries.
  Use `to_crs()` to reproject one of the input geometries to match the CRS of the other.
  
  Left CRS: EPSG:25832
  Right CRS: None
  
    gdf_joined = gpd.sjoin_nearest(gdf_buildings,

-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html

---------- coverage: platform linux, python 3.12.1-final-0 -----------
Name                                                                      Stmts   Miss  Cover
---------------------------------------------------------------------------------------------
/run/media/nms/Backup/Downloads/pyheatdemand/pyheatdemand/__init__.py         5      0   100%
/run/media/nms/Backup/Downloads/pyheatdemand/pyheatdemand/processing.py     321      0   100%
__init__.py                                                                   0      0   100%
test_processing.py                                                          412      0   100%
---------------------------------------------------------------------------------------------
TOTAL                                                                       738      0   100%

====================================================================== 38 passed, 11 warnings in 26.68s ======================================================================

[JOSS Review] Consider using `black` (or similar) to format the codebase

I generally recommend Python developers use a strict code formatter such as [black](https://black.readthedocs.io/en/stable/) to help the development process. The idea is simple: if the code is always formatted the same way, you are only committing actual code changes instead of formatting changes (blank spaces, tabs, etc.)
