energy-modelling-toolkit / dispa-set

The Dispa-SET unit-commitment and optimal dispatch model, developed at the JRC

License: European Union Public License 1.2

Languages: GAMS 7.65%, Batchfile 0.08%, Python 52.72%, Jupyter Notebook 39.54%
Topics: python, dispatch, power

dispa-set's Introduction

Description

The Dispa-SET model is a unit commitment and dispatch model developed within the Joint Research Centre (JRC), focused on balancing and flexibility problems in the European context. It is written in GAMS, with advanced input/output data handling and visualization routines in Python.

Different formulations are available, offering a trade-off between accuracy and computational complexity: Linear Programming (LP) and Mixed-Integer Linear Programming (MILP). This allows a power system to be modelled at any level of detail, e.g. micro-grid, region, country or continent. A Pan-European scenario is included with the model as of version 2.3.

Features

The model is expressed as an optimization problem. Continuous variables include the individual unit dispatched power, the shed load and the curtailed power generation. The binary variables are the commitment status of each unit. The main model features can be summarized as follows:

  • Minimum and maximum power for each unit
  • Power plant ramping limits
  • Reserves up and down
  • Minimum up/down times
  • Load Shedding
  • Curtailment
  • Pumped-hydro storage
  • Non-dispatchable units (e.g. wind turbines, run-of-river, etc.)
  • Start-up, ramping and no-load costs
  • Multi-nodes with capacity constraints on the lines (congestion)
  • Constraints on the targets for renewables and/or CO2 emissions
  • Yearly schedules for the outages (forced and planned) of each unit
  • CHP power plants and thermal storage

The demand is assumed to be inelastic to the price signal. The MILP objective function is therefore the total generation cost over the optimization period.
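
In simplified form (a sketch only; the cost component names follow the post-processing snippet quoted in the issues below, and the actual GAMS formulation contains additional penalty terms, e.g. for load shedding and curtailment):

\min \sum_{u,t} \Big[ CostFixed_{u}\,Committed_{u,t} + CostStartUp_{u,t} + CostShutDown_{u,t} + CostRampUp_{u,t} + CostRampDown_{u,t} + CostVariable_{u,t}\,Power_{u,t} \Big]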

Quick start

If you want to download the latest version from GitHub for use or development purposes, make sure that you have git and the Anaconda distribution installed, then type the following:

git clone https://github.com/energy-modelling-toolkit/Dispa-SET.git
cd Dispa-SET
conda env create  # Automatically creates environment based on environment.yml
conda activate dispaset # Activate the environment
pip install -e . # Install editable local version

The above commands create a dedicated environment, so that the required dependencies are installed without cluttering your base Anaconda configuration.

At this point, it is necessary to make sure that the GAMS API is properly installed in the newly created environment:

  • Make sure to define an environment variable GAMSDIR pointing to the GAMS installation folder (e.g. C:\GAMS\win64\47.7.0)
  • For GAMS version 45 and higher, the API can be installed directly from pip, replacing the x values below with the installed version of GAMS:
pip install gamsapi==4x.x.x
  • For older versions of GAMS, the API can be compiled from the apifiles/Python directory in the GAMS installation folder. The path to the library must then be added to the PYTHONPATH environment variable. For example, on Linux:
cd /path/to/gams/apifiles/Python/api_310
python gamssetup.py install
export PYTHONPATH=/path/to/gams/apifiles/Python/gams:/path/to/gams/apifiles/Python/api_310
  • For very old versions of GAMS (e.g. 24.x, 25.x), it is possible to install the old low-level API from pip. Note that if this does not work, the API must be uninstalled and compiled from the GAMS apifiles folder.
pip install gdxcc gamsxcc optcc

To check that everything runs fine, you can build and run a test case by typing:

dispaset -c ConfigFiles/ConfigTest.xlsx build simulate

Documentation

The documentation and the stable releases are available on the main Dispa-SET website: http://www.dispaset.eu

Get involved

This project is an open-source project. Interested users are therefore invited to test, comment on or contribute to the tool. Submitting issues is the best way to get in touch with the development team, which will address your comment, question, or development request in the best possible way. We are also looking for contributors to the main code, willing to contribute to its capabilities, computational efficiency, formulation, etc. Finally, we are willing to collaborate with national agencies, research centers, or academic institutions on the use of the model with different datasets relative to EU countries.

License

Dispa-SET is free software licensed under the "European Union Public Licence" (EUPL) v1.2. It can be redistributed and/or modified under the terms of this license.

Main developers

This software was initially developed within Directorate C (Energy, Transport and Climate), one of the seven scientific directorates of the Joint Research Centre (JRC) of the European Commission. Directorate C is based both in Petten, the Netherlands, and Ispra, Italy. Currently, the main developers are the following:

  • Sylvain Quoilin (KU Leuven, Belgium)
  • Konstantinos Kavvadias (Joint Research Centre, European Commission)
  • Matija Pavičević (KU Leuven, Belgium)
  • Matthias Zech (Deutsches Zentrum für Luft-und Raumfahrt, DLR)
  • Matteo De Felice (Joint Research Centre, European Commission)

dispa-set's Issues

Reserve timeseries as input

You should be able to define an input file for 2U 2D similarly to normal demand.

Currently they are hardcoded, and this empirical equation tends to be outdated:

reserve_2U_tot = {i: (np.sqrt(10 * PeakLoad[i] + 150 ** 2) - 150) for i in Load.columns}

The work was done on a branch a while ago, but that branch is currently lost.
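
A minimal sketch of how zone-wise reserve CSVs could be read, mirroring the demand input and falling back to the empirical formula above (the file layout is a hypothetical illustration, not an existing convention):

import os

import numpy as np
import pandas as pd

def load_reserve_2U(reserve_dir, Load, PeakLoad):
    """Read the 2U reserve requirement per zone from a CSV if present,
    otherwise fall back to the hardcoded square-root formula."""
    reserve_2U_tot = {}
    for zone in Load.columns:
        path = os.path.join(reserve_dir, zone, '2U.csv')  # hypothetical layout
        if os.path.isfile(path):
            reserve_2U_tot[zone] = pd.read_csv(path, index_col=0,
                                               parse_dates=True).iloc[:, 0]
        else:
            reserve_2U_tot[zone] = np.sqrt(10 * PeakLoad[zone] + 150 ** 2) - 150
    return reserve_2U_tot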

Provide YAML config files

Perhaps it is a naive request, but I would like to use YAML instead of Excel when working remotely on my Linux machine. Could you provide both versions for the EU configuration?
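
A minimal sketch of such a YAML round-trip (the keys below are illustrative, not the actual Dispa-SET configuration schema):

import yaml

config = {'Description': 'EU configuration',
          'zones': ['Z1', 'Z2'],
          'SimulationDirectory': 'Simulations/simulationEU'}

# Dump the configuration to a YAML file...
with open('config_EU.yml', 'w') as f:
    yaml.safe_dump(config, f, default_flow_style=False)

# ...and load it back on any machine, no Excel required.
with open('config_EU.yml') as f:
    config = yaml.safe_load(f)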

Two missing things for the mid-term scheduling

I have tried the latest version of this branch, and two very important points came out:

  1. The mid-term scheduling should not be carried out for pumped storage
  2. The parameter StorageMinimum should be read from the power plant list

Duplicate values added

I have realised that I had this behaviour in all my past Dispa-SET simulations. However, to be sure, I downloaded the latest release and ran the test simulation.
The starting date of the simulation is 01/01/2015 and the stop date is 07/01/2015. The simulation runs smoothly except for these confusing errors:

[INFO    ] (build_simulation): New build started. DispaSET version: b'v2.3-19-g43dc439'
[ERROR   ] (NodeBasedTable): File /home/felicma/work/DS/tests/dummy_data/Load_RealTime/Z1/2015.csv index different size (8759) than desired index (168).
[ERROR   ] (NodeBasedTable): File /home/felicma/work/DS/tests/dummy_data/Load_RealTime/Z2/2015.csv index different size (8759) than desired index (168).
[ERROR   ] (NodeBasedTable): File /home/felicma/work/DS/tests/dummy_data/Load_RealTime/Z1/2015.csv index different size (8759) than desired index (8760).
[ERROR   ] (NodeBasedTable): File /home/felicma/work/DS/tests/dummy_data/Load_RealTime/Z2/2015.csv index different size (8759) than desired index (8760).

But this is another story. Let's go back to the real bug here.
I opened the Inputs.gdx with GAMS, and the Demand vector has a length of 192, which is different from the length of the vectors contained in Results.gdx, such as OutputSystemCost, which has a length of 168.
Looking at the Demand vector you also spot that:

  1. The first two values are identical even if in the CSV used as inputs they are not
  2. The last 24 values (168-192) are identical too

I think this might have a big impact on the first time steps of the simulations; for example, in the system cost you can see that the value at t=1 is four times higher than all the other samples. I had a similar issue before and had to omit the first samples from my analyses due to the presence of outliers.

Help with the demand load in the "configtest" Excel file (.. Z1 or Z2/2015 csv)

Good day, friends,
Could someone help me with a problem I have with Dispa-SET? When I run the test case "configtest", it runs well, but I notice that the demand data is not loaded. I think the tool is not able to read the path written in the "configtest" file pointing to the demand data (... Z1 / 2015 .csv). If someone has a solution, I would greatly appreciate it.

Link one heat demand to multiple chp plants

Currently, one heat demand time series can be assigned to one (CHP) power plant.
It should be possible for multiple CHP plants to satisfy a given heat demand (see the sketch after this list).
That requires:

  • creating a mapping between heat profiles and power plants
  • modification of the data parsing routine
  • building an incidence matrix, and
  • modifying the relevant equation in GAMS
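
A minimal sketch of the first three bullets, assuming a hypothetical HeatDemand column in the power-plant table:

import pandas as pd

# Hypothetical power-plant table in which each CHP unit declares the
# heat demand time series it serves.
plants = pd.DataFrame({'Unit': ['CHP1', 'CHP2', 'CHP3'],
                       'HeatDemand': ['DH_city', 'DH_city', 'DH_town']})

# Incidence matrix: one row per heat demand, one column per unit,
# 1 where the unit can contribute to that demand.
incidence = pd.crosstab(plants['HeatDemand'], plants['Unit'])
print(incidence)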

Dispatch Plot Bug When RoW present

This bug has been bothering me for quite a while. Here are my observations:

When RoW interconnections are present in the simulation, for some reason the plots do not render properly, as shown in this figure: https://ibb.co/jMnpLGD

Z1 has a RoW interconnection and Z2 doesn't, and thus Z2 doesn't have the same problem. I saw the same behavior in other simulations where I was using RoW.

I tried to debug it, but didn't manage to do so, except to identify that it is caused by the RoW interconnections.

Allow country codes of any length (currently limited to 2). Useful when modelling other regions #15

Never mind. Excuse my mistake. I tested it again and it seems to be working fine!

Comment on the latest pull request: I am using country codes of length 4. I made the latest changes in the pull request and then tested them. Although the edits are useful to construct the correct incidence matrix, the function "write_variables" did not write the parameter "LineNode" properly to the Inputs.gdx file.

Fix v2.3 tag

I think you forgot to annotate tag v2.3 (git tag -a):

[...]$ git for-each-ref refs/tags
ba47317035639ba9338a86549c76c379fb3f8cad tag	refs/tags/v2.0
125eb29930b05191c98303343128b754c5f65425 tag	refs/tags/v2.1
d0d1bea7729031dbe9a99de891fb3ad29e9a0c56 tag	refs/tags/v2.2
08d9ac5dd37ba7f4fd00e689aad12b05746f8c49 commit	refs/tags/v2.3

You can notice that the v2.3 tag is not annotated but lightweight.
So, if you get the readable version from the last annotated tag:

[...]$ git describe
v2.2-345-g1a0a94e
# 345 commits ahead from v2.2 (hash of the last commit : 1a0a94e / -g prefix stands for git)

and from the last tag:

[...]$ git describe --tags
v2.3-163-g1a0a94e
# 163 commits ahead from v2.3

To fix it, you can force annotation of the tag v2.3:

[...]$ git tag -a --force v2.3 v2.3 -m "fix annotated tag"
Updated tag 'v2.3' (was 08d9ac5)

Now, it seems to be right:

[...]$ git for-each-ref refs/tags
ba47317035639ba9338a86549c76c379fb3f8cad tag	refs/tags/v2.0
125eb29930b05191c98303343128b754c5f65425 tag	refs/tags/v2.1
d0d1bea7729031dbe9a99de891fb3ad29e9a0c56 tag	refs/tags/v2.2
afaaa6007ee2ebdc68426c48d986871bc6c1fb1e tag	refs/tags/v2.3

[...]$ git describe
v2.3-163-g1a0a94e

[...]$ git describe --tags
v2.3-163-g1a0a94e

I also pushed tags to my fork repository:

[...] git push --force --tags
Counting objects: 1, done.
Writing objects: 100% (1/1), 172 bytes | 172.00 KiB/s, done.
Total 1 (delta 0), reused 0 (delta 0)
To github.com:corralien/Dispa-SET.git
 + 08d9ac5...afaaa60 v2.3 -> v2.3 (forced update)

Don't forget to annotate the next release tag for v2.4 ;-)

The function "load_csv" returns timezone aware Timestamp, while "pd.DataFrame(index=idx_utc_noloc)" has timezone naive Timestamp index

For example,

Load = NodeBasedTable(config['Demand'],idx_utc_noloc,config['zones'],tablename='Demand')

This line does not properly assign the values from the csv file (it assigns NaNs). If I change it to:

Load = NodeBasedTable(config['Demand'],idx_utc,config['zones'],tablename='Demand')

It works!

Looking at the function "NodeBasedTable":

tmp = load_csv(path, index_col=0, parse_dates=True)
data[c] = tmp.iloc[:,0]

It seems that the function "load_csv" returns a timezone-aware Timestamp index. Then, when the values are assigned to data[c], the assignment does not work, since the variable data has a timezone-naive Timestamp index (data = pd.DataFrame(index=idx_utc_noloc)).

Am I missing something here?
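
If that is indeed the cause, one possible fix is to normalise the index right after reading the CSV, converting it to timezone-naive UTC before the assignment. A sketch, not the actual load_csv implementation:

import pandas as pd

def load_csv_naive_utc(path):
    """Read a time-series CSV and return it with a timezone-naive UTC
    index, so it can be assigned to a frame indexed by idx_utc_noloc."""
    tmp = pd.read_csv(path, index_col=0, parse_dates=True)
    if isinstance(tmp.index, pd.DatetimeIndex) and tmp.index.tz is not None:
        tmp.index = tmp.index.tz_convert('UTC').tz_localize(None)
    return tmp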

The preprocessor did not stop when using a nonexistent path

After 30 minutes trying to understand why my simulations looked so weird, I realised that there was an error in the config file: I wrote '/PATH/storage_level.csv' instead of '/PATH/##/storage_level.csv' for the ReservoirLevel. Needless to say, the first path does not exist. Why did the simulation start anyway? And without any warning!

DISCLAIMER: I was not using the latest version, but the mid-term scheduling branch before the latest commits. I will check whether this still happens with the more recent version.

UPDATE: Yes, in the new version at least there is something in the log file:

19/10/24 14:35:25 [INFO    ] (UnitBasedTable): No data file found for the table ReservoirLevels. Using default value 0

but again, as I suggested in #44, whenever the user specifies a path, we should assume that he/she intends to use it, without falling back to default values. Never.

CLI and mid-term scheduling

I have the impression that the CLI (in other words, running Dispa-SET from the command line directly with dispaset) doesn't build a full simulation (i.e. including the MTS) but just a plain simulation. We should either add the option to the CLI or deprecate it.

Issues when timezone information is present in the CSV timestamp

I am loading a CSV formatted in this way (the first 4 rows):

2016-01-01T00:00:00Z,21632
2016-01-01T01:00:00Z,20357
2016-01-01T02:00:00Z,19152
2016-01-01T03:00:00Z,18310

When building a simulation, the preprocessor is not able to load the data; or rather, it is not able to fit the data loaded with:

tmp = load_csv(path, index_col=0, parse_dates=True)

into the structure data, which has a different index (idx).

idx is this one:

DatetimeIndex(['2016-01-01 00:00:00', '2016-01-01 01:00:00',
              '2016-01-01 02:00:00', '2016-01-01 03:00:00',
              '2016-01-01 04:00:00', '2016-01-01 05:00:00',
              '2016-01-01 06:00:00', '2016-01-01 07:00:00',
              '2016-01-01 08:00:00', '2016-01-01 09:00:00',
              ...
              '2016-12-31 14:00:00', '2016-12-31 15:00:00',
              '2016-12-31 16:00:00', '2016-12-31 17:00:00',
              '2016-12-31 18:00:00', '2016-12-31 19:00:00',
              '2016-12-31 20:00:00', '2016-12-31 21:00:00',
              '2016-12-31 22:00:00', '2016-12-31 23:00:00'],
             dtype='datetime64[ns]', length=8784, freq='H')
<class 'pandas.core.indexes.datetimes.DatetimeIndex'>

while the one loaded by the CSV is:

DatetimeIndex(['2016-01-01 01:00:00+00:00', '2016-01-01 02:00:00+00:00',
               '2016-01-01 03:00:00+00:00', '2016-01-01 04:00:00+00:00',
               '2016-01-01 05:00:00+00:00', '2016-01-01 06:00:00+00:00',
               '2016-01-01 07:00:00+00:00', '2016-01-01 08:00:00+00:00',
               '2016-01-01 09:00:00+00:00', '2016-01-01 10:00:00+00:00',
               ...
               '2016-12-31 14:00:00+00:00', '2016-12-31 15:00:00+00:00',
               '2016-12-31 16:00:00+00:00', '2016-12-31 17:00:00+00:00',
               '2016-12-31 18:00:00+00:00', '2016-12-31 19:00:00+00:00',
               '2016-12-31 20:00:00+00:00', '2016-12-31 21:00:00+00:00',
               '2016-12-31 22:00:00+00:00', '2016-12-31 23:00:00+00:00'],
              dtype='datetime64[ns, UTC]', name='2016-01-01T00:00:00Z', length=8759, freq=None)
<class 'pandas.core.indexes.datetimes.DatetimeIndex'>

Now I will try to avoid inserting the timezone information when creating the CSV files; however, I think we should add a warning when this (unfortunately silent) mismatch happens.
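
A sketch of such a warning, to be placed wherever the CSV index is first parsed:

import logging

def warn_if_tz_aware(tmp, path):
    """Warn when a CSV index carries timezone information, since the
    model index is timezone-naive and the mismatch is otherwise silent."""
    if getattr(tmp.index, 'tz', None) is not None:
        logging.warning('%s: timezone-aware timestamps found; dropping the '
                        'timezone to align with the model index', path)
        tmp.index = tmp.index.tz_localize(None)
    return tmp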

Renaming "country" and "countries" to "zone" and "zones"?

The zones in my case are:

  • regions within a country
  • countries.

I defined a sub-set of the set "n" in GAMS (to represent a country that includes several zones).

My idea is: wouldn't it be more correct to rename all "country" variables in the code to "zone"?

I did this in my code (check out my fork) and defined a new variable named "country".
I believe this will keep the code consistent and avoid any confusion between country, zone and node.

Internalize mid-term scheduling

This would avoid the need to provide reservoir levels as an input. Time resolution: 1 day.
Two options:

  • New GAMS file with a simplified LP formulation
  • Existing GAMS file, modified to run with any timestep (horizon of 365 days, no overlap)

The `StartUpTime` parameter is never used

The Dispa-SET documentation includes the parameter StartUpTime among the input fields needed for the power plants, and it is indeed present in the power-plant files under the Database folder. However, I couldn't find any reference to it in the model formulation: if the parameter is not used, it should be removed to avoid confusing users (like me: I spent time trying to understand how to characterise that field before realising that it is probably not used).

Index different size

Launching a simulation using the mid_term scheduling branch.
I am using 2016 as the reference year, but the leap year apparently poses a problem.
In the log I have:

19/10/11 17:10:06 [ERROR   ] (NodeBasedTable): File Hist-PanEU-DB/Load_RealTime/IT/emh_demand_2015.csv index different size (8759) than desired index (8784).

and

19/10/11 17:10:07 [ERROR   ] (NodeBasedTable): File /scratch/felicma/345167/Hist-PanEU-DB/FuelPrices/CC_L_coal_price.csv index different size (8760) than desired index (8784).

They are both errors, but the simulation continues. What happens then? Is the demand set to zero and the fuel price to the default?
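
Rather than leaving the user guessing, the reindexing could be made explicit. A sketch of one possible behaviour (pad the missing leap-year hours and report it), not the current implementation:

import pandas as pd

def reindex_verbose(series, idx):
    """Reindex a time series onto the desired (e.g. leap-year) index and
    report how many timestamps had to be filled."""
    out = series.reindex(idx)
    n_missing = int(out.isna().sum())
    if n_missing:
        print('%d of %d timestamps missing; forward/backward filling'
              % (n_missing, len(idx)))
        out = out.ffill().bfill()
    return out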

Technology-, CO2- and storage-wise breakdown

  • Sometimes there is a need for a technology-wise breakdown (i.e. when comparing generation from different GAS units, or between conventional and CHP units)

  • CO2 emissions sorted by fuel/technology

  • Storage breakdown in dispatch plots (distinguish between HDAM, HPHS, TES, EV...)

  • #85

EV & Sector wise Demands

@squoilin and @kavvkon: people are telling me that we should introduce separate demand curves as inputs, provided as .csv time series:

  • Electric vehicles
  • Commercial, Residential, Industrial

Although I do agree that it might make sense for the EVs, I'm not convinced that we would get any useful insights from the second one. The only thing that comes to my mind when we speak about individual demand curves is that it might make it easier to decide how much load shedding is available (e.g. industry gets cut off first...), but then again we usually don't analyze systems that lack capacity.

TypeError: in method 'gdxCreateD', argument 2 of type 'char const *'

I have performed a simulation on my Linux machine and everything was fine, but I cannot load the output with get_sim_results.

import dispaset as ds
inputs,results = ds.get_sim_results('Simulations/simulationPANEU/', cache=False)


---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-2-0dcf5cb94e21> in <module>
----> 1 inputs,results = ds.get_sim_results('Simulations/simulationPANEU/', cache=False)

~/work/Dispa-SET/dispaset/postprocessing/postprocessing.py in get_sim_results(path, gams_dir, cache, temp_path)
    514                 results = pickle.load(pfile)
    515     else:
--> 516         results = gdx_to_dataframe(gdx_to_list(gams_dir, resultfile, varname='all', verbose=True), fixindex=True,
    517                                    verbose=True)
    518

~/work/Dispa-SET/dispaset/misc/gdx_handler.py in gdx_to_list(gams_dir, filename, varname, verbose)
    214     tgdx = tm.time()
    215     gdxHandle = new_gdxHandle_tp()
--> 216     gdxCreateD(gdxHandle, gams_dir, GMS_SSSIZE)
    217
    218     # make sure the file path is properly formatted:

~/miniconda2/envs/dispaset/lib/python3.7/site-packages/gdxcc/gdxcc.py in gdxCreateD(pgdx, dirName, msgBufSize)
    313 def gdxCreateD(pgdx, dirName, msgBufSize):
    314     """gdxCreateD(pgdx, dirName, msgBufSize) -> int"""
--> 315     return _gdxcc.gdxCreateD(pgdx, dirName, msgBufSize)
    316
    317 def gdxCreateL(pgdx, libName, msgBufSize):

TypeError: in method 'gdxCreateD', argument 2 of type 'char const *'

The path is detected but I don't know where the error could be.

Problem in Postprocessing of the testing configuration

Hello,
My name is Andrea Mangipinto, and I have just started my Master's thesis with professor Quoilin.

I am running the testing configuration, and I have had a problem in the postprocessing.
In fact, after changing all the attributes containing "country" to "zone" (e.g. plot_country to plot_zone), everything works except the last attribute.

I get the error "module 'dispaset' has no attribute 'storage_levels'".

Naming conventions

Currently:

  • GitHub repo: Dispa-SET
  • package folder: DispaSET
  • package name in setup.py: dispaset
  • command line interface: dispaset
  • documentation website: dispaset.eu

Proposal:

  • rename the package folder (and consequently the package name) from DispaSET to dispaset. This simplifies/harmonizes some things and is in line with Python naming conventions. This means the import will be changed to import dispaset as ds. See also #3.
  • Keep official name in manuals and documents as Dispa-SET.

Renaming non-simulated zones as 'RoW'

Is this really necessary? I was wondering whether we could remove this renaming and rather use the original names provided in the cross-border flow dataset, to make the analysis easier and more intuitive.

Datetime bug

When a time series input csv spans more than one year and no leap year is present, the new year specified in the config file overwrites all days in the csv file to the new year. This results in an error in the last line of the function load_time_series(config, path, header='infer'):

return data.reindex(config['idx_long'], method='nearest').fillna(method='bfill')

I will work on this! This is just a reminder so that I don't forget!
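
One way to make that last line fail loudly instead of silently copying data from a neighbouring year is to bound the match distance via reindex's tolerance argument. A sketch, not the committed fix:

import pandas as pd

def reindex_strict(data, idx):
    """Nearest-match reindex, but only within one hour: timestamps from
    another year become NaN instead of being silently duplicated."""
    out = data.reindex(idx, method='nearest', tolerance=pd.Timedelta('1h'))
    return out.bfill()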

Chord diagram for net-flows

It would be nice to include a chord diagram for the net flows between different zones.

Plotly offers this kind of graphical representation.

benchmarking plots

Should we include the benchmarking plots (simulated load duration curves vs historical LDC) in the master branch?
If yes, how should we do this? In a notebook?

Storage error in dispatch plot (Standard formulation only!)

When the Standard formulation is used, the reservoir levels of storage units (only hydro and batteries, not thermal) are not multiplied by the storage capacity. This results in misleading storage level plots: instead of xyz TWh, the plots are in the range 0 to 1e-6 TWh.
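
The fix presumably amounts to rescaling the plotted levels. A sketch under assumed names (an OutputStorageLevel table in the results and a StorageCapacity column in the unit table; both are assumptions):

def scale_storage_levels(inputs, results):
    """Convert per-unit reservoir levels (0-1) into absolute energy by
    multiplying with each unit's storage capacity (assumed names)."""
    levels = results['OutputStorageLevel'].copy()
    for u in levels.columns:
        levels[u] *= inputs['units'].loc[u, 'StorageCapacity']
    return levels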

Improving the error management

I would like to suggest making the fallback to default prices more explicit: when the path of a file with fuel prices is specified in the configuration file, then, in case of error, the pre-processor should stop instead of falling back to default prices with a warning. This can happen because you mistyped the filename or because the path is incorrect. I think that if a user specifies a filename, he/she expects the data included in that file to be used.

Last fixes before 2.3

  • Check hydro capacities (HDAM and HPHS)
  • Adapt the thermal unit capacities according to the historical generation. We should be careful about the "sleeping" capacity in reserve markets
  • Check wind offshore (both AF and capacity)
  • Clean up database from obsolete files. One last check for licensing issues. Attribute where necessary.
  • "Benchmarking" plots
  • Building the base EU case should not have any 'errors' or 'critical' messages. Ideally also no warnings. Modify code or complete data where necessary

Tuples in yaml files

When dumping the config dictionary into a yaml file, tuples need to be registered for the start/stop dates.
When loading the file back (e.g. config_EL.yml), the following error occurs:

ConstructorError: could not determine a constructor for the tag 'tag:yaml.org,2002:python/tuple'
in "ConfigFiles/config_EL.yml", line 38, column 12

Cost of imports and exports (EUR) for each country (node) ?

Given the current GAMS code and its results, is it possible to write a postprocessing function that produces a time series of the cost of imports and exports (EUR) for each country (node)?
I would really appreciate any help or suggestions!
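
A sketch of how this could look, assuming (these names are assumptions, not a confirmed API) that the results expose an OutputFlow table with columns named like 'Z1 -> Z2' in MW and a zonal marginal price table ShadowPrice in EUR/MWh:

import pandas as pd

def flow_cost(results, zone):
    """Value one zone's imports/exports at its own marginal price."""
    price = results['ShadowPrice'][zone]
    cost = pd.Series(0.0, index=price.index)
    for line, flow in results['OutputFlow'].items():
        origin, dest = [z.strip() for z in line.split('->')]
        if dest == zone:      # import: pay at the local price
            cost += flow * price
        elif origin == zone:  # export: revenue at the local price
            cost -= flow * price
    return cost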

GAMS path for get_sim_results

What do you think about adding a parameter to specify the GAMS path to the function, to avoid the interactive prompt when the path used for the simulation is not found?
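
Note that the traceback in another issue above shows the signature get_sim_results(path, gams_dir, cache, temp_path), so the parameter seems to exist already; passing it explicitly should skip the prompt:

import dispaset as ds

inputs, results = ds.get_sim_results('Simulations/simulationPANEU/',
                                     gams_dir='/path/to/gams',  # your GAMS folder
                                     cache=False)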

Include the EU Dispa-SET data and config file in the repository

  • Slovakia power plants are missing
  • Norway: large amounts of lost load. Inflows may be out of scale (e.g. for 2012 they reach 2x the installed capacity)
  • Real(istic) outages
  • Solar capacity in FR is missing (~7 GW)
  • LU does not appear to be connected to any other zone in the NTC table
  • CH solar capacity seems to be overestimated (~1 GW vs 0.3 GW historical)

PriceTransmission

Dear all,
I would like to set a cost for the transmission between the zones. Looking at the objective function, it seems that PriceTransmission is what I need, but although it is defined in the code, a value is never assigned to it. Should we add it to the configuration?
I need it because, in my case, I cannot assume that importing electricity costs the same as generating it locally.

Get time series of unit-specific operation cost

import numpy as np

# ds_to_df is Dispa-SET's post-processing helper that flattens the inputs
# into DataFrames; it is assumed to be in scope (imported from the package).

def get_units_operation_cost(inputs, results):
    """
    Computes the operation cost for each power unit at each instant of time from the Dispa-SET results.
    Operation cost includes: CostFixed + CostStartUp + CostShutDown + CostRampUp + CostRampDown + CostVariable

    :param inputs:      Dispa-SET inputs
    :param results:     Dispa-SET results
    :returns out:       DataFrame with the power units in columns and the operation cost at each instant in rows
    """
    datain = ds_to_df(inputs)

    # DataFrame with start-up times for each unit (1 at start-up)
    StartUps = results['OutputCommitted'].copy()
    for u in StartUps:
        values = StartUps.loc[:, u].values
        diff = -(np.roll(values, 1) - values)
        diff[diff <= 0] = 0
        StartUps[u] = diff

    # DataFrame with shut-down times for each unit (1 at shut-down)
    ShutDowns = results['OutputCommitted'].copy()
    for u in ShutDowns:
        values = ShutDowns.loc[:, u].values
        diff = (np.roll(values, 1) - values)
        diff[diff <= 0] = 0
        ShutDowns[u] = diff

    # DataFrame with ramping-up levels for each unit at each instant (0 when ramping down or levelling out)
    RampUps = results['OutputPower'].copy()
    for u in RampUps:
        values = RampUps.loc[:, u].values
        diff = -(np.roll(values, 1) - values)
        diff[diff <= 0] = 0
        RampUps[u] = diff

    # DataFrame with ramping-down levels for each unit at each instant (0 when ramping up or levelling out)
    RampDowns = results['OutputPower'].copy()
    for u in RampDowns:
        values = RampDowns.loc[:, u].values
        diff = (np.roll(values, 1) - values)
        diff[diff <= 0] = 0
        RampDowns[u] = diff

    FixedCost = results['OutputCommitted'].copy()
    StartUpCost = results['OutputCommitted'].copy()
    ShutDownCost = results['OutputCommitted'].copy()
    RampUpCost = results['OutputCommitted'].copy()
    RampDownCost = results['OutputCommitted'].copy()
    VariableCost = results['OutputCommitted'].copy()

    OperatedUnitList = results['OutputCommitted'].columns
    for u in OperatedUnitList:
        unit_indexNo = inputs['units'].index.get_loc(u)
        FixedCost.loc[:, [u]] = np.array(results['OutputCommitted'].loc[:, [u]]) * inputs['parameters']['CostFixed']['val'][unit_indexNo]
        StartUpCost.loc[:, [u]] = np.array(StartUps.loc[:, [u]]) * inputs['parameters']['CostStartUp']['val'][unit_indexNo]
        ShutDownCost.loc[:, [u]] = np.array(ShutDowns.loc[:, [u]]) * inputs['parameters']['CostShutDown']['val'][unit_indexNo]
        RampUpCost.loc[:, [u]] = np.array(RampUps.loc[:, [u]]) * inputs['parameters']['CostRampUp']['val'][unit_indexNo]
        RampDownCost.loc[:, [u]] = np.array(RampDowns.loc[:, [u]]) * inputs['parameters']['CostRampDown']['val'][unit_indexNo]
        VariableCost.loc[:, [u]] = np.array(datain['CostVariable'].loc[:, [u]]) * np.array(results['OutputPower'][u]).reshape(-1, 1)

    UnitOperationCost = FixedCost + StartUpCost + ShutDownCost + RampUpCost + RampDownCost + VariableCost

    return UnitOperationCost

pytest: 4 failed

I have cloned the master branch and created an environment from the yml file. I then started pytest:

platform win32 -- Python 3.7.3, pytest-4.4.1, py-1.8.0, pluggy-0.9.0

I don't know where to find the full pytest output, but this is the error that appears multiple times:

____________________________________________________________________ test_solve_gams[MILP] ____________________________________________________________________

config = {'AllowCurtailment': 1.0, 'Clustering': 1.0, 'CostHeatSlack': 'H:\\Code\\Dispa-SET\\', 'CostLoadShedding': 'H:\\Code\\Dispa-SET\\', ...}

    @pytest.mark.skipif('TRAVIS' in os.environ,
                        reason='This test is too long for the demo GAMS license version which is currently installed in Travis')
    def test_solve_gams(config):
        from dispaset.misc.gdx_handler import get_gams_path
>       r = ds.solve_GAMS(config['SimulationDirectory'], get_gams_path())

tests\test_solve.py:30:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
dispaset\misc\gdx_handler.py:424: in get_gams_path
    tmp = input('Specify the path to GAMS within quotes (e.g. "C:\\\\GAMS\\\\win64\\\\24.3"): ')
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <_pytest.capture.DontReadFromInput object at 0x000000000389BE48>, args = ()

    def read(self, *args):
>       raise IOError("reading from stdin while output is captured")
E       OSError: reading from stdin while output is captured

C:\PGM\ANACONDA\envs\dispaset\lib\site-packages\_pytest\capture.py:693: OSError
-------------------------------------------------------------------- Captured stdout call ---------------------------------------------------------------------
Specify the path to GAMS within quotes (e.g. "C:\\GAMS\\win64\\24.3"):
_______________________________________________________________________ test_build[LP] ________________________________________________________________________

config = {'AllowCurtailment': 1.0, 'Clustering': 1.0, 'CostHeatSlack': 'H:\\Code\\Dispa-SET\\', 'CostLoadShedding': 'H:\\Code\\Dispa-SET\\', ...}
tmpdir = local('C:\\Users\\felicma\\AppData\\Local\\Temp\\1\\pytest-of-felicma\\pytest-0\\test_build_LP_0')

    def test_build(config, tmpdir):
        # Using temp dir to ensure that each time a new directory is used
        config['SimulationDirectory'] = tmpdir
>       SimData = ds.build_simulation(config)

tests\test_solve.py:23:
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
dispaset\preprocessing\preprocessing.py:651: in build_simulation
    write_variables(config['GAMS_folder'], gdx_out, [sets, parameters])
dispaset\misc\gdx_handler.py:178: in write_variables
    gams_dir = get_gams_path(gams_dir=gams_dir.encode())
dispaset\misc\gdx_handler.py:424: in get_gams_path
    tmp = input('Specify the path to GAMS within quotes (e.g. "C:\\\\GAMS\\\\win64\\\\24.3"): ')
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <_pytest.capture.DontReadFromInput object at 0x000000000389BE48>, args = ()

    def read(self, *args):
>       raise IOError("reading from stdin while output is captured")
E       OSError: reading from stdin while output is captured

I think this is due to the fact that it looks for the GAMS path interactively. Is it possible to improve the automatic discovery of the GAMS path without asking the user for input?
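
A sketch of a non-interactive fallback: honour the GAMSDIR environment variable (mentioned in the quick-start section above) before ever prompting on stdin:

import os

def get_gams_path_noninteractive():
    """Resolve the GAMS folder from the GAMSDIR environment variable and
    fail fast instead of reading from stdin (which pytest captures)."""
    gams_dir = os.environ.get('GAMSDIR')
    if gams_dir and os.path.isdir(gams_dir):
        return gams_dir
    raise RuntimeError('GAMS not found: set the GAMSDIR environment variable')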

Fuel prices per country (node)

Currently the fuel price time series, from which the variable costs are estimated, are global for all plants. We should offer an optional extra level of detail where the input file includes one fuel price time series per country.
