
utwente-energy / alpg

56 stars · 6 watchers · 26 forks · 112 KB

Artificial Load Profile Generator for DSM

License: GNU General Public License v3.0

Language: Python (100%)
Topics: simulation, energy, energy-management, model

alpg's Issues

Regarding HP heat demand vs. electrical power; self-consumption; location

Hello Sir,

I am currently working on my Master's thesis on "Demand Side Flexibility". In order to analyze household load consumption and PV generation data, I am interested in utilizing "alpg". I have a few questions regarding the Heat Pump's Heat Demand Output, the results of Self-Consumption, and the location in Germany.

1: Result folder:

  • I have a dataset called "Heatdemand_Profile" which contains the heat demand (in watts) for domestic hot water (DHW). I also have a "Heat gain profile" which contains the combined heat gain from people and devices. My question is whether the values in these datasets are already electrical power (watts), or whether I need to convert them using the COP.
  • I would also like to ask how the HP or CHP heat demand is factored into the results. I have observed that the "Heat gain" and "Heat gain persons" profiles change depending on the thermostat's response in reaching the set-point temperature. Could you clarify whether the "Heat gain persons" data (in watts) is equivalent to the heat pump's heat demand, or whether I need to do a further calculation? I would appreciate your guidance on this.
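If the heat-demand profile does turn out to be thermal power, the conversion to electrical power is a division by the COP. A minimal sketch, assuming a constant COP (a real heat pump's COP varies with source and sink temperature); the function name and COP value are illustrative, not from the ALPG code:

```python
# Sketch: convert a thermal heat-demand profile (W) into the electrical
# power (W) a heat pump would draw, assuming a CONSTANT COP. A real COP
# varies with outdoor and supply temperature; the value 3.5 is illustrative.

def heat_to_electric(heat_demand_w, cop=3.5):
    """Electrical input power for a heat pump delivering heat_demand_w."""
    return [q / cop for q in heat_demand_w]

heat_profile = [2100.0, 3500.0, 0.0]       # thermal demand in W per interval
electric_profile = heat_to_electric(heat_profile)
print(electric_profile)  # -> [600.0, 1000.0, 0.0]
```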

2: example.py:
  • In the "example.py" file, I came across a line that reads: "This emulates the Dutch 'nul-op-de-meter' regime (net zero annual electricity usage)." From this, it appears that the output is generated using either a self-consumption or a net-metering mechanism to achieve net-zero annual electricity usage. Can you provide more clarity on the meaning of this line?

  • I need to update the location from the Netherlands to Germany in this file. I switched from the "Europe/Amsterdam" timezone to "Europe/Berlin" in order to collect German household data, and I also updated the latitude and longitude accordingly. I'm curious to know if the changes I made are enough to display the relevant data or if I need to modify anything else in the config or input folder.
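For what it's worth, the location change described above can be sketched as below. The variable names are illustrative and may not match the actual ALPG config keys; note also that the weather and irradiation input files would still describe Twente unless they are replaced with German data as well.

```python
# Hypothetical sketch of location settings when moving the simulation from
# Twente to Berlin; names are illustrative, not the actual ALPG config keys.
import datetime
import zoneinfo

timezone = zoneinfo.ZoneInfo("Europe/Berlin")   # was "Europe/Amsterdam"
latitude = 52.52                                # Berlin (Twente is ~52.24)
longitude = 13.41                               # Berlin (Twente is ~6.85)

# The timezone affects local-time behaviour such as DST:
noon_june = datetime.datetime(2023, 6, 21, 12, 0, tzinfo=timezone)
print(noon_june.utcoffset())  # -> 2:00:00 (CEST)
```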

Thank you in advance!

Induction consumption

I have created two examples (each with 35 households), one with zero penetration of induction cooking (penetrationInductioncooking = 0) and the other one with 100%. When I check the file Electricity_profile_GroupInductive.csv I can see more or less the same consumption. Am I looking at the wrong file? Where can I see the consumption of the induction stove?
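One quick way to check is to total both files and compare the sums. The sketch below assumes semicolon-delimited values with one row per interval and one column per household, which may not match the actual ALPG output layout; adjust the delimiter accordingly.

```python
# Sketch: total the consumption in two profile CSVs to compare scenarios.
# Delimiter and layout (rows = intervals, columns = households) are assumed.
import csv
import io

def total_consumption(csv_text, delimiter=";"):
    """Sum every numeric cell in a profile CSV given as a string."""
    reader = csv.reader(io.StringIO(csv_text), delimiter=delimiter)
    return sum(float(cell) for row in reader for cell in row if cell)

profile_no_induction = "100;200\n150;250\n"      # toy data, two households
profile_full_induction = "300;400\n350;450\n"
print(total_consumption(profile_no_induction),
      total_consumption(profile_full_induction))  # -> 700.0 1500.0
```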

Simulation with data from the TMY generator + electricity needed to warm up the house

Hello!

I want to use this generator in my Master's thesis (simulation of electrical load profiles of residential prosumers), but I have run into two problems.

My first question: is the power needed to heat the house (conventional, HP, or CHP) included in the output file 'Electricity_Profile'? Heating is usually one of the biggest consumers, so my simulation needs it in the load profiles as well.

My second problem: when I use data from the TMY generator (https://re.jrc.ec.europa.eu/pvg_tools/en/#TMY), the GHI data is provided hourly in W/m². Because ALPG expects GHI in J/cm², I multiply the TMY values by 0.36 to obtain J/cm². However, this leads to much larger GHI values than Twente's sample file has (sometimes 10 times larger), and as a result the generated energy of the solar installation is also at least twice as large.
I am comparing data from the Netherlands (Twente, in the ALPG input) with data from Belgium (Ghent, TMY), so there should not be much difference in weather conditions.
My question is whether I am converting the W/m² correctly in the first place, and whether I can modify the program so that I can leave the data in hourly W/m² and insert it that way.
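For the conversion itself, the factor 0.36 checks out: 1 W/m² sustained over an hour is 3600 J/m², and with 10⁴ cm² per m² that is 0.36 J/cm². A small sanity check:

```python
# Sanity check of the unit conversion:
# 1 W/m^2 over one hour = 3600 J/m^2 = 3600 / 10_000 J/cm^2 = 0.36 J/cm^2.
def wm2_to_jcm2(ghi_wm2, interval_s=3600):
    """Convert an average irradiance (W/m^2) over an interval to J/cm^2."""
    return ghi_wm2 * interval_s / 10_000

print(wm2_to_jcm2(1.0))    # -> 0.36
print(wm2_to_jcm2(800.0))  # -> 288.0, a bright summer hour
```

Since the arithmetic is correct, the mismatch may instead come from the interval length the Twente sample file assumes (e.g. values per sub-hourly interval rather than per hour), which would be worth checking against the input documentation.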

Thanks in advance

Performance of generation of profiles

I've been looking to use ALPG to generate profiles for my thesis. While running the generator, I noticed that it takes quite a while. I looked into it and I think it can be improved in two ways: generating data for multiple households at the same time, and writing the data to disk differently. I have done some quick tests for both, and they seem feasible; separately, each has shown a significant execution-time reduction (20% or more on my laptop).

Generating data for multiple households

Currently the generator produces the data for a single household at a time, writes it to disk, and then starts on the next household. If I understand the code correctly, there is no normalisation or relation of any kind between the data of different households. Generation could thus be parallelised, at the cost of a bit more randomness in the output, since the random function would no longer be called in the same order for the same house.
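The per-household parallelisation could look roughly like the sketch below; `simulate_household` is a stand-in for the real ALPG routine, and seeding a separate RNG per household would keep the output reproducible regardless of worker scheduling:

```python
# Sketch of parallel household generation with reproducible randomness:
# each household gets its own seeded RNG, so results do not depend on the
# order in which workers run. simulate_household is a placeholder.
import random
from multiprocessing import Pool

def simulate_household(house_id, base_seed=42):
    rng = random.Random(base_seed + house_id)    # independent stream per house
    profile = [rng.random() for _ in range(4)]   # dummy 4-interval profile
    return house_id, profile

if __name__ == "__main__":
    with Pool(processes=4) as pool:
        results = dict(pool.map(simulate_household, range(8)))
        print(sorted(results))  # house ids 0..7, each with its own profile
```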

Writing data to disk in a different way

The generator currently produces the data for one household, then writes it to disk. Because of the format it is stored in, it has to look for the end of each line and append to it, which is quite time-consuming. There are different ways to solve this; I propose two.

The first would be a change in output format. Instead of splitting the data over multiple files, it could be that a single file contains all the data of one household. This way, the file can be written separately, without having to look anything up and as one big batch of data.

While I do think that this is a nice output format and it would actually help me a lot (I am currently writing a lot of code to split and rearrange the data back into households), I don't think it is the right way to go. The reason is that it would significantly change the output format and would require a rewrite of any program that consumes ALPG output.

The second solution would be to generate data for multiple houses and write it out in one batch. This is faster because it reduces waiting on I/O. The downside is that it requires more RAM, since the data of multiple houses must be held in memory at the same time.
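The batched append could work along these lines; the semicolon-separated, one-line-per-interval layout is assumed here:

```python
# Sketch of batched column appends: instead of reopening the file and
# seeking line ends once per house, append a whole batch of houses at once.
def append_batch(lines, batch_columns):
    """Append one value per house to each line (one line per time step)."""
    out = []
    for t, line in enumerate(lines):
        extra = ";".join(str(col[t]) for col in batch_columns)
        out.append(line + ";" + extra if line else extra)
    return out

lines = ["", ""]                                 # two time steps, no houses yet
lines = append_batch(lines, [[1, 2], [3, 4]])    # houses A and B in one pass
print(lines)  # -> ['1;3', '2;4']
```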

Proposal

I propose to implement concurrent data generation for households and batched writing to the CSV files, both configurable in the config. This way, ALPG can still be run as originally written; but if one chooses to enable these options, data generation becomes faster, which can be quite significant when generating a large number of houses.

I know this is quite out of the blue, so please let me know if you're even looking for something like this. Any thoughts and comments are much appreciated.

Changing time resolution of the simulation

Hello!
I'm using the tool for my Master's thesis at TU Eindhoven. I see that you did not fully implement the time-rounding feature (to change the simulation resolution), although you started on it. Some loads (the full electricity load, saved in ElectricityProfile.csv) are already in 1-minute resolution. I tried to change the time resolution myself, but it seems I was doing it in the wrong place in the code.
Can you please point me to where exactly the time resolution is defined? And how do you see the time-resolution rounding being integrated into all simulations (e.g. clustering the energy demand of the EVs)?
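As a workaround until a configurable resolution exists, the 1-minute output can be down-sampled after generation. A minimal sketch, averaging power over fixed-size blocks (which may differ from how ALPG's internal time base handles it):

```python
# Sketch: down-sample a 1-minute power profile (W) by averaging blocks.
# Averaging preserves average power; per-interval energy would need a sum.
def resample(profile, factor):
    """Average consecutive blocks of `factor` samples."""
    assert len(profile) % factor == 0, "profile length must divide evenly"
    return [sum(profile[i:i + factor]) / factor
            for i in range(0, len(profile), factor)]

minute_profile = [100, 200, 300, 400, 500, 600]  # six 1-minute samples
print(resample(minute_profile, 3))  # -> [200.0, 500.0] (3-minute resolution)
```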
