
grass-flammability's People

Contributors

dschwilk, lindbeem2791, xiulingao

grass-flammability's Issues

hobo data files are named by date, but no indication of position

Ok, as far as I can tell, the files in data/hobo are named by date, followed by a mandatory "0" (why?) followed by an optional "a", "b" or "c".

Don't you have multiple thermocouples measuring simultaneously in a trial? What is the naming scheme for these data? I've searched and can find nothing on this.

unable to build linear mixed model for mass loss rate: badly scaled factor

I am trying to build a full linear mixed model for biomass loss rate, with logged total biomass, mass density, and location as fixed effects and label (each specimen) as a random intercept (no random slope is assumed), but I get the error below:

tmloss10FullMod <- lmer(estimate ~ logtmass * sp.cd * location + density10 * sp.cd * location +
                        (1 | label), data = all_burns)
Error in fn(x, ...) : Downdated VtV is not positive definite

As I checked online, this seems to be associated with a badly scaled factor, but I am not sure how I should rescale the mass loss rate here. I tried taking abs(mass-loss-rate) and logging it, but that did not work. Below is the distribution of the absolute value of the mass loss rate:
[screenshot: distribution of the absolute value of mass loss rate, Nov 03 20:09:45]
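One common first step with this lme4 error is to rescale the continuous predictors rather than the response. A sketch, assuming the run-together names in the formula above split as logtmass, sp.cd, location, and density10 (an assumption about the all_burns columns):

```r
library(lme4)

# Center and scale the continuous predictors so they are on comparable
# scales, then refit.  Column names are assumptions about all_burns.
all_burns$logtmass.s  <- as.numeric(scale(all_burns$logtmass))
all_burns$density10.s <- as.numeric(scale(all_burns$density10))

tmloss10FullMod <- lmer(estimate ~ logtmass.s * sp.cd * location +
                          density10.s * sp.cd * location +
                          (1 | label), data = all_burns)
```

This error can also signal a rank-deficient fixed-effects design (e.g. a factor combination with no observations), so checking `table(all_burns$sp.cd, all_burns$location)` for empty cells is worth doing as well.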

Any suggestions on trait measurement methods?

I made a trait measurement methods file, but I am still not sure whether I need to improve my methods, especially for measuring plant surface area and volume. If you have time to look it over, please leave any comments that would help improve it. Appreciated!

ms-figs.R won't run without some pre-existing pca results

The script refers to "../results/flamabove_PCA.rda", but this file does not necessarily exist. There is commented-out code above that looks relevant, but it also fails to run if I uncomment it.

Side note: I highly recommend using saveRDS() and readRDS() rather than save() and load() so that you can control object names. I consider save() and load() bad practice, as they relinquish control of workspace names to an external file.
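A minimal sketch of the difference (object and file names here are illustrative, not the project's actual ones):

```r
# saveRDS() stores one object; readRDS() returns it, and the caller picks
# the name, so the workspace stays under the script's control.
pca <- prcomp(matrix(rnorm(40), ncol = 4))
saveRDS(pca, "pca.rds")
pca_scores <- readRDS("pca.rds")   # name chosen at read time

# save()/load() instead restore whatever name was recorded in the file:
save(pca, file = "pca.rda")
rm(pca)
load("pca.rda")   # silently re-creates `pca` in the workspace
```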

goals for analyzing summer data

I have four goals that I want to accomplish while playing around with my summer data (flammability and plant traits) on 8 grass species (6 shade intolerant, 2 shade tolerant):

  1. Is there a species effect on the flammability data: flaming and smoldering duration, HOBO temperature data at 0–10 cm and above 10 cm, and maximum flame height?
  2. Can the canopy architecture data for unburned plants be used to build a model that predicts biomass at 0–10 cm and above 10 cm for burned plants?
  3. Does the biomass of burned plants at 0–10 cm and above 10 cm affect the duration of >60 °C temperatures at 0–10 cm and above 10 cm?
  4. Does leaf SAV influence maximum flame height?

final_summary_dataset.R does not produce usable summary data

  1. I can't easily figure out what this script dumps into the global namespace. It is undocumented.
  2. 'alldata' is a nice data frame, but it is produced by ms-figs.R rather than by this script.
  3. The PCA data (e.g. pcadata.above) has no sample information, so I can't line up the PCA results with anything useful like species names. How can one line up alldata and the PCA scores?

For this project, you should have one final full data frame with observations as rows, e.g. "alldata". If you want to run a PCA on a subset, subset it temporarily and then merge the results back in. As it is, I can't figure out how to make the figures I really want to make.
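The subset-then-merge pattern can be sketched in R like this (the subset condition, trait columns, and the "label" identifier are assumptions about alldata):

```r
# Run the PCA on a temporary subset, keep the row identifiers alongside
# the scores, and merge them back into the one master data frame.
above  <- subset(alldata, position == "above")    # hypothetical subset
pca    <- prcomp(above[, c("trait1", "trait2", "trait3")], scale. = TRUE)
scores <- data.frame(label = above$label,         # sample id kept with scores
                     PC1 = pca$x[, 1], PC2 = pca$x[, 2])
alldata <- merge(alldata, scores, by = "label", all.x = TRUE)
```

Rows outside the subset simply get NA scores, so every figure can be drawn from the single merged data frame.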

clean_up_trial.R runs multiple times

hobo-temps.R calls source("./clean_up_trial.R")

read_balance_data.R calls source("./clean_up_trial.R")

read_balance_data.R is sourced by ./burning_trial_summaries.R

./burning_trial_summaries.R and hobo-temps.R are in turn both sourced by final_summary_dataset.R, so clean_up_trial.R ends up being sourced multiple times.

I'll figure out how to clean that up.
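One lightweight option (a sketch, not necessarily how this will actually be cleaned up) is to guard the shared script so repeated source() calls are no-ops:

```r
# In each caller, replace the bare source() with a guarded version.  The
# sentinel variable name is illustrative.
if (!exists(".clean_up_trial_done")) {
  source("./clean_up_trial.R")
  .clean_up_trial_done <- TRUE
}
```

The alternative is restructuring so that only the top-level script (final_summary_dataset.R) sources shared helpers, and lower-level scripts assume they have already run.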

2015 data clean up

I cleaned up the data from 2016, but the data from 2015 still needs some cleanup. I already renamed all the balance data files with a consistent naming scheme, so that should be fine now, but I still need to rewrite the metadata files for 2015. I will come back to this later.

Mass loss calculations needed

It looks like there is a start on some of the balance-derived burning trial data (biomass-loss.R), but this has hard-coded dates and looks unfinished.

We need to

  1. break mass (balance) data into trials as was done for the HOBO data
  2. calculate summaries per trial (eg maximum rates of mass loss, what else?)
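Step 2 could start from something like this sketch (the columns trial, time.s, and mass.g are assumptions about the split balance data):

```r
# Fastest mass loss per trial: the most negative first difference of mass
# over time, in g/s.
max_loss_rate <- function(d) {
  d <- d[order(d$time.s), ]
  min(diff(d$mass.g) / diff(d$time.s))
}
rates <- sapply(split(mass_data, mass_data$trial), max_loss_rate)
```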

no "end.time" in burning-trials.csv makes the whole data fragile

This data is fragile because the order of the rows matters. In general, data should not depend upon a particular sorting method in order to be understood. The only way to know when a burning trial ends is to either 1) examine the temperatures directly or 2) assume that it ends sometime before the next start.time.
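Until an explicit end.time is recorded, option 2 can at least be stated explicitly in code rather than left implicit in row order (a sketch using dplyr; column names are assumptions about burning-trials.csv):

```r
library(dplyr)

# Assume each trial ends when the next one starts -- exactly the fragile
# assumption described above, but made explicit and robust to row order.
trials <- trials %>%
  arrange(start.time) %>%
  mutate(end.time = lead(start.time))
```

The last trial still gets an NA end.time, which is another reason to record end times directly.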

durations are recorded as character strings in burning-trials.csv

ignition.d, combustion.d and glowing.d were all recorded as character vectors rather than numeric data.

This will require some parsing, and lubridate has no built-in functions for it. These should all be numeric data in seconds. Why were these recorded as character data when they should be numbers?
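If the strings turn out to look like "MM:SS" (an assumption; the actual format in burning-trials.csv may differ), base R can do the parsing without lubridate:

```r
# Convert "MM:SS" character durations to numeric seconds.
to_seconds <- function(x) {
  parts <- strsplit(x, ":", fixed = TRUE)
  vapply(parts, function(p) sum(as.numeric(p) * c(60, 1)), numeric(1))
}
to_seconds(c("1:30", "0:45"))  # 90 45
```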

architecture data clean up and biomass prediction

I need to write code to:

  1. read the raw canopy architecture data and clean it into a format that can be used directly for further analysis.
  2. build a model from the canopy architecture and biomass data of unburned plants to predict biomass in the 0–10 cm and above-10 cm intervals for burned plants.
  3. look at whether the biomass in those two intervals is related to the flammability data.

balance data smoothing and regression

Some of the balance data are too noisy, so it would be better to smooth the data before doing any regression to extract the coefficient that will later be used as the biomass loss rate. @dschwilk split the data into two subsets (flaming and smoldering) and ran a linear regression on each, excluding data points from the first 50 seconds, where the fluctuation usually goes crazy. However, noise remains in the rest of the data and may make the regression problematic. Here is my thought on how to smooth the balance data and run the regression to extract the coefficient:

  1. use loess() to fit a smooth curve to the original balance data.
  2. use predict() with the fit from step 1 at the original time points (time in seconds) within the flaming period (without excluding the first 50 s, though) to predict biomass values.
  3. regress the predicted biomass values from step 2 on time and take the coefficient.
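The three steps above, sketched in base R (the data frame d, its columns time.s and mass.g, the stage column, and the span value are all assumptions about one trial's balance data):

```r
# Step 1: smooth the raw balance trace.
fit <- loess(mass.g ~ time.s, data = d, span = 0.3)   # span would need tuning

# Step 2: predict biomass at the original time points in the flaming period.
flaming <- d[d$stage == "flaming", ]                  # hypothetical stage column
flaming$smooth <- predict(fit, newdata = flaming)

# Step 3: regress smoothed biomass on time; the slope is the loss rate (g/s).
loss_rate <- coef(lm(smooth ~ time.s, data = flaming))["time.s"]
```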

And my questions are:

  1. Should I use only data from the flaming stage for the prediction and the regression? I did this because 1) the burn behaves differently in the flaming and smoldering stages in terms of biomass loss rate, and 2) the biomass loss rate during flaming should be the best proxy for intensity.
  2. If all data points are included, a polynomial regression would work better than a linear one. So maybe find the steepest slope on the fitted smooth curve? But then it is only the biomass loss rate at a single point.
  3. How should I evaluate whether this approach is better than @dschwilk's method? Or does it work at all?

Any thoughts or advice on this would be appreciated! @dschwilk @efwaring @lindbeem2791 @FieryMedusa
