mpascariu / mortalityforecast
Standard tools to compare and evaluate mortality forecasting methods
License: GNU General Public License v3.0
Everybody is talking about robustness, but nobody is checking, measuring, or publishing anything about it. What is this animal called "robustness"?
The package contains an attempt to measure it in the same way accuracy is assessed. We understand a model to be robust if its estimates and generated results remain fairly stable following gradual changes in the input data or model specification.
What if the results change too little following a reasonable change in the input data? Is such a model more robust than a model that changes in a "proportional" manner? What if the changes are too large? When is "too large" too large?
We are interested in learning about the best practices and implementing them here. Help is welcome.
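One way such a measure could be sketched: refit the model on slightly perturbed input and relate the change in the forecasts to the size of the perturbation. The toy Lee-Carter fit via SVD below, the synthetic data, and the robustness ratio are all my assumptions for illustration, not the package's implementation.

```r
set.seed(1)
ages  <- 0:4
years <- 1:20
# Synthetic log death rates: age profile + downward time trend + noise
lmx <- outer(-6 + 0.5 * ages, rep(1, length(years))) +
       outer(rep(1, length(ages)), -0.02 * years) +
       matrix(rnorm(100, sd = 0.01), 5, 20)

# Minimal Lee-Carter fit via SVD (base R only)
fit_lc <- function(lmx) {
  ax <- rowMeans(lmx)
  s  <- svd(lmx - ax)
  bx <- s$u[, 1] / sum(s$u[, 1])
  kt <- s$d[1] * s$v[, 1] * sum(s$u[, 1])
  list(ax = ax, bx = bx, kt = kt)
}

# Forecast kt with a random walk with drift
forecast_kt <- function(kt, h) {
  drift <- mean(diff(kt))
  kt[length(kt)] + drift * seq_len(h)
}

base <- forecast_kt(fit_lc(lmx)$kt, h = 10)
# Perturb the input by small noise (sd = 0.01) and refit
pert <- forecast_kt(fit_lc(lmx + matrix(rnorm(100, sd = 0.01), 5, 20))$kt, h = 10)

# One candidate robustness ratio: mean output change per unit input change
robustness <- mean(abs(pert - base)) / 0.01
```

A small ratio would indicate a model that barely reacts to the data (the "too little" case above); a very large one, an unstable model. Where to draw the line is exactly the open question.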
Hi Marius,
I am getting an error message that I simply cannot figure out how to solve:
Error in model.LeeCarter(data = mx.data, x = x, y = y, verbose = FALSE) :
The input data contains death rates equal to zero at various ages.
I am staying close to your example - on data from the Greenlandic Statbank - and would very much appreciate it if you could give me a push in the right direction.
Best regards
Lars
library(tidyverse)
library(janitor)            # clean_names() comes from janitor, not tidyverse
library(MortalityForecast)

# Death probabilities (qx) from the Greenlandic Statbank
bexbbdtb <- read.delim2("https://bank.stat.gl:443/sq/d512a451-20e3-4d64-8132-b125095f9468.relational_table") %>%
  clean_names() %>%
  select(time, sex = gender, age, measure, value = life_expectancy) %>%
  filter(measure == "qx" & age <= 90) %>%
  mutate(value = round(as.numeric(value), 4))

x <- 0:90       # ages
y <- 1999:2021  # years
h <- 20         # forecasting horizon

# Males: ages in rows, years in columns
D <- bexbbdtb %>%
  filter(sex == "m") %>%
  pivot_wider(names_from = time, values_from = value) %>%
  select(-sex, -age, -measure) %>%
  as.matrix()
rownames(D) <- 0:90

# Females
B <- bexbbdtb %>%
  filter(sex == "f") %>%
  pivot_wider(names_from = time, values_from = value) %>%
  select(-sex, -measure, -age) %>%
  as.matrix()
rownames(B) <- 0:90

M <- do.MortalityModels(data = D,
                        data.B = B,
                        x = x,
                        y = y,
                        data.in = "qx",
                        models = "LeeCarter")
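One hypothetical workaround (my assumption, not an official fix from the package): the Lee-Carter model is fitted on log-rates, so any qx equal to zero makes the logarithm undefined and triggers the error above. Replacing exact zeros with a small positive value before the fit avoids this; `replace_zero_rates()` below is an illustrative helper, not part of the package.

```r
# Replace exact zeros in a rate matrix with a small positive value
# (here: half the smallest observed positive rate) -- an assumption,
# not the package's own handling of zero rates.
replace_zero_rates <- function(Q) {
  eps <- min(Q[Q > 0], na.rm = TRUE) / 2
  Q[Q == 0] <- eps
  Q
}

Q <- matrix(c(0, 0.001, 0.002, 0), nrow = 2)  # toy matrix with zeros
Qf <- replace_zero_rates(Q)                   # zeros become 0.0005
```

In the script above one would then run `D <- replace_zero_rates(D)` and `B <- replace_zero_rates(B)` before calling `do.MortalityModels()`.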
Dear Mr Pascariu, this is Kenzi Lamberto again.
I've read review_1_response and understand that the moments µn used in the MaxEnt method are the observed empirical moments from the data, as in the equation below.
However, I'm still having a hard time understanding the steps after obtaining the moments: putting them into the Legendre transformation, differentiating with respect to the Lagrange multipliers, and setting the result equal to zero, even though I have tried to read Mead and Papanicolaou (1984) and the steps in the find_density.R code.
May I ask you to explain the steps written in find_density.R, or point me to any references that show how the differentiation of the Legendre transformation is done?
Thank you very much for your time and attention, Mr Pascariu.
Sincerely,
Kenzi Lamberto
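For what it's worth, the Mead and Papanicolaou (1984) step can be sketched numerically. This is my own illustration, not the package's find_density.R: the MaxEnt density with moment constraints µn has the form p(x) = exp(-Σn λn xⁿ), and the multipliers λn are found by minimizing the dual (Legendre-transform) potential Γ(λ) = log ∫ exp(-Σn λn xⁿ) dx + Σn λn µn, whose partial derivative ∂Γ/∂λn = µn − E_p[xⁿ] vanishes exactly when the fitted moments match the observed ones.

```r
# Hedged sketch of the MaxEnt dual minimization (not the package's code).
xg <- seq(0, 1, length.out = 501)  # quadrature grid on [0, 1]
mu <- c(0.5, 1/3)                  # observed moments E[x], E[x^2] (those of U(0,1))

# Dual potential Gamma(lambda); mean() over the grid approximates the integral
gamma_pot <- function(lam) {
  g <- exp(-(lam[1] * xg + lam[2] * xg^2))
  log(mean(g)) + sum(lam * mu)
}

# Minimizing Gamma is equivalent to setting its gradient mu_n - E[x^n] to zero
opt <- optim(c(0, 0), gamma_pot, method = "BFGS")
lam <- opt$par

# Recover and normalize the density, then check the fitted moments
p <- exp(-(lam[1] * xg + lam[2] * xg^2))
p <- p / mean(p)
fitted_mu <- c(mean(p * xg), mean(p * xg^2))  # should be close to mu
```

So the "differentiation and set equal to zero" step is just the stationarity condition of this convex minimization; a Newton or quasi-Newton solver on Γ(λ) reproduces it.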
Friedman Rank Sum Test
Page Rank Sum Test
etc.
The MaxEnt algorithm implemented in find.density() shows convergence problems in some cases.
For example, I've noticed that if one extrapolates a series of raw moments too far into the future (> 50 time steps) with a multivariate random walk with drift, the likelihood of a failure to converge and estimate a density out of those moments is high.
I think this has to do with the way one extrapolates the moments and keeps them within the adequate range.
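One possible diagnostic (my assumption, not code from the package): a raw moment sequence m0, m1, …, m2k can only come from a nonnegative density if its Hankel matrix H[i, j] = m(i+j) is positive semidefinite. Extrapolated moments that leave this set cannot yield a density, which would explain the convergence failures.

```r
# Check whether a moment sequence m = c(m0, m1, ..., m2k) is a valid
# (Hamburger) moment sequence via the Hankel positive-semidefiniteness test.
# Illustrative helper, not part of the package.
is_valid_moments <- function(m) {
  k <- (length(m) - 1) / 2
  H <- outer(0:k, 0:k, function(i, j) m[i + j + 1])
  all(eigen(H, symmetric = TRUE, only.values = TRUE)$values >= -1e-10)
}

is_valid_moments(c(1, 1/2, 1/3, 1/4, 1/5))  # moments of U(0,1): TRUE
is_valid_moments(c(1, 0.5, 0.1))            # inconsistent sequence: FALSE
```

Screening the extrapolated moments with a check like this before calling find.density() could turn a cryptic convergence failure into an explicit error.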
The fix should come as follows:
Consider adding the STAD model in the package.
Issue: the function should allow the estimation of the model starting from a matrix of dx or mx only. As of now, the code that I have from Ugo allows the estimation only if Dx and Ex are available. For a model that extrapolates distributions this is not preferable.
In the generated output objects there are too many lists within lists within lists, plus arrays and matrices. Even for me (the creator of the package) it is difficult to read, remember, and make sense of the results after a certain point.
A stable version of the package should include only functions and methods that return tidy datasets.
Tidy datasets are easy to manipulate, model, and visualize, and have a specific structure: each variable is a column, each observation is a row, and each type of observational unit is a table (Wickham, 2014).
This should be easy considering what we have learned from B. Guillaume.
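A minimal sketch of the kind of flattening this implies. The nested object `res` below (model → year → qx-by-age) is a hypothetical stand-in for the package's output, not its actual structure:

```r
# Hypothetical nested result object: model -> year -> named vector of qx by age
res <- list(LeeCarter = list(`2022` = c(`0` = 0.004, `1` = 0.001)),
            STAD      = list(`2022` = c(`0` = 0.005, `1` = 0.002)))

# Flatten into one tidy data frame: one row per (model, year, age) observation
tidy <- do.call(rbind, lapply(names(res), function(m)
  do.call(rbind, lapply(names(res[[m]]), function(y)
    data.frame(model = m,
               year  = as.integer(y),
               age   = as.integer(names(res[[m]][[y]])),
               qx    = unname(res[[m]][[y]]))))))
```

Every accessor in a stable version could return such a frame directly, so users never have to know the internal nesting.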
Add the Double-Gap model in the package by merging the MortalityGaps R package into this one.
The evalAccuracy method is slow, especially when used with multiple back-testing objects. More efficient code is needed.
Good evening Mr Pascariu. My name is Kenzi Lamberto from Indonesia.
I am trying to analyze Japanese and Taiwanese mortality using this package for my undergraduate thesis. I ran into some problems while using HMD Taiwan female data in some periods with the BackTesting and BBackTesting functions from the MortalityForecast package.
For the BackTesting, the errors are as follows.
For the BBackTesting, the error is:
Error in { : task 1 failed - 'could not find function "do.BackTesting"'
May I know what went wrong and what should I do to fix these? Thank you for your attention and time.
Sincerely,
Kenzi Lamberto