uupharmacometrics / psn

Perl-Speaks-NONMEM

Home Page: https://uupharmacometrics.github.io/PsN

License: GNU General Public License v2.0


psn's Introduction

************************
Perl-speaks-NONMEM
************************

Perl-speaks-NONMEM (PsN) is a collection of Perl modules and programs that aid
in the development of non-linear mixed-effects models using NONMEM. The
functionality ranges from solutions to simple tasks, such as extracting
parameter estimates from output files and subsetting or resampling data files,
to advanced computer-intensive statistical methods. PsN includes stand-alone
tools for the end-user as well as development libraries for method developers.

Please find more information on the webpage: https://uupharmacometrics.github.io/PsN 

************************
Installation
************************

See https://uupharmacometrics.github.io/PsN/install.html for instructions on how to install PsN on different platforms.

************************
Documentation
************************

All documentation can be found at the PsN website:
https://uupharmacometrics.github.io/PsN

************************
Testing 
************************

To install and run the PsN test suite, see developers_guide.pdf
on the homepage.

************************
Licensing 
************************

This program is free software: you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation, either version 2 of the License, or
(at your option) any later version.

Please find the GPL version 2 in the file called LICENSE

************************
Authors 
************************

Please see the file called AUTHORS

psn's People

Contributors

agnetaekholm, andreasnoack, billdenney, kharling, pmxhh, rikardn, sebastianueckert, svetlanafreiberga, vrognas, yngman


psn's Issues

Likelihood observations not supported by FREM

FREM should set F_FLAG=0 for FREM observations whenever F_FLAG is set in the model (e.g. for the M3 method for BLQ data). If the LIKELIHOOD option is used in $EST (e.g. for categorical data), the same applies, but the original observations also need F_FLAG=1 and LIKELIHOOD unset.

Breaks: Usage of M3 method, categorical data, etc. with FREM.
Workaround: Edit crashed model_2.mod manually, delete results and re-start FREM to continue.

estimation.pm: $EST without METHOD is not always FO

sub get_method
{
	my $self = shift;

	my $method='0'; #this is the default, FO
	my @options = defined($self->options) ? @{$self->options} : ();
	foreach my $opt (@options){
		if (defined $opt and ((uc($opt->name) eq 'METHOD') || index('METHOD', uc($opt->name)) == 0)){
			$method = uc($opt->value) if (defined $opt->value and length($opt->value)>0);
		}
	}
	return $method;
}

This is only FO if there is a lone $EST record in the problem. Otherwise, an $EST record might lack METHOD because a previous $EST record has METHOD, which is inherited (as other options are).
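The inheritance could be resolved by scanning the problem's $EST records in order. A minimal sketch of that logic (Python, with a hypothetical list-of-options representation, not PsN's actual data structures):

```python
def resolve_methods(est_records):
    """For each $EST record (a list of (name, value) option tuples),
    return the effective METHOD, inheriting from earlier records."""
    methods = []
    current = "0"  # NONMEM default when nothing is inherited: FO
    for options in est_records:
        for name, value in options:
            # METHOD may be abbreviated, e.g. METH=COND
            if "METHOD".startswith(name.upper()) and value:
                current = value.upper()
        methods.append(current)
    return methods

# The second $EST lacks METHOD but inherits CONDITIONAL from the first.
runs = [[("METH", "COND"), ("INTER", None)], [("MAXEVAL", "9999")]]
methods = resolve_methods(runs)
```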

FREM: Linearized FREM cannot handle datasets wider than 1000 character columns

Updates to linearize have made it possible to format the dataset so that a specific format specifier can be added to $DATA, making very big datasets possible. This does not work in FREM, since the format specifier is taken out of $DATA and the dataset is mangled into a new csv dataset that is not compatible with the format.

cdd calculates garbage results when one or more individuals lack observations

cdd will calculate garbage dOFV results if an individual has no observations in the dataset, or if all observations for an individual are filtered out by an IGNORE or ACCEPT statement. Most often this results in multiple negative dOFV values, but not always. The results of such a cdd run are not to be trusted.

4.7.0 through 4.9.0 Test fails: unit/setup.t

When installing PsN 4.7.0 on Ubuntu 16.04 in docker, I get the following error when running prove -r unit.

I'm currently running the system tests (prove -r system), and they're currently passing (run is in progress).

unit/annotation.t .......................... ok
unit/annotation/tag.t ...................... ok
unit/array.t ............................... ok
unit/bin/psn.t ............................. ok
unit/binning.t ............................. ok
unit/boxcox.t .............................. ok
unit/citations.t ........................... ok
unit/code_record.t ......................... ok
unit/common_options.t ...................... ok
unit/covmat.t .............................. ok
unit/data.t ................................ ok
unit/data_bootstrap.t ...................... ok
unit/data_cdd.t ............................ ok
unit/data_column_parsing.t ................. ok
unit/data_extra.t .......................... ok
unit/get_filtered_values.t ................. ok
unit/individual.t .......................... ok
unit/influential_individuals.t ............. ok
unit/init_option.t ......................... ok
unit/init_record.t ......................... ok
unit/input_checking.t ...................... ok
unit/inputrecord.t ......................... ok
unit/linear_algebra.t ...................... ok
unit/math.t ................................ ok
unit/model.t ............................... ok
unit/model_transformations.t ............... ok
unit/msfi.t ................................ ok
unit/multnorm.t ............................ ok
unit/nmtable.t ............................. ok
unit/nmtablefile.t ......................... ok
unit/nonmemrun.t ........................... ok
unit/nonmemrun/lsf.t ....................... ok
unit/nonmemrun/sge.t ....................... ok
unit/nonmemrun/slurm.t ..................... ok
unit/npde.t ................................ ok
unit/option.t .............................. ok
unit/output.t .............................. ok
unit/parse_output.t ........................ ok
unit/problem.t ............................. ok
unit/psn.t ................................. ok
unit/record.t .............................. ok
unit/record_order.t ........................ ok
unit/relmultnorm.t ......................... ok

#   Failed test 'default version config file /opt/PsN/4.7.0/PsN_4_7_0/psn.conf'
#   at unit/setup.t line 24.
#          got: undef
#     expected: '/opt/PsN/4.7.0/PsN_4_7_0/psn.conf'

#   Failed test 'default version lib dir exists: '
#   at unit/setup.t line 26.
#          got: undef
#     expected: '1'

#   Failed test 'default version lib dir /opt/PsN/4.7.0/PsN_4_7_0'
#   at unit/setup.t line 29.
#          got: undef
#     expected: '/opt/PsN/4.7.0/PsN_4_7_0'

#   Failed test 'default version base_lib_dir is defined: '
#   at unit/setup.t line 31.
#          got: ''
#     expected: '1'

#   Failed test 'default version base_lib_dir positive length: '
#   at unit/setup.t line 32.
#          got: ''
#     expected: '1'

#   Failed test 'default version base_lib_dir exists: '
#   at unit/setup.t line 33.
#          got: undef
#     expected: '1'

#   Failed test 'default version bin dir '
#   at unit/setup.t line 41.
#          got: ''
#     expected: '1'

#   Failed test 'default version bin_dir exists: '
#   at unit/setup.t line 42.
#          got: undef
#     expected: '1'

#   Failed test 'default version number 4.7.0'
#   at unit/setup.t line 43.
#          got: undef
#     expected: '4.7.0'

#   Failed test 'get_new_installation_defaults: old config file is /opt/PsN/4.7.0/PsN_4_7_0/psn.conf'
#   at unit/setup.t line 48.
#          got: undef
#     expected: '/opt/PsN/4.7.0/PsN_4_7_0/psn.conf'

#   Failed test 'get_new_installation_defaults: new lib dir /usr/local/share/perl/5.22.1/PsN_10_2_4'
#   at unit/setup.t line 50.
#          got: '/usr/local/share/perl/5.22.1/PsN_10_2_4'
#     expected: '/PsN_10_2_4'

#   Failed test 'get_new_installation_defaults: new base lib dir /usr/local/share/perl/5.22.1'
#   at unit/setup.t line 51.
#          got: '/usr/local/share/perl/5.22.1'
#     expected: undef

#   Failed test 'get_new_installation_defaults: new bin dir /usr/local/bin'
#   at unit/setup.t line 52.
#          got: '/usr/local/bin'
#     expected: undef

#   Failed test 'get_new_installation_defaults: old default version number 4.7.0'
#   at unit/setup.t line 53.
#          got: undef
#     expected: '4.7.0'
# Looks like you failed 14 tests of 20.
unit/setup.t ...............................
Dubious, test returned 14 (wstat 3584, 0xe00)
Failed 14/20 subtests
unit/simulation.t .......................... ok
unit/so/parsers/bootstrap.t ................ ok
unit/so/parsers/nmoutput.t ................. ok
unit/so/parsers/psn.t ...................... ok
unit/so/soblock/estimation/ofmeasures.t .... ok
unit/so/soblock/rawresults.t ............... ok
unit/so/soblock/rawresults/datafile.t ...... ok
unit/so/soblock/taskinformation.t .......... ok
unit/so/soblock/taskinformation/message.t .. ok
unit/so/soblock/taskinformation/runtime.t .. ok
unit/so/table.t ............................ ok
unit/so/xml.t .............................. ok
unit/table.t ............................... ok
unit/theta.t ............................... ok
unit/theta_option.t ........................ ok
unit/tool/benchmark.t ...................... ok
unit/tool/bootstrap.t ...................... ok
unit/tool/cdd.t ............................ ok
unit/tool/frem.t ........................... ok
unit/tool/lasso.t .......................... ok
unit/tool/mcmp.t ........................... ok
unit/tool/modelfit.t ....................... ok
unit/tool/nonparametric.t .................. ok
unit/tool/npc.t ............................ ok
unit/tool/precond.t ........................ ok
unit/tool/proseval.t ....................... ok
unit/tool/resmod.t ......................... ok
unit/tool/scm.t ............................ ok
unit/tool/simeval.t ........................ ok
unit/tool/sir.t ............................ ok
unit/tool/sse.t ............................ ok
unit/tool/tool.t ........................... ok
unit/tool/vpc.t ............................ ok
unit/update_inits.t ........................ ok
unit/utils/file.t .......................... ok
unit/utils/rplots.t ........................ ok

Test Summary Report
-------------------
unit/setup.t                             (Wstat: 3584 Tests: 20 Failed: 14)
  Failed tests:  1-6, 8-15
  Non-zero exit status: 14
Files=80, Tests=7302, 54 wallclock secs ( 0.51 usr  0.05 sys + 51.44 cusr  1.70 csys = 53.70 CPU)
Result: FAIL

Prevent covariate state `none` from running in SCM

I am running an SCM in PsN 4.7.0 and have in my .scm file (see below) overwritten the exponential and power states for one of my covariates (AGE in this example) to be none (i.e. no covariate effect), as I would still like to test the power and exponential models on another continuous covariate (WT in this example).

continuous_covariates=AGE,WT

[test_relations]
CL=AGE,WT
V=AGE,WT

[valid_states]
continuous=1,2,4,5

[code]
*:AGE-4=none
*:AGE-5=none

But it seems that, despite there being no covariate effect (i.e. none), the SCM will run these models anyway, which (unless I am missing a point here) only slows down the SCM with meaningless runs.

postfrem: commas in labels breaks plotting

Labels of OMEGAs in the base model are used as "parameter names" in FREM/postfrem. Commas in these labels (e.g. from a comment appended to a label) will break

pardata <- read.csv(pardatafile,header = T, as.is = T)

in PsN_frem_plots.R since two columns are parsed where only one is present.

Suggestion: Don't store parameter names as a delimiter-separated value file since it's only a vector.
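A minimal illustration of the failure mode and of why one-value-per-line sidesteps it (Python; the labels are made up for the example):

```python
import csv
import io

labels = ["IIV CL", "IIV V, log scale"]  # second label contains a comma

# Naive unquoted CSV: the embedded comma splits one label into two fields.
naive = ",".join(labels)
fields = next(csv.reader(io.StringIO(naive)))
# fields now has 3 entries where only 2 labels exist

# One label per line avoids delimiter collisions entirely.
recovered = "\n".join(labels).split("\n")
```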

Cannot reproduce BCa confidence intervals in Bootstrap tool

I have problems reproducing the bca.confidence.intervals from the Bootstrap tool.

The jackknife.means values that are reported seem to be incorrect. For example, when running the bootstrap on CONTROL5 without IIV, I get the following jackknife results for THETA1 (KA):

> bs.jackknife$THETA1
 [1] 1.46623 1.44630 1.43295 1.53039 1.49318 1.51285 1.57620
 [8] 1.49994 1.38698 1.58814 1.37259 1.60716

The mean is 1.492743. However, PsN reports 1.49318. Coincidentally, for the cases I checked, the PsN-reported mean was each time one of the jackknifed parameters (the 5th in this case).

Could it be that an incorrect jackknife mean enters the BCa formula, resulting in an erroneous acceleration factor?

The command I used to obtain the results above was:

BOOTSTRAP CONTROL5_noiiv.mdl -rplots=1 -template_file_rplots=bootstrap_default_ORIG460.R -samples=10 -threads=7 -seed=1234321 -bca -dofv -clean=3
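For reference, the jackknife mean and the BCa acceleration factor can be recomputed independently from the replicates above. A sketch using the standard BCa acceleration formula (plain Python, not PsN's internal code):

```python
def jackknife_mean_and_accel(jack):
    """Mean of the jackknife replicates and the standard BCa
    acceleration a = sum (m - x_i)^3 / (6 * (sum (m - x_i)^2)^1.5)."""
    m = sum(jack) / len(jack)
    num = sum((m - x) ** 3 for x in jack)
    den = 6.0 * sum((m - x) ** 2 for x in jack) ** 1.5
    return m, num / den

theta1 = [1.46623, 1.44630, 1.43295, 1.53039, 1.49318, 1.51285,
          1.57620, 1.49994, 1.38698, 1.58814, 1.37259, 1.60716]
m, a = jackknife_mean_and_accel(theta1)
# m is approximately 1.492743, not the reported 1.49318
```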

Feature Request: Naming Off-Diagonal OMEGA Values

Is there a way to automatically name the off-diagonal OMEGA values for a $OMEGA block like the one below:

$OMEGA BLOCK(2)
0.1      ; IIV CL
0.01 0.1 ; IIV V1

In that scenario, the name for OMEGA(1,1) becomes "IIV CL", OMEGA(2,1) stays "OMEGA(2,1)", and OMEGA(2,2) becomes "IIV V1". In runrecord, sumo, and probably other places, it is not possible to understand the link between off-diagonal OMEGA values and the underlying parameters without going back to the model, and if the ordering of the OMEGAs changes during modeling, the meaning of the off-diagonal elements changes, too.

Ideally, each off-diagonal would be referred to by the names of the pair of diagonals that it links. For the example above, that would mean OMEGA(2,1) becomes "OMEGA(IIV V1,IIV CL)".
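A sketch of the proposed naming scheme (Python, hypothetical helper): derive each off-diagonal name from the labels of the two diagonal elements it links.

```python
def omega_names(diag_labels):
    """Given labels for the diagonal OMEGA elements of one block,
    return a dict naming every element of the lower triangle."""
    names = {}
    for i, li in enumerate(diag_labels, start=1):
        for j in range(1, i):
            lj = diag_labels[j - 1]
            names[(i, j)] = f"OMEGA({li},{lj})"
        names[(i, i)] = li
    return names

# For the BLOCK(2) example: OMEGA(2,1) links "IIV V1" and "IIV CL".
names = omega_names(["IIV CL", "IIV V1"])
```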

Ensure Similar Data with dOFV

I'm testing various methods of BLQ handling with a model; in run1 there are 1109 records, and in run2 there are 900 records due to excluding BLQ data. The OFVs are not comparable between these two runs since they are based on different datasets. While a full dataset comparison is probably outside the scope of PsN, it would be nice to notify the user that dOFV is not meaningful when different datasets are used for estimation.

Specifically, if the number of individuals and the number of observations are not both equal, a note should be shown in sumo and runrecord (and maybe other tools) indicating that the dOFV should be interpreted with caution.

Examples below

Run 1:

Objective function value: 2322.4144

Condition number: 85.07

Number of observation records: 1109
Number of individuals: 72

Run 2:

Objective function value: 2218.5880
Delta ofv (from run1): -103.8264

Condition number: 79.39

Number of observation records: 900
Number of individuals: 72
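The proposed check could be as simple as comparing the two counts before reporting dOFV. A sketch (Python, with hypothetical field names):

```python
def dofv_note(run_a, run_b):
    """Return a caution note if the runs were estimated on visibly
    different data, else None. Each run is a dict with 'nobs'
    (observation records) and 'nind' (individuals)."""
    if run_a["nobs"] != run_b["nobs"] or run_a["nind"] != run_b["nind"]:
        return ("NOTE: number of observations and/or individuals differ "
                "between runs; dOFV should be interpreted with caution.")
    return None

# The run1/run2 example above would trigger the note.
note = dofv_note({"nobs": 1109, "nind": 72}, {"nobs": 900, "nind": 72})
```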

Add PsN command to the mod file

Just a thought I was having: to increase reproducibility, it would be nice to have a way to add the PsN command and/or a couple of PsN options in a dedicated header on top of the code that would be evaluated.

;; PsN cmd: execute-4.7.0 -nm_version=7_30 <run>
$PROBLEM Test PsN

A generic PsN function could then be created to read commands from model files, e.g.:
run run001.mod

This way the .lst files would not only contain the NONMEM results but also the PsN version and options used to get them.
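Parsing such a header would be straightforward. A minimal sketch (Python), assuming the ";; PsN cmd:" tag from the example above:

```python
def read_psn_command(model_text):
    """Extract the PsN command from a ';; PsN cmd:' header line,
    or return None if the model file has no such header."""
    for line in model_text.splitlines():
        if line.startswith(";; PsN cmd:"):
            return line.split(":", 1)[1].strip()
    return None

model = ";; PsN cmd: execute-4.7.0 -nm_version=7_30 <run>\n$PROBLEM Test PsN\n"
cmd = read_psn_command(model)
```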

sumo Reports Final Zero Gradients when They Don't Exist

When running sumo on a model, I see a note that there are final zero gradients. When I look in the .lst file, there are none on the final row.

This is with NONMEM 7.4.2 and PsN 4.7.0.

$ dockpsn sumo run7.lst
-----------------------------------------------------------------------

run7.lst

Successful minimization                                           [    OK   ]
No rounding errors                                                [    OK   ]
Zero gradients found 65 times                                     [ WARNING ]
Final zero gradients                                              [  ERROR  ]
Hessian reset 3 times                                             [ WARNING ]
No parameter near boundary                                        [    OK   ]
Covariance step                                                   [    OK   ]

Condition number                                                  [    OK   ]
Correlations                                                      [    OK   ]

Total run time for model (hours:min:sec):                  0:03:46
Estimation time for subproblem, sum over $EST (seconds):   208.72
Covariance time for subproblem, sum over $EST (seconds):   7.19

Objective function value: 9733.1820
Delta ofv (from run3): 706.5966

Condition number: 131.7

Number of observation records: 1004
Number of individuals: 30

The minimization success in the .lst file:

0ITERATION NO.:  189    OBJECTIVE VALUE:   9733.18195748821        NO. OF FUNC. EVALS.:   0
 CUMULATIVE NO. OF FUNC. EVALS.:     3776
 NPARAMETR:  4.6918E-01  1.3086E+01 -3.6413E-02 -1.8613E+00  3.5331E+00  4.3722E+00  2.9716E+00  2.6281E+00  6.4750E-02  1.9373E-01
             1.7766E+00
 PARAMETER: -2.3432E-02  3.8884E+00 -7.2826E-03 -3.7226E-01  2.4373E-01  4.5419E-01 -4.6723E-01 -2.2732E-01 -9.2204E-01  8.4072E-02
             1.5386E+00
 GRADIENT:  -4.9890E-03  4.4135E-03  1.1206E+00  8.8543E-02 -1.0265E-02  2.7523E-02  7.7408E-03 -5.1899E-02  1.4544E-03  6.1830E-04
            -5.6703E-03

 #TERM:
0MINIMIZATION SUCCESSFUL
 NO. OF FUNCTION EVALUATIONS USED:     3776
 NO. OF SIG. DIGITS IN FINAL EST.:  3.3

 ETABAR IS THE ARITHMETIC MEAN OF THE ETA-ESTIMATES,
 AND THE P-VALUE IS GIVEN FOR THE NULL HYPOTHESIS THAT THE TRUE MEAN IS 0.

 ETABAR:         0.0000E+00  0.0000E+00 -1.3807E-03  1.8004E-02  0.0000E+00  0.0000E+00  1.1500E-01  0.0000E+00  0.0000E+00
 SE:             0.0000E+00  0.0000E+00  3.8617E-02  7.7973E-02  0.0000E+00  0.0000E+00  2.2609E-01  0.0000E+00  0.0000E+00
 N:                      30          30          30          30          30          30          30          30          30

 P VAL.:         1.0000E+00  1.0000E+00  9.7148E-01  8.1739E-01  1.0000E+00  1.0000E+00  6.1100E-01  1.0000E+00  1.0000E+00

 ETASHRINKSD(%)  1.0000E+02  1.0000E+02  1.5457E+01  1.3107E+00  1.0000E+02  1.0000E+02  5.5045E+00  1.0000E+02  1.0000E+02
 ETASHRINKVR(%)  1.0000E+02  1.0000E+02  2.8524E+01  2.6043E+00  1.0000E+02  1.0000E+02  1.0706E+01  1.0000E+02  1.0000E+02
 EBVSHRINKSD(%)  0.0000E+00  0.0000E+00  1.6554E+01  3.9662E+00  0.0000E+00  0.0000E+00  5.1856E+00  0.0000E+00  0.0000E+00
 EBVSHRINKVR(%)  0.0000E+00  0.0000E+00  3.0367E+01  7.7751E+00  0.0000E+00  0.0000E+00  1.0102E+01  0.0000E+00  0.0000E+00
 EPSSHRINKSD(%)  2.4916E+00
 EPSSHRINKVR(%)  4.9211E+00


 TOTAL DATA POINTS NORMALLY DISTRIBUTED (N):         1004
 N*LOG(2PI) CONSTANT TO OBJECTIVE FUNCTION:    1845.22857467498
 OBJECTIVE FUNCTION VALUE WITHOUT CONSTANT:    9733.18195748821
 OBJECTIVE FUNCTION VALUE WITH CONSTANT:       11578.4105321632
 REPORTED OBJECTIVE FUNCTION DOES NOT CONTAIN CONSTANT

 TOTAL EFFECTIVE ETAS (NIND*NETA):                            90

 #TERE:
 Elapsed estimation  time in seconds:   208.72
 Elapsed covariance  time in seconds:     7.19
 Elapsed postprocess time in seconds:     0.14
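A correct check would flag only zeros in the GRADIENT vector of the final iteration. A sketch of that logic (Python, with the .lst parsing already done; not PsN's code):

```python
def final_zero_gradients(gradients_per_iteration):
    """Count entries that are exactly zero in the GRADIENT vector
    of the final iteration only."""
    final = gradients_per_iteration[-1]
    return sum(1 for g in final if g == 0.0)

# The final GRADIENT vector from the .lst excerpt above:
final_grad = [-4.9890e-03, 4.4135e-03, 1.1206e+00, 8.8543e-02,
              -1.0265e-02, 2.7523e-02, 7.7408e-03, -5.1899e-02,
              1.4544e-03, 6.1830e-04, -5.6703e-03]
n_zero = final_zero_gradients([final_grad])
# No entry is zero, so no "final zero gradients" error should be raised.
```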

ID Synonym Error with npc

I just got the following errors:

It is forbidden to use a synonym for ID in the $INPUT record. at constructor tool::npc::new (defined at /opt/PsN/4.8.0/PsN_4_8_0/tool/npc.pm line 5587) line 2443.
It is forbidden to use a synonym for MDV in the $INPUT record. at constructor tool::npc::new (defined at /opt/PsN/4.8.0/PsN_4_8_0/tool/npc.pm line 5587) line 2443.

Can PsN detect what the ID and MDV columns are and use those throughout?

Linearized SCM derivatives table format

When running a linearized SCM, PsN stores the derivatives in derivatives_covariates.dta; to generate this file, the option FORMAT=s1PE10.4 is used in derivatives.mod. I understand that this format may have been used to reduce each column by 1 character, but in the presence of a minus sign the column alignment is not respected. For example

G071
-1.0351E-04
8.9329E-06

I am not sure whether it has any consequence for the NONMEM import, but maybe the standard TABLE FORMAT=s1PE11.4 could be used instead:

G071
-1.0351E-04
 8.9329E-06
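The width problem can be demonstrated directly: a 10-character field (as in s1PE10.4) cannot hold a negative mantissa with a two-digit exponent, which needs 11 characters. A Python analogue (Python's E format places one digit before the decimal point, like Fortran's 1PE):

```python
values = [-1.0351e-04, 8.9329e-06]

# Width 10 (analogue of s1PE10.4): the negative value needs 11
# characters, so it overflows the field and alignment is lost.
w10 = [f"{v:10.4E}" for v in values]

# Width 11 (analogue of s1PE11.4): both values fit, right-aligned.
w11 = [f"{v:11.4E}" for v in values]
```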

model_subdir option does not work with update

Updating initial estimates with -model_subdir activated works as expected, i.e., for
update_inits run103.mod -out=run104.mod -model_subdir
values are retrieved from run103/run103.lst.

However, using the synonym command
update run103.mod -out=run104.mod -model_subdir
does not appear to look for the lst file in the model subdirectory.

PsN can't find the NONMEM control stream when modeling single-subject PK

I am trying to estimate single subject PK parameters using the following control streams
[run100.mod]

$PROBLEM    MODELING SINGLE SUBJECT
$INPUT      ID TRT SEQ PERIOD TIME TAFD DV AMT AGE WT HT BMI EVID MDV CMT
$DATA       data.csv RECS=ID IGNORE=@ 
$SUBROUTINE ADVAN4 TRANS4
$PK
IF(TRT.EQ.1) THEN
  KA=THETA(1)
  CL=THETA(2)
  V2=THETA(3)
  Q=THETA(4)
  V3=THETA(5)
  ALAG1=THETA(6)
ELSE
  KA=THETA(7)
  CL=THETA(8)
  V2=THETA(9)
  Q=THETA(10)
  V3=THETA(11) 
  ALAG1=THETA(12)
ENDIF
 
  K=CL/V2
  K23=Q/V2
  K32=Q/V3
  S2=V2
  $THETA  
  (0,0.6)     ; KA 1
  (0,11)      ; CL 1 
  (0,135)     ; V2 1
  (0,14)      ; Q 1
  (0,90)      ; V3 1
  (0,0.4)     ; ALAG 1
  (0,0.6)     ; KA 2
  (0,11)      ; CL 2 
  (0,135)     ; V2 2
  (0,14)      ; Q 2
  (0,90)      ; V3 2
  (0,0.4)     ; ALAG 2
  $OMEGA  0.014
  $ERROR           
  IPRED=F
  Y= IPRED+ERR(1)
  $ESTIMATION PRINT=5 MAXEVALS=9990
  $TABLE      ID TRT SEQ PERIOD TIME TAFD AGE WT HT BMI EVID MDV CMT 
                    KA CL V2 Q V3 ALAG1 IPRED NOPRINT ONEHEADER FILE=mytab100;
  INCLUDE ../../run101.mod 35

[run101.mod]

$PROBLEM    MODELING SINGLE SUBJECT
$INPUT      ID TRT SEQ PERIOD TIME TAFD DV AMT AGE WT HT BMI EVID MDV CMT
$DATA       data.csv RECS=ID NOREWIND IGNORE=@ 
$THETA  
(0,0.6)     ; KA 1
(0,11)      ; CL 1 
(0,135)     ; V2 1
(0,14)      ; Q 1
(0,90)      ; V3 1
(0,0.4)     ; ALAG 1
(0,0.6)     ; KA 2
(0,11)      ; CL 2 
(0,135)     ; V2 2
(0,14)      ; Q 2
(0,90)      ; V3 2
(0,0.4)     ; ALAG 2
$OMEGA  0.014
$ESTIMATION PRINT=5 MAXEVALS=9990
$TABLE  ID TRT SEQ PERIOD TIME TAFD AGE WT HT BMI EVID MDV CMT 
              KA CL V2 Q V3 ALAG1 IPRED NOPRINT ONEHEADER FILE=mytab100;

NONMEM estimates the parameters for all individuals in the dataset, but PsN fails to compile the output and instead gives the following message:

Could not find a model file copy (control stream) at top of lst-file for problem number 2 in lst-file /home/eliford/working/modeling/modelfit_dir1/NM_run1/psn.lst The nmfe script normally copies the model file but PsN cannot find it.

The message is repeated as many times as the total number of subjects, in this case, 35 times.

Is there a workaround for this issue?
Thank you very much.

Bootstrap tool: different seeds lead to identical results

When using the bootstrap tool, seeds 100, 101, 102 etc. up until 109 lead to identical results. A remark about this issue used to be in the bootstrap user guide, but it was taken out in recent versions. Perhaps it should be added again?

Not sure if the same issue holds for other simulation based tools.

Could it be a command-line parsing issue, since the last digit of the provided seed seems to be dropped?

Common option "template_directory_rplots" does not work on Windows

Run crashes on Windows when trying to change template_directory_rplots.

Command was run like this:
execute pheno.mod -template_directory_rplots=C:\my_rplots\R-scripts

Error message:
The 'string' parameter does not pass the type constraint because: Validation failed for 'Str' with value undef at C:\Strawberry\perl\site\lib\MooseX\Params\Validate.pm line 239
MooseX::Params::Validate::ANON(undef, 'HASH(0x3179670)') called at C:\Strawberry\perl\site\lib\MooseX\Params\Validate.pm line 69
eval {...} at C:\Strawberry\perl\site\lib\MooseX\Params\Validate.pm line 69
MooseX::Params::Validate::validated_hash('ARRAY(0x8d91528)', 'string', 'HASH(0x93a2240)') called at C:\Users\svefr809\PsN\lib\rplots.pm line 201
rplots::double_backslashes('string', undef) called at C:\Users\svefr809\PsN\lib\rplots.pm line 129
rplots::setup('rplots=HASH(0xa0252b0)') called at C:\Users\svefr809\PsN\lib\rplots.pm line 39
rplots::BUILD('rplots=HASH(0xa0252b0)', 'HASH(0x9e492c8)') called at constructor rplots::new (defined at C:\Users\svefr809\PsN\lib\rplots.pm line 412) line 194
rplots::new('rplots', 'toolname', 'execute', 'directory', 'C:\Users\svefr809\develope\qa_new\modelfit_dir2', 'level', 0, 'raw_results_file', 'raw_results_pheno.csv', 'tool_results_file', 'psn_results.csv', 'plotcode', 'ARRAY(0x907a4e0)', 'subset_variable', undef, 'model', 'model=HASH(0x9234cd0)', 'R_markdown', 1, 'rmarkdown_installed', 1) called at C:\Users\svefr809\PsN\lib\tool.pm line 1941
tool::create_R_script(undef) called at C:\Users\svefr809\PsN\lib\tool.pm line 713
tool::print_results(undef) called at C:\Users\svefr809\PsN\bin\execute line 332

SIR: Capping of correlations can break positive definiteness, causing library exit

A SIR execution was observed where the matrix was positive definite, but 5 correlations >= 0.8 caused SIR (per default) to modify the matrix ("capping" the correlations). The underlying C code (randlib) of Math::Random::random_multivariate_normal then exited on the new matrix with COVM not positive definite in SETGMN.

It seems that SIR ought to at least check positive definiteness after the modification, to give a more informative error exit, but ideally it could also have a -force_posdef_covmatrix option with e.g. Frobenius-norm reporting (similar to postfrem) which can repair the matrix (possibly being the default after correlation modifications). A diagonal inflation could also be suggested to the user to resolve the numerical issues.
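The missing check is cheap: attempt a Cholesky factorization of the modified matrix and stop with a clear message if it breaks down. A self-contained sketch (pure Python, illustrative only):

```python
def is_positive_definite(a):
    """Cholesky-based test for symmetric positive definiteness;
    a is a square matrix given as a list of lists."""
    n = len(a)
    l = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(l[i][k] * l[j][k] for k in range(j))
            if i == j:
                d = a[i][i] - s
                if d <= 0.0:
                    return False  # factorization broke down
                l[i][i] = d ** 0.5
            else:
                l[i][j] = (a[i][j] - s) / l[j][j]
    return True

good = [[1.0, 0.5], [0.5, 1.0]]
bad = [[1.0, 1.1], [1.1, 1.0]]  # |correlation| > 1: not posdef
```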

PsN 4.7.0 execute cannot parse "1THERE"

When running PsN 4.7.0 with NONMEM 7.4.1, I got the following error at the end of a run:

could not match 1THERE at /opt/PsN/4.7.0/PsN_4_7_0/output/problem/subproblem.pm line 3240.

Unfortunately, I can't share the entire .lst file, but the offending line appears to be the second of the lines below (these are the last 4 lines of the .lst file):

 Elapsed finaloutput time in seconds:     0.58
1THERE ARE ERROR MESSAGES IN FILE PRDERR
 #CPUT: Total CPU Time in Seconds,     6640.001
Stop Time:
Sun Jan  7 16:05:28 UTC 2018

The model converged with two parameters near their boundaries:

0MINIMIZATION SUCCESSFUL
 NO. OF FUNCTION EVALUATIONS USED:     2495
 NO. OF SIG. DIGITS IN FINAL EST.:  3.4
0PARAMETER ESTIMATE IS NEAR ITS BOUNDARY

nonpb generates illegal abbreviated code for theophylline PK

Running util/CONTROL5 with nonpb crashes PsN. The cause is the illegal generated abbreviated code.

To reproduce
$ nonpb CONTROL5 -samples=2

Error message:
AN ERROR WAS FOUND ON LINE 37 AT THE APPROXIMATE POSITION NOTED:
K=THETA(2)+-2.712300E-02

The unary minus cannot follow directly on the binary plus.
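When a numeric value is substituted into generated abbreviated code, the sign has to be folded into the preceding operator. A sketch of such a fix (Python, hypothetical helper; the exact insertion point in nonpb is not shown):

```python
def append_term(expr, value):
    """Append '+value' or '-|value|' to an expression so that a
    negative value never produces an illegal '+-' sequence."""
    if value < 0:
        return f"{expr}-{abs(value):.6E}"
    return f"{expr}+{value:.6E}"

code = append_term("K=THETA(2)", -2.7123e-02)
# Emits 'K=THETA(2)-2.712300E-02', not 'K=THETA(2)+-2.712300E-02'
```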

Feature Request: psn_config script

From a system administrator perspective, it would be helpful to simplify the configuration update process for PsN.

I currently do this in a Dockerfile in a way that is fragile to the order of PsN options (see https://github.com/billdenney/Pharmacometrics-Docker/blob/9b4301deb94fcd20ce69dcd21d1bf623a10324a1/Perl_speaks_NONMEM_4.7.0.Dockerfile#L84 lines 84 to 99).

What would you think about a helper script, called something like psn_config, that would update the psn.conf file in a repeatable way? My thoughts are that it would have an interface like:

psn_config --conf_file=[filename] --section=[section name] [action] [text]

Where [section name] is a PsN section name (that may or may not exist) or the name "preamble" for the initial section.

[action] is one of

  • --remove which would delete the whole section,
  • --replace which would replace the whole section with the [text],
  • --prepend which would place the [text] before any text in the section,
  • --append which would place the [text] after any text in the section, and
  • --replace_value which would replace the value of a single setting (or set it if the setting does not exist) with the [text]

For any of these, if the section does not yet exist, a message would be generated indicating that the section doesn't exist, and it would be created.
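A sketch of the --replace_value action (Python's configparser as a stand-in; psn.conf's exact dialect may differ, so this is illustrative only):

```python
import configparser
import io

def replace_value(conf_text, section, key, value):
    """--replace_value: set key=value in [section], creating the
    section (with a message) if it does not exist yet."""
    cp = configparser.ConfigParser()
    cp.optionxform = str  # keep keys case-sensitive
    cp.read_string(conf_text)
    if not cp.has_section(section):
        print(f"Section [{section}] does not exist; creating it.")
        cp.add_section(section)
    cp.set(section, key, value)
    out = io.StringIO()
    cp.write(out)
    return out.getvalue()

conf = "[nm_versions]\ndefault=/opt/nm74,7.4\n"
updated = replace_value(conf, "nm_versions", "nm751", "/opt/nm751,7.5")
```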

Add option `-compress_tables`

In the new version of xpose I have added the possibility to automatically read compressed output tables (e.g. sdtab001.zip). This option could be useful for many users, as storage can become an issue over time, especially with cloud-based solutions.

I am assuming the easiest way to implement this would be to compress tables during run post-processing, just before the tables are moved from runxxx.dir1/NM_run1/ to the parent directory.
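That post-processing step could look roughly like this (Python sketch; the sdtab/patab/cotab/catab prefixes are the usual xpose table names and are assumed here):

```python
import os
import tempfile
import zipfile

def compress_tables(run_dir, prefixes=("sdtab", "patab", "cotab", "catab")):
    """Zip each output table found in run_dir and remove the original."""
    compressed = []
    for name in sorted(os.listdir(run_dir)):
        if name.startswith(prefixes):
            path = os.path.join(run_dir, name)
            with zipfile.ZipFile(path + ".zip", "w",
                                 zipfile.ZIP_DEFLATED) as zf:
                zf.write(path, arcname=name)
            os.remove(path)
            compressed.append(name + ".zip")
    return compressed

# Demo on a throwaway directory standing in for NM_run1/.
run_dir = tempfile.mkdtemp()
with open(os.path.join(run_dir, "sdtab001"), "w") as f:
    f.write("TABLE NO.  1\n")
done = compress_tables(run_dir)
```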

4.7.0 to 4.9.3 with nmqual fails system test: space in filename

With the same setup as #65, I got the error below (this appears to be an internal issue to PsN and not an accuracy issue since it refers to the -no-copy-data option).

Starting 1 NONMEM executions. 1 in parallel.
S:1 ..
All executions started.
can't change dir to /tmp/PsN-test_system_execute_1/a
No such file or directory

NMtran could not be initiated (the NMtran output file FDATA is missing).
There is no output for model 1. - check that the nmfe script can be run
independent of PsN
F:1 .. abort_on_fail is set and run in NM_run1 failed. Aborting at /opt/PsN/4.7.0/PsN_4_7_0/tool/modelfit.pm line 834.

#   Failed test '/opt/PsN/4.7.0/bin/execute-4.7.0  -abort_on_fail  pheno.mod -no-copy_data, spaces in data path'
#   at system/execute.t line 40.

Cannot Summarize NUTS Method

I have a model with 2 estimation steps: FOCEI followed by NUTS.

$ESTIMATION
  SIGL=9 NSIG=2
  NOABORT
  METHOD=CONDITIONAL INTERACTION
  LAPLACIAN
  NUMERICAL SLOW
  MAXEVAL=9999 PRINT=1
  SADDLE_RESET=1
  FILE=run28foce.ext

$EST METHOD=NUTS
  INTERACTION
  LAPLACIAN
  PRINT=1
  NBURN=200
  NITER=500
  CTYPE=2
  NOPRIOR=0

According to the .lst file, it appeared to complete successfully:


 iteration          500 MCMCOBJ=    1093.58352180651
 BURN-IN WAS COMPLETED

 #TERM:
 STATISTICAL PORTION WAS COMPLETED

 TOTAL DATA POINTS NORMALLY DISTRIBUTED (N):          179
 N*LOG(2PI) CONSTANT TO OBJECTIVE FUNCTION:    328.979994887273
 OBJECTIVE FUNCTION VALUE WITHOUT CONSTANT:    1824.85955048676
 OBJECTIVE FUNCTION VALUE WITH CONSTANT:       2153.83954537403
 REPORTED OBJECTIVE FUNCTION DOES NOT CONTAIN CONSTANT

 TOTAL EFFECTIVE ETAS (NIND*NETA):                           144
 NIND*NETA*LOG(2PI) CONSTANT TO OBJECTIVE FUNCTION:    264.654297562946
 OBJECTIVE FUNCTION VALUE WITHOUT CONSTANT:    1824.85955048676
 OBJECTIVE FUNCTION VALUE WITH CONSTANT:       2089.51384804970
 REPORTED OBJECTIVE FUNCTION DOES NOT CONTAIN CONSTANT

 #TERE:
 Elapsed estimation  time in seconds:   174.76
 Elapsed covariance  time in seconds:     0.00
1
 ************************************************************************************************************************
 ********************                                                                                ********************
 ********************                              NUTS BAYESIAN ANALYSIS                            ********************
 #OBJT:**************                       AVERAGE VALUE OF LIKELIHOOD FUNCTION                     ********************
 ********************                                                                                ********************
 ************************************************************************************************************************

$ sumo modelfit_dir1/NM_run1/psn.lst
Did not find expected information under METH: NUTS Bayesian Analysis number 2

Unable to read everything from outputfile, parser error message:
Did not find expected information under METH: NUTS Bayesian Analysis number 2

The information that appears under what I believe is the relevant section is:

 #TBLN:      2
 #METH: NUTS Bayesian Analysis

 ESTIMATION STEP OMITTED:                 NO
 ANALYSIS TYPE:                           POPULATION
 NUMBER OF SADDLE POINT RESET ITERATIONS:      1
 HESSIAN USED IN SADDLE POINT RESET: VARIABLE METRIC
 GRADIENT METHOD USED:               SLOW
 CONDITIONAL ESTIMATES USED:              YES
 CENTERED ETA:                            NO
 EPS-ETA INTERACTION:                     YES
 LAPLACIAN OBJ. FUNC.:                    NO
 NO. OF FUNCT. EVALS. ALLOWED:            9999
 NO. OF SIG. FIGURES REQUIRED:            2
 INTERMEDIATE PRINTOUT:                   YES
 ESTIMATE OUTPUT TO MSF:                  NO
 ABORT WITH PRED EXIT CODE 1:             NO
 IND. OBJ. FUNC. VALUES SORTED:           NO
 NUMERICAL DERIVATIVE
       FILE REQUEST (NUMDER):               NONE
 MAP (ETAHAT) ESTIMATION METHOD (OPTMAP):   0
 ETA HESSIAN EVALUATION METHOD (ETADER):    0
 INITIAL ETA FOR MAP ESTIMATION (MCETA):    0
 SIGDIGITS FOR MAP ESTIMATION (SIGLO):      9
 GRADIENT SIGDIGITS OF
       FIXED EFFECTS PARAMETERS (SIGL):     9
 NOPRIOR SETTING (NOPRIOR):                 OFF
 NOCOV SETTING (NOCOV):                     OFF
 DERCONT SETTING (DERCONT):                 OFF
 FINAL ETA RE-EVALUATION (FNLETA):          ON
 EXCLUDE NON-INFLUENTIAL (NON-INFL.) ETAS
       IN SHRINKAGE (ETASTYPE):             NO
 NON-INFL. ETA CORRECTION (NONINFETA):      OFF
 RAW OUTPUT FILE (FILE): psn.ext
 EXCLUDE TITLE (NOTITLE):                   NO
 EXCLUDE COLUMN LABELS (NOLABEL):           NO
 FORMAT FOR ADDITIONAL FILES (FORMAT):      S1PE12.5
 PARAMETER ORDER FOR OUTPUTS (ORDER):       TSOL
 WISHART PRIOR DF INTERPRETATION (WISHTYPE):0
 KNUTHSUMOFF:                               0
 INCLUDE LNTWOPI:                           NO
 INCLUDE CONSTANT TERM TO PRIOR (PRIORC):   NO
 INCLUDE CONSTANT TERM TO OMEGA (ETA) (OLNTWOPI):NO
 EM OR BAYESIAN METHOD USED:                MCMC BAYESIAN (BAYES)
 MU MODELING PATTERN (MUM):
 GRADIENT/GIBBS PATTERN (GRD):
 AUTOMATIC SETTING FEATURE (AUTO):          OFF
 CONVERGENCE TYPE (CTYPE):                  2
 KEEP ITERATIONS (THIN):            1
 CONVERGENCE INTERVAL (CINTERVAL):          1
 CONVERGENCE ITERATIONS (CITER):            10
 CONVERGENCE ALPHA ERROR (CALPHA):          5.000000000000000E-02
 BURN-IN ITERATIONS (NBURN):                200
 ITERATIONS (NITER):                        500
 ANEAL SETTING (CONSTRAIN):                 1
 STARTING SEED FOR MC METHODS (SEED):       11456
 MC SAMPLES PER SUBJECT (ISAMPLE):          1
 RANDOM SAMPLING METHOD (RANMETHOD):        3U
 PROPOSAL DENSITY SCALING RANGE
              (ISCALE_MIN, ISCALE_MAX):     1.000000000000000E-06   ,1000000.00000000
 SAMPLE ACCEPTANCE RATE (IACCEPT):          0.400000000000000
 METROPOLIS HASTINGS POPULATION SAMPLING FOR NON-GIBBS
 SAMPLED THETAS AND SIGMAS:
 PROPOSAL DENSITY SCALING RANGE
              (PSCALE_MIN, PSCALE_MAX):   1.000000000000000E-02   ,1000.00000000000
 SAMPLE ACCEPTANCE RATE (PACCEPT):                       0.500000000000000
 SAMPLES FOR GLOBAL SEARCH KERNEL (PSAMPLE_M1):          1
 SAMPLES FOR LOCAL SEARCH KERNEL (PSAMPLE_M2):           -1
 SAMPLES FOR LOCAL UNIVARIATE KERNEL (PSAMPLE_M3):       1
 METROPOLIS HASTINGS POPULATION SAMPLING FOR NON-GIBBS
 SAMPLED OMEGAS:
 SAMPLE ACCEPTANCE RATE (OACCEPT):                       0.500000000000000
 SAMPLES FOR GLOBAL SEARCH KERNEL (OSAMPLE_M1):          -1
 SAMPLES FOR LOCAL SEARCH KERNEL (OSAMPLE_M2):           -1
 SAMPLES FOR LOCAL UNIVARIATE SEARCH KERNEL (OSAMPLE_M3):-1
 MASS/IMP./POST. MATRIX REFRESH SETTING (MASSREST):      -1
 MASS MATRIX ACCUMULATION ITERATIONS (MADAPT):          -1
 MASS MATRIX BLOCKING TYPE (NUTS_MASS):                 B
 MODEL PARAMETERS TRASNFORMED BY MASS MATRIX (NUTS_TRANSFORM=0)
 POWER TERM WEIGHTING FOR MASS MATRIX ACCUM. (KAPPA):   1.00000000000000
 NUTS SAMPLE ACCEPTANCE RATE (NUTS_DELTA):                   0.800000000000000
 NUTS GAMMA SETTING (NUTS_GAMMA):                            5.000000000000000E-02
 DEG. FR. FOR T DIST.  PRIOR FOR THETAS (TTDF):        0.00000000000000
 DEG. FR. FOR LKJ CORRELATION PRIOR FOR OMEGAS (OLKJDF): 0.00000000000000
 WEIGHT FACTOR FOR STD PRIOR FOR OMEGAS (OVARF): 1.00000000000000
 DEG. FR. FOR LKJ CORRELATION PRIOR FOR SIGMAS (SLKJDF): 0.00000000000000
 WEIGHT FACTOR FOR STD PRIOR FOR SIGMAS (SVARF): 1.00000000000000
 NUTS WARMUP METHOD (NUTS_TEST):       0
 NUTS MAXIMAL DEPTH SEARCH (NUTS_MAXDEPTH):       10
 NUTS STAGE I WARMUP ITERATIONS (NUTS_INIT):       7.500000000000000E-02
 NUTS STAGE II base WARMUP ITERATIONS (NUTS_BASE): 2.500000000000000E-02
 NUTS STAGE III FINAL ITERATIONS (NUTS_TERM): 5.000000000000000E-02
 INITIAL ITERATIONS FOR STEP NUTS SIZE ASSESSMENT (NUTS_STEPITER): 1
 INTERVAL ITERATIONS FOR STEP NUTS SIZE ASSESSMENT (NUTS_STEPINTER):0
 ETA PARAMETERIZATION (NUTS_EPARAM):0
 OMEGA PARAMETERIZATION (NUTS_OPARAM):1
 SIGMA PARAMETERIZATION (NUTS_SPARAM):1
 NUTS REGULARIZING METHOD (NUTS_REG): 0.00000000000000


 THE FOLLOWING LABELS ARE EQUIVALENT
 PRED=PREDI
 RES=RESI
 WRES=WRESI
 IWRS=IWRESI
 IPRD=IPREDI
 IRS=IRESI

 EM/BAYES SETUP:
 THETAS THAT ARE MU MODELED:

 THETAS THAT ARE GIBBS SAMPLED:
   1   3   5   6   7   8   9  10  11  12  13  20  21
 THETAS THAT ARE METROPOLIS-HASTINGS SAMPLED:

 SIGMAS THAT ARE GIBBS SAMPLED:

 SIGMAS THAT ARE METROPOLIS-HASTINGS SAMPLED:

 OMEGAS ARE GIBBS SAMPLED

 MONITORING OF SEARCH:

 Burn-in Mode
 iteration         -180 MCMCOBJ=    10638.6549471017
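The sumo parser relies on the `#TBLN:`/`#METH:` markers to locate each estimation method's results in the `.lst` file. A minimal Python sketch (illustrative only, not PsN's actual Perl parser) of how such sections can be enumerated:

```python
import re

def list_methods(lst_text):
    """Return the estimation methods announced by #METH: tags, in order."""
    return re.findall(r"^ ?#METH:\s*(.+?)\s*$", lst_text, flags=re.MULTILINE)

lst_text = """ #TBLN:      1
 #METH: First Order Conditional Estimation with Interaction

 #TBLN:      2
 #METH: NUTS Bayesian Analysis
"""
print(list_methods(lst_text))
```

In this case both method headers are present in the file, so the failure appears to be in parsing the contents *under* the second `#METH:` marker rather than in finding the marker itself.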

VPC binning

Hi,
Is it possible to bin a VPC by a column variable (like nominal time)?
BR Mads

Feature Request: Move R Scripts into a Package?

Hi,

When working on different systems, I don't always have PsN installed on the system where I do my reporting (e.g. moving between a Linux server running PsN and a Windows laptop with R for reporting).

I recently went through such a transition while analyzing bootstrap results: I had to unzip PsN on the Windows laptop to gain access to the scripts, move them to a separate location for reproducibility, and so on.

Would it be feasible to put the R scripts associated with PsN into an R package (or perhaps make them part of xpose) so that R-based post-processing of PsN results can more easily reuse the PsN-provided scripts?

Sumo with -csv

In sumo (v. 4.7.5), when off-diagonal elements are present for OMEGA and/or SIGMA, they are displayed as, for example, OMEGA(5,4) 0.1888.

However, the comma in the name creates issues for the -csv option.

I suggest changing the format from OMEGA(5,4) to Om.5.4., as is done for diagonal elements.
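The problem can be reproduced with any naive comma-joined writer; a small Python sketch (illustrative, not sumo's actual code) showing why an unquoted OMEGA(5,4) label splits into two columns while a dotted name survives:

```python
def naive_csv_row(fields):
    # sumo-style unquoted output: a comma inside a label splits the field
    return ",".join(fields)

bad = naive_csv_row(["OMEGA(5,4)", "0.1888"])
good = naive_csv_row(["Om.5.4.", "0.1888"])

print(bad.split(","))   # label is split into two columns
print(good.split(","))  # label survives as one column
```

Proper CSV quoting (e.g. via Python's csv module or its equivalent in Perl) would also solve this, but renaming to Om.5.4. keeps the file friendly to simple downstream parsers.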

install issue

Hi,
I just upgraded to version 4.6.0 on Ubuntu 16. Installation was seamless, but we keep getting the following error message:

It seems like Fortran compilation by the NONMEM's nmfe script failed.
Cannot start NONMEM.
Go to the NM_run1 subdirectory and run psn.mod with NONMEM's nmfe script to
diagnose the problem.

I go into the NM_run1 directory and test the nmfe script by running

nmfe73 psn.mod out.txt

The model runs fine, and nmfe73 is on the global PATH.

Here is a snippet of psn.conf:

[nm_versions]
default=/opt/nm73,7.3
nm73=/opt/nm73,7.3

Any ideas why "Fortran compilation by the NONMEM's nmfe script failed"?
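The [nm_versions] section maps a version name to an installation directory and a NONMEM version; a small Python sketch (illustrative, not PsN's actual config reader) of that format:

```python
def parse_nm_versions(conf_text):
    """Parse psn.conf [nm_versions] entries of the form name=installdir,version."""
    versions = {}
    in_section = False
    for line in conf_text.splitlines():
        line = line.strip()
        if line.startswith("["):
            in_section = (line == "[nm_versions]")
        elif in_section and "=" in line:
            name, value = line.split("=", 1)
            nmdir, ver = value.rsplit(",", 1)
            versions[name.strip()] = (nmdir.strip(), ver.strip())
    return versions

conf = """[nm_versions]
default=/opt/nm73,7.3
nm73=/opt/nm73,7.3
"""
print(parse_nm_versions(conf))
```

One thing worth checking: PsN launches nmfe via the directory configured here, not via your shell PATH, so a working nmfe73 on the PATH does not by itself prove that the configured /opt/nm73 contains an nmfe script usable by PsN (the exact subdirectory layout depends on the NONMEM installation).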

Uninitialized value messages in execute and sumo

When running execute (which completes), I see the following pair of lines repeated many times (roughly 50 repetitions) for one model:

Use of uninitialized value $element in string eq at /opt/PsN/4.7.0/PsN_4_7_0/output/problem/subproblem.pm line 3082.
Use of uninitialized value $element in eval "string" at /opt/PsN/4.7.0/PsN_4_7_0/output/problem/subproblem.pm line 3082.

When running sumo, I see the same warnings, followed by the warning below repeated a similar (probably the same) number of times:

Use of uninitialized value in abs at /opt/PsN/4.7.0/PsN_4_7_0/output.pm line 419.

I'm not sure exactly how to troubleshoot it (and I can't readily share my .lst file), but here are some lines from the .lst file that I think may be the issue:

 ************************************************************************************************************************
 ********************                                                                                ********************
 ********************               FIRST ORDER CONDITIONAL ESTIMATION WITH INTERACTION              ********************
 ********************                            STANDARD ERROR OF ESTIMATE                          ********************
 ********************                                                                                ********************
 ************************************************************************************************************************



 THETA - VECTOR OF FIXED EFFECTS PARAMETERS   *********


         TH 1      TH 2      TH 3      TH 4      TH 5

              NaN       NaN       NaN       NaN       NaN



 OMEGA - COV MATRIX FOR RANDOM EFFECTS - ETAS  ********


         ETA1      ETA2      ETA3

 ETA1
+        1.3E+154

 ETA2
+       .........  1.3E+154

 ETA3
+       ......... .........  1.3E+154



 SIGMA - COV MATRIX FOR RANDOM EFFECTS - EPSILONS  ****


         EPS1      EPS2

 EPS1
+        1.3E+154

 EPS2
+       .........  1.3E+154

1


 OMEGA - CORR MATRIX FOR RANDOM EFFECTS - ETAS  *******


         ETA1      ETA2      ETA3

 ETA1
+             NaN

 ETA2
+       .........       NaN

 ETA3
+       ......... .........       NaN



 SIGMA - CORR MATRIX FOR RANDOM EFFECTS - EPSILONS  ***


         EPS1      EPS2

 EPS1
+             NaN

 EPS2
+       .........       NaN

1
 ************************************************************************************************************************
 ********************                                                                                ********************
 ********************               FIRST ORDER CONDITIONAL ESTIMATION WITH INTERACTION              ********************
 ********************                          COVARIANCE MATRIX OF ESTIMATE                         ********************
 ********************                                                                                ********************
 ************************************************************************************************************************


            TH 1      TH 2      TH 3      TH 4      TH 5      OM11      OM12      OM13      OM22      OM23      OM33      SG11
             SG12      SG22

 TH 1
+             NaN

 TH 2
+             NaN       NaN

 TH 3
+             NaN       NaN       NaN

 TH 4
+             NaN       NaN       NaN       NaN

 TH 5
+             NaN       NaN       NaN       NaN       NaN

 OM11
+             NaN       NaN       NaN       NaN       NaN       NaN

 OM12
+       ......... ......... ......... ......... ......... ......... .........

 OM13
+       ......... ......... ......... ......... ......... ......... ......... .........

 OM22
+             NaN       NaN       NaN       NaN       NaN       NaN ......... .........       NaN

 OM23
+       ......... ......... ......... ......... ......... ......... ......... ......... ......... .........

 OM33
+             NaN       NaN       NaN       NaN       NaN       NaN ......... .........       NaN .........       NaN

 SG11
+             NaN       NaN       NaN       NaN       NaN       NaN ......... .........       NaN .........       NaN       NaN

 SG12
+       ......... ......... ......... ......... ......... ......... ......... ......... ......... ......... ......... .........
         .........

 SG22
+             NaN       NaN       NaN       NaN       NaN       NaN ......... .........       NaN .........       NaN       NaN
         .........       NaN

1
 ************************************************************************************************************************
 ********************                                                                                ********************
 ********************               FIRST ORDER CONDITIONAL ESTIMATION WITH INTERACTION              ********************
 ********************                          CORRELATION MATRIX OF ESTIMATE                        ********************
 ********************                                                                                ********************
 ************************************************************************************************************************


            TH 1      TH 2      TH 3      TH 4      TH 5      OM11      OM12      OM13      OM22      OM23      OM33      SG11
             SG12      SG22

 TH 1
+             NaN

 TH 2
+             NaN       NaN

 TH 3
+             NaN       NaN       NaN

 TH 4
+             NaN       NaN       NaN       NaN

 TH 5
+             NaN       NaN       NaN       NaN       NaN

 OM11
+             NaN       NaN       NaN       NaN       NaN       NaN

 OM12
+       ......... ......... ......... ......... ......... ......... .........

 OM13
+       ......... ......... ......... ......... ......... ......... ......... .........

 OM22
+             NaN       NaN       NaN       NaN       NaN       NaN ......... .........       NaN

 OM23
+       ......... ......... ......... ......... ......... ......... ......... ......... ......... .........

 OM33
+             NaN       NaN       NaN       NaN       NaN       NaN ......... .........       NaN .........       NaN

 SG11
+             NaN       NaN       NaN       NaN       NaN       NaN ......... .........       NaN .........       NaN       NaN

 SG12
+       ......... ......... ......... ......... ......... ......... ......... ......... ......... ......... ......... .........
         .........

 SG22
+             NaN       NaN       NaN       NaN       NaN       NaN ......... .........       NaN .........       NaN       NaN
         .........       NaN

1
 ************************************************************************************************************************
 ********************                                                                                ********************
 ********************               FIRST ORDER CONDITIONAL ESTIMATION WITH INTERACTION              ********************
 ********************                      INVERSE COVARIANCE MATRIX OF ESTIMATE                     ********************
 ********************                                                                                ********************
 ************************************************************************************************************************

(To help me trace back for questions: it's from /*/2017-006/Code/NONMEM/PK/run14.lst)
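The repeated warnings are consistent with the parser hitting undefined values where the `.lst` file prints NaN or "........." placeholders. A defensive sketch in Python (an analogy to the Perl code in subproblem.pm, not PsN itself), assuming the parser tokenizes matrix rows like the ones above:

```python
def parse_matrix_row(line):
    """Turn one NONMEM matrix row like '+  NaN ......... 1.3E+154' into numbers or None."""
    values = []
    for tok in line.lstrip("+").split():
        if tok == "NaN" or set(tok) == {"."}:
            values.append(None)  # undefined entry: guard before abs()/comparisons
        else:
            values.append(float(tok))
    return values

row = parse_matrix_row("+             NaN ......... 1.3E+154")
print(row)
# callers must skip None entries before applying abs() or string comparisons
safe = [abs(v) for v in row if v is not None]
print(safe)
```

The `Use of uninitialized value` messages suggest the Perl code performs the equivalent of the `abs()`/`eq` steps without such a guard when the covariance step produced NaN entries.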

Linearize does not work with NONMEM 7.4

Due to a bug in NONMEM 7.4, linearized models are not estimated correctly. A workaround would be to let PsN add a dummy THETA to these models when running on NONMEM 7.4.
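The proposed workaround could be sketched as follows; the record text and its placement at the end of the control stream are assumptions for illustration, not PsN's actual implementation:

```python
def add_dummy_theta(control_text):
    """Append a fixed dummy THETA to a linearized control stream.

    Hypothetical sketch of the proposed PsN workaround for the NONMEM 7.4 bug;
    the record wording and placement are assumed, not taken from PsN.
    """
    if not control_text.endswith("\n"):
        control_text += "\n"
    return control_text + "$THETA  1 FIX  ; dummy theta, NONMEM 7.4 workaround\n"

model = "$PROBLEM linearized\n$THETA (0,1)\n"
print(add_dummy_theta(model))
```

Because the added THETA is fixed and unused by the model code, it should not change the estimates; it only alters the problem structure that triggers the bug.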

Linearized scm fails to filter data

I experienced what seems to be a bug in the scm tool of PsN when using the linearize option. The cause seems to be that the IGNORE statements present in the original model were not removed from the generated base_model.mod or base_model_with_included_relations.mod.

Here are the PsN-generated $INPUT and $DATA records:

$INPUT      ID DV MDV OPRED D_EPS1 D_EPS2 D_EPS3 D_EPS4 D_ETA1 D_ETA2
            D_ETA3 D_ETA4 D_ETA5 D_ETA6 D_ETA7 D_ETA8 D_ETA9 D_ETA10
            D_ETA11 OETA1 OETA2 OETA3 OETA4 OETA5 OETA6 OETA7 OETA8
            OETA9 OETA10 OETA11 D_EPSETA1_1 D_EPSETA1_2 D_EPSETA1_3
            D_EPSETA1_4 D_EPSETA1_5 D_EPSETA1_6 D_EPSETA1_7
            D_EPSETA1_8 D_EPSETA1_9 D_EPSETA1_10 D_EPSETA1_11
            D_EPSETA2_1 D_EPSETA2_2 D_EPSETA2_3 D_EPSETA2_4
            D_EPSETA2_5 D_EPSETA2_6 D_EPSETA2_7 D_EPSETA2_8
            D_EPSETA2_9 D_EPSETA2_10 D_EPSETA2_11 D_EPSETA3_1
            D_EPSETA3_2 D_EPSETA3_3 D_EPSETA3_4 D_EPSETA3_5
            D_EPSETA3_6 D_EPSETA3_7 D_EPSETA3_8 D_EPSETA3_9
            D_EPSETA3_10 D_EPSETA3_11 D_EPSETA4_1 D_EPSETA4_2
            D_EPSETA4_3 D_EPSETA4_4 D_EPSETA4_5 D_EPSETA4_6
            D_EPSETA4_7 D_EPSETA4_8 D_EPSETA4_9 D_EPSETA4_10
            D_EPSETA4_11 STUDY ARM MEAL SEX AGE HT WT FAT CARB OGZ_FAU
            OGK_FAU OGZ_TON OGK_TON OGZ_SGR OGK_SGR OGZ_HLRB OGK_HLRB
            OGZ_MTTAL OGK_MTTAL OGZ_EH OGK_EH
$DATA      ../../derivatives_covariates.dta IGNORE=@ IGNORE=(IGN>0)
            IGNORE=(STUDY/=1) IGNORE=(FIPP>=1) IGNORE=(FTBA==1)

Hence NONMEM returned the following error and the scm run stopped:

 AN ERROR WAS FOUND ON LINE  23 AT THE APPROXIMATE POSITION NOTED:
 $DATA      ../../derivatives_covariates.dta IGNORE=@ IGNORE=(IGN>0)
                                                              X  
 THE CHARACTERS IN ERROR ARE: IGN
 508  IGNORE LIST OR ACCEPT LIST CONTAINS AN UNKNOWN DATA ITEM LABEL.

I used PsN v4.7.0; here are some of the options I used in the scm configuration file:

linearize=1
foce=1 
lst_file=run701.lst
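A possible fix is for the linearize step to drop IGNORE conditions that reference columns absent from the derived dataset, keeping IGNORE=@ and conditions on columns that do appear in $INPUT. A rough Python sketch under that assumption (not PsN's actual code):

```python
import re

def filter_ignores(data_record, input_columns):
    """Keep IGNORE=@ and IGNORE conditions whose column exists in $INPUT; drop the rest."""
    def keep(match):
        cond = match.group(1)
        col = re.match(r"\(?\s*([A-Za-z_]\w*)", cond)
        if cond == "@" or (col and col.group(1) in input_columns):
            return match.group(0)
        return ""  # condition refers to a column missing from the derived data
    return re.sub(r"\s*IGNORE=((?:\([^)]*\))|\S+)", keep, data_record)

cols = {"ID", "DV", "MDV", "STUDY"}
record = "$DATA data.dta IGNORE=@ IGNORE=(IGN>0) IGNORE=(STUDY/=1) IGNORE=(FIPP>=1)"
print(filter_ignores(record, cols))
# drops the IGN and FIPP conditions, which is what triggers NONMEM error 508 above
```

In the failing run, IGN, FIPP, and FTBA are not in the generated $INPUT, so NONMEM correctly rejects them as unknown data item labels; filtering the original rows on those columns before writing the derivatives dataset would make the conditions redundant anyway.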
