References
Libraries for survival analysis
In R:

- the core libraries: survival; survminer for ggplot-style survival curves; eha, which has a nice vignette
- advanced (machine learning): random survival forests, with a vignette
- installation guide for parallel processing: RSF requires OpenMP (as does data.table), see e.g. the OpenMP installation guide

In Python:

- the core libraries: lifelines (see its vignettes) and scikit-survival
- installation notes for scikit-survival on Apple Silicon Macs: as per the installation guide, we need `cmake`, so in a terminal run `brew install cmake`, then `pip3 install scikit-survival` (a quick sanity check follows this list)
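A minimal sanity check after the installation steps above (a sketch; it only assumes both packages installed cleanly and expose the standard `__version__` attribute):

```python
# Sanity check (sketch): confirm that lifelines and scikit-survival import correctly
# after running `brew install cmake` and `pip3 install scikit-survival`.
import lifelines
import sksurv

print("lifelines:", lifelines.__version__)
print("scikit-survival:", sksurv.__version__)
```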
Correspondence tables
| fct | R | Python | ex |
|---|---|---|---|
| formula | `Surv(durat, fail) ~ x1 + ..` | | |
| KM estimator | `survfit`, `survminer::ggsurvplot`, `arrange_ggsurvplots` | `KaplanMeierFitter` | |
| Log-Rank Test | `survdiff`, `pairwise_survdiff` | `logrank_test` | ex |
| integrated hazard | `ggsurvplot(., fun = "cumhaz")` | `naf = NelsonAalenFitter`, `naf.plot_cumulative_hazard`, `naf.plot_hazard(bandwidth=)` | |
| RSF | `randomForestSRC::rfsrc` | `RandomSurvivalForest` | ex |
| PH, Weibull | `eha::phreg(., dist = "weibull")` | | |
| as AFT | `survreg` + `ConvertWeibull` | `WeibullAFTFitter` | ex |
| PH, PWE | `eha::pchreg(., cuts)`, `survSplit` + GLM | `CoxPHFitter(baseline_estimation_method = "piecewise")` | |
| PH, partial likelihood | `coxph` | `CoxPHFitter` | |
| + nonpara. baseline haz. | `basehaz` + `locpoly` | | ex |
| Penalised Cox regressions | `glmnet` | | ex |
| PH, propor. test | `cox.zph`, `ggcoxzph`, `ggcoxdiagnostics` | `CoxPHFitter.check_assumptions`, `proportional_hazard_test` | ex |
| MPH | `parfm` | | |
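The Python classes and functions referenced in the table are imported as follows: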
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test
from lifelines import NelsonAalenFitter
from lifelines import CoxPHFitter
from lifelines import WeibullAFTFitter
from lifelines.statistics import proportional_hazard_test
from sksurv.nonparametric import kaplan_meier_estimator
from sksurv.nonparametric import nelson_aalen_estimator
from sksurv.linear_model import CoxPHSurvivalAnalysis
from sksurv.ensemble import RandomSurvivalForest
from sklearn.model_selection import train_test_split
from sklearn.inspection import permutation_importance
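To make the lifelines column of the table concrete, here is a hedged sketch covering the KM estimator, the log-rank test, the Cox PH fit, and the proportionality check. It uses the Rossi recidivism data bundled with lifelines; the dataset and its column names (`week` as duration, `arrest` as event, `fin` as a grouping covariate) are choices made for this illustration, not something taken from the table itself.

```python
# Sketch of the lifelines side of the correspondence table.
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test, proportional_hazard_test
from lifelines.datasets import load_rossi

rossi = load_rossi()

# KM estimator (survfit / ggsurvplot analogue)
kmf = KaplanMeierFitter()
kmf.fit(rossi["week"], event_observed=rossi["arrest"])
kmf.plot_survival_function()

# Log-rank test between two groups (survdiff analogue)
aid, no_aid = rossi[rossi["fin"] == 1], rossi[rossi["fin"] == 0]
res = logrank_test(aid["week"], no_aid["week"],
                   event_observed_A=aid["arrest"],
                   event_observed_B=no_aid["arrest"])
print(res.p_value)

# Cox PH by partial likelihood (coxph analogue),
# then the proportional hazards checks (cox.zph analogue)
cph = CoxPHFitter()
cph.fit(rossi, duration_col="week", event_col="arrest")
cph.print_summary()
cph.check_assumptions(rossi, p_value_threshold=0.05)
proportional_hazard_test(cph, rossi, time_transform="rank").print_summary()
```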
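And a sketch of the scikit-survival side: a Random Survival Forest evaluated by concordance and inspected with permutation importance. The GBSG2 dataset and the one-hot encoding of its categorical covariates are assumptions made for the illustration.

```python
# Sketch of the scikit-survival side of the correspondence table: RSF + permutation importance.
from sksurv.datasets import load_gbsg2
from sksurv.preprocessing import OneHotEncoder
from sksurv.ensemble import RandomSurvivalForest
from sklearn.model_selection import train_test_split
from sklearn.inspection import permutation_importance

X, y = load_gbsg2()
X = OneHotEncoder().fit_transform(X)  # expand categorical covariates
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

rsf = RandomSurvivalForest(n_estimators=200, min_samples_leaf=15, random_state=0)
rsf.fit(X_train, y_train)
print("test concordance index:", rsf.score(X_test, y_test))

# Permutation importance: drop in concordance when each covariate is shuffled
imp = permutation_importance(rsf, X_test, y_test, n_repeats=5, random_state=0)
for name, mean_imp in sorted(zip(X_test.columns, imp.importances_mean),
                             key=lambda t: -t[1]):
    print(f"{name}: {mean_imp:.3f}")
```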
The Zen of Python
The principles (printed by `import this`) are listed as follows:
Beautiful is better than ugly.
Explicit is better than implicit.
Simple is better than complex.
Complex is better than complicated.
Flat is better than nested.
Sparse is better than dense.
Readability counts.
Special cases aren't special enough to break the rules.
Although practicality beats purity.
Errors should never pass silently.
Unless explicitly silenced.
In the face of ambiguity, refuse the temptation to guess.
There should be one-- and preferably only one --obvious way to do it.
Although that way may not be obvious at first unless you're Dutch.
Now is better than never.
Although never is often better than right now.
If the implementation is hard to explain, it's a bad idea.
If the implementation is easy to explain, it may be a good idea.
Namespaces are one honking great idea -- let's do more of those!