This talk does not present new mathematical results. Instead, it
revisits Oksana Chernova’s work on Cox‑type models and baseline
estimation—particularly the deep asymptotic theory and simulation
evidence—from the standpoint of a practitioner working across finance
and biomarker research. Reflecting on her results made me realize that
it may be valuable to share my own unsuccessful attempts to find
rigorous mathematical support for the kinds of Monte Carlo studies
that dominate both fields.

Although the underlying questions appear similar, the structural
differences between domains are profound. In finance, the number of
loans is enormous, observations are nearly continuous, and covariates
evolve dynamically according to diffusion‑type processes. In biomarker
research, sample sizes are small, measurement times are sparse, and
covariates are few and noisy. These differences place the practitioner
in a regime where classical asymptotic arguments—so powerful in
fixed‑design survival analysis—become surprisingly fragile.
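To make the "random environment" point tangible, the finance-style setup can be simulated directly. The sketch below is my own toy illustration, not a model from the talk: a Cox-type hazard whose covariate follows an Ornstein-Uhlenbeck diffusion, with all parameter values and the helper names (`ou_path`, `event_time`) chosen for exposition only. Because the covariate path is random, the cumulative hazard is itself a random functional, differing from draw to draw.

```python
import numpy as np

rng = np.random.default_rng(0)

# Euler-Maruyama simulation of an Ornstein-Uhlenbeck covariate path
# (a stand-in for the diffusion-type covariates mentioned above;
# parameter values are illustrative assumptions).
def ou_path(n_steps, dt, theta=1.0, mu=0.0, sigma=0.5, z0=0.0):
    z = np.empty(n_steps + 1)
    z[0] = z0
    for k in range(n_steps):
        z[k + 1] = z[k] + theta * (mu - z[k]) * dt + sigma * np.sqrt(dt) * rng.normal()
    return z

# Cox-type hazard lambda(t) = lambda0 * exp(beta * Z(t)). Since Z is a
# random path, the cumulative hazard Lambda(t) is a random functional.
def event_time(dt=0.01, horizon=10.0, lambda0=0.1, beta=1.0):
    n_steps = int(horizon / dt)
    z = ou_path(n_steps, dt)
    hazard = lambda0 * np.exp(beta * z[:-1])
    cum_hazard = np.cumsum(hazard * dt)
    u = rng.exponential()  # event occurs when Lambda(t) first exceeds an Exp(1) draw
    idx = np.searchsorted(cum_hazard, u)
    return (idx + 1) * dt if idx < n_steps else np.inf  # inf = censored at horizon

times = np.array([event_time() for _ in range(1000)])
print("censoring fraction:", np.mean(np.isinf(times)))
```

Conditioning on the covariate path, this is ordinary inverse-cumulative-hazard sampling; unconditionally, every subject lives in a different random hazard environment, which is exactly where the standard limit theory starts to strain.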

A central theme of the talk is why asymptotic foundations remain
elusive when covariates evolve dynamically over time, despite the
enormous practical need for them. Once covariates follow stochastic
processes, the hazard becomes a random environment, the cumulative
hazard becomes a random functional, and the likelihood no longer
factorizes in a way that supports standard limit theory. The sample
size is “in the wrong dimension”: finance has a large cross‑section but
strong temporal dependence, while biomarker studies have weak dynamics
but a tiny cross‑section. As a result, the model has no stable asymptotic
object to converge to, and the usual law‑of‑large‑numbers machinery
collapses. Yet practitioners urgently need asymptotic anchors—what
Tukey called leading cases—to build toy models, interpret simulations,
and create shared terminology.
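As a toy example of the Monte Carlo studies in question, here is the biomarker regime in miniature: small samples and a noisily measured covariate. This is my own illustration, not an experiment from the talk; `cox_beta_hat` is a textbook Newton-Raphson fit of the Cox partial likelihood (one scalar covariate, no ties or censoring), and all sample sizes and noise levels are assumptions. The estimates scatter widely and are typically attenuated toward zero relative to the true coefficient.

```python
import numpy as np

rng = np.random.default_rng(1)

# Newton-Raphson maximization of the Cox partial likelihood with one
# scalar covariate (no ties, no censoring) -- a textbook sketch.
def cox_beta_hat(t, z, n_iter=25):
    order = np.argsort(t)        # process risk sets from earliest event
    z = z[order]
    beta = 0.0
    for _ in range(n_iter):
        score, info = 0.0, 0.0
        for i in range(len(z)):  # subjects with t_j >= t_i are at risk
            risk = z[i:]
            w = np.exp(beta * risk)
            zbar = np.sum(w * risk) / np.sum(w)
            score += z[i] - zbar
            info += np.sum(w * risk**2) / np.sum(w) - zbar**2
        beta += np.clip(score / info, -1.0, 1.0)  # damped step for stability
    return beta

def one_run(n, noise_sd, beta_true=1.0):
    z = rng.normal(size=n)                        # true covariate
    t = rng.exponential(np.exp(-beta_true * z))   # Cox model, unit baseline hazard
    z_obs = z + noise_sd * rng.normal(size=n)     # noisy measurement
    return cox_beta_hat(t, z_obs)

# Small-sample, noisy-covariate regime: wide scatter, attenuation toward 0.
est = np.array([one_run(n=30, noise_sd=0.5) for _ in range(200)])
print("mean estimate:", round(est.mean(), 2), " sd:", round(est.std(), 2))
```

A study like this is easy to run and easy to believe, yet pinning down what limit, if any, the estimator approaches once the covariate also evolves dynamically is precisely the gap between simulation evidence and rigorous support described above.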