stsb3.sts
AR1
def __init__(
self,
name=None,
t0=0,
t1=2,
size=1,
alpha=None,
beta=None,
scale=None,
):
An autoregressive block of order 1.
The data generating process for this block is
\[ z_t = \alpha_t + \beta_t z_{t-1} + \mathrm{scale}_t w_t, \]
for \(t = t_0,...,t_1\) and \(w_t \sim \text{Normal}(0, 1)\). Here, \(\alpha\) is the dgp for the intercept parameter, \(\beta\) is the dgp for the slope parameter, and \(\mathrm{scale}\) is the dgp for the scale parameter. These processes may be other Blocks, torch.tensors, or pyro.distributions objects, and the interpretation of these parameters will change accordingly.
Args:
alpha (Block || torch.Tensor || pyro.distributions): the intercept parameter
beta (Block || torch.Tensor || pyro.distributions): the slope parameter
scale (Block || torch.Tensor || pyro.distributions): the noise scale parameter
See Block for definitions of other parameters.
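A minimal construction sketch (hedged: it assumes the module is importable as below and that unset parameters fall back to sensible defaults; the tensor values are illustrative only):

import torch
from stsb3 import sts

# AR(1) with a fixed slope and noise scale; alpha is left at its default
ar = sts.AR1(
    name="ar",
    t0=0,
    t1=100,
    size=1,
    beta=torch.tensor(0.5),
    scale=torch.tensor(0.1),
)
draws = ar.model()  # draw a batch of sample paths from the dgp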
_maybe_add_blocks
def _maybe_add_blocks(self, *args):
Adds parameters to prec and succ if they subclass Block.
Args:
args: iterable of (name, parameter, bound)
_maybe_remove_blocks
None
_model
None
_transform
Defines a transform from a string argument.
Currently the following string arguments are supported:
The resulting transform will be added to the transform stack iff it is not already at the top of the stack.
Args:
arg (str): one of the above strings, corresponding to a transform function
Returns:
self (stsb.Block)
arctanh
x -> arctanh(x), i.e. x -> 0.5 log((1 + x) / (1 - x))
clear_cache
Clears the block cache.
This method does not alter the cache mode.
cos
x -> cos x
diff
x -> x[1:] - x[:-1]
Note that this lowers the time dimension from T to T - 1.
exp
x -> exp(x)
floor
x -> x - [[x]], where [[.]] is the fractional part operator
invlogit
x -> 1 / (1 + exp(-x))
log
x -> log x
Block paths must be positive for valid output.
logdiff
x -> log x[1:] - log x[:-1]
Note that this lowers the time dimension from T to T - 1.
logit
x -> log(x / (1 - x))
model
def model(self, *args, **kwargs):
Draws a batch of samples from the block.
Args:
args: optional positional arguments
kwargs: optional keyword arguments
Returns:
draws (torch.tensor): sampled values from the block
prec
Returns the predecessor nodes of self
in the (implicit) compute graph
Returns:
_prec (list)
: list of predecessor nodes
sin
x -> sin x
softplus
x -> log(1 + exp(x))
succ
Returns the successor nodes of self
in the (implicit) compute graph
Returns:
_succ (list)
: list of successor nodes
tanh
x -> tanh(x), i.e. x -> (exp(x) - exp(-x)) / (exp(x) + exp(-x))
BernoulliNoise
def __init__(
self,
dgp,
data=None,
name=None,
t0=0,
t1=2,
size=1,
):
A noise block (time series likelihood function) that assumes a Bernoulli observation process.
This observation process is suitable for use with on-off / indicator data.
The likelihood function for this block is
\[ p(x | \mathrm{dgp}) = \prod_{t=t_0}^{t_1} \mathrm{Bernoulli}(x_t | \mathrm{dgp}_t) \]
The \(\mathrm{dgp}\) needs to be constrained to lie in (0, 1) because it is used as the probability of the Bernoulli likelihood. Consider using .invlogit(...) on an unconstrained Block.
Args:
See NoiseBlock for definitions of arguments.
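A hedged sketch of the .invlogit(...) pattern suggested above (my_indicator_data is a hypothetical (size, time) tensor of 0/1 observations):

from stsb3 import sts

latent = sts.RandomWalk(name="rw", t0=0, t1=50)  # unconstrained latent path
probs = latent.invlogit()  # squashed into (0, 1), usable as Bernoulli probabilities
obs = sts.BernoulliNoise(probs, data=my_indicator_data, name="obs", t0=0, t1=50)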
_fit_autoguide
None
_maybe_add_blocks
def _maybe_add_blocks(self, *args):
Adds parameters to prec and succ if they subclass Block.
Args:
args: iterable of (name, parameter, bound)
_maybe_remove_blocks
None
_model
None
_transform
Defines a transform from a string argument.
Currently the following string arguments are supported:
The resulting transform will be added to the transform stack iff it is not already at the top of the stack.
Args:
arg (str): one of the above strings, corresponding to a transform function
Returns:
self (stsb.Block)
arctanh
x -> arctanh(x), i.e. x -> 0.5 log((1 + x) / (1 - x))
clear_cache
Clears the block cache.
This method does not alter the cache mode.
cos
x -> cos x
diff
x -> x[1:] - x[:-1]
Note that this lowers the time dimension from T to T - 1.
exp
x -> exp(x)
fit
def fit(self, method="nf_block_ar", method_kwargs=dict(), verbosity=0.01):
Fits a guide (variational posterior) to the model.
Wraps multiple Pyro implementations of variational inference. To minimize noise in the estimation you should follow the Pyro guidelines about marginalizing out discrete latent rvs, etc.
Args:
method (str): one of "advi", "low_rank", or "nf_block_ar".
"advi": fits a diagonal normal distribution in unconstrained latent space
"low_rank": fits a low-rank multivariate normal in unconstrained latent space. Unlike the diagonal normal, this guide can capture some nonlocal dependence in latent rvs.
"nf_block_ar": fits a normalizing flow block autoregressive neural density estimator in unconstrained latent space. This method uses two stacked block autoregressive NNs. See the Pyro docs for more details.
method_kwargs (dict): optional keyword arguments to pass to Pyro's inference capabilities. If no keyword arguments are specified, sane defaults will be passed instead. Some arguments include:
"niter": number of iterations to run optimization (default 1000)
"lr": the learning rate (default 0.01)
"loss": the loss function to use (default "Trace_ELBO")
"optim": the optimizer to use (default "AdamW")
verbosity (float): status messages are printed every int(1.0 / verbosity) iterations
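A hedged usage sketch (obs is a noise block as constructed above; the keyword values are illustrative, not recommendations):

obs.fit(
    method="advi",
    method_kwargs={"niter": 2000, "lr": 0.005},
    verbosity=0.1,  # print a status message every 10 iterations
)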
floor
x -> x - [[x]], where [[.]] is the fractional part operator
invlogit
x -> 1 / (1 + exp(-x))
log
x -> log x
Block paths must be positive for valid output.
logdiff
x -> log x[1:] - log x[:-1]
Note that this lowers the time dimension from T to T - 1.
logit
x -> log(x / (1 - x))
model
def model(self, *args, **kwargs):
Draws a batch of samples from the block.
Args:
args: optional positional arguments
kwargs: optional keyword arguments
Returns:
draws (torch.tensor): sampled values from the block
posterior_predictive
def posterior_predictive(
self,
nsamples=1,
):
Draws from the posterior predictive distribution of the graph with self as the root.
Args:
nsamples (int): number of samples to draw
Returns:
samples (torch.tensor)
prec
Returns the predecessor nodes of self
in the (implicit) compute graph
Returns:
_prec (list)
: list of predecessor nodes
prior_predictive
def prior_predictive(
self,
nsamples=1,
):
Draws from the prior predictive distribution of the graph with self as the root.
Args:
nsamples (int): number of samples to draw
Returns:
samples (torch.tensor)
sample
def sample(
self,
nsamples=100,
thin=0.1,
burnin=500,
):
Sample from the model's posterior using the Pyro implementation of the No-U-Turn Sampler.
This could take a very long time for long time series. It is recommended to use .fit(...) instead.
Args:
nsamples (int): number of desired samples after burn-in and thinning
thin (float): every int(1.0 / thin) sample is kept
burnin (int): samples[burnin:] are kept
sin
x -> sin x
softplus
x -> log(1 + exp(x))
succ
Returns the successor nodes of self
in the (implicit) compute graph
Returns:
_succ (list)
: list of successor nodes
tanh
x -> tanh(x), i.e. x -> (exp(x) - exp(-x)) / (exp(x) + exp(-x))
CCSDE
def __init__(
self,
name=None,
t0=0,
t1=2,
size=1,
loc=None,
scale=None,
dt=None,
ic=None,
):
A constant-coefficient Euler-Maruyama stochastic differential equation dgp.
The generative model for this process is
\[ z_t = z_{t - 1} + \mathrm{dt}_t \mathrm{loc}_t + \sqrt{\mathrm{dt}_t} \mathrm{scale}_t w_t,\ z_0 = \mathrm{ic}, \]
for \(t = t_0, ..., t_1\). Here, \(\mathrm{loc}\) is the dgp for the location parameter, \(\mathrm{scale}\) is the dgp for the scale parameter, and \(\mathrm{dt}\) is the dgp for the time discretization. These processes may be other Blocks, torch.tensors, or pyro.distributions objects, and the interpretation of \(\mathrm{loc}_t\), \(\mathrm{scale}_t\), and \(\mathrm{dt}_t\) will change accordingly. The initial condition, \(\mathrm{ic}\), can be either a torch.tensor or a pyro.distributions object. The term \(w_t\) is a standard normal variate.
Args:
loc (Block || torch.tensor || pyro.distributions): location parameter
scale (Block || torch.tensor || pyro.distributions): scale parameter
dt (Block || torch.tensor || pyro.distributions): time discretization parameter
ic (torch.tensor || pyro.distributions): initial condition
See Block for definitions of other parameters.
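A minimal construction sketch (hedged; the tensor values are illustrative):

import torch
from stsb3 import sts

# driftless unit-variance Euler-Maruyama SDE with step size 0.01
sde = sts.CCSDE(
    name="sde",
    t0=0,
    t1=100,
    loc=torch.tensor(0.0),
    scale=torch.tensor(1.0),
    dt=torch.tensor(0.01),
    ic=torch.tensor(0.0),
)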
_maybe_add_blocks
def _maybe_add_blocks(self, *args):
Adds parameters to prec and succ if they subclass Block.
Args:
args: iterable of (name, parameter, bound)
_maybe_remove_blocks
None
_model
None
_transform
Defines a transform from a string argument.
Currently the following string arguments are supported:
The resulting transform will be added to the transform stack iff it is not already at the top of the stack.
Args:
arg (str): one of the above strings, corresponding to a transform function
Returns:
self (stsb.Block)
arctanh
x -> arctanh(x), i.e. x -> 0.5 log((1 + x) / (1 - x))
clear_cache
Clears the block cache.
This method does not alter the cache mode.
cos
x -> cos x
diff
x -> x[1:] - x[:-1]
Note that this lowers the time dimension from T to T - 1.
exp
x -> exp(x)
floor
x -> x - [[x]], where [[.]] is the fractional part operator
invlogit
x -> 1 / (1 + exp(-x))
log
x -> log x
Block paths must be positive for valid output.
logdiff
x -> log x[1:] - log x[:-1]
Note that this lowers the time dimension from T to T - 1.
logit
x -> log(x / (1 - x))
model
def model(self, *args, **kwargs):
Draws a batch of samples from the block.
Args:
args: optional positional arguments
kwargs: optional keyword arguments
Returns:
draws (torch.tensor): sampled values from the block
prec
Returns the predecessor nodes of self
in the (implicit) compute graph
Returns:
_prec (list)
: list of predecessor nodes
sin
x -> sin x
softplus
x -> log(1 + exp(x))
succ
Returns the successor nodes of self
in the (implicit) compute graph
Returns:
_succ (list)
: list of successor nodes
tanh
x -> tanh(x), i.e. x -> (exp(x) - exp(-x)) / (exp(x) + exp(-x))
DiscreteSeasonal
def __init__(
self,
name=None,
t0=0,
t1=2,
size=1,
n_seasons=2,
seasons=None,
):
A discrete seasonal block that represents the most basic form of discrete seasonality.
The data generating process for this block is
\[ z_t = \theta_{t \bmod S}, \]
where \(S\) is the total number of seasons and \(\theta = (\theta_1,...,\theta_S)\) are the seasonality components. Currently, \(\theta\) can only be a pyro.distributions instance or a torch.Tensor, though that might change in a future release.
Args:
n_seasons (int): number of discrete seasons
seasons (pyro.distributions || torch.Tensor): season values
See Block for definitions of other parameters.
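A minimal construction sketch (hedged; the season values are illustrative):

import torch
from stsb3 import sts

# weekly seasonality over four weeks, with fixed season values
weekly = sts.DiscreteSeasonal(
    name="weekly",
    t0=0,
    t1=28,
    n_seasons=7,
    seasons=torch.randn(7),
)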
_maybe_add_blocks
def _maybe_add_blocks(self, *args):
Adds parameters to prec and succ if they subclass Block.
Args:
args: iterable of (name, parameter, bound)
_maybe_remove_blocks
None
_model
None
_transform
Defines a transform from a string argument.
Currently the following string arguments are supported:
The resulting transform will be added to the transform stack iff it is not already at the top of the stack.
Args:
arg (str): one of the above strings, corresponding to a transform function
Returns:
self (stsb.Block)
arctanh
x -> arctanh(x), i.e. x -> 0.5 log((1 + x) / (1 - x))
clear_cache
Clears the block cache.
This method does not alter the cache mode.
cos
x -> cos x
diff
x -> x[1:] - x[:-1]
Note that this lowers the time dimension from T to T - 1.
exp
x -> exp(x)
floor
x -> x - [[x]], where [[.]] is the fractional part operator
invlogit
x -> 1 / (1 + exp(-x))
log
x -> log x
Block paths must be positive for valid output.
logdiff
x -> log x[1:] - log x[:-1]
Note that this lowers the time dimension from T to T - 1.
logit
x -> log(x / (1 - x))
model
def model(self, *args, **kwargs):
Draws a batch of samples from the block.
Args:
args: optional positional arguments
kwargs: optional keyword arguments
Returns:
draws (torch.tensor): sampled values from the block
prec
Returns the predecessor nodes of self
in the (implicit) compute graph
Returns:
_prec (list)
: list of predecessor nodes
sin
x -> sin x
softplus
x -> log(1 + exp(x))
succ
Returns the successor nodes of self
in the (implicit) compute graph
Returns:
_succ (list)
: list of successor nodes
tanh
x -> tanh(x), i.e. x -> (exp(x) - exp(-x)) / (exp(x) + exp(-x))
DiscriminativeGaussianNoise
def __init__(
self,
dgp,
X=None,
y=None,
name=None,
t0=0,
t1=2,
size=1,
scale=None,
):
A discriminative noise block used for dynamic regression.
The observation likelihood is given by
\[ p(x | \mathrm{dgp}, \mathrm{scale}) = \prod_{t=t_0}^{t_1} \mathrm{Normal}(x_t | X_t \mathrm{dgp}_t, \mathrm{scale}_t), \]
where \(X_t \mathrm{dgp}_t\) should be interpreted as a batched dot product, i.e., \(\mathrm{loc}_{it} = \sum_j X_{ijt}\mathrm{dgp}_{jt}\).
Args:
X (torch.tensor): shape (size, dims, time)
y (None || torch.tensor): if not None, shape (size, time)
See GaussianNoise for definitions of other parameters.
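A hedged sketch of dynamic regression with this block (my_targets is a hypothetical (size, time) tensor; the choice of a coefficient block with batch size equal to dims is my assumption, not documented behavior):

import torch
from stsb3 import sts

size, dims, time = 1, 3, 50
X = torch.randn(size, dims, time)  # design tensor, shape (size, dims, time)
coef = sts.RandomWalk(name="coef", t0=0, t1=time, size=dims)  # time-varying coefficients
obs = sts.DiscriminativeGaussianNoise(coef, X=X, y=my_targets, name="obs", t0=0, t1=time)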
_fit_autoguide
None
_maybe_add_blocks
def _maybe_add_blocks(self, *args):
Adds parameters to prec and succ if they subclass Block.
Args:
args: iterable of (name, parameter, bound)
_maybe_remove_blocks
None
_model
None
_transform
Defines a transform from a string argument.
Currently the following string arguments are supported:
The resulting transform will be added to the transform stack iff it is not already at the top of the stack.
Args:
arg (str): one of the above strings, corresponding to a transform function
Returns:
self (stsb.Block)
arctanh
x -> arctanh(x), i.e. x -> 0.5 log((1 + x) / (1 - x))
clear_cache
Clears the block cache.
This method does not alter the cache mode.
cos
x -> cos x
diff
x -> x[1:] - x[:-1]
Note that this lowers the time dimension from T to T - 1.
exp
x -> exp(x)
fit
def fit(self, method="nf_block_ar", method_kwargs=dict(), verbosity=0.01):
Fits a guide (variational posterior) to the model.
Wraps multiple Pyro implementations of variational inference. To minimize noise in the estimation you should follow the Pyro guidelines about marginalizing out discrete latent rvs, etc.
Args:
method (str): one of "advi", "low_rank", or "nf_block_ar".
"advi": fits a diagonal normal distribution in unconstrained latent space
"low_rank": fits a low-rank multivariate normal in unconstrained latent space. Unlike the diagonal normal, this guide can capture some nonlocal dependence in latent rvs.
"nf_block_ar": fits a normalizing flow block autoregressive neural density estimator in unconstrained latent space. This method uses two stacked block autoregressive NNs. See the Pyro docs for more details.
method_kwargs (dict): optional keyword arguments to pass to Pyro's inference capabilities. If no keyword arguments are specified, sane defaults will be passed instead. Some arguments include:
"niter": number of iterations to run optimization (default 1000)
"lr": the learning rate (default 0.01)
"loss": the loss function to use (default "Trace_ELBO")
"optim": the optimizer to use (default "AdamW")
verbosity (float): status messages are printed every int(1.0 / verbosity) iterations
floor
x -> x - [[x]], where [[.]] is the fractional part operator
invlogit
x -> 1 / (1 + exp(-x))
log
x -> log x
Block paths must be positive for valid output.
logdiff
x -> log x[1:] - log x[:-1]
Note that this lowers the time dimension from T to T - 1.
logit
x -> log(x / (1 - x))
model
def model(self, *args, **kwargs):
Draws a batch of samples from the block.
Args:
args: optional positional arguments
kwargs: optional keyword arguments
Returns:
draws (torch.tensor): sampled values from the block
posterior_predictive
def posterior_predictive(
self,
nsamples=1,
):
Draws from the posterior predictive distribution of the graph with self as the root.
Args:
nsamples (int): number of samples to draw
Returns:
samples (torch.tensor)
prec
Returns the predecessor nodes of self
in the (implicit) compute graph
Returns:
_prec (list)
: list of predecessor nodes
prior_predictive
def prior_predictive(
self,
nsamples=1,
):
Draws from the prior predictive distribution of the graph with self as the root.
Args:
nsamples (int): number of samples to draw
Returns:
samples (torch.tensor)
sample
def sample(
self,
nsamples=100,
thin=0.1,
burnin=500,
):
Sample from the model's posterior using the Pyro implementation of the No-U-Turn Sampler.
This could take a very long time for long time series. It is recommended to use .fit(...) instead.
Args:
nsamples (int): number of desired samples after burn-in and thinning
thin (float): every int(1.0 / thin) sample is kept
burnin (int): samples[burnin:] are kept
sin
x -> sin x
softplus
x -> log(1 + exp(x))
succ
Returns the successor nodes of self
in the (implicit) compute graph
Returns:
_succ (list)
: list of successor nodes
tanh
x -> tanh(x), i.e. x -> (exp(x) - exp(-x)) / (exp(x) + exp(-x))
GaussianNoise
def __init__(
self,
dgp,
data=None,
name=None,
t0=0,
t1=2,
size=1,
scale=None,
):
A noise block (time series likelihood function) that assumes a centered normal observation process.
The likelihood function for this block is
\[ p(x | \mathrm{dgp}, \mathrm{scale}) = \prod_{t=t_0}^{t_1} \mathrm{Normal}(x_t | \mathrm{dgp}_t, \mathrm{scale}_t) \]
Args:
scale (Block || torch.tensor || pyro.distributions): the noise scale parameter
See NoiseBlock for definitions of other parameters.
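A hedged sketch of attaching this likelihood to a latent dgp (my_series is a hypothetical observed tensor; the values are illustrative):

import torch
from stsb3 import sts

trend = sts.GlobalTrend(name="trend", t0=0, t1=100)
obs = sts.GaussianNoise(
    trend,
    data=my_series,
    name="obs",
    t0=0,
    t1=100,
    scale=torch.tensor(0.5),
)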
_fit_autoguide
None
_maybe_add_blocks
def _maybe_add_blocks(self, *args):
Adds parameters to prec and succ if they subclass Block.
Args:
args: iterable of (name, parameter, bound)
_maybe_remove_blocks
None
_model
None
_transform
Defines a transform from a string argument.
Currently the following string arguments are supported:
The resulting transform will be added to the transform stack iff it is not already at the top of the stack.
Args:
arg (str): one of the above strings, corresponding to a transform function
Returns:
self (stsb.Block)
arctanh
x -> arctanh(x), i.e. x -> 0.5 log((1 + x) / (1 - x))
clear_cache
Clears the block cache.
This method does not alter the cache mode.
cos
x -> cos x
diff
x -> x[1:] - x[:-1]
Note that this lowers the time dimension from T to T - 1.
exp
x -> exp(x)
fit
def fit(self, method="nf_block_ar", method_kwargs=dict(), verbosity=0.01):
Fits a guide (variational posterior) to the model.
Wraps multiple Pyro implementations of variational inference. To minimize noise in the estimation you should follow the Pyro guidelines about marginalizing out discrete latent rvs, etc.
Args:
method (str): one of "advi", "low_rank", or "nf_block_ar".
"advi": fits a diagonal normal distribution in unconstrained latent space
"low_rank": fits a low-rank multivariate normal in unconstrained latent space. Unlike the diagonal normal, this guide can capture some nonlocal dependence in latent rvs.
"nf_block_ar": fits a normalizing flow block autoregressive neural density estimator in unconstrained latent space. This method uses two stacked block autoregressive NNs. See the Pyro docs for more details.
method_kwargs (dict): optional keyword arguments to pass to Pyro's inference capabilities. If no keyword arguments are specified, sane defaults will be passed instead. Some arguments include:
"niter": number of iterations to run optimization (default 1000)
"lr": the learning rate (default 0.01)
"loss": the loss function to use (default "Trace_ELBO")
"optim": the optimizer to use (default "AdamW")
verbosity (float): status messages are printed every int(1.0 / verbosity) iterations
floor
x -> x - [[x]], where [[.]] is the fractional part operator
x -> 1 / (1 + exp(-x))
log
x -> log x
Block paths must be positive for valid output.
logdiff
x -> log x[1:] - log x[:-1]
Note that this lowers the time dimension from T to T - 1.
logit
x -> log(x / (1 - x))
model
def model(self, *args, **kwargs):
Draws a batch of samples from the block.
Args:
args: optional positional arguments
kwargs: optional keyword arguments
Returns:
draws (torch.tensor): sampled values from the block
posterior_predictive
def posterior_predictive(
self,
nsamples=1,
):
Draws from the posterior predictive distribution of the graph with self as the root.
Args:
nsamples (int): number of samples to draw
Returns:
samples (torch.tensor)
prec
Returns the predecessor nodes of self
in the (implicit) compute graph
Returns:
_prec (list)
: list of predecessor nodes
prior_predictive
def prior_predictive(
self,
nsamples=1,
):
Draws from the prior predictive distribution of the graph with self as the root.
Args:
nsamples (int): number of samples to draw
Returns:
samples (torch.tensor)
sample
def sample(
self,
nsamples=100,
thin=0.1,
burnin=500,
):
Sample from the model's posterior using the Pyro implementation of the No-U-Turn Sampler.
This could take a very long time for long time series. It is recommended to use .fit(...) instead.
Args:
nsamples (int): number of desired samples after burn-in and thinning
thin (float): every int(1.0 / thin) sample is kept
burnin (int): samples[burnin:] are kept
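A hedged one-line sketch (obs as constructed above; NUTS can be slow for long series, so prefer .fit(...)):

obs.sample(nsamples=100, thin=0.1, burnin=500)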
sin
x -> sin x
softplus
x -> log(1 + exp(x))
succ
Returns the successor nodes of self
in the (implicit) compute graph
Returns:
_succ (list)
: list of successor nodes
tanh
x -> tanh(x), i.e. x -> (exp(x) - exp(-x)) / (exp(x) + exp(-x))
GlobalTrend
def __init__(
self,
name=None,
t0=0,
t1=2,
size=1,
alpha=None,
beta=None,
):
A global (linear) trend dgp.
The generative model for this process is
\[ z_t = \alpha + \beta t, \]
for \(t = t_0, ..., t_1\). Here, \(\alpha\) is the dgp for the intercept parameter and \(\beta\) is the dgp for the slope parameter. These processes may be other Blocks, torch.tensors, or pyro.distributions objects, and the interpretation of \(\alpha\) and \(\beta\) will change accordingly.
Args:
alpha (Block || torch.tensor || pyro.distributions): intercept parameter
beta (Block || torch.tensor || pyro.distributions): slope parameter
See Block for definitions of other parameters.
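A minimal construction sketch (hedged; a distribution-valued intercept and a fixed slope, both illustrative):

import torch
import pyro.distributions as dist
from stsb3 import sts

trend = sts.GlobalTrend(
    name="trend",
    t0=0,
    t1=100,
    alpha=dist.Normal(0.0, 1.0),  # intercept drawn from a prior
    beta=torch.tensor(0.1),       # fixed slope
)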
_maybe_add_blocks
def _maybe_add_blocks(self, *args):
Adds parameters to prec and succ if they subclass Block.
Args:
args: iterable of (name, parameter, bound)
_maybe_remove_blocks
None
_model
None
_transform
Defines a transform from a string argument.
Currently the following string arguments are supported:
The resulting transform will be added to the transform stack iff it is not already at the top of the stack.
Args:
arg (str): one of the above strings, corresponding to a transform function
Returns:
self (stsb.Block)
arctanh
x -> arctanh(x), i.e. x -> 0.5 log((1 + x) / (1 - x))
clear_cache
Clears the block cache.
This method does not alter the cache mode.
cos
x -> cos x
diff
x -> x[1:] - x[:-1]
Note that this lowers the time dimension from T to T - 1.
exp
x -> exp(x)
floor
x -> x - [[x]], where [[.]] is the fractional part operator
invlogit
x -> 1 / (1 + exp(-x))
log
x -> log x
Block paths must be positive for valid output.
logdiff
x -> log x[1:] - log x[:-1]
Note that this lowers the time dimension from T to T - 1.
logit
x -> log(x / (1 - x))
model
def model(self, *args, **kwargs):
Draws a batch of samples from the block.
Args:
args: optional positional arguments
kwargs: optional keyword arguments
Returns:
draws (torch.tensor): sampled values from the block
prec
Returns the predecessor nodes of self
in the (implicit) compute graph
Returns:
_prec (list)
: list of predecessor nodes
sin
x -> sin x
softplus
x -> log(1 + exp(x))
succ
Returns the successor nodes of self
in the (implicit) compute graph
Returns:
_succ (list)
: list of successor nodes
tanh
x -> tanh(x), i.e. x -> (exp(x) - exp(-x)) / (exp(x) + exp(-x))
MA1
def __init__(
self,
name=None,
t0=0,
t1=2,
size=1,
beta=None,
loc=None,
scale=None,
):
A moving average block of order 1.
The data generating process for this block is
\[ z_t = \mathrm{loc}_t + \mathrm{scale}_t w_t + \beta_t \mathrm{scale}_{t - 1} w_{t-1}, \]
for \(t = t_0,...,t_1\) and \(w_t \sim \text{Normal}(0, 1)\). Here, \(\mathrm{loc}\) is the dgp for the location parameter, \(\mathrm{scale}\) is the dgp for the scale parameter, and \(\beta\) is the dgp for the FIR filter. These processes may be other Blocks, torch.Tensors, or pyro.distributions objects, and the interpretation of these parameters will change accordingly.
NOTE: from the definition of the dgp, \(\mathrm{scale}\) has dimensionality \((N, t_1 - t_0 + 1)\), where the \(+1\) is due to the lagged noise term on the \(t = t_0\) value.
Args:
beta (Block || torch.Tensor || pyro.distributions): the FIR filter parameter
loc (Block || torch.Tensor || pyro.distributions): the location parameter
scale (Block || torch.Tensor || pyro.distributions): the noise scale parameter. Note that, if scale subclasses Block, it must have time dimensionality one higher than this block.
See Block for definitions of other parameters.
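A minimal construction sketch (hedged; scalar parameter tensors are used for simplicity, since a Block-valued scale would need the extra timestep noted above):

import torch
from stsb3 import sts

ma = sts.MA1(
    name="ma",
    t0=0,
    t1=100,
    beta=torch.tensor(0.3),
    loc=torch.tensor(0.0),
    scale=torch.tensor(0.1),
)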
_maybe_add_blocks
def _maybe_add_blocks(self, *args):
Adds parameters to prec and succ if they subclass Block.
Args:
args: iterable of (name, parameter, bound)
_maybe_remove_blocks
None
_model
None
_transform
Defines a transform from a string argument.
Currently the following string arguments are supported:
The resulting transform will be added to the transform stack iff it is not already at the top of the stack.
Args:
arg (str): one of the above strings, corresponding to a transform function
Returns:
self (stsb.Block)
arctanh
x -> arctanh(x), i.e. x -> 0.5 log((1 + x) / (1 - x))
clear_cache
Clears the block cache.
This method does not alter the cache mode.
cos
x -> cos x
diff
x -> x[1:] - x[:-1]
Note that this lowers the time dimension from T to T - 1.
exp
x -> exp(x)
floor
x -> x - [[x]], where [[.]] is the fractional part operator
invlogit
x -> 1 / (1 + exp(-x))
log
x -> log x
Block paths must be positive for valid output.
logdiff
x -> log x[1:] - log x[:-1]
Note that this lowers the time dimension from T to T - 1.
logit
x -> log(x / (1 - x))
model
def model(self, *args, **kwargs):
Draws a batch of samples from the block.
Args:
args: optional positional arguments
kwargs: optional keyword arguments
Returns:
draws (torch.tensor): sampled values from the block
prec
Returns the predecessor nodes of self
in the (implicit) compute graph
Returns:
_prec (list)
: list of predecessor nodes
sin
x -> sin x
softplus
x -> log(1 + exp(x))
succ
Returns the successor nodes of self
in the (implicit) compute graph
Returns:
_succ (list)
: list of successor nodes
tanh
x -> tanh(x), i.e. x -> (exp(x) - exp(-x)) / (exp(x) + exp(-x))
PoissonNoise
def __init__(
self,
dgp,
data=None,
name=None,
t0=0,
t1=2,
size=1,
):
A noise block (time series likelihood function) that assumes a Poisson observation process.
This observation process is suitable for use with count (or other non-negative integer) data that does not exhibit over- or under-dispersion (in practice, if the log ratio of mean to variance of the observed data is not too far away from zero).
The likelihood function for this block is
\[ p(x | \mathrm{dgp}) = \prod_{t=t_0}^{t_1} \mathrm{Poisson}(x_t | \mathrm{dgp}_t) \]
The \(\mathrm{dgp}\) needs to be non-negative because it is used as the rate function of the Poisson likelihood. Consider using .softplus(...) or .exp(...) on an unconstrained Block.
Args:
See NoiseBlock for definitions of arguments.
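A hedged sketch of the .softplus(...) pattern suggested above (my_counts is a hypothetical tensor of non-negative integer observations):

from stsb3 import sts

latent = sts.RandomWalk(name="rw", t0=0, t1=50)  # unconstrained latent path
rate = latent.softplus()  # constrained non-negative, usable as a Poisson rate
obs = sts.PoissonNoise(rate, data=my_counts, name="obs", t0=0, t1=50)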
_fit_autoguide
None
_maybe_add_blocks
def _maybe_add_blocks(self, *args):
Adds parameters to prec and succ if they subclass Block.
Args:
args: iterable of (name, parameter, bound)
_maybe_remove_blocks
None
_model
None
_transform
Defines a transform from a string argument.
Currently the following string arguments are supported:
The resulting transform will be added to the transform stack iff it is not already at the top of the stack.
Args:
arg (str): one of the above strings, corresponding to a transform function
Returns:
self (stsb.Block)
arctanh
x -> arctanh(x), i.e. x -> 0.5 log((1 + x) / (1 - x))
clear_cache
Clears the block cache.
This method does not alter the cache mode.
cos
x -> cos x
diff
x -> x[1:] - x[:-1]
Note that this lowers the time dimension from T to T - 1.
exp
x -> exp(x)
fit
def fit(self, method="nf_block_ar", method_kwargs=dict(), verbosity=0.01):
Fits a guide (variational posterior) to the model.
Wraps multiple Pyro implementations of variational inference. To minimize noise in the estimation you should follow the Pyro guidelines about marginalizing out discrete latent rvs, etc.
Args:
method (str): one of "advi", "low_rank", or "nf_block_ar".
"advi": fits a diagonal normal distribution in unconstrained latent space
"low_rank": fits a low-rank multivariate normal in unconstrained latent space. Unlike the diagonal normal, this guide can capture some nonlocal dependence in latent rvs.
"nf_block_ar": fits a normalizing flow block autoregressive neural density estimator in unconstrained latent space. This method uses two stacked block autoregressive NNs. See the Pyro docs for more details.
method_kwargs (dict): optional keyword arguments to pass to Pyro's inference capabilities. If no keyword arguments are specified, sane defaults will be passed instead. Some arguments include:
"niter": number of iterations to run optimization (default 1000)
"lr": the learning rate (default 0.01)
"loss": the loss function to use (default "Trace_ELBO")
"optim": the optimizer to use (default "AdamW")
verbosity (float): status messages are printed every int(1.0 / verbosity) iterations
floor
x -> x - [[x]], where [[.]] is the fractional part operator
invlogit
x -> 1 / (1 + exp(-x))
log
x -> log x
Block paths must be positive for valid output.
logdiff
x -> log x[1:] - log x[:-1]
Note that this lowers the time dimension from T to T - 1.
logit
x -> log(x / (1 - x))
model
def model(self, *args, **kwargs):
Draws a batch of samples from the block.
Args:
args: optional positional arguments
kwargs: optional keyword arguments
Returns:
draws (torch.tensor): sampled values from the block
posterior_predictive
def posterior_predictive(
self,
nsamples=1,
):
Draws from the posterior predictive distribution of the graph with self as the root.
Args:
nsamples (int): number of samples to draw
Returns:
samples (torch.tensor)
prec
Returns the predecessor nodes of self
in the (implicit) compute graph
Returns:
_prec (list)
: list of predecessor nodes
prior_predictive
def prior_predictive(
self,
nsamples=1,
):
Draws from the prior predictive distribution of the graph with self as the root.
Args:
nsamples (int): number of samples to draw
Returns:
samples (torch.tensor)
sample
def sample(
self,
nsamples=100,
thin=0.1,
burnin=500,
):
Sample from the model's posterior using the Pyro implementation of the No-U-Turn Sampler.
This could take a very long time for long time series. It is recommended to use .fit(...) instead.
Args:
nsamples (int): number of desired samples after burn-in and thinning
thin (float): every int(1.0 / thin) sample is kept
burnin (int): samples[burnin:] are kept
sin
x -> sin x
softplus
x -> log(1 + exp(x))
succ
Returns the successor nodes of self
in the (implicit) compute graph
Returns:
_succ (list)
: list of successor nodes
tanh
x -> tanh(x), i.e. x -> (exp(x) - exp(-x)) / (exp(x) + exp(-x))
RandomWalk
def __init__(
self,
name=None,
t0=0,
t1=2,
size=1,
loc=None,
scale=None,
ic=None,
):
A (biased) normal random walk dgp.
The generative model for this process is
\[ z_t = z_{t - 1} + \mathrm{loc}_t + \mathrm{scale}_t w_t,\ z_0 = \mathrm{ic}, \]
for \(t = t_0,...,t_1\). Here, \(\mathrm{loc}\) is the dgp for the location parameter and \(\mathrm{scale}\) is the dgp for the scale parameter. These processes may be other Blocks, torch.tensors, or pyro.distributions objects, and the interpretation of \(\mathrm{loc}_t\) or \(\mathrm{scale}_t\) will change accordingly. The initial condition, \(\mathrm{ic}\), can be either a torch.tensor or a pyro.distributions object. The term \(w_t\) is a standard normal variate.
Args:
loc (Block || torch.tensor || pyro.distributions): location parameter
scale (Block || torch.tensor || pyro.distributions): scale parameter
ic (torch.tensor || pyro.distributions): initial condition
See Block for definitions of other parameters.
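A minimal construction sketch (hedged; values illustrative):

import torch
from stsb3 import sts

rw = sts.RandomWalk(
    name="rw",
    t0=0,
    t1=100,
    loc=torch.tensor(0.0),   # unbiased walk
    scale=torch.tensor(1.0),
    ic=torch.tensor(0.0),
)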
_maybe_add_blocks
def _maybe_add_blocks(self, *args):
Adds parameters to prec and succ if they subclass Block.
Args:
args: iterable of (name, parameter, bound)
_maybe_remove_blocks
None
_model
None
_transform
Defines a transform from a string argument.
Currently the following string arguments are supported:
The resulting transform will be added to the transform stack iff it is not already at the top of the stack.
Args:
arg (str): one of the above strings, corresponding to a transform function
Returns:
self (stsb.Block)
arctanh
x -> arctanh(x), i.e. x -> 0.5 log((1 + x) / (1 - x))
clear_cache
Clears the block cache.
This method does not alter the cache mode.
cos
x -> cos x
diff
x -> x[1:] - x[:-1]
Note that this lowers the time dimension from T to T - 1.
exp
x -> exp(x)
floor
x -> x - [[x]], where [[.]] is the fractional part operator
invlogit
x -> 1 / (1 + exp(-x))
log
x -> log x
Block paths must be positive for valid output.
logdiff
x -> log x[1:] - log x[:-1]
Note that this lowers the time dimension from T to T - 1.
logit
x -> log(x / (1 - x))
model
def model(self, *args, **kwargs):
Draws a batch of samples from the block.
Args:
args: optional positional arguments
kwargs: optional keyword arguments
Returns:
draws (torch.tensor): sampled values from the block
prec
Returns the predecessor nodes of self
in the (implicit) compute graph
Returns:
_prec (list)
: list of predecessor nodes
sin
x -> sin x
softplus
x -> log(1 + exp(x))
succ
Returns the successor nodes of self
in the (implicit) compute graph
Returns:
_succ (list)
: list of successor nodes
tanh
x -> tanh(x), i.e. x -> (exp(x) - exp(-x)) / (exp(x) + exp(-x))
SmoothSeasonal
def __init__(
self,
name=None,
t0=0,
t1=2,
size=1,
phase=None,
amplitude=None,
lengthscale=None,
cycles=1,
):
A smooth seasonal block.
The generative model for this process is
\[ z_t = \mathrm{amplitude}_t \cos\left(\mathrm{phase}_t + \frac{2\pi\ \mathrm{cycles}\ t}{\mathrm{lengthscale}_t}\right) \]
for \(t = t_0, ..., t_1\). Here, \(\mathrm{amplitude}\) is the dgp for the amplitude, \(\mathrm{phase}\) is the dgp for the phase, and \(\mathrm{lengthscale}\) is the dgp for the lengthscale. These processes may be other Blocks, torch.tensors, or pyro.distributions objects, and the interpretation of these parameters will change accordingly.
This block is experimental and may be removed in a future release.
Args:
phase (Block || torch.tensor || pyro.distributions): phase of the sinusoidal function
amplitude (Block || torch.tensor || pyro.distributions): amplitude of the sinusoidal function
lengthscale (Block || torch.tensor || pyro.distributions): lengthscale of the sinusoidal function; corresponds to \(L\) in \(A \cos(\varphi + 2\pi n t / L)\)
cycles (int): number of cycles of the sinusoid to complete over the interval \([0, L)\)
See Block for definitions of other parameters.
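A minimal construction sketch (hedged; values illustrative):

import torch
from stsb3 import sts

seasonal = sts.SmoothSeasonal(
    name="seasonal",
    t0=0,
    t1=100,
    phase=torch.tensor(0.0),
    amplitude=torch.tensor(1.0),
    lengthscale=torch.tensor(100.0),
    cycles=4,  # four full cycles over [0, L)
)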
_maybe_add_blocks
def _maybe_add_blocks(self, *args):
Adds parameters to prec and succ if they subclass Block.
Args:
args: iterable of (name, parameter, bound)
_maybe_remove_blocks
None
_model
None
_transform
Defines a transform from a string argument.
Currently the following string arguments are supported:
The resulting transform will be added to the transform stack iff it is not already at the top of the stack.
Args:
arg (str): one of the above strings, corresponding to a transform function
Returns:
self (stsb.Block)
arctanh
x -> arctanh(x), i.e. x -> 0.5 log((1 + x) / (1 - x))
clear_cache
Clears the block cache.
This method does not alter the cache mode.
cos
x -> cos x
diff
x -> x[1:] - x[:-1]
Note that this lowers the time dimension from T to T - 1.
exp
x -> exp(x)
floor
x -> x - [[x]], where [[.]] is the fractional part operator
invlogit
x -> 1 / (1 + exp(-x))
log
x -> log x
Block paths must be positive for valid output.
logdiff
x -> log x[1:] - log x[:-1]
Note that this lowers the time dimension from T to T - 1.
logit
x -> log(x / (1 - x))
model
def model(self, *args, **kwargs):
Draws a batch of samples from the block.
Args:
args: optional positional arguments
kwargs: optional keyword arguments
Returns:
draws (torch.tensor): sampled values from the block
prec
Returns the predecessor nodes of self
in the (implicit) compute graph
Returns:
_prec (list)
: list of predecessor nodes
sin
x -> sin x
softplus
x -> log(1 + exp(x))
succ
Returns the successor nodes of self
in the (implicit) compute graph
Returns:
_succ (list)
: list of successor nodes
tanh
x -> tanh(x), i.e. x -> (exp(x) - exp(-x)) / (exp(x) + exp(-x))
forecast
def forecast(dgp, samples, *args, Nt=1, nsamples=1, **kwargs):
Forecasts the root node of the DGP forward in time.
Args:
dgp (Block): the root node to forecast forward
samples (dict): {semantic site name: value}. The value tensors should have shape (m, n, T), where m is the number of samples, n is the batch size, and T is the length of the time series
*args: any additional positional arguments to pass to dgp.model
Nt (int): number of timesteps for which to generate the forecast. The forecast is generated from t1 + 1 to t1 + 1 + Nt.
nsamples (int): number of samples to draw from the forecast distribution
design_tensors (Dict[str, torch.Tensor]):
**kwargs: any additional keyword arguments to pass to dgp.model
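A hedged usage sketch (post is a hypothetical dict of posterior draws keyed by semantic site name, each tensor shaped (m, n, T) as described above; obs is a fitted noise block as in the earlier sketches):

# forecast 10 steps past t1, drawing 100 paths from the forecast distribution
future = forecast(obs, post, Nt=10, nsamples=100)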
redefine
def redefine(
block,
attribute,
obj,
):
Redefines an attribute of a block to the passed object
Args:
block (Block)
attribute (str)
obj (Block || torch.tensor || pyro.distributions)
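A hedged one-line example (ar as in the AR1 sketch above; the tensor value is illustrative): swap a block's scale parameter for a fixed tensor after construction:

redefine(ar, "scale", torch.tensor(0.05))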
register_block
def register_block(
name,
fn_addr_param,
model_fn,
):
Registers a new block at runtime
Args:
name (str): name of the new block (class)
fn_addr_param (dict): functional address parameterization; see documentation of core.construct_init for the required structure
model_fn (callable): the implementation of the likelihood-function portion of Block._model. An example implementation, here for a (deterministic) quadratic trend, is shown below:
def model_fn(x):
    # resolve parameter specs (Blocks, tensors, or distributions) to concrete values
    alpha, beta, gamma = core.name_to_definite(x, "alpha", "beta", "gamma")
    with autoname.scope(prefix=constants.dynamic):
        # time grid over the block's interval
        t = torch.linspace(x.t0, x.t1, x.t1 - x.t0)
        # register the deterministic quadratic-trend path with Pyro
        path = pyro.deterministic(
            x.name + "-" + constants.generated,
            alpha + t * beta + t.pow(2) * gamma,
        )
    return path
The call to core.name_to_definite takes care of calling pyro.sample if the parameters are defined as pyro distributions, calls model methods if the parameters are defined as Blocks, and so on.
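A hedged registration sketch for the quadratic trend above (fn_addr_param is a placeholder; consult core.construct_init for the actual required structure):

register_block(
    "QuadraticTrend",
    fn_addr_param,  # placeholder dict; see core.construct_init
    model_fn,
)

The resulting QuadraticTrend class would then be usable like any other block.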