torch_backend Module#
This is an experimental sub-module to compile pangolin models into plain-old pytorch functions.
Note: Because pytorch is large and sometimes annoying to install, and many users will not use this functionality, pangolin does not install pytorch as a requirement by default. This might lead you to get this error:
```
ImportError: Using torch backend requires torch to be installed
```
To fix this, either install pangolin with the pytorch extras (e.g. `uv sync --extra torch`) or install pytorch yourself (e.g. `pip install torch` or `uv pip install torch`).
Also note: This backend has some limitations in its support for certain distributions, as shown in the following table:
| Op | sampling | vmapped sampling | log probs | vmapped log probs |
|---|---|---|---|---|
|  | ✔ | ❌ | ✔ | ✔ |
|  | ✔ | ❌ | ✔ | ✔ |
|  | ✔ | ❌ | ✔ | ✔ |
|  | ✔ | ❌ | ✔ | ✔ |
|  | ✔ | ❌ | ✔ | ✔ |
|  | ✔ | ❌ | ✔ | ✔ |
| Multinomial | ❌ | ❌ | ❌ | ❌ |
| BetaBinomial | ❌ | ❌ | ❌ | ❌ |
Everything else is fully supported.
For Multinomial and BetaBinomial, this is because torch lacks these distributions. For the others, it is due to a basic ~~bug~~ limitation in PyTorch. Namely, in PyTorch, this works fine:
```python
torch.vmap(lambda dummy: torch.distributions.Normal(0, 1).sample(), randomness='different')(torch.zeros(2))
```
And this should work fine:
```python
torch.vmap(lambda dummy: torch.distributions.Exponential(2.0).rsample(), randomness='different')(torch.zeros(2))
```
But the latter raises the error `RuntimeError: vmap: Cannot ask for different inplace randomness on an unbatched tensor. This will appear like same randomness. If this is necessary for your usage, please file an issue with functorch.` (Said issue is here.) Curiously, Gamma does not have this issue, even though Exponential does and Gamma is a generalization of Exponential. So this backend just creates a Gamma whenever an Exponential is needed.
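As a concrete illustration of that workaround, here is a minimal sketch in plain torch (illustrative only, not pangolin's actual internals): an Exponential with a given rate has the same distribution as a Gamma with concentration 1 and the same rate, and Gamma's `rsample` avoids the vmap limitation above.

```python
import torch

# Hedged sketch of the Gamma-for-Exponential workaround described above.
# Exponential(rate) has the same distribution as Gamma(concentration=1, rate),
# and Gamma's rsample does not trigger the inplace-randomness vmap error.
rate = torch.tensor(2.0)

def sample_exponential(dummy):
    return torch.distributions.Gamma(torch.ones(()), rate).rsample()

samples = torch.vmap(sample_exponential, randomness='different')(torch.zeros(3))
print(samples.shape)  # torch.Size([3]) -- three independent Exponential(2.0) draws
```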
- pangolin.torch_backend.ancestor_sample(vars, size=None)[source]#
Draw exact samples!
- Parameters:
  - vars – a pytree of `RV` to sample
  - size – if given, the number of joint samples to draw (default `None`)
- Returns:
out – PyTree matching the structure of `vars`, but with `torch.tensor` in place of `RV`. If `size` is `None`, then each tensor will have the same shape as the corresponding `RV`. Otherwise, each tensor will have an extra dimension of size `size` added at the beginning.
Examples
Sample a constant RV.
```python
>>> x = RV(ir.Constant(1.5))
>>> ancestor_sample(x)
tensor(1.5000)
```
Sample a PyTree with the RV inside it.
```python
>>> ancestor_sample({'sup': [[x]]})
{'sup': [[tensor(1.5000)]]}
```
Draw several samples.
```python
>>> ancestor_sample(x, size=3)
tensor([1.5000, 1.5000, 1.5000])
```
Draw several samples from a PyTree with an RV inside it.
```python
>>> ancestor_sample({'sup': x}, size=3)
{'sup': tensor([1.5000, 1.5000, 1.5000])}
```
Sample from several random variables at once.
```python
>>> y = RV(ir.Add(), x, x)
>>> z = RV(ir.Mul(), x, y)
>>> print(ancestor_sample({'cat': x, 'dog': [y, z]}))
{'cat': tensor(1.5000), 'dog': [tensor(3.), tensor(4.5000)]}
```
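The same machinery applies to RVs that are actually random. As a hedged sketch (following the pattern of the examples above, but not taken from them), draw many samples from a standard normal RV and use them for a quick Monte Carlo check:

```python
# Hedged sketch: sampling an RV with an actual distribution.
loc = RV(ir.Constant(0.0))
scale = RV(ir.Constant(1.0))
z = RV(ir.Normal(), loc, scale)
samples = ancestor_sample(z, size=1000)  # tensor of shape (1000,)
print(samples.mean())                    # should be roughly 0, since z ~ N(0, 1)
```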
See also
- pangolin.torch_backend.ancestor_sampler(vars)[source]#
Compiles a pytree of RVs into a plain-old torch function that returns a pytree with the same structure containing a joint sample from the distribution of those RVs.
- Parameters:
  vars – a pytree of `RV` to sample
- Returns:
  out – a callable that creates a sample matching the structure and shape of `vars`
Examples
```python
>>> x = RV(ir.Constant(1.5))
>>> y = RV(ir.Add(), x, x)
>>> fun = ancestor_sampler([{'cat': x}, y])
```
You now have a plain-old torch function that’s completely independent of pangolin.
```python
>>> fun()
[{'cat': tensor(1.5000)}, tensor(3.)]
```
You can do normal torch stuff with it, e.g. vmap. But note that limitations in pytorch mean that you must pass some kind of dummy argument and pass `randomness='different'` to get independent samples.

```python
>>> print(torch.vmap(lambda dummy: fun(), randomness='different')(torch.ones(3)))
[{'cat': tensor([1.5000, 1.5000, 1.5000])}, tensor([3., 3., 3.])]
```
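Here every sample is identical because `x` and `y` are deterministic. As a hedged sketch (not from the original examples), the same dummy-argument trick with a normal RV gives genuinely independent draws:

```python
# Hedged sketch: a sampler over a Normal RV, vmapped to get a batch of
# independent samples via the dummy-argument trick described above.
loc = RV(ir.Constant(0.0))
scale = RV(ir.Constant(1.0))
z = RV(ir.Normal(), loc, scale)
sample_z = ancestor_sampler(z)
batch = torch.vmap(lambda dummy: sample_z(), randomness='different')(torch.zeros(5))
print(batch.shape)  # torch.Size([5]) -- five independent N(0, 1) draws
```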
- pangolin.torch_backend.ancestor_log_prob(*vars, **kwvars)[source]#
Given pytrees of vars (as positional and/or keyword arguments), create a plain-old torch function to compute log probabilities.
Examples
```python
>>> loc = ir.RV(ir.Constant(0.0))
>>> scale = ir.RV(ir.Constant(1.0))
>>> x = RV(ir.Normal(), loc, scale)
>>> fun = ancestor_log_prob(x)
```
You now have a plain torch function that’s completely independent of pangolin. You can evaluate it.
```python
>>> fun(torch.tensor(0.0))
tensor(-0.9189)
```
Or you can vmap it.
```python
>>> torch.vmap(fun)(torch.tensor([0.0, 0.5]))
tensor([-0.9189, -1.0439])
```
Here’s a more complex example:
```python
>>> op = ir.VMap(ir.Normal(), [None, None], 3)
>>> y = RV(op, loc, scale)
>>> fun = ancestor_log_prob({'x': x, 'y': y})
>>> fun({'x': torch.tensor(0.0), 'y': torch.tensor([0.0, 0.5, 0.1])})
tensor(-3.8058)
```
You can also create a function that uses positional and/or keyword arguments:
```python
>>> fun = ancestor_log_prob(x, cat=y)
>>> fun(torch.tensor(0.0), cat=torch.tensor([0.0, 0.5, 0.1]))
tensor(-3.8058)
```
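Because the result is an ordinary torch function, it also composes with autograd (assuming, as here, the model is built from differentiable torch ops). A hedged sketch, not part of the original examples, reusing the single normal variable `x` from above:

```python
# Hedged sketch: differentiate the log probability with respect to the value of x.
fun_x = ancestor_log_prob(x)
x_val = torch.tensor(0.5, requires_grad=True)
fun_x(x_val).backward()
print(x_val.grad)  # d/dx log N(x | 0, 1) = -x, so this should print tensor(-0.5000)
```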