diff --git a/tutorials/A Bottom-Up Introduction to Gen.ipynb b/tutorials/A Bottom-Up Introduction to Gen.ipynb
index 835d1ba..a7ead9c 100644
--- a/tutorials/A Bottom-Up Introduction to Gen.ipynb
+++ b/tutorials/A Bottom-Up Introduction to Gen.ipynb
@@ -2170,7 +2170,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "The ability to *trace* the values of random choices in a probabilistic program (i.e. record the value of each choice in a trace data structure) is one of the basic features of Gen's built-in modeling language. To write a function in this language we use the `@gen` macro provided by Gen. Note that the built-in modeling language is just one way of defining a [generative function](https://probcomp.github.io/Gen/dev/ref/distributions/).\n",
+ "The ability to *trace* the values of random choices in a probabilistic program (i.e. record the value of each choice in a trace data structure) is one of the basic features of Gen's built-in modeling language. To write a function in this language we use the `@gen` macro provided by Gen. Note that the built-in modeling language is just one way of defining a [generative function](https://www.gen.dev/docs/stable/ref/distributions/).\n",
"\n",
"Below, we write a `@gen function` version of the function `f` defined above, this time using Gen's tracing instead of our own:"
]
@@ -2225,7 +2225,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "To run a `@gen` function and get a trace of the execution, we use the [`simulate`](https://probcomp.github.io/Gen/dev/ref/gfi/#Gen.simulate) method:"
+ "To run a `@gen` function and get a trace of the execution, we use the [`simulate`](https://www.gen.dev/docs/stable/ref/gfi/#Gen.simulate) method:"
]
},
{
@@ -2715,7 +2715,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "We can also get the log probability that an individual trace would be generated by the function ($\\log p(t; x)$), using the [`get_score`](https://probcomp.github.io/Gen/dev/ref/gfi/#Gen.get_score) method.\n",
+ "We can also get the log probability that an individual trace would be generated by the function ($\\log p(t; x)$), using the [`get_score`](https://www.gen.dev/docs/stable/ref/gfi/#Gen.get_score) method.\n",
"\n",
"Let's generate a trace below, get its log probability with `get_score`"
]
@@ -2778,7 +2778,7 @@
" gen_f(0.3)\n",
" ```\n",
"\n",
- "2. Using the [`simulate`](https://probcomp.github.io/Gen/dev/ref/gfi/#Gen.simulate) method:\n",
+ "2. Using the [`simulate`](https://www.gen.dev/docs/stable/ref/gfi/#Gen.simulate) method:\n",
"\n",
" ```julia\n",
" trace = simulate(gen_f, (0.3,))\n",
@@ -2789,7 +2789,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "We can also generate a trace that satisfies a set of constraints on the valus of random choices using the [`generate`](https://probcomp.github.io/Gen/dev/ref/gfi/#Gen.generate) method. Suppose that we want a trace where `:a` is always `true` and `:c` is always `false`. We first construct a choice map containing these constraints:"
+ "We can also generate a trace that satisfies a set of constraints on the valus of random choices using the [`generate`](https://www.gen.dev/docs/stable/ref/gfi/#Gen.generate) method. Suppose that we want a trace where `:a` is always `true` and `:c` is always `false`. We first construct a choice map containing these constraints:"
]
},
{
@@ -3013,7 +3013,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "A more efficient and numerically robust implementation of importance resampling is provided in Gen's inference library (see [`importance_resampling`](https://probcomp.github.io/Gen/dev/ref/inference/#Gen.importance_resampling))."
+ "A more efficient and numerically robust implementation of importance resampling is provided in Gen's inference library (see [`importance_resampling`](https://www.gen.dev/docs/stable/ref/inference/#Gen.importance_resampling))."
]
},
{
@@ -3433,7 +3433,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "Now, we use the [`update`](https://probcomp.github.io/Gen/dev/ref/gfi/#Gen.update) method, to change the value of `:c` from `true` to `false`:"
+ "Now, we use the [`update`](https://www.gen.dev/docs/stable/ref/gfi/#Gen.update) method, to change the value of `:c` from `true` to `false`:"
]
},
{
diff --git a/tutorials/A Bottom-Up Introduction to Gen.jl b/tutorials/A Bottom-Up Introduction to Gen.jl
index e681115..97b4625 100644
--- a/tutorials/A Bottom-Up Introduction to Gen.jl
+++ b/tutorials/A Bottom-Up Introduction to Gen.jl
@@ -188,7 +188,7 @@ plot(map(p -> query(p, 14), [0.1, 0.5, 0.9])...)
# ## 2. Tracing the values of random choices in generative functions
-# The ability to *trace* the values of random choices in a probabilistic program (i.e. record the value of each choice in a trace data structure) is one of the basic features of Gen's built-in modeling language. To write a function in this language we use the `@gen` macro provided by Gen. Note that the built-in modeling language is just one way of defining a [generative function](https://probcomp.github.io/Gen/dev/ref/distributions/).
+# The ability to *trace* the values of random choices in a probabilistic program (i.e. record the value of each choice in a trace data structure) is one of the basic features of Gen's built-in modeling language. To write a function in this language we use the `@gen` macro provided by Gen. Note that the built-in modeling language is just one way of defining a [generative function](https://www.gen.dev/docs/stable/ref/distributions/).
#
# Below, we write a `@gen function` version of the function `f` defined above, this time using Gen's tracing instead of our own:
@@ -210,7 +210,7 @@ end;
gen_f(0.3)
-# To run a `@gen` function and get a trace of the execution, we use the [`simulate`](https://probcomp.github.io/Gen/dev/ref/gfi/#Gen.simulate) method:
+# To run a `@gen` function and get a trace of the execution, we use the [`simulate`](https://www.gen.dev/docs/stable/ref/gfi/#Gen.simulate) method:
using Gen: simulate
trace = simulate(gen_f, (0.3,));
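
# A quick, hedged look at what the trace records (values vary from run to run):
using Gen: get_args, get_retval, get_choices
get_args(trace)     # the arguments: (0.3,)
get_retval(trace)   # the return value of this execution of gen_f
get_choices(trace)  # the recorded random choices, keyed by address (e.g. :a)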
@@ -300,7 +300,7 @@ for prob_a in [0.1, 0.5, 0.9]
println("expected: $(prob_true(prob_a)), actual: $actual")
end
-# We can also get the log probability that an individual trace would be generated by the function ($\log p(t; x)$), using the [`get_score`](https://probcomp.github.io/Gen/dev/ref/gfi/#Gen.get_score) method.
+# We can also get the log probability that an individual trace would be generated by the function ($\log p(t; x)$), using the [`get_score`](https://www.gen.dev/docs/stable/ref/gfi/#Gen.get_score) method.
#
# Let's generate a trace below and get its log probability with `get_score`:
@@ -321,13 +321,13 @@ println("log probability: $(get_score(trace))")
# gen_f(0.3)
# ```
#
-# 2. Using the [`simulate`](https://probcomp.github.io/Gen/dev/ref/gfi/#Gen.simulate) method:
+# 2. Using the [`simulate`](https://www.gen.dev/docs/stable/ref/gfi/#Gen.simulate) method:
#
# ```julia
# trace = simulate(gen_f, (0.3,))
# ```
-# We can also generate a trace that satisfies a set of constraints on the valus of random choices using the [`generate`](https://probcomp.github.io/Gen/dev/ref/gfi/#Gen.generate) method. Suppose that we want a trace where `:a` is always `true` and `:c` is always `false`. We first construct a choice map containing these constraints:
+# We can also generate a trace that satisfies a set of constraints on the values of random choices using the [`generate`](https://www.gen.dev/docs/stable/ref/gfi/#Gen.generate) method. Suppose that we want a trace where `:a` is always `true` and `:c` is always `false`. We first construct a choice map containing these constraints:
# +
using Gen: choicemap
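
# For instance, a sketch of the constrained call; `generate` returns both the
# trace and a log importance weight:
using Gen: generate
constraints = choicemap((:a, true), (:c, false))
(trace, weight) = generate(gen_f, (0.3,), constraints);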
@@ -414,7 +414,7 @@ function my_importance_sampler(gen_fn, args, constraints, num_traces)
return traces[idx]
end;
-# A more efficient and numerically robust implementation of importance resampling is provided in Gen's inference library (see [`importance_resampling`](https://probcomp.github.io/Gen/dev/ref/inference/#Gen.importance_resampling)).
+# A more efficient and numerically robust implementation of importance resampling is provided in Gen's inference library (see [`importance_resampling`](https://www.gen.dev/docs/stable/ref/inference/#Gen.importance_resampling)).
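
# For reference, a hedged sketch of the library call for this kind of query,
# constraining `:c` and using 100 internal proposals:
using Gen: importance_resampling
(trace, log_ml_estimate) = importance_resampling(gen_f, (0.3,), choicemap((:c, false)), 100);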
# Suppose our goal is to sample `:a` and `:b` from the conditional distribution given that we have observed `:c` is `false`. That is, we want to sample a choice map $t$ with probability $0$ if $t(c) = \mbox{true}$ and otherwise probability:
#
@@ -473,7 +473,7 @@ plot(map(N -> importance_query(0.3, N), [1, 10, 100])...)
(trace, weight) = generate(foo, (0.3,), choicemap((:a, true), (:b, true), (:c, true)));
get_choices(trace)
-# Now, we use the [`update`](https://probcomp.github.io/Gen/dev/ref/gfi/#Gen.update) method, to change the value of `:c` from `true` to `false`:
+# Now, we use the [`update`](https://www.gen.dev/docs/stable/ref/gfi/#Gen.update) method to change the value of `:c` from `true` to `false`:
# +
using Gen: update, NoChange
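
# A sketch of the call: the arguments stay fixed (hence `NoChange`), and the
# constraints force `:c` to `false`:
(new_trace, weight, retdiff, discard) = update(trace, (0.3,), (NoChange(),), choicemap((:c, false)));
get_choices(new_trace)  # :c is now false; the other choices carry over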
diff --git a/tutorials/Data-Driven Proposals in Gen.ipynb b/tutorials/Data-Driven Proposals in Gen.ipynb
index f0fe191..83f9e12 100644
--- a/tutorials/Data-Driven Proposals in Gen.ipynb
+++ b/tutorials/Data-Driven Proposals in Gen.ipynb
@@ -1213,7 +1213,7 @@
"metadata": {},
"source": [
"The inference algorithm above used a variant of\n",
- "[`Gen.importance_resampling`](https://probcomp.github.io/Gen/dev/ref/importance/#Gen.importance_resampling)\n",
+ "[`Gen.importance_resampling`](https://www.gen.dev/docs/stable/ref/importance/#Gen.importance_resampling)\n",
"that does not take a custom proposal distribution. It uses the default\n",
"proposal distribution associated with the generative model. For generative\n",
"functions defined using the built-in modeling DSL, the default proposal\n",
@@ -1437,7 +1437,7 @@
"metadata": {},
"source": [
"We can propose values of random choices from the proposal function using\n",
- "[`Gen.propose`](https://probcomp.github.io/Gen/dev/ref/gfi/#Gen.propose).\n",
+ "[`Gen.propose`](https://www.gen.dev/docs/stable/ref/gfi/#Gen.propose).\n",
"This method returns the choices, as well as some other information, which we\n",
"won't need for our purposes. For now, you can think of `Gen.propose` as\n",
"similar to `Gen.generate` except that it does not produce a full execution\n",
@@ -1568,7 +1568,7 @@
"source": [
"We now use our data-driven proposal within an inference algorithm. There is a\n",
"second variant of\n",
- "[`Gen.importance_resampling`](https://probcomp.github.io/Gen/dev/ref/importance/#Gen.importance_resampling)\n",
+ "[`Gen.importance_resampling`](https://www.gen.dev/docs/stable/ref/importance/#Gen.importance_resampling)\n",
"that accepts a generative function representing a custom proposal. This\n",
"proposal generative function makes traced random choices at the addresses of\n",
"a subset of the unobserved random choices made by the generative model. In\n",
@@ -1593,7 +1593,7 @@
"\n",
"This time, use only 5 importance samples (`amt_computation`). You can run\n",
"`?Gen.importance_resampling` or check out the\n",
- "[documentation](https://probcomp.github.io/Gen/dev/ref/inference/#Importance-Sampling-1)\n",
+ "[documentation](https://www.gen.dev/docs/stable/ref/inference/#Importance-Sampling-1)\n",
"to understand how to supply the arguments to invoke this second version of of\n",
"importance resampling."
]
@@ -1948,7 +1948,7 @@
"metadata": {},
"source": [
"Finally, we use the\n",
- "[`Gen.train!`](https://probcomp.github.io/Gen/dev/ref/inference/#Gen.train!)\n",
+ "[`Gen.train!`](https://www.gen.dev/docs/stable/ref/inference/#Gen.train!)\n",
"method to actually do the training.\n",
"\n",
"For each epoch, `Gen.train!` makes `epoch_size` calls to the data-generator\n",
diff --git a/tutorials/Data-Driven Proposals in Gen.jl b/tutorials/Data-Driven Proposals in Gen.jl
index c360545..4860ea0 100644
--- a/tutorials/Data-Driven Proposals in Gen.jl
+++ b/tutorials/Data-Driven Proposals in Gen.jl
@@ -556,7 +556,7 @@ visualize_inference(measurements, scene_2doors, start, computation_amt=100, samp
# ## 2. Writing a data-driven proposal as a generative function
# The inference algorithm above used a variant of
-# [`Gen.importance_resampling`](https://probcomp.github.io/Gen/dev/ref/importance/#Gen.importance_resampling)
+# [`Gen.importance_resampling`](https://www.gen.dev/docs/stable/ref/importance/#Gen.importance_resampling)
# that does not take a custom proposal distribution. It uses the default
# proposal distribution associated with the generative model. For generative
# functions defined using the built-in modeling DSL, the default proposal
@@ -676,7 +676,7 @@ compute_bin_probs(num_y_bins, scene.ymin, scene.ymax, measurements[1].y, measure
end;
# We can propose values of random choices from the proposal function using
-# [`Gen.propose`](https://probcomp.github.io/Gen/dev/ref/gfi/#Gen.propose).
+# [`Gen.propose`](https://www.gen.dev/docs/stable/ref/gfi/#Gen.propose).
# This method returns the choices, as well as some other information, which we
# won't need for our purposes. For now, you can think of `Gen.propose` as
# similar to `Gen.generate` except that it does not produce a full execution
@@ -723,7 +723,7 @@ visualize_custom_destination_proposal(measurements, start, custom_dest_proposal,
# We now use our data-driven proposal within an inference algorithm. There is a
# second variant of
-# [`Gen.importance_resampling`](https://probcomp.github.io/Gen/dev/ref/importance/#Gen.importance_resampling)
+# [`Gen.importance_resampling`](https://www.gen.dev/docs/stable/ref/importance/#Gen.importance_resampling)
# that accepts a generative function representing a custom proposal. This
# proposal generative function makes traced random choices at the addresses of
# a subset of the unobserved random choices made by the generative model. In
@@ -742,7 +742,7 @@ visualize_custom_destination_proposal(measurements, start, custom_dest_proposal,
#
# This time, use only 5 importance samples (`amt_computation`). You can run
# `?Gen.importance_resampling` or check out the
-# [documentation](https://probcomp.github.io/Gen/dev/ref/inference/#Importance-Sampling-1)
+# [documentation](https://www.gen.dev/docs/stable/ref/inference/#Importance-Sampling-1)
# to understand how to supply the arguments to invoke this second version of
# importance resampling.
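
# A self-contained toy (not the tutorial's model) showing the call shape of
# this proposal-accepting variant:
using Gen
@gen function toy_model()
    x ~ normal(0, 1)
    y ~ normal(x, 0.1)
end
@gen function toy_proposal(y_obs::Float64)
    x ~ normal(y_obs, 0.2)  # a data-driven guess for :x
end
toy_observations = choicemap((:y, 1.5))
(tr, lml_est) = importance_resampling(toy_model, (), toy_observations,
                                      toy_proposal, (1.5,), 5);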
@@ -933,7 +933,7 @@ end;
update = Gen.ParamUpdate(Gen.FixedStepGradientDescent(0.001), custom_dest_proposal_trainable);
# Finally, we use the
-# [`Gen.train!`](https://probcomp.github.io/Gen/dev/ref/inference/#Gen.train!)
+# [`Gen.train!`](https://www.gen.dev/docs/stable/ref/inference/#Gen.train!)
# method to actually do the training.
#
# For each epoch, `Gen.train!` makes `epoch_size` calls to the data-generator
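
# A hedged sketch of the call; the hyperparameter values are placeholders, and
# `data_generator` stands for the function described above, which returns an
# (inputs, constraints) pair per call:
scores = Gen.train!(custom_dest_proposal_trainable, data_generator, update;
    num_epoch=10, epoch_size=100, num_minibatch=1, minibatch_size=100,
    verbose=true);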
diff --git a/tutorials/Introduction to Modeling in Gen.ipynb b/tutorials/Introduction to Modeling in Gen.ipynb
index 252c18d..ebb583b 100644
--- a/tutorials/Introduction to Modeling in Gen.ipynb
+++ b/tutorials/Introduction to Modeling in Gen.ipynb
@@ -225,7 +225,7 @@
"Generative functions are used to represent a variety of different types of\n",
"probabilistic computations including generative models, inference models,\n",
"custom proposal distributions, and variational approximations (see the [Gen\n",
- "documentation](https://probcomp.github.io/Gen/dev/ref/gfi/) or the \n",
+ "documentation](https://www.gen.dev/docs/stable/ref/gfi/) or the \n",
"[paper](https://dl.acm.org/doi/10.1145/3314221.3314642)). In this\n",
"tutorial,\n",
"we focus on implementing _generative models_. A generative model represents\n",
@@ -234,7 +234,7 @@
"\n",
"\n",
"The simplest way to construct a generative function is by using the [built-in\n",
- "modeling DSL](https://probcomp.github.io/Gen/dev/ref/modeling/). Generative\n",
+ "modeling DSL](https://www.gen.dev/docs/stable/ref/modeling/). Generative\n",
"functions written in the built-in modeling DSL are based on Julia function\n",
"definition syntax, but are prefixed with the `@gen` macro:\n",
"\n",
@@ -425,7 +425,7 @@
"Although the random choices are not included in the return value, they *are*\n",
"included in the *execution trace* of the generative function. We can run the\n",
"generative function and obtain its trace using the [`\n",
- "simulate`](https://probcomp.github.io/Gen/dev/ref/gfi/#Gen.simulate) method\n",
+ "simulate`](https://www.gen.dev/docs/stable/ref/gfi/#Gen.simulate) method\n",
"from the Gen API:"
]
},
@@ -3228,10 +3228,10 @@
"x-coordinates by adding noise to the value of the wave at each x-coordinate.\n",
"Use a `gamma(1, 1)` prior distribution for the period, and a `gamma(1, 1)`\n",
"prior distribution on the amplitude (see\n",
- "[`Gen.gamma`](https://probcomp.github.io/Gen/dev/ref/distributions/#Gen.gamma)).\n",
+ "[`Gen.gamma`](https://www.gen.dev/docs/stable/ref/distributions/#Gen.gamma)).\n",
"Sampling from a Gamma distribution will ensure to give us postive real values.\n",
"Use a uniform distribution between 0 and $2\\pi$ for the phase (see\n",
- "[`Gen.uniform`](https://probcomp.github.io/Gen/dev/ref/distributions/#Gen.uniform)).\n",
+ "[`Gen.uniform`](https://www.gen.dev/docs/stable/ref/distributions/#Gen.uniform)).\n",
"\n",
"The sine wave should implement:\n",
"\n",
diff --git a/tutorials/Introduction to Modeling in Gen.jl b/tutorials/Introduction to Modeling in Gen.jl
index 8a577af..3a53835 100644
--- a/tutorials/Introduction to Modeling in Gen.jl
+++ b/tutorials/Introduction to Modeling in Gen.jl
@@ -120,7 +120,7 @@ typeof("foo")
# Generative functions are used to represent a variety of different types of
# probabilistic computations including generative models, inference models,
# custom proposal distributions, and variational approximations (see the [Gen
-# documentation](https://probcomp.github.io/Gen/dev/ref/gfi/) or the
+# documentation](https://www.gen.dev/docs/stable/ref/gfi/) or the
# [paper](https://dl.acm.org/doi/10.1145/3314221.3314642)). In this
# tutorial,
# we focus on implementing _generative models_. A generative model represents
@@ -129,7 +129,7 @@ typeof("foo")
#
#
# The simplest way to construct a generative function is by using the [built-in
-# modeling DSL](https://probcomp.github.io/Gen/dev/ref/modeling/). Generative
+# modeling DSL](https://www.gen.dev/docs/stable/ref/modeling/). Generative
# functions written in the built-in modeling DSL are based on Julia function
# definition syntax, but are prefixed with the `@gen` macro:
#
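# For example, a minimal sketch of the syntax (a toy function, not this
# tutorial's model):
@gen function flip_pair(p::Float64)
    a ~ bernoulli(p)   # a traced random choice at address :a
    b ~ bernoulli(p)   # a second traced choice at address :b
    return a && b
end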
@@ -268,7 +268,7 @@ y = line_model(xs)
# Although the random choices are not included in the return value, they *are*
# included in the *execution trace* of the generative function. We can run the
# generative function and obtain its trace using the [`
-# simulate`](https://probcomp.github.io/Gen/dev/ref/gfi/#Gen.simulate) method
+# simulate`](https://www.gen.dev/docs/stable/ref/gfi/#Gen.simulate) method
# from the Gen API:
trace = Gen.simulate(line_model, (xs,));
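
# A quick hedged look at the recorded execution:
Gen.get_args(trace)     # (xs,)
Gen.get_retval(trace)   # the return value of this run
Gen.get_choices(trace)  # the random choices, keyed by address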
@@ -354,10 +354,10 @@ grid(render_trace, traces)
# x-coordinates by adding noise to the value of the wave at each x-coordinate.
# Use a `gamma(1, 1)` prior distribution for the period, and a `gamma(1, 1)`
# prior distribution on the amplitude (see
-# [`Gen.gamma`](https://probcomp.github.io/Gen/dev/ref/distributions/#Gen.gamma)).
+# [`Gen.gamma`](https://www.gen.dev/docs/stable/ref/distributions/#Gen.gamma)).
# Sampling from a Gamma distribution ensures positive real values.
# Use a uniform distribution between 0 and $2\pi$ for the phase (see
-# [`Gen.uniform`](https://probcomp.github.io/Gen/dev/ref/distributions/#Gen.uniform)).
+# [`Gen.uniform`](https://www.gen.dev/docs/stable/ref/distributions/#Gen.uniform)).
#
# The sine wave should implement:
#
diff --git a/tutorials/Iterative inference in Gen.ipynb b/tutorials/Iterative inference in Gen.ipynb
index 2c6d811..c6f1f72 100644
--- a/tutorials/Iterative inference in Gen.ipynb
+++ b/tutorials/Iterative inference in Gen.ipynb
@@ -3732,7 +3732,7 @@
"`is_outlier` score to the most likely possibility. We can do this by\n",
"iterating over both possible traces, scoring them, and choosing the one with\n",
"the higher score. We can do this using Gen's\n",
- "[`update`](https://probcomp.github.io/Gen/dev/ref/gfi/#Update-1) function,\n",
+ "[`update`](https://www.gen.dev/docs/stable/ref/gfi/#Update-1) function,\n",
"which allows us to manually update a trace to satisfy some constraints:"
]
},
diff --git a/tutorials/Iterative inference in Gen.jl b/tutorials/Iterative inference in Gen.jl
index fdbb492..9b8f515 100644
--- a/tutorials/Iterative inference in Gen.jl
+++ b/tutorials/Iterative inference in Gen.jl
@@ -917,7 +917,7 @@ gif(viz)
# `is_outlier` score to the most likely possibility. We can do this by
# iterating over both possible traces, scoring them, and choosing the one with
# the higher score. To do this, we use Gen's
-# [`update`](https://probcomp.github.io/Gen/dev/ref/gfi/#Update-1) function,
+# [`update`](https://www.gen.dev/docs/stable/ref/gfi/#Update-1) function,
# which allows us to manually update a trace to satisfy some constraints:
function is_outlier_map_update(tr)
diff --git a/tutorials/Reasoning About Regenerate.ipynb b/tutorials/Reasoning About Regenerate.ipynb
index 57b714a..0458f40 100644
--- a/tutorials/Reasoning About Regenerate.ipynb
+++ b/tutorials/Reasoning About Regenerate.ipynb
@@ -6,7 +6,7 @@
"source": [
"# Reasoning About Regenerate\n",
"\n",
- "Gen provides a primitive called [`regenerate`](https://probcomp.github.io/Gen/dev/ref/gfi/#Regenerate-1) that allows users to ask for certain random choices in a trace to be re-generated from scratch. `regenerate` is the basis of one variant of the [`metropolis_hastings`](https://probcomp.github.io/Gen/dev/ref/inference/#Gen.metropolis_hastings) operator in Gen's inference library.\n",
+ "Gen provides a primitive called [`regenerate`](https://www.gen.dev/docs/stable/ref/gfi/#Regenerate-1) that allows users to ask for certain random choices in a trace to be re-generated from scratch. `regenerate` is the basis of one variant of the [`metropolis_hastings`](https://www.gen.dev/docs/stable/ref/inference/#Gen.metropolis_hastings) operator in Gen's inference library.\n",
"\n",
"This notebook aims to help you understand the computation that `regenerate` is performing."
]
@@ -108,7 +108,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "Note that unlike [`update`](https://probcomp.github.io/Gen/dev/ref/gfi/#Gen.update), we do not provide the new values for the random choices that we want to change. Instead, we simply pass in a [selection](https://probcomp.github.io/Gen/dev/ref/selections/#Selections-1) indicating the addresses that we want to propose new values for.\n",
+ "Note that unlike [`update`](https://www.gen.dev/docs/stable/ref/gfi/#Gen.update), we do not provide the new values for the random choices that we want to change. Instead, we simply pass in a [selection](https://www.gen.dev/docs/stable/ref/selections/#Selections-1) indicating the addresses that we want to propose new values for.\n",
"\n",
"Note that `select(:a)` is equivalent to:\n",
"```julia\n",
@@ -160,7 +160,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "What distribution is `regenerate` sampling the selected values from? It turns out that `regenerate` is using the [*internal proposal distribution family*](https://probcomp.github.io/Gen/dev/ref/gfi/#.-Internal-proposal-distribution-family-1) $q(t; x, u)$, just like like `generate`. Recall that for `@gen` functions, the internal proposal distribution is based on *ancestral sampling*. But whereas `generate` was given the expicit choice map of constraints ($u$) as an argument, `regenerate` constructs $u$ by starting with the previous trace $t$ and then removing any selected addresses. In other words, `regenerate` is like `generate`, but where the constraints are the choices made in the previous trace less the selected choices.\n",
+ "What distribution is `regenerate` sampling the selected values from? It turns out that `regenerate` is using the [*internal proposal distribution family*](https://www.gen.dev/docs/stable/ref/gfi/#.-Internal-proposal-distribution-family-1) $q(t; x, u)$, just like like `generate`. Recall that for `@gen` functions, the internal proposal distribution is based on *ancestral sampling*. But whereas `generate` was given the expicit choice map of constraints ($u$) as an argument, `regenerate` constructs $u$ by starting with the previous trace $t$ and then removing any selected addresses. In other words, `regenerate` is like `generate`, but where the constraints are the choices made in the previous trace less the selected choices.\n",
"\n",
"We can make this concrete. Let us start with a deterministic trace again:"
]
diff --git a/tutorials/Reasoning About Regenerate.jl b/tutorials/Reasoning About Regenerate.jl
index 234ab02..25b0c8f 100644
--- a/tutorials/Reasoning About Regenerate.jl
+++ b/tutorials/Reasoning About Regenerate.jl
@@ -15,7 +15,7 @@
# # Reasoning About Regenerate
#
-# Gen provides a primitive called [`regenerate`](https://probcomp.github.io/Gen/dev/ref/gfi/#Regenerate-1) that allows users to ask for certain random choices in a trace to be re-generated from scratch. `regenerate` is the basis of one variant of the [`metropolis_hastings`](https://probcomp.github.io/Gen/dev/ref/inference/#Gen.metropolis_hastings) operator in Gen's inference library.
+# Gen provides a primitive called [`regenerate`](https://www.gen.dev/docs/stable/ref/gfi/#Regenerate-1) that allows users to ask for certain random choices in a trace to be re-generated from scratch. `regenerate` is the basis of one variant of the [`metropolis_hastings`](https://www.gen.dev/docs/stable/ref/inference/#Gen.metropolis_hastings) operator in Gen's inference library.
#
# This notebook aims to help you understand the computation that `regenerate` is performing.
@@ -61,7 +61,7 @@ trace, weight = generate(foo, (0.3,), choicemap((:a, true), (:b, false), (:c, tr
using Gen: regenerate, select, NoChange
(trace, weight, retdiff) = regenerate(trace, (0.3,), (NoChange(),), select(:a));
-# Note that unlike [`update`](https://probcomp.github.io/Gen/dev/ref/gfi/#Gen.update), we do not provide the new values for the random choices that we want to change. Instead, we simply pass in a [selection](https://probcomp.github.io/Gen/dev/ref/selections/#Selections-1) indicating the addresses that we want to propose new values for.
+# Note that unlike [`update`](https://www.gen.dev/docs/stable/ref/gfi/#Gen.update), we do not provide the new values for the random choices that we want to change. Instead, we simply pass in a [selection](https://www.gen.dev/docs/stable/ref/selections/#Selections-1) indicating the addresses that we want to propose new values for.
#
# Note that `select(:a)` is equivalent to:
# ```julia
@@ -79,7 +79,7 @@ get_choices(trace)
# Re-run the regenerate command until you get a trace where `a` is `false`. Note that the address `b` doesn't appear in the resulting trace. Then, run the command again until you get a trace where `a` is `true`. Note that now there is a value for `b`. This value of `b` was sampled along with the new value for `a`---`regenerate` will regenerate new values for the selected addresses, but also any new addresses that may be introduced as a consequence of stochastic control flow.
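
# An aside: since `regenerate` underlies the selection-based variant of
# Metropolis-Hastings, the same selection can drive an MH move directly
# (a hedged sketch, not part of the original notebook):
using Gen: metropolis_hastings
(trace, accepted) = metropolis_hastings(trace, select(:a));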
-# What distribution is `regenerate` sampling the selected values from? It turns out that `regenerate` is using the [*internal proposal distribution family*](https://probcomp.github.io/Gen/dev/ref/gfi/#.-Internal-proposal-distribution-family-1) $q(t; x, u)$, just like like `generate`. Recall that for `@gen` functions, the internal proposal distribution is based on *ancestral sampling*. But whereas `generate` was given the expicit choice map of constraints ($u$) as an argument, `regenerate` constructs $u$ by starting with the previous trace $t$ and then removing any selected addresses. In other words, `regenerate` is like `generate`, but where the constraints are the choices made in the previous trace less the selected choices.
+# What distribution is `regenerate` sampling the selected values from? It turns out that `regenerate` is using the [*internal proposal distribution family*](https://www.gen.dev/docs/stable/ref/gfi/#.-Internal-proposal-distribution-family-1) $q(t; x, u)$, just like `generate`. Recall that for `@gen` functions, the internal proposal distribution is based on *ancestral sampling*. But whereas `generate` was given the explicit choice map of constraints ($u$) as an argument, `regenerate` constructs $u$ by starting with the previous trace $t$ and then removing any selected addresses. In other words, `regenerate` is like `generate`, but where the constraints are the choices made in the previous trace less the selected choices.
#
# We can make this concrete. Let us start with a deterministic trace again:
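
# For instance, constraining every choice yields a fully determined trace
# (a hedged sketch):
trace, weight = generate(foo, (0.3,), choicemap((:a, true), (:b, true), (:c, true)));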
diff --git a/tutorials/Scaling with Combinators and the Static Modeling Language.ipynb b/tutorials/Scaling with Combinators and the Static Modeling Language.ipynb
index 98e5138..2f2c592 100644
--- a/tutorials/Scaling with Combinators and the Static Modeling Language.ipynb
+++ b/tutorials/Scaling with Combinators and the Static Modeling Language.ipynb
@@ -335,7 +335,7 @@
"\n",
"However, it should be possible for the algorithm to scale linearly in the number of data points. Briefly, deciding whether to update a given `is_outlier` variable can be done without referencing the other data points. This is because each `is_outiler` variable is conditionally independent of the outlier variables and y-coordinates of the other data points, conditioned on the parameters.\n",
"\n",
- "We can make this conditional independence structure explicit using the [Map generative function combinator](https://probcomp.github.io/Gen/dev/ref/combinators/#Map-combinator-1). Combinators like map encapsulate common modeling patterns (e.g., a loop in which each iteration is making independent choices), and when you use them, Gen can take advantage of the restrictions they enforce to implement performance optimizations automatically during inference. The `Map` combinator, like the `map` function in a functional programming language, helps to execute the same generative code repeatedly. "
+ "We can make this conditional independence structure explicit using the [Map generative function combinator](https://www.gen.dev/docs/stable/ref/combinators/#Map-combinator-1). Combinators like map encapsulate common modeling patterns (e.g., a loop in which each iteration is making independent choices), and when you use them, Gen can take advantage of the restrictions they enforce to implement performance optimizations automatically during inference. The `Map` combinator, like the `map` function in a functional programming language, helps to execute the same generative code repeatedly. "
]
},
{
@@ -372,7 +372,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "We then apply the [`Map`](https://probcomp.github.io/Gen/dev/ref/combinators/#Map-combinator-1), which is a Julia function, to this generative function, to obtain a new generative function:"
+ "We then apply the [`Map`](https://www.gen.dev/docs/stable/ref/combinators/#Map-combinator-1), which is a Julia function, to this generative function, to obtain a new generative function:"
]
},
{
@@ -746,7 +746,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "In order to provide `generate_all_points` with the knowledge that its arguments do not change during an update to the `is_outlier` variable, we need to write the top-level model generative function that calls `generate_all_points` in the [Static Modeling Language](https://probcomp.github.io/Gen/dev/ref/modeling/#Static-Modeling-Language-1), which is a restricted variant of the built-in modeling language that uses static analysis of the computation graph to generate specialized trace data structures and specialized implementations of trace operations. We indicate that a function is to be interpreted using the static language using the `static` annotation:"
+ "In order to provide `generate_all_points` with the knowledge that its arguments do not change during an update to the `is_outlier` variable, we need to write the top-level model generative function that calls `generate_all_points` in the [Static Modeling Language](https://www.gen.dev/docs/stable/ref/modeling/#Static-Modeling-Language-1), which is a restricted variant of the built-in modeling language that uses static analysis of the computation graph to generate specialized trace data structures and specialized implementations of trace operations. We indicate that a function is to be interpreted using the static language using the `static` annotation:"
]
},
{
diff --git a/tutorials/Scaling with Combinators and the Static Modeling Language.jl b/tutorials/Scaling with Combinators and the Static Modeling Language.jl
index 1fe3176..5f5e991 100644
--- a/tutorials/Scaling with Combinators and the Static Modeling Language.jl
+++ b/tutorials/Scaling with Combinators and the Static Modeling Language.jl
@@ -140,7 +140,7 @@ plot(ns, times, xlabel="number of data points", ylabel="running time (seconds)",
#
# However, it should be possible for the algorithm to scale linearly in the number of data points. Briefly, deciding whether to update a given `is_outlier` variable can be done without referencing the other data points. This is because each `is_outlier` variable is conditionally independent of the outlier variables and y-coordinates of the other data points, conditioned on the parameters.
#
-# We can make this conditional independence structure explicit using the [Map generative function combinator](https://probcomp.github.io/Gen/dev/ref/combinators/#Map-combinator-1). Combinators like map encapsulate common modeling patterns (e.g., a loop in which each iteration is making independent choices), and when you use them, Gen can take advantage of the restrictions they enforce to implement performance optimizations automatically during inference. The `Map` combinator, like the `map` function in a functional programming language, helps to execute the same generative code repeatedly.
+# We can make this conditional independence structure explicit using the [Map generative function combinator](https://www.gen.dev/docs/stable/ref/combinators/#Map-combinator-1). Combinators like map encapsulate common modeling patterns (e.g., a loop in which each iteration is making independent choices), and when you use them, Gen can take advantage of the restrictions they enforce to implement performance optimizations automatically during inference. The `Map` combinator, like the `map` function in a functional programming language, helps to execute the same generative code repeatedly.
# ## 2. Introducing the map combinator
@@ -155,7 +155,7 @@ plot(ns, times, xlabel="number of data points", ylabel="running time (seconds)",
return y
end;
-# We then apply the [`Map`](https://probcomp.github.io/Gen/dev/ref/combinators/#Map-combinator-1), which is a Julia function, to this generative function, to obtain a new generative function:
+# We then apply the [`Map`](https://www.gen.dev/docs/stable/ref/combinators/#Map-combinator-1), which is a Julia function, to this generative function, to obtain a new generative function:
generate_all_points = Map(generate_single_point);
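
# A self-contained toy (separate from the model above) illustrating what `Map`
# produces: the new generative function takes one vector argument per kernel
# argument, applies the kernel once per element, and namespaces each
# application's choices under its integer index:
@gen function add_noise(mu::Float64)
    y ~ normal(mu, 0.1)
    return y
end
add_noise_to_all = Map(add_noise)
tr = simulate(add_noise_to_all, ([0.0, 1.0, 2.0],))
get_choices(tr)  # choices at addresses 1 => :y, 2 => :y, 3 => :y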
@@ -217,7 +217,7 @@ plot!(ns, with_map_times, label="with map")
# ## 3. Combining the map combinator with the static modeling language
-# In order to provide `generate_all_points` with the knowledge that its arguments do not change during an update to the `is_outlier` variable, we need to write the top-level model generative function that calls `generate_all_points` in the [Static Modeling Language](https://probcomp.github.io/Gen/dev/ref/modeling/#Static-Modeling-Language-1), which is a restricted variant of the built-in modeling language that uses static analysis of the computation graph to generate specialized trace data structures and specialized implementations of trace operations. We indicate that a function is to be interpreted using the static language using the `static` annotation:
+# In order to provide `generate_all_points` with the knowledge that its arguments do not change during an update to the `is_outlier` variable, we need to write the top-level model generative function that calls `generate_all_points` in the [Static Modeling Language](https://www.gen.dev/docs/stable/ref/modeling/#Static-Modeling-Language-1), which is a restricted variant of the built-in modeling language that uses static analysis of the computation graph to generate specialized trace data structures and specialized implementations of trace operations. We indicate that a function is to be interpreted using the static language by adding the `static` annotation:
@gen (static) function static_model_with_map(xs::Vector{Float64})
slope ~ normal(0, 2)
diff --git a/tutorials/tensorflow/Modeling with TensorFlow Code.ipynb b/tutorials/tensorflow/Modeling with TensorFlow Code.ipynb
index 0a8c49a..fa45e44 100644
--- a/tutorials/tensorflow/Modeling with TensorFlow Code.ipynb
+++ b/tutorials/tensorflow/Modeling with TensorFlow Code.ipynb
@@ -6,7 +6,7 @@
"source": [
"# Modeling with TensorFlow code\n",
"\n",
- "So far, we have seen generative functions that are defined only using the built-in modeling language, which uses the `@gen` keyword. However, Gen can also be extended with other modeling languages, as long as they produce generative functions that implement the [Generative Function Interface](https://probcomp.github.io/Gen/dev/ref/gfi/). The [GenTF](https://github.com/probcomp/GenTF) Julia package provides one such modeling language which allow generative functions to be constructed from user-defined TensorFlow computation graphs. Generative functions written in the built-in language can invoke generative functions defined using the GenTF language.\n",
+ "So far, we have seen generative functions that are defined only using the built-in modeling language, which uses the `@gen` keyword. However, Gen can also be extended with other modeling languages, as long as they produce generative functions that implement the [Generative Function Interface](https://www.gen.dev/docs/stable/ref/gfi/). The [GenTF](https://github.com/probcomp/GenTF) Julia package provides one such modeling language which allow generative functions to be constructed from user-defined TensorFlow computation graphs. Generative functions written in the built-in language can invoke generative functions defined using the GenTF language.\n",
"\n",
"This notebook shows how to write a generative function in the GenTF language, how to invoke a GenTF generative function from a `@gen` function, and how to perform basic supervised training of a generative function. Specifically, we will train a softmax regression conditional inference model to generate the label of an MNIST digit given the pixels. Later tutorials will show how to use deep learning and TensorFlow to accelerate inference in generative models, using ideas from \"amortized inference\"."
]
@@ -367,7 +367,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
- "Now, we train the trainable parameters of the `tf_softmax_model` generative function (`W` and `b`) on the MNIST traing data. Note that these parameters are stored as the state of the TensorFlow variables. We will use the [`Gen.train!`](https://probcomp.github.io/Gen/dev/ref/inference/#Gen.train!) method, which supports supervised training of generative functions using stochastic gradient opimization methods. In particular, this method takes the generative function to be trained (`digit_model`), a Julia function of no arguments that generates a batch of training data, and the update to apply to the trainable parameters."
+ "Now, we train the trainable parameters of the `tf_softmax_model` generative function (`W` and `b`) on the MNIST traing data. Note that these parameters are stored as the state of the TensorFlow variables. We will use the [`Gen.train!`](https://www.gen.dev/docs/stable/ref/inference/#Gen.train!) method, which supports supervised training of generative functions using stochastic gradient opimization methods. In particular, this method takes the generative function to be trained (`digit_model`), a Julia function of no arguments that generates a batch of training data, and the update to apply to the trainable parameters."
]
},
{
diff --git a/tutorials/tensorflow/Modeling with TensorFlow Code.jl b/tutorials/tensorflow/Modeling with TensorFlow Code.jl
index 7251182..b5d6b61 100644
--- a/tutorials/tensorflow/Modeling with TensorFlow Code.jl
+++ b/tutorials/tensorflow/Modeling with TensorFlow Code.jl
@@ -15,7 +15,7 @@
# # Modeling with TensorFlow code
#
-# So far, we have seen generative functions that are defined only using the built-in modeling language, which uses the `@gen` keyword. However, Gen can also be extended with other modeling languages, as long as they produce generative functions that implement the [Generative Function Interface](https://probcomp.github.io/Gen/dev/ref/gfi/). The [GenTF](https://github.com/probcomp/GenTF) Julia package provides one such modeling language which allow generative functions to be constructed from user-defined TensorFlow computation graphs. Generative functions written in the built-in language can invoke generative functions defined using the GenTF language.
+# So far, we have seen generative functions that are defined only using the built-in modeling language, which uses the `@gen` keyword. However, Gen can also be extended with other modeling languages, as long as they produce generative functions that implement the [Generative Function Interface](https://www.gen.dev/docs/stable/ref/gfi/). The [GenTF](https://github.com/probcomp/GenTF) Julia package provides one such modeling language, which allows generative functions to be constructed from user-defined TensorFlow computation graphs. Generative functions written in the built-in language can invoke generative functions defined using the GenTF language.
#
# This notebook shows how to write a generative function in the GenTF language, how to invoke a GenTF generative function from a `@gen` function, and how to perform basic supervised training of a generative function. Specifically, we will train a softmax regression conditional inference model to generate the label of an MNIST digit given the pixels. Later tutorials will show how to use deep learning and TensorFlow to accelerate inference in generative models, using ideas from "amortized inference".
@@ -115,7 +115,7 @@ Gen.get_choices(trace)
include("mnist.jl")
training_data_loader = MNISTTrainDataLoader();
-# Now, we train the trainable parameters of the `tf_softmax_model` generative function (`W` and `b`) on the MNIST traing data. Note that these parameters are stored as the state of the TensorFlow variables. We will use the [`Gen.train!`](https://probcomp.github.io/Gen/dev/ref/inference/#Gen.train!) method, which supports supervised training of generative functions using stochastic gradient opimization methods. In particular, this method takes the generative function to be trained (`digit_model`), a Julia function of no arguments that generates a batch of training data, and the update to apply to the trainable parameters.
+# Now, we train the trainable parameters of the `tf_softmax_model` generative function (`W` and `b`) on the MNIST training data. Note that these parameters are stored as the state of the TensorFlow variables. We will use the [`Gen.train!`](https://www.gen.dev/docs/stable/ref/inference/#Gen.train!) method, which supports supervised training of generative functions using stochastic gradient optimization methods. In particular, this method takes the generative function to be trained (`digit_model`), a Julia function of no arguments that generates a batch of training data, and the update to apply to the trainable parameters.
# The `ParamUpdate` constructor takes the type of update to perform (in this case a gradient descent update with step size 0.00001) and a specification of which trainable parameters should be updated. Here, we request that the `W` and `b` trainable parameters of the `tf_softmax_model` generative function should be trained.
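
# A hedged sketch of that constructor, assuming `W` and `b` are the TensorFlow
# variable objects used by `tf_softmax_model`:
update = Gen.ParamUpdate(Gen.FixedStepGradientDescent(0.00001),
                         tf_softmax_model => [W, b]);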