Merge pull request #33 from ndeutschmann/v0.3_release
v0.3 release
ndeutschmann authored Oct 1, 2021
2 parents 1a75e00 + fd8c471 commit 7656a78
Showing 147 changed files with 10,960 additions and 271 deletions.
7 changes: 7 additions & 0 deletions doc_build/benchmarks_api/utils.benchmark.benchmark_time.rst
@@ -0,0 +1,7 @@
utils.benchmark.benchmark\_time module
======================================

.. automodule:: utils.benchmark.benchmark_time
:members:
:undoc-members:
:show-inheritance:
1 change: 1 addition & 0 deletions doc_build/benchmarks_api/utils.benchmark.rst
@@ -14,6 +14,7 @@ utils.benchmark package
.. toctree::
:maxdepth: 1

utils.benchmark.benchmark_time
utils.benchmark.benchmarker
utils.benchmark.known_integrand_benchmarks
utils.benchmark.vegas_benchmarks
6 changes: 5 additions & 1 deletion doc_build/library/tutorial/config.rst
@@ -47,7 +47,11 @@ flows, the number of bins. It is also possible to choose either a `checkerboard`
should be used.

For the purpose of training, either a `variance` or `dkl` loss can be specified.
For the DKL loss, it is possible to also request the survey strategy `adaptive_dkl`.
Besides the default `flat` survey strategy, the `forward` and
`forward_flat_int` survey strategies are also available. For fixed samples, the `fixed_sample` survey
strategy creates a :doc:`Fixed Sample Integrator </api/zunis.integration.fixed_sample_integrator>`.
Specifically for the variance/DKL loss,
the survey strategies `adaptive_variance`/`adaptive_dkl` are provided.
`n_iter` refers to the number of iterations, whereas `n_points_survey` defines the
number of points used per iteration for the survey stage; the same can be defined
for the refine stage too.
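A minimal sketch of how these options come together, assuming the `Integrator` wrapper accepts `n_iter` as a keyword and `'adaptive_variance'` as a `survey_strategy` value, as the text above suggests:

.. code-block:: python

    import torch
    from zunis.integration import Integrator

    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    d = 2

    def f(x):
        # toy integrand used in the tutorials
        return x[:, 0] ** 2 + x[:, 1] ** 2

    # Survey strategy and point counts as described above; the loss itself
    # ("variance"/"dkl") is configured through the trainer/config file.
    integrator = Integrator(d=d, f=f, device=device,
                            survey_strategy="adaptive_variance",
                            n_iter=10, n_points_survey=10000)
    result, uncertainty, history = integrator.integrate()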
46 changes: 21 additions & 25 deletions doc_build/library/tutorial/preeval.rst
@@ -6,15 +6,16 @@ useful when fine-tuning integration parameters for a function that is very costly

The functionality for using pre-evaluated samples is provided by the
:doc:`Fixed Sample Integrator </api/zunis.integration.fixed_sample_integrator>`.
This integrator is accessible when using config files by choosing the survey strategy
`fixed_sample`.

Starting from the basic example, one can train on a sample defined as a
PyTorch tensor:

.. code-block:: python
import torch
from zunis.integration.fixed_sample_integrator import FixedSampleSurveyIntegrator
from zunis.training.weighted_dataset.stateful_trainer import StatefulTrainer
from zunis.integration import Integrator
device = torch.device("cuda")
@@ -23,15 +24,14 @@ PyTorch tensor:
def f(x):
return x[:,0]**2 + x[:,1]**2
trainer = StatefulTrainer(d=d, loss="variance", flow="pwquad", device=device)
integrator = FixedSampleSurveyIntegrator(f,trainer, device=device, n_points_survey=5)
integrator = Integrator(d=d, f=f, survey_strategy='fixed_sample', device=device, n_points_survey=1000)
n_points = 100
n_points = 1000
# Uniformly sampled points
x = torch.rand(n_points,d,device=device)
# x.shape = (n_points,d)
px = torch.ones(n_points)
px = torch.ones(n_points, device=device)
# px.shape = (n_points,)
# Function values
@@ -55,26 +55,26 @@ batch of the same structure:
import torch
import pickle
from zunis.integration.fixed_sample_integrator import FixedSampleSurveyIntegrator
from zunis.training.weighted_dataset.stateful_trainer import StatefulTrainer
from zunis.integration import Integrator
device = torch.device("cuda")
d = 2
def f(x):
return x[:,0]**2 + x[:,1]**2
return x[:,0]**2 + x[:,1]**2
trainer = StatefulTrainer(d=d, loss="variance", flow="pwquad", device="cuda")
integrator = FixedSampleSurveyIntegrator(f,trainer, device=device, n_points_survey=5)
integrator = Integrator(d=d, f=f, survey_strategy='fixed_sample', device=device, n_points_survey=1000)
data_x=[[0,4],[1,3],[2,2],[3,1],[4,0]]
data_px=[1.0,1.0,1.0,1.0,1.0]
data_x = torch.rand(1000,d,device=device)
#[[0.2093, 0.9918],[0.3216, 0.6965],[0.0625, 0.5634],...]
data_px = torch.ones(1000)
#[1.0,1.0,1.0...]
sample=(torch.tensor(data_x, device="cuda"),torch.tensor(data_px, device="cuda"),f(torch.tensor(data_x, device="cuda")))
sample=(data_x.clone().detach(),data_px.clone().detach(),f(data_x.clone().detach()))
pickle.dump(sample, open("sample.p","wb"))
integrator.set_sample_pickle("sample.p",device="cuda")
integrator.set_sample_pickle("sample.p",device=device)
result, uncertainty, history = integrator.integrate()
Finally, it is also possible to provide samples as a `.csv` file. This
@@ -85,11 +85,10 @@ For the above example, the `.csv` file would look like:

.. code-block:: python
0, 4, 1, 16
1, 3, 1, 10
2, 2, 1, 8
3, 1, 1, 10
4, 0, 1, 16
0.2093, 0.9918, 1, 1.0274
0.3216, 0.6965, 1, 0.5885
0.0625, 0.5634, 1, 0.3213
...
This could be imported as a pre-evaluated example and used for integration in the
following way:
@@ -98,16 +97,13 @@ following way:
import torch
import numpy as np
from zunis.integration.fixed_sample_integrator import FixedSampleSurveyIntegrator
from zunis.training.weighted_dataset.stateful_trainer import StatefulTrainer
from zunis.integration import Integrator
device = torch.device("cuda")
d = 2
trainer = StatefulTrainer(d=d, loss="variance", flow="pwquad", device=device)
integrator = FixedSampleSurveyIntegrator(f,trainer, device=device, n_points_survey=5)
integrator = Integrator(d=d, f=f, survey_strategy='fixed_sample', device=device, n_points_survey=1000)
integrator.set_sample_csv("sample.csv",device="cuda",dtype=np.float32)
result, uncertainty, history = integrator.integrate()
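A minimal sketch of producing such a `.csv` file yourself, assuming the comma-separated layout shown above (the `d` coordinate columns followed by `px` and `f(x)`):

.. code-block:: python

    import numpy as np
    import torch

    d = 2
    x = torch.rand(1000, d)
    px = torch.ones(1000)
    fx = x[:, 0] ** 2 + x[:, 1] ** 2

    # Stack into a (n_points, d + 2) array: coordinates, px, f(x)
    sample = torch.cat([x, px[:, None], fx[:, None]], dim=1).numpy()
    np.savetxt("sample.csv", sample, delimiter=", ")
    # The file can then be loaded with integrator.set_sample_csv("sample.csv", ...)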
7 changes: 7 additions & 0 deletions docs/_sources/benchmarks_api/utils.benchmark.benchmark_time.rst.txt
@@ -0,0 +1,7 @@
utils.benchmark.benchmark\_time module
======================================

.. automodule:: utils.benchmark.benchmark_time
:members:
:undoc-members:
:show-inheritance:
1 change: 1 addition & 0 deletions docs/_sources/benchmarks_api/utils.benchmark.rst.txt
@@ -14,6 +14,7 @@ utils.benchmark package
.. toctree::
:maxdepth: 1

utils.benchmark.benchmark_time
utils.benchmark.benchmarker
utils.benchmark.known_integrand_benchmarks
utils.benchmark.vegas_benchmarks
6 changes: 5 additions & 1 deletion docs/_sources/library/tutorial/config.rst.txt
@@ -47,7 +47,11 @@ flows, the number of bins. It is also possible to choose either a `checkerboard`
should be used.

For the purpose of training, either a `variance` or `dkl` loss can be specified.
For the DKL loss, it is possible to also request the survey strategy `adaptive_dkl`.
Besides the default `flat` survey strategy, the `forward` and
`forward_flat_int` survey strategies are also available. For fixed samples, the `fixed_sample` survey
strategy creates a :doc:`Fixed Sample Integrator </api/zunis.integration.fixed_sample_integrator>`.
Specifically for the variance/DKL loss,
the survey strategies `adaptive_variance`/`adaptive_dkl` are provided.
`n_iter` refers to the number of iterations, whereas `n_points_survey` defines the
number of points used per iteration for the survey stage; the same can be defined
for the refine stage too.
46 changes: 21 additions & 25 deletions docs/_sources/library/tutorial/preeval.rst.txt
@@ -6,15 +6,16 @@ useful when fine-tuning integration parameters for a function that is very costly

The functionality for using pre-evaluated samples is provided by the
:doc:`Fixed Sample Integrator </api/zunis.integration.fixed_sample_integrator>`.
This integrator is accessible when using config files by choosing the survey strategy
`fixed_sample`.

Starting from the basic example, one can train on a sample defined as a
PyTorch tensor:

.. code-block:: python
import torch
from zunis.integration.fixed_sample_integrator import FixedSampleSurveyIntegrator
from zunis.training.weighted_dataset.stateful_trainer import StatefulTrainer
from zunis.integration import Integrator
device = torch.device("cuda")
@@ -23,15 +24,14 @@ PyTorch tensor:
def f(x):
return x[:,0]**2 + x[:,1]**2
trainer = StatefulTrainer(d=d, loss="variance", flow="pwquad", device=device)
integrator = FixedSampleSurveyIntegrator(f,trainer, device=device, n_points_survey=5)
integrator = Integrator(d=d, f=f, survey_strategy='fixed_sample', device=device, n_points_survey=1000)
n_points = 100
n_points = 1000
# Uniformly sampled points
x = torch.rand(n_points,d,device=device)
# x.shape = (n_points,d)
px = torch.ones(n_points)
px = torch.ones(n_points, device=device)
# px.shape = (n_points,)
# Function values
@@ -55,26 +55,26 @@ batch of the same structure:
import torch
import pickle
from zunis.integration.fixed_sample_integrator import FixedSampleSurveyIntegrator
from zunis.training.weighted_dataset.stateful_trainer import StatefulTrainer
from zunis.integration import Integrator
device = torch.device("cuda")
d = 2
def f(x):
return x[:,0]**2 + x[:,1]**2
return x[:,0]**2 + x[:,1]**2
trainer = StatefulTrainer(d=d, loss="variance", flow="pwquad", device="cuda")
integrator = FixedSampleSurveyIntegrator(f,trainer, device=device, n_points_survey=5)
integrator = Integrator(d=d, f=f, survey_strategy='fixed_sample', device=device, n_points_survey=1000)
data_x=[[0,4],[1,3],[2,2],[3,1],[4,0]]
data_px=[1.0,1.0,1.0,1.0,1.0]
data_x = torch.rand(1000,d,device=device)
#[[0.2093, 0.9918],[0.3216, 0.6965],[0.0625, 0.5634],...]
data_px = torch.ones(1000)
#[1.0,1.0,1.0...]
sample=(torch.tensor(data_x, device="cuda"),torch.tensor(data_px, device="cuda"),f(torch.tensor(data_x, device="cuda")))
sample=(data_x.clone().detach(),data_px.clone().detach(),f(data_x.clone().detach()))
pickle.dump(sample, open("sample.p","wb"))
integrator.set_sample_pickle("sample.p",device="cuda")
integrator.set_sample_pickle("sample.p",device=device)
result, uncertainty, history = integrator.integrate()
Finally, it is also possible to provide samples as a `.csv` file. This
@@ -85,11 +85,10 @@ For the above example, the `.csv` file would look like:

.. code-block:: python
0, 4, 1, 16
1, 3, 1, 10
2, 2, 1, 8
3, 1, 1, 10
4, 0, 1, 16
0.2093, 0.9918, 1, 1.0274
0.3216, 0.6965, 1, 0.5885
0.0625, 0.5634, 1, 0.3213
...
This could be imported as a pre-evaluated example and used for integration in the
following way:
@@ -98,16 +97,13 @@ following way:
import torch
import numpy as np
from zunis.integration.fixed_sample_integrator import FixedSampleSurveyIntegrator
from zunis.training.weighted_dataset.stateful_trainer import StatefulTrainer
from zunis.integration import Integrator
device = torch.device("cuda")
d = 2
trainer = StatefulTrainer(d=d, loss="variance", flow="pwquad", device=device)
integrator = FixedSampleSurveyIntegrator(f,trainer, device=device, n_points_survey=5)
integrator = Integrator(d=d, f=f, survey_strategy='fixed_sample', device=device, n_points_survey=1000)
integrator.set_sample_csv("sample.csv",device="cuda",dtype=np.float32)
result, uncertainty, history = integrator.integrate()
45 changes: 45 additions & 0 deletions docs/api/zunis.integration.adaptive_survey_integrator.html
@@ -127,6 +127,51 @@

</dd></dl>

<dl class="py class">
<dt class="sig sig-object py" id="zunis.integration.adaptive_survey_integrator.VarianceAdaptiveSurveyIntegrator">
<em class="property"><span class="pre">class</span> </em><span class="sig-name descname"><span class="pre">VarianceAdaptiveSurveyIntegrator</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="o"><span class="pre">*</span></span><span class="n"><span class="pre">args</span></span></em>, <em class="sig-param"><span class="o"><span class="pre">**</span></span><span class="n"><span class="pre">kwargs</span></span></em><span class="sig-paren">)</span><a class="reference external" href="https://www.github.com/ndeutschmann/zunis/tree/master/zunis_lib/zunis/integration/adaptive_survey_integrator.py"><span class="viewcode-link"><span class="pre">[source]</span></span></a><a class="headerlink" href="#zunis.integration.adaptive_survey_integrator.VarianceAdaptiveSurveyIntegrator" title="Permalink to this definition"></a></dt>
<dd><p>Bases: <a class="reference internal" href="#zunis.integration.adaptive_survey_integrator.AdaptiveSurveyIntegrator" title="zunis.integration.adaptive_survey_integrator.AdaptiveSurveyIntegrator"><code class="xref py py-class docutils literal notranslate"><span class="pre">zunis.integration.adaptive_survey_integrator.AdaptiveSurveyIntegrator</span></code></a></p>
<p>Survey/Refine adaptive integrator based on the variance loss.
The integrator estimates the variance of a flat integrator and switches to forward sampling if
the flow performs significantly (2 sigma) better.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters</dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>f</strong> (<em>function</em>) – the function to integrate</p></li>
<li><p><strong>n_iter</strong> (<em>int</em>) – general number of iterations - ignored for survey/refine if n_iter_survey/n_iter_refine is set</p></li>
<li><p><strong>n_iter_survey</strong> (<em>int</em>) – number of iterations for the survey stage</p></li>
<li><p><strong>n_iter_refine</strong> (<em>int</em>) – number of iterations for the refine stage</p></li>
<li><p><strong>n_points</strong> – general number of points per iteration - ignored for survey/refine if n_points_survey/n_points_refine is set</p></li>
<li><p><strong>n_points_survey</strong> (<em>int</em>) – number of points per iteration for the survey stage</p></li>
<li><p><strong>n_points_refine</strong> (<em>int</em>) – number of points per iteration for the refine stage</p></li>
<li><p><strong>use_survey</strong> (<em>bool</em>) – whether to use the points generated during the survey to compute the final integral;
not recommended due to uncontrolled correlations in error estimates</p></li>
<li><p><strong>verbosity</strong> (<em>int</em>) – verbosity level of the integrator</p></li>
</ul>
</dd>
</dl>
<dl class="py method">
<dt class="sig sig-object py" id="zunis.integration.adaptive_survey_integrator.VarianceAdaptiveSurveyIntegrator.compute_flat_variance_loss">
<span class="sig-name descname"><span class="pre">compute_flat_variance_loss</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">fx</span></span></em><span class="sig-paren">)</span><a class="reference external" href="https://www.github.com/ndeutschmann/zunis/tree/master/zunis_lib/zunis/integration/adaptive_survey_integrator.py"><span class="viewcode-link"><span class="pre">[source]</span></span></a><a class="headerlink" href="#zunis.integration.adaptive_survey_integrator.VarianceAdaptiveSurveyIntegrator.compute_flat_variance_loss" title="Permalink to this definition"></a></dt>
<dd><p>Compute the variance loss and its standard deviation assuming points are sampled from a flat distribution.
We clip the loss standard deviation to loss/4 so that the switching condition can be met.</p>
</dd></dl>

<dl class="py method">
<dt class="sig sig-object py" id="zunis.integration.adaptive_survey_integrator.VarianceAdaptiveSurveyIntegrator.process_survey_step">
<span class="sig-name descname"><span class="pre">process_survey_step</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">sample</span></span></em>, <em class="sig-param"><span class="n"><span class="pre">integral</span></span></em>, <em class="sig-param"><span class="n"><span class="pre">integral_var</span></span></em>, <em class="sig-param"><span class="n"><span class="pre">training_record</span></span></em>, <em class="sig-param"><span class="o"><span class="pre">**</span></span><span class="n"><span class="pre">kwargs</span></span></em><span class="sig-paren">)</span><a class="reference external" href="https://www.github.com/ndeutschmann/zunis/tree/master/zunis_lib/zunis/integration/adaptive_survey_integrator.py"><span class="viewcode-link"><span class="pre">[source]</span></span></a><a class="headerlink" href="#zunis.integration.adaptive_survey_integrator.VarianceAdaptiveSurveyIntegrator.process_survey_step" title="Permalink to this definition"></a></dt>
<dd><p>Process the result of a survey step</p>
</dd></dl>

<dl class="py method">
<dt class="sig sig-object py" id="zunis.integration.adaptive_survey_integrator.VarianceAdaptiveSurveyIntegrator.survey_switch_condition">
<span class="sig-name descname"><span class="pre">survey_switch_condition</span></span><span class="sig-paren">(</span><span class="sig-paren">)</span><a class="reference external" href="https://www.github.com/ndeutschmann/zunis/tree/master/zunis_lib/zunis/integration/adaptive_survey_integrator.py"><span class="viewcode-link"><span class="pre">[source]</span></span></a><a class="headerlink" href="#zunis.integration.adaptive_survey_integrator.VarianceAdaptiveSurveyIntegrator.survey_switch_condition" title="Permalink to this definition"></a></dt>
<dd><p>Boolean valued method that checks if it is time to switch between sampling uniformly
and using the model</p>
</dd></dl>

</dd></dl>

</section>
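A minimal usage sketch for the class documented above, assuming it takes the same `(f, trainer)` positional arguments and `device` keyword as the fixed-sample integrator examples elsewhere in this commit; the remaining keyword names follow the parameter list above:

.. code-block:: python

    import torch
    from zunis.training.weighted_dataset.stateful_trainer import StatefulTrainer
    from zunis.integration.adaptive_survey_integrator import VarianceAdaptiveSurveyIntegrator

    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    d = 2

    def f(x):
        return x[:, 0] ** 2 + x[:, 1] ** 2

    # Trainer with the variance loss, matching the adaptive-variance strategy
    trainer = StatefulTrainer(d=d, loss="variance", flow="pwquad", device=device)
    # Samples flat until the flow beats the flat variance by ~2 sigma,
    # then switches to forward sampling (see survey_switch_condition above).
    integrator = VarianceAdaptiveSurveyIntegrator(f, trainer, device=device,
                                                  n_iter_survey=10, n_points_survey=10000)
    result, uncertainty, history = integrator.integrate()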

