Docstring/Docs: Fix various typos
- layerwise -> layer-wise
- CONTRIBUTING.md#50 numpy codestyle
- docs/source/index.rst#5 Propagation
- docs/source/getting-started.rst#149 instantiate
- docs/source/how-to/visualize-results.rst#444 accessed
- docs/source/how-to/write-custom-canonizer.rst#113 torch
- docs/source/tutorial/image-...ipynb section 3.2 cell 1 line 10 gamma
- src/zennit/core.py#165 lengths
- tests/test_attribution.py#91 preferred
- tests/test_attribution.py#140 SmoothGrad
- tests/test_canonizers.py#120 AttributeCanonizer
- tests/test_canonizers.py#141 whether
- share/scripts/palette_fit.py#48 brightness

Typos pointed out by @HeinrichAD

Co-authored-by: HeinrichAD <[email protected]>
chr5tphr and HeinrichAD committed Oct 4, 2022
1 parent 8a94236 commit d46f3e7
Showing 11 changed files with 17 additions and 17 deletions.
2 changes: 1 addition & 1 deletion CONTRIBUTING.md
@@ -47,7 +47,7 @@ The documentation uses [Sphinx](https://www.sphinx-doc.org). It can be built at
`docs/build` using the respective Tox environment with `tox -e docs`. To rebuild the full
documentation, `tox -e docs -- -aE` can be used.

-The API-documentation is generated from the numpycodestyle docstring of respective modules/classes/functions.
+The API-documentation is generated from the numpydoc-style docstring of respective modules/classes/functions.

### Tutorials
Tutorials are written as Jupyter notebooks in order to execute them using
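
As a point of reference for the numpydoc-style docstrings mentioned above (this sketch is not part of the commit, and the function is purely hypothetical):

    def scale(tensor, factor=1.0):
        '''Scale a tensor by a constant factor.

        Parameters
        ----------
        tensor : torch.Tensor
            The tensor to scale.
        factor : float, optional
            The scaling factor, by default 1.0.

        Returns
        -------
        torch.Tensor
            The scaled tensor.
        '''
        return tensor * factor
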
6 changes: 3 additions & 3 deletions docs/source/getting-started.rst
@@ -27,7 +27,7 @@ Zennit implements propagation-based attribution methods by overwriting the
gradient of PyTorch modules in PyTorch's auto-differentiation engine. This means
that Zennit will only work on models which are strictly implemented using
PyTorch modules, including activation functions. The following demonstrates a
-setup to compute Layerwise Relevance Propagation (LRP) relevance for a simple
+setup to compute Layer-wise Relevance Propagation (LRP) relevance for a simple
model and random data.

.. code-block:: python
@@ -133,7 +133,7 @@ and :doc:`/how-to/write-custom-attributors`.
Canonizers
^^^^^^^^^^

-For some modules and operations, Layerwise Relevance Propagation (LRP) is not
+For some modules and operations, Layer-wise Relevance Propagation (LRP) is not
implementation-invariant, eg. ``BatchNorm -> Dense -> ReLU`` will be attributed
differently than ``Dense -> BatchNorm -> ReLU``. Therefore, LRP needs a
canonical form of the model, which is implemented in ``Canonizers``. These may
@@ -146,7 +146,7 @@ be simply supplied when instantiating a composite:
from zennit.torchvision import VGGCanonizer
-# instatiate the model
+# instantiate the model
model = vgg16()
# create the canonizers
canonizers = [VGGCanonizer()]
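
For context, a full setup along the lines of the snippets above might look roughly like the following sketch (using the `EpsilonGammaBox` composite and `Gradient` attributor from Zennit's documentation; the concrete values are illustrative only, not part of this commit):

    import torch
    from torchvision.models import vgg16
    from zennit.attribution import Gradient
    from zennit.composites import EpsilonGammaBox
    from zennit.torchvision import VGGCanonizer

    # instantiate the model and the canonizers as above
    model = vgg16()
    canonizers = [VGGCanonizer()]

    # the composite maps LRP rules to layers; the canonizers bring the model into canonical form
    composite = EpsilonGammaBox(low=-3., high=3., canonizers=canonizers)

    # random data in place of a real image batch
    input = torch.randn(1, 3, 224, 224, requires_grad=True)

    with Gradient(model=model, composite=composite) as attributor:
        # attribute with respect to class 0 by supplying a one-hot output gradient
        output, attribution = attributor(input, torch.eye(1000)[[0]])
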
2 changes: 1 addition & 1 deletion docs/source/how-to/visualize-results.rst
@@ -441,7 +441,7 @@ time. This is the way the built-in color maps are defined in
img.show()
:py:class:`~zennit.cmap.LazyColorMapCache` stores the specified source code for
-each key, and if accesed with `cmaps[key]`, it will either compile the
+each key, and if accessed with `cmaps[key]`, it will either compile the
:py:class:`~zennit.cmap.ColorMap`, cache it if it has not been accessed
before and return it, or it will return the previously cached
:py:class:`~zennit.cmap.ColorMap`.
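
As a loose illustration of the caching behaviour described above (the constructor arguments are an assumption and not shown in this diff):

    from zennit.cmap import LazyColorMapCache

    # keys map to ColorMap source code; compilation is deferred until access
    cmaps = LazyColorMapCache({
        'black-white': '000,fff',
    })

    # the first access compiles and caches the ColorMap, later accesses return the cached instance
    cmap = cmaps['black-white']
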
4 changes: 2 additions & 2 deletions docs/source/how-to/write-custom-canonizers.rst
@@ -3,7 +3,7 @@ Writing Custom Canonizers
=========================

**Canonizers** are used to temporarily transform models into a canonical form to
-mitigate the lack of implementation invariance of methods Layerwise Relevance
+mitigate the lack of implementation invariance of methods Layer-wise Relevance
Propagation (LRP). A general introduction to **Canonizers** can be found here:
:ref:`use-canonizers`.

@@ -110,7 +110,7 @@ the ReLU activations in a model with Softplus activations:
self.relu_children = relu_children
for name, _ in relu_children:
# set each of the attributes corresponding to the ReLU to a new
-# instance of toch.nn.Softplus
+# instance of torch.nn.Softplus
setattr(module, name, torch.nn.Softplus(beta=self.beta))
def remove(self):
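
A typical way to use such a canonizer (assuming the class from this how-to is called `SoftplusCanonizer` and takes the shown `beta` in its constructor; `EpsilonPlusFlat` is just one possible composite) would be roughly:

    from zennit.composites import EpsilonPlusFlat

    # the canonizer is applied when the composite registers its hooks,
    # and reverted again once the composite is removed
    composite = EpsilonPlusFlat(canonizers=[SoftplusCanonizer()])
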
2 changes: 1 addition & 1 deletion docs/source/index.rst
@@ -2,7 +2,7 @@
Zennit Documentation
====================

-Zennit (Zennit Explains Neural Networks in Torch) is a python framework using PyTorch to compute local attributions in the sense of eXplainable AI (XAI) with a focus on Layerwise Relevance Progagation.
+Zennit (Zennit Explains Neural Networks in Torch) is a python framework using PyTorch to compute local attributions in the sense of eXplainable AI (XAI) with a focus on Layer-wise Relevance Propagation.
It works by defining *rules* which are used to overwrite the gradient of PyTorch modules in PyTorch's auto-differentiation engine.
Rules are mapped to layers with *composites*, which contain directions to compute the attributions of a full model, which maps rules to modules based on their properties and context.

2 changes: 1 addition & 1 deletion docs/source/tutorial/image-classification-vgg-resnet.ipynb
@@ -639,7 +639,7 @@
" low=low,\n",
" high=high,\n",
" canonizers=[],\n",
" gamma=0.5, # change the gammma for all layers\n",
" gamma=0.5, # change the gamma for all layers\n",
" epsilon=0.1, # change the epsilon for all layers\n",
" layer_map=[\n",
" (BatchNorm, Pass()), # explicitly ignore BatchNorm\n",
2 changes: 1 addition & 1 deletion share/scripts/palette_fit.py
@@ -45,7 +45,7 @@ def main(source_file, output_file, strategy, source_cmap, source_level, invert,
matchpal = palette(source_cmap, source_level)

if strategy == 'intensity':
-# matching based on the source image intensity/ brigthness
+# matching based on the source image intensity/ brightness
values = source.astype(float).mean(2)
elif strategy == 'nearest':
# match by finding the neareast centroids in a source colormap
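
For the 'nearest' strategy referenced above, the matching could be sketched along these lines (purely illustrative and not the script's actual code):

    import numpy as np

    def nearest_palette_indices(source, matchpal):
        # source: (H, W, 3) image array; matchpal: (N, 3) palette colors
        # squared distance of every pixel to every palette color, then pick the closest index
        dists = ((source[..., None, :].astype(float) - matchpal.astype(float)) ** 2).sum(-1)
        return dists.argmin(-1)
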
2 changes: 1 addition & 1 deletion src/zennit/attribution.py
@@ -204,7 +204,7 @@ def forward(self, input, attr_output_fn):

class Gradient(Attributor):
'''The Gradient Attributor. The result is the product of the attribution output and the (possibly modified)
-jacobian. With a composite, i.e. `EpsilonGammaBox`, this will compute the Layerwise Relevance Propagation
+jacobian. With a composite, i.e. `EpsilonGammaBox`, this will compute the Layer-wise Relevance Propagation
attribution values.
Parameters
4 changes: 2 additions & 2 deletions src/zennit/core.py
@@ -162,7 +162,7 @@ def expand(tensor, shape, cut_batch_dim=False):
# append singleton dimensions if tensor has fewer dimensions, and the existing ones match with shape
tensor = tensor[(...,) + (None,) * (len(shape) - len(tensor.shape))]
if tensor.ndim == len(shape):
-# if the dims match completely (lenghts match and zipped match), expand normally
+# if the dims match completely (lengths match and zipped match), expand normally
if all(left in (1, right) for left, right in zip(tensor.shape, shape)):
return tensor.expand(shape)
# if `cut_batch_dim` and dims match except first, which is larger than shape, the the first dim and expand
@@ -370,7 +370,7 @@ def backward(ctx, *grad_outputs):


class Hook:
-'''Base class for hooks to be used to compute layerwise attributions.'''
+'''Base class for hooks to be used to compute layer-wise attributions.'''
def __init__(self):
self.stored_tensors = {}
self.active = True
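
Judging from the comments in this hunk, the behaviour of `expand` might be illustrated roughly as follows (a sketch based on the comments only, not part of the commit):

    import torch
    from zennit.core import expand

    # trailing singleton dimensions are appended before broadcasting: (3,) -> (3, 1) -> (3, 4)
    expanded = expand(torch.tensor([1., 2., 3.]), (3, 4))

    # with cut_batch_dim=True, an oversized first (batch) dimension is expected to be cut to fit
    cut = expand(torch.ones(8, 2), (4, 2), cut_batch_dim=True)
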
4 changes: 2 additions & 2 deletions tests/test_attribution.py
@@ -88,7 +88,7 @@ def test_gradient_attributor_grad(data_simple):


def test_gradient_attributor_output_fn_precedence(data_simple):
-'''Test whether the gradient attributor attr_output at call is prefered when it is supplied at both initialization
+'''Test whether the gradient attributor attr_output at call is preferred when it is supplied at both initialization
and call.
'''
model = IdentityLogger()
@@ -137,7 +137,7 @@ def test_smooth_grad_distribution(data_simple, noise_level):
with SmoothGrad(model=model, noise_level=noise_level, n_iter=n_iter, attr_output=torch.ones_like) as attributor:
_, grad = attributor(data_simple)

-assert len(model.tensors) == n_iter, 'SmootGrad iterations did not match n_iter!'
+assert len(model.tensors) == n_iter, 'SmoothGrad iterations did not match n_iter!'

sample_mean = sum(model.tensors) / len(model.tensors)
sample_var = ((sum((tensor - sample_mean) ** 2 for tensor in model.tensors) / len(model.tensors))).mean(dims)
4 changes: 2 additions & 2 deletions tests/test_canonizers.py
@@ -117,7 +117,7 @@ def attribute_map(name, module):


def test_composite_canonizer():
-'''Test whether CompositeCanonizer correctly combines two AttributCanonizer canonizers.'''
+'''Test whether CompositeCanonizer correctly combines two AttributeCanonizer canonizers.'''
module_vanilla = torch.nn.Module()
model = torch.nn.Sequential(module_vanilla)

@@ -138,7 +138,7 @@ def test_composite_canonizer():


def test_base_canonizer_cooperative_apply():
-'''Test wheter Canonizer's apply method is cooperative.'''
+'''Test whether Canonizer's apply method is cooperative.'''

class DummyCanonizer(Canonizer):
'''Class to test Canonizer's cooperative apply.'''
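
As a rough sketch of the combination exercised by these tests (signatures assumed from Zennit's canonizer API, not shown in this diff):

    import torch
    from zennit.canonizers import AttributeCanonizer, CompositeCanonizer

    def attribute_map(name, module):
        # return a dict of attributes to overwrite for matching modules, or None to skip them
        if isinstance(module, torch.nn.ReLU):
            return {'inplace': False}
        return None

    # CompositeCanonizer applies each contained canonizer in turn
    canonizer = CompositeCanonizer((
        AttributeCanonizer(attribute_map),
        AttributeCanonizer(attribute_map),
    ))
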
