use module and fully qualified module names in doc sources
newville committed Jul 21, 2016
1 parent 59710f4 commit 0be33dd
Showing 5 changed files with 105 additions and 104 deletions.
2 changes: 1 addition & 1 deletion doc/builtin_models.rst
@@ -4,7 +4,7 @@
Built-in Fitting Models in the :mod:`models` module
=====================================================

-.. module:: models
+.. module:: lmfit.models

Lmfit provides several builtin fitting models in the :mod:`models` module.
These pre-defined models each subclass from the :class:`model.Model` class of the
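
The built-in models share a common usage pattern; a minimal sketch (with
``x`` and ``y`` standing for illustrative data arrays, not values taken from
this page)::

    from lmfit.models import GaussianModel

    gmodel = GaussianModel()
    params = gmodel.guess(y, x=x)        # estimate starting values from the data
    result = gmodel.fit(y, params, x=x)  # fit, returning a ModelResult
    print(result.fit_report())
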
28 changes: 14 additions & 14 deletions doc/confidence.rst
@@ -3,7 +3,7 @@
Calculation of confidence intervals
====================================

-.. module:: confidence
+.. module:: lmfit.confidence

The lmfit :mod:`confidence` module allows you to explicitly calculate
confidence intervals for variable parameters. For most models, it is not
@@ -63,17 +63,17 @@ starting point::
>>> result = mini.minimize()
>>> print(lmfit.fit_report(result.params))
[[Variables]]
    a:   0.09943895 +/- 0.000193 (0.19%) (init= 0.1)
    b:   1.98476945 +/- 0.012226 (0.62%) (init= 1)
[[Correlations]] (unreported correlations are < 0.100)
    C(a, b)                      =  0.601

Now it is just a simple function call to calculate the confidence
intervals::

>>> ci = lmfit.conf_interval(mini, result)
>>> lmfit.printfuncs.report_ci(ci)
      99.70%    95.00%    67.40%     0.00%    67.40%    95.00%    99.70%
 a   0.09886   0.09905   0.09925   0.09944   0.09963   0.09982   0.10003
 b   1.94751   1.96049   1.97274   1.97741   1.99680   2.00905   2.02203
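
The returned ``ci`` is a dictionary keyed by parameter name; a minimal
sketch of reading it directly, assuming each entry is a list of
(probability, value) pairs as in the lmfit docs of this era::

    >>> for prob, val in ci['a']:
    ...     print('{0:.2%}  {1:.5f}'.format(prob, val))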

@@ -103,16 +103,16 @@ uncertainties and correlations
which will report::

[[Variables]]
    a1:   2.98622120 +/- 0.148671 (4.98%) (init= 2.986237)
    a2:  -4.33526327 +/- 0.115275 (2.66%) (init=-4.335256)
    t1:   1.30994233 +/- 0.131211 (10.02%) (init= 1.309932)
    t2:   11.8240350 +/- 0.463164 (3.92%) (init= 11.82408)
[[Correlations]] (unreported correlations are < 0.500)
    C(a2, t2)  =  0.987
    C(a2, t1)  = -0.925
    C(t1, t2)  = -0.881
    C(a1, t1)  = -0.599
       95.00%    68.00%     0.00%    68.00%    95.00%
 a1   2.71850   2.84525   2.98622   3.14874   3.34076
 a2  -4.63180  -4.46663  -4.33526  -4.22883  -4.14178
 t2  10.82699  11.33865  11.82404  12.28195  12.71094
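
The 68% and 95% columns above suggest this report was produced with explicit
confidence levels, something like the following sketch (assuming the
``sigmas`` argument of this lmfit version takes probabilities, as the
default 67.4%/95%/99.7% levels imply)::

    >>> ci = lmfit.conf_interval(mini, result, sigmas=[0.68, 0.95])
    >>> lmfit.printfuncs.report_ci(ci)
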
5 changes: 3 additions & 2 deletions doc/fitting.rst
@@ -1,5 +1,7 @@
.. _minimize_chapter:

+.. module:: lmfit.minimizer
+
=======================================
Performing Fits, Analyzing Outputs
=======================================
Expand All @@ -23,7 +25,6 @@ function that calculates the array to be minimized), a :class:`Parameters`
object, and several optional arguments. See :ref:`fit-func-label` for
details on writing the objective.

-.. currentmodule:: minimizer

.. function:: minimize(function, params[, args=None[, kws=None[, method='leastsq'[, scale_covar=True[, iter_cb=None[, **fit_kws]]]]]])
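
A minimal, self-contained sketch of this call (the residual function, data,
and parameter names here are illustrative, not taken from the docs)::

    import numpy as np
    from lmfit import minimize, Parameters

    def residual(params, x, data):
        """decaying-exponential model minus the data"""
        model = params['amp'].value * np.exp(-x / params['decay'].value)
        return model - data

    x = np.linspace(0, 10, 101)
    data = 9.5 * np.exp(-x / 2.5) + np.random.normal(scale=0.1, size=x.size)

    params = Parameters()
    params.add('amp', value=10)
    params.add('decay', value=1)

    result = minimize(residual, params, args=(x, data))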

@@ -862,7 +863,7 @@ You can see that we recovered the right uncertainty level on the data.::
Getting and Printing Fit Reports
===========================================

-.. currentmodule:: printfuncs
+.. currentmodule:: lmfit.printfuncs

.. function:: fit_report(result, modelpars=None, show_correl=True, min_correl=0.1)
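
For example, to print a report that shows only the strongest correlations
(``result`` being any fit result)::

    print(lmfit.fit_report(result, show_correl=True, min_correl=0.5))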

114 changes: 57 additions & 57 deletions doc/model.rst
@@ -4,7 +4,7 @@
Modeling Data and Curve Fitting
=================================================

-.. module:: model
+.. module:: lmfit.model

A common use of least-squares minimization is *curve fitting*, where one
has a parametrized model function meant to explain some phenomena and wants
@@ -146,21 +146,21 @@ a :class:`ModelResult` object. As we will see below, this has many
components, including a :meth:`fit_report` method, which will show::

[[Model]]
    gaussian
[[Fit Statistics]]
    # function evals   = 33
    # data points      = 101
    # variables        = 3
    chi-square         = 3.409
    reduced chi-square = 0.035
    Akaike info crit   = -336.264
    Bayesian info crit = -328.418
[[Variables]]
    amp:   8.88021829 +/- 0.113594 (1.28%) (init= 5)
    cen:   5.65866102 +/- 0.010304 (0.18%) (init= 5)
    wid:   0.69765468 +/- 0.010304 (1.48%) (init= 1)
[[Correlations]] (unreported correlations are < 0.100)
    C(amp, wid)                  =  0.577

The result will also have :attr:`init_fit` for the fit with the initial
parameter values and a :attr:`best_fit` for the fit with the best fit
parameters.
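
A minimal sketch of how these attributes are typically used (assuming ``x``
and ``y`` are the data arrays from the example above)::

    import matplotlib.pyplot as plt

    plt.plot(x, y,               'bo',  label='data')
    plt.plot(x, result.init_fit, 'k--', label='initial fit')
    plt.plot(x, result.best_fit, 'r-',  label='best fit')
    plt.legend()
    plt.show()
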
@@ -348,9 +348,9 @@ specifying one or more independent variables.
* ``None``: Do not check for null or missing values (default)
* ``'none'``: Do not check for null or missing values.
* ``'drop'``: Drop null or missing observations in data.  If pandas is
  installed, ``pandas.isnull`` is used, otherwise :attr:`numpy.isnan` is used.
* ``'raise'``: Raise a (more helpful) exception when data contains null
  or missing values.
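
These appear to be the values accepted by the ``missing`` argument of
:class:`Model`; a hedged sketch of using one of them (``gaussian``, ``y``,
``params``, and ``x`` as in the earlier examples)::

    gmodel = Model(gaussian, missing='drop')
    result = gmodel.fit(y, params, x=x)   # null/NaN observations in y are dropped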

.. attribute:: name

@@ -976,11 +976,11 @@ to model a peak with a background. For such a simple problem, we could just
build a model that included both components::

def gaussian_plus_line(x, amp, cen, wid, slope, intercept):
    "line + 1-d gaussian"

    gauss = (amp/(sqrt(2*pi)*wid)) * exp(-(x-cen)**2 /(2*wid**2))
    line = slope * x + intercept
    return gauss + line

and use that with::

Expand All @@ -992,8 +992,8 @@ model function would have to be changed. As an alternative we could define
a linear function::

def line(x, slope, intercept):
    "a line"
    return slope * x + intercept

and build a composite model with just::

Expand All @@ -1006,24 +1006,24 @@ This model has parameters for both component models, and can be used as:
which prints out the results::

[[Model]]
    (Model(gaussian) + Model(line))
[[Fit Statistics]]
    # function evals   = 44
    # data points      = 101
    # variables        = 5
    chi-square         = 2.579
    reduced chi-square = 0.027
    Akaike info crit   = -360.457
    Bayesian info crit = -347.381
[[Variables]]
    amp:         8.45931061 +/- 0.124145 (1.47%) (init= 5)
    cen:         5.65547872 +/- 0.009176 (0.16%) (init= 5)
    intercept:  -0.96860201 +/- 0.033522 (3.46%) (init= 1)
    slope:       0.26484403 +/- 0.005748 (2.17%) (init= 0)
    wid:         0.67545523 +/- 0.009916 (1.47%) (init= 1)
[[Correlations]] (unreported correlations are < 0.100)
    C(amp, wid)                  =  0.666
    C(cen, intercept)            =  0.129


and shows the plot on the left.
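
The two components can also be inspected separately; a short sketch,
assuming ``result`` is the :class:`ModelResult` from the fit above::

    comps = result.eval_components(x=x)
    print(comps.keys())   # e.g. dict_keys(['gaussian', 'line'])
    # comps['gaussian'] and comps['line'] hold the individual best-fit curves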
@@ -1100,13 +1100,13 @@ convolution function, perhaps as::

import numpy as np
def convolve(dat, kernel):
    # simple convolution
    npts = min(len(dat), len(kernel))
    pad = np.ones(npts)
    tmp = np.concatenate((pad*dat[0], dat, pad*dat[-1]))
    out = np.convolve(tmp, kernel, mode='valid')
    noff = int((len(out) - npts)/2)
    return (out[noff:])[:npts]

which extends the data in both directions so that the convolving kernel
function gives a valid result over the data range. Because this function
Expand All @@ -1118,23 +1118,23 @@ binary operator. A full script using this technique is here:
which prints out the results::

[[Model]]
    (Model(jump) <function convolve at 0x109ee4488> Model(gaussian))
[[Fit Statistics]]
    # function evals   = 27
    # data points      = 201
    # variables        = 3
    chi-square         = 22.091
    reduced chi-square = 0.112
    Akaike info crit   = -437.837
    Bayesian info crit = -427.927
[[Variables]]
    mid:         5 (fixed)
    sigma:       0.64118585 +/- 0.013233 (2.06%) (init= 1.5)
    center:      4.51633608 +/- 0.009567 (0.21%) (init= 3.5)
    amplitude:   0.62654849 +/- 0.001813 (0.29%) (init= 1)
[[Correlations]] (unreported correlations are < 0.100)
    C(center, amplitude)         =  0.344
    C(sigma, amplitude)          =  0.280


and shows the plots:
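
The model in the report above is built with :class:`CompositeModel`, which
takes the two models and the binary operator; a sketch under the assumption
that ``jump`` and ``gaussian`` are the model functions from the full script
and ``x`` and ``y`` are its data arrays::

    from lmfit import CompositeModel, Model

    mod = CompositeModel(Model(jump), Model(gaussian), convolve)
    pars = mod.make_params(amplitude=1, center=3.5, sigma=1.5, mid=5)
    pars['mid'].set(vary=False)
    result = mod.fit(y, params=pars, x=x)
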
60 changes: 30 additions & 30 deletions doc/parameters.rst
@@ -1,6 +1,6 @@
.. _parameters_chapter:

-.. currentmodule:: lmfit.parameter
+.. module:: lmfit.parameter

================================================
:class:`Parameter` and :class:`Parameters`
@@ -96,35 +96,35 @@ The :class:`Parameter` class
be set only if the provided value is not ``None``. You can use this to
update some Parameter attribute without affecting others, for example::

p1 = Parameter('a', value=2.0)
p2 = Parameter('b', value=0.0)
p1.set(min=0)
p2.set(vary=False)

to set a lower bound, or to set a Parameter as having a fixed value.

Note that to use this approach to lift a lower or upper bound, doing::

p1.set(min=0)
.....
# now lift the lower bound
p1.set(min=None)   # won't work!  lower bound NOT changed

won't work -- this will not change the current lower bound. Instead
you'll have to use ``np.inf`` to remove a lower or upper bound::

# now lift the lower bound
p1.set(min=-np.inf)   # will work!

Similarly, to clear an expression of a parameter, you need to pass an
empty string, not ``None``. You also need to give a value and
explicitly tell it to vary::

p3 = Parameter('c', expr='(a+b)/2')
p3.set(expr=None)   # won't work!  expression NOT changed

# remove constraint expression
p3.set(value=1.0, vary=True, expr='')   # will work!  parameter now unconstrained


The :class:`Parameters` class
Expand All @@ -150,28 +150,28 @@ The :class:`Parameters` class
object associated with the key `name`, with optional arguments
passed to :class:`Parameter`::

p = Parameters()
p.add('myvar', value=1, vary=True)

.. method:: add_many(self, paramlist)

add a list of named parameters. Each entry must be a tuple
with the following entries::

name, value, vary, min, max, expr

This method is somewhat rigid and verbose (no default values), but can
be useful when initially defining a parameter list so that it looks
table-like::

p = Parameters()
#          (Name,   Value,  Vary,   Min,   Max,  Expr)
p.add_many(('amp1',    10,  True,  None,  None,  None),
           ('cen1',   1.2,  True,   0.5,   2.0,  None),
           ('wid1',   0.8,  True,   0.1,  None,  None),
           ('amp2',   7.5,  True,  None,  None,  None),
           ('cen2',   1.9,  True,   1.0,   3.0,  None),
           ('wid2',  None, False,  None,  None,  '2*wid1/3'))


.. automethod:: Parameters.pretty_print
@@ -228,10 +228,10 @@ can be simplified using the :class:`Parameters` :meth:`valuesdict` method,
which would make the objective function ``fcn2min`` above look like::

def fcn2min(params, x, data):
""" model decaying sine wave, subtract data"""
v = params.valuesdict()
""" model decaying sine wave, subtract data"""
v = params.valuesdict()

model = v['amp'] * np.sin(x * v['omega'] + v['shift']) * np.exp(-x*x*v['decay'])
return model - data
model = v['amp'] * np.sin(x * v['omega'] + v['shift']) * np.exp(-x*x*v['decay'])
return model - data

The results are identical, and the difference is a stylistic choice.
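
Either form of the objective function is passed to :func:`minimize` in the
same way, e.g. (with ``x`` and ``data`` as in the earlier example)::

    result = minimize(fcn2min, params, args=(x, data))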
