dist - Non-uniform samples

DistributionExperiment    Compute reflectivity from a non-uniform sample.
Weights                   Parameterized distribution for use in DistributionExperiment.

Inhomogeneous samples

For samples with short-range order on the scale of the in-plane coherence length of the probe, but long-range disorder following some distribution of parameter values, the reflectivity can be computed as a weighted incoherent sum of the reflectivities for the different values of the parameter.

DistributionExperiment allows the model to be computed for a single varying parameter. Multi-parameter dispersion models are not available.
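For illustration, the weighted incoherent sum is simply R(Q) = sum_i w_i R_i(Q). A minimal sketch (not refl1d internals; the names are made up):

    import numpy as np

    def incoherent_mix(weights, R_curves):
        # weights[i] is the probability mass of parameter value i;
        # R_curves[i] is the reflectivity computed at that value.
        w = np.asarray(weights, dtype=float)
        w = w / w.sum()                    # normalize total mass to 1
        return w @ np.asarray(R_curves)    # R(Q) = sum_i w_i * R_i(Q)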

class refl1d.dist.DistributionExperiment(experiment=None, P=None, distribution=None, coherent=False)[source]

Bases: ExperimentBase

Compute reflectivity from a non-uniform sample.

P is the target parameter for the model; it takes on the values from distribution in the context of the experiment. The result is the weighted sum of the theory curves computed with P.value set to each value in the distribution. Clearly, P should not be a fitted parameter, but the remaining experiment parameters can be fitted, as can the parameters of the distribution.

If coherent is true, then the reflectivity of the mixture is computed from the coherent sum rather than the incoherent sum.
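The distinction, sketched with made-up complex reflection amplitudes (refl1d computes these internally):

    import numpy as np

    w = np.array([0.3, 0.7])                        # mixture weights
    r = np.array([[0.10 + 0.20j], [0.15 - 0.10j]])  # amplitudes, shape (2, nQ)

    R_incoherent = w @ np.abs(r) ** 2   # average the intensities |r_i|**2
    R_coherent = np.abs(w @ r) ** 2     # average the amplitudes, then square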

See Weights for a description of how to set up the distribution.
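A minimal sketch of a complete setup, assuming a standard refl1d slab model (the layer, instrument, and distribution values are illustrative):

    import numpy as np
    from scipy.stats import norm
    from refl1d.names import *   # Material, silicon, air, NeutronProbe, Experiment, ...
    from refl1d.dist import DistributionExperiment, Weights

    # Hypothetical sample: a nickel film on silicon whose thickness
    # varies across the sample.
    nickel = Material('Ni')
    sample = silicon(0, 5) | nickel(100, 5) | air
    probe = NeutronProbe(T=np.linspace(0, 5, 100), dT=0.01, L=4.75, dL=0.0475)
    base = Experiment(probe=probe, sample=sample)

    # Gaussian spread of the film thickness: mean 100, width 5,
    # binned over the range 80-120.
    edges = np.linspace(80, 120, 21)
    dist = Weights(edges=edges, cdf=norm.cdf, loc=100, scale=5)
    mixed = DistributionExperiment(experiment=base, P=sample[1].thickness,
                                   distribution=dist)

The resulting mixed experiment can then be fitted like any other experiment.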

format_parameters()
interpolation = 0
is_reset()

Returns True if a model reset was triggered.

magnetic_slabs()
magnetic_step_profile()
property name
nllf()

Return the -log(P(data|model)).

Assuming the data uncertainties are uncorrelated, with measurements normally distributed with mean R and variance dR**2, this is just sum(resid**2/2 + log(2*pi*dR**2)/2).

The current version drops the constant term, sum(log(2*pi*dR**2)/2).
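A direct transcription of this formula (an illustration, not the library code):

    import numpy as np

    def neg_log_likelihood(R, dR, theory, drop_constant=True):
        # -log P(data|model) for independent Gaussian uncertainties.
        resid = (theory - R) / dR
        nllf = np.sum(resid**2) / 2
        if not drop_constant:
            nllf += np.sum(np.log(2 * np.pi * dR**2)) / 2
        return nllf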

numpoints()
parameters()[source]
plot(plot_shift=None, profile_shift=None, view=None)
plot_profile(plot_shift=0.0)[source]
plot_reflectivity(show_resolution=False, view=None, plot_shift=None)
plot_weights()[source]
probe: probe.Probe = None
reflectivity(resolution=True, interpolation=0)[source]
residuals()
restore_data()

Restore original data after resynthesis.

resynth_data()

Resynthesize data with noise from the uncertainty estimates.

save(basename)
save_json(basename)

Save the experiment as a JSON file.

save_profile(basename)
save_refl(basename)
simulate_data(noise=2.0)

Simulate a random data set for the model.

This sets R and dR according to the noise level given.

Parameters:

noise: float, array, or None (units: %)

dR/R uncertainty as a percentage. If noise is None, use dR from the data when present; otherwise default to 2%.
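A sketch of the documented behavior (not refl1d's implementation; the helper name is made up):

    import numpy as np

    rng = np.random.default_rng()

    def simulate(R_theory, noise=2.0, dR_data=None):
        if noise is not None:
            dR = np.abs(R_theory) * noise / 100.0   # dR/R given in percent
        elif dR_data is not None:
            dR = dR_data                            # fall back to measured dR
        else:
            dR = np.abs(R_theory) * 0.02            # default to 2%
        return R_theory + rng.normal(scale=dR), dR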

slabs()
smooth_profile(dz=1)[source]

Compute a density profile for the material.

step_profile()[source]

Compute a scattering length density profile.

to_dict()[source]
update()

Called when any parameter in the model is changed.

This signals that the entire model needs to be recalculated.

update_composition()

When the model composition has changed, we need to look up the scattering factors for the new model. This is only needed when an existing chemical formula is modified; new and deleted formulas are handled automatically.

write_data(filename, **kw)

Save simulated data to a file.

class refl1d.dist.Weights(edges=None, cdf=None, args=(), loc=None, scale=None, truncated=True)[source]

Bases: object

Parameterized distribution for use in DistributionExperiment.

To support non-uniform experiments, we must bin the possible values for the parameter and compute the theory function for one parameter value per bin. The weighted sum of the resulting theory functions is the value that we compare to the data.

Performing this analysis requires a cumulative distribution function (CDF), which returns the integrated probability density from -inf to x. The total density in each bin is then the difference between the cumulative densities at the bin edges. If the distribution is wider than the range of the edges, the tails must either be truncated and the bins reweighted to a total density of 1, or the tail density added to the first and last bins. Weights of zero are not returned. Note that if the tails are truncated, no weights at all may be returned.
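A sketch of this weighting scheme, using a Gaussian CDF (illustrative values; the refl1d implementation may differ in detail):

    import numpy as np
    from scipy.stats import norm

    edges = np.linspace(90, 110, 21)
    centers = (edges[:-1] + edges[1:]) / 2
    cdf = norm.cdf(edges, loc=100, scale=3)
    density = np.diff(cdf)                  # probability mass in each bin

    truncated = True
    if truncated:
        density = density / density.sum()   # reweight to total density 1
    else:
        density[0] += cdf[0]                # fold the left tail into bin 0
        density[-1] += 1 - cdf[-1]          # and the right tail into bin -1

    keep = density > 0                      # weights of zero are not returned
    values, weights = centers[keep], density[keep]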

The vector edges contains the bin edges for the distribution. The function cdf returns the cumulative distribution at those edges, and must implement the scipy.stats interface, with signature f(x, a1, a2, …, loc=0, scale=1). The list args defines the arguments a1, a2, etc.; the underlying parameters are available as args[i]. Similarly, loc and scale define the center and width of the distribution. Use truncated=False if you want the distribution tails to be included in the weights.

A SciPy distribution D is used by specifying cdf=scipy.stats.D.cdf. Useful distributions include:

norm      Gaussian distribution.
halfnorm  Right half of a Gaussian.
triang    Triangle distribution rising from loc to its peak at
          loc+args[0]*scale and falling to loc+scale.  Use loc=edges[0],
          scale=edges[-1]-edges[0] and args=[0.5] to define a symmetric
          triangle over the range of parameter P.
uniform   Flat from loc to loc+scale.  Use loc=edges[0],
          scale=edges[-1]-edges[0] to define P as uniform over the range.
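For example, constructing each of these (a sketch; the edges and distribution parameters are made up):

    import numpy as np
    from scipy.stats import norm, triang, uniform
    from refl1d.dist import Weights

    edges = np.linspace(90, 110, 21)
    span = edges[-1] - edges[0]

    gauss = Weights(edges=edges, cdf=norm.cdf, loc=100, scale=3)
    tri = Weights(edges=edges, cdf=triang.cdf, args=[0.5],
                  loc=edges[0], scale=span)   # symmetric triangle over the range
    flat = Weights(edges=edges, cdf=uniform.cdf,
                   loc=edges[0], scale=span)  # uniform over the range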
parameters()[source]
to_dict()[source]