Measuring object properties: ndimage.measurements

Part of the document Python scientific lecture notes (pages 251 - 256)

Synthetic data:

>>> import numpy as np
>>> import matplotlib.pyplot as plt
>>> from scipy import ndimage
>>> n = 10
>>> l = 256
>>> im = np.zeros((l, l))
>>> points = l*np.random.random((2, n**2))
>>> im[(points[0]).astype(int), (points[1]).astype(int)] = 1
>>> im = ndimage.gaussian_filter(im, sigma=l/(4.*n))
>>> mask = im > im.mean()

• Analysis of connected components

Label connected components: ndimage.label:

>>> label_im, nb_labels = ndimage.label(mask)
>>> nb_labels # how many regions?
23
>>> plt.imshow(label_im)
<matplotlib.image.AxesImage object at ...>

Compute size, mean value, etc. of each region:

>>> sizes = ndimage.sum(mask, label_im, range(nb_labels + 1))
>>> mean_vals = ndimage.mean(im, label_im, range(1, nb_labels + 1))

Clean up small connected components:

>>> mask_size = sizes < 1000
>>> remove_pixel = mask_size[label_im]
>>> remove_pixel.shape
(256, 256)
>>> label_im[remove_pixel] = 0
>>> plt.imshow(label_im)
<matplotlib.image.AxesImage object at ...>

Now reassign labels with np.searchsorted:

>>> labels = np.unique(label_im)
>>> label_im = np.searchsorted(labels, label_im)
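To see why np.searchsorted compacts the label values, here is a minimal sketch on a hypothetical small label array in which labels 2 and 4 have already been zeroed out:

```python
import numpy as np

# Hypothetical label image where two labels were removed,
# leaving the non-contiguous values {0, 1, 3}.
label_im = np.array([[0, 1, 1],
                     [0, 3, 3],
                     [0, 0, 3]])

# np.unique returns the sorted distinct labels: [0, 1, 3].
labels = np.unique(label_im)

# searchsorted maps each pixel to the index of its label in the
# sorted array (0 -> 0, 1 -> 1, 3 -> 2): labels become contiguous.
relabeled = np.searchsorted(labels, label_im)
print(relabeled)
```

After this step, labels run from 0 to the number of remaining regions, which later calls such as ndimage.mean with index=range(1, n+1) rely on.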

Find the region of interest enclosing an object:

>>> slice_x, slice_y = ndimage.find_objects(label_im==4)[0]
>>> roi = im[slice_x, slice_y]
>>> plt.imshow(roi)
<matplotlib.image.AxesImage object at ...>

Other spatial measures: ndimage.center_of_mass, ndimage.maximum_position, etc.

Can be used outside the limited scope of segmentation applications.

Example: block mean:

>>> from scipy import misc
>>> l = misc.ascent()  # lena() was removed from scipy; ascent() is also a 512x512 test image
>>> sx, sy = l.shape
>>> X, Y = np.ogrid[0:sx, 0:sy]
>>> regions = (sy//6) * (X//4) + Y//6 # note that we use broadcasting
>>> block_mean = ndimage.mean(l, labels=regions, index=np.arange(1,
...     regions.max() + 1))
>>> block_mean.shape = (sx//4, sy//6)

When regions are regular blocks, it is more efficient to use stride tricks (Example: fake dimensions with strides (page 173)).
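For regular blocks the label-based detour can be avoided entirely: a reshape splits each axis into (number of blocks, block size) and averages over the block axes. A minimal sketch, assuming the block shape divides the image shape exactly:

```python
import numpy as np

def block_mean(im, bx, by):
    """Mean over non-overlapping (bx, by) blocks.

    Assumes bx divides im.shape[0] and by divides im.shape[1].
    """
    sx, sy = im.shape
    # Split each axis into (n_blocks, block_size), then average
    # over the two block-size axes.
    return im.reshape(sx // bx, bx, sy // by, by).mean(axis=(1, 3))

im = np.arange(16.).reshape(4, 4)
print(block_mean(im, 2, 2))
```

This avoids building the label array and the per-label bookkeeping, so it is both simpler and faster for evenly spaced blocks.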

Non-regularly-spaced blocks: radial mean:

>>> sx, sy = l.shape
>>> X, Y = np.ogrid[0:sx, 0:sy]
>>> r = np.hypot(X - sx/2, Y - sy/2)
>>> rbin = (20 * r/r.max()).astype(int)
>>> radial_mean = ndimage.mean(l, labels=rbin, index=np.arange(1, rbin.max() + 1))

• Other measures

Correlation function, Fourier/wavelet spectrum, etc.

One example with mathematical morphology: granulometry (http://en.wikipedia.org/wiki/Granulometry_%28morphology%29)

>>> def disk_structure(n):
...     struct = np.zeros((2 * n + 1, 2 * n + 1))
...     x, y = np.indices((2 * n + 1, 2 * n + 1))
...     mask = (x - n)**2 + (y - n)**2 <= n**2
...     struct[mask] = 1
...     return struct.astype(bool)
...
>>>
>>> def granulometry(data, sizes=None):
...     s = max(data.shape)
...     if sizes is None:
...         sizes = range(1, s // 2, 2)
...     granulo = [ndimage.binary_opening(data,
...         structure=disk_structure(n)).sum() for n in sizes]
...     return granulo
...
>>>

>>> np.random.seed(1)
>>> n = 10
>>> l = 256
>>> im = np.zeros((l, l))
>>> points = l*np.random.random((2, n**2))
>>> im[(points[0]).astype(int), (points[1]).astype(int)] = 1
>>> im = ndimage.gaussian_filter(im, sigma=l/(4.*n))
>>>
>>> mask = im > im.mean()
>>>
>>> granulo = granulometry(mask, sizes=np.arange(2, 19, 4))

Mathematical optimization: finding minima of functions

author: Gaël Varoquaux

Mathematical optimization deals with the problem of numerically finding minima (or maxima or zeros) of a function. In this context, the function is called the cost function, objective function, or energy.

Here, we are interested in using scipy.optimize for black-box optimization: we do not rely on the mathematical expression of the function that we are optimizing. Note that this expression can often be used for more efficient, non black-box, optimization.
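As a first taste of black-box optimization, here is a minimal sketch that minimizes a simple convex function using only function evaluations (the cost function and starting point are illustrative choices, not from the lecture):

```python
from scipy import optimize

# A simple convex cost function with its minimum at x = (1, 2).
def f(x):
    return (x[0] - 1) ** 2 + (x[1] - 2) ** 2

# Nelder-Mead treats f as a black box: only function values are
# used, no gradient information.
res = optimize.minimize(f, x0=[0.0, 0.0], method="Nelder-Mead")
print(res.x)  # close to [1, 2]
```

The chapter reviews when such gradient-less methods are appropriate and when gradient-based or Newton-type methods pay off.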

Prerequisites

• Numpy, Scipy

• IPython

• matplotlib

References

Mathematical optimization is very ... mathematical. If you want performance, it really pays to read the books:

• Convex Optimization by Boyd and Vandenberghe (pdf available free online).

• Numerical Optimization, by Nocedal and Wright. Detailed reference on gradient descent methods.

• Practical Methods of Optimization by Fletcher: good at hand-waving explanations.

Chapter contents

• Knowing your problem
  – Convex versus non-convex optimization
  – Smooth and non-smooth problems
  – Noisy versus exact cost functions
  – Constraints
• A review of the different optimizers
  – Getting started: 1D optimization
  – Gradient based methods
    * Some intuitions about gradient descent
    * Conjugate gradient descent
  – Newton and quasi-newton methods
    * Newton methods: using the Hessian (2nd differential)
    * Quasi-Newton methods: approximating the Hessian on the fly
  – Gradient-less methods
    * A shooting method: the Powell algorithm
    * Simplex method: the Nelder-Mead
  – Global optimizers
    * Brute force: a grid search
    * Simulated annealing
• Practical guide to optimization with scipy
  – Choosing a method
  – Making your optimizer faster
  – Computing gradients
  – Synthetic exercises
• Special case: non-linear least-squares
  – Minimizing the norm of a vector function
  – Curve fitting
• Optimization with constraints
  – Box bounds
  – General constraints

