
Document information

Title: Boosting and Tree-structured Classifier
Author: Nguyen Dang Binh
Pages: 77
Size: 5.47 MB


Contents


Page 1

Boosting and Tree-structured Classifier

Presenter: Nguyen Dang Binh

Page 2

Classification speed matters not only for time efficiency but also for good accuracy.

Page 3

Object Detection by a Cascade of Classifiers

Pictures from Romdhani et al. ICCV01 
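A cascade evaluates cheap classifiers first and rejects most windows early, which is where the speed comes from. Below is a minimal sketch of that control flow, assuming hypothetical per-stage score functions and thresholds; real cascades tune each stage threshold for target detection and false-positive rates.

```python
# Minimal sketch of evaluating a cascade of classifiers.
# Stage score functions and thresholds are hypothetical placeholders.
from typing import Callable, List, Tuple

def cascade_classify(x,
                     stages: List[Tuple[Callable, float]]) -> bool:
    """Run x through each stage; reject at the first stage whose
    score falls below its threshold."""
    for score_fn, threshold in stages:
        if score_fn(x) < threshold:
            return False        # early exit: most negatives stop here
    return True                 # survived every stage -> positive
```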

Page 4

Object Tracking by Fast (Re-)Detection

From time t to t+1

• Online discriminative feature selection [Collins et al 03], Ensemble tracking [Avidan 07]

Page 5

Semantic Segmentation

• Requires pixel-wise classification

Page 6

Structure of this talk

• Second half
  – Unified Boosting framework

Page 7

Things not covered

Page 8

Introduction to Boosting Classifiers

Page 10

A brief history (continued)

Page 13

Bagging (Bootstrap AGGregatING)

• Bootstrap
  – For each set, randomly draw examples from the uniform distribution, allowing duplication and omission

Page 14

In classification: H(x) = the majority class in {h_1(x), …, h_T(x)}
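A minimal sketch of bagging as just described, assuming a generic, hypothetical `train` routine that fits a classifier to a sample set; each model sees one bootstrap sample, and prediction is the majority vote over the T models.

```python
# Bagging sketch: bootstrap samples drawn uniformly with replacement
# (so some examples repeat and others are missing), majority-vote output.
import random
from collections import Counter

def bag(train, X, y, T):
    models = []
    n = len(X)
    for _ in range(T):
        idx = [random.randrange(n) for _ in range(n)]  # bootstrap sample
        models.append(train([X[i] for i in idx], [y[i] for i in idx]))
    return models

def classify(models, x):
    votes = [m(x) for m in models]                 # h_1(x), ..., h_T(x)
    return Counter(votes).most_common(1)[0][0]     # majority class H(x)
```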

Page 15

Randomized Decision Forest [Breiman 01, Geurts et al 06]

Page 16

Randomized Tree Learning

[Figure: samples routed into a left split and a right split at a tree node]

• Features f chosen from a random feature pool F
• Thresholds θ chosen in a range
• Choose f and θ to maximize the gain in information
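A sketch of one node split under these rules: sample a small random pool of (feature, threshold) pairs and keep the pair with the largest information gain. The feature tests are hypothetical axis-aligned comparisons on vector inputs; real forests use richer tests.

```python
# Randomized split selection: random feature pool, random thresholds,
# keep the (f, theta) pair maximizing information gain.
import math
import random
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

def best_random_split(X, y, n_dims, pool_size=10):
    best = None
    for _ in range(pool_size):
        f = random.randrange(n_dims)                  # random feature
        vals = [x[f] for x in X]
        theta = random.uniform(min(vals), max(vals))  # random threshold in range
        left = [yi for x, yi in zip(X, y) if x[f] < theta]
        right = [yi for x, yi in zip(X, y) if x[f] >= theta]
        if not left or not right:
            continue
        gain = entropy(y) - (len(left) * entropy(left) +
                             len(right) * entropy(right)) / len(y)
        if best is None or gain > best[0]:
            best = (gain, f, theta)
    return best  # (information gain, feature index, threshold)
```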

Page 17

Random Forest – Summary

• Generalization through bagging (random samples) & randomised tree learning (random features)
• Very fast classification

Page 18

• Iteratively reweighting training samples
• Higher weights to previously misclassified samples

[Figure: decision boundary after 1, 2, and 50 boosting rounds]

Page 19

In classification: H(x) = the weighted majority class in {h_1(x), …, h_T(x)}, according to the weights α_1, …, α_T
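A minimal discrete-AdaBoost sketch matching the description above, assuming labels in {−1, +1} and a hypothetical `train_weak` routine that fits a weak learner to weighted samples.

```python
# Discrete AdaBoost sketch: misclassified samples get larger weights each
# round; final output is the weighted majority vote sign(sum_t alpha_t h_t(x)).
import math

def adaboost(train_weak, X, y, T):
    n = len(X)
    w = [1.0 / n] * n
    ensemble = []                                   # [(alpha_t, h_t), ...]
    for _ in range(T):
        h = train_weak(X, y, w)
        err = sum(wi for wi, xi, yi in zip(w, X, y) if h(xi) != yi)
        if err >= 0.5:                              # no usable weak learner
            break
        alpha = 0.5 * math.log((1 - err) / max(err, 1e-12))
        ensemble.append((alpha, h))
        # reweight: higher weight to previously misclassified samples
        w = [wi * math.exp(-alpha * yi * h(xi))
             for wi, xi, yi in zip(w, X, y)]
        s = sum(w)
        w = [wi / s for wi in w]
    return lambda x: 1 if sum(a * h(x) for a, h in ensemble) >= 0 else -1
```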

Page 20

AdaBoost [Freund and Schapire 04]

Page 21

Existence of weak learners

• XOR problems (Matlab demo)

Page 22

Does AdaBoost generalize?

Page 23

Multiple classifier system

Page 24

Mixture of Experts [Jordan, Jacobs 94]

Gating Network

Page 25

Ensemble learning: Boosting and Bagging

Page 26

Robust real-time object detector

Page 27

Boosting Simple Features

[Viola and Jones CVPR 01]

• AdaBoost classification

[Figure: weak classifiers combined into a strong classifier]

• Weak classifiers: Haar-basis-like functions (45,396 in total)
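A sketch of how one such Haar-basis-like feature is evaluated in constant time with an integral image. The coordinates are illustrative; enumerating all positions and scales of a few such templates inside a 24×24 window is what yields counts like 45,396.

```python
# Two-rectangle Haar-like feature via an integral image: any rectangle
# sum costs only 4 table lookups regardless of its size.
import numpy as np

def integral_image(img: np.ndarray) -> np.ndarray:
    return img.cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, r0, c0, r1, c1):
    """Sum of pixels in rows [r0, r1) and cols [c0, c1)."""
    total = ii[r1 - 1, c1 - 1]
    if r0 > 0:
        total -= ii[r0 - 1, c1 - 1]
    if c0 > 0:
        total -= ii[r1 - 1, c0 - 1]
    if r0 > 0 and c0 > 0:
        total += ii[r0 - 1, c0 - 1]
    return total

def haar_two_rect(ii, r, c, h, w):
    """Left-minus-right two-rectangle feature at (r, c) of size h x 2w."""
    return (rect_sum(ii, r, c, r + h, c + w) -
            rect_sum(ii, r, c + w, r + h, c + 2 * w))
```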

Page 28

Boosting Simple Features

[Viola and Jones CVPR 01]

Page 29

Boosting as a Tree-structured Classifier

Page 30

Boosting (very shallow network)

Page 31


Page 32

Bagging

Tree hierarchy

Inspired by Yin, Criminisi CVPR07

Page 33

BREAK !!

Page 34

Unified Boosting Framework

Page 35

AnyBoost: a unified framework

[Mason et al 00]

• Most boosting algorithms have in common that they iteratively update sample weights and select the next hypothesis based on the weighted samples
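A hedged sketch of that common core in the AnyBoost view: the sample weights come from the functional gradient of a margin loss at the current ensemble, and the next hypothesis is fit against those weights. Using the exponential loss below recovers AdaBoost-style weights; `train_weak` is an assumed routine, not part of any AnyBoost API.

```python
# One AnyBoost-style round: weight samples by the magnitude of the loss
# gradient at the current ensemble output F, then fit the next hypothesis.
import math

def anyboost_round(train_weak, X, y, ensemble):
    # current ensemble F(x) = sum_t alpha_t h_t(x)
    def F(x):
        return sum(a * h(x) for a, h in ensemble)
    # w_i proportional to |dL/dF(x_i)| for L = exp(-y F):
    # larger on currently misclassified samples
    w = [math.exp(-yi * F(xi)) for xi, yi in zip(X, y)]
    s = sum(w)
    w = [wi / s for wi in w]
    return train_weak(X, y, w)  # hypothesis chosen on the weighted samples
```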

Page 38

Tree-structured Classifiers

Page 39

Multi-view and multi-category object detection

• Images exhibit multi-modality

Page 40

Multiclass object detection [Torralba et al PAMI 07]

Page 41

Multiclass object detection [Torralba et al PAMI 07]

Page 42

Multiclass object detection [Torralba et al PAMI 07]

Page 43

ClusterBoost [Wu et al ICCV07]

Page 44

ClusterBoost result


Page 45

AdaTree (Grossmann CVPRW04) Result

[Figure: classification error and mean computation cost at 1% and 10% noise in the data]

Page 46

Boosting for XOR

- MCBoost [T-K Kim, R Cipolla NIPS 08]

Page 47

[Figure: face clusters 1 and 2 as found by K-means clustering vs MCBoost]

Page 48

MCBoost: Multiple Strong Classifiers

Page 49

Toy XOR classification problem

• Discriminatively clusters the positive samples
• It doesn't partition the input space
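A schematic sketch of this idea under simplifying assumptions: initialize K clusters of the positives (the slides contrast a K-means initialization with MCBoost's learned clusters), train one strong classifier per cluster against all negatives, and aggregate by the maximum response. `kmeans_k` and `train_strong` are assumed helpers (e.g. the adaboost() sketch above); the actual MCBoost weight update in [Kim & Cipolla NIPS 08] is more involved than this.

```python
# Simplified multiple-strong-classifier sketch, not the MCBoost algorithm
# itself: one boosted classifier per positive cluster, max-aggregated.

def mcboost_like(train_strong, kmeans_k, pos_X, neg_X):
    classifiers = []
    for cluster in kmeans_k(pos_X):       # assumed: K subsets of positives
        X = cluster + neg_X
        y = [1] * len(cluster) + [-1] * len(neg_X)
        classifiers.append(train_strong(X, y))
    # the K classifiers jointly cover an XOR-like positive set without
    # partitioning the input space: x is positive if any H_k fires
    return lambda x: max(H(x) for H in classifiers)
```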

Page 50

Toy XOR classification problem

[Figure: per-sample weights of classifiers 1–3 over boosting rounds]

• Matlab demo

Page 52

Pedestrian detection by MCBoost

[Wojek, Walk, Schiele et al CVPR09]

Page 53

Pedestrian detection by MCBoost [Wojek, Walk, Schiele et al CVPR09]

Page 54


Page 55

Speeding up

Page 56

A sequential structure of varying length

Classify x as − (negative) and exit

Page 57

Making a shallow network deep

- Super tree [Kim, Budvytis, Cipolla, 09]


Page 58

Converting a boosting classifier to a decision tree

• Many short paths for speeding up
• Preserving (smooth) decision regions for good generalisation

Page 59

Converting a boosting classifier to a decision tree

• Many short paths for speeding up

[Figure: evaluation path lengths, boosting vs super tree; about 5 times speed-up]

Page 60


Page 61

Boolean optimisation formulation

[Figure: binary tree / truth-table encoding of regions R_i over weak-learner outputs w_i]

Page 62

Boolean expression minimization

[Figure: minimized tree covering regions R3 and R4]

w1w2w3 ∨ w1w2w3 ∨ w1w2w3 ∨ w1w2w3 = w1 ∨ w1w2w3 (the negation overlines on individual literals were lost in extraction)
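The step above is ordinary Boolean minimization: each conjunction of weak-learner outcomes reaching a positive region is a product term, and the disjunction is reduced to fewer, shorter terms. Below is a minimal sketch using sympy's simplify_logic; the specific expression is illustrative, since the slide's own negation bars were lost in extraction.

```python
# Boolean expression minimization, the operation the super-tree
# construction relies on. The expression below is an illustrative
# stand-in for the slide's sum-of-products formula.
from sympy import symbols
from sympy.logic.boolalg import Or, And, Not, simplify_logic

w1, w2, w3 = symbols('w1 w2 w3')

# Disjunction of conjunctions: each term is one combination of
# weak-learner outcomes (True = passed its threshold).
expr = Or(And(w1, w2, w3),
          And(w1, w2, Not(w3)),
          And(w1, Not(w2), w3),
          And(w1, Not(w2), Not(w3)),
          And(Not(w1), w2, w3))

print(simplify_logic(expr))  # -> w1 | (w2 & w3)
```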

Page 63

Boolean optimisation formulation

Page 64

Synthetic data experiment 1

Examples generated from GMMs

Page 65

Face detection experiment

[Figure: three plots of false negative rate and average path length against false positive rate]

Page 66

ST vs Random forest

• ST is about 2 to 3 times faster than RF at similar accuracy

Page 67

Experiments with tracking and segmentation by ST

Page 68

More segmentation experiments by Boosting and RF

Page 69

MCBoost building-class segmentation

Building class: average class accuracy = 77%

[Figure: ground truth vs segmentation examples]

Page 70

[Table: segmentation accuracies for building vs non-building and road vs non-road; the row labels were lost in extraction]
• global: 74.50%, average: 79.89%
• global: 88.33%, average: 87.26%
• 80.45%
• global: 85.55%, average: 85.24%

Page 71

Boosting for Segmentation

Page 72

Conclusions

Page 74

Bagging

Tree hierarchy

Inspired by Yin, Criminisi CVPR07

Page 75

Stumps
• Fast classification
• Theoretical backgrounds
• Slow training
• Slower than RF

Page 77

Thanks
