
AN INTRODUCTION TO DEEP LEARNING MODELS




DOCUMENT INFORMATION

Basic information

Title: An Introduction to Deep Learning Models
Author: Phùng Quốc Định
Institution: Deakin University
Field: Data Science
Document type: essay
Year: 2017
Location: Australia
Pages: 106
File size: 7.95 MB


Contents


Page 1

Vietnam Institute for Advanced Study in Mathematics

VIASM, Data Science 2017, FIRST

Few Useful Things to Know About Deep Learning

Phùng Quốc Định

Centre for Pattern Recognition and Data Analytics (PRaDA)

Deakin University, Australia

Email: dinh.phung@deakin.edu.au

(published under Dinh Phung)

An Introduction to Deep Learning Models

Page 2

Deep Learning


“Deep Learning: machine learning algorithms based on learning multiple levels of representation and abstraction” – Yoshua Bengio


Deep learning refers to machine learning algorithms based on learning multiple layers of representation of the data.

Page 3

Deep Learning

“Deep Learning: machine learning algorithms based on learning multiple levels of representation and abstraction” – Yoshua Bengio

[Zeiler and Fergus 2013]

Feature visualization of CNN trained on ImageNet

Page 4

A fast-moving field! The literature is vast, spanning pattern recognition theory and intelligent systems.

Goals of this talk:

 Give the basic foundations, i.e., it is important to know the basics.

 Give an overview of important classes of deep learning models.


Deep Learning


[Image courtesy: @DeepLearningIT]

Trang 5

What drives success in DL?

Flexible models

Data, data and data

Distributed and parallel computing

 Data parallelism: split the data and run the same model on each part in parallel (see the sketch below)

o Fast at test time

o Asynchronous at training time

 Model parallelism: one unit of data, but run multiple models (e.g., Gibbs sampling)
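To make the data-parallelism bullet concrete, here is a minimal numpy sketch (not from the talk): one minibatch is split into shards, the same linear model is evaluated on each shard as if on separate workers, and the per-shard gradients are averaged before one synchronous update. The model, data, and learning rate are all made up for illustration.

```python
import numpy as np

# Minimal illustration of (synchronous) data parallelism: the same model is
# evaluated on different shards of a minibatch and the per-shard gradients
# are averaged, as if each shard lived on its own worker.
rng = np.random.default_rng(0)
X = rng.normal(size=(128, 10))          # one minibatch of 128 examples
true_w = rng.normal(size=10)
y = X @ true_w + 0.01 * rng.normal(size=128)

w = np.zeros(10)                        # shared model parameters
num_workers = 4

for step in range(200):
    grads = []
    for X_shard, y_shard in zip(np.array_split(X, num_workers),
                                np.array_split(y, num_workers)):
        err = X_shard @ w - y_shard                   # forward pass on this shard
        grads.append(X_shard.T @ err / len(y_shard))  # local gradient of 0.5*MSE
    w -= 0.1 * np.mean(grads, axis=0)                 # average gradients, then update

print("max |w - true_w| =", np.max(np.abs(w - true_w)))
```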

Page 6

What makes deep learning take off?

Page 7

Machine learning predicts the look of stem cells, Nature News, April 2017

The Allen Cell Explorer Project

“No two stem cells are identical, even if they are genetic clones … Computer scientists analysed thousands of the images using deep learning programs and found relationships between the locations of cellular structures. They then used that information to predict where the structures might be when the program was given just a couple of clues, such as the position of the nucleus. The program ‘learned’ by comparing its predictions to actual cells.”

Page 8

Speech Recognition &

Computer Vision

Recommender System

Drug discovery &

Medical Image Analysis

Applied DL and AI Systems

Computer vision and data technology


Page 9

Impact of Deep Learning Applied DL and AI Systems

Reinforcement Learning

Nature, 2016

Lee Sedol

Page 10

Impact of Deep Learning Applied DL and AI Systems

Page 11

Impact of Deep Learning

Neural Language NLP

word2vec

[Mikolov et al 2013]

Language model

Class-based language model

Neural Language models

Page 12

Impact of Deep Learning

Analogical reasoning: đàn ông (man) : phụ nữ (woman) = hoàng đế (emperor) : ?

$\arg\min_{v} \left\| h(\text{phụ nữ}) - h(\text{đàn ông}) + h(\text{hoàng đế}) - h(v) \right\|^2$


Neural Language NLP

Page 13

Impact of Deep Learning

Analogical reasoning: đàn ông (man) : phụ nữ (woman) = hoàng đế (emperor) : nữ hoàng (queen)

$\arg\min_{v} \left\| h(\text{phụ nữ}) - h(\text{đàn ông}) + h(\text{hoàng đế}) - h(v) \right\|^2$

Neural Language NLP
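As an illustration of how this argmin is evaluated, the following numpy sketch uses a tiny, hand-made set of 4-dimensional vectors standing in for trained word2vec embeddings h(w); the vectors and the vocabulary are hypothetical, not the embeddings used in the talk.

```python
import numpy as np

# Toy illustration of the analogy query "đàn ông : phụ nữ = hoàng đế : ?".
# The 4-dimensional vectors below are made up; a real system would use
# word2vec embeddings h(w) trained on a large corpus.
h = {
    "đàn ông":  np.array([ 1.0,  0.1, 0.2, 0.0]),
    "phụ nữ":   np.array([-1.0,  0.1, 0.2, 0.0]),
    "hoàng đế": np.array([ 1.0,  0.9, 0.1, 0.3]),
    "nữ hoàng": np.array([-1.0,  0.9, 0.1, 0.3]),
    "trái khế": np.array([ 0.0, -0.8, 0.9, 0.1]),
}

# Target point of the analogy: h(phụ nữ) - h(đàn ông) + h(hoàng đế)
target = h["phụ nữ"] - h["đàn ông"] + h["hoàng đế"]

# argmin over the vocabulary (excluding the query words) of ||target - h(v)||^2
candidates = {w: v for w, v in h.items()
              if w not in ("đàn ông", "phụ nữ", "hoàng đế")}
answer = min(candidates, key=lambda w: np.sum((target - candidates[w]) ** 2))
print(answer)   # -> "nữ hoàng"
```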

Page 14

Impact of Deep Learning Applied DL and AI Systems

Neural Language NLP

RNNs

LSTMs

• Neural Machine Translation

• Sequence to sequence (Sutskever et al., 2014)

• Attention-based Neural MT (Luong et al., 2015)

• End-to-End Memory Networks

• Neural Turing Machines (Graves et al., 2014)


Tí went into the kitchen

Tí picked up a star fruit

Then Tí ran out to the garden

Tí dropped the star fruit

Question: where is the star fruit?

Computer: out in the garden

The watermelon is round

The luffa gourd is long

The luffa is a lush green

The luffa and the watermelon are the same colour

Question: what colour is the watermelon?

Computer: green

Page 15

Early history of DL

1943

Warren McCulloch and Walter Pitts

| first formal model of a neuron | aka the McCulloch-Pitts model | AND and OR gates: switch on when the number of active inputs > T | doesn’t learn! |

1949

Donald Hebb (Canadian psychologist)

| Hebb’s rule: “Neurons that fire together wire together” | cornerstone of connectionism: an attempt to explain how neurons learn |

1957

Frank Rosenblatt (Cornell psychologist)

| added learnable weights to the McCulloch-Pitts model | coined the term ‘Perceptron’ | learning guaranteed to converge if the classes are separable |

1969

Marvin Minsky and Seymour Papert

| published the book “Perceptrons” | multiple neuron layers are expressive, but nobody knew how to train them | at the time, machine learning = neural networks |

1982

John Hopfield (Caltech physicist)

| noticed the analogy between neurons and spin glasses |

1985

Ackley, Hinton, and Sejnowski

| probabilistic Hopfield networks | higher-energy states are less likely than lower-energy states | coined the Boltzmann machine |

1986

Rumelhart, Hinton, and Williams

| invented backprop | solved the XOR problem | trained multilayer perceptrons (NETtalk) |

mid-1990s

The hype for NNs was over

| learning with bigger networks and more hidden layers was infeasible |

mid-2000s

Resurgence of NNs and connectionism

| re-invented the autoencoder (AE) | stacked sparse AEs: Google learns ‘cat’ from 10 million YouTube videos |

2006

Hinton, Osindero, and Teh

| published “A fast learning algorithm for deep belief nets” |

Page 17

Currently the most comprehensive (indeed, the only) book on DL.

What is not covered in this book:

 The latest developments (an extremely fast-moving field).

 Practical deployment, codebases, and frameworks – which are important for this field.

 It represents only a small subset of the field, e.g., no discussion of statistical foundations, uncertainty, or Bayesian treatments.

Books

Page 18

What deep learning is, and a few application examples.

Deep neural networks

About this talk


Page 19

DEEP NEURAL NETWORKS

Page 20

Our brain is made up of a network of connected neurons and has a highly parallel, interconnected architecture.

ANN (artificial neural networks) are motivated by

biological neural systems.

Two groups of ANN researchers:

 Those who use ANNs to study/model the brain

 Those who use the brain as motivation to design ANNs as effective learning machines, which might not be true models of the brain. Deviation from the brain model is not necessarily bad, e.g., airplanes do not fly like birds.

So far, most, if not all, use the brain architecture as motivation to build computational models

Some historical motivations


Page 22

Perceptrons are linear models and too weak in what they can represent.

Modern ML/statistics deals with high-dimensional data

 Non-linear decision surfaces are needed

Feed Forward Neural Networks


Page 23

Activation Functions

Given input 𝒙_t and desired output 𝒚_t, t = 1, …, n, find the network weights 𝒘 such that ŷ_t ≈ 𝒚_t, ∀t.

Stated as an optimization problem: find 𝒘 to minimize the error function

$J(\mathbf{w}) = \frac{1}{2}\sum_{t=1}^{n}\sum_{k=1}^{K}\left(y_{tk} - \hat{y}_{tk}\right)^2$
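A minimal numpy sketch of evaluating this error function for a small one-hidden-layer network with sigmoid units; the layer sizes, activations, and random data are assumptions made purely for illustration.

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def predict(x, W1, W2):
    """One-hidden-layer network: z = sigmoid(W1 x), y_hat = sigmoid(W2 z)."""
    z = sigmoid(W1 @ x)
    return sigmoid(W2 @ z)

def J(W1, W2, X, Y):
    """Sum-of-squares error 0.5 * sum_t sum_k (y_tk - y_hat_tk)^2."""
    total = 0.0
    for x_t, y_t in zip(X, Y):
        y_hat = predict(x_t, W1, W2)
        total += 0.5 * np.sum((y_t - y_hat) ** 2)
    return total

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))               # n = 5 inputs, 3 features
Y = rng.uniform(size=(5, 2))              # K = 2 outputs in (0, 1)
W1 = rng.normal(scale=0.1, size=(4, 3))   # 4 hidden units
W2 = rng.normal(scale=0.1, size=(2, 4))
print("J(w) =", J(W1, W2, X, Y))
```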

Page 24

At a point 𝒙 = (x₁, x₂), the gradient vector of the function f(𝒙) w.r.t. 𝒙 is

$\nabla f(\mathbf{x}) = \left(\frac{\partial f}{\partial x_1}, \frac{\partial f}{\partial x_2}\right)$

∇f(𝒙) represents the direction of steepest increase in f. Similarly, −∇f(𝒙) is the direction of steepest decrease.

Page 25

To minimize a function f(𝒙), use gradient descent: repeatedly update 𝒙 ← 𝒙 − η∇f(𝒙).
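A minimal sketch of this update on a made-up two-dimensional quadratic (the function and the learning rate are arbitrary examples):

```python
import numpy as np

# Minimise f(x) = (x1 - 1)^2 + 2*(x2 + 0.5)^2 by gradient descent:
# repeatedly step in the direction of -grad f.
def f(x):
    return (x[0] - 1.0) ** 2 + 2.0 * (x[1] + 0.5) ** 2

def grad_f(x):
    return np.array([2.0 * (x[0] - 1.0), 4.0 * (x[1] + 0.5)])

x = np.array([3.0, 2.0])   # starting point
eta = 0.1                  # learning rate
for _ in range(100):
    x = x - eta * grad_f(x)

print(x, f(x))             # x approaches (1, -0.5), f(x) approaches 0
```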

Page 26

Stochastic Gradient-Descent (SGD) Learning

Also known as the delta rule, the LMS rule, or the Widrow-Hoff rule.

Instead of minimizing J(𝒘), minimize an instantaneous approximation of J(𝒘) computed on the current example (or minibatch).

Compared with standard gradient descent:

 Less computation is required in each iteration

 Approaches the minimum in a stochastic sense

 Uses a smaller learning rate η, hence needs more iterations
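A small sketch of the delta/LMS rule for a linear model on made-up data: each update uses the instantaneous error of a single example rather than the full objective J(𝒘).

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 5))
true_w = rng.normal(size=5)
y = X @ true_w + 0.05 * rng.normal(size=500)

# Stochastic gradient descent (delta / LMS / Widrow-Hoff rule):
# update on the instantaneous error of one example at a time.
w = np.zeros(5)
eta = 0.01                          # smaller learning rate than batch GD
for epoch in range(20):
    for i in rng.permutation(len(y)):
        err = X[i] @ w - y[i]       # instantaneous error on example i
        w -= eta * err * X[i]       # gradient of 0.5 * err^2 w.r.t. w

print("max |w - true_w| =", np.max(np.abs(w - true_w)))
```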


Page 27

Similarly for the input → hidden weights, apply the chain rule: ∂ŷ_k/∂z_j = w⁰_{jk}, since ŷ_k = Σ_j w⁰_{jk} z_j, then perform the gradient descent update.
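A compact numpy sketch of these two gradient computations for a single example, assuming sigmoid hidden units and linear outputs (the layer sizes are arbitrary); the finite-difference check at the end only confirms the chain-rule gradient.

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

rng = np.random.default_rng(0)
x = rng.normal(size=3)                    # one input
y = rng.normal(size=2)                    # desired output
W1 = rng.normal(scale=0.5, size=(4, 3))   # input -> hidden weights
W0 = rng.normal(scale=0.5, size=(2, 4))   # hidden -> output weights (w^0)

# Forward pass: z = sigmoid(W1 x), y_hat_k = sum_j W0[k, j] * z_j (linear output)
z = sigmoid(W1 @ x)
y_hat = W0 @ z
delta_out = y_hat - y                     # dJ/dy_hat for J = 0.5*||y - y_hat||^2

# Output-layer gradient, and hidden-layer gradient via the chain rule:
# dJ/dz_j = sum_k delta_out_k * W0[k, j], then through the sigmoid derivative.
grad_W0 = np.outer(delta_out, z)
delta_hidden = (W0.T @ delta_out) * z * (1.0 - z)
grad_W1 = np.outer(delta_hidden, x)

# Numerical check of one entry of grad_W1
eps = 1e-6
W1p = W1.copy()
W1p[0, 0] += eps
Jp = 0.5 * np.sum((y - W0 @ sigmoid(W1p @ x)) ** 2)
Jm = 0.5 * np.sum((y - W0 @ sigmoid(W1 @ x)) ** 2)
print(grad_W1[0, 0], (Jp - Jm) / eps)     # the two numbers should be close
```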

Page 28

[Source: Machine Learning, Tom Mitchell, 1997]

Early days of NN applications and TODAY

Truck, Train, Boat, Uber, Delivery,

……

4 nodes

Page 29

SGD provides an unbiased estimate of the gradient by taking the average gradient over a minibatch.

Since the SGD gradient estimator introduces a source of noise (the random sampling of data examples), gradually decrease the learning rate over time.

Let η_k be the learning rate at epoch k. A sufficient condition to guarantee convergence is Σ_k η_k = ∞ and Σ_k η_k² < ∞.

Tricks and tips for applying BP

In practice, it is common to decay the learning rate linearly until iteration τ:

η_k = (1 − α) η₀ + α η_τ, with α = k/τ

After iteration τ, leave η constant. Monitor the convergence rate:

 Measure the excess error

Page 30

Start with a large learning rate (e.g., 0.1)

Maintain it until the validation error stops improving

Step decay: divide the learning rate by 2 and go back to the second step

Exponential decay: η = η₀ e^(−kt), where k is a hyperparameter and t is the epoch/iteration number

1/t decay: η = η₀ / (1 + kt), where k is a hyperparameter and t is the epoch/iteration number
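The decay schedules listed above, written out as small Python functions; η₀, η_τ, τ, and k are arbitrary example values, and the step decay here drops on a fixed interval rather than being driven by the validation error, purely to keep the sketch self-contained.

```python
import numpy as np

eta0, eta_tau, tau = 0.1, 0.001, 100   # arbitrary example values
k = 0.01                               # decay hyperparameter

def linear_decay(t):
    """eta_t = (1 - alpha)*eta0 + alpha*eta_tau with alpha = t/tau; constant after tau."""
    alpha = min(t / tau, 1.0)
    return (1.0 - alpha) * eta0 + alpha * eta_tau

def step_decay(t, drop_every=30):
    """Divide the learning rate by 2 every `drop_every` epochs (fixed interval here)."""
    return eta0 / (2.0 ** (t // drop_every))

def exponential_decay(t):
    return eta0 * np.exp(-k * t)

def one_over_t_decay(t):
    return eta0 / (1.0 + k * t)

for t in (0, 50, 100, 200):
    print(t, linear_decay(t), step_decay(t), exponential_decay(t), one_over_t_decay(t))
```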


Tricks and tips for applying BP

Strategies to decay learning rate 𝜼

Sensitivity of learning rate 𝜼

[Image courtesy: Wikipedia]


Page 31

SGD can sometimes be slow.

Momentum (Polyak, 1964) accelerates the learning by accumulating an exponentially

decaying moving average of past gradients and then continuing to move in their direction

𝒗 ← α𝒗 − η∇_𝜽 J(𝜽)

𝜽 ← 𝜽 + 𝒗

α is a hyperparameter that indicates how quickly the contributions of previous gradients decay exponentially. In practice, it is usually set to 0.5, 0.9, or 0.99.

Momentum primarily addresses two problems: poor conditioning of the Hessian matrix and variance in the stochastic gradient.

Tricks and tips for applying BP

Accelerate learning - Momentum

[Figure: vanilla SGD vs. SGD with momentum]

Momentum helps accelerate SGD in the relevant direction and dampens oscillations.

[Image courtesy: Sebastian Ruder]
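A minimal sketch of the momentum update 𝒗 ← α𝒗 − η∇J(𝜽), 𝜽 ← 𝜽 + 𝒗 on a made-up, poorly conditioned quadratic; the problem and constants are arbitrary.

```python
import numpy as np

# Poorly conditioned quadratic: J(theta) = 0.5 * theta^T A theta (condition number 100).
A = np.diag([1.0, 100.0])
def grad_J(theta):
    return A @ theta

theta = np.array([1.0, 1.0])
v = np.zeros(2)
eta, alpha = 0.01, 0.9          # learning rate and momentum coefficient

for _ in range(300):
    v = alpha * v - eta * grad_J(theta)   # accumulate a decaying average of past gradients
    theta = theta + v                     # move in the accumulated direction

print(theta)                              # approaches the minimiser (0, 0)
```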

Page 32

Tricks and tips for applying BP

Accelerate learning - Nesterov Momentum

A variant of the momentum algorithm, inspired by Nesterov's accelerated gradient method (Nesterov, 1983, 2004):

𝒗 ← α𝒗 − η∇_𝜽 J(𝜽 + α𝒗)

𝜽 ← 𝜽 + 𝒗

The only difference between Nesterov momentum and standard momentum is how the gradient is computed: it is evaluated after the current velocity has been applied, i.e., at 𝜽 + α𝒗.

In the convex batch-gradient case this improves the convergence rate of the excess error from 𝒪(1/k) to 𝒪(1/k²); unfortunately, the improvement does not carry over to the stochastic gradient setting.

[Image courtesy: Sebastian Ruder]


Page 33

Tricks and tips for applying BP

Accelerate learning – AdaGrad (Duchi, 11)

Learning rates are scaled by the square root of the cumulative sum of squared gradients:

𝜸 ← 𝜸 + 𝒈 ⊙ 𝒈

𝜽 ← 𝜽 − (η / (δ + √𝜸)) ⊙ 𝒈, where δ is a small constant

Parameters with large partial derivatives

 thus see a rapid decrease in their learning rates.

Parameters with small partial derivatives

 hence see a relatively small decrease in their learning rates.

Weakness: the learning rate always decreases!
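A sketch of the AdaGrad update on a made-up quadratic, matching the accumulation rule above; η and δ are arbitrary example values.

```python
import numpy as np

# AdaGrad on a simple quadratic J(theta) = 0.5 * theta^T A theta.
A = np.diag([1.0, 10.0])
def grad_J(theta):
    return A @ theta

theta = np.array([1.0, 1.0])
gamma = np.zeros(2)             # running sum of squared gradients
eta, delta = 0.5, 1e-8

for _ in range(500):
    g = grad_J(theta)
    gamma += g * g                                # gamma <- gamma + g ⊙ g
    theta -= eta / (delta + np.sqrt(gamma)) * g   # per-parameter scaled step

print(theta)   # both coordinates shrink toward 0, at per-parameter rates
```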

[Image courtesy: Sebastian Ruder]

Page 34

Tricks and tips for applying BP

A modification of AdaGrad that works better in non-convex settings: the cumulative sum of squared gradients is replaced by an exponentially weighted moving average.

RMSProp has been shown to be an effective and practical optimization algorithm for DNNs. It is currently one of the go-to optimization methods employed routinely in DL applications.

Accelerate learning – RMSProp (Hinton, 12)


[Image courtesy: Sebastian Ruder]

Page 35

Tricks and tips for applying BP

RMSProp with Nesterov momentum

Accelerate learning – RMSProp (Hinton, 12)

[Image courtesy: Sebastian Ruder]

Page 36

Tricks and tips for DNN

The best variant that essentially combines

RMSProp with momentum

Default hyperparameters: β₁ = 0.9, β₂ = 0.999, and δ = 10⁻⁸

Should always be used as first choice!

Accelerate learning – Adam (Kingma & Ba, 14)
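A sketch of the standard Adam update (bias-corrected first and second moments, as in Kingma & Ba) with the default hyperparameters quoted above, applied to a made-up quadratic rather than a stochastic objective.

```python
import numpy as np

# Adam: RMSProp-style second moment plus momentum-style first moment,
# both with bias correction (Kingma & Ba, 2014).
def grad_J(theta):
    A = np.diag([1.0, 100.0])
    return A @ theta

theta = np.array([1.0, 1.0])
m = np.zeros(2)                 # first-moment estimate
v = np.zeros(2)                 # second-moment estimate
eta, beta1, beta2, delta = 0.01, 0.9, 0.999, 1e-8

for t in range(1, 2001):
    g = grad_J(theta)
    m = beta1 * m + (1.0 - beta1) * g
    v = beta2 * v + (1.0 - beta2) * g * g
    m_hat = m / (1.0 - beta1 ** t)          # bias correction
    v_hat = v / (1.0 - beta2 ** t)
    theta -= eta * m_hat / (np.sqrt(v_hat) + delta)

print(theta)    # approaches the minimiser (0, 0) of 0.5 * theta^T A theta
```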


[Image courtesy: Sebastian Ruder]

Page 37

Tricks and tips for DNN

Performance is measured by the error on “unseen” data.

Minimizing the error on the training data alone is not enough.

Causes: too many hidden nodes, and overtraining.

Possible quick fixes:

 Use cross validation or early stopping, e.g.,

stop training when validation error starts to

grow

 Weight decay: also minimize the magnitude of the weights, keeping the weights small (since the sigmoid function is almost linear near 0, if the weights are small the decision surfaces are less non-linear and smoother)

 Keep the number of hidden nodes small!

Overfitting problem - quick fixes

Page 38

ℓ₂ regularization (weight decay): Ω(𝜽) = Σ_k ‖𝑾^(k)‖²

 Gradient: ∇_{𝑾^(k)} Ω(𝜽) = 2𝑾^(k)

 Apply to the weights (𝑾) only, not to the biases (𝒃)

 Bayesian interpretation: the weights follow a Gaussian prior

ℓ₁ regularization: Ω(𝜽) = Σ_k ‖𝑾^(k)‖₁

 Optimization is now much harder – subgradient methods.

 Apply to the weights (𝑾) only, not to the biases (𝒃)

 Bayesian interpretation: the weights follow a Laplace prior
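A small sketch of the two penalties and their (sub)gradients, applied to the weights only; λ and the matrix sizes are arbitrary, and in practice these terms are simply added to the data-loss gradient.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 3))     # a weight matrix (regularised)
b = rng.normal(size=4)          # biases (not regularised)
lam = 1e-3

# l2 penalty Omega = ||W||^2, gradient 2W (weight decay)
l2_penalty = np.sum(W ** 2)
l2_grad_W = 2.0 * W

# l1 penalty Omega = sum |W_ij|, subgradient sign(W)
l1_penalty = np.sum(np.abs(W))
l1_subgrad_W = np.sign(W)

# In training these terms are added to the data-loss gradient of W only:
#   grad_W_total = grad_W_data + lam * l2_grad_W   (or lam * l1_subgrad_W)
#   grad_b_total = grad_b_data                     (biases left unregularised)
print(l2_penalty, l1_penalty)
```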

Page 40

Can help: better optimization, better

generalization (regularization).

Allows a higher learning rate to be used, because it acts as a regularizer.

It is actually not an optimization algorithm at all. It is a method of adaptive reparameterization, motivated by the difficulty of training very deep models.

Updating all of the layers simultaneously using the gradient can cause unexpected results, because many composed functions change at the same time.

Overfitting problem - Batch Normalization (Ioffe et al., 2015)


Page 41

Apply before the activation: f(𝒘⊤𝒉 + 𝒃) → f(BN(𝒘⊤𝒉 + 𝒃))

Normalize the minibatch to mean 0 and variance 1:

$\hat{h} = \frac{h - \mu_B}{\sqrt{\sigma_B^2 + \epsilon}}$

Then undo the normalization by multiplying with a new scale parameter 𝜸 and adding a new shift parameter 𝜷 (note that 𝜸 and 𝜷 are learnable).

Tricks and tips for DNN (self-study)

Overfitting problem - Batch Normalization (Ioffe et al., 2015)
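A minimal numpy sketch of the training-time batch-norm transform for a minibatch of pre-activations; a real implementation would also keep running statistics for use at test time, which is omitted here.

```python
import numpy as np

def batch_norm(H, gamma, beta, eps=1e-5):
    """Training-time batch norm over a minibatch H of shape (batch, features)."""
    mu = H.mean(axis=0)                       # per-feature minibatch mean
    var = H.var(axis=0)                       # per-feature minibatch variance
    H_hat = (H - mu) / np.sqrt(var + eps)     # zero mean, unit variance
    return gamma * H_hat + beta               # learnable rescale and shift

rng = np.random.default_rng(0)
H = 5.0 + 3.0 * rng.normal(size=(32, 4))      # pre-activations w^T h + b for a minibatch
gamma = np.ones(4)                            # initialised so BN starts as pure normalisation
beta = np.zeros(4)

out = batch_norm(H, gamma, beta)
print(out.mean(axis=0).round(6), out.std(axis=0).round(3))   # ~0 and ~1 per feature
```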

Page 42

Two important variants of deep NNs

Autoencoder

Convolutional neural networks (CNN)

Page 43

Simply a neural network that tries to copy its

input to its output

 Input 𝒙 = (x₁, x₂, …, x_N)⊤

 An encoder function f parameterized by 𝜽

 A coding representation 𝒛 = (z₁, z₂, …, z_K)⊤ = f_𝜽(𝒙)

 A decoder function g parameterized by 𝝋

 An output, also called the reconstruction, 𝒓 = (r₁, r₂, …, r_N)⊤ = g_𝝋(𝒛) = g_𝝋(f_𝜽(𝒙))

 A loss function 𝒥 that computes a scalar 𝒥(𝒙, 𝒓) to measure how good a reconstruction 𝒓 is of the given input 𝒙, e.g., the mean squared error loss 𝒥(𝒙, 𝒓) = ‖𝒙 − 𝒓‖²
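A minimal sketch of an undercomplete autoencoder trained by gradient descent on this reconstruction loss; the linear encoder/decoder and all sizes are assumptions chosen to keep the gradients short, whereas the f and g above may be nonlinear in general.

```python
import numpy as np

# Minimal linear autoencoder x -> z = We x -> r = Wd z, trained with the
# mean-squared reconstruction loss 0.5 * ||x - r||^2 (sizes are arbitrary).
rng = np.random.default_rng(0)
N, K, n = 8, 2, 200                       # input dim, code dim (undercomplete), #examples
latent = rng.normal(size=(n, K))
X = latent @ rng.normal(size=(K, N))      # data that truly lies in a 2-D subspace

We = 0.1 * rng.normal(size=(K, N))        # encoder weights
Wd = 0.1 * rng.normal(size=(N, K))        # decoder weights
eta = 0.02

for step in range(2000):
    Z = X @ We.T                          # codes z_t = f(x_t)
    R = Z @ Wd.T                          # reconstructions r_t = g(z_t)
    E = R - X
    grad_Wd = E.T @ Z / n                 # gradient of the mean reconstruction loss
    grad_We = (E @ Wd).T @ X / n
    Wd -= eta * grad_Wd
    We -= eta * grad_We

R = (X @ We.T) @ Wd.T
print("final reconstruction loss:", 0.5 * np.mean(np.sum((R - X) ** 2, axis=1)))
```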

Page 44

A hidden layer with K < N is undercomplete; otherwise it is overcomplete.

A shallow representation works better in the overcomplete case.

Why? Because one can always recover the exact input by simply copying it, so without additional constraints the learned code need not be useful.

Page 45

Image denoising

[Vincent et al., JMLR’10]

Page 46

Convolutional Neural Networks (CNN, ConvNets)


Source: http://yann.lecun.com/

LeNet5

Motivation: vision processing in the brain is fast.

Technical: sparse interactions and sparse weights – a small kernel (e.g., 3×3 or 5×5) instead of connections to the whole input → reduces the number of parameters.

Parameter sharing: the same set of kernel weights is applied at different locations.

Translation invariance.
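A minimal sketch of a single 2-D convolution in numpy, making the parameter-sharing point concrete: the same small kernel is reused at every location, so only kernel.size parameters are needed; the image and kernel values are made up.

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Slide one shared kernel over the image (no padding, stride 1)."""
    H, W = image.shape
    kH, kW = kernel.shape
    out = np.zeros((H - kH + 1, W - kW + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # The same kH*kW weights are reused at every location
            out[i, j] = np.sum(image[i:i + kH, j:j + kW] * kernel)
    return out

rng = np.random.default_rng(0)
image = rng.normal(size=(28, 28))        # e.g., one grey-scale digit
kernel = rng.normal(size=(5, 5))         # 25 shared parameters
feature_map = conv2d_valid(image, kernel)

print(feature_map.shape)                 # (24, 24)
print("parameters:", kernel.size, "vs. fully connected:", 28 * 28 * 24 * 24)
```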


Page 47

A glimpse of GoogLeNet 2014: deeper and thinner …

Networks become thinner and deeper

Page 48

And … ResNet 2015 beats human performance (5.1%)!

Page 49

So what helped SOTA results to emerge?

Larger models with new training techniques

The large ImageNet dataset [Fei-Fei et al., 2012]

Fast graphics processing units (GPUs)

Page 50

ConvNets won all Computer Vision challenges recently

Galaxy

Diabetes recognition from retina image


• CNNs have achieved a great deal of success (especially in computer vision)

• However, they have reached a point of saturation – any remaining improvements are incremental.

• Next? Deep generative models

Page 51

What deep learning is, and a few application examples.

Deep neural networks

 Key intuitions and techniques

Page 53

Advances in NLP

Language model

o (aka the n-gram model)

o Distributions over sequences of symbolic tokens in a natural language.

o Uses the probability chain rule.

o Estimated by counting the occurrences of n-grams.

Class-based language model

o Clusters words into categories.

o Uses the category ID for the context.

Neural language models

o Symbolic representation

o With a one-hot representation, the distance between any two words is always exactly √2!
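The √2 claim can be checked directly with a toy vocabulary; the vocabulary size is arbitrary.

```python
import numpy as np

# With one-hot vectors, every pair of distinct words is equally far apart:
# ||e_i - e_j||_2 = sqrt(2) for all i != j, so the representation carries no
# notion of similarity between words.
V = 10                         # vocabulary size (arbitrary)
E = np.eye(V)                  # one-hot vectors e_1, ..., e_V
d = np.linalg.norm(E[2] - E[7])
print(d, np.sqrt(2))           # both 1.4142...
```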
