
Lecture 2 ArithCode


DOCUMENT INFORMATION

Basic information

Title: Arithmetic Code
Author: Alexander Kolesnikov
School: Standard University
Subject: Data Compression
Type: Lecture
Year: 2023
City: Standard City
Format
Pages: 14
Size: 196.5 KB


Contents

Compression code: Lecture 2 ArithCode

Page 1

Data Compression

Lecture 2

Arithmetic Code

Alexander Kolesnikov

Page 2

Arithmetic code

Alphabet extension (blocking symbols) can improve coding efficiency

How about treating the entire sequence as one symbol?

Not practical with Huffman coding

Arithmetic coding allows you to do precisely this

Basic idea: map each data sequence to a sub-interval of [0, 1) whose length equals the probability of that sequence

1) Huffman coder: H ≤ R ≤ H + 1 bit/(symbol, pel)
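A small numeric illustration of why that extra bit matters (not from the slides; the skewed probabilities are made up for the example): a Huffman code cannot spend less than 1 bit per symbol, while the entropy of a heavily skewed binary source is far lower, which is why blocking symbols, or treating the whole sequence as one symbol, helps.

```python
import math

# Hypothetical, heavily skewed two-symbol source (illustrative values only).
p = [0.99, 0.01]
H = -sum(q * math.log2(q) for q in p)
print(f"Entropy H = {H:.3f} bits/symbol")   # ~0.081
print("Huffman R = 1.000 bits/symbol (every codeword is at least 1 bit)")
```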

Page 3

Arithmetic code: History

Rissanen [1976]: arithmetic code
Pasco [1976]: arithmetic code

Page 4

Arithmetic code: Algorithm (1)

0) Start by defining the current interval as [0, 1)

1) REPEAT for each symbol s in the input stream

a) Divide the current interval [L, H) into subintervals whose sizes are proportional to the symbols' probabilities

b) Select the subinterval [L, H) for the symbol s and define it as the new current interval

2) When the entire input stream has been processed, the output should be any number V that uniquely identifies the current interval [L, H)
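A minimal Python sketch of the loop above (not from the slides): floating-point intervals are used for readability, whereas a practical coder works with integer arithmetic and renormalization to avoid running out of precision. The `model` argument, mapping each symbol to its subinterval of [0, 1), is an assumed interface.

```python
def arith_encode(message, model):
    """model: dict mapping each symbol s to its subinterval (low, high) of [0, 1)."""
    L, H = 0.0, 1.0                       # step 0: current interval [0, 1)
    for s in message:                     # step 1: for each symbol s in the input
        r = H - L                         # size of the current interval
        lo, hi = model[s]                 # step 1a: subinterval assigned to s
        L, H = L + r * lo, L + r * hi     # step 1b: new current interval
    return L, H                           # step 2: any V in [L, H) identifies the message
```

Any value V in the returned interval (for instance its lower end) can be transmitted; the decoder reverses the subdivision.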

Page 5

Arithmetic code: Algorithm (2)


Page 6

Arithmetic code: Algorithm (3)

0) Current interval [L, H) = [0.0, 1.0)

2) UNTIL the entire input stream has been processed; the result is the current interval [L, H)

Page 7

Example 1: Statistics

Message: 'SWISS_MISS'
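As a quick check (not part of the slide), the probability model used on the next two slides follows directly from the character counts of the 10-character message:

```python
from collections import Counter

msg = 'SWISS_MISS'
counts = Counter(msg)                          # S: 5, W: 1, I: 2, _: 1, M: 1
probs = {c: n / len(msg) for c, n in counts.items()}
print(probs)                                   # {'S': 0.5, 'W': 0.1, 'I': 0.2, '_': 0.1, 'M': 0.1}
```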

Page 8

Example 1: Encoding

Symbol  Prob.  Subinterval
S       0.5    [0.5, 1.0)
W       0.1    [0.4, 0.5)
I       0.2    [0.2, 0.4)
M       0.1    [0.1, 0.2)
_       0.1    [0.0, 0.1)
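Applying the interval subdivision to 'SWISS_MISS' (a float-based sketch, with the subintervals taken verbatim from the table above) reproduces the final interval quoted on the "Compression?" slide:

```python
# Subintervals of [0, 1), exactly as in the table above.
MODEL = {'S': (0.5, 1.0), 'W': (0.4, 0.5), 'I': (0.2, 0.4),
         'M': (0.1, 0.2), '_': (0.0, 0.1)}

L, H = 0.0, 1.0
for s in 'SWISS_MISS':
    r = H - L
    L, H = L + r * MODEL[s][0], L + r * MODEL[s][1]
    print(f"{s}: [{L:.8f}, {H:.8f})")
# Last line printed: S: [0.71753375, 0.71753500)
```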

Page 9

Example 1: Decoding

Symbol  Prob.  Subinterval
S       0.5    [0.5, 1.0)
W       0.1    [0.4, 0.5)
I       0.2    [0.2, 0.4)
M       0.1    [0.1, 0.2)
_       0.1    [0.0, 0.1)
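A matching decoding sketch (assuming the decoder knows the same table and the message length; a practical coder would instead transmit the length or an end-of-stream symbol). The value 0.717534 is simply one convenient choice inside the final interval from the encoding example:

```python
MODEL = {'S': (0.5, 1.0), 'W': (0.4, 0.5), 'I': (0.2, 0.4),
         'M': (0.1, 0.2), '_': (0.0, 0.1)}

def arith_decode(V, n_symbols, model=MODEL):
    out = []
    for _ in range(n_symbols):
        # Emit the symbol whose subinterval contains V ...
        s, (lo, hi) = next((c, iv) for c, iv in model.items() if iv[0] <= V < iv[1])
        out.append(s)
        V = (V - lo) / (hi - lo)           # ... then rescale V into [0, 1) and repeat
    return ''.join(out)

print(arith_decode(0.717534, 10))          # -> SWISS_MISS
```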

Page 10

Example 1: Compression?

V ∈ [0.71753375, 0.71753500)

• How many bits do we need to encode a number V in the final interval [L, H)?

• The number of bits m to represent a value in the interval:

(Figure: successive binary subdivision of [0, 1): 0, 1; 00, 01, 10, 11; 000 ... 111; 0000 ... 1111)

Page 11

Example 1: Compression (1)

• Interval size (range) r:

r = 0.5*0.1*0.2*0.5*0.5*0.1*0.1*0.2*0.5*0.5 = 0.00000125

• The number of bits m to represent a value in the interval [L, H) = [L, L+r) of size r:

r = p_1 * p_2 * ... * p_n = ∏ p_i  (i = 1, ..., n)

m = ⌈ log2(1/r) ⌉ = ⌈ -log2 ∏ p_i ⌉
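Plugging the range from this slide into the formula (a quick check, not on the slide itself):

```python
import math

r = 0.5*0.1*0.2*0.5*0.5 * 0.1*0.1*0.2*0.5*0.5   # product of the ten symbol probabilities
m = math.ceil(math.log2(1 / r))                  # log2(1/r) ≈ 19.61, so m = 20
print(r, m)                                      # ~1.25e-06  20
```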

Page 12

Example 1: Compression (2)

• Entropy = 1.96 bits/char

• Arithmetic coder:

b) Codelength: m = ⌈ log2(1/r) ⌉ = 20 bits

c) Bitrate: R = 20 bits / 10 chars = 2.0 bits/char
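The entropy and bitrate figures on this slide can be reproduced from the symbol probabilities (a quick check, not part of the slide):

```python
import math

probs = [0.5, 0.2, 0.1, 0.1, 0.1]               # S, I, W, M, _
H = -sum(p * math.log2(p) for p in probs)
print(f"Entropy H = {H:.2f} bits/char")         # 1.96
print(f"Bitrate R = {20 / 10:.1f} bits/char")   # 2.0
```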

Page 13

Properties of arithmetic code

In practice, for images, arithmetic coding gives a 15-30% improvement in compression ratio over a simple Huffman coder. The complexity of arithmetic coding is, however, 50-300% higher.

Page 14

BE_A_BBE

w