
Portland State University
PDXScholar
Student Research Symposium 2013
May 8th, 11:00 AM

Training an Asymmetric Signal Perceptron in an Artificial Chemistry

Peter Banda, Portland State University


Banda, Peter, "Training an Asymmetric Signal Perceptron in an Artificial Chemistry" (2013). Student Research Symposium. 8.
https://pdxscholar.library.pdx.edu/studentsymposium/2013/Poster/8


Training an Asymmetric Signal Perceptron in an Artificial Chemistry

Peter Banda
teuscher.Lab | Department of Computer Science | Portland State University

www.teuscher-lab.com | banda@pdx.edu

Conclusion

● The chemical perceptron is the first full-featured implementation of online learning in a (simulated) chemistry.

● Learning as well as the linear integration of the weights are handled internally.

● A chemical perceptron

• is reusable, since it recovers its internal ready state after each processing;

• in both the WRP and ASP versions, successfully learns all 14 linearly separable logic functions with a 100% correct rate;

• is robust to perturbations of the rate constants, which alleviates reaction-timing restrictions for a real chemical implementation (using the DNA-strand-displacement technique);

• is implemented in an artificial chemistry, but an extension to real chemistry (DNA strand displacement) is possible, with applications in chemical hardware abstraction (a programming interface), ALIFE, and spiders; and

• can serve as the basis of programmable and adaptable wet chemical computing.

● Next steps: random DNA circuits with complex dynamics, and Reservoir Computing.

Model

● Two-input binary perceptron implemented in an unstructured artificial chemistry driven by mass-action or Michaelis-Menten kinetics.

• Weight-race perceptron (WRP): symmetric design; uses two species to represent 0 and 1; is trained by the desired output (Figure 2, left).

• Asymmetric signal perceptron (ASP): asymmetric design; uses a single species with a 0.5 threshold to represent 0 and 1; is trained by reinforcements (Figure 2, right).

● Optimal rate constants are found by genetic algorithms (a genetic-algorithm sketch follows this list).

● The perceptron operates in two modes:

• Binary-function mode: output molecules are produced by weight-species-driven catalysis; input molecules are injected (a mass-action kinetics sketch follows this list).

• Learning mode (weight adaptation): the concentrations of the weight species change as a result of the discrepancy between the actual and desired output; input and desired-output (or penalty-signal) molecules are injected (Figure 3).
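To make the binary-function mode concrete, the following is a minimal mass-action sketch, not the actual WRP/ASP reaction network: it assumes hypothetical species X1, X2 (inputs), W1, W2 (weights), and Y (output), with each weight species catalyzing the conversion of its input into output, and it reads the result against an ASP-style 0.5 threshold.

    # Minimal mass-action sketch of the binary-function mode.
    # Assumed (illustrative) reactions, not the poster's actual reaction set:
    #   X1 --W1--> Y   with rate k*[X1]*[W1]
    #   X2 --W2--> Y   with rate k*[X2]*[W2]
    def simulate_output(x1, x2, w1, w2, k=1.0, dt=0.01, steps=2000):
        y = 0.0
        for _ in range(steps):
            r1 = k * x1 * w1          # catalysis by weight species W1
            r2 = k * x2 * w2          # catalysis by weight species W2
            x1 -= r1 * dt             # inputs are consumed
            x2 -= r2 * dt
            y += (r1 + r2) * dt       # output accumulates
        return y

    # Concentration 1.0 encodes logical 1, 0.0 encodes logical 0;
    # the output is read against the 0.5 threshold, as in the ASP.
    for x1, x2 in [(0, 0), (0, 1), (1, 0), (1, 1)]:
        y = simulate_output(float(x1), float(x2), w1=0.8, w2=0.8)
        print(x1, x2, "->", int(y > 0.5))

With these particular weight concentrations the toy circuit realizes OR; the learning mode would adjust the weight concentrations until the thresholded output matches the target function.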
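The genetic-algorithm search for rate constants can be sketched in the same spirit. The machinery below (elitist selection, one-point crossover, Gaussian mutation) is generic; the fitness function is a stand-in placeholder, since the actual fitness for the chemical perceptron would score how well the simulated circuit learns the target logic functions under the candidate rate constants.

    import random

    N_CONSTANTS = 5            # number of rate constants to tune (assumption)
    POP, GENS = 40, 60

    def fitness(rates):
        # Placeholder fitness: closeness to an arbitrary target vector.
        # In the real setting this would evaluate the chemical perceptron's
        # learning performance under the candidate rate constants.
        target = [0.5, 1.0, 2.0, 0.2, 1.5]
        return -sum((r - t) ** 2 for r, t in zip(rates, target))

    def mutate(rates, sigma=0.1):
        return [max(0.0, r + random.gauss(0, sigma)) for r in rates]

    def crossover(a, b):
        cut = random.randrange(1, N_CONSTANTS)     # one-point crossover
        return a[:cut] + b[cut:]

    population = [[random.uniform(0.0, 3.0) for _ in range(N_CONSTANTS)]
                  for _ in range(POP)]
    for _ in range(GENS):
        population.sort(key=fitness, reverse=True)
        elite = population[:POP // 4]              # keep the best quarter
        population = elite + [mutate(crossover(random.choice(elite),
                                               random.choice(elite)))
                              for _ in range(POP - len(elite))]

    best = max(population, key=fitness)
    print("best rate constants:", [round(r, 3) for r in best])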


Abstract

Autonomous learning implemented purely by means of a synthetic chemical system has not been previously realized. Learning promotes reusability and minimizes the system design to a simple input-output specification.

In this poster, I present a simulated chemical system, the first full-featured implementation of a perceptron in an artificial (simulated) chemistry, which can successfully learn all 14 linearly separable logic functions. A perceptron, inspired by the functioning of a biological neuron, is the simplest system capable of learning. My newest model, the asymmetric signal perceptron (ASP), is substantially simpler than its predecessors, such as the weight-race perceptron (WRP), because it exploits asymmetric chemical arithmetic, and it is fully described by mass-action kinetics. I suggest that DNA strand displacement could, in principle, provide an implementation substrate for my model, allowing the chemical perceptron to perform reusable, programmable, and adaptable wet biochemical computing.

Figure 1: Model of a perceptron. An activation function f processes the dot product of the weights and inputs, w · x^T, producing the output y. During the learning process, the actual output y and the desired output d are compared; the error is fed back to the perceptron and triggers an adaptation of the participating weights [Hebb, 1949].
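For reference, the formal (non-chemical) perceptron of Figure 1 can be written in a few lines: a threshold activation f is applied to the dot product of weights and inputs to produce y, and the error d - y adapts the weights. Training it on NAND mirrors the chemical experiment of Figure 3; the learning rate and epoch count below are illustrative choices, not values from the poster.

    # Formal perceptron of Figure 1: threshold activation over w . x,
    # weights adapted by the error between desired and actual output.
    def train_perceptron(samples, epochs=20, rate=0.2):
        w = [0.0, 0.0, 0.0]                       # w[0] is the bias weight
        for _ in range(epochs):
            for (x1, x2), d in samples:
                x = [1.0, x1, x2]                 # prepend constant bias input
                y = 1 if sum(wi * xi for wi, xi in zip(w, x)) > 0 else 0
                error = d - y                     # desired minus actual output
                w = [wi + rate * error * xi for wi, xi in zip(w, x)]
        return w

    nand = [((0, 0), 1), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
    w = train_perceptron(nand)
    for (x1, x2), d in nand:
        y = 1 if w[0] + w[1] * x1 + w[2] * x2 > 0 else 0
        print(x1, x2, "->", y, "desired:", d)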

Figure 3: Left: Qualitative diagram of the WRP's (top) and ASP's (bottom) reactions employed in the learning mode. Each node represents a species, solid lines are reactions, and dashed lines are catalyses. Right: Training of the WRP (top) and ASP (bottom) to perform the NAND function, starting from the CONST0 setting. Random inputs with the desired output (or penalty signal) are repeatedly provided to the circuits, and the concentrations of the weight species adapt toward the required function; the constant 0 outputs gradually change to the NAND outputs 1, 1, 1, 0.

Figure 4: Mean and standard deviation of the 14 correct-learning-rate averages. Each average corresponds to one linearly separable binary function, for which 10^4 runs were performed.

Figure 2: Qualitative diagram of the WRP's (left) and ASP's (right) reactions required for the linear integration of inputs and weights. Each node represents a species, solid lines are reactions, and dashed green lines are catalyses.

Problem and Approach

● Issues of biochemical computing: time-consuming and costly design, no reset, no reusability (a hard-wired purpose), and a lack of programming paradigms.

● I address these issues by introducing an artificial chemical machine capable of learning: the chemical perceptron. A perceptron is inspired by the functioning of a biological neuron (Figure 1).

● It serves as a general template that can be trained to act as a desired binary function. Learning is strictly online (autonomous); no external help is needed.

References

[1] Banda, P., Teuscher, C., Lakin, M. R.: Online learning in a chemical perceptron. Artificial Life 19(2) (2013).

[2] Dittrich, P., Ziegler, J., Banzhaf, W.: Artificial chemistries: a review. Artificial Life 7(3) (2001) 225–275.

[3] Hebb, D. O.: The Organization of Behavior. John Wiley & Sons, New York (1949).

[4] Kim, J., Hopfield, J. J., Winfree, E.: Neural network computation by in vitro transcriptional circuits. In: Advances in Neural Information Processing Systems, Volume 17. MIT Press (2004) 681–688.

[5] Soloveichik, D., Seelig, G., Winfree, E.: DNA as a universal substrate for chemical kinetics. Proceedings of the National Academy of Sciences of the United States of America 107(12) (March 2010) 5393–5398.


Results

● The ASP is simpler: it requires just 12 species and 16 reactions, as opposed to the 14 species and 30 reactions required by the WRP.

● The ASP simulation employs Runge-Kutta 4 (RK4) numerical integration of the rate differential equations, which produces a higher-precision concentration series (see the sketch after this list).

● The ASP learns by a more biologically plausible reinforcement method (a penalty signal).

● The ASP determines the output value by thresholding, as opposed to comparing the concentrations of positive and negative output species.

● The ASP is transformable to DNA-strand-displacement primitives by Soloveichik's method [5], giving our symbolic species DNA-strand counterparts.
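As a reference for the numerical scheme named above, here is a generic fourth-order Runge-Kutta step applied to the mass-action rate equations of a single illustrative reaction A + B -> C (d[A]/dt = d[B]/dt = -k[A][B], d[C]/dt = k[A][B]). The reaction, species, and rate constant are assumptions for illustration; only the RK4 scheme itself corresponds to the integration method used for the ASP simulation.

    # RK4 integration of mass-action rate equations for A + B -> C.
    def derivatives(state, k=2.0):
        a, b, c = state
        v = k * a * b                      # mass-action rate of A + B -> C
        return [-v, -v, v]

    def rk4_step(state, dt):
        k1 = derivatives(state)
        k2 = derivatives([s + 0.5 * dt * d for s, d in zip(state, k1)])
        k3 = derivatives([s + 0.5 * dt * d for s, d in zip(state, k2)])
        k4 = derivatives([s + dt * d for s, d in zip(state, k3)])
        return [s + dt / 6.0 * (d1 + 2 * d2 + 2 * d3 + d4)
                for s, d1, d2, d3, d4 in zip(state, k1, k2, k3, k4)]

    state, dt = [1.0, 0.6, 0.0], 0.05      # initial concentrations of A, B, C
    for _ in range(200):
        state = rk4_step(state, dt)
    print("final [A], [B], [C]:", [round(s, 4) for s in state])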
