
A Learning Method based on Bisimulation in Inconsistent Knowledge Systems

Thi Hong Khanh Nguyen, Quang-Thuy Ha and Trong Hieu Tran

Thi Hong Khanh Nguyen, Quang-Thuy Ha and Trong Hieu Tran are with the Faculty of Information Technology, University of Engineering and Technology, Vietnam National University, Hanoi, Vietnam. Thi Hong Khanh Nguyen is also with Electricity Power University, Vietnam. khanhnth@epu.edu.vn, thuyhq@vnu.edu.vn, hieutt@vnu.edu.vn

Abstract— Inconsistencies may naturally occur in the considered application domains in Artificial Intelligence, for example as a result of data mining over distributed sources. In order to deal with inconsistent knowledge, several paraconsistent description logics have been proposed. In this paper, we address the problem of concept learning for an inconsistent knowledge base system based on bisimulation. The algorithm learns a concept from a training information system in a paraconsistent description logic with a set of positive items, negative items, and inconsistent items. We present a system for concept learning in an inconsistent knowledge base and discuss preliminary experimental results obtained in the electronic-devices application domain.

I INTRODUCTION

Description logics (DLs) are a family of formal languages which are well suited for representation and reasoning in a domain of particular interest. DLs are of particular importance in providing theoretical models for semantic systems. They are the basis for building languages for modeling ontologies, among which OWL is the language recommended by the W3C International Standard for use in Semantic Web systems [14], [21]. Description logics have usually been considered as syntactic variants of restricted versions of classical first-order logic [1]. On the other hand, in Semantic Web and multiagent applications, knowledge fusion frequently leads to inconsistencies [13]. A way to deal with inconsistencies is to follow the area of paraconsistent reasoning [22], [21], [15], [16], [4], [28], [8], [31].

Concept learning in DLs is similar to binary classification in traditional machine learning. The difference is that in DLs objects are described not only by attributes but also by the relationships between the objects. As bisimulation is the notion for characterizing indiscernibility of objects in DLs, it is very useful for concept learning in these logics [27], [30], [26], [9], [12].

Consider a domain with individuals described by unary and binary predicates; in DL languages, such predicates correspond to concept names and role names, respectively. A domain can be described by different sources. For instance, the individuals may be objects in an area on earth, described by some boolean attributes and binary relationships between them, and the sources of information may be the computer systems of different satellites. Another example is the following: some banks cooperate to share information about their customers to a certain extent; the banks' customers are the individuals; atomic concepts can be credibility, wealth, and financial discipline; atomic roles can be some relationships based on transactions. Sources may provide consistent or inconsistent assertions. Based on them, a paraconsistent interpretation can be generated as an integrated information system.

Concept learning in description logics has been studied by many researchers and is divided into three main approaches. The first approach focuses on the ability to learn in description logics and builds some simple algorithms [24], [18], [7], [11], [19]. The second approach studies concept learning in description logics using refinement operators [6], [5], [17], [20], [2]; Rizzo et al. [29] integrated new refinement operators into terminological decision tree learning. The third approach exploits bisimulation for concept learning problems in description logics [10], [27], [30], [9]. In [19], Lambrix and Maleki proposed a simple algorithm for learning composite concepts. Lehmann and Hitzler [20], Badea and Nienhuys-Cheng [2], and Iannone et al. [17] studied concept learning in DLs by using refinement operators as in inductive logic programming. Apart from refinement operators, scoring functions and search strategies also play important roles in the algorithms proposed in those works.

Nguyen and Szalas [26] applied bisimulation in DLs to model indiscernibility of objects. Their work is pioneering in using bisimulation for concept learning in DLs [27], [30], [9]. The authors proposed a general method for partitioning the domain of the interpretation and, through this, building the concept to be learned.

Unlike previous works, which handle consistent knowledge bases, in this paper we develop a bisimulation-based method for concept learning in paraconsistent DLs over inconsistent knowledge systems.

The rest of this paper is structured as follows. In Section 2, we present the notation and semantics of the paraconsistent DLs considered in this paper. We recall bisimilarity for paraconsistent description logics in Section 3. A learning algorithm based on bisimulation is presented in Section 4. In Section 5, we evaluate this algorithm by means of our implementation. Finally, in Section 6 we summarize our work and draw conclusions.

II PRELIMINARIES

Following the W3C recommendation for OWL, we use, like [25], [3], [23], the traditional syntax of DLs and only change its semantics to cover paraconsistency.

In this work, we consider a DL-signature Σ = I ∪ C ∪ R, where C is a finite set of concept names, I is a countable set of individual names, and R is a countable set of role names. We use uppercase letters like A, B to denote concept names, lowercase letters like r, s to denote role names, and lowercase letters like a, b to denote individual names.

Let Φ be a set of features among I (inverse roles), O (nominals), Q (qualified number restrictions), U (the universal role) and Self (local reflexivity of a role).

Recall that, under the traditional semantics, every query is a logical consequence of an inconsistent knowledge base. A knowledge base may be inconsistent, for instance, when it contains both individual assertions A(a) and ¬A(a) for some A ∈ C and a ∈ I. Paraconsistent reasoning is inconsistency-tolerant and aims to derive meaningful logical consequences even when the knowledge base is inconsistent. This problem can be handled by a three-valued logic (t: true, f: false and i: inconsistent). The general approach is to define a semantics s such that, given a knowledge base KB, the set Cons_s(KB) of logical consequences of KB w.r.t. the semantics s is a subset of the set Cons(KB) of logical consequences of KB w.r.t. the traditional semantics, with the property that Cons_s(KB) contains mainly meaningful logical consequences of KB and Cons_s(KB) approximates Cons(KB) as much as possible. That is, the traditional semantics is weakened in an appropriate way.

A paraconsistent semantics from the defined family, say s, is characterized by four parameters, denoted by s_C, s_R, s_∀∃Q, s_GCI, with the following intuitive meanings:

• s_C ∈ {2, 3} specifies the number of possible truth values of assertions of the form A(x), where A ∈ C,
• s_R ∈ {2, 3} specifies the number of possible truth values of assertions of the form r(x, y), where r ∈ R,
• s_∀∃Q ∈ {+, ±} specifies which of the two considered semantics is used for concepts of the form ∀R.C, ∃R.C, ≤ n R.C or ≥ n R.C,
• s_GCI ∈ {w, m, s} specifies one of the three semantics for general concept inclusions: weak (w), moderate (m), strong (s).

In the case s_C = 2, the truth values of assertions of the form A(x) are t (true) and f (false). In the case s_C = 3, the third truth value is i (inconsistent); in a four-valued setting, the additional truth value would be u (unknown). When s_C = 3, one can identify inconsistency with the lack of knowledge, and the third value i can be read either as inconsistent or as unknown. Similar explanations apply to s_R. In this paper, the problem is handled in three-valued logic. Therefore, the set of considered paraconsistent semantics is as follows:

S = {2, 3} × {2, 3} × {+, ±} × {w, m, s}
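For readers who want to experiment with the framework, the parameter space can be enumerated directly; the following small Python sketch (ours, not part of the paper) lists the 24 members of S as tuples (s_C, s_R, s_∀∃Q, s_GCI).

from itertools import product

# Each paraconsistent semantics s is a tuple (s_C, s_R, s_∀∃Q, s_GCI).
# Only the value sets come from the paper; the variable names are ours.
S = list(product((2, 3),             # s_C : truth values for concept assertions
                 (2, 3),             # s_R : truth values for role assertions
                 ('+', '±'),         # s_∀∃Q : semantics of ∀, ∃ and number restrictions
                 ('w', 'm', 's')))   # s_GCI : weak / moderate / strong GCIs

assert len(S) == 24                  # 2 * 2 * 2 * 3 possible semantics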

For s ∈ S, an s-interpretation is a pair I = ⟨∆^I, ·^I⟩, where ∆^I is a non-empty set, called the domain, and ·^I is the interpretation function, which maps every individual name a to an element a^I ∈ ∆^I, every concept name A to a pair A^I = ⟨A^I_+, A^I_−⟩ of subsets of ∆^I, and every role name r to a pair r^I = ⟨r^I_+, r^I_−⟩ of binary relations on ∆^I such that:

• if s_C = 2 then A^I_+ = ∆^I \ A^I_−,
• if s_C = 3 then A^I_+ ∪ A^I_− = ∆^I,
• if s_R = 2 then r^I_+ = (∆^I × ∆^I) \ r^I_−,
• if s_R = 3 then r^I_+ ∪ r^I_− = ∆^I × ∆^I.

The intuition behind A^I = ⟨A^I_+, A^I_−⟩ is that A^I_+ gathers positive items about A, while A^I_− gathers negative items about A. Thus, A^I can be treated as the function from ∆^I to {t, f, i} defined below:

A^I(x) =
  t   for x ∈ A^I_+ and x ∉ A^I_−,
  f   for x ∈ A^I_− and x ∉ A^I_+,        (1)
  i   for x ∈ A^I_+ and x ∈ A^I_−.

Informally, A^I(x) can be thought of as the truth value of x ∈ A^I. Note that A^I(x) ∈ {t, f} if s_C = 2, and A^I(x) ∈ {t, f, i} if s_C = 3. The intuition behind r^I = ⟨r^I_+, r^I_−⟩ is similar, and under it r^I(x, y) ∈ {t, f} if s_R = 2, and r^I(x, y) ∈ {t, f, i} if s_R = 3.
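As an illustration of the pair representation and the truth function (1), here is a small Python sketch; the function and variable names are ours, and we assume s_C = 3, so that A^I_+ ∪ A^I_− covers the whole domain.

def truth_value(x, pos, neg):
    """Truth value of A(x) for A interpreted as the pair <pos, neg>, as in (1).

    pos : positive items for A (A^I_+)
    neg : negative items for A (A^I_-)
    Under s_C = 3 every domain element is in pos or neg, so one case applies.
    """
    if x in pos and x not in neg:
        return 't'   # true
    if x in neg and x not in pos:
        return 'f'   # false
    if x in pos and x in neg:
        return 'i'   # inconsistent
    raise ValueError("x is in neither set: not a valid s_C = 3 interpretation")

# Example (domain assumed to be {'a','b','c','d','e'}, so pos ∪ neg covers it):
pos, neg = {'a', 'c', 'd'}, {'b', 'd', 'e'}
print(truth_value('d', pos, neg))    # -> 'i', since d is asserted both ways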

The interpretation function ·^I maps a role R to a pair R^I = ⟨R^I_+, R^I_−⟩, defined for the case R ∉ R (that is, R is an inverse role or the universal role) as follows:

(r^−)^I = ⟨(r^I_+)^{−1}, (r^I_−)^{−1}⟩,
U^I = ⟨∆^I × ∆^I, ∅⟩.


The function ·^I maps a complex concept C to a pair C^I = ⟨C^I_+, C^I_−⟩ of subsets of ∆^I, defined in [27] as follows:

⊤^I = ⟨∆^I, ∅⟩
⊥^I = ⟨∅, ∆^I⟩
({a})^I = ⟨{a^I}, ∆^I \ {a^I}⟩
(¬C)^I = ⟨C^I_−, C^I_+⟩
(C ⊓ D)^I = ⟨C^I_+ ∩ D^I_+, C^I_− ∪ D^I_−⟩
(C ⊔ D)^I = ⟨C^I_+ ∪ D^I_+, C^I_− ∩ D^I_−⟩
(∃R.Self)^I = ⟨{x ∈ ∆^I | (x, x) ∈ R^I_+}, {x ∈ ∆^I | (x, x) ∈ R^I_−}⟩;

if s_∀∃Q = + then

(∃R.C)^I = ⟨{x ∈ ∆^I | ∃y((x, y) ∈ R^I_+ ∧ y ∈ C^I_+)}, {x ∈ ∆^I | ∀y((x, y) ∈ R^I_+ → y ∈ C^I_−)}⟩
(∀R.C)^I = ⟨{x ∈ ∆^I | ∀y((x, y) ∈ R^I_+ → y ∈ C^I_+)}, {x ∈ ∆^I | ∃y((x, y) ∈ R^I_+ ∧ y ∈ C^I_−)}⟩
(≥ n R.C)^I = ⟨{x ∈ ∆^I | #{y | (x, y) ∈ R^I_+ ∧ y ∈ C^I_+} ≥ n}, {x ∈ ∆^I | #{y | (x, y) ∈ R^I_+ ∧ y ∉ C^I_−} < n}⟩
(≤ n R.C)^I = ⟨{x ∈ ∆^I | #{y | (x, y) ∈ R^I_+ ∧ y ∉ C^I_−} ≤ n}, {x ∈ ∆^I | #{y | (x, y) ∈ R^I_+ ∧ y ∈ C^I_+} > n}⟩;

if s_∀∃Q = ± then

(∃R.C)^I = ⟨{x ∈ ∆^I | ∃y((x, y) ∈ R^I_+ ∧ y ∈ C^I_+)}, {x ∈ ∆^I | ∀y((x, y) ∉ R^I_− → y ∈ C^I_−)}⟩
(∀R.C)^I = ⟨{x ∈ ∆^I | ∀y((x, y) ∉ R^I_− → y ∈ C^I_+)}, {x ∈ ∆^I | ∃y((x, y) ∈ R^I_+ ∧ y ∈ C^I_−)}⟩
(≥ n R.C)^I = ⟨{x ∈ ∆^I | #{y | (x, y) ∈ R^I_+ ∧ y ∈ C^I_+} ≥ n}, {x ∈ ∆^I | #{y | (x, y) ∉ R^I_− ∧ y ∉ C^I_−} < n}⟩
(≤ n R.C)^I = ⟨{x ∈ ∆^I | #{y | (x, y) ∉ R^I_− ∧ y ∉ C^I_−} ≤ n}, {x ∈ ∆^I | #{y | (x, y) ∈ R^I_+ ∧ y ∈ C^I_+} > n}⟩.

For a set Γ of concepts, we denote Γ^I_+ = ⋂{C^I_+ | C ∈ Γ}, Γ^I_− = ⋃{C^I_− | C ∈ Γ} and Γ^I = ⟨Γ^I_+, Γ^I_−⟩. Observe that, if Γ is finite, then Γ^I = (⊓Γ)^I.
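To make the constructor semantics concrete, the following sketch (our own illustration, not the authors' implementation) evaluates ¬, ⊓, ⊔ and ∃R.C on pair representations under the s_∀∃Q = + reading, using small hypothetical data.

def neg(c):                      # (¬C)^I = <C-, C+>
    return (c[1], c[0])

def conj(c, d):                  # (C ⊓ D)^I = <C+ ∩ D+, C- ∪ D->
    return (c[0] & d[0], c[1] | d[1])

def disj(c, d):                  # (C ⊔ D)^I = <C+ ∪ D+, C- ∩ D->
    return (c[0] | d[0], c[1] & d[1])

def exists(domain, r, c):        # (∃R.C)^I under s_∀∃Q = +
    pos = {x for x in domain
           if any((x, y) in r[0] and y in c[0] for y in domain)}
    negative = {x for x in domain
                if all(y in c[1] for y in domain if (x, y) in r[0])}
    return (pos, negative)

# Toy data: domain {1, 2, 3}; concept C and role r given as <positive, negative> pairs.
domain = {1, 2, 3}
C = ({1, 2}, {3})
r = ({(1, 2), (2, 3)}, {(3, 1)})
print(exists(domain, r, C))      # -> ({1}, {2, 3})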

Example 1: An inconsistent knowledge system in L referring to electronic devices:

I = {Cellphone, Bluetooth, Laptop, Memory, Size, Weight}
C = {Device, General}
R = {hasGeneral}
A = {General(Memory), General(Size), General(Weight), General(Bluetooth), Device(Cellphone), Device(Laptop), Device(Memory), hasGeneral(Cellphone, Size), hasGeneral(Cellphone, Memory), hasGeneral(Cellphone, Bluetooth), hasGeneral(Laptop, Size), hasGeneral(Cellphone, Weight)}
T = {Device = ∃hasGeneral.⊤}

This knowledge base is inconsistent because both concepts Device and General contain the object Memory.

An interpretation of the inconsistent knowledge base is as follows:

∆^I = {a, b, c, d, e, f}, Cellphone^I = a, Bluetooth^I = b, Laptop^I = c, Memory^I = d, Size^I = e, Weight^I = f, Device^I = {a, c, d}, General^I = {b, d, e, f}.

III BISIMULATION

Bisimulation is a binary relation between nodes of a labeled graph. We will show how to modify and extend bisimulation to deal with richer logic languages. Bisimulation is of interest to researchers and is applied in practice, with three main applications: (i) separating the expressive powers of logic languages; (ii) minimizing interpretations and labeled state transition systems; (iii) concept learning in description logics.

In this section, we consider bisimilarity for concept learning in description logics when inconsistencies occur. Bisimulation has been applied in description logics to concept learning problems with consistent knowledge [10], [30], [12], [27]. The idea is to use models of KB and bisimilarity in those models to guide the search for the concept C.

Let Φ ⊆ {I, O, Q, U, Self} be a set of features, s ∈ S a paraconsistent semantics, and I, I′ s-interpretations. A non-empty binary relation Z ⊆ ∆^I × ∆^I′ is called a (Φ, s)-bisimulation between I and I′ if the following conditions hold for every a ∈ I, x, y ∈ ∆^I, x′, y′ ∈ ∆^I′, A ∈ C, r ∈ R and every role R of ALC_Φ different from U:

(1) Z(a^I, a^I′),
(2) Z(x, x′) ⇒ [A^I_+(x) ⇒ A^I′_+(x′)],
(3) Z(x, x′) ⇒ [A^I_−(x) ⇒ A^I′_−(x′)],
(4) [Z(x, x′) ∧ R^I_+(x, y)] ⇒ ∃y′ ∈ ∆^I′ [Z(y, y′) ∧ R^I′_+(x′, y′)],
(5) if s_∀∃Q = + then
[Z(x, x′) ∧ R^I′_+(x′, y′)] ⇒ ∃y ∈ ∆^I [Z(y, y′) ∧ R^I_+(x, y)],
(6) if s_∀∃Q = ± then
[Z(x, x′) ∧ ¬R^I′_−(x′, y′)] ⇒ ∃y ∈ ∆^I [Z(y, y′) ∧ ¬R^I_−(x, y)],
(7) if O ∈ Φ then
Z(x, x′) ⇒ (x = a^I ⇔ x′ = a^I′),
(8) if Q ∈ Φ then
if Z(x, x′) holds and y_1, ..., y_n (n ≥ 1) are pairwise different elements of ∆^I such that R^I_+(x, y_i) holds for every 1 ≤ i ≤ n, then there exist pairwise different elements y′_1, ..., y′_n of ∆^I′ such that R^I′_+(x′, y′_i) and Z(y_i, y′_i) hold for every 1 ≤ i ≤ n,
(9) if Q ∈ Φ and s_∀∃Q = + then
if Z(x, x′) holds and y′_1, ..., y′_n (n ≥ 1) are pairwise different elements of ∆^I′ such that R^I′_+(x′, y′_i) holds for every 1 ≤ i ≤ n, then there exist pairwise different elements y_1, ..., y_n of ∆^I such that R^I_+(x, y_i) and Z(y_i, y′_i) hold for every 1 ≤ i ≤ n,
(10) if Q ∈ Φ and s_∀∃Q = ± then
if Z(x, x′) holds and y′_1, ..., y′_n (n ≥ 1) are pairwise different elements of ∆^I′ such that ¬R^I′_−(x′, y′_i) holds for every 1 ≤ i ≤ n, then there exist pairwise different elements y_1, ..., y_n of ∆^I such that ¬R^I_−(x, y_i) and Z(y_i, y′_i) hold for every 1 ≤ i ≤ n,
if U ∈ Φ then
(11) ∀x ∈ ∆^I ∃x′ ∈ ∆^I′ Z(x, x′),
(12) ∀x′ ∈ ∆^I′ ∃x ∈ ∆^I Z(x, x′),
if Self ∈ Φ then
(13) Z(x, x′) ⇒ [r^I_+(x, x) ⇒ r^I′_+(x′, x′)],
(14) Z(x, x′) ⇒ [r^I_−(x, x) ⇒ r^I′_−(x′, x′)].

As a consequence, if the above conditions hold, i.e., the s-interpretations I and I′ are (Φ, s)-bisimilar to each other, then I |=_s A iff I′ |=_s A [27].
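The first four conditions already suggest how a candidate relation Z can be checked mechanically on finite interpretations. The sketch below is a minimal Python illustration under an assumed data layout (dicts of pair extensions, our own convention); it covers only conditions (1)-(4) and ignores the feature-dependent conditions (5)-(14).

def is_basic_bisimulation(Z, I, J):
    """Check conditions (1)-(4) for a relation Z between two finite interpretations.

    I and J are dicts (our own layout, not the paper's) with keys:
      'ind'      : {individual name: domain element}
      'concepts' : {concept name: (positive set, negative set)}
      'roles'    : {role name: (positive pairs, negative pairs)}
      'domain'   : set of domain elements
    """
    # (1) Z must relate a^I and a^J for every individual name a.
    if any((I['ind'][a], J['ind'][a]) not in Z for a in I['ind']):
        return False
    for (x, x2) in Z:
        # (2), (3): positive/negative concept membership is preserved left to right.
        for A, (pos, negative) in I['concepts'].items():
            pos2, negative2 = J['concepts'][A]
            if (x in pos and x2 not in pos2) or (x in negative and x2 not in negative2):
                return False
        # (4): every positive r-successor of x is matched by a Z-related
        #      positive r-successor of x2.
        for r, (rpos, _) in I['roles'].items():
            rpos2 = J['roles'][r][0]
            for y in I['domain']:
                if (x, y) in rpos and not any((x2, y2) in rpos2 and (y, y2) in Z
                                              for y2 in J['domain']):
                    return False
    return True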

IV CONCEPT LEARNING FOR PARACONSISTENT DESCRIPTION LOGICS

The concept learning problem is similar to binary classification in traditional machine learning. The difference is that in paraconsistent description logics, objects are described not only by attributes but also by binary relationships between objects. As bisimulation is the notion for characterizing indiscernibility of objects in paraconsistent description logics, it is very useful for concept learning in inconsistent knowledge base systems.

Definition 2 (Learning problem in paraconsistent description logics): Given a finite interpretation I (given as a training information system), a knowledge base KB in a DL L and sets E+, E− of individuals, learn a concept C in L such that:

(1) KB |= C(a) for all a ∈ E+ with a ∉ E−,
(2) KB |= ¬C(a) for all a ∈ E− with a ∉ E+,
(3) KB |= C(a) and KB |= ¬C(a) for all a ∈ E+ with a ∈ E−.

The goal of learning is to find a concept that is correct with respect to the examples. This can be seen as a search process in the space of concepts. A natural idea is to impose an ordering on this search space and to use models of KB and bisimulation in those models to guide the search for C.
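Under the reading of Definition 2 given above, correctness of a candidate concept C with extension pair ⟨C_+, C_−⟩ reduces to a few set comparisons. The helper below is a hypothetical illustration of that check, not part of the paper's system.

def is_correct_concept(c_pos, c_neg, e_pos, e_neg):
    """Check a learned concept C, given as the pair <c_pos, c_neg>, against
    the example sets E+ (e_pos) and E- (e_neg), reading Definition 2 as:
      - purely positive examples must be in C+,
      - purely negative examples must be in C-,
      - inconsistent examples (in both E+ and E-) must be in both parts.
    """
    only_pos = e_pos - e_neg
    only_neg = e_neg - e_pos
    both = e_pos & e_neg
    return (only_pos <= c_pos and
            only_neg <= c_neg and
            both <= (c_pos & c_neg))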

The main idea of this method is to partition the domain ∆^I of the information system I using selectors. Based on this idea, the concept learning approach is broadly described as follows.

Let A_p ∈ C be a concept name standing for the decision attribute and suppose that A_p can be expressed by a concept C in L_{Σ+,Φ+}, where Σ+ ⊆ Σ \ {A_p} and Φ+ ⊆ Φ. Let I be a training information system over this signature. How can we learn that concept C on the basis of I?

The trace of the learning algorithm in this case is as follows:

1) Starting from the partition {∆^I}, we refine this partition sequentially until we reach the partition corresponding to A_p. This refinement process can be stopped earlier when the current partition is consistent with E or satisfies certain conditions.
2) In the process of refining the partition of ∆^I, the blocks created at all steps are Y_1, Y_2, ..., Y_n. Each newly generated block is given a new index by increasing the value of n. For each 1 ≤ i ≤ n, we record the following information:
   • Y_i is characterized by a concept C_i such that C_i^I = Y_i,
   • whether Y_i is split by E,
   • the index of the largest block Y_j such that Y_i ⊆ Y_j and Y_j is not split by E.
3) The current partition is denoted Y = {Y_{i_1}, Y_{i_2}, ..., Y_{i_k}} ⊆ {Y_1, Y_2, ..., Y_n}.
4) When the current partition becomes consistent with A_p, return C_{i_1} ⊔ ... ⊔ C_{i_j}, where i_1, ..., i_j are indices such that Y_{i_1}, ..., Y_{i_j} are all the blocks of the current partition that are subsets of A_p. A sketch of this refinement loop is given below.
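The trace above is essentially a partition-refinement loop. The following sketch is our own simplified illustration: selectors are given as hypothetical precomputed extensions (in the paper they would be basic selectors built from concept names, roles and the constructors of L_{Σ+,Φ+}), blocks are split until they no longer mix positive and negative objects for the decision attribute, and the learned concept is the disjunction of the descriptions of the positive blocks.

def learn_concept(domain, decision_pos, selectors):
    """Bisimulation-style concept learning by partition refinement (a sketch).

    domain       : set of objects of the training interpretation
    decision_pos : objects that are positive for the decision attribute A_p
    selectors    : dict {selector name: extension (subset of domain)} --
                   hypothetical precomputed selector extensions
    Returns descriptions of the blocks contained in decision_pos; their
    disjunction is the learned concept.
    """
    partition = [(set(domain), [])]          # blocks with the selectors used so far
    changed = True
    while changed:
        changed = False
        new_partition = []
        for block, desc in partition:
            # A block is split only if it still mixes positive and negative objects.
            if block <= decision_pos or block.isdisjoint(decision_pos):
                new_partition.append((block, desc))
                continue
            for name, ext in selectors.items():
                inside, outside = block & ext, block - ext
                if inside and outside:       # the selector actually splits the block
                    new_partition.append((inside, desc + [name]))
                    new_partition.append((outside, desc + [f"¬{name}"]))
                    changed = True
                    break
            else:                            # no selector splits this block
                new_partition.append((block, desc))
        partition = new_partition
    # Return descriptions of the blocks that lie inside the decision attribute.
    return [" ⊓ ".join(desc) or "⊤"
            for block, desc in partition if block <= decision_pos]

The bookkeeping from step 2 (the largest enclosing block not split by E) is omitted here for brevity.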

V PRELIMINARY EVALUATION

A. The datasets

We applied the proposed model to a set of electronic devices. We built several datasets including concepts, roles, and individuals. We labeled and tested the datasets with different numbers of inconsistent concepts.

The device dataset contains electronic device information and attributes, including information on 941 types of configurations (concepts), 32 links between objects (roles), and 521 objects (individuals). Each object in the dataset is expressed by a concept. We use data from 7 out of 627 subjects for training and validation; data for the other types of equipment is used for testing.

After some preprocessing steps on these datasets, we tested the inconsistent dataset in Protege; Protege reported an error when running the HermiT reasoner, as shown in Fig. 1.

Fig. 1. Test of the inconsistent dataset in Protege.

The data is inconsistent here because the plastic of the cover is also asserted for the screen, while the cover and the screen are declared disjoint (the two classes are totally unrelated to each other).

B. Experimental results

We ran several experiments with different rates of inconsistent data to evaluate the effect of the proposed algorithm. In order to analyze the contribution of the labeled datasets, we also generated subsets with inconsistent data rates of 25, 30, 35, 40, 45, 50, 55, and 60.

To illustrate the influence of the inconsistency parameter, we additionally measured ten-fold cross-validation accuracy, Recall, Precision, and F1-Measure. The results are shown in Table I.

TABLE I. SYSTEM [column headers: Inconsistent, Accuracy, Precision, Recall, F1-Measure]

Since the inconsistency parameter acts as a termination criterion, we observe, as expected, that lower inconsistency values lead to significant increases in accuracy.

Overall, the presented approach is able to learn accurate concepts from inconsistent data with a reasonably low number of expensive reasoner requests. Note that all the approaches are able to learn in a very expressive language with arbitrarily nested structures, as can be seen in the concepts above. Learning many levels of structure has recently been identified as a key issue for structured machine learning, and our work provides a clear advance on this front. The results show that our approach is competitive with state-of-the-art Semantic Web systems when inconsistency appears in the system.

VI CONCLUSIONS AND FUTURE WORKS

In this paper, a concept learning model for paraconsistent knowledge base systems is introduced and discussed. The key idea in this work is to use models of KB and bisimulation in those models to guide the search for C. This technique, along with the partitioning strategies used, has been examined from both theoretical and experimental aspects. This work can be extended along various directions: we can extend the comparison with other methods, such as refinement operators, and, through further empirical evaluation, examine the correlation between learning algorithms.

REFERENCES

[1] F. Baader, D. Calvanese, D. McGuinness, D. Nardi, and P. Patel-Schneider, editors. Description Logic Handbook. Cambridge University Press, 2002.
[2] L. Badea and S. Nienhuys-Cheng. Refining concepts in description logics. In Proceedings of the 2000 International Workshop on Description Logics (DL2000), Aachen, Germany, August 17-19, 2000, pages 31-44, 2000.
[3] N. Belnap. A useful four-valued logic. In G. Eptein and J. Dunn, editors, Modern Uses of Many Valued Logic, pages 8-37. Reidel, 1977.
[4] P. Besnard and A. Hunter. Quasi-classical logic: Non-trivializable classical reasoning from inconsistent information. In Proc. of ECSQARU'95, volume 946 of LNCS, pages 44-51. Springer, 1995.
[5] L. Bühmann, J. Lehmann, and P. Westphal. DL-Learner - a framework for inductive learning on the semantic web. J. Web Sem., 39:15-24, 2016.
[6] L. Bühmann, J. Lehmann, P. Westphal, and S. Bin. DL-Learner structured machine learning on semantic web data. In Companion of The Web Conference 2018, WWW 2018, Lyon, France, April 23-27, 2018, pages 467-471, 2018.
[7] W. W. Cohen and H. Hirsh. Learning the CLASSIC description logic: Theoretical and experimental results. In Proceedings of the 4th International Conference on Principles of Knowledge Representation and Reasoning (KR'94), Bonn, Germany, May 24-27, 1994, pages 121-133, 1994.
[8] N. C. A. da Costa. Erratum: "On the theory of inconsistent formal systems". Notre Dame Journal of Formal Logic, 16(4):608.
[9] A. Divroodi, Q.-T. Ha, L. Nguyen, and H. Nguyen. On C-learnability in description logics. In Proc. of ICCCI'2012 (1), volume 7653 of LNCS, pages 230-238. Springer, 2012.
[10] A. R. Divroodi, Q. Ha, L. A. Nguyen, and H. S. Nguyen. On the possibility of correct concept learning in description logics. Vietnam J. Computer Science, 5(1):3-14, 2018.
[11] N. Fanizzi, C. d'Amato, and F. Esposito. DL-FOIL concept learning in description logics. In Inductive Logic Programming, 18th International Conference, ILP 2008, Prague, Czech Republic, September 10-12, 2008, Proceedings, pages 107-121, 2008.
[12] Q.-T. Ha, T.-L.-G. Hoang, L. Nguyen, H. Nguyen, A. Szałas, and T.-L. Tran. A bisimulation-based method of concept learning for knowledge bases in description logics. In Proc. of SoICT'2012, pages 241-249. ACM, 2012.
[13] K. Höffner, S. Walter, E. Marx, R. Usbeck, J. Lehmann, and A. N. Ngomo. Survey on challenges of question answering in the semantic web. Semantic Web, 8(6):895-920, 2017.
[14] I. Horrocks, P. Patel-Schneider, and F. van Harmelen. From SHIQ and RDF to OWL: The making of a web ontology language. Journal of Web Semantics, 1(1):7-26, 2003.
[15] A. Hunter. Paraconsistent logics. In D. Gabbay and P. Smets, editors, Handbook of Defeasible Reasoning and Uncertain Information, pages 11-36. Kluwer, 1998.
[16] A. Hunter. Reasoning with contradictory information using quasi-classical logic. J. Log. Comput., 10(5):677-703, 2000.
[17] L. Iannone, I. Palmisano, and N. Fanizzi. An algorithm based on counterfactuals for concept learning in the semantic web. Appl. Intell., 26(2):139-159, 2007.
[18] B. Konev, A. Ozaki, and F. Wolter. A model for learning description logic ontologies based on exact learning. In Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence, February 12-17, 2016, Phoenix, Arizona, USA, pages 1008-1015, 2016.
[19] P. Lambrix and J. Maleki. Learning composite concepts in description logics: A first step. In Foundations of Intelligent Systems, 9th International Symposium, ISMIS '96, Zakopane, Poland, June 9-13, 1996, Proceedings, pages 68-77, 1996.
[20] J. Lehmann and P. Hitzler. Concept learning in description logics using refinement operators. Machine Learning, 78(1-2):203-250, 2010.
[21] Y. Ma and P. Hitzler. Paraconsistent reasoning for OWL 2. In A. Polleres and T. Swift, editors, Proc. of Web Reasoning and Rule Systems, volume 5837 of LNCS, pages 197-211. Springer, 2009.
[22] Y. Ma, P. Hitzler, and Z. Lin. Paraconsistent reasoning for expressive and tractable description logics. In Proc. of Description Logics, 2008.
[23] J. Maluszyński, A. Szałas, and A. Vitória. Paraconsistent logic programs with four-valued rough sets. In C.-C. Chan, J. Grzymala-Busse, and W. Ziarko, editors, Proc. of RSCTC'2008, volume 5306 of LNAI, pages 41-51, 2008.
[24] R. Mehri and V. Haarslev. Applying machine learning to enhance optimization techniques for OWL reasoning. In Proceedings of the 30th International Workshop on Description Logics, Montpellier, France, July 18-21, 2017, 2017.
[25] L. Nguyen and A. Szałas. Three-valued paraconsistent reasoning for Semantic Web agents. In P. J. et al., editors, Proc. of KES-AMSTA 2010, Part I, volume 6070 of LNAI, pages 152-162. Springer, 2010.
[26] L. Nguyen and A. Szałas. Logic-based roughification. In A. Skowron and Z. Suraj, editors, Rough Sets and Intelligent Systems (To the Memory of Professor Zdzisław Pawlak), Vol. 1, pages 529-556. Springer, 2012.
[27] L. A. Nguyen, T. H. K. Nguyen, N. T. Nguyen, and Q. Ha. Bisimilarity for paraconsistent description logics. Journal of Intelligent and Fuzzy Systems, 32(2):1203-1215, 2017.
[28] N. T. Nguyen. Inconsistency of knowledge and collective intelligence. Cybernetics and Systems, 39(6):542-562, 2008.
[29] G. Rizzo, N. Fanizzi, J. Lehmann, and L. Bühmann. Integrating new refinement operators in terminological decision trees learning. In Knowledge Engineering and Knowledge Management - 20th International Conference, EKAW 2016, Bologna, Italy, November 19-23, 2016, Proceedings, pages 511-526, 2016.
[30] T. Tran, Q. Ha, T. Hoang, L. Nguyen, and H. Nguyen. Bisimulation-based concept learning in description logics. Fundam. Inform., 133(2-3):287-303, 2014.
[31] Q. B. Vo, T. H. Tran, and T. H. K. Nguyen. On the use of surplus division to facilitate efficient negotiation in the presence of incomplete information. In Knowledge-Based and Intelligent Information & Engineering Systems: Proceedings of the 20th International Conference KES-2016, York, UK, 5-7 September 2016, pages 295-304, 2016.
