
Lecture Notes in Business Information Processing 45

Series Editors

Wil van der Aalst

Eindhoven Technical University, The Netherlands


Web Information

Systems and Technologies

5th International Conference, WEBIST 2009

Lisbon, Portugal, March 23-26, 2009

Revised Selected Papers



José Cordeiro

Joaquim Filipe

Department of Systems and Informatics

Polytechnic Institute of Setúbal

Rua do Vale de Chaves, Estefanilha, 2910-761 Setúbal, Portugal

E-mail: {jcordeir,j.filipe}@est.ips.pt

Library of Congress Control Number: 2010924048

ACM Computing Classification (1998): H.3.5, J.1, K.4.4, I.2

ISBN-10 3-642-12435-6 Springer Berlin Heidelberg New York

ISBN-13 978-3-642-12435-8 Springer Berlin Heidelberg New York

This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, re-use of illustrations, recitation, broadcasting, reproduction on microfilms or in any other way, and storage in data banks. Duplication of this publication or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965, in its current version, and permission for use must always be obtained from Springer. Violations are liable to prosecution under the German Copyright Law.


Preface

This book contains a selection of the best papers from WEBIST 2009 (the 5th International Conference on Web Information Systems and Technologies), held in Lisbon, Portugal, in 2009, organized by the Institute for Systems and Technologies of Information, Control and Communication (INSTICC), in collaboration with ACM SIGMIS and co-sponsored by the Workflow Management Coalition (WFMC).

The purpose of the WEBIST series of conferences is to bring together researchers, engineers and practitioners interested in the technological advances and business applications of Web-based information systems. The conference has four main tracks, covering different aspects of Web information systems, including Internet Technology, Web Interfaces and Applications, Society, e-Communities, e-Business and e-Government.

WEBIST 2009 received 203 paper submissions from 47 countries on all continents. A double-blind review process was enforced, with the help of more than 150 experts from the International Program Committee, each of them specialized in one of the main conference topic areas. After reviewing, 28 papers were selected to be published and presented as full papers, and 44 additional papers, describing work in progress, were published and presented as short papers. Furthermore, 35 papers were presented as posters. The full-paper acceptance ratio was 13%, and the total oral paper acceptance ratio was 36%.

Therefore, we hope that you find the papers included in this book interesting, and we trust they may represent a helpful reference for all those who need to address any of the research areas mentioned above.

Joaquim Filipe


Sérgio Brissos INSTICC, Portugal

Marina Carvalho INSTICC, Portugal

Helder Coelhas INSTICC, Portugal

Andreia Costa INSTICC, Portugal

Bruno Encarnação INSTICC, Portugal

Raquel Martins INSTICC, Portugal

Vitor Pedrosa INSTICC, Portugal

Program Committee

Christof van Nimwegen, Belgium

Ajith Abraham, USA

Isaac Agudo, Spain

Abdullah Alghamdi, Saudi Arabia

Rachid Anane, UK

Margherita Antona, Greece

Matteo Baldoni, Italy

Cristina Baroglio, Italy

David Bell, UK

Orlando Belo, Portugal

Ch Bouras, Greece

Stéphane Bressan, Singapore

Tobias Buerger, Austria

Maria Claudia Buzzi, Italy

Elena Calude, New Zealand
Nunzio Casalino, Italy
Sergio de Cesare, UK
Maiga Chang, Canada
Shiping Chen, Australia
Dickson Chiu, China
Isabelle Comyn-Wattiau, France
Michel Crampes, France
Daniel Cunliffe, UK
Alfredo Cuzzocrea, Italy
Steven Demurjian, USA

Y. Ding, USA
Schahram Dustdar, Austria
Barry Eaglestone, UK


Atilla Elci, Turkey

Vadim Ermolayev, Ukraine

Josep-lluis Ferrer-gomila, Spain

Filomena Ferrucci, Italy

Giovanni Fulantelli, Italy

Erich Gams, Austria

Dragan Gasevic, Canada

Nicola Gessa, Italy

José Antonio Gil, Spain

Karl Goeschka, Austria

Stefanos Gritzalis, Greece

Vic Grout, UK

Francesco Guerra, Italy

Aaron Gulliver, Canada

Abdelkader Hameurlain, France

Ioannis Hatzilygeroudis, Greece

Stylianos Hatzipanagos, UK

Dominic Heutelbeck, Germany

Pascal Hitzler, Germany

Wen Shyong Hsieh, Taiwan

Christian Huemer, Austria

Alexander Ivannikov, Russian Federation

Kai Jakobs, Germany

Ivan Jelinek, Czech Republic

Qun Jin, Japan

Carlos Juiz, Spain

Michail Kalogiannakis, Greece

Jaakko Kangasharju, Finland

George Karabatis, USA

Frank Kargl, Germany

Roland Kaschek, New Zealand

Sokratis Katsikas, Greece

Ralf Klamma, Germany

Agnes Koschmider, Germany

Tsvi Kuflik, Israel

Daniel Lemire, Canada

Tayeb Lemlouma, France

Kin Li, Canada

Claudia Linnhoff-Popien, Germany

Pascal Lorenz, France

Vicente Luque-Centeno, Spain

Cristiano Maciel, Brazil

Andreas Ninck, Switzerland
Alex Norta, Finland
Dusica Novakovic, UK
Andrea Omicini, Italy
Kok-Leong Ong, Australia
Jose A. Onieva, Spain
Jun Pang, Luxembourg
Laura Papaleo, Italy
Eric Pardede, Australia
Kalpdrum Passi, Canada
Viviana Patti, Italy
Günther Pernul, Germany
Josef Pieprzyk, Australia
Luís Ferreira Pires, The Netherlands
Thomas Risse, Germany
Danguole Rutkauskiene, Lithuania
Maytham Safar, Kuwait
Alexander Schatten, Austria
Jochen Seitz, Germany
Tony Shan, USA
Quan Z. Sheng, Australia
Keng Siau, USA
Miguel Angel Sicilia, Spain
Marianna Sigala, Greece
Pedro Soto-Acosta, Spain
J. Michael Spector, USA
Martin Sperka, Slovak Republic
Eleni Stroulia, Canada
Hussein Suleman, South Africa
Junichi Suzuki, USA
Ramayah T., Malaysia
Taro Tezuka, Japan
Dirk Thissen, Germany
Arun Kumar Tripathi, Germany
Th. Tsiatsos, Greece
Michail Vaitis, Greece
Juan D. Velasquez, Chile
Maria Esther Vidal, Venezuela
Viacheslav Wolfengagen, Russian Federation
Lu Yan, UK


Auxiliary Reviewers

Michelle Annett, Canada

David Chodos, Canada

Zafer Erenel, Turkey

Nils Glombitza, Germany

Mehdi Khouja, Spain

Xitong Li, China

Antonio Molina Marco, Spain

Sergio Di Martino, Italy

Bruce Matichuk, Canada

Christine Mueller, New Zealand
Eni Mustafaraj, USA
Parisa Naeimi, Canada
Stephan Pöhlsen, Germany
Axel Wegener, Germany
Christian Werner, Germany
Jian Yu, Australia

Donglai Zhang, Australia

Invited Speakers

Peter A Bruck World Summit Award, Austria

Dieter A Fensel University Innsbruck, Austria

Ethan Munson University of Wisconsin – Milwaukee, USA

Mats Daniels Uppsala University, Sweden


Part I: Internet Technology

Collaboration and Human Factor as Drivers for Reputation System Effectiveness 3
Guido Boella and Marco Remondino

Agent-Oriented Programming for Client-Side Concurrent Web 2.0 Applications 17
Mattia Minotti, Giulio Piancastelli, and Alessandro Ricci

The SHIP: A SIP to HTTP Interaction Protocol: Advanced Thin-Client Architecture for IMS Applications 30
Joachim Zeiß, Rene Gabner, Sandford Bessler, and Marco Happenhofer

Efficient Authorization of Rich Presence Using Secure and Composed Web Services 44
Li Li and Wu Chou

Part II: Web Interfaces and Applications

Information Supply of Related Papers from the Web for Scholarly e-Community 61
Muhammad Tanvir Afzal

When Playing Meets Learning: Methodological Framework for Designing Educational Games 73
Stephanie B. Linek, Daniel Schwarz, Matthias Bopp, and Dietrich Albert

SiteGuide: A Tool for Web Site Authoring Support 86
Vera Hollink, Viktor de Boer, and Maarten van Someren

ArhiNet – A Knowledge-Based System for Creating, Processing and Retrieving Archival eContent 99
Ioan Salomie, Mihaela Dinsoreanu, Cristina Pop, and Sorin Suciu

Optimizing Search and Ranking in Folksonomy Systems by Exploiting Context Information 113
Fabian Abel, Nicola Henze, and Daniel Krause

Adaptation of the Domain Ontology for Different User Profiles: Application to Conformity Checking in Construction 128
Anastasiya Yurchyshyna, Catherine Faron-Zucker, Nhan Le Thanh, and Alain Zarli

The RDF Protune Policy Editor: Enabling Users to Protect Data in the Semantic Web 142
Fabian Abel, Juri Luca De Coi, Nicola Henze, Arne Wolf Koesling, Daniel Krause, and Daniel Olmedilla

An Unsupervised Rule-Based Method to Populate Ontologies from Text 157
Eduardo Motta, Sean Siqueira, and Alexandre Andreatta

Web Spam, Social Propaganda and the Evolution of Search Engine Rankings 170
Panagiotis Takis Metaxas

Part III: Society, e-Business and e-Government

Making the Invisible Visible: Design Guidelines for Supporting Social Awareness in Distributed Collaboration 185
Monique Janneck

Interaction Promotes Collaboration and Learning: Video Analysis of Algorithm Visualization Use during Collaborative Learning 198
Mikko-Jussi Laakso, Niko Myller, and Ari Korhonen

Modelling the B2C Marketplace: Evaluation of a Reputation Metric for e-Commerce 212
Anna Gutowska and Andrew Sloane

Part IV: Web Intelligence

Using Scientific Publications to Identify People with Similar Interests 229
Stanley Loh, Fabiana Lorenzi, Roger Granada, Daniel Lichtnow, Leandro Krug Wives, and José Palazzo Moreira de Oliveira

Website-Level Data Extraction 242
Jianqiang Li and Yu Zhao

Anti-folksonomical Recommender System for Social Bookmarking Service 256
Akira Sasaki, Takamichi Miyata, Yasuhiro Inazumi, Aki Kobayashi, and Yoshinori Sakai

Classifying Structured Web Sources Using Support Vector Machine and Aggressive Feature Selection 270
Hieu Quang Le and Stefan Conrad

Scalable Faceted Ranking in Tagging Systems 283
José I. Orlicki, J. Ignacio Alvarez-Hamelin, and Pablo I. Fierens

Answering Definition Questions: Dealing with Data Sparseness in Lexicalised Dependency Trees-Based Language Models 297
Alejandro Figueroa and John Atkinson

Author Index 311


Part I

Internet Technology


J Cordeiro and J Filipe (Eds.): WEBIST 2009, LNBIP 45, pp 3–16, 2010

© Springer-Verlag Berlin Heidelberg 2010

Collaboration and Human Factor as Drivers for Reputation System Effectiveness

Guido Boella and Marco Remondino
University of Turin, Department of Computer Science, Corso Svizzera 185, Turin, Italy

{guido,remond}@di.unito.it

Abstract. Reputation management is about evaluating an agent's actions and other agents' opinions about those actions, reporting on those actions and opinions, and reacting to that report, thus creating a feedback loop. This social mechanism has been successfully used, through Reputation Management Systems (RMSs), to classify agents within normative systems. Most RMSs rely on the feedback given by the members of the social network in which the RMS itself operates. In this way, the reputation index can be seen as an endogenous and self-produced indicator, created by the users for the users' benefit. This implies that users' participation and collaboration is a key factor for the effectiveness of a RMS. In this work the above factor is explored by means of an agent based simulation, and is tested on a P2P network for file sharing.

Keywords: Reputation system, Agent based simulation, Peer to peer network

1 Introduction

In everyday life, when a choice subject to limited resources (such as money, time, and so on) must be made, the overwhelming number of possibilities that people have to choose from means that something is needed to help them make choices. People often follow the advice of others when it comes to which products to buy, which movies to watch, which music to listen to, which websites to visit, and so on. This is a social attitude that uses others' experience. They base their judgments of whether or not to follow this advice partially upon the other person's reputation in helping to find reliable and useful information, even with all the noise.

Using and building upon early collaborative filtering techniques, reputation management software gathers ratings for people, companies, and information sources. Since this is a distributed way of computing reputation, it is implicitly founded on two main assumptions: 1 - the correctness of shared information and 2 - the participation of users in the system. While the negation of the first could be considered as an attack on the system itself, performed by users trying to crash it, and its occurrence is quite rare, the second factor is often underestimated when designing a collaborative RMS. Users without a vision of the macro level often use the system, but simply forget to collaborate, since collaborating seems to be a waste of time.

The purpose of the present work is to give a qualitative and, when possible, quantitative evaluation of the collaborative factor in RMSs, by means of an empirical


analysis conducted via an agent based simulation. Thus, the main research question is: what is the effectiveness of a RMS when the collaboration rate coming from the involved users changes?

In order to answer this question, an agent based model is introduced in the paper, representing a peer-to-peer (P2P) network for file sharing. A basic RMS is applied to the system, in order to help users choose the best peers to download from. In fact, some of the peers are malicious, and they try to exploit the way in which the P2P system rewards users for sharing files, by uploading inauthentic resources when they do not own the real ones. The model is described in detail and the results are evaluated through a multi-run ceteris paribus technique, in which only one setting is changed at a time. In particular, the most important parameters which will be compared, to evaluate the effectiveness of the RMS, are: verification of the files, performed by the users, and negative payoff, given in case a resource is reported as being inauthentic. The verification of the files, i.e., the users' collaboration, is an exogenous factor for the RMS, while the negative payoff is an endogenous and thus directly controllable factor, from the point of view of a RMS designer.

The P2P framework has been chosen since there are many works focusing on reputation as a system to overcome the issue of inauthentic files but, when evaluating the effectiveness of the system, the authors [1] usually refer to idealized situations in which users always verify files for authenticity as soon as they start a download. This is obviously not the case in the real world: first of all, most resources need to be at least partially owned in order to be checked. Besides, some users could simply decide not to check them for a long time. Even worse, other users could simply forget about a downloaded resource and never check it. Last but not least, other users might verify it, but simply not report anything if it is not authentic.

2 Reputation and P2P Systems

Since uploading bandwidth is a limited resource and the download priority queues are based on an uploading-credit system to reward the most collaborative peers on the network, some malicious users create inauthentic files just to have something to share, thus obtaining credits without being penalized for their behavior. To balance this, RMSs have been introduced, which dynamically assign to the users a reputation value, considered in the decision whether to download files from them or not. RMSs have been proven, via simulation, to make P2P networks safe from attacks by malicious peers, even when they form coalitions. In networks of millions of peers attacks are less frequent, but users still have a benefit from sharing inauthentic files. It is not clear whether RMSs can be effective against this selfish widespread misbehavior, since they make several ideal assumptions about the behavior of peers, who have to verify files to discover inauthentic ones. This operation is assumed to be automatic and with no costs. Moreover, since files are usually shared before downloading is completed, peers downloading inauthentic files unwillingly spread them if they are not cooperative enough to verify their download as soon as possible. In the present work, the creation and spreading of inauthentic files is not considered as an attack, but as a way in which some agents try to raise their credits while not possessing the real resource that is being searched by others.


A basic RMS is introduced, acting as a positive or negative reward for the users, and the human factor behind RMSs is considered, in the form of costs and benefits of verifying files. Most approaches, most notably EigenTrust [2], assume that verification is made automatically upon the start of the download of the file. By looking, as we do, at the collaboration factor in dealing with RMSs, we can question their real applicability, an issue which remains unanswered in the simulation based tests made by the authors. To provide an answer to this question it is necessary to build a simulation tool which aims at a more accurate modeling of the users' behavior rather than at modeling the reputation system in detail.

The agents are randomly connected on a graph and feature the following parameters: unique ID, reputation value, set of neighbors, set of owned resources, set of goals (resources), set of resources being downloaded, set of suppliers (by resource). At each time step, agents reply to requests for download, perform requests (according to their goals) or verify files. While an upload is performed, if possible, each time another agent makes a request, requesting a resource and verification are performed in alternative. The verification ratio is a parameter of the simulation and acts stochastically on the agents' behavior. All agents belong to two disjoint classes: malicious agents and loyal ones. They have different behaviors concerning uploading, while they feature the same behavior about downloading and verification: malicious agents are simply agents who exploit for selfishness the weaknesses of the system, by always uploading inauthentic files if they do not own the authentic ones. Loyal agents, on the contrary, only upload a resource if they own it. A number of resources are introduced in the system at the beginning of the simulation, representing both the owned objects and the agents' goals. For coherence, an owned resource cannot be a goal for the same agent. The distribution of the resources is stochastic. During the simulation, other resources (and corresponding goals) are stochastically distributed among the agents. Each agent (metaphorically, the P2P client) keeps track of the providers, and this information is preserved also after the download is finished.


In order to test the limits and effectiveness of a reputation mechanism under different user behaviors, an agent based simulation of a P2P network is used as methodology, employing reactive agents to model the users; these have a deterministic behavior based on the class they belong to (malicious or loyal) and a stochastic idealized behavior regarding the verifying policy. Their use shows how the system works at an aggregate level. However, reactive agents can also be regarded as a limit for our approach, since real users have a flexible behavior and adapt themselves to what they observe. We built a model which is less idealized about the verifying factor, but it is still rigid when considering the agents' behavior about sending out inauthentic files. That is why we envision the necessity to employ cognitive agents based on reinforcement learning techniques. However, reactive agents can also be a key point, in the sense that they allow the results to be easily readable and comparable among them, while the use of cognitive agents would have moved the focus from the evaluation of the collaborative factor to that of real users' behavior when facing a RMS, which is very interesting, but beyond the purpose of the present work. In future works, this paradigm for agents will be considered.

4 Model Specifications and Parameters

The P2P network is modeled as an undirected and non-reflexive graph. Each node is an agent, representing a P2P user. Agents are reactive: their behavior is thus determined a priori, and the strategies are the result of the stimuli coming from the environment and of the condition-action rules. Their behavior is illustrated in the next section. Formally, the multi agent system is defined as MAS = <Ag, Rel>, with Ag the set of nodes and Rel the set of edges. Each edge between two nodes is a link between the agents and is indicated by the tuple <a_i, a_j>, with a_i and a_j belonging to Ag. Each agent a_i features the following internal parameters:

– Unique ID (identifier),

– Reputation value (or credits),

– Set of the agent's neighbors N(a_i),

– Set of owned resources RO(a_i),

– Set of goals (resource identifiers) RD(a_i),

– Set of resources being downloaded Ris(a_i),

– Set of pairs <supplier, resource> P(a_i)

A resource is a tuple <Name, Authenticity>, where Name is the resource identifier and Authenticity is a Boolean attribute indicating whether the resource is authentic or not. The agent owning the resource, however, does not have access to this attribute unless she verifies the file.
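To make the model concrete, here is a minimal, illustrative Java sketch of the per-agent state and of the resource tuple described above; it is not the authors' implementation, and all class and field names are assumptions chosen for readability.

import java.util.*;

// Sketch of a resource <Name, Authenticity>: the owner cannot read the flag until verification.
class Resource {
    final String name;        // resource identifier
    final boolean authentic;  // hidden attribute, revealed only by verifying the file
    Resource(String name, boolean authentic) { this.name = name; this.authentic = authentic; }
}

// Sketch of the internal parameters of an agent a_i listed above.
class PeerAgent {
    final int id;                                              // unique ID
    double reputation;                                         // reputation value (credits)
    final Set<PeerAgent> neighbors = new HashSet<>();          // N(a_i)
    final Set<String> owned = new HashSet<>();                 // RO(a_i)
    final Set<String> goals = new HashSet<>();                 // RD(a_i), kept disjoint from RO(a_i)
    final Map<String, PeerAgent> suppliers = new HashMap<>();  // P(a_i): provider of each download
    PeerAgent(int id, double initialReputation) { this.id = id; this.reputation = initialReputation; }
}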

The resources represent the objects being shared on the P2P network. A number of resources are introduced in the system at the beginning of the simulation; they represent both the owned objects and the agents' goals. For coherence, an owned resource cannot be a goal for the same agent. The distribution of the resources is stochastic. During the simulation, other resources are stochastically introduced. In this way, each agent in the system has the same probability of owning a resource, independently from her inner nature (malicious or loyal).


In the same way also the corresponding new goals are distributed to the agents; the difference is that the distribution probability is constrained by the resource being possessed by some agent. Formally, let R be the set of all the resources in the system. We have that:

RD(a_i) ⊆ R, RO(a_i) ⊆ R, RD(a_i) ∩ RO(a_i) = Ø (1)

Each agent a_i in the system features a set of neighbors N(a_i), containing all the agents to which she is directly linked in the graph:

N(a_i) = {a_j ∈ Ag | <a_i, a_j> ∈ Rel} (2)

This set characterizes the knowledge of each agent about the environment. The implemented protocol is a totally distributed one, so looking for a resource relies heavily on the set of neighbors.

In the real world the shared resources often have big dimensions; after finding the resource, a lot of time is usually required for the complete download. In order to simulate this, the set of "resources being downloaded" (Ris) is introduced. These are described as Ris = <resource ID, completion, check status>, where ID is the resource identifier, completion is the percentage already downloaded and "check status" indicates whether the resource has been checked for authenticity or not. In particular, it can be not yet verified, verified and authentic, or verified and inauthentic:

check status ∈ {NOT CHECKED, AUTH, INAUTH} (3)

Another piece of information is the ID of the provider of a certain resource, identified by P(a_i). Each agent keeps track of those who are uploading to him, and this information is preserved also after the download is finished. Real systems allow the same resource to be downloaded in parallel from many providers, to improve performance and to split the bandwidth load. This simplification should not affect the aggregate result of the simulation, since the negative payoff would reach more agents instead of just one (so the case with multiple providers is a sub-case of that with a single provider).
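As a small illustration (again an assumption, not the original code), an entry of Ris and its check status can be sketched as:

// Sketch of an entry of Ris = <resource ID, completion, check status>.
enum CheckStatus { NOT_CHECKED, AUTH, INAUTH }

class DownloadInProgress {
    final String resourceId;       // resource identifier
    final int providerId;          // the peer this resource is being downloaded from
    double completion;             // percentage already downloaded (0..100)
    CheckStatus checkStatus = CheckStatus.NOT_CHECKED;
    DownloadInProgress(String resourceId, int providerId) {
        this.resourceId = resourceId;
        this.providerId = providerId;
    }
}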

4.1 The Reputation Model

In this work we assume a simple, idealized model of reputation, since the objective is not to prove the effectiveness of a particular reputation algorithm but to study the effect of users' behavior on a reputation system. We use a centralized system which assumes the correctness of the information provided by users, e.g., it is not possible to give an evaluation of a user with whom there was no interaction. The reason is that we focus on the behavior of common agents and not on hackers who attack the system by manipulating the code of the peer application. In the system there are two reputation thresholds: the first and higher one, under which it is impossible to ask other agents for resources, and the second, lower than the other, which makes it impossible even to share the owned files. This guarantees that an agent that falls under the first one (because she shared too many inauthentic files) can still regain credits by sharing authentic ones and come back over the first threshold. On the contrary, if she continues sharing inauthentic files, she will fall under the second threshold, being de facto excluded from the network, while still being a working link from and to other agents.
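Read operationally, the two thresholds act as two permission checks; the following sketch is only an illustration of that reading, with assumed names, not the original implementation:

// Sketch of the two reputation thresholds: requesting is forbidden below the first,
// sharing is forbidden below the second (lower) one.
class ReputationPolicy {
    final double requestThreshold;  // higher threshold
    final double shareThreshold;    // lower threshold

    ReputationPolicy(double requestThreshold, double shareThreshold) {
        this.requestThreshold = requestThreshold;
        this.shareThreshold = shareThreshold;
    }
    boolean mayRequest(double reputation) { return reputation >= requestThreshold; }
    boolean mayShare(double reputation)   { return reputation >= shareThreshold; }
}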


4.2 The User Model

Peers are reactive agents replying to requests, performing requests or verifying files. While an upload is performed each time another agent makes a request, requesting a file and verification are performed (in alternative) when it is the turn of the agent in the simulation. All agents belong to two disjoint classes: malicious agents and loyal agents. The classes have different behaviors concerning uploading, while they have the same behavior concerning downloading and verification: malicious agents are just common agents who exploit for selfishness the weaknesses of the system. When it is the turn of another peer, and he requests a file from the agent, the agent has to decide whether to comply with the request and how to comply with it.

- The decision to upload a file is based on the reputation of the requester: if it is below the "replying threshold", the requestee denies the upload (even if the requestee is a malicious agent).

- The "replyTo" method refers to the reply each agent gives when asked for a resource When the agent is faced with a request he cannot comply but the requester's reputation is above the "replying threshold", if he belongs to the malicious class, he has to decide whether to create and upload an inauthentic file by copying and renaming one of his other resources The decision is based depending on a parameter

If the resource is owned, she sends it to the requesting agent, after verifying if her reputation is higher than the "replying threshold" Each agent performs at each round
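The following is a minimal sketch of that reply decision; the probability parameter and method names are assumptions used for illustration only:

import java.util.Random;

// Sketch of the "replyTo" decision: deny, send the owned (authentic) file, or,
// for malicious agents only, fabricate an inauthentic copy with some probability.
class ReplyLogic {
    enum Reply { DENY, AUTHENTIC, INAUTHENTIC }
    private final Random rng = new Random();

    Reply replyTo(boolean requesterAboveThreshold, boolean resourceOwned,
                  boolean malicious, double inauthenticProbability) {
        if (!requesterAboveThreshold) return Reply.DENY;   // requester below the replying threshold
        if (resourceOwned)            return Reply.AUTHENTIC;
        if (malicious && rng.nextDouble() < inauthenticProbability)
            return Reply.INAUTHENTIC;                      // copy and rename another resource
        return Reply.DENY;                                 // loyal agents never fake a resource
    }
}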

Each agent performs, at each round of the simulation, two steps:

1) Performing the downloads in progress. For each resource being downloaded, the agent checks whether the download is finished. If not, the system checks whether the resource is still present in the provider's "sharing pool". In case it is no longer there, the download is stopped and the resource is removed from the list of "owned resources". Each file is formed by n units; when 2/n of the file has been downloaded, the file gets automatically owned and shared also by the agent that is downloading it.

2) Making new requests to other peers or verifying the authenticity of a file downloaded or in downloading, but not both:

a) When searching for a resource, all the agents within a depth of 3 from the requesting one are considered. The list is ordered by reputation. A method is invoked on every agent with a reputation higher than the "requests threshold", until the resource is found or the list reaches its end (a sketch of this lookup is given after this list). If the resource is found, it is put in the "downloading list", the goal is cancelled, the supplier is recorded and linked with that specific download in progress, and her reputation is increased according to the value defined in the simulation parameters. If no resource is found, the goal is given up.

b) Verification means that a file is previewed and, if the content does not correspond to its description or filename, this fact is notified to the reputation system. The verification phase requires that at least one file is in progress and beyond the 2/n threshold described above. An agent has a given probability to verify instead of looking for a new file. In case the agent verifies, a random resource is selected among those "in download" and not yet checked. If it is authentic, the turn is over.


Otherwise, a "punishment" method is invoked, the resource is deleted from the "downloading" and the "owned" lists, and it is put among the "goals" once again.
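The neighbor lookup in step 2a can be illustrated by the following sketch, which reuses the PeerAgent class sketched earlier; the depth limit and ordering follow the description above, everything else is an assumption:

import java.util.*;

// Sketch of step 2a: collect all peers within depth 3 of the requester,
// keep only those above the "requests threshold", and order them by reputation.
class ResourceSearch {
    static List<PeerAgent> candidates(PeerAgent requester, double requestsThreshold) {
        Set<PeerAgent> visited = new HashSet<>(List.of(requester));
        List<PeerAgent> found = new ArrayList<>();
        List<PeerAgent> frontier = List.of(requester);
        for (int depth = 0; depth < 3; depth++) {
            List<PeerAgent> next = new ArrayList<>();
            for (PeerAgent a : frontier)
                for (PeerAgent n : a.neighbors)
                    if (visited.add(n)) { found.add(n); next.add(n); }
            frontier = next;
        }
        found.removeIf(a -> a.reputation < requestsThreshold);  // only reputable peers are asked
        found.sort(Comparator.comparingDouble((PeerAgent a) -> a.reputation).reversed());
        return found;
    }
}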

The RMS is based on the "punishment" method, which lowers the supplier's reputation, deletes her from the "providers" list in order to avoid cyclic punishment chains, and recursively invokes the "punishment" method on the punished provider. A punishment chain is thus created, reaching the creator of the inauthentic file and all the aware or unaware agents that contributed to spreading it.
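A minimal sketch of that recursive chain, again reusing the PeerAgent sketch and assuming illustrative names, could be:

// Sketch of the punishment chain: lower the supplier's reputation, forget the supplier
// (to avoid cycles), and recursively punish whoever supplied the same file to her.
class PunishmentChain {
    static void punish(PeerAgent reporter, String resourceId, double negativePayoff) {
        PeerAgent supplier = reporter.suppliers.remove(resourceId); // delete from the providers list
        if (supplier == null) return;                               // no recorded provider: the chain ends
        supplier.reputation -= negativePayoff;                      // negative payoff for spreading the file
        punish(supplier, resourceId, negativePayoff);               // walk back along the spreading chain
    }
}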

5 Results

The simulation goes on until at least one goal exists and/or a download is still in progress. The following table gives a summary of the most important parameters for the experiments:

Table 1 The main parameters

Parameters Value

Initial reputation (credits) for each agent 50

Total number of resources at the beginning 50

Number of turns for introduction of new resources 1

Number of new resources to introduce 3

In all the experiments, the other relevant parameters are fixed, while the following ones change:

Table 2 The scenarios

The experiments are carried out by a batch execution mode for the simulation. This executes the simulation 50 times with the same parameters, sampling the inauthentic/total ratio every 50 steps. This is to overcome the sampling effect: many variables in the simulation are stochastic, so this technique gives a high level of confidence for the produced results. We have a total of 40 samples per run, i.e., 2000 tuples overall. After all the executions are over, the average for each sampled time step is calculated and represented in a chart. In the same way, the grand average of the average reputations for loyal and malicious agents is calculated and represented in a bar chart. In Fig. 1, the chart with the trend of inauthentic/total resources is represented for the results coming from experiments 1, 2, 3, 5 and 6; the result of experiment 4 is discussed later.

Fig. 1. Inauthentic/total ratio

Experiment 5 depicts the worst case: no negative payoff is given; this is the case of a P2P network without a RMS behind it. The ratio initially grows and, at a certain point, it gets constant over time, since new resources are stochastically distributed among all the agents with the same probability. In this way also malicious agents have new resources to share, and they will send out inauthentic files only for those resources they do not own. In the idealized world modeled in this simulation, since agents are 50 malicious and 50 loyal, and since the ones with higher reputation are preferred when asking for a file, it is straightforward that malicious agents' reputations fly away, and that a high percentage of files in the system are inauthentic (about 63%). Experiment 1 shows how a simple RMS, with quite a light punishing factor (3), is already sufficient to lower the percentage of inauthentic files in the network over time. We can see a positive trend, reaching about 28% after 2000 time steps, which is an over 100% improvement compared to the situation in which there was no punishment for inauthentic files. In this experiment the verification percentage is at 30%. This is quite low, since it means that 70% of the files remain unchecked forever (downloaded, but never used). In order to show how much the human factor can influence the way in which a RMS works, in experiment 2 the verification percentage has been increased up to 40%, leaving the negative payoff still at 3. The result is surprisingly good: the inauthentic/total ratio is dramatically lowered after few turns (less than 10% after 200), reaching less than 1% after 2000 steps. Since 40% of files checked is quite a realistic percentage for a P2P user, this empirically proves that even the simple RMS proposed here dramatically helps in reducing the number of inauthentic files.

Fig. 2. Weighting the collaboration factor

Fig. 3. Final average reputations for the agents

In order to assign a quantitative weight to the human factor, in experiment 3 the negative payoff has been raised to 4, while the verification percentage is lowered to 30%. The trend is worse than in experiment 2, meaning that a higher verification rate, compared to a higher negative payoff, has a stronger effect on the ratio. In experiment 6 the negative payoff is lighter (2), but the verification rate is again at 40%, as in experiment 2. The trend is very similar – just a bit worse – to that of experiment 3. In particular, the ratio of inauthentic files, after 2000 turns, is about 16%. At this point, it gets quite interesting to find the "break even point" between the punishing factor and the verification rate. After some empirical simulations, we have that, compared with the 40% of verification and 3 of negative payoff, if verification is just at 30% the negative payoff must be set to a whopping value of 8 in order to get a comparable trend in the ratio. This is done in experiment 4 (Fig. 2): after 2000 turns, there is 1% of inauthentic files with a negative payoff of 3 and a verification percentage of 40%, and about 0.7% with 8 and 30%, respectively. This clearly indicates that the collaboration factor (the verification of files) is crucial for a RMS to work correctly and give the desired aggregate results (few inauthentic files over the network).


In particular, a slightly higher verification rate (from 30% to 40%) weighs about the same as a heavy upgrade of the punishing factor (from 3 to 8). This can be considered as a quantitative result, comparing the exogenous factor (resource verification performed by the users) to the endogenous one (negative payoff).

Besides considering the ratio of inauthentic files moving on a P2P network, it is also crucial to verify that the proposed RMS algorithm punishes the agents that maliciously share inauthentic files, without involving too many unwilling accomplices, i.e., loyal users that unconsciously spread the files created by the former ones. This is considered by looking at the average reputations at the end of the simulation steps (Fig. 3).

In the worst case scenario, the malicious agents, which are not punished for producing inauthentic files, always upload the file they are asked for (be it authentic or not). In this way, they soon gain credits, topping the loyal ones. Since in the model the users with a higher reputation are preferred when asking for files, this phenomenon soon triggers an explosive effect: loyal agents are marginalized, and never get asked for files. This results in a very low average reputation for loyal agents (around 70 after 2000 turns) and a very high average value for malicious agents (more than 2800) at the same time.

In experiment 1 the basic RMS presented here changes this result; even with a low negative payoff (3), the average reputations after 2000 turns are clear: about 700 for loyal agents and slightly more than 200 for malicious ones. The algorithm preserves loyal agents, while punishing malicious ones. In experiment 2, with a higher verification percentage (human factor), we see a tremendous improvement in the effectiveness of the RMS algorithm. The average reputation for loyal agents, after 2000 steps, reaches almost 1400, while all the malicious agents go under the lower threshold (they can neither download nor share resources), with an average reputation of less than 9 points. Experiment 3 explores the scenario in which the users just check 30% of the files they download, but the negative payoff is raised from 3 to 4. The final figure about average reputations is again very good. Loyal agents, after 2000 steps, reach an average reputation of over 1200, while malicious ones stay down at about 40. This again proves the proposed RMS to be quite effective, though, with a low verification rate, not all the malicious agents get under the lower threshold, even if the negative payoff is 4. In experiment 6 the verification percentage is again at the more realistic 40%, while the negative payoff is reduced to 2. Even with this low negative payoff, the results are good: most malicious agents fall under the lowest threshold, so they cannot share files, and they get an average reputation of about 100. Loyal agents behave very well and reach an average reputation of more than 900. Experiment 4 is the one in which we wanted to harshly penalize inauthentic file sharing (negative payoff set at 8), while leaving a high laxity in the verification percentage (30%). Unlike what could have been expected, this setup does not punish too much the loyal agents that unwillingly spread unchecked inauthentic files. After 2000 turns, all the malicious agents fall under the lowest threshold, and feature an average reputation of less than 7 points, while loyal agents fly at an average of almost 1300 points. The fact that no loyal agent falls under the "point of no return" (the lowest threshold) is probably due to the fact that they do not systematically share inauthentic files, while malicious agents do. Loyal agents just share the inauthentic resources they never check. Malicious agents, on the other side, always send out inauthentic files when asked for a resource they do not own, thus being harshly punished by the RMS when the negative payoff is more than 3.


5.1 Comeback Mode: Whitewashing

A "whitewashing" mode is implemented and selectable before the simulation starts, in

order to simulate the real behavior of some P2P users who, realizing that they cannot download anymore (since they have low credits or, in this case, bad reputation), disconnect their client, and then connect again, so to start from the initial pool of credits/reputation When this mode is active, at the beginning of each turn all the agents that are under a given threshold reset it to the initial value, metaphorically representing the disconnection and reconnection In experiments 7, 8 and 9 this is tested to see if it affects previous results
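As a small illustration of this comeback rule, reusing the PeerAgent sketch (names and parameters are assumptions), the per-turn reset could look like:

import java.util.List;

// Sketch of whitewashing (comeback) mode: agents below the threshold "reconnect"
// and restart from the initial pool of credits at the beginning of the turn.
class Whitewashing {
    static void resetLowReputationAgents(List<PeerAgent> agents,
                                         double threshold, double initialReputation) {
        for (PeerAgent a : agents)
            if (a.reputation < threshold)
                a.reputation = initialReputation;  // metaphorical disconnection and reconnection
    }
}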

In Fig. 4 the ratio between inauthentic and total resources is depicted, and in Fig. 5 the final average reputation of the agents, when the whitewashing mode is active.

Fig. 4. Inauthentic/total ratio in whitewashing mode

Even with the comeback mode (CBM) activated, the results are very similar to those in which this mode is off. They are actually a bit worse when the negative payoff is low (3) and so is the verification percentage (30%): the ratio of inauthentic files in the network is quite high, at about 41% after 2000 turns, versus the 27% observed in experiment 1, which had the same parameters but no CBM. When the verification percentage is increased to 40%, though, things get quite better. Now the ratio of inauthentic files has the same levels as in experiment 2 (less than 1% after 2000 steps). Also with a lower verification percentage (again at 30%), but leaving the negative payoff at 4, the figure is almost identical to the one with the same parameters but without CBM. After 2000 turns, the inauthentic files ratio is about 12%.

The experiments show that malicious agents, even resetting their own reputation after going below the lowest threshold, cannot overcome this basic RMS if they always produce inauthentic files. This happens because, even if they reset their reputation to the initial value, it is still low compared to the one reached by loyal agents; if they shared authentic files, this value would go up in a few turns, but since they again start spreading inauthentic files, they almost immediately fall under the thresholds again.


Fig. 5. Final average reputations for the agents, in whitewashing mode

5.2 Scaling Issue

When changing the number of agents, agent based models can suffer from scaling issues, meaning that the results obtained with a certain number of agents do not hold when this number changes significantly. To verify this, two more experiments were carried out. The number of agents is increased to 150 (three times the previous value). Coherently, the number of edges, the initial pool of resources and the number of resources introduced at each turn are also tripled. The trend is very similar: with a low negative payoff (3) and a low verification rate (30%), we have just a slightly higher ratio of inauthentic files. The same comparison is carried out with the average final reputations. Again, the results are very similar, even if the system with more agents has a slightly worse aggregate behavior than the smaller one. Generally speaking, we conclude that the difference is very low, so the scaling issue does not influence the results shown, on a 1:3 basis. In future works this study will be extended to even more agents.

6 Discussion and Conclusions

The main purpose of this work is to show, by means of an empirical analysis based on simulation, how the collaboration coming from the agents in a social system can be a crucial driver for the effectiveness of a RMS. As a test-bed we considered a P2P network for file sharing and, through an agent based simulation, we showed how a basic RMS can be effective in reducing the inauthentic files circulating on the network.

In order to enhance its performance, though, the collaboration factor, in the form of the verifying policy, is crucial: a 33% increase in verification results in about thirty times fewer inauthentic files on the network. While a qualitative analysis of this factor is straightforward for the presented model, we added a quantitative result, trying to weight the exogenous factor (the verification rate) by comparing it to the endogenous one (the negative payoff). We showed that a 33% increase in the verification percentage leads to results similar to those obtained by increasing the negative payoff by 66%. Again, the collaboration factor proves to be crucial for the RMS to work efficiently.


First of all, in this paper we look for the factors controlled by users that determine the effectiveness of RMSs in P2P networks. There are two critical points: the decision to share inauthentic files and the decision whether or not to verify the downloaded files.

While the benefit of not verifying is determined by the saved time (verifying is incompatible with making new searches and starting new downloads), the benefit of spreading inauthentic files must be confirmed by the simulation. Without the mechanism for punishing malicious agents, inauthentic files increase sharply, and at the end of the simulation the reputation of malicious agents is much higher than the loyal ones', and this is reached very soon. Thus, producing inauthentic files is a worthy strategy under no enforcement mechanism. However, the behavior of malicious agents strikes back against them: they are not attackers and have the goal of downloading resources as well. Here we assume that if a file is verified, then the reputation of the uploader is decreased immediately, due to the lower cost of this action. A more fine grained model should also consider this human factor. Analogously, we do not consider the possibility of punishing peers without first receiving and checking a file - a behavior which should be prevented by the software itself - and we do not consider the possibility of punishing even if the file is authentic. As stated in the Introduction, our goal is to model the behavior of normal users, not of hackers attacking the system.

The second question of the work is: how can the role of these factors be evaluated by means of agent based simulation? Which factors have most impact? The simulation framework for reputation gives interesting results: the key factor to lower the number of inauthentic files in a file sharing P2P system is the proportion of verifications made by peers. Even a low figure like 30% sharply limits the behavior of malicious agents when we do not consider the possibility of whitewashing after a comeback. The role of punishment, in terms of reputation points, has instead a more limited impact. Surprisingly, even when whitewashing is allowed, the number of inauthentic files in the system can be limited if peers verify files 40% of the times. The same results cannot be achieved by increasing the figure of the punishment and decreasing the proportion of verifications.

The moral of our study is that a mechanism for stimulating users to check the authenticity of files should be promoted, otherwise the entire file sharing system is flooded by inauthentic files. In contrast, usual approaches to RMSs [1] consider verification as automatic, thus ignoring the human factor: since we show that verification has a sharp effect according to its rate, it cannot be ignored in simulating the effect of a RMS. Thus, we identify the conditions under which even a simple RMS can dramatically reduce the number of inauthentic files over a P2P system and penalize malicious users, without banishing them from the network, as proposed in other models based on ostracism.

The proposed model is simplistic. The reader must be aware of several limitations, which are the object of ongoing work. Resources are not divided into categories; inauthentic files, in reality, are mostly found among newer resources. Thus, we are aiming at using real data to differentiate the kinds of resources, distinguishing in particular newly requested resources. At present we distinguish malicious agents from loyal agents based on their response, but all the agents have the same behavior for other aspects, e.g., they verify with the same proportion. It could be useful to simulate using different parameters in each agent of the two classes.


Bandwidth is not considered: thus all downloads proceed at the same rate, even if the decision to upload to a peer is based on his reputation. Moreover, a peer decides to upload on the basis of which agent has the highest reputation. It is well known that this algorithm can create unbalance among peers, but we abstract here from this problem, since we are not proposing a new P2P mechanism but checking the efficacy of a RMS on the specific problem of inauthentic files. Note, however, that this strategy has a negative effect when malicious peers get high reputation, but if the reputation system is well tuned, malicious agents should never get high reputation. Finally, we allow agents to disconnect and reconnect, but this whitewashing happens without changing their position on the graph or their behavior.

The real improvement in our ongoing work is moving from reactive agents to more sophisticated ones, able to learn from what happens in the network. While in the current model agents stochastically decide whether to upload an inauthentic file or not, or whether to verify or not, cognitive agents adapt to the circumstances, by considering how many objectives they can achieve using their current strategy and by looking for new alternatives. Modeling adaptive agents allows us to check for further vulnerabilities, such as what happens when agents produce inauthentic files at a variable rate which does not decrease their reputation too much.

Acknowledgements. The authors wish to gratefully acknowledge Mr. Gianluca Tornese, who initially contributed to the implementation of the model described in the present work.


Agent-Oriented Programming for Client-Side Concurrent Web 2.0 Applications

Mattia Minotti, Giulio Piancastelli, and Alessandro Ricci

DEIS, Alma Mater Studiorum — Università di Bologna, Italy

mattia.minotti@studio.unibo.it, a.ricci@unibo.it

Abstract. Using the event-driven programming style of JavaScript to develop the concurrent and highly interactive client-side of Web 2.0 applications is showing more and more shortcomings in terms of engineering properties such as reusability and maintainability. Additional libraries, frameworks, and AJAX techniques do not help reduce the gap between the single-threaded JavaScript model and the concurrency needs of applications. We propose to exploit a different programming model based on a new agent-oriented abstraction layer, where first-class entities – namely agents and artifacts – can be used, respectively, to capture concurrency of activities and their interaction, and to represent tools and resources used by agents during their activities. We specialise the model in the context of client-side Web development, by characterising common domain agents and artifacts that form an extension of an existing programming framework. Finally, we design and implement a simple but significant case study to showcase the capabilities of the model and verify the feasibility of the technology.

Keywords: Concurrent Programming, Agent-Oriented Programming, Web 2.0.

1 Introduction

One of the most important features of the so-called Web 2.0 is a new interaction model between the client user interface of a Web browser and the server-side of the application. Such rich Web applications allow the client to send multiple concurrent requests in an asynchronous way, avoiding complete page reloads and keeping the user interface live and responsive. Periodic activities within the client-side of the applications can be performed in the same fashion, with clear advantages in terms of perceived performance, efficiency and interactivity.

The client user interface of rich applications is programmed with extensive use of JavaScript and AJAX techniques. JavaScript being a single-threaded language, most of those programs are written in an event-driven style, in which programs register callback functions that are triggered on events such as timeouts. A single-threaded event loop dispatches the appropriate callback when an event occurs, and control returns back to the loop when the callback completes. To implement concurrency-intensive features, still keeping the interface responsive, programmers must chain callbacks together—typically, each callback ends by registering one or more additional callbacks, possibly with a short timeout. However, this style of event-driven programming is tedious, bug-prone, and harms reusability [3].

J. Cordeiro and J. Filipe (Eds.): WEBIST 2009, LNBIP 45, pp. 17–29, 2010.
© Springer-Verlag Berlin Heidelberg 2010


The limitations of the JavaScript programming model have been faced by introducing libraries that attempt at covering event-based low-level details behind the emulation of well-known concurrent abstractions. Concurrent.Thread [7] builds a thread abstraction on top of the event-driven JavaScript model, converting multi-threaded code into an efficient continuation-passing style. The WorkerPool API in Google Gears simulates a collection of processes that do not share any execution state, and thus cannot directly access the browser DOM of Web pages. Albeit working in practice, neither approach is feasible to support a sound programming model for concurrency in Web 2.0 applications [6]. Frameworks such as Rails, with its Ruby-to-JavaScript compiler plug-in, and Google Web Toolkit (GWT) approach the problem from a different angle, by adopting the programming model of the single language employed for implementing both the client- and server-side of the application. Even if the application development benefits from the use of object-oriented languages in terms of decomposition and encapsulation, the limitations of the target execution environment make such a solution less effective: for example, GWT does not allow the use of Java threads, and only offers a timer abstraction to schedule tasks (implemented as methods) within the single-threaded JavaScript interpreter.

We argue that, despite these several efforts, Web application client-side technologies do not offer suitable abstractions to manage the coordination and interaction problems of typical distributed applications in a simple yet complete fashion. With the aim of making the development of Web applications as similar as possible to that of distributed desktop applications, JavaScript and AJAX are not enough: a different programming model is needed, so as to provide mechanisms and abstractions really oriented to collaboration and cooperation among concurrent computational entities. In this sense, even the object-oriented paradigm promoted by GWT shows its shortcomings. Indeed, mainstream object-oriented programming languages such as Java are currently undergoing a concurrency revolution [13], where (i) support for multi-threading is extended by synchronisation mechanisms providing a fine-grained and efficient control on concurrent computations, and (ii) it is becoming more and more important to also provide more coarse-grained entities that help build concurrent programs from a set of higher-level abstractions.

We believe that, to effectively face challenges such as distributed computation, asynchronous communication, coordination and cooperation, agents represent a very promising abstraction, natively capturing and modeling concurrency of activities and their interaction; therefore, they can be considered a good candidate for defining a concurrent programming model beyond the languages and technologies currently available within browsers on the client-side of Web applications. The agent abstraction is meant to model and design the task-oriented parts of a system, encapsulating the logic and control of such activities. Not every entity in the system can (or should) be modeled in this way, though; we also introduce the companion artifact abstraction, as the basic building block to design the tools and resources used by agents during their activities.


Accordingly, in this paper we first describe the agent and artifact abstractions as defined by the A&A (Agents and Artifacts) programming model [8], recently introduced in the field of agent-oriented programming and software engineering; then, we specialise the model in the context of client-side Web application development, by characterising common domain agents and artifacts that form a small extension of an existing programming framework based on A&A. Further, we describe the design and implementation of a simple but significant case study to showcase the capabilities of the framework, and conclude with some final remarks, including the next steps of this work.

2 The Agents and Artifacts Programming Model

In the A&A programming model, the term "agent" is used to represent an entity "who acts" towards an objective or task to do pro-actively, and whose computational behaviour accounts for performing actions in some kind of environment and getting information back in terms of perceptions. Differently from the typical software entity, agents have no interface: their interaction with the environment takes place solely in terms of actions and perceptions, which concerns in particular the use of artifacts. The notion of activity is employed to group related actions, as a way to structure the overall behaviour. For example, an agent with setup, fetch and check activities can be sketched as:

public class MailAgent extends Agent {
    @ACTIVITY void setup() { /* ... */ }
    @ACTIVITY void fetch() { /* ... */ }
    @ACTIVITY void check() { /* ... */ }
}

Agent activities in simpA (http://simpa.sourceforge.net) can be either atomic or structured, composed of some kinds of sub-activities. Atomic activities are implemented as methods with the @ACTIVITY annotation, with the body of the method defining the computational behaviour of the agent corresponding to the accomplishment of the activity.


Structured activities introduce the notion of agenda to specify the hierarchical set of the potential sub-activities composing the activity, referenced as todos in the agenda. Each todo names the sub-activity to execute, and optionally a pre-condition. When a structured activity is executed, the todos in the agenda are executed as soon as their pre-conditions hold, but if no pre-condition is specified, the todo is immediately executed. Thus, multiple sub-activities can be executed concurrently in the context of the same (super) activity. A structured activity is implemented by methods with an @ACTIVITY_WITH_AGENDA annotation, containing todo descriptions as a list of @TODO annotations. A todo can be specified to be persistent—in this case, once it has been completely executed, it is re-inserted in the agenda so as to be possibly executed again. This is useful to model the cyclic behaviour of agents when executing some activities.
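For illustration, a structured activity with an agenda might be declared as in the following sketch; the exact attribute names of the @ACTIVITY_WITH_AGENDA and @TODO annotations are assumptions and may differ in the actual simpA API:

public class FeedReaderAgent extends Agent {
    // Structured activity: the agenda lists the potential sub-activities (todos).
    // "fetch" is persistent (re-inserted once completed); "alertUser" waits for a pre-condition.
    @ACTIVITY_WITH_AGENDA({
        @TODO(activity = "fetch", persistent = true),
        @TODO(activity = "alertUser", pre = "newItemsAvailable")
    })
    void main() { }

    @ACTIVITY void fetch() { /* retrieve new items */ }
    @ACTIVITY void alertUser() { /* signal that new items arrived */ }
}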

In the A&A programming model, the artifact abstraction is useful to design passive resources and tools that are used by agents as basic building blocks of the environment. The functionality of an artifact is structured in terms of operations, whose execution can be triggered by agents through the artifact usage interface. Similarly to the interface of objects or components, the usage interface of an artifact defines a set of controls that an agent can trigger so as to execute operations, each one identified by a label and a list of input parameters. Differently from the notion of object interfaces, in this use interaction there is no control coupling: when an agent triggers the execution of an operation, it retains its control (flow) and the operation execution on the artifact is carried on independently and asynchronously. The information flow from the artifact to agents is modeled in terms of observable events generated by artifacts and perceived by agents; therefore, artifact interface controls have no return values. An artifact can also have some observable properties, allowing to inspect the dynamic state of the artifact without necessarily executing operations on it.

Artifact templates in simpA can be created by extending the base alice.simpa.Artifact class. For example, an artifact representing a Web page, storing its DOM model in an observable property, and exposing an operation to change an attribute of an element in the model, may be structured as follows:

public class Page extends Artifact {
    @OBSPROPERTY Document dom;

    @OPERATION void setAttribute(String id, String attr, String val) {
        Element e = dom.getElementById(id);
        e.setAttribute(attr, val);
        updateProperty("dom", dom); // make the change observable (signature of updateProperty assumed)
    }
}

Besides the usage interface, each artifact may define a linking interface, applying the @LINK annotation on operations that are meant to be invoked by other artifacts.


Fig. 1. Abstract representation of an agent using an artifact, by triggering the execution of an operation (left, step 1a) and observing the related events generated by the operation execution (right, step 1b).

Thus, it becomes possible to create complex artifacts as a composition of simpler ones, assembled dynamically by the linking mechanism.
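As a rough illustration of the linking interface (the artifact and operation names here are invented, not taken from simpA or simpA-Web), a page-like artifact could expose a link operation through which another artifact, rather than an agent, injects content into the DOM—much as the Product Directory artifact will link the Page artifact in the case study of Section 4:

public class LinkablePage extends Artifact {

    @OBSPROPERTY Document dom;

    // linking interface: this operation is meant to be invoked by other
    // artifacts through the linking mechanism, not triggered by agents
    @LINK void appendChildOf(String parentId, Element newChild) {
        dom.getElementById(parentId).appendChild(newChild);
        updateProperty("dom", dom);   // assumed signature of the updateProperty primitive
    }
}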

An artifact typically provides some level of observability, either by generating observable events during the execution of an operation, or by defining observable properties using the @OBSPROPERTY annotation. An observable event can be perceived by the agent that has used the artifact by triggering the operation generating the event; changes to observable properties, triggered by the updateProperty primitive, can be sensed by any agent that has focussed on the artifact, without necessarily having acted on it.

Besides agents and artifacts, the notion of workspace completes the basic set of abstractions defined in A&A: a workspace is a logic container of agents and artifacts, and it is the basic means to give an explicit (logical) topology to the working environment, so as to help scope the interaction inside it.

We conclude this section by focussing on the main ingredients of the agent-artifact interaction model: a more comprehensive account and discussion of these and other features of agents, artifacts and workspaces – outside the scope of this paper – can be found in [8,11].

2.1 Agent-Artifact Interaction: Use and Observation

The interaction between agents and artifacts strictly mimics the way in which people use their tools. For a simple but effective analogy, let us consider a coffee machine. The set of buttons of the coffee machine represents the usage interface, while the displays that are used to show the state of the machine represent artifact observable properties. The signals emitted by the coffee machine during its usage represent observable events generated by the artifact.

Interaction takes place by means of a use action (stage 1a in Figure 1, left), performed by an agent in order to trigger the execution of an operation over an artifact; such an action specifies the name and parameters of the interface control corresponding to the operation. The observable events, possibly generated by the artifact while executing an operation, are collected by agent sensors, which are the parts of the agent conceptually connected to the environment where the agent itself is situated. Besides the generation of observable events, the execution of an operation results in updating the artifact inner state and possibly artifact observable properties. Finally, a sense action is executed by an agent to explicitly retrieve the observable events collected by its sensors.
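In agent code, this use/sense pattern could look roughly as follows. The fragment is a minimal sketch under assumptions: the sensor acquisition, the Op descriptor, and the use and sense helpers are written after the conceptual description above, not as a verified transcription of the framework API, and the artifact and event names are invented.

public class HighlightAgent extends Agent {

    @ACTIVITY void highlightTitle() {
        Sensor s = getSensor("events");   // sensor collecting observable events (assumed helper)
        // use action: trigger the setAttribute operation of the Page artifact named "page"
        use("page", new Op("setAttribute", "title", "class", "highlighted"), s);
        // sense action: explicitly retrieve the event generated by the operation execution
        Perception p = sense(s, "attribute_changed");   // event name purely illustrative
        // ... react to the perceived event ...
    }
}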

It must be noted that, dually to indirect interaction through artifacts, agents can also directly communicate with each other, through a message passing mechanism provided by the framework.

3 An Agent-Oriented Model for Client-Side Web Programming

The main objective of our programming model based on agent and artifact abstractions is to simplify development of those parts on the client side of Web applications that involve elements of concurrency, by reducing the gap between design and implementation. At the design level, it is first necessary to identify the task-oriented and function-oriented parts of the client-side system; then, such organisation drives to the definition of agents and artifacts as depicted by the A&A model. At the implementation level, the design elements can be directly realised using the simpA framework. The notion of activity and the hierarchical activity model adopted in the agent abstraction allow a quite synthetic and readable description of possibly articulated behaviours, dealing with coordination management on a higher level than threads, timeouts, and callbacks. The adopted model of artifacts permits specifying possibly complex functionalities to be shared and concurrently exploited by multiple agents, abstracting away from low-level synchronisation mechanisms and primitives.

In the domain of client-side Web development, all computation takes place in the browser environment, which can be straightforwardly mapped onto the workspace abstraction; agents and artifacts downloaded as the client-side part of Web applications will join this workspace, interacting and cooperating in a concurrent fashion. Among those agents and artifacts, there are a number of recurring elements in the majority of applications; here follows a description of the main elements that we identified, along with the role they will play in our programming model.

Page. The page is typically represented by an accessible, tree-like, standardised DOM object, allowing its content, structure, and visualisation style to be dynamically updated; also, the page generates internal events in response to the user’s actions. The direct mapping of the DOM API onto operations, and of the response to the user’s actions onto observable events, suggests the artifact as the most natural abstraction to model pages.

HTTP Channel. An entity is needed to perform transactions in the HTTP protocol, allowing one to specify the operation to execute (e.g. GET, POST, PUT, DELETE), the header values, and possibly a payload in the request; such an entity also has to receive responses and make them available to the other entities in the system. The channel does not account for autonomous actions, but it is supposed to be used by a different, pro-active entity; it is therefore modeled as an artifact, so that asynchronous communication towards the server can be ensured by the agent-artifact interaction model.


Common Domain Elements. Common components that are frequently and repeatedly found in specific application domains can also be modeled in terms of agents and artifacts. As an example, consider the shopping cart of any e-commerce Web site: since its control logic and functionalities are almost always the same, it could be implemented as a client-side component, in order to be used by a multiplicity of e-commerce transactions towards different vendors, and allowing comparisons and other interesting actions on items stored for possibly future purchase. Given the possibly complex nature of the computations involved, such common domain elements would probably need to be modeled as a mix of both agents and artifacts.
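Purely as an illustration of such a reusable component (the artifact name, operations, and item representation are invented; only the @OBSPROPERTY/@OPERATION machinery comes from Section 2), a client-side shopping cart could be sketched as an artifact whose observable state any focussing agent can monitor:

public class ShoppingCart extends Artifact {

    // observable list of stored item identifiers
    @OBSPROPERTY java.util.List<String> items = new java.util.ArrayList<String>();

    @OPERATION void addItem(String itemId) {
        items.add(itemId);
        updateProperty("items", items);   // assumed signature of the updateProperty primitive
    }

    @OPERATION void removeItem(String itemId) {
        items.remove(itemId);
        updateProperty("items", items);
    }
}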

There also are some important issues in client-side Web application development that, due to the lack of space, we only intend to acknowledge without providing a complete description of their possible modeling and programming solution. First, security needs to be addressed, in order to manage read/write permissions on the file system, mobile code authentication, origin control, and access authorisation; to this purpose, the RBAC model [12] provided by simpA can be exploited. Then, albeit architecturally deprecated, cookies also need to be taken into account as a popular mechanism to allow data persistence between working sessions; they can be devised as another particular kind of artifact template.

3.1 From simpA to simpA-Web

simpA-Web is a thin layer that exploits classes and mechanisms of the simpA framework to define agent and artifact templates oriented to client-side Web development. Whereas simpA supports the A&A abstractions, simpA-Web offers specific agents and artifacts representing the common elements comprised by the programming model explained above. For example, simpA-Web provides implementations for the HTTP Channel and the Page artifacts.

The HTTP Channel artifact represents an HTTP working session used by a part of the client-side Web application. The artifact allows communication through the HTTP protocol, and is able to store and manage both local and remote variables. The usage interface of HTTP Channel exposes three operations:

– setDestination stores the URI of the server with which the session represented by this artifact will communicate. This operation takes the destination URI as a parameter, and generates a DestinationChanged observable event.
– setHeader adjusts an HTTP header for subsequent requests. It takes the header name and value as parameters, and generates a HeaderChanged observable event.
– send transmits an HTTP request to the server identified by the stored URI. It takes the HTTP payload as a parameter, and generates three events: RequestSent, as soon as the HTTP request has been committed; Response, containing headers and body, when an HTTP response is received; Failure, if no response has been sent back.

The Page artifact represents the interface to access the page visualised by the browser, encapsulating its events and its main characteristics. The Page artifact features an observable property called DOM, representing the Document Object Model of the whole


page. The artifact interface exposes six operations, each generating a corresponding observable event at the end of a successful execution:

– changeDOM substitutes the old DOM with a new object taken as a parameter, thus changing the representation of the whole page.
– setElement changes the content of a DOM element, identified by the first id parameter, to the string passed as the second parameter.
– setAttribute does the same as the previous operation, but on an attribute of a DOM element.
– addChild appends a DOM element as a child to an element with a given id.

The presented entities in the simpA-Web layer represent a common agent-oriented infrastructure deployed as an additional library alongside simpA, so as to free client-side Web application developers from dealing with such a support level and let them focus on application domain issues.
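To give an idea of the shape of these templates, the HTTP Channel artifact could be laid out roughly as below. This is a sketch under assumptions: the fields and the signal primitive used to emit observable events are invented here (the paper names the operations and the events they generate, not how they are implemented), and the actual request handling is elided.

public class HttpChannel extends Artifact {

    private String destination;   // URI of the target server
    private final java.util.Map<String, String> headers = new java.util.HashMap<String, String>();

    @OPERATION void setDestination(String uri) {
        destination = uri;
        signal("DestinationChanged", uri);   // assumed event-generation primitive
    }

    @OPERATION void setHeader(String name, String value) {
        headers.put(name, value);
        signal("HeaderChanged", name, value);
    }

    @OPERATION void send(String payload) {
        signal("RequestSent");
        // ... perform the HTTP request asynchronously using destination, headers
        // and payload, then signal either Response (headers and body) or Failure ...
    }
}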

4 A Sample Agent-Oriented Web 2.0 Application

To verify the feasibility of our A&A-based programming model and test-drive the capabilities of the simpA-Web framework, we designed a sample Web application to search products and compare prices from multiple services. The characteristics of concurrency and periodicity of the activities that the client side needs to perform make this case study a significant prototype of the typical Web 2.0 application.

We imagine the existence of N services (of type A) that offer product lists with features and prices, codified in some standard machine-readable format. Each type A service lets users download an agent (typically programmed by the service supplier) that allows product searching on that service; each agent communicates with the corresponding service using its own protocol, possibly different from the protocols of the other agents in the system. We further imagine the existence of an additional service (of type B) offering a static list of type A services or allowing to dynamically search the Web for such services.

The client side in this sample Web application needs to search all type A services for a product that satisfies a set of user-defined parameters and has a price inferior to a certain user-defined threshold. The client also needs to periodically monitor services so as to search for new offerings of the same product. A new offering satisfying the constraints should be visualised only when its price is more convenient than the currently best price. The client may finish its search and monitoring activities when some user-defined conditions are met—a certain amount of time has elapsed, or the user interrupts the search with a click on a proper button in the page displayed by the browser. Finally, if such an interruption took place, by pressing another button it must be possible to let the search continue from the point where it was blocked.

It is worth remarking that the sample application should be considered meaningful not for evaluating the performance or efficiency of the framework – which is part of future work – but for stressing the benefits of the agent-oriented programming model in facing the design of articulated and challenging application scenarios. The main value in this case is having a small set of high-level abstractions that make it possible to


keep a certain degree of modularisation, separation of concerns, and encapsulation in designing and programming applications, despite the complexities given by aspects such as concurrency, asynchronous interactions, openness/dynamism, and distribution.

4.1 Design

From the above description, it is fairly easy to use high-level abstractions such as agents and artifacts in order to represent the different computational elements in the client side of the application.

We define the Application Main (AM) Agent as the first agent to execute on the client, with the responsibility to set up the application once downloaded. Immediately after activation, the agent populates the workspace with artifacts and other agents needed to perform the product search as defined by the requirements. After validating user-defined data for product querying, the agent spawns another agent to interface with the B service so as to get a list of A services and commence the product search. The AM Agent also needs to control results and terminate the search when a suitable condition is verified; moreover, it has to manage search interruption and restarting as a consequence of the user’s explicit commands.

The task of the Service Finder (SF) Agent is to use the type B service so as to find a list of type A services, and to concurrently download search agents from their sites, asking them to start their own activity. The SF Agent also has to periodically monitor the type B service for new type A services to exploit.

The Product Finder (PF) Agent instances interact with their own type A service by a possibly complex communication protocol, so as to get a list of products satisfying the user-defined parameters. Afterwards, each agent passes the resulting list to another element in the workspace dealing with product data storage. Each PF Agent also periodically checks its service in order to possibly find new products or to remove from the list products that are no longer available.

The Product Directory (PD) Artifact stores product data found by the PF agents, and applies filtering and ordering criteria to build a list of available products, possibly containing only one element. The artifact also shows a representation of its list on the page in the browser, updating results as they arrive.

Finally, an additional Pausing Artifact is introduced to coordinate the agents in the workspace on the interruption or restarting of the search activities in response to a user’s command. While the control logic of search interruption and restarting is encapsulated in the AM agent, the agent does not directly know all the PF agents; in such a case, where direct communication between agents is unfeasible, an intermediate coordination element is needed to ask agents to perform the actions requested by the user.
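A minimal sketch of what such a coordination artifact could look like follows; the property and operation names are invented, and only the @OBSPROPERTY, @OPERATION, and updateProperty elements introduced in Section 2 are reused. PF agents that focus on the artifact perceive changes of the paused property and can suspend or resume their monitoring accordingly, without the AM agent having to know them individually.

public class PausingArtifact extends Artifact {

    // observable flag perceived by every agent focussing on this artifact
    @OBSPROPERTY boolean paused = false;

    @OPERATION void pause() {
        updateProperty("paused", true);    // assumed signature of the updateProperty primitive
    }

    @OPERATION void resume() {
        updateProperty("paused", false);
    }
}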

The complete architecture of the client-side application is depicted in Figure 2, where interactions with pre-defined artifacts and agents in the browser are shown. In particular, it must be noted that the Service Finder and Product Finder agents use their own instances of the HTTP Channel artifact to communicate through the Web with the corresponding sites; also, the Product Directory artifact links the Page artifact in order to visualise the product list by directly using the DOM API operations that are made available through the Page artifact linking interface.
