Profiling of Web Services

a user and are a part of the QoR concept. Users' feedback (assessment) is understood as their satisfaction with the returned result (not with the interface through which they communicate with a service), expressed on a defined scale. However, it would be very difficult, if not impossible, to collect such information from users. They would rather provide an overall evaluation of both a service implementation and real service effects.

The QoR concept is domain specific. In fact, it is very difficult, if not impossible, to define a measure that would hold for all possible services. It is not the case with QoE, which is independent of the domain and rather easy to compute. The quality of execution relates to the underlying technology (i.e., technical and network-related aspects). The following properties may be a part of the QoE model:

• Response latency: Time needed for the control data to get to the service and back to the client.
• Maximal throughput: How many requests a provider is able to process in a given time period.
• Execution duration: Time needed to fulfil a user request (time between sending a request and receiving an answer).
• Execution price: Amount of money a user needs to pay in order to use an interface to the service.
• Service robustness: The ability of a service to act properly if some of the input parameters are missing or incorrect (e.g., the wrong coordinates or incorrect data types, etc.).
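A minimal sketch of how these QoE properties might be grouped into a record for later aggregation, assuming Python and hypothetical field names (the chapter does not prescribe any particular data structure):

```python
from dataclasses import dataclass

@dataclass
class QoEMeasurement:
    """One observed execution of a service (hypothetical record layout)."""
    response_latency_ms: float    # control data round trip
    execution_duration_ms: float  # request sent -> answer received
    execution_price: float        # cost of using the interface, not of the real-world service
    succeeded: bool               # used later for reliability/robustness ratios

# Maximal throughput is a provider-level figure (requests handled per time window),
# so it would typically be derived from many QoEMeasurement records rather than stored per call.
```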

The following table summarizes our short discussion on the differences between the QoR and QoE concepts.

Table 3. RoutePlanning service example (Noll, 2004)

Service name: Route Planning Service
Description: Creates a route description for the customer's coordinates and the given attraction. The route description consists of a coloured high-resolution picture and a textual description.

Non-functional Properties
Service Name: Map24RoutePlanningService
Provider Name: Map24.de
Information Quality: High

Functional Properties
Preconditions: Location ls, Location lg
Positive Effects: RouteDescription rd, hasRoute(rd, r)

Table 4. Comparison of QoR and QoE
Quality of Result | Quality of Execution

Another aspect that needs to be mentioned is the difference between an execution price and a service price. A service price is the amount of money a user has to pay for the real service; for example, when using route planning it is the price of the attraction ticket (e.g., a ticket to the cinema), and it influences QoR. In this case, an execution price is the amount of money we have to pay for using the interface to book tickets, not the price of the ticket itself, and it influences QoE. When buying a book at Amazon.com, the execution price is 0 (using the Amazon Web page to search and order is free), but the service price is the price of the book and the delivery costs. In the case of information services (services where the output returned by a service is equal to the effect we wanted to obtain) it is rather unclear whether the price we have to pay for the information is a service price or an execution price, and the classification may depend on many factors.

Most of the current initiatives aiming at providing definitions and descriptions of quality dimensions address only some generic parameters (mostly network related), such as execution price and duration, availability and reliability, and so forth (Liu, Ngu, & Zeng, 2004; Menasce, 2002; Zeng, Benatallah, Dumas, Kalagnanam, & Sheng, 2003), and do not differentiate between the QoR and QoE concepts. More parameters, considering also QoR, are presented by O'Sullivan et al. (2002), but they are not widely used in practice. Moreover, QoR properties are not considered in most of the methods trying to compute the values of non-functional properties. Therefore, in the remaining part of this chapter, whenever a reference to QoS is made, it refers to those quality parameters of a service that are computable (therefore, in most cases they exclude QoR parameters). Whenever a clear differentiation needs to be made between quality of result and quality of execution, the respective terms are used.

Methods and Approaches to Derive Values of Non-Functional Properties

The simplest way to derive values of NFP is to rely on service providers advertising this information. However, taking the values advertised by a service provider directly is not advisable: it requires users to trust the accuracy of the values declared by service providers. Service providers do have an interest in overestimating the NFP of their services, so a solution allowing the values of NFP to be measured (programmatically) for verification purposes is needed. Moreover, values of non-functional parameters are often assumed to be constant in time and space (service location), but they may change, depending on the details of the service request, execution environment, and so forth. For example, the response time of a Web service may be less than 5 minutes during working days, but during weekends it may be less than 1 minute, as interest in the particular service decreases.

To avoid the problems of accuracy of non-functional properties' values given by service providers, some other methods to derive (or verify) their values are needed (Abramowicz et al., 2005). Ran (2003) proposes a QoS model using a QoS certifier to verify published QoS criteria. The approach requires all Web services providers to advertise their services with the QoS certifier. However, this approach does not take into account the dynamism of the environment and the fact that the values of a Web service change in time. The approach does not provide, for example, methods to update the QoS values automatically, and it lacks details regarding the verification process. Sheth, Cordoso, Miller, and Kochut (2002) propose a QoS middleware infrastructure that requires a built-in tool to monitor metrics of NFP automatically. Such an approach requires the willingness of service providers to give up some of their autonomy. It may also require service providers to cover execution costs. Moreover, if the polling interval is set too long, the QoS will not be up-to-date; if the polling interval is set too short, it might incur a high performance overhead. A similar approach, emphasizing service reputation, is proposed by Maximilien and Singh (2002a, 2002b).

Another approach obtains information on the values of QoS parameters from the users themselves. When collecting quality information from the users' feedback, each user is required to evaluate the QoS (and at the same time the QoR) of the consumed service. The main advantage of this approach is that QoS values can be computed based on real user experience (up-to-date runtime execution data). The main disadvantage is the fact that a user's judgment is not objective; users use different definitions of quality, have different past experiences, and so forth.

In other approaches, called the "a posteriori approach" (Casati, Castellanos, Dayal, & Shan, 2004), QoS values are collected solely through active monitoring. The monitoring can be performed by a user, a service broker or platform, a dedicated QoS registry (Kuropka & Weske, 2006; Liu et al., 2004), or the already mentioned QoS certifier (Ran, 2003). The data are collected from the actual consumption of a service and therefore are accurate and objective. One avoids the necessity to install rather expensive middleware in order to constantly check large numbers of service providers. However, there is a high overhead, since QoS must be constantly checked for a large number of Web services. On the other hand, an approach that relies on a third party to rate or endorse a particular service provider is expensive and static in nature.

When the service-related data collection is envisioned through, for example, workflow monitoring or user feedback, another important issue is how to compute the values of quality-related parameters from the collected data. There are a few initiatives to solve the problem. One of them (Maximilien & Singh, 2004) suggests performing an analysis of past executions of atomic and composite services by using data mining and workflow log mining techniques. Moreover, some statistical methods can be applied as well (Liu et al., 2004).

Workflow management systems are a very important infrastructure for complex applications. They usually register the start and completion of activities as well as other events that occur during execution. This information is stored as workflow log files (Aalst, Zhang, Shanahas, et al., 2003) that are further processed using workflow and process mining techniques. The goal of workflow mining is to find a workflow model on the basis of a workflow log (Aalst et al., 2003). In turn, process mining is a method of distilling a structured process description from a set of real executions (Aalst et al., 2003). Many methods to perform these tasks have been developed (e.g., probabilistic workflow mining, or Petri nets [Aalst et al., 2003]) and may be successfully applied also to the Web services area.

In the next section, Web services profiling, an alternative method of deriving the values of non-functional properties of a Web service, is presented.

Web Service Profiling, Service Profile, and Its Elements

Service profiling is a process of computing the values of non-functional properties. The main goal of service profiling is to create service profiles of atomic and composite services. A service profile may be defined as an up-to-date description of a selected subset of non-functional properties of a service. It not only characterizes a service but also allows for comparison of services based on aggregated values of non-functional parameters and, in consequence, selection of the service most suited to the requirements of a user.

In order to compute the values of non-functional properties, service profiling needs first to collect information on service executions, aggregate it, and then derive the required information. The raw data may come from multiple data sources. Every source has its own specific purpose and provides different information. The following possible sources of information that feed the profiling system with appropriate data may be distinguished: service registries, monitoring data, data coming from service level agreements (SLA) storing information on contracted QoS values, feedback from service consumers about the obtained service quality, and so forth.

The aim of Web services profiling is to perform fair and open NFP computation. Therefore, as the service execution history data are the most objective and reliable source of information on the service, they are in fact the primary source of information. Web services profiling does not perform only the core workflow mining. It analyses log files in order to obtain data needed for the profiling process, but, in addition, it takes advantage of the raw data collected from service properties defined in SLAs, published by service providers, and obtained from users' feedback. For instance, it compares contracted values from SLAs against those from execution. In consequence, it is possible to check to what extent the agreement between a provider and a consumer is fulfilled. Moreover, appropriate algorithms may discover which values of particular parameters are, for example, likely to be guaranteed by providers.
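A minimal sketch of such an SLA-versus-execution check, assuming Python and hypothetical record shapes (the chapter does not specify how contracted or measured values are represented):

```python
def sla_fulfilment(contracted_max: dict, measured: list[dict]) -> dict:
    """For each parameter with a contracted upper bound (e.g. execution duration in ms),
    return the fraction of observed executions that stayed within the bound."""
    fulfilment = {}
    for param, bound in contracted_max.items():
        values = [m[param] for m in measured if param in m]
        if not values:
            continue  # no observations for this parameter
        within = sum(1 for v in values if v <= bound)
        fulfilment[param] = within / len(values)
    return fulfilment

# Example: contracted duration <= 2000 ms, three observed executions.
print(sla_fulfilment({"execution_duration_ms": 2000},
                     [{"execution_duration_ms": d} for d in (1500, 1800, 2300)]))
```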

Service profiling is, in our opinion, a trustworthy method of service quality measurement. It does not rely on providers' declarations about the quality of their services. Statistical procedures used to compute the values, data coming from execution logs, and so forth, assure high reliability of the results of service profiling. The information declared initially by a service provider might be verified by what is stated in the SLA, approved by its provider, and then by the results of the analysis of execution data. This kind of verification increases the reliability of our mechanism, and we do not need a third party to verify the correctness of the values of profile parameters, as the procedures are transparent and the parameters precisely defined. In addition, the service profiling mechanism is generic (the number of parameters it operates on may be easily modified) and independent of the service description provided by a service provider.

Service Profile

As already stated, a service profile may be defined as an up-to-date description of a subset of non-functional properties of a service. It allows for comparison of services based on non-functional parameters and selection of the service most suited to the requirements of a user.

In order to create an adequate service description, one needs to consider that the collected or derived data, taken into account by a service profiling mechanism, may differ in terms of its stability in time. Regarding the type of information on services, we can distinguish three main categories:

• Static information: Values of service properties that do not change over time, such as the name of the service, and are provided by a service provider.
• Semistatic information: Values of service properties that may change over time, such as quality of service and price. This information changes periodically, but not very often.
• Dynamic information: Values of service properties that may be (and usually are) different in every execution of the service. It relates mainly to the network-related quality of service.

From the profiling point of view, the most interesting parameters are the dynamic and semistatic ones. In addition, parameters that are estimated and finally included in a service profile may be simple reflections of service behaviour or adequately aggregated to show the overall quality of a service. Therefore, we consider two groups of non-functional properties:

• Simple properties: Values of service properties that can be monitored on an individual level. This is mostly information presented in service level agreements. Such properties may include, for example, latency time, execution cost, and so on.
• Derived properties: Properties for which additional manipulation is needed (performed by a service profiling system). Such properties may include reliability, availability, or, in our case, a synthetic indicator.

Our belief is that a service profile should be easily interchanged between the building blocks of SOA systems. In order to allow for simple messaging and processing of profiles, we decided to represent them as XML documents. The greatest advantage of this solution is that an XML schema is easily verifiable and interpretable by machines. A standardized form of a service profile makes it easy to adopt in industrial applications.

Because of the flexibility of service profiling, the set of parameters included in a profile may vary due to the different quality parameters considered in different IT systems. The exemplary structure of a profile (as seen in Figure 1) was derived based on the requirements defined in the already mentioned ASG project.

An excerpt of a service profile schema is presented in Listing 1. Please note that for some parameters, average, minimal, and maximal values are determined. These values may be helpful when a user precisely expresses the user's needs on quality parameters. Therefore, a user may specify that the user is looking for a service where the parameters meet accurately expressed criteria.
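Listing 1 itself is not reproduced in this excerpt; purely as an illustration, a profile of this kind might be serialized along the lines of the following Python sketch. The element and attribute names here are hypothetical and are not taken from the ASG schema:

```python
import xml.etree.ElementTree as ET

def build_profile_xml(service_name: str, params: dict) -> str:
    """Serialize a service profile as XML.
    `params` maps a parameter name to its (average, min, max) values."""
    profile = ET.Element("serviceProfile", {"serviceName": service_name})
    for name, (avg, lo, hi) in params.items():
        ET.SubElement(profile, "parameter",
                      {"name": name, "average": str(avg), "min": str(lo), "max": str(hi)})
    return ET.tostring(profile, encoding="unicode")

print(build_profile_xml("Map24RoutePlanningService",
                        {"executionDuration": (820, 410, 2300),
                         "reliability": (0.97, 0.97, 0.97)}))
```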

Additionally, a service profiling system may offer provider profiles that show how, in general, the services of a given provider behave. They may be useful to represent the overall quality of services provided by a concrete provider. These profiles are more quality-oriented, whereas service profiles are more performance-oriented. In this case, quality orientation means that time-related QoS parameters are less important than whether a given service was accessible or produced the expected results.

Figure 1. Service profile structure (class diagram)

Service Profile Computation

The most popular information sources for service profiling are execution logs. These log files usually have a strictly defined structure (Aalst et al., 2003), so automated processing of them is feasible and the algorithms are rather straightforward. For example, the execution duration may be easily computed as the difference between the end time and the start time of a service execution (these values are stored in the log file). Of course, to compute the values of other parameters, other methods may be required. For instance, in order to compute the value of a reliability parameter, a profiling system needs to keep track of service execution states. In our approach, we consider the finite state machine of Web service transitions as shown in Figure 3.

Figure 2. Listing 1: Excerpt of an exemplary service profile schema

Therefore, it is possible to determine the number of started services that were completed. Thus, the assessment of the reliability parameter is not a problem. A similar approach is used for accessibility parameter computation. For more details please refer to Kowalkiewicz, Ludwig, Kaczmarek, and Zyskowski (2005). In Table 5 we present an exemplary set of non-functional properties and outline the methods of their computation.

When creating a service profile, the time horizon is taken into account. A user may need a particular instance of a service only once at a given point in time, or may need to use the service a few times in a given time period. Therefore, the horizon of the prognosis should be considered. In the first case, short-term information about a service is important; in the second case, more attention should be paid to the long-term behaviour of a service, taking into account also more historical data.

Another challenging issue is the set of non-functional parameters that should be used to describe composite services and the way to compute the values of these parameters. Possible solutions are presented by Liu et al. (2004), Maximilien and Singh (2004), and Zeng et al. (2003). They suggest using a similar set of attributes as for atomic services and computing their values using statistical methods.

Composite service profiles are aggregations of atomic service profiles. A description of a composite service profile is very similar to a service profile, because it treats a composite service like an atomic one. That is why the structure of its profile does not differ significantly from the profile of an atomic service.

Figure 3. Types of Web services events. Based on Aalst et al. (2003)

Table 5. Some parameters of a service profile and their computation methods

Execution duration: Difference between the end and start time of a service execution
Accessibility: Number of successful invocations divided by all invocations in a given time period
Reliability: Number of successful executions divided by all executions in a given time period
Synthetic indicator: Statistical aggregation of all considered parameters, denoting the overall quality of a service
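As a rough illustration of the computations in Table 5, the following Python sketch derives execution duration, accessibility, and reliability from a list of log records; the record fields and the simple averaging used for the synthetic indicator are assumptions, not the ASG algorithms:

```python
from statistics import mean

def profile_from_log(entries: list[dict]) -> dict:
    """Each entry is assumed to hold 'start' and 'end' (timestamps in seconds),
    'invoked' (service reachable) and 'completed' (execution finished successfully)."""
    durations = [e["end"] - e["start"] for e in entries if e.get("completed")]
    profile = {
        "execution_duration": mean(durations) if durations else None,
        "accessibility": sum(e["invoked"] for e in entries) / len(entries),
        "reliability": sum(e["completed"] for e in entries) / len(entries),
    }
    # Hypothetical synthetic indicator: average of the two ratio-valued parameters.
    profile["synthetic_indicator"] = (profile["accessibility"] + profile["reliability"]) / 2
    return profile
```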

However, the values of some parameters are computed as statistical measures based on the characteristics of the atomic services included in the composed service. Moreover, not all parameters that are computed for an atomic service profile are included in composite service profiles. For example, the response latency value is only computable for atomic services.

In order to compute the value of a quality parameter of a composite service, we can proceed in two ways:

• Execution log mining may be performed in order to compute the values of parameters using methods similar to those for atomic services;
• A composite service execution plan may be used to compute a hypothetical value of a quality parameter, as sketched below. Such plans are usually described using the Business Process Execution Language for Web Services (BPEL4WS). First, the average values for each atomic service included in the composition are computed; then the plan is analysed, the critical path is identified, and the hypothetical value is computed. For instance, the execution duration of the composite service is computed as the sum of the execution durations of the services on the critical path. Other calculations include the analysis of workflow patterns, the determination of how many times services were executed (in the case of loops), and so forth. Details about such computation are given by Kowalkiewicz et al. (2005).
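A minimal sketch of the critical-path estimate for composite execution duration, assuming Python, a composition given as a DAG of atomic services, and average durations already derived from profiles (loop handling and other workflow patterns are omitted):

```python
from functools import lru_cache

def composite_duration(avg_duration: dict, successors: dict, start: str) -> float:
    """Length of the longest (critical) path through the composition DAG,
    measured as the sum of average execution durations along that path."""
    @lru_cache(maxsize=None)
    def longest_from(service: str) -> float:
        nexts = successors.get(service, [])
        tail = max((longest_from(s) for s in nexts), default=0.0)
        return avg_duration[service] + tail
    return longest_from(start)

# Two parallel branches after A (A -> B -> D and A -> C -> D); the B branch dominates.
print(composite_duration({"A": 1.0, "B": 4.0, "C": 2.0, "D": 1.0},
                         {"A": ["B", "C"], "B": ["D"], "C": ["D"]}, "A"))  # 6.0
```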

It can be very interesting to rank services according to their quality. In order to do that, a method that would allow one to compare objects (in our case, services) with regard to the different properties that describe these objects was defined. Our decision was to take advantage of multiple criteria analysis (MCA), which fitted our needs ideally. We used the MCA method to rank services based on their quality attributes. This ranking was created by computing a synthetic indicator reflecting the overall service quality. Then, it was possible to compare the values of the synthetic indicators of several services and make a choice between them. The detailed description of MCA and the procedure to compute the value of a synthetic indicator is given by Abramowicz, Haniewicz, Kaczmarek, and Zyskowski (2006b).
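The exact MCA procedure is described in Abramowicz et al. (2006b); purely as an illustration of the idea, the sketch below computes a weighted synthetic indicator after normalising each quality attribute to [0, 1]. The normalisation and the weights are assumptions, not the published method:

```python
def synthetic_indicator(services: dict, weights: dict) -> dict:
    """services: name -> {attribute: value}; higher attribute values are assumed better.
    Returns name -> weighted score in [0, 1], usable for ranking."""
    attrs = weights.keys()
    lo = {a: min(s[a] for s in services.values()) for a in attrs}
    hi = {a: max(s[a] for s in services.values()) for a in attrs}
    def norm(a, v):
        return 1.0 if hi[a] == lo[a] else (v - lo[a]) / (hi[a] - lo[a])
    return {name: sum(weights[a] * norm(a, vals[a]) for a in attrs)
            for name, vals in services.items()}

ranking = synthetic_indicator(
    {"S1": {"reliability": 0.99, "accessibility": 0.90},
     "S2": {"reliability": 0.95, "accessibility": 0.99}},
    {"reliability": 0.6, "accessibility": 0.4})
print(max(ranking, key=ranking.get))  # the service with the highest indicator
```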

Dynamic Service Profiling in the Adaptive Services Grid Project

Taking into account the issues discussed in the previous section, the architecture of a service profiling system should consist of at least a few components. It should include a repository that will store the data gathered by the system, and should have component(s) responsible for communication with the data sources. Moreover, it should provide interfaces that allow all interested parties to pose queries. Finally, it should have the profiling mechanism, responsible for analysing the data and deriving/computing the values of the parameters to be included in a service profile.

As an example of the architecture of a service profiling system, the dynamic service profiling component of the Adaptive Services Grid project may be presented. The main goal of the ASG project (Kuropka & Weske, 2006) was to develop a proof-of-concept prototype of a platform for adaptive service discovery, creation, composition, and enactment, as well as negotiations and service profiling. In order to support the above-mentioned interactions, the ASG platform and mechanisms require the ability to differentiate and compare different services and service substitutes (services having the same functionality). There are some requirements that need to be met in order to make service differentiation feasible. First, the non-functional parameters must be taken into account, as every customer perceives a service not only in terms of what functionality it provides, but is also interested in the non-functional properties of the service. The next issue is to deliver a QoS model that everybody would accept. Such a standardized QoS model is the first step towards agreement on monitoring mechanisms, common SLAs, and other elements that should be part of every mature marketplace. The last challenge is to create an adequate description of a service that will give a user hints about the distinctive features of service substitutes. Thanks to monitoring, it should be possible to analyse the information coming from service executions, SLA violations, and so forth.

Based on the execution data and users' preferences, it is reasonable to create a service profile which reflects the QoS values of a given service in a considered time horizon. Moreover, the user should be capable of ranking these profiles and choosing the most suitable Web service. Such a mechanism is implemented in the Adaptive Services Grid platform (Kuropka & Weske, 2006) using a dynamic service profiling (DSP) mechanism. The ASG service delivery process is presented in Figure 4.

The architecture of the dynamic service profiling system (see Figure 5), being a part of the entire ASG platform, consists of a few components (Abramowicz, Kaczmarek, Kowalkiewicz, & Zyskowski, 2006):

• Data collector, which is responsible for collecting data (by either a push or a pull method) from different sources, processing them, and saving them, properly aggregated, to the DSP repository.
• Service profiler, which is responsible for deriving QoS attributes to answer requests. The service profiler creates an up-to-date profile of a service (or a provider) whenever it receives a query. Two types of queries may be distinguished: a request for the profile of a composed service, taking the time horizon into consideration; and a request for profiles and a ranking of a set of atomic services, taking the time horizon into consideration. When creating profiles, the service profiler uses the following data about services: data from the provider's declaration (service registry), and values of service attributes from past executions (DSP repository). In order to create a profile, the appropriate values of the characteristics, depending on the prognosis horizon, are computed. Then, based on the computed values, a synthetic indicator for the service is created. As interaction with a user is not implemented, default user preferences are used. After computing the indicators for all of the services returned for the given query, the services can be compared and the best of them can be identified.

Figure 4. Service delivery process in the ASG. © Krause, 2005 (used with permission)

• DSP repository, which is the internal persistent data storage fed by the data collector and responsible for storing all data relevant to service profiles. Only the data collector can change information in the DSP repository; other subsystems have read-only access to the repository.
• Event manager, which handles workflow events. The event manager is the subcomponent responsible for processing workflow events and receiving execution logs. If any crucial information is included in such an event, it is passed to the data collector for further analysis.

As verified in the prototype implementation within the ASG project, such an architecture fulfils the goals and requirements of a service profiling system.

Summary

This chapter familiarizes users with the idea of Web services profiling. As background, the current initiatives in the field of Web services description, especially non-functional properties and methods to derive the values of these properties, were presented. Moreover, the readers were introduced to different approaches to the quality-of-service concept. The focus of the chapter was placed on Web service profiling as successfully implemented within the ASG system. A service profile, in its final state, aggregates all measured values of quality parameters to give a user a holistic view of service quality. Taking into account information from profiles, it is possible to select the most suitable service with regard to the user-specific quality expectations.

References

W3C. (2004). OWL-S: Semantic markup for Web services. Retrieved May 26, 2008, from http://

Aalst, W. D. (2003). Workflow mining: A survey of issues and approaches. Data & Knowledge Engineering, 47(2), 237–267. doi:10.1016/S0169-023X(03)00066-1

Abramowicz, W., Filipowska, A., Kaczmarek, M., Kaczmarek, T., Kowalkiewicz, M., Rutkowski, W., et al. (2006). Service interdependencies: Insights into use cases for service compositions. Paper presented at IFIP 2006.

Abramowicz, W., Haniewicz, K., Kaczmarek, M., & Zyskowski, D. (2006a). Automatic Web services interactions - requirements, challenges and limits from the F-WebS system perspective. Paper presented at the International Conference on Next Generation Web Services Practices, Seoul, Korea.

Abramowicz, W., Haniewicz, K., Kaczmarek, M., & Zyskowski, D. (2006b). Filtering of Semantic Web services with F-WebS system. Paper presented at The Semantic Web: ASWC 2006 Workshop.

Abramowicz, W., Kaczmarek, M., Kowalkiewicz, M., & Zyskowski, D. (2005). A survey of QoS computation for Web services profiling. Paper presented at the 18th International Conference on Computer Applications in Industry and Engineering (ISCA), Honolulu.

Abramowicz, W., Kaczmarek, M., Kowalkiewicz, M., & Zyskowski, D. (2006). Architecture for service profiling. Paper presented at the Modelling, Design and Analysis for Service-Oriented Architecture Workshop, in conjunction with the 2006 IEEE International Conferences on Services Computing (SCC 2006) and Web Services (ICWS 2006), Chicago.

Abramowicz, W., Kaczmarek, M., & Zyskowski, D. (2006). Duality in Web services reliability. Paper presented at the International Conference on Internet and Web Applications and Services (ICIW'06), Guadeloupe, French Caribbean.

Baida, Z., Gordijn, J., Omelayenko, B., & Akkermans, H. (2004). A shared service terminology for online service provisioning. Paper presented at the Sixth International Conference on Electronic Commerce (ICEC04), Delft, The Netherlands.

Casati, F., Castellanos, M., Dayal, U., & Shan, M. C. (2004, November). Probabilistic, context-sensitive and goal-oriented service selection. Paper presented at ICSOC'04, New York.

Eenoo, C. V., Hylooz, O., & Khan, K. M. (2005). Addressing non-functional properties in software architecture using ADL. Paper presented at the 6th Australian Workshop on Software and Systems Architectures - AWSA'05, Brisbane, Australia.

Farrel, J., & Lausen, H. (2006). Semantic annotations for WSDL. DERI Innsbruck.

Kowalkiewicz, M., Ludwig, A., Kaczmarek, M., & Zyskowski, D. (2005). Documented mechanisms for dynamic service profiling and agreement life-cycle management (ASG internal deliverable, FP6-IST-004617).

Krause, H. (2005). Next generation service delivery: Adaptive services grid, European project, 2007.

Kuropka, D., & Weske, M. (2006, January). Die adaptive services grid platform: Motivation, potential, funktionsweise und anwendungsszenarien. Paper presented at the EMISA Forum.

Kuster, U., Koenig-Ries, B., Stern, M., & Klein, M. (2007, May 8-12). DIANE: An integrated approach to automated service discovery, matchmaking and composition. Paper presented at WWW 2007, Banff, Alberta, Canada.

Lara, R., & Olmedilla, D. (2005, June). Discovery and contracting of Semantic Web services. Position paper presented at the Workshop on Frameworks for Semantics in Web Services, Innsbruck, Austria.

Liu, C., Peng, Y., & Chen, J. (2006). Web services description ontology-based service discovery model. Paper presented at the International Conference on Web Intelligence (WI 2006).

Liu, Y., Ngu, A. H. H., & Zeng, L. (2004, May). QoS computation and policing in dynamic Web service selection. Paper presented at the 13th International Conference on World Wide Web (WWW), New York.

Maximilien, E. M., & Singh, M. P. (2002a). Reputation and endorsement for Web services. ACM SIGecom Exchanges, 3(1), 24–31. doi:10.1145/844331.844335

Maximilien, E. M., & Singh, M. P. (2002b). Conceptual model of Web services reputation. SIGMOD Record.

Maximilien, E. M., & Singh, M. P. (2004, November). Towards autonomic Web services trust and selection. Paper presented at ICSOC'04, New York.

Menasce, D. A. (2002). QoS issues in Web services. IEEE Internet Computing, 6(6), 72–75. doi:10.1109/MIC.2002.1067740

Noll, J. (2004). ASG based scenarios in telecommunications, telematics and enhanced enterprise IT. Retrieved May 26, 2008, from http://asg-platform.org

O'Sullivan, J., Edmond, D., & Hofstede, A. T. (2002). What's in a service? Towards an accurate description of non-functional properties of Web services. Distributed and Parallel Databases, (12), 117–133. doi:10.1023/A:1016547000822

Preist, C. (2004, November). A conceptual architecture for Semantic Web services. Paper presented at the International Semantic Web Conference 2004 (ISWC 2004).

Ran, S. (2003). A model for Web services discovery with QoS. ACM SIGecom Exchanges, 4(1), 1–10. doi:10.1145/844357.844360

Roman, D., et al. (2006). WWW: WSMO, WSML and WSMX in a nutshell. Paper presented at the First Asian Semantic Web Conference (ASWC 2006).

Rosa, N. S., Cunha, P. R., Freire, L., & Justo (2002, July). QoS for service-oriented middleware. Paper presented at the 6th World Multiconference on Systemics, Cybernetics and Informatics (SCI02).

Toma, I. (2006). Non-functional properties in Web services. DERI.

UDDI. (2004). UDDI Version 3.0.2. UDDI Spec Technical Committee.

Zeng, L., Benatallah, B., Dumas, M., Kalagnanam, J., & Sheng, Q. Z. (2003, May). Quality driven Web services composition. Paper presented at the 12th International Conference on World Wide Web (WWW), Budapest, Hungary.

This work was previously published in Managing Web Service Quality: Measuring Outcomes and Effectiveness, edited by K. Khan, pp. 96-113, copyright 2009 by Information Science Reference (an imprint of IGI Global).

The tremendous growth of the Internet has introduced a number of interoperability problems for distributed multimedia applications. These problems are related to the heterogeneity of client devices, network connectivity, content formats, and users' preferences. The challenge is even bigger for multimedia content providers, who are faced with the dilemma of finding the combination of different variants of a content to create, store, and send to their subscribers that maximizes their satisfaction and hence entices them to come back. In this chapter, the authors will present a framework for trans-coding multimedia streams using an orchestration of Web services. The framework takes into consideration the profile of the communicating devices, network connectivity, exchanged content formats, context description, users' preferences, and available adaptation services to find a chain of adaptation services that should be applied to the content to make it more satisfactory to clients. The framework was implemented as a core component of an architecture that supports personal and service mobility.

DOI: 10.4018/978-1-60566-330-2.ch008

Introduction

The tremendous growth of the Internet has introduced a number of interoperability problems for distributed multimedia applications. These problems are related to the heterogeneity of client devices, network connectivity, content formats, and users' preferences. The diversity of client devices, network connectivity, content formats, and users' preferences has also posed some challenges in aligning and customizing the data exchanged between different users with different preferences. The challenge is even bigger for multimedia content providers, who are faced with the dilemma of finding the combination of different variants of a content to create, store, and send to their subscribers that maximizes their satisfaction and hence entices them to come back. Most content providers have taken the costly approach of creating different versions of content for different access devices and networks.

Content adaptation is an effective and attractive solution to the problem of mismatch in content format, device capability, network access, and user's preferences. Using content adaptation, a number of adaptations are applied to the original content to make it satisfy the constraints of the receiving device and the preferences of its user. Most currently available content adaptation modules are designed to make the Web easier to use. Examples of such adaptation modules include conversion of HTML pages to Wireless Markup Language (WML, 2001) pages, enlarging text size, reducing the size of an image, changing text and background colors for better contrast, removal of redundant information, audio to text conversion, video to key frame or video to text conversion, and content extraction, to list a few. These adaptation modules do not, though, have the same requirements and challenges as real-time multimedia content adaptations. Real-time multimedia applications involve large volumes of data, making trans-coding a computationally very expensive task (Chandra & Ellis, 1999; Han et al., 1998). To address this challenge, some trans-coding services have been implemented in hardware and deployed on intermediate network nodes or proxies. The disadvantage of this approach is that there are always new types of clients that cannot be supported by the deployed hardware. A more suitable approach to address the computational challenge of multimedia trans-coding is based on the observation that the general trans-coding process can be defined as a combinatorial process (Mohan, Smith, & Li, 1999), and that multiple trans-coding services can be chained effectively together to perform a complex trans-coding task. So, instead of having all trans-coding done by one single trans-coding service, a number of trans-coding services can collaborate to achieve a composite adaptation task.

For instance, trans-coding a 256-color depth jpeg image to a 2-color depth gif image can be carried out in two stages: the first stage converts 256-color to 2-color depth, and the second stage converts the jpeg format to the gif format. Using the software approach, transcoders can then be built more easily in software, and deployed and advertised more quickly to meet the needs of the users. Software-based trans-coding is also more reliable, since its components can be simpler and they can also be replicated across the network. Moreover, transcoders can be modularized and re-used in different situations and contexts.

Given a composite adaptation task that can be carried out in a number of stages, and given that there could be a number of possible configurations to adapt the sender's content to make it presentable at the receiver's device, the challenge is to find the appropriate chain of available trans-coding services that best fits the capabilities of the device and, at the same time, maximizes the user's satisfaction with the final delivered content. In this chapter, we will discuss a Quality of Service (QoS) selection algorithm for providing personalized content through Web service composition. The function of the algorithm is to find the most appropriate chain of available trans-coding services between the sender and the receiver, and also to select the values of the configuration parameters for each trans-coding service. The proposed algorithm uses the user's satisfaction with the quality of the trans-coded content as the optimization metric for the path selection algorithm.

The rest of the chapter is organized as follows: In Section 2, we will introduce content adaptation and present the existing models used in content adaptation. Section 3 lists all the required elements for providing customized content adaptation. In Section 4 we present our methodology for using the required elements from Section 3 to construct a graph of trans-coding services; the algorithm for selecting the chain of trans-coding services is then presented. The selection criterion for the algorithm, as well as its characteristics, is also presented in Section 4, and finally, we end Section 4 with an example that shows step-by-step the results of the algorithm. Our conclusion is presented in Section 5.

Content Adaptation

In today's Internet, there is a wide range of client devices in terms of both hardware and software capabilities. Device capabilities vary in different dimensions, including processing power, storage space, display resolution and color depth, media type handling, and much more. This variety of device capabilities makes it extremely difficult for content providers to produce content that is acceptable and appreciated by all client devices (Fox, Gribble, & Chawathe, 1998), making application-level adaptation a necessity to cover the wide variety of clients.

There are three main approaches for handling this diversity in content formats: static content adaptation, dynamic content adaptation, and a hybrid of the static and dynamic approaches (Chang & Chen, 2002; Lum & Lau, 2002). The first two approaches differ in the time when the different content variants are created (Lei & Georganas, 2001) to match the requested format. In static adaptation, the content creator generates and stores different variants of the same content on a content server, with each variant formatted for a certain device or class of devices. Hafid and Bochmann (1996) presented an architecture for news-on-demand using this scheme. Static adaptation has three main advantages: (1) it is highly customized to specific classes of client devices, (2) it does not require any runtime processing, so no delay is incurred, and (3) the content creator has full control over how the content is formatted and delivered to the client. On the other hand, static adaptation has a number of disadvantages, mainly related to the management and maintenance of different variants of the same content (Lum & Lau, 2002): (1) different content formats need to be created for each sort of device or class of devices, and this needs to be re-done when new devices are introduced, and (2) it requires large storage space to keep all variants of the same content.

With dynamic content adaptation, the content is trans-coded from one format to the other only when it is requested. Depending on the location where the trans-coding takes place, dynamic content adaptation technologies can be classified into three categories: server-based, client-based, and proxy-based. In the server-based approach (Mohan, Smith, & Li, 1999), the content server is responsible for performing the trans-coding; the content provider has all the control over how the content is trans-coded and presented to the user. Additionally, it allows the content to be trans-coded before it is encrypted, making it secure against malicious attacks. On the other hand, server-based adaptation does not scale properly for a large number of users and requires high-end content and delivery servers to handle all requests.

As for the client-based approach (Björk et al., 1999; Fisher et al., 1997), the client does the trans-coding when it receives the content. The advantage of this approach is that the content can be adapted to match exactly the characteristics of the client. But at the same time, client-based adaptation can be highly expensive in terms of bandwidth and computation power, especially for small devices with little computational power and slow network connectivity, where a large volume of data might be wastefully delivered to the device only to be dropped during trans-coding.

The third adaptation approach is the proxy-based approach (Chandra & Ellis, 1999; Chandra, Ellis, & Vahdat, 2000; Floyd & Housel, 1998; Fox, Gribble, Chawathe, Brewer, & Gauthier, 1997), where an intermediary computational entity can carry out content adaptation on the fly, on behalf of the server or client. Proxy adaptation has a number of benefits, including leveraging the installed infrastructure and scaling properly with the number of clients. It also provides a clear separation between content creation and content adaptation. On the other hand, some content providers may argue that they prefer to have full control over how their content is presented to the user. Also, using proxies for adaptation does not allow the use of end-to-end security solutions.

Characterization and Requirements for Content Adaptation

Advances in computing technology have led to a wide variety of computing devices, which has made interoperability very difficult. Added to this problem is the diversity of user preferences when it comes to multimedia communications. This diversity in devices and user preferences has made content personalization an important requirement in order to achieve results that satisfy the user. The flexibility of any system to provide content personalization depends mainly on the amount of information available on a number of aspects involved in the delivery of the content to the user. The more information about these aspects is made available to the system, the more the content can be delivered in a format that is highly satisfactory to the user. These relevant aspects are: user preferences, media content profile, network profile, context profile, device profile, and the profile of intermediaries (or proxies) along the path of data delivery. We will briefly describe each of these aspects here; interested readers may refer to El-Khatib and Bochmann (2003) for more details.

User Profile: The user's profile captures the personal properties and preferences of the user, such as the preferred audio and video receiving/sending qualities (frame rate, resolution, audio quality, etc.). Other preferences can also be related to the quality of each media type for communication with a particular person or group of persons. For instance, a customer service representative should be able to specify in his/her profile the preference to use high-resolution video and CD audio quality when talking to a client, and to use telephony-quality audio and low-resolution video when communicating with a colleague at work. The user's profile may also hold the user's policies for application adaptations, such as the preference of the user to drop the audio quality of a sport clip before degrading the video quality when resources are limited. The MPEG-21 standard (MPEG-21, 2001) is the most notable standard on user profiles.

Content Profile: Multimedia content might enclose different media types, such as audio, video, and text, and each type can have different formats (Lei & Georganas, 2001). Each type has its format characteristics and parameters that can be used to describe the media. Such information about the content may include storage features, variants, author and production, usage, and many other metadata. The MPEG-7 standard (MPEG-7, 2000), formally named "Multimedia Content Description Interface", offers a comprehensive set of standardized description tools to describe multimedia content.

Context Profile: A context profile includes any dynamic information that is part of the context or current status of the user. Context information may include physical (e.g., location, weather, temperature), social (e.g., sitting for dinner), or organizational information (e.g., acting senior manager). The MPEG-21 standard includes tools for describing the natural environment characteristics of the user, including location and time, as well as the audio and illumination characteristics of the user's environment. Resource adaptation engines can use these elements to deliver the best experience to the user.

Device Profile: To ensure that a requested content can be properly rendered on the user's device, it is essential to include the capabilities and characteristics of the device in the content adaptation process. Information about the rendering device may include the hardware characteristics of the device, such as the device type, processor speed, processor load, screen resolution, color depth, available memory, number of speakers, the display size, and the input and output capabilities. The software characteristics, such as the operating system (vendor and version) and the audio and video codecs supported by the device, should also be included in the device profile. The User Agent Profile (UAProf) created by the Wireless Application Forum (WAP) and the MPEG-21 standard both include description tools for describing device capabilities.

Network Profile: Streaming multimedia content over a network poses a number of technical challenges due to the strict QoS requirements of multimedia content, such as low delay, low jitter, and high throughput (Ng, Tan, & Cheng, 2001). Failing to meet these requirements may lead to a bad user experience (Katchabaw, Lutfiyya, & Bauer, 1998; Poellabauer, Abbasi, & Schwan, 2002). With a large variety of transport networks, it is necessary to include the network characteristics in content personalization and to dynamically adapt the multimedia content to the fluctuating network resources (Wu, Hou, & Zhang, 2001). Achieving this requires collecting information about the available resources in the network, such as the maximum delay, error rate, and available throughput on every link over the content delivery path. Description tools for network capabilities, including utilization, delay, and error characteristics, are included in the MPEG-21 standard.

Profile of Intermediaries: When the content is delivered to the user across the network, it usually travels over a number of intermediaries. These intermediaries have been traditionally used to apply some added-value services, including on-the-fly content adaptation services (Chandra, Ellis, & Vahdat, 2000; Fox, Gribble, Chawathe, Brewer, & Gauthier, 1997). For the purpose of content adaptation, the profile of an intermediary would usually include a description of all the adaptation services that the intermediary can provide. These services can be described using any service description language, such as the JINI network technology (JINI, 1998), the Service Location Protocol (Guttman, Perkins, Veizades, & Day, 1999), or the Web Service Description Language (WSDL, 2002). A description of an adaptation service would include, for instance, the possible input and output formats of the service, the required processing and computation power of the service, and maybe the cost for using the service. The intermediary profile would also include information about the available resources at the intermediary (such as CPU cycles and memory) to carry out the services.

QoS Selection Algorithm

In this section, we will describe the overall QoS selection algorithm that finds the most appropriate chain of trans-coding services between the sender and the receiver, and also selects the configuration for each trans-coding service. We will first start by defining the user's satisfaction as the selection criterion for the algorithm, and then show how to construct the directed graph for adaptation, using the sender's content profile, the receiver's device profile, and the list of available trans-coding services. After constructing the graph, we will show how to apply some optimization techniques on the graph to remove the extra edges, and finally present the actual QoS path and parameter selection algorithm.

User's Satisfaction as Selection Criteria

Most Internet users are indifferent about the underlying technologies, such as protocols, codecs, or resource reservation mechanisms, that enable their communication session. They are also indifferent about network-level QoS characteristics, such as bandwidth, delay, or throughput. All that is important for these users in the end is making the communication session work in a satisfactory way: for instance, hearing without jitter and seeing without irregularity.

As we mentioned earlier, the user's preferences expressed in the user's profile can be classified as application-layer QoS parameters. In order to compute the user's satisfaction with all values of the application-layer configuration parameters, we have used the approach presented by Richards, Rogers, Witana, and Antoniades (1998), where each application-level QoS parameter is represented by a variable x_i over the set of all possible values for that QoS parameter. The satisfaction or appreciation of a user with each quality value is expressed as a satisfaction function S_i(x_i). All satisfaction functions have a range of [0, 1], which corresponds to the minimum acceptable (M) and ideal (I) values of x_i. The satisfaction function S_i(x_i) can take any shape, with the condition that it must increase monotonically over the domain. Figure 1 shows a possible satisfaction function for the frame rate variable.

Figure 1. Possible satisfaction function for the frame rate.

In the case when there is more than one application parameter (frame rate, resolution, color depth, audio quality, etc.), Richards et al. proposed using a combination function f_comb that computes the total satisfaction s_tot from the individual satisfactions s_i (Equation 1):

s_tot = f_comb(s_1, s_2, ..., s_n)    (1)

We think that the approach described above is a major step towards a simple, user-friendly interface for user-level QoS specification; however, further considerations could be taken into account, as described below. A first improvement results from the observation that users in a telecommunication session might find some media types more important than others. For instance, a user of a news-on-demand service might prefer to receive high-quality audio with low-quality video, as compared to average-quality audio and average-quality video. In the case of a user watching a sport event, the situation may be the opposite (if the user does not care about the audio of the commentator). This preference for individual media can play a factor when it comes to the calculation of the total satisfaction s_tot. By assigning different weights w_i to the different parameters x_i, s_tot will reflect

the user preference for different media types. The combination function for the total user satisfaction can be redefined as follows:

s_tot^user = f_comb^w(s_1, s_2, s_3, ..., s_n, w_1, w_2, w_3, ..., w_n)    (2)

where w_i is the weight for the individual satisfaction s_i and the weights satisfy (1/n) * sum_{i=1..n} w_i = 1 (i.e., they average to one). Equation 2 has properties similar to Equation 1, namely:

Prop 1. One individual low satisfaction is enough to bring the total satisfaction to a low value.

Prop 2. The total satisfaction of equal individual satisfactions s_i with equal weights is equal to the satisfaction s_i.
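As an illustration only, the sketch below uses a piecewise-linear satisfaction function between a minimum acceptable value M and an ideal value I, and a weighted harmonic mean as one possible combination function satisfying Prop 1 and Prop 2; neither choice is prescribed by Richards et al. or by this chapter:

```python
def satisfaction(x: float, minimum: float, ideal: float) -> float:
    """Piecewise-linear S(x): 0 below the minimum acceptable value, 1 at or above the ideal."""
    if x <= minimum:
        return 0.0
    if x >= ideal:
        return 1.0
    return (x - minimum) / (ideal - minimum)

def total_satisfaction(sats: list[float], weights: list[float]) -> float:
    """Weighted harmonic mean: one very low s_i drags the total down (Prop 1),
    and equal satisfactions with equal weights return that same value (Prop 2)."""
    n = len(sats)
    scaled = [w * n / sum(weights) for w in weights]  # normalise so weights average to 1
    if any(s == 0.0 for s in sats):
        return 0.0
    return n / sum(w / s for w, s in zip(scaled, sats))

s_video = satisfaction(20, minimum=5, ideal=30)      # frame rate of 20 fps
s_audio = satisfaction(44.1, minimum=8, ideal=44.1)  # audio sampling rate in kHz
print(total_satisfaction([s_video, s_audio], [2.0, 1.0]))  # video weighted twice as heavily
```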

These constant weight factors (AudioWeightFactor, VideoWeightFactor, ...) can be selected by the user and stored in the user profile. The selection of these weights depends on the type of service the user is willing to receive when using a specific service or communicating with a given callee.

Additionally, we have so far considered only the QoS preferences of a single user. But all conversational multimedia applications involve several users. It is therefore important to determine how the possibly conflicting preferences of the different users are reconciled in order to come up with QoS parameters that are suitable for all participating users.

In certain circumstances, some given parameters may be determined simply based on the preferences of a single user. This may be the case in a two-way teleconference between two users A and B, where the parameters of the video visible to User A would be determined based on the preferences of User A alone, and the video in the opposite direction based on the preferences of User B. However, the situation may be more complex if the cost of the communication is paid by User A and the selection of the video received by User B has an impact on the communication cost.

In other circumstances, as for instance in the case of the joint viewing of a video clip by several participants in a teleconference, the selected quality parameters should be determined based on the preferences of all participating users. In such circumstances, we propose to use the same combination function for user satisfaction considered above and (optionally) introduce a weight for each of the participating users, called the QoS selection weight, which determines how much the preferences of the user influence the overall QoS parameter selection. The total satisfaction (computed over all users) is then given by

s_tot = f_comb^a(s_tot^usr_1, s_tot^usr_2, ..., s_tot^usr_n, a_1, a_2, ..., a_n)    (3)

where s_tot^usr_i is the total satisfaction for user i, and a_i is the QoS selection weight for user i. In the case that the weight of a given user is zero, the preferences of this user are not taken into account for the selection of the QoS parameters.

Constructing a Directed Graph

The elements of the directed graph are the following:

1. Vertices in the graph represent trans-coding services. Each vertex of the graph has a number of properties, including the computation and memory requirements of the corresponding trans-coding service. Each vertex has a number of input and output links. The input links to the vertex represent the possible input formats to the trans-coding service. The output links are the output formats of the trans-coding service. Figure 2 shows a trans-coding service T1, with two input formats, F5 and F6, and four possible output formats, F10, F11, F12 and F13. The sender node is a special case vertex, with only output links, while the receiver node is another special vertex with only input links.

To find the input and output links of each vertex, we rely on the information in different profiles. The output links of the sender are defined in the content profile, which includes, as we mentioned earlier, meta-data information (including type and format) of all the possible variants of the content. Each output link of the sender vertex corresponds to one variant with a certain format. The input links of the receiver are exactly the possible decoders available at the receiver's device. This information is available through the description of the receiver's device in the device profile. The input and output links of intermediate vertices are described in the service description part of the intermediaries profile. Each intermediary profile includes the list of available trans-coding services, each with the list of possible input and output formats. Each possible input format is represented as an input link into the vertex, and the output format is represented as an output link.

2. Edges in the graph represent the network connecting two vertices, where the input link of one vertex matches the output link of another vertex.

To construct the adaptation graph, we start with the sender node, and then connect the outgoing edges of the sender with all the input edges of all other vertices that have the same format. The same process is repeated for all vertices. To make sure that the graph is acyclic, the algorithm continuously verifies that all the formats along any path are distinct.

Figure 3 shows an example of an adaptation graph, constructed with one sender, one receiver, and seven intermediate vertices, each representing a trans-coding service. As we can see from the graph, the sender node is connected to the trans-coding service T1 along the edge labeled F5. This means that the sender S can deliver the content in format F5, and trans-coding service T1 can convert this format into format F10, F11, F12, or F13.
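
The following is a minimal sketch of how such an adaptation graph could be represented and built; the profile contents, the format labels (F5, F10, …) and the helper names are illustrative assumptions rather than the chapter's actual data structures.

from collections import defaultdict

# Transcoder name -> (input formats, output formats), as could be read from
# the service-description part of the intermediaries' profiles.
TRANSCODERS = {"T1": ({"F5", "F6"}, {"F10", "F11", "F12", "F13"})}
SENDER_FORMATS = {"F5"}      # variants listed in the content profile
RECEIVER_FORMATS = {"F10"}   # decoders listed in the device profile

def build_adaptation_graph():
    """Connect every output link of a vertex to every matching input link
    of another vertex; edges are labelled with the carried format.
    (The full construction also verifies that the formats along any path
    are distinct, to keep the graph acyclic.)"""
    nodes = {"S": (set(), SENDER_FORMATS), "R": (RECEIVER_FORMATS, set())}
    nodes.update(TRANSCODERS)
    edges = defaultdict(list)                 # vertex -> [(next vertex, format), ...]
    for u, (_, outs) in nodes.items():
        for v, (ins, _) in nodes.items():
            if u != v:
                for fmt in outs & ins:
                    edges[u].append((v, fmt))
    return edges

print(dict(build_adaptation_graph()))         # e.g. S --F5--> T1 --F10--> R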

Adding Constraints to the Graph

As we have discussed earlier, the optimization criterion we have selected for the QoS selection algorithm is the user's satisfaction computed using the function f_comb presented in Section 4.2. The maximum satisfaction achieved by using a trans-coding service Ti actually depends on a number of factors.

Figure 2 Trans-coding service with multiple input and output links


The first factor is the bandwidth available for the data generated by the trans-coding service Ti. The more bandwidth is available to the trans-coding service, the more likely the trans-coding service will be able to generate trans-coded content that is more appreciated by the receiver. The available bandwidth between two trans-coding services is restricted by the amount of bandwidth available between the intermediate server where the trans-coding service Ti is running and the intermediate server where the next trans-coding service or the receiver is running. We can assume that connected trans-coding services that run on the same intermediate server have an unlimited amount of bandwidth between them.

Other factors that can affect the user's satisfaction are the required amount of memory and computing power to carry out the trans-coding operation. Each of these two factors is a function of the amount of input data to the trans-coding service.

Graph Optimization

By looking at the graph in Figure 3, we can see that there are some edges, like F1, F2 or F17, that are connected only to one trans-coder. These edges cannot be a part of any path from the sender to the receiver. The same principle also applies to trans-coders other than the sender and receiver that are not on any path from the sender to the receiver; T5 is an example of a trans-coder that cannot be used to carry data on the way from the sender to the receiver. Removing these edges and vertices helps reduce the computational time of the algorithm, since it prunes dead-ends from the graph. Applying this optimization to the graph in Figure 3 results in the graph shown in Figure 5. The pseudo-code for the graph optimization is shown in Figure 4.
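
Since Figure 4 itself is not reproduced here, the following is one rough way such dead-end pruning could be coded; the iterative-sweep strategy is an assumption (a sketch over the acyclic graph structure built above), not necessarily the chapter's pseudo-code.

def prune_dead_ends(edges, sender="S", receiver="R"):
    """Repeatedly drop vertices (other than the sender and receiver) that
    have no incoming or no outgoing edge; in an acyclic graph this removes
    vertices and edges that cannot lie on any sender-to-receiver path."""
    graph = {u: list(vs) for u, vs in edges.items()}
    changed = True
    while changed:
        changed = False
        targets = {v for vs in graph.values() for v, _ in vs}
        for node in list(set(graph) | targets):
            if node in (sender, receiver):
                continue
            if node not in targets or not graph.get(node):
                graph.pop(node, None)                         # drop its outgoing edges
                for u in graph:                               # drop edges pointing to it
                    graph[u] = [(v, f) for v, f in graph[u] if v != node]
                changed = True
    return graph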

QoS Selection Algorithm

Figure 3 Directed trans-coding graph

Figure 4 Pseudo-code for the graph optimization

Once the directed acyclic adaptation graph has been constructed, the next step is to perform the QoS selection algorithm to find a chain of trans-coding services, starting from the sender node and ending with the receiver node, which generates the maximum satisfaction of the receiver. Finding such a path is similar to the problem of finding the shortest path in a directed weighted graph, with similar complexity, except that the optimization criterion is the user's satisfaction, and not the available bandwidth or the number of hops.

Our proposed algorithm uses two variables representing two sets of trans-coding services: the set of already considered trans-coding services, called VT, and the set of candidate trans-coding services, called CS, which can be added next on the partially selected path. The candidate trans-coding services set contains the trans-coding services that have input edges coming from any trans-coding service in the set VT. At the beginning of the algorithm, the set VT contains only the sender node, and CS contains all the other trans-coding services in the graph that are connected to the sender, and also the receiver. In each iteration, the algorithm selects the trans-coding service Ti that, when used, generates the highest user satisfaction. The user satisfaction is computed as an optimization function of the audio and video parameters for the output format of Ti, subject to the constraint of available bandwidth between Ti and its ancestor trans-coding service, and also subject to the remaining user's budget. Ti is then added to VT. The CS set is then updated with all the neighbor trans-coding services of Ti. The algorithm stops when the CS set is empty, or when the receiver node is selected to be added to VT. The complete description of the algorithm is given in Figure 6.

As indicated in Step 2 and Step 8, the algorithm selects from CS the transcoder Ti that can generate the highest satisfaction value for the receiver. To compute the satisfaction value for each transcoder Ti in CS, the algorithm selects the QoS parameter values x_i that optimize the satisfaction function in Equation 2, subject only to the constraints of the remaining user's budget and of the bandwidth available on the link that connects Ti to Tprev in VT, i.e.

bandwidth_requirement(x_1, …, x_n) ≤ Bandwidth_AvailableBetween(Ti, Tprev)   (4)

Since each trans-coding service can only reduce the quality of the content, when the algorithm terminates it will have computed the

best path of trans-coding services from the sender to the receiver, and the user's satisfaction value computed on the last edge to the receiver node is the maximum value the user can achieve. To show this, assume that the selected path is the path {T11, …, T1n} in Figure 7. If the path {T21, …, T2m} were a better path, then T2m would have converted the content into a variant that is more appreciated by the user than the variant generated by T1n. Since transcoders can only reduce the quality of content, all transcoders along the path {T21, …, T2m} would also have produced content with a higher satisfaction value than the variant produced by T1n, and hence all these transcoders would have been selected before T1n, which contradicts the assumption.
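
Figure 6 is not reproduced here either, so the following greedy sketch only mirrors the description above; best_satisfaction_for() is a hypothetical callback standing in for the optimization of Equation 2 subject to constraint (4) and the remaining budget.

def select_qos_path(edges, best_satisfaction_for, sender="S", receiver="R"):
    """Greedy selection of a chain of trans-coding services.

    VT holds the already considered vertices, CS the candidates reachable
    from VT. Each round adds the candidate with the highest achievable user
    satisfaction; the loop ends when the receiver is selected or CS is empty."""
    VT, parent = {sender}, {}
    while True:
        CS = {v for u in VT for v, _ in edges.get(u, []) if v not in VT}
        if not CS:
            return None                                # receiver unreachable
        best_node, best_sat, best_pred = None, -1.0, None
        for v in CS:
            for u in VT:
                if any(w == v for w, _ in edges.get(u, [])):
                    sat = best_satisfaction_for(u, v)  # Equation 2 under constraint (4)
                    if sat > best_sat:
                        best_node, best_sat, best_pred = v, sat, u
        parent[best_node] = best_pred
        VT.add(best_node)
        if best_node == receiver:
            break
    path = [receiver]                                  # rebuild the selected chain
    while path[-1] != sender:
        path.append(parent[path[-1]])
    return list(reversed(path))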

Figure 5 Optimized directed trans-coding graph


In this section, we will present an example to show how the QoS path selection algorithm works. We will assume that the graph construction algorithm has generated the graph shown in Figure 8. The graph also shows the selected path with and without trans-coding service T7 as part of the graph. The selected trans-coding services, the user satisfaction, as well as the best current path produced by the algorithm are also shown in Table 1. Each row in the table shows the results for one iteration of the algorithm.

Figure 6 QoS selection algorithm

Figure 7 Graph selection

Figure 8 Example of trans-coding graph


Future Research Direction

In this section, we will outline some potential directions for future research work.

In this chapter, we have not addressed the issues regarding autonomic service management of real-time multimedia services. One of the challenges is the efficient, autonomous management of these real-time content adaptation services in future generation networks. Autonomous service management is crucial for the self-management of real-time multimedia services. According to Ganek and Corbi of IBM (Ganek and Corbi, 2003), the autonomic or self-management aspects include self-optimizing, self-healing, self-configuring, and self-protecting. The existing approach and framework contributes towards a system that is not yet fully autonomic in all four self-management aspects.

Currently, we are looking at a nature-inspired automatic service management solution that is inspired by the bee colony metaphor. The allegory describes how bee agents mimic functional services related to multimedia applications in order to autonomously monitor and configure multimedia services. The objective of this research is to ensure complete autonomic behaviour of the four main management activities (configuration, repair, optimization and protection) of an autonomous system. Such a direction could enable customization of the service for the current and future generation network conditions.

Table 1 Results for each step of the path selection algorithm (columns: Round; Considered set (VT); Candidate set (CS); Selected trans-coding service; Selected path; Delivered frame rate; User satisfaction)

Conclusion

Content adaptation is a natural solution to address the problem of heterogeneity of Internet clients and users. In this chapter, we have presented a solution to the problem of heterogeneity which takes into consideration the capabilities of the client devices, network connectivity, content format, and users' preferences. An important part of the framework is the QoS path selection algorithm that decides on the chain of adaptation services to add and the configuration parameters for each service. The decision is based on the profile of communicating devices, network connectivity, exchanged content formats, context description, and available adaptation services.

References

Björk, S., Holmquist, L. E., Redström, J., Bretan, I., Danielsson, R., Karlgren, J., & Franzén, K. (1999). WEST: a Web browser for small terminals. In Proceedings of the 12th Annual ACM Symposium on User Interface Software and Technology (pp. 187-196). Asheville, North Carolina, United States.

Chandra, S., Ellis, C., & Vahdat, A. (2000). Application-Level Differentiated Multimedia Web Services Using Quality Aware Transcoding. IEEE Journal on Selected Areas in Communications, 18(12), 2265-2544. doi:10.1109/49.898736

Chandra, S., & Ellis, C. S. (1999). JPEG Compression Metric as a Quality Aware Image Transcoding. In Second Usenix Symposium on Internet Technologies and Systems (USITS '99) (pp. 81-92). Boulder, CO.

Chang, C. Y., & Chen, M. S. (2002). Exploring Aggregate Effect with Weighted Transcoding Graphs for Efficient Cache Replacement in Transcoding Proxies. In Proceedings of the 18th IEEE International Conference on Data Engineering (ICDE) (pp. 383-392). San Jose, CA, USA.

El-Khatib, K., & Bochmann, G. v. (2003, December). Profiles in Content Adaptation. Technical report. University of Ottawa, Canada.

El-Khatib, K., Zhang, Z. E., Hadibi, N., & Bochmann, G. v. (2004). Personal and Service Mobility in Ubiquitous Computing Environments. Journal of Wireless Communications and Mobile Computing, 4(6), 595-607.

Fisher, B., Agelidis, G., Dill, J., Tan, P., Collaud, G., & Jones, C. (1997). CZWeb: Fish-Eye Views for Visualizing the World-Wide Web. In Proceedings of the 7th Int. Conf. on Human-Computer Interaction (HCI International '97) (pp. 719-722).

IEEE Personal Communications, 5(4), 10-19. doi:10.1109/98.709365

Fox, A., Gribble, S. D., Chawathe, Y., Brewer, E. A., & Gauthier, P. (1997). Cluster-Based Scalable Network Services. In Proceedings of the 16th ACM Symp. on Operating Systems Principles (pp. 78-91). Saint-Malo, France.

Ganek, A. G., & Corbi, T. A. (2003). The dawning of the autonomic computing era. IBM Systems Journal, 42(1), 5-18.

Guttman, E., Perkins, C., Veizades, J., & Day, M. (1999). Service Location Protocol, Version 2. http://ietf.org/rfc/rfc2608.txt

Hafid, A., & Bochmann, G. v. (1996). Quality of Service Negotiation in News-on-Demand Systems: an Implementation. In Proceedings of the Third International Workshop on Protocols for Multimedia Systems (pp. 299-315). Springer Berlin/Heidelberg.

Han, R., Bhagwat, P., LaMaire, R., Mummert, T., Perret, V., & Rubas, J. (1998). Dynamic adaptation in an image trans-coding proxy for mobile WWW browsing. IEEE Personal Communications, 5(6).

JINI network technology (TM) (1998). http://java.sun.com/product/JINI

Katchabaw, M., Lutfiyya, H., & Bauer, M. (1998). Driving resource management with application-level quality of service specifications (pp. 83-91). ACM Press.

Lei, Z., & Georganas, N. D. (2001). Context-based Media Adaptation in Pervasive Computing. In Proceedings of the Canadian Conference on Electrical and Computer Engineering (pp. 913-918). Toronto, Canada.

Lum, W. Y., & Lau, F. C. M. (2002). On Balancing Between Trans-coding Overhead and Spatial Consumption in Content Adaptation. Mobicom 2002, 239-250. Atlanta, USA.

Mohan, R., Smith, J. R., & Li, C. S. (1999). Adapting Multimedia Internet Content for Universal Access. IEEE Transactions on Multimedia, 1(1), 104-114. doi:10.1109/6046.748175

MPEG-21 (2001). International Standards Organisation. Information technology – multimedia framework (MPEG-21) – part 1: Vision, technologies and strategy. ISO/IEC 21000-1. Accessed on Jan 10, 2007.

MPEG-7. http://www.chiariglione.org/mpeg/standards/mpeg-7/mpeg-7.htm. Accessed on Jan 10, 2007.

Ng, C. W., Tan, P. Y., & Cheng, H. (2001). Quality of Service Extension to IRML. IETF Internet-Draft, 'draft-ng-opes-irmlqos-00.txt'.

Poellabauer, C., Abbasi, H., & Schwan, K. (2002). Cooperative run-time management of adaptive applications and distributed resources. In Proceedings of the Tenth ACM International Conference on Multimedia (pp. 402-411). ACM Press.

Richards, A., Rogers, G., Witana, V., & Antoniades, M. (1998). Mapping user level QoS from a single parameter. In Second IFIP/IEEE International Conference on Management of Multimedia Networks and Services (pp. 14-20). Versailles, France.

Smith, J. R., Mohan, R., & Li, C.-S. (1999). Scalable Multimedia Delivery for Pervasive Computing. ACM Multimedia (pp. 131-140). Orlando, Florida, United States.

WML: Wireless Markup Language (2001). Wireless Markup Language (WML) 2.0 Document Type Definition.

WSDL: Web Service Description Language (2002). http://www.w3.org/TR/wsdl

Wu, D., Hou, Y. T., & Zhang, Y. (2001). Scalable Video Coding and Transport over Broad-band Wireless Networks. Proceedings of the IEEE, 89(1), 6-20. doi:10.1109/5.904503

Additional Reading

Ahmed, I., Wei, X., Sun, Y., & Zhang, Y. Q. (2005). Video transcoding: an overview of various techniques and research issues. IEEE Transactions on Multimedia, 7(5), 793-804. doi:10.1109/TMM.2005.854472

Ardon, S. (n.d.). MARCH: a distributed content adaptation architecture. International Journal of Communication Systems, 16, 97-115.

Dey, A. K. (2001). Understanding and Using Context. Personal and Ubiquitous Computing, 5(1), 4-7. doi:10.1007/s007790170019

El Saddik, A., & Hossain, M. S. (2007). Multimedia Streaming for wireless communication. In B. Furht (Ed.), Encyclopedia of Wireless and Mobile Communications. CRC Press, Taylor & Francis Group.

El Saddik, A., & Hossain, M. S. (2006). Multimedia content repurposing. In B. Furht (Ed.), Encyclopedia of Multimedia. Berlin, Germany: Springer Verlag.

Han, J. R., et al. (1998, December). Dynamic adaptation in an image transcoding proxy for mobile WWW browsing. IEEE Personal Communications, 5(6).

Han, R., & Smith, J. R. (1999). Internet Transcoding for Universal Access. In J. Gibson (Ed.), Multimedia Communications Handbook.

Hossain, M. S., & El Saddik, A. (2008). A Biologically Inspired Multimedia Content Repurposing System in Heterogeneous Network Environments. ACM/Springer Multimedia Systems Journal, 14(3), 135-144.

Hossain, M. S., Alamri, A., & El Saddik, A. (2007). A framework for QoS-aware multimedia service selection for wireless clients. In Proceedings of the 3rd ACM Workshop on Wireless Multimedia Networking and Performance Modeling (WMuNeP 07), Chania, Crete Island, Greece, October 22.

Liang, Y., Chebil, F., & Islam, A. (2006). Compressed domain transcoding solutions for MPEG-4 visual simple profile and H.263 baseline videos in 3GPP services and applications. IEEE Transactions on Consumer Electronics, 52(2), 507-515. doi:10.1109/TCE.2006.1649672

Lum, W. Y., & Lau, F. C. M. (2002). On Balancing between Transcoding Overhead and Spatial Consumption in Content Adaptation. In Proceedings of MobiCom '02, Atlanta, Georgia, USA, Sep. 23-26 (pp. 239-250).

Maheshwari, A., Sharma, A., Ramamritham, K., & Shenoy, P. (2002). TransSquid: Transcoding and caching proxy for heterogeneous e-commerce environments. In Proceedings of the 12th IEEE Int. Workshop on Research Issues in Data Engineering, San Jose, California, USA, 26 Feb - 1 March, 2002 (pp. 50-59).

Mao, M., So, H. W., Kang, B., & Katz, R. H. (2001). Network support for mobile multimedia. In Proceedings of the 11th Intl. Workshop on Network and Operating System Support for Digital Audio and Video (NOSSDAV 2001), New York, USA.

Nahrstedt, K., & Balke, W. T. (2004). A taxonomy for multimedia service composition. In Proceedings of the 12th ACM Conf. on Multimedia (ACM MM 04), New York, NY, USA, 10-16 October 2004 (pp. 88-95).

Nahrstedt, K., & Balke, W. T. (2005). Towards building large scale multimedia systems and applications: Challenges and status. In Proceedings of the First ACM Intl. Workshop on Multimedia Service Composition, Hilton, Singapore (pp. 3-10).

Nguyen, V. A., & Tan, Y. P. (2005). Efficient video transcoding between H.263 and H.264/AVC standards. In Proceedings of the IEEE Intl. Symposium on Circuits and Systems (ISCAS '05), Kobe, Japan, May 23-26.

Richards, A., Rogers, G., Witana, V., & Antoniades, M. (1998, November). Mapping user level QoS from a single parameter. In 2nd IFIP/IEEE Intl. Conf. on Management of Multimedia Networks and Services, Versailles.

Shin, I., & Koh, K. (2004). Hybrid Transcoding for QoS Adaptive Video-on-Demand Services. IEEE Transactions on Consumer Electronics, 50(2).

Smith, J. R., Mohan, R., & Li, C. S. (1999). Scalable Multimedia Delivery for Pervasive Computing. In Proceedings of ACM Multimedia '99, Orlando, FL, USA, Oct. 30 - Nov. 5.

Vetro, A., Xin, J., & Sun, H. (2005). Error resilience video transcoding for wireless communications. IEEE Wireless Communications, 12(4), 14-21. doi:10.1109/MWC.2005.1497854

This work was previously published in Services and Business Computing Solutions with XML: Applications for Quality Management and Best Processes, edited by P. C. K. Hung, pp. 121-135, copyright 2009 by Information Science Reference (an imprint of IGI Global).


Chapter 7.15

Reconceptualising Information Literacy for the

This chapter questions whether the shift from the Web as a vehicle for storing and transmitting information to the new Web as a series of social networking environments requires significant changes in how students interact with information when they are studying within a formal learning environment. It explores the origins and growth of the idea of information skills development, the translation of this work into frameworks and sequential models and the adaptation of these models to take account of changes in information storage and transmission brought about by the Internet. The chapter then examines the changing contexts and changes in learning being brought about by the Web 2.0 environment and questions whether adjustment of existing information literacy models is a sufficient response to deal with these changes. We conclude that although Web 2.0 developments are not fundamentally undermining the nature of teaching and learning, they do provide important possibilities for more effective information literacy development work. A non-sequential framework is offered as a contribution to supporting HE students when seeking to obtain, store and exploit information simultaneously in the informal social world of Web 2.0 and in their formal academic discipline.

The Rise of Information Skills

In the early 1980s a spate of books appeared in the UK containing a new term in the title: 'information skills'. This term was the brainchild of a working

party concerned about school pupils’ competence

in “using libraries, exploring references and

making notes” (Marland, 1981, p7) and arose

out of the Schools Council’s desire to explore

what a curriculum for a changing world might

comprise The working party report asserted that

“Individuals today have an increasing need to be

able to find things out…never before have our lives

depended so much on our ability to handle

infor-mation successfully” (Marland, 1981, p9) Narrow

concerns about library skills and user education

were replaced by a focus on students’ problems

in finding and using information to tackle

assign-ments and conduct their research within a formal

learning environment This intervention was due

to the interest in these skills by educationalists,

who, working alongside librarians, ensured wider

adoption for information skills and a clearer place

for the concept within the learning process

However, despite this development and the

ap-pearance of a number of books exploring the place

of information skills in learning (see, for example,

Markless and Lincoln, 1986, and Wray, 1985)

the concept of information skills was far more

widely accepted by librarians than by teachers

This resulted in heavy emphasis on competence

in resource use and on finding information

Models of Information Skills

From the outset writers wanted to show the need for

students to develop these ‘new’ information skills

The issue was presented as one of skills deficit

and consequently led to a plethora of information

skills frameworks and models, spelling out what

students should be able to do (Many of these

models were later ‘rounded up’ and described by

Loertscher and Woolls, 2002.) Model constructors

conceived the requisite process as tying together

distinct elements of information-related behaviour

into a logical, sequential process which could then

be taught (e.g Marland, 1981; Brake, in Markless

and Lincoln 1986)

An important retrospective review of these models and frameworks (Eisenberg and Brown, 1992) concluded that

while each author may explain this process with different terms … all seem to agree on the overall scope and the general breakdown of the process

… it appears that the various process models are more alike than different and it may be possible and desirable to begin speaking about a common process approach to library and information skills instruction (p 7)

The approach to information skills as a 'common process' to be applied to library research and information handling unfortunately tended to result in a disregard for the context of learning. Skills were perceived as generic; the sequential process outlined in the models was to be adopted at all ages and across different subjects. The process formed a 'curriculum' to be taught to students and applied by them whenever necessary. This view was hardly challenged in the early world of information skills, although research on information behaviour in context and on critical thinking skills was calling into question the whole notion of easy transfer, which is also a well-established assumption in mainstream education (Perkins and Salomon, 1992).

Perhaps the most influential of these generic information skills models was advanced as the Big6. This model was created by Eisenberg and Berkowitz (1990); it was widely disseminated in book form and continues to be heavily promoted in the USA and internationally through their website and through an extensive programme of workshops. We will use this Big6 framework as the basis of our critique for the remainder of this chapter because it is one of the frameworks most widely used in USA and UK schools to support information skills teaching and because its authors were amongst the first to integrate ICT into information skills in a distinct and transparent manner.

The main elements of this model are outlined below:

1. Task Definition: (determine the purpose and need for information)
2. Information Seeking Strategies: (examining alternative approaches to acquiring the appropriate information to meet needs)
• Determine the range of possible resources
• Evaluate the different possible sources to determine priorities
3. Location and Access: (locating information sources and information within sources)
• Locate sources (intellectually and …
5. Synthesis: (integrating information drawn from a range of sources)
• Organize information from multiple …

Eisenberg and Berkowitz (1990)

It is not surprising that when the concept of information skills was new, and people sought to understand its scope, frameworks such as the Big6 were widely adopted. They provided a foundation on which to build learning activities and assessment. Would such frameworks survive intact into the 'information age' of ICT?

The Shift to Information Literacy: A Broader View?

With the advent of the Worldwide Web and the extensive accompanying investment in ICT in educational institutions of all kinds, concerns about students' ability to find and use information grew exponentially and a new vocabulary began to emerge in formal education - that of information literacy. The notion of information literacy developed in the USA in the 1980s in response to a move towards more active learning in universities and the concomitant need to move away from terms implying passive instruction (Martin, 2006). Use of the term expanded considerably in the 1990s (Bawden, 2001) and has gained some worldwide influence, leading to a declaration by UNESCO (2003) stressing the global importance of information literacy within the information society. A parallel growth in the UK has seen the term widely adopted in academic libraries and national educational bodies (but with most school libraries until now still preferring to focus on information skills - Streatfield and Markless, 2007).

Did the new term signify any fundamental change in thinking or signal a new characterisation of the skills or processes previously called information skills? National Information Literacy Standards in Australia (CAUL, 2001) and the USA (ACRL, 2000) echoed much of what was in the earlier process models, as did the information literacy model proposed in the UK by the Society of College, National and University Libraries (1999). Despite the fact that 'literacy' is a problematic and contested concept (it has been variously described as encompassing notions of functional competence and skills, of sets of wider cognitive abilities, and


as part of a contextualised approach to learning

in its social and economic context - Bowden,

2001), information literacy was usually reduced

in presentation to a series of skills, procedures and

technicalities This inhibited approach attracted

some criticism for being too mechanistic and

some writers moved towards a conceptualization

that includes attitudes, underpinning knowledge

and meta-cognitive abilities (Kuhlthau, 1988;

Bruce, 1997) Although Kuhlthau recognised the

importance of student attitudes and emotions in her

information process model, these elements have

not been integrated into other process models -

although the commentaries accompanying these

models usually refer in some way to motivation

and attitudes

Information Skills and the Internet

In this phase of its development, the Internet was

viewed primarily as a new information storage and

delivery system for which existing information

skills frameworks could simply be expanded or

adapted to take account of the growth in access

to information via the Internet Eisenberg and

Johnson (1996) exemplified this view when they

explicitly integrated ICT into the Big6 Skills

model, saying that

Students need to be able to use computers

flex-ibly, creatively and purposefully… (they) should

be able to use the computer as part of the process

of accomplishing their task (p 2)

During the 1990s, the creators of the Big6

confi-dently extended the model to include student use of

ICT when solving learning problems They claimed

that various computer and information technology

skills were integral parts of the Big6 Skills This

claim was sustained as their model continued to

be implemented in schools across the USA and

the UK (Eisenberg and Berkowitz, 2000). Adherents of this and other process models confidently asserted that the basic principles of information seeking and use, derived from years of watching and helping students to interact with print-based information, remained unchallenged.

We have chosen to exemplify current process models by citing the Big6 when looking at whether the concept of information literacy needs to be repackaged or reconceptualised because:

• the model crystallizes the general process approach favoured until now and serves as an adequate exemplar of the model-driven approach
• it serves our purpose because it was the only model advanced until recently that systematically encompasses the ICT dimension
• it is still currently being used and promoted in that form

The Big6 framework is useful for this purpose because it is a systematic and widely adopted model. Our comments should not be construed as an attack on this particular framework.

Examining the Process Models

What are the assumptions underpinning the Big6 and similar models and what are their main characteristics?

• A sequential view of the process of student research, conceived as a series of logical steps.
• Use of prescriptive language to convey an 'ideal approach' to information-seeking and use (e.g. "After students determine their priorities for information-seeking they must locate information from a variety of sources"; "once the information problem has been formulated, the student must consider all possible information sources and develop a plan for searching").


This approach is commonplace in this period, despite the warning offered a decade earlier by Tabberer and Altman (1986) about the danger of idealising study behaviour and promoting 'the right way to …'. They stressed that success came by diverse routes and as a result of different choices made in different situations. They warned that students did not always gain much by being confronted with 'the ideal' because there is a range of influences that prevent adoption of 'best behaviour'.

• The process models were designed to support information skills teaching (i.e. to provide a 'curriculum' for the teachers and a pathway to be followed by students when doing their research).

• A particular and limited conception of information-related behaviour is represented in these models, with much emphasis on information seeking, location and access. Use of information is reduced to determining relevance and extracting pertinent items of information (by taking notes or resorting to cut and paste). The words knowledge, understanding and making sense of seldom occur in these models, nor does the idea of creating one's own viewpoint. The apparent assumptions are that this shortcoming will be addressed in the subject teaching or that the acts of extracting and organising relevant information will themselves stimulate the construction of meaning. What happens instead is frequently cut and paste activity leading to more or less unintentional plagiarism. In these models, synthesis is not about transforming information to encapsulate new knowledge.
• Overall they present ways to support teaching ("innovative instructional methods") designed to provide a framework to guide teachers or librarians when preparing appropriate activities or tasks for their students.

These models reflected the main uses conceived for the Web in this period as a vehicle for storing and transmitting information.

Information Literacy and Web 2.0: Changing the Context, Changing the Learning?

The 'orthodoxy' of information skills within formal learning environments, as enshrined in the Big6 Model, is being increasingly challenged. Recent research into information literacy is moving away from technological processes and skills-based models, recognising the complexities inherent in finding and using information. A more experiential perspective that recognises the contextual and affective elements of information literacy is emerging (Williams and Wavell, 2007). Two complementary developments have influenced this shift in focus: greater interest amongst information literacy researchers and practitioners in the processes of learning (especially theory about variation in learning and constructivist approaches); and an electronic environment that is increasingly being shaped by its users.

Have traditional views of information literacy really been rendered obsolete? Does learning through Web 2.0 require different skills and abilities? Are a new range of cognitive and meta-cognitive strategies needed to learn effectively within the Web 2.0 environment? Or does the Web 2.0 environment provide tools that enable teachers to engage students more effectively in well-established learning processes than could be achieved hitherto?

In our view, learning is not fundamentally different within Web 2.0, nor does the 'new' social software change the basic processes of learning. Where Web 2.0 has made a difference is in making it easier to engage with some aspects of learning that were previously difficult to address (for example, real collaboration and groupwork, peer critique, hearing students' authentic voices and construction of new knowledge). None of

these important aspects of effective learning are

new: all can be found in the education literature

of the 20th Century, from Dewey to Ausubel, and

from Vygotsky to Marton However, despite their

importance, few of these elements have found

their way into information literacy models or

practice

When the Worldwide Web was primarily a

ve-hicle for storing and delivering information it was

easy to portray information literacy as an ordered

sequence of skills to be transmitted to students,

whilst ignoring other approaches to learning Web

2.0 effortlessly undermines this approach with its

disregard for authority, hierarchy and order and its

focus on the voice of the individual and on ever

changing constructed groups Any contemporary

approach to information literacy must consider

how to engage more effectively with learners,

by understanding these multiple aspects of how

they can learn

Before we examine in a little more detail

some of these key elements of learning and their

relationship to information literacy and social

software, we need to note two other factors that

may influence this relationship: the reluctance

of individuals and institutions to change; and the

ways in which the ‘Google generation’ of ‘digital

natives’ may interact with information and learn

in new and different ways What are the key

ele-ments of learning as they relate to information

literacy and social software? Some at least of

these key elements are:

1 Reluctance to change (institutions and

teachers)

Faced with the unfamiliar challenge of a new

world of social networking, some education

institutions have tended to react in a predictably

conservative way by blocking access to elements

such as Face book and Second Life As a result

of such embargos, as well as a reluctance by

teachers to engage with this new world, students

are frequently operating in different electronic environments during formal learning from those

in their out of hours experience (especially in schools) This makes teaching of information literacy more problematic

To somewhat over-dramatize the dilemmas created: as a teacher, how can you fully engage with students in helping them to exploit informa-tion if you don’t have easy access to what may constitute their major sources of information? Or, from a student perspective, why should you bother

to engage with all this ‘information literacy stuff’

if your perception is that all you have to do to get the information and help that you need, is to resort

to your social networks? When you are away from the institution, if you can effortlessly manipulate multi-media information to build your own web pages, why jump through what might be seen as sterile and irrelevant information literacy hoops when you are in formal learning mode? Again,

as the world of Web 2.0 becomes increasingly sophisticated, the version of ICT encountered

in formal learning is likely to appear ever more limited and pedestrian

2 Digital natives and others

“ Future students in higher education belong to

a generation that has grown up with a PC mouse in their hands, a TV remote control, a mobile phone,

an i-pod, a PDA and other electronic devices for communication and entertainment … computer games, the Internet, MSN, wikis and blogs being

an integral part of their lives” (Veen, 2007, p.1) Prensky has labelled these young people ‘digital natives’ and has asserted that they now exhibit different characteristics from their forbears (the digital immigrants) due to the extent of their exposure to technology in all its forms.(Prensky, 2001) He claims that changes in activity during development may result in different neural wiring via processes of ‘neuro-plasticity’; a view recently echoed by Martin Westwell of the Institute for the Future of the Mind (2007) Both advocates assert


that current students have much better visual skills,

do better at visual-spacial tests, are able to deal

with lots of information at once, and can process

this information and make decisions quickly On

the other hand, this generation of students may

have shorter attention spans, be easily distracted,

may not maintain focus well when interrupted and

may have less ability to reflect on topics than the

previous generation Veen (2007) adds to this list

of differences, talking about non-linear learning

behaviour; clicking and zapping to deal with

in-formation overload; using exploratory approaches

to new situations; and becoming experienced at

problem solving at a young age “We now have

a new generation with a very different blend of

cognitive skills than its predecessors – the digital

natives.” (Prensky, 2001)

As a result of Web 2.0 developments, we can

also anticipate that ‘digital natives’ may have

different social skills. This is because the Internet is increasingly used for socialisation rather than just information-seeking, with even those seeking information often doing so via peer groups.

Westwell claims that more people use Second

Life and Facebook than use Google Whether

or not we believe all these claims, Oblinger and

Oblinger (2005) have forecast that the next

gen-eration of students entering higher education will

be digitally literate, highly Internet-familiar,

con-nected via networked media, used to immediate

responses, and preferring experiential learning

This generation will be highly social: they will

prefer to work in teams and will crave

interactiv-ity in image-rich environments as distinct from

text-intensive environments

Where does this leave traditional

informa-tion literacy, with its focus on using libraries and

finding primary sources, its reliance on laborious

sequential steps and its scant reference to

collabo-ration or to multi-media resources? If Westwood

and others are correct, their picture of our ‘new’

students implies that not only have they gained

from their early digital experiences but they have

also lost in terms of opportunities for reflection and

‘slow-learning’ This picture of gains and losses calls into question the widespread claims that ele-ments of Web 2.0 (wikis etc.) automatically help

to develop meta-cognitive skills However, it is also interesting to note that traditional information literacy frameworks do not emphasise reflection and its role throughout learning

Web 2.0, Information Literacy and Formal Learning

Where do all these changes leave information literacy? How might traditional models of information literacy need to be altered to accommodate the experience and expectations of students within formal education? Where does Web 2.0 fit in?

• The sequential view of skills deployment is now being questioned. Learning tasks make a range of different demands on students, which call into question the notion of applying the same series of steps to meet all these demands. Observations of pupils from 5-18 in schools and students in further education colleges show that they seldom follow the prescribed sequence (Streatfield and Markless, 1994; Moore, 1997; Markless and Streatfield, 2000). Formal studies of information-seeking behaviour in universities again challenge this premise (Foster, 2006). To be fair, most of the process models that are set out in steps are accompanied by some form of caveat recognising or even advising that it is not necessary to follow the prescribed sequence. However, there is usually little help offered on how to use the model in a non-sequential way, with the result that the framework tends to be taught as a sequence. The desire to inflict sequences on students is remarkably resilient in the world of information literacy. Even writers who are responding to the Web 2.0 environment


tend to present a sequence of processes to

be learned in order to become ’information

fluent‘ (e.g the five-stage process of Jukes

(2007): asking questions; accessing data;

analysing and authenticating information;

applying it to real-life problems; assessing

product and process) This approach takes

no account of the influence of context on

any sequence, the influence of learners’

cognitive styles, or the need to make sense

of any information and transform it into

knowledge

In addition, a core characteristic of Web 2.0

tools is that they transfer power, ownership

and authority to the participants This

in-evitably gives people license to design their

own routes through learning tasks in any

way that suits them Finding information is

less likely to involve systematic information

seeking than, for example, interest groups,

peer web pages or social bookmarking

These observations lead to the key

ques-tion - can the Big6 or any similar

infor-mation literacy model be adapted to take

account of how students actually find and

use information, especially in the Web 2.0

environment?

Although the importance of learning as

construction is recognised within the

rhet-oric of information skills pedagogy and

“Information literacy is often seen as the

school library version of constructivism”

(Moore, 2005 p.3), much of the observed

planning and practice1 suggests heavy

reli-ance on transmission, learner practice, and

feedback, all heavily structured into

man-ageable segments and strongly ‘teacher’

controlled (that is, the classic behaviourist

approach) Early voices such as Kuhlthau’s

(1993), which present information-seeking

as a process of seeking meaning, were at

first largely ignored in practice In recent

years there have been intensified efforts

to ensure that people who are teaching

information literacy adopt constructivist approaches (e.g. Todd, 2001). Limberg (2007) asserts that to learn is not to receive knowledge and information, but is about changing the relationship between a person and the world. She claims that information-seeking is too often focussed on teaching technical procedures and on fact-finding rather than on students formulating authentic questions and constructing their own positions. The concept of authenticity is central to Limberg's ideas on information literacy. Contrived questions and tasks, designed solely to meet externally imposed assessment and with no other consequences for the student, will not engage and motivate students. Without a real and personal interest, students will be satisfied with the superficial answer, the first 'hit', or 'good enough' information. There is no incentive to go beyond using technical skills to collect facts.

Again, the latest outputs from the USA-based Center for International Scholarship in School Libraries (Kuhlthau and others, 2007) focus on the concept of 'guided inquiry' as the basis for teaching and learning of information skills. The main characteristics of guided inquiry are:

• active engagement by students in the learning process
• students building on what they already know
• high levels of reflection
• a recognition of the importance of social interaction and of students' different ways of learning

(Kuhlthau and Todd 2007)

All these are recognisable characteristics of learning as construction (see, for example, Papert and Harel, 1991) There is little doubt that con-structivist approaches are particularly suited to


Web 2.0 tools In this environment, students can

construct artefacts such as video presentations,

blog entries and wiki pages both individually

and collaboratively Teachers can join in with

collaborative editing and can scaffold students’

work It seems likely that the constructivist

ap-proach to teaching and learning so well supported

by Web 2.0 tools may finally lead to information

literacy teaching becoming more attuned to how

students learn

If constructivist principles are used to inform and guide information literacy work, students will be required to develop a repertoire of strategies that are conspicuously absent from most information literacy models. This will involve:

• reflection: the ability to reflect constructively and to use that reflection in planning for their own development
• evaluation of the processes undertaken as well as of the products of their study
• making sense (deep understanding) of the information that they obtain, linked to the ability to transform the information to reflect their own emerging views

We do not think that these aspects of learning

can simply be grafted onto existing frameworks

or inserted after any particular element of a linear,

sequential model They are part of an iterative

process of learning not well represented in

exist-ing information literacy frameworks

The Importance of Context

The importance of context in relation to

informa-tion behaviour is well established (e.g Streatfield

and Wilson, 1980; Dervin, 1992; Ingwersen and

Jarvelin, 2005) Context in information-related

behaviour is recognised as multi-dimensional:

with different facets reflecting features of the task;

characteristics of the learner; and features of the

system. Louise Limberg observed in a conference presentation that "Influential studies have abandoned the idea of information literacy as a set of generic skills applied anywhere. Information literacy is not generic but should be seen as social practice …" (Limberg, 2007). Looking at secondary schools, Williams and Wavell (2007) warned that if we are trying to develop pupils' information literacy we cannot ignore content in favour of technicalities and procedures - if we do so, we will get trivial learning outcomes. Nevertheless, as we have already noted, information literacy advocates have persisted in offering generic skills development frameworks that take little or no account of context.

How can the importance of context be reflected

in an information literacy framework? We believe that a different type of framework is needed; one that moves away from offering a list of abilities

to be taught or applied in an unvarying sequence, irrespective of context

Alongside the challenge of producing an appropriate information literacy framework we face another problem: how can we teach information literacy in ways that respect the influence of context? Current views on skills development (e.g. Luke, 2006; Williams and Wavell, 2006) assert that if students are to develop their information-related skills through assignments there is a need for:

• Authentic tasks that are recognised as relevant by the students (tasks that have meaning to students on a personal or academic level; not contrived to allow them to practice particular skills)
• Immersion in authentic contexts (realistic environments, current information drawn from the real world, engagement with real world problems and concerns)
• High quality tasks related to current academic work (e.g. asking students to conduct critical evaluation of sources to construct a position for an essay, rather than offering general guidance on evaluating information)
• Learning embedded in the relationships, values and discourse of the learning community (inherently social)
• Timely teacher interventions in order to move learners on at transition points in their work

Web 2.0 can once again be a powerful

sup-port for increasing authenticity and enabling the

deployment of information literacy strategies in

a variety of meaningful contexts The

possibil-ity of a public platform for their work may help

students to take more seriously the underlying

information literacy processes involved in

pro-ducing that work

Student Reflection

If we are to take context into account when

deciding on information literacy strategies, this

immediately introduces the concept of variation

Bowden and Marton (1998) argued that not only

do students need to experience variation in order

to learn, but they must also explore variation by

comparing and analysing their experiences To

do this, students need to:

• actively engage in discussion and reflection about finding and using information in order to uncover variation in their …

Since at least the 1970s, reflection has been

seen as a mainstay of learning and this concept

has found its way into many models of learning

(e.g Kolb, 1975; Schon, 1983)

Reflection is a particularly important element in developing the processes underpinning learning and is therefore potentially important in any systematic approach to information literacy. Reflection is taken for granted in most models of information literacy or placed at the very end of the process. This approach is not likely to enable the development of the meta-cognitive strategies necessary to perform problem-solving with information. It is likely to be difficult to integrate reflection into existing information literacy frameworks in any meaningful way (see the discussion about constructivism above). The possibilities for learning provided by Web 2.0 may provide a way forward. For example, peer critique and the collaborative production of artefacts may automatically stimulate reflection. If not, engagement in these processes should provide opportunities for

a more formal emphasis on reflection as part of information literacy teaching

Collaborative Learning

Collaborative learning has long been seen as a desirable process: for example, groupwork is

a key element of training courses for teachers

in all sectors Web 2.0 tools have turned many students into sophisticated social networkers via YouTube, Facebook, blogs and discussion boards (Ipsos MORI, 2007) The same tools can also be used to facilitate collaboration in formal learning settings, whether the focus is on creating specific interest groups, building learning communities or enabling the collaborative production and editing

of artefacts

Collaborative learning requires many skills

of communication and interaction, but does it make fundamentally different information literacy demands on learners than those made when in-dividually finding and using information? There

is little in recent research to indicate that this is the case (Williams and Wavell, 2007; Kuhlthau, 2007) The influence of context (subject, learner characteristics and teacher expectations) is not just about whether students are working individually or


in groups to find and use information At the same

time, Web 2.0 can be seen as working counter to

collaboration through increased personalisation

of learning paths Overall, this aspect of Web 2.0

raises important issues in the wider context of

ap-proaches to learning by providing increased scope

for a variety of activities It may offer valuable

avenues for the teaching of information literacy

but does not seem to fundamentally affect the

information handling skills required

Learners' Expectations of Information

Web 2.0 inevitably raises questions of ownership

and authority of information It is an environment

in the course of creation by its participants These

participants individually and collaboratively

gen-erate content in a form, format and structure that

best suits their own needs and preferences This

process works well when the primary focus is on

participation in social networks or developing

personal interests However, it can create major

difficulties when the same processes are applied

in formal learning Keen (2007) claims that we are

diving headlong into an age of mass mediocrity

because of the absence of gatekeeper expertise

and the increase in user-created content This

view is echoed by Gorman in his Britannica Blog

(2007) which identifies an erosion of traditional

respect for authenticity and expertise in a world

in which everyone is an expert “ignorant of the

knowledge they will never acquire and the rich

world of learning that search engines cannot

cur-rently deliver to them.”

Most students should be able to operate both

in the social world of web 2.0 and in more

for-mal learning environments (even before we take

account of the growing presence of academic

interests and institutions on Web 2.0) However,

to operate effectively in formal learning

envi-ronments, student autonomy may have to give

way to recognised academic authority Students’

preferred use of Wikipedia and social bookmarking, alongside their facility in creating new 'knowledge' through remixing text, image and audio, or through the collaborative creation and editing of web pages, may come into conflict with the necessity to conform to academic norms of using externally-validated information. Students will not be able to simply replicate their social/leisure on-line behaviour when engaging in formal academic tasks. Information literacy should help in this arena: traditional information literacy models do focus on evaluating sources of information, on considering authority and credibility. Such an emphasis should raise students' awareness of the problems associated with following their own preferences and concentrating on their own perspectives. A new balance may need to be drawn between encouraging students to use the range of pathways to information that are open

to them in Web 2.0 and ensuring that they have the ability to choose the most appropriate for academic study

However, do we also need to respond more positively to students’ expectations of informa-tion? Should the information literacy field legiti-mise elements of students’ preferred information-related behaviour? For example, should we ensure that information literacy frameworks encompass such concepts as ‘good enough’ information, trial and error, and peer ‘expertise’ rather than focusing primarily on a set of competencies that appear to

be designed to turn all learners into systematic researchers, regardless of the task context?

Does Finding Information Really Matter Any More?

One question likely to worry traditional information literacy proponents is whether there will be a continuing need for skills in information seeking, given an information world in which search engines are becoming increasingly sophisticated and in which Web 2.0 offers a range of enticing alternatives to systematic searching. According to Carol Kuhlthau (2007), what is important in the 21st century is the ability to use information for problem-solving, not "the technology of finding."

IS A NEW MODEL OF INFORMATION LITERACY NEEDED TO MEET THE CHALLENGE OF WEB 2.0?

We are not convinced that the Web 2.0 environment on its own necessitates the development of new sets of abilities for finding and using information. It does, however, move learning in new directions (e.g. increased collaboration, more authentic tasks, peer critique, non-linear approaches to information). In doing so, learning with Web 2.0 tools should put increasing pressure on proponents of information literacy to move in the direction of well-recognised learning principles and practices. In particular, information literacy can be enhanced in a formal learning environment by exploiting some possibilities offered through Web 2.0, for example:

• Using social bookmarking and folksonomies
• Organising information in new ways (e.g. using tagging)
• Increasing authenticity of work by presenting ideas to others in a more public space and using a wider range of media
• Providing 'just-in-time' scaffolding to support students
• Facilitating student reflection using records of individual and group processes and providing virtual contemplative spaces

None of these aspirations are new to formal education but some have been difficult to achieve hitherto without the benefits of advances in Web 2.0.

If the information literacy community is prepared to design materials, activities and support mechanisms based on the opportunities offered by Web 2.0, can they adapt existing information literacy frameworks to scaffold their work? Is a framework needed at all to enable information literacy development in formal education settings?

Any model or framework will be flawed because it cannot fully take account of the influence of context on information use or the problems inherent in producing any generic view of information literacy. However, whilst doing research and development work in many further and higher education institutions and schools, we have found that staff and students want to put some sort of framework in place. They want a public statement that clarifies what is encompassed by information literacy; a guide to support curriculum planning; and something that students can refer to when doing research and tackling academic tasks.

The following framework (Markless and Streatfield, 2007) was originally designed to address problems being encountered by the University of Hertfordshire. The University was trying to develop an institution-wide approach to supporting students when finding and using information in an electronic environment. At first it was thought that an existing framework could be used or adapted to meet the needs of staff and students. However, consideration of the issues explored in this chapter made the shortcomings of such an approach apparent. We concluded that many of the traditional information literacy models had been built on a series of assumptions about learning and information behaviour that were problematic, and that the increasing use of Web 2.0 threw these assumptions into stark relief. We therefore needed to offer a different solution that is more in keeping with the changing learning environment.
