Fig. 16 The path of the local avatar Q (thicker line) and the path of the non-local avatar P (thinner line) rendered on Q's local machine, zoomed into the last 60 s
chronization (AS). Using AS, each host advances in time asynchronously from the other players but enters the lockstep mode when interaction occurs. When entering the lockstep mode, in every timeframe t each involved player must wait for all packets from the other players before advancing to timeframe t + 1. Because this is a stop-and-wait protocol, extrapolation cannot be used to smooth out any delay caused by network latency.
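The stop-and-wait rule above can be sketched as follows. This is a minimal illustration with an invented in-memory message format, not the wire protocol of the cited papers:

```python
# Sketch of the lockstep rule: a peer may only advance from timeframe t to
# t + 1 once it holds a move from every other peer for frame t.

class LockstepPeer:
    def __init__(self, peer_ids):
        self.peer_ids = set(peer_ids)   # everyone we must hear from
        self.frame = 0                  # current timeframe t
        self.inbox = {}                 # frame -> {peer_id: move}

    def receive(self, peer_id, frame, move):
        self.inbox.setdefault(frame, {})[peer_id] = move

    def try_advance(self):
        """Advance t -> t + 1 only when moves from ALL peers for frame t
        have arrived; otherwise stall (no extrapolation is allowed)."""
        moves = self.inbox.get(self.frame, {})
        if self.peer_ids <= moves.keys():
            self.frame += 1
            return True
        return False
```

The sketch makes the cost visible: a single slow or silent peer prevents `try_advance` from ever succeeding, which is exactly why lockstep stalls the whole game under latency.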
In [12], the authors improve the performance of the lockstep protocol by adding pipelines. Extrapolation is still not allowed under the pipelined lockstep protocol; therefore, if network latency increases and packets are delayed, the game will stall.
In [10], the authors propose a sliding pipeline protocol that dynamically adjusts the pipeline depth to reflect current network conditions. The authors also introduce a send buffer to hold the commands generated while the size of the pipeline is adjusted. The sliding pipeline protocol allows extrapolation to smooth out jitters. Although these protocols are designed to defend against the suppress-correct cheat, they can also prevent speed-hacks when entering the lockstep mode, because players are forced to synchronize within a bounded number of timeframes. However, a speed-hack can still be effective when the lockstep mode is not activated. And since these protocols do not allow packets to be dropped, any lost packet must be retransmitted until it is finally sent and acknowledged. Therefore, the minimum timeframe of the game cannot be shorter than the maximum latency of the player with the slowest connection, and all clients must run the game at a speed that even the slowest client can support. Furthermore, any sudden increase in latency will cause jitter for all players.
Our protocol does not impose any lockstep requirement on game clients, while the advantage of loose synchronization in the conventional dead-reckoning protocol is completely preserved. Thus, smooth gameplay can be ensured. As we have proved in Section "Proof of Invulnerability", a cheater can only cheat by generating malicious timestamps, and this can be detected easily and immediately. Therefore, the speed-hack invulnerability of our protocol is enforced throughout the whole game session, so that any act of cheating can be detected immediately.
Moreover, the AS protocol requires a game client to enter the lockstep mode when interaction occurs, which requires a major modification of the client code. In contrast, existing games can be modified easily to adopt our proposed protocol. One can simply add a plugin routine to convert a dead-reckoning vector to the synchronization parameters before sending out the update packets, and add another plugin routine to convert the synchronization parameters back to a dead-reckoning vector on receiving the packets.
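Such a plugin pair might look like the following sketch. The actual synchronization parameters of the proposed protocol are not given in this excerpt, so the field names and the wrapping format here are assumptions for illustration only:

```python
# Hypothetical outgoing/incoming plugin pair around an existing
# dead-reckoning game client. We assume the sync parameters carry the same
# fields as a dead-reckoning vector: timestamp, position, velocity.

from dataclasses import dataclass

@dataclass
class DRVector:
    t: float        # local timestamp of the update
    pos: tuple      # (x, y) position
    vel: tuple      # (vx, vy) velocity

def dr_to_sync_params(dr: DRVector) -> dict:
    """Outgoing plugin: wrap a dead-reckoning vector as sync parameters
    just before the update packet is sent."""
    return {"timestamp": dr.t, "position": dr.pos, "velocity": dr.vel}

def sync_params_to_dr(params: dict) -> DRVector:
    """Incoming plugin: recover the dead-reckoning vector on receipt."""
    return DRVector(params["timestamp"], params["position"],
                    params["velocity"])

def extrapolate(dr: DRVector, now: float) -> tuple:
    """Standard dead reckoning: predict the remote position at time `now`,
    so gameplay stays smooth between updates."""
    dt = now - dr.t
    return (dr.pos[0] + dr.vel[0] * dt, dr.pos[1] + dr.vel[1] * dt)
```

Because the conversion is a pure wrapper at the packet boundary, the rest of the client's dead-reckoning logic (including extrapolation) is untouched, which is the point the text makes about easy adoption.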
The NEO protocol [13] is based on [2]; the authors describe five forms of cheating and claim that the NEO protocol can prevent all of them.
In [17], the authors show that of the five forms of cheating that [13] is designed to prevent, NEO prevents only three. They propose another protocol, Secure Event Agreement (SEA), that prevents all five forms of cheating and whose performance is at worst equal to NEO and in some cases better.
In [19], the authors show that both NEO and SEA suffer from the undo cheat. A cheating player PC performs the undo cheat as follows: both players send their encrypted game moves and then reveal the keys to decrypt them. If PC finds that its move MC is poor against the opponent's move MH, PC will purposely drop its key KC and thereby undo its committed move. The same authors propose a referee anti-cheat scheme for P2P games called RACS, which relies on the existence of a trusted referee. The referee is responsible for (T1) receiving player updates, (T2) simulating game play, (T3) validating and resolving conflicts in the simulation, (T4) disseminating updates to clients, and (T5) storing the current game state.
The referee used in RACS works much like a traditional game server in a conventional client-server architecture. The security of RACS depends completely on the referee. For example, speed-hacks can be prevented by having the referee validate every state update. Although RACS is more scalable than the client-server architecture, it suffers from the same problem in that the involvement of a trusted third party is required.
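A minimal sketch of the referee's duties (T1 receive, T3 validate, T4 disseminate, T5 store) might look like this. The speed bound and coordinate scheme are invented for illustration; RACS itself does not specify them in this excerpt:

```python
# Toy RACS-style referee: it stores authoritative state (T5), receives
# updates (T1), validates them against a movement bound (T3), and returns
# accepted updates for dissemination to the other clients (T4).

MAX_SPEED = 10.0  # game units per second (assumed game rule, not from RACS)

class Referee:
    def __init__(self):
        self.state = {}   # T5: current game state, player -> (t, pos)

    def handle_update(self, player, t, pos):
        """T1 receive, T3 validate, T5 store. Returns the update to be
        broadcast (T4), or None if it is rejected as impossible."""
        if player in self.state:
            t_prev, pos_prev = self.state[player]
            dt = t - t_prev
            dist = ((pos[0] - pos_prev[0]) ** 2 +
                    (pos[1] - pos_prev[1]) ** 2) ** 0.5
            if dt <= 0 or dist > MAX_SPEED * dt:
                # Movement faster than physically possible: a speed-hack
                # (or a stale/reordered packet) -- reject the update.
                return None
        self.state[player] = (t, pos)
        return (player, t, pos)
```

This is the sense in which "the security of RACS completely depends on the referee": every claimed movement passes through one trusted validation point.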
is authorized by the server (in the client-server architecture) or among all peers (in the P2P architecture). We have used various examples to illustrate our protocol and proved the security features of our proposal. We have carried out simulations to demonstrate the feasibility of our protocol.
3. Counter Hack (2007) Types of hacks. http://wiki.counter-hack.net/CategoryGeneralInfo
4. DeLap M et al (2004) Is runtime verification applicable to cheat detection? In: Proceedings of NetGames 2004, Portland, August 2004, pp 134–138
5. Diot C, Gautier L (1999) A distributed architecture for multiplayer interactive applications on the Internet. IEEE Network Magazine, Jul–Aug 1999
6. Diot C, Gautier L, Kurose J (1999) End-to-end transmission control mechanisms for multiparty interactive applications on the Internet. In: Proceedings of IEEE INFOCOM. IEEE, Piscataway
7. Even Balance (2007) Official PunkBuster website. http://www.evenbalance.com
8. Feng WC, Feng WC, Chang F, Walpole J (2005) A traffic characterization of popular online games. IEEE/ACM Trans Netw 13(3):488–500
9. Gautier L, Diot C (1998) Design and evaluation of MiMaze, a multiplayer game on the Internet. In: Proceedings of IEEE Multimedia (ICMCS'98). IEEE, Piscataway
10. Jamin S, Cronin E, Filstrup B (2003) Cheat-proofing dead reckoned multiplayer games (extended abstract). In: Proceedings of the 2nd international conference on application and development of computer games, Hong Kong, 6–7 January 2003
11. Lee FW, Li L, Lau R (2006) A trajectory-preserving synchronization method for collaborative visualization. IEEE Trans Vis Comput Graph 12:989–996 (special issue on IEEE Visualization'06)
12. Lenker S, Lee H, Kozlowski E, Jamin S (2002) Synchronization and cheat-proofing protocol for real-time multiplayer games. In: International Workshop on Entertainment Computing, Makuhari, May 2002
13. Lo V, GauthierDickey C, Zappala D, Marr J (2004) Low latency and cheat-proof event ordering for peer-to-peer games. In: ACM NOSSDAV'04, Kinsale, June 2004
14. Mills DL (1992) Network time protocol (version 3) specification, implementation and analysis. RFC 1305, March 1992
15. MPC Forums (2007) Multi-Player Cheats. http://www.mpcforum.com
16. Pantel L, Wolf L (2002) On the impact of delay on real-time multiplayer games. In: ACM NOSSDAV'02, Miami Beach, May 2002
17. Schachte P, Corman AB, Douglas S, Teague V (2006) A secure event agreement (SEA) protocol for peer-to-peer games. In: Proceedings of ARES'06, Vienna, 20–22 April 2006, pp 34–41
18. Simpson ZB (2008) A stream-based time synchronization technique for networked computer games. http://www.mine-control.com/zack/timesync/timesync.html
19. Soh S, Webb S, Lau W (2007) RACS: a referee anti-cheat scheme for P2P gaming. In: Proceedings of NOSSDAV'07, Urbana-Champaign, 4–5 June 2007, pp 34–42
20. The Z Project (2007) Official HLGuard website. http://www.thezproject.org
21. Wikipedia (2007) Category: Anti-cheat software. http://en.wikipedia.org/wiki/Category:Anti-cheat_software

Collaborative Movie Annotation
Damon Daylamani Zad and Harry Agius
Introduction
Web 2.0 has enjoyed great success over the past few years by providing users with a rich application experience through the reuse and amalgamation of different Web services. For example, YouTube integrates video streaming and forum technologies with Ajax to support video-based communities. Online communities and social networks such as these lie at the heart of Web 2.0. However, while the use of Web 2.0 to support collaboration is becoming common in areas such as online learning [1], operating systems coding [2], e-government [3], and filtering [4], there has been very little research into the use of Web 2.0 to support multimedia-based collaboration, where users undertake multimedia content-based activities collaboratively, such as content analysis, semantic content classification, annotation, and so forth. At the same time, spurred on by falling resource costs, which have reduced limits on how much content users can upload, online communities and social networking sites have grown rapidly in popularity, and with this growth has come an increase in the production and sharing of multimedia content between members of the community, particularly users' self-created content, such as song recordings, home movies, and photos. This makes it even more imperative to understand user behaviour.
In this paper, we focus on metadata for self-created movies like those found on YouTube and Google Video, the duration of which is increasing in line with falling upload restrictions. While simple tags may have been sufficient for most purposes for traditionally very short video footage that contains a relatively small amount of semantic content, this is not the case for movies of longer duration, which embody more intricate semantics. Creating metadata is a time-consuming process that takes a great deal of individual effort; however, this effort can be greatly reduced by harnessing the power of Web 2.0 communities to create, update and maintain it.
D.D. Zad and H. Agius
School of Information Systems, Computing and Mathematics, Brunel University,
Uxbridge, Middlesex, UK
e-mail: damon.zad@brunel.ac.uk; harryagius@acm.org
B. Furht (ed.), Handbook of Multimedia for Digital Entertainment and Arts,
DOI 10.1007/978-0-387-89024-1_12, © Springer Science+Business Media, LLC 2009
Consequently, we consider the annotation of movies within Web 2.0 environments, such that users create and share that metadata collaboratively, and propose an architecture for collaborative movie annotation. This architecture arises from the results of an empirical experiment where metadata creation tools, YouTube and an MPEG-7 modelling tool, were used by users to create movie metadata. The next section discusses related work in the areas of collaborative retrieval and tagging. Then, we describe the experiments that were undertaken on a sample of 50 users. Next, the results are presented, which provide some insight into how users interact with existing tools and systems for annotating movies. Based on these results, the paper then develops an architecture for collaborative movie annotation.
Collaborative Retrieval and Tagging
We now consider research in collaborative retrieval and tagging within three areas: research that centres on a community-based approach to data retrieval or data ranking, collaborative tagging of non-video files, and collaborative tagging of videos. The research in each of these areas attempts to simplify and reduce the size of a vast problem by using collaboration among members of a community. This idea lies at the heart of the architecture presented in this paper.
Collaborative Retrieval
Retrieval is a core focus of contemporary systems, particularly Web-based multimedia systems. To improve retrieval results, a body of research has focused on adopting the collaborative approach of social networks. One area in which collaboration has proven beneficial is that of reputation-based retrieval, where retrieval results are weighted according to the reputation of the sources. One such approach performs retrieval using an agent reputation model that is based on social network analysis methods. Sub-group analysis is conducted for better support of collaborative ranking and community-based search. In social network analysis, relational data is represented using 'sociograms' (directed and weighted graphs), where each participant is represented as a node and each relation is represented as an edge. The value of a node represents an importance factor that forms the corresponding participant's reputation. Peers who have higher reputations should affect other peers' reputations to a greater extent; therefore, the quality of data retrieval of each peer database can be significantly different. The quality of the data stored in them can also be different. Therefore, the returned results are weighted according to the reputations of the sources. Communities of peers are created through clustering.
Koru [6] is a search engine that exploits Web 2.0 collaboration in order to provide knowledge bases automatically, by replacing professional experts with thousands or even millions of amateur contributors. One example is Wikipedia, which can be directly exploited to provide manually-defined yet inexpensive knowledge bases, specifically tailored to expose the topics, terminology and semantics of individual document collections. Koru is evaluated according to how well it assists real users in performing realistic and practical information retrieval tasks.
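The reputation-weighted ranking idea described above can be sketched with a PageRank-style iteration over a sociogram. The damping factor and fixed iteration count are our assumptions, not details from the cited work:

```python
# A participant's reputation is accumulated from its endorsers' reputations
# over the weighted, directed edges of a sociogram, then used to re-rank
# retrieval results by source.

def reputations(edges, nodes, iters=50, d=0.85):
    """edges: list of (src, dst, weight) relations; returns node -> score."""
    rep = {n: 1.0 / len(nodes) for n in nodes}
    # Total outgoing weight per node (avoid division by zero for sinks).
    out_w = {n: sum(w for s, _, w in edges if s == n) or 1.0 for n in nodes}
    for _ in range(iters):
        nxt = {n: (1 - d) / len(nodes) for n in nodes}
        for s, t, w in edges:
            # High-reputation endorsers contribute more, scaled by edge weight.
            nxt[t] += d * rep[s] * w / out_w[s]
        rep = nxt
    return rep

def weight_results(results, rep):
    """Re-rank retrieval results by the reputation of their source peer."""
    return sorted(results, key=lambda r: r["score"] * rep[r["source"]],
                  reverse=True)
```

With equal raw relevance scores, a result from a well-endorsed peer outranks one from a peer nobody endorses, which is the behaviour the text describes.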
Another proposal is a framework for collaborative filtering that circumvents the problems of traditional memory-based and model-based approaches by applying orthogonal nonnegative matrix tri-factorization (ONMTF). The algorithm first applies ONMTF to simultaneously cluster the rows and columns of the user-item matrix, and then adopts the user-based and item-based clustering approaches respectively to attain individual predictions for an unknown test rating. Finally, these ratings are fused with a linear combination. Simultaneously clustering users and items improves on the scalability problem of such systems, while fusing user-based and item-based approaches can improve performance further. As another example, Yang and Li [8] propose a collaborative filtering approach based on heuristically formulated inferences. This is based on the fact that any two users may have some common interest genres as well as different ones. Their approach introduces a more reasonable similarity measure metric, considers users' preferences and rating patterns, and promotes rational individual prediction, thus more comprehensively measuring the relevance between user and item. Their results demonstrate that the proposed approach improves prediction quality significantly over several other popular methods.
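A toy version of the fused prediction might look like this. The clustering step and neighbourhood similarity weighting of the real algorithm are omitted for brevity, and the mixing weight `lam` is a free parameter of our sketch:

```python
# Linear fusion of a user-based and an item-based estimate for an unknown
# rating, illustrating only the final combination step described above.

import numpy as np

def predict(R, u, i, lam=0.5):
    """R: user-item matrix with 0 for unknown ratings; predict R[u, i]."""
    users_who_rated = R[:, i] > 0       # users with a rating for item i
    items_rated_by_u = R[u, :] > 0      # items user u has rated
    # User-based estimate: mean rating given to item i by other users.
    p_user = R[users_who_rated, i].mean() if users_who_rated.any() else 0.0
    # Item-based estimate: mean rating user u gave to other items.
    p_item = R[u, items_rated_by_u].mean() if items_rated_by_u.any() else 0.0
    # Fuse the two estimates with a linear combination.
    return lam * p_user + (1 - lam) * p_item
```

The point of the fusion is that the two estimates fail in different situations (cold items vs. sparse users), so a weighted blend is more robust than either alone.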
Collaborative Tagging of Non-Video Media
Collaborative tagging has been used to create metadata and semantics for different media. In this section, we review some examples of research concerning collaborative tagging of non-video media. SweetWiki [9] revisits the design rationale of wikis, taking into account the wealth of new Web standards available, such as for the wiki page format (XHTML), the macros included in pages (JSPX/XML tags), the semantic annotations (RDFa, RDF), and the ontologies it manipulates (OWL Lite). SweetWiki improves access to information with faceted navigation, enhanced search tools and awareness capabilities, and acquaintance network identification. It also provides a single WYSIWYG editor for both metadata and content editing, with assisted annotation tools (auto-completion and checkers for embedded queries or annotations). SweetWiki allows metadata to be extracted and exploited externally.
There is a growing body of research regarding the collaborative tagging of photos. An important impetus for this is the popularity of photo-sharing sites such as Flickr. Flickr groups are increasingly used to facilitate the explicit definition of communities sharing common interests, which translates into large amounts of content (e.g. pictures and associated tags) about specific subjects [10]. The users of Flickr have created a vast amount of metadata on pictures and photos. This large number of images has been carefully annotated for the obvious reason that they were accessible to all users, and the collaboration of these users has therefore produced an amount of metadata that would not be achievable without such collaboration. ZoneTag [11] is a prototype mobile application that uploads camera-phone photos to Flickr and assists users with context-based tag suggestions derived from multiple sources. A key source of suggestions is the collaborative tagging activity on Flickr, based on the user's own tagging history and the tags associated with the location of the user. Combining these two sources, a prioritised suggested tag list is generated. Several heuristics are used that take into account the tags' social and temporal context, along with measures that weight the tag frequency, to create a final score. These heuristics draw on spatial, social and temporal characteristics: all tags used in a certain location are gathered, regardless of the exact location; tags the users themselves applied in a given context are more likely to apply to their current photo than tags used by others; and, finally, tags are more likely to apply to photos taken close in time to when those tags were last used. Other work presents an annotation service for conference photos, which exploits sharing and collaborative tagging through RDF (Resource Description Framework) to gain advantages like unrestricted aggregation and ontology re-use. Finally, Bentley et al. [13] performed two separate experiments: one asking users to socially share and tag their personal photos, and one asking users to share and tag their purchased music. They discovered multiple similarities between the two in terms of how users interacted with and annotated the media, which have implications for the design of future music and photo applications.
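The ZoneTag-style combination of spatial, social and temporal evidence could be sketched as follows. The weights and the exponential-decay recency model are invented, since the paper describes the heuristics only informally:

```python
# Toy tag-suggestion scorer: spatial evidence (tags used near this
# location), social evidence (the user's own tagging history) and temporal
# evidence (how recently the user applied each tag) are blended into one
# score per tag, and tags are returned best-first.

def score_tags(location_tags, own_history, now, w_spatial=1.0,
               w_social=2.0, half_life=7 * 24 * 3600.0):
    """location_tags: tag -> count of uses at this location (any user).
    own_history: tag -> last time (epoch seconds) this user applied it.
    Returns tag names sorted by descending combined score."""
    scores = {}
    for tag, count in location_tags.items():
        scores[tag] = w_spatial * count              # spatial heuristic
    for tag, last_used in own_history.items():
        age = now - last_used
        recency = 0.5 ** (age / half_life)           # temporal decay
        # Social heuristic: the user's own tags get an extra, recency-
        # weighted boost over tags used only by others.
        scores[tag] = scores.get(tag, 0.0) + w_social * recency
    return sorted(scores, key=scores.get, reverse=True)
```

The ranking reproduces the stated intuitions: a tag the user applied recently outranks an equally popular nearby tag used only by strangers.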
Collaborative Tagging of Video Media
We now review some examples of research concerning collaborative tagging of video media. Yamamoto et al. [14] propose an approach to video annotation based on social activities associated with the content of video clips on the Web. This approach has been demonstrated through assisting users of online forums to associate video scenes with user comments, and through assisting users of Weblog communications to generate entries that quote video scenes. The system extracts deep-content-related information about video contents as annotations automatically, allowing users to view any video, submit and view comments about any scene, and edit a Weblog entry to quote scenes using an ordinary Web browser. These user comments and the links between comments and video scenes are stored in annotation databases. An annotation analysis block produces tags from the accumulated annotations, while an application block provides a tag-based scene-retrieval system.
IBM's Efficient Video Annotation (EVA) system [15] is a server-based tool for semantic concept annotation of large video and image collections, optimised for collaborative annotation. It includes features such as workload sharing and support in conducting inter-annotator analysis. Aggregate-level user data may be collected during annotation, such as time spent on each page, number and size of thumbnails, and statistics about the usage of keyboard and mouse. EVA returns visual feedback on the annotation: annotation progress is displayed for the given concept during annotation, and overall progress is displayed on the start page.
Ulges et al. [16] present a system that automatically tags videos by detecting high-level semantic concepts, such as objects or actions. They use videos from online portals like YouTube as a source of training data, while tags provided by users during upload serve as ground-truth annotations.
Research has also shown that knowledge about social networks and family relationships can be used to improve semantic annotation suggestions. This includes up to 82% recall for people annotations, as well as improvements of 20–26% in tag annotation recall when no annotation history is available. In addition, utilising relationships among people while searching can provide at least 28% higher recall and 55% higher precision than keyword search, while still being up to 12 times faster. The approach to speeding up the annotation process is to build a real-time suggestion system that uses the available multimedia object metadata, such as captions, time, and an incomplete set of related concepts, together with additional semantic knowledge such as people and their relationships.
Finally, Li and Lu [18] suggest that there are five major methods for collaborative tagging, and that all systems and applications fit into one of five corresponding categories:

- [...] del.icio.us and maps them to various ontology concepts, which has helped to demonstrate that semantics can be derived from tags. However, before any ontological mapping can occur, the vocabulary usually must be converted to a consistent format for string comparison.
- [...] control and manipulate inconsistency and ambiguity in collaborative tagging. Statistical and pattern methodologies work well in general Internet indexing and searching, such as Google's PageRank or Amazon's collaborative filtering system.
- [...] social network knowledge into collaborative tagging to improve the understanding of tag behaviours.
- [...] visualization, such as showing a navigation map or displaying the social network relations of the users.
- [...] inconsistency and ambiguity issues associated with collaborative tagging, which stem from a lack of user consensus. Prominent applications, such as those offered by Wikipedia, that ask users to contribute more extensive information than tags have placed more focus on this issue. Given the complexity of the content being contributed, collaborative control and consensus formation is vital to the usability of a wiki and is driving extensive research.
This section considered example research related to collaborative retrieval and tagging. There is a great deal of research focused on retrieval that exploits user collaboration to improve results. Mostly, user activity is utilised rather than information explicitly contributed or annotated; consequently, there tends to be less useful, general-purpose metadata produced that could be exploited by other systems. There is also a rising amount of research being carried out on collaborative annotation of non-video media, especially photos, spurred on by websites such as Flickr and del.icio.us. Such sites provide the means for users to collaborate within a community to produce extensive and comprehensive annotations. However, the static nature of the media makes it less complicated and time-consuming to annotate than video, where there are a much greater number of semantic elements to consider, which can be intricately interconnected due to temporality. There is far less understanding of how users behave collaboratively when annotating video; consequently, a body of research is starting to emerge here, some examples of which were reviewed above, where user comments in blogs and other Web resources, tags in YouTube, sample data sets, and power-user annotations have been the source for annotating the videos. Since the majority of systems rely on automatic annotation or manual annotation from power users, the power of collaboration from more typical 'everyday' users, who are far greater in number, to tackle this enormous amount of data is underexplored. As a result, we undertook an experiment with a number of everyday users in order to ascertain their typical behaviour and preferences when annotating video, in particular when annotating user-created movies (e.g. those found on sites like YouTube). The experiment design and results are described in the following sections.
Experiment Design
In order to better understand how users collaborate when annotating movies, we undertook an experiment with 50 users. This experiment is now described and the results presented in the subsequent section.
Users were asked to undertake a series of tasks using two existing video metadata tools, and their interactions were tracked. The users were chosen from a diverse population in order to produce results from typical users, similar to the ZoneTag [11] average-user approach. The users were unsupervised, but were communicating with other users via an instant messaging application, e.g. Windows Live Messenger, so that transcripts of all conversations could be recorded for later analysis. These transcripts contain important information about the behaviour of users in a collaborative community, and contain metadata information if they are considered as comments on the videos. This is similar to the approach of Yamamoto et al. [14], who tried to utilise user comments and blog entries as sources for annotations. Users were also interviewed after they completed all tasks.
Video Metadata Tools and Content
The two video metadata tools used during the experiment were:
- YouTube, which enables users to upload their videos, set age ratings for the videos, enter a description of the video, and also enter keywords.
- COSMOSIS, an MPEG-7 modelling tool that supports video content annotation with MPEG-7. With this system, users can model video content and define the semantics of their content, such as objects, events, temporal relations and spatial relations [19, 20].
The video content used in the experiment was categorised according to the most popular types of self-created movies found on sites such as YouTube and Google Video. The categories were as follows:

- Personal: [...] family, friends and work colleagues. Content is typically based around the people, occasion or location.
- Business: [...] commercial purposes. It mainly includes videos created for advertising and promotion, such as video virals.
- Academic: [...] and learning or research.
- Recreational: [...] purposes other than personal, business or academic, such as faith, hobbies, amusement or filling free time.
In addition, the video content exhibits certain content features. We consider the key content features in this experiment to be as follows:

- [...] an explosion, a gunshot, a type of music. Aural occurrences include music, noises and conversations.
- [...] occur), user (uses another object or event), part (is part of another object or event), specialises (a sub-classification of an object or event), and location (occurs or is present in a certain location).
The video content used in the experiment was chosen for its ability to richly exhibit one or more of these features within one or more of the above content categories. Each segment of video contained one or more of these features but was rich in a particular category, e.g. one video might be people-rich while another is noise-rich. In this way, all the features are present throughout the entire experiment, and participants' responses and modelling preferences, when presented with audiovisual content that includes these features, can be discovered.

User Groups and Tasks
Users were given a series of tasks requiring them to tag and model the content of the video using the tools above. Users were assigned to groups (12–13 per group), one for each of the four different content categories above, but were not informed of this. Within these category groups, users worked together in smaller experiment groups of 3–6 users, to ease the logistics of all users in a group collaborating together at the same time. Members of the same group were instructed to communicate with other group members while they were undertaking the tasks, using an instant messaging application, e.g. Windows Live Messenger. The collaborative communication transcripts were returned for analysis using grounded theory [21]. Consequently, group membership took into account users' common interests and backgrounds, since this was likely to increase the richness and frequency of the communication. The importance of user communication during the experiment was stressed to users.
The four user category groups were given slightly different goals as a result of differences between the categories. The personal category group (Group 1) was asked to use their own videos, the business category group (Group 2) was provided with business-oriented videos, the academic category group (Group 3) was provided with videos of an academic nature, and the recreational category group (Group 4) was provided with a set of recreational videos. The videos for each category group differed in which features they were rich in, with other features also exhibited. Table 1 summarises the relationships between the content categories, user category groups and content-rich features.
Each user was required to tag and model the content of 3–5 minutes' worth of videos in YouTube and COSMOSIS. This could be one 5-minute-long video or a number of
Table 1 Mapping of content categories to user category groups to content features
Content category: Personal | Business | Academic | Recreation
videos that together totalled 5 minutes. This ensured that users need not take more than about 15 minutes to complete the tasks, since more time than this would greatly discourage them from participating, either initially or in completing all tasks. At the same time, the video duration is sufficient to accommodate meaningful semantics. Users did not have to complete all the tasks in one session and were given a two-week period to do so. YouTube tags, COSMOSIS metadata and collaborative communication transcripts were collected post-experiment.
After the users had undertaken the required tasks, a short, semi-structured interview was performed with each user. The focus of the interviews was on the users' experiences with, and opinions regarding, the tools.

Experiment Results
This section presents the results from the experiment described in the above section. The experiment produced three types of data from four different sources: the metadata from tagging videos in YouTube, the MPEG-7 metadata created by COSMOSIS, the collaborative communication transcripts, and the interview transcripts. The vast amount of textual data generated by these sources called for the use of a suitable qualitative research method to enable a thorough but manageable analysis of all the data to be performed.
Research Method: Grounded Theory
A grounded theory is defined as theory that has been "systematically [...]". The methodology comprises systematic techniques for the collection and analysis of data, exploring ideas and concepts that emerge through analytical writing [23]. Grounded theorists develop concepts directly from data through its simultaneous coding, which includes theoretical comparison and constant comparison of the data, up to the point where conceptual saturation is reached. This provides the concepts, otherwise known as codes, that build the means to tag the data in order to properly memo it and thus provide meaningful data (dimensions, properties, relationships) to form a theory. Conceptual saturation is reached when no more codes can be assigned to the data and all the data can be categorised under one of the codes already available, with no room for more codes. In our approach, we include an additional visualisation stage after memoing in order to assist with the analysis and deduction of the grounded theory. Figure 1 illustrates the steps taken in our data analysis approach.
Fig. 1 Grounded theory as applied to the collected data in this experiment (the raw data, considered at total, category-group, experiment-group and individual levels, undergoes open coding with theoretical comparison and constant comparison until conceptual saturation, yielding concepts; memoing then produces dimensions, properties and relationships, which are finally visualised)

As can be seen in the figure, the MPEG-7 metadata and the metadata gathered from YouTube tagging, along with the collaborative communication transcripts and
interviews, form the basis of the open coding process. The memoing process is then performed on a number of levels. The process commences at the individual level, where all the data from the individual users is processed independently. Then the data from users within the same experiment group are memoed. Following this, the data for entire user category groups is considered (personal, academic, business and recreational), so that the data from all the users who were assigned to the same category are memoed together to allow further groupings to emerge. Finally, all the collected data is considered as a whole. All of the dimensions, properties and relationships that emerge from these four memoing stages are then combined together and visualised. Finally, the visualised data is analysed to provide a grounded theory concerning movie content metadata creation and system feature requirements. The most important results are presented in the following two sub-sections and are then used to form the basis of an architecture for a collaborative movie annotation system.
Movie Content Metadata Creation
This section presents the key metadata results from the grounded theory approach. We first consider the most commonly used tags; then we discuss the relationships between the tags.
Most Commonly Used Tags
Identifying the tags most commonly used by different users when modelling a video can assist with combining the ontology approach with the social networking approach (described earlier) when designing a collaborative annotation system. Our results indicate that there were only minor differences in the use of tags for movies in different content categories and that, overall, the popularity of tags remains fairly consistent irrespective of these categories. Figures 2 to 5 visualise the tags used in YouTube in the different categories and show all of the popular tags. The four most commonly used tags in YouTube concerned: