
DOCUMENT INFORMATION

Title: Learning LBSN Data Using Graph Neural Network
Author: Hoàng Thành Đạt
Supervisor: Assoc. Prof. Huynh Quyet Thang
University: Hanoi University of Science and Technology
Major: Data Science and Artificial Intelligence
Document type: Master thesis
Year: 2022
City: Hanoi
Pages: 67
File size: 2.62 MB



HANOI UNIVERSITY OF SCIENCE AND TECHNOLOGY

MASTER THESIS

Learning LBSN data using graph neural network

HOANG THANH DAT
Dat.HT202714M@sis.hust.edu.vn
School of Information and Communication Technology

Supervisor: Assoc Prof Huynh Quyet Thang

Supervisor’s signature

School: Information and Communication Technology

May 26, 2022

SĐH.QT9.BM11 - Issued for the first time on 11/11/2014

SOCIALIST REPUBLIC OF VIETNAM

Independence - Freedom - Happiness

CONFIRMATION OF MASTER THESIS REVISIONS

Thesis author's full name: Hoàng Thành Đạt

Thesis topic: Learning LBSN data representations using graph neural networks

Major: Data Science and Artificial Intelligence

Student ID: 20202714M

The author, the scientific supervisor, and the thesis examination committee confirm that the author has revised and supplemented the thesis according to the minutes of the committee meeting on 28/04/2022, with the following contents:

1) Corrected the abbreviation Asso Prof to Assoc Prof.

2) Added a list of abbreviations before the Introduction section.

3) Fixed several typesetting errors, especially in the Hypergraph Convolution description (section 2.2.5), relating to mathematical notation.

4) Added illustrative examples and descriptions of node degree and hyperedge degree in section 2.2.5.

5) Removed section 3.1 Introduction because its content overlapped with earlier sections, and replaced it with a short passage connecting chapter 2 and chapter 3.

6) Described the datasets in more detail in section 4.1.1; added statistics on the numbers of hypernodes and hyperedges in the graph and explained the value 168.

7) Added section 4.1.2 on Implementation, describing the source code of the proposed method, the baselines, and the experimental machine configuration.


Graduation Thesis Assignment

Name: Hoang Thanh Dat

Phone: +84343407959

Email: Dat.HT202714M@sis.hust.edu.vn; thanhdath97@gmail.com

Class: 20BKHDL-E

Affiliation: Hanoi University of Science and Technology

I, Hoang Thanh Dat, hereby warrant that the work and presentation in this thesis were performed by myself under the supervision of Assoc. Prof. Huynh Quyet Thang. All the results presented in this thesis are truthful and are not copied from any other works. All references in this thesis, including images, tables, figures, and quotes, are clearly and fully documented in the bibliography. I will take full responsibility for even one copy that violates school regulations.

Student

Signature and Name


I would like to express my gratitude to my primary supervisor, Assoc. Prof. Huynh Quyet Thang, who not only guided me throughout this project but also encouraged me during my 5 years of university. I would also like to show my appreciation to Mr. Huynh Thanh Trung, who read my numerous revisions and helped make some sense of the confusion. I also want to extend my special thanks to Dr. Nguyen Quoc Viet Hung, who inspired me and helped me in my research career.

I would like to thank all the lecturers in the School of Information and Communication Technology, who provided valuable knowledge and experience during the Master program. I would also like to thank my friends and family who supported me and offered deep insight into the study, especially Mr. Tong Van Vinh and Mr. Pham Minh Tam, who supported me with the experimental machines and expanded my analysis for this work.


Location-based social networks (LBSNs) such as Facebook and Instagram have emerged recently and attracted millions of people[1], allowing users to share their real-time experiences via checkins. LBSN data has become a primary source for various applications, from studying human mobility to social network analysis[2][3]. In LBSNs, there are two essential tasks, friendship prediction and POI recommendation, that have been widely researched. While friendship prediction aims to suggest the social relationships that will be formed in the future, POI recommendation predicts the location a user will visit at a given time. The two tasks are correlated: using the mobility data can greatly enhance the friendship prediction performance[4] and vice versa.

Traditional approaches often require expert domain knowledge, designing a set of hand-crafted features from user mobility data (e.g. co-location rates[5][3]) or user friendships (e.g. Katz index[3][6]) and combining those features for downstream tasks. These approaches require huge human effort and domain expertise, yet lack generalizability to different applications[7]. Recent techniques capture the joint interactions between social relationships and user mobility by applying graph embedding techniques[8][9][10]. These techniques embed the nodes into low-dimensional embedding spaces that can be transferred to downstream tasks, but they can only learn from pairwise relationships: they cannot handle the complex characteristics of a checkin and instead divide a checkin into classical pairwise edges, thus resulting in loss of information. We observe that the LBSN graph is heterogeneous and indecomposable, so traditional techniques on classical graphs cannot capture the deep semantics in LBSN data. Recently, Graph Neural Networks (GNNs) have attracted wide attention [11][12] due to their capability of capturing the complex structural context in a graph. Traditional graph neural networks, however, can only learn pairwise relationships. Therefore, hypergraph convolution[13] was proposed to model the hypergraph and learn the n-wise proximity from hyperedges.

In this work, we propose HC-LBSN, a heterogeneous hypergraph convolution for LBSN tasks. Using the LBSN data, our method first constructs the LBSN heterogeneous hypergraph, which contains four types of nodes (user, time stamp, POI and category) and two types of hyperedges (friendship and checkin). Then, we apply several hypergraph convolution layers to capture the complex structural context. The embeddings of all nodes are learned in a unified vector space; however, we do not directly compare the similarity of nodes in the encoding space for downstream tasks, but stack decoding layers to transform the encoded node vectors into comparable vector spaces. We observe that the two essential downstream tasks can be transformed into one problem: scoring hyperedges (see section 3.5 for more details). Therefore, we apply a common method for both tasks. In particular, for each hyperedge candidate, a hyperedge embedding is generated and passed into an N-tuplewise similarity function in order to measure its existence. Extensive experiments illustrate the improvement of our model over baseline approaches, proving that our method can capture the deep semantics in LBSN data and deal with the heterogeneity and indecomposability of the LBSN hypergraph. The analysis of hyperparameter sensitivity has shown that future work should address the balance parameters, automatically adjusting an appropriate value for various datasets, and also apply the graph attention mechanism.

The results of this work can be applied to analyse and build features for a social network platform, such as suggesting friends and recommending tourist places. The proposed model can also be applied to learning on different datasets in other domains due to its generalizability and lack of expert knowledge requirement.

Student

Signature and Name


TABLE OF CONTENTS

CHAPTER 1 INTRODUCTION 1

1.1 Location-based social networks (LBSNs) 1

1.2 Research history 3

1.3 Research challenges 4

1.4 Our proposed method 6

1.5 Contributions and Thesis Outline 7

1.6 Selected Publications 9

CHAPTER 2 BACKGROUND 11

2.1 Learning on LBSNs data 11

2.2 Graph Embedding Techniques 11

2.2.1 Overview 11

2.2.2 Deepwalk 13

2.2.3 Graph Neural Networks 14

2.2.4 Heterogeneous graph learning 16

2.2.5 Hypergraph and Hypergraph convolution 20

CHAPTER 3 MULTITASK LEARNING FOR LBSNs USING HYPERGRAPH CONVOLUTION 24

3.1 Framework Overview 24

3.2 Notations and Definitions 26

3.3 LBSNs Hypergraph Construction 26


3.4 Hypergraph convolution 28

3.5 Loss function 29

3.6 Hyperedge embedding function 32

3.7 Optimization 33

CHAPTER 4 EXPERIMENTS 34

4.1 Setting 34

4.1.1 Datasets 34

4.1.2 Implementation 35

4.1.3 Downstream tasks and metrics 36

4.1.4 Baselines 37

4.1.5 Hyperparameter setting 37

4.2 End-to-end comparison 38

4.2.1 Friendship prediction 38

4.2.2 POI recommendation 39

4.3 The effectiveness of hyperedge embedding functions 42

4.4 Hyperparameter sensitivity 44

4.4.1 Checkin hyperedge weight 44

4.4.2 The number of hypergraph convolution layers 46

CHAPTER 5 CONCLUSION 49


LIST OF FIGURES

1.1 Example of LBSNs 2

2.1 Illustration of a general GNN inductive framework on a specific node (the red node)[12] 15
2.2 Example of heterogeneous graph for bibliographic network[33] 16
2.3 Example of a heterogeneous graph where nodes a and b should have similar embeddings; different node colors represent different node types 19
2.4 The difference between a simple graph (a) and a hypergraph (b)[13] 20
2.5 Example of node degrees on a simple graph (a) and a hypergraph (b) 22
2.6 Example of node degrees on a weighted simple graph (a) and a weighted hypergraph (b) 22
3.1 Illustration of the HC-LBSN framework 25

4.1 Friendship prediction performance of HC-LBSN (blue line) with other techniques on four experimental datasets SP, KL, JK and IST 40
4.2 POI recommendation performance (Hit@3) of HC-LBSN comparing to other techniques on experimental datasets IST, SP, KL and JK 41
4.3 POI recommendation performance (Hit@5) of HC-LBSN comparing to other techniques on experimental datasets IST, SP, KL and JK 41
4.4 POI recommendation performance (Hit@10) of HC-LBSN comparing to other techniques on experimental datasets IST, SP, KL and JK 42
4.5 Friendship prediction performance on SP, KL, JK and IST dataset with various hyperedge embedding functions 42
4.6 POI recommendation performance on SP, KL, JK and IST dataset with various hyperedge embedding functions 43


4.7 Friendship prediction performance of HC-LBSN when increasing the checkin hyperedge weight in four experimental datasets 45
4.8 POI recommendation performance of HC-LBSN when increasing the checkin hyperedge weight in four experimental datasets 45
4.9 Friendship prediction performance of HC-LBSN when increasing the number of hypergraph convolution layers in SP, JK and KL dataset 47
4.10 POI recommendation performance of HC-LBSN when increasing the number of hypergraph convolution layers in SP, JK and KL dataset 47


LIST OF TABLES

3.1 Notations 26
4.1 Statistics of the datasets 35


CHAPTER 1 INTRODUCTION

1.1 Location-based social networks (LBSNs)

A social network is a framework where users can interact with others in many forms, such as friendship, common interests, and shared knowledge. Generally, a social networking service builds on and reflects the real-life social networks among people through online platforms such as a website, providing ways for users to share ideas, activities, events, and interests over the Internet [14].

With the development of location-acquisition technology (e.g. GPS and WiFi), people can add a location to existing online social networks. For instance, people can share their photos with their current location, called checkins, on platforms such as Facebook and Twitter. They can also interact with others, for example commenting on a hotel page in a social network and browsing other people's reviews of that hotel.

By adding location into social networks, the gap between social networks and reality becomes smaller, and such social networks are called Location-based social networks (LBSNs). These LBSNs, such as Foursquare, Gowalla, Facebook Local or Yelp, have emerged recently and attracted millions of users [1]. In LBSNs, users can share their real-time experiences via checkins, each of which includes four essential pieces of information: a user, a specific timestamp, an activity (e.g. swimming, trekking) and a POI (point of interest) such as a swimming pool or a supermarket.

Since LBSN data includes objects and various relationships between them, researchers often model LBSN data using a graph. Figure 1.1 illustrates an example of an LBSN graph which contains four kinds of nodes: user, time, POI and semantic activity. In LBSNs, users, represented by u, play a central role. Users can establish relations with each other; such relations between users are called friendships. In figure 1.1, the friendship edges are represented by blue dashed lines, for example, the relations between users (u1, u2) and (u3, u4). Users can also form relations with other node types such as locations and activities. A user's activity in LBSNs is called a checkin, which is formed by a quadruple including four crucial pieces of information (user, time stamp, POI, semantic), denoted by (u, t, p, s). A POI p represents a tagged location of a checkin, while s indicates the activity of the user such as swimming or shopping. A time stamp t is the time when the checkin occurs. For a specific POI, different semantic activities can occur, like shopping, eating, or hanging out at a shopping mall.

LBSN data contains rich socio-spatial properties of user activities.

Figure 1.1: Example of LBSNs

LBSNs offer many new research challenges; they are a primary data source to study human mobility and social network analysis[2][3]. Two typical applications of LBSN data have been widely investigated: friendship prediction and POI recommendation. The former suggests social friendships that will likely be established in the future. For example, people who enjoy the same activity at the same place within an overlapping period potentially meet and make friends with each other, due to a common presence and hobby. POI recommendation aims at predicting the place people will visit at a given time. For instance, people in a community have a high chance to collaborate in the same activity (e.g. students attending a class, gym members working out at a gym).

Though LBSN data has been studied widely, effectively capturing the structures of the LBSN graph still remains challenging due to its heterogeneity and the indecomposability of its hyperedges. In the next section, we describe the research history of LBSNs with several baseline approaches and highlight the research challenges which will be considered in the newly proposed method.

1.2 Research history

Recently, location-based social networks such as Facebook Local, Yelp, Foursquare or Gowalla have attracted millions of users to share their daily experiences with others. These platforms contain a great pool of information and are mostly accessible to the public. To this end, many studies attempt to learn the underlying patterns of user behaviour in LBSNs[15].

Due to the intrinsic correlation between human mobility and social relationships, existing work has shown that considering such correlation can improve the performance on both friendship prediction [16][17] and location prediction[17]. The earlier techniques often require expert domain knowledge, designing a set of hand-crafted features from user mobility data. For example, Wang et al.[5] defined the co-location rate based on their observations of datasets and reality. Specifically, they determine a threshold depending on the number of shared communication activities; if two people have many shared activities, they will potentially be friends in the future. Yang et al. [7] characterise user mobility based on two criteria: the total time-independent travel distance and the probability of returning to particular locations. Song et al. [18] use a metric called mobility entropy to reflect users' daily mobility. Backstrom and Kleinberg [19] proposed a dispersion metric to estimate the tie strength between connected users and detect couples and romantic partners by their strong social bond pattern. These approaches, however, require significant human effort and domain knowledge, as well as lacking generalizability to different applications.

Recent techniques leverage the advances in graph representation learning[11][10][20] to embed the nodes into low-dimensional embedding spaces that automatically capture the users' mobility and social context, based on the original graph topology and node attributes. The graph representation learning approach translates the complex structure of a graph into a latent space, in which nodes are assigned to low-dimensional vectors so that the learnt latent space can reflect the topology of the original graph. For example, if there is an edge between two nodes, these two nodes should be embedded close to each other in the embedding space. If two users are friends, they should be embedded closely in the latent space; a similar phenomenon holds for other relationships such as user-POI and user-activity. Hence, friendship prediction is often performed as a link prediction task, by measuring the similarity between two user nodes[10][4] with, for example, cosine similarity or dot product. The POI recommendation task can also leverage these learnt embeddings to enhance the prediction ability [4].

However, classical graph embedding techniques are unable to capture the complex characteristics of checkins. A checkin is formed by multiple objects of various types such as (user, time, POI, category), and these objects have a strong relationship. Thus, in order to capture such a complex relationship, traditional methods divide a checkin into pairwise edges and apply a simple graph learning process for pairwise relationships[4]. For example, in order to learn vector representations for the LBSN graph, the Deepwalk algorithm decomposes a checkin into 6 classical edges, including (user, time), (user, POI), (user, category), (time, POI), (time, category) and (POI, category)[4]. This transformation is not invertible, so the pairwise relationships cannot describe a checkin, which leads to degraded performance. Moreover, the automatic decomposition generates less informative relationships like (user, time) and (POI, time). Such a strong and indecomposable relationship like a checkin is called a hyperedge, where a hyperedge can contain more than two vertices.
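To make the decomposition concrete, the short sketch below (illustrative only, not the thesis's code) enumerates the 6 pairwise edges that a single checkin (user, time, POI, category) collapses into; the node labels are made-up examples, and reversing this step cannot tell which pairs originally belonged to the same checkin.

from itertools import combinations

# One checkin with its four typed members (illustrative labels).
checkin = [("user", "u1"), ("time", "t42"), ("poi", "p7"), ("category", "gym")]

# Every unordered pair of the four members becomes a classical edge,
# giving C(4, 2) = 6 edges such as (user, time) and (poi, category).
pairwise_edges = list(combinations(checkin, 2))
print(len(pairwise_edges))  # 6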

With the indecomposable characteristic of checkin hyperedges in mind, LBSN2Vec[4] was proposed in order to capture the complex properties and heterogeneity of the LBSN graph. LBSN2Vec follows a random walk based approach, inspired by the idea of the Deepwalk algorithm. LBSN2Vec introduces a random-walk-with-stay scheme to jointly sample friendships and checkins from the LBSN hypergraph, generating sequences of user nodes. At each user node, Yang et al.[4] randomly sample a set of checkins related to the current user and optimize the embedding of the user towards its checkins as well as towards the other friends in its local context. Thus, LBSN2Vec can capture the relationships between objects of different types. LBSN2Vec produces promising results compared to other baseline approaches due to its flexibility in learning the interactions between social friendship and user mobility. However, the strategy of LBSN2Vec is equivalent to the path-based embedding used in learning heterogeneous graphs[21], which can only learn from the co-occurrence between nodes within a fixed window size. The random walk mechanism is also not invertible, meaning that the sentences generated by the random walk algorithm cannot reconstruct the original graph; therefore, learning on generated sentences causes information loss and cannot fully exploit the original topology of the graph.

1.3 Research challenges

Although there have been many studies on LBSNs, these approaches still face many issues, as they cannot fully exploit the structure of the LBSN graph. For example, the hand-crafted features approach cannot capture deep semantics and high-order proximity in the LBSN graph, as mentioned in the previous section. Automatic feature learning methods consider both the first-order and higher-order proximity but still result in information loss, and thus cannot fully capture the complex characteristics of LBSNs[4]. Learning the LBSN hypergraph is a challenging task due to the complexity of the LBSN graph and the requirements of the downstream tasks; we summarize the research challenges in LBSNs in three points:

• Heterogeneity: LBSN data includes many objects of various types, for instance users, locations, time stamps and semantic activities. Network embedding techniques on LBSNs have to consider the heterogeneity of the graph. Classical embedding algorithms like Deepwalk[10], Node2Vec[20] and GraphSAGE[12] can only handle homogeneous graphs, where nodes belong to one type only. When nodes have different types, different relationships form in the graph, and each connection has to be treated differently from the others. For example, in LBSNs, two common connections are friendship edges and checkin edges. Since the number of checkins is often very large compared to the number of friendships, the influence of checkins on the user embedding has to be different from that of user-user relationships. For instance, two users are more likely to be friends if they both go to the gym club at the same time every week than if they merely have 7 mutual friends: because they often meet at the gym, they have a high chance of chatting with each other, whereas two users may have some mutual friends but no relation between them. Although the numbers of mutual friendships and checkins may be the same, their influence on the user is different; thus, when generating a vector representation for a user, the two connection types must have different effects. How to weight the influence from various connections is also a challenging task for heterogeneous graph embedding techniques.

• Indecomposability: As mentioned above, consider a checkin, which contains four pieces of information: a user, a time stamp, a POI and a semantic activity. These objects have a strong relationship, and any decomposing algorithm will cause loss of information. Thus, in order to capture the complex property of a checkin, classical embedding techniques[10][12], which can only capture pairwise relationships, cannot be applied.

• Multitask requirement: In LBSNs, there are two essential tasks that have been widely researched, friendship prediction and POI recommendation. These tasks reflect the embedding quality of proposed methods; therefore, a newly proposed model has to be evaluated on both tasks. This results in multi-task training, and the problem is balancing the influence of the social relationship and user mobility information on each other. For example, high attention on user mobility causes poor results on friendship prediction and vice versa[4].

Understanding the drawbacks of existing LBSN learning methods, together with the remaining challenges, motivates us to propose a novel method that can handle the heterogeneity and indecomposability of an LBSN hypergraph. The proposed method can also be trained by multi-task learning in an end-to-end fashion. Details of our proposed model are introduced in the next section.

1.4 Our proposed method

Recent techniques for learning LBSN hypergraphs, such as LBSN2Vec, can capture both the first-order and high-order proximity[4]. However, LBSN2Vec[4] generates sequences of nodes inspired by the random walk based approach and thus loses the structural information of the graph. An improvement over the random walk based approach is the subgraph-based method[21], which captures the structural context between nodes. Compared with the meta-path structures from random walks, as in LBSN2Vec and Deepwalk, the structural context contains much more semantic information. An example is shown in section 2.2.4, where nodes a and b both connect to the same subgraph and therefore their embeddings should be similar. Motivated by the use of structural context, we analyse the use of graph neural networks for LBSNs. Recently, graph neural networks have attracted wide attention due to their success in capturing the complex relationships between nodes and outperforming other techniques in various downstream tasks like node classification and link prediction[11][12][22]. Traditional graph neural networks such as GCN[11] and GAT[22], however, are designed to learn pairwise relationships in a homogeneous graph. Thus, hypergraph convolution[13] was proposed to model the hypergraph and learn the n-wise proximity from hyperedges. We reason that hypergraph convolution could be effective in learning LBSN data, since it can capture the complex characteristics of checkin hyperedges. However, it solves only one of the three main challenges mentioned above, and thus we improve the hypergraph convolution algorithm for our task on LBSN datasets, taking the heterogeneity and multi-task learning into account.

In this work, we propose HC-LBSN, a heterogeneous hypergraph convolution for the LBSN task. The heterogeneous hypergraph contains four types of nodes (user, time stamp, location, category) in the LBSN task with two types of hyperedges: friendship hyperedges and checkin hyperedges. Our method follows the encoder-decoder architecture, where the encoder uses several hypergraph convolution layers to learn the node representations for different types of nodes in a unified vector space. By stacking several hypergraph convolution layers, the encoder can capture the high-order proximity between nodes.

In LBSN2Vec, Yang et al.[4] mentioned that different node types should be embedded into different embedding spaces, as a unified space cannot reflect the similarity between nodes. For example, consider a famous person who has many friends; this person is also connected to a POI via a checkin, but none of her friends are linked to this POI. By learning the embeddings of users and POIs in a single space, the embedding of the famous user is close to the POI and close to her friends, while at the same time the embeddings of her friends should be far away from that POI because they have no connection to it, which is a conflict. However, in our proposed method, we do not directly compare the similarity of nodes in this encoding space for downstream tasks, but use two decoders aiming at the two essential tasks, friendship prediction and POI recommendation. In particular, since friendships and checkins are represented by friendship hyperedges and checkin hyperedges, respectively, predicting a missing hyperedge is equivalent to predicting a friendship or a checkin. Hence, we use a common method for both tasks. Specifically, our decoder first generates hyperedge embeddings and passes them to a non-linear function that measures a score for the existence of a hyperedge. A higher score reflects a higher probability for a hyperedge to exist, and vice versa.
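As an illustration of this decoding idea, the sketch below scores a candidate hyperedge by pooling the encoded node vectors into a hyperedge embedding and passing it through a small non-linear scorer. The mean pooling, layer sizes and parameter names are assumptions for illustration only, not the exact functions defined in sections 3.5 and 3.6.

import numpy as np

rng = np.random.default_rng(0)
dim = 16

def hyperedge_score(node_vectors, W1, w2):
    """Pool member embeddings into one hyperedge embedding, then score it."""
    edge_emb = np.mean(node_vectors, axis=0)        # illustrative pooling choice
    hidden = np.tanh(W1 @ edge_emb)                 # non-linear transformation
    return 1.0 / (1.0 + np.exp(-(w2 @ hidden)))     # sigmoid score in (0, 1)

# A checkin candidate (user, time, POI, category): four encoded vectors.
candidate = rng.normal(size=(4, dim))
W1 = rng.normal(size=(dim, dim)) * 0.1
w2 = rng.normal(size=dim) * 0.1
print(hyperedge_score(candidate, W1, w2))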

Our proposed method is trained using multi-task learning in an end-to-end fashion. The two essential LBSN tasks are performed to optimize both the encoder and the two decoders. Thus, the generated node embeddings can be used for further downstream tasks, predicting newly forming social relationships and suggesting POIs to users. In order to balance the influence of each hyperedge type on the final node representations, we add weights to the hyperedges; adjusting them makes the embedding quality more attentive to either social relationships or human mobility.

1.5 Contributions and Thesis Outline

With the proposed approach that considers the three main challenges and handles the drawbacks of traditional methods, we summarize our contributions in this thesis as:

• A novel approach that applies hypergraph convolution to the LBSN task, thus dealing with the indecomposability of the LBSN graph and capturing the high-order structures of user-checkin relationships.

• The model follows the encoder-decoder architecture, which handles the heterogeneity of node embeddings and enables multi-task learning for predicting both friendships and future checkins.

• Extensive experiments illustrate the capability of our proposed model in capturing deep semantic structures in the LBSN hypergraph. Our method outperforms baseline approaches on the two tasks: friendship prediction and POI recommendation.

The thesis is organized as follows.

Chapter 1 provides an introduction to our work, including the problem introduction of LBSNs (section 1.1), baseline methods (section 1.2) and the remaining challenges (section 1.3). From these observations, we propose our novel model to deal with the challenges in the LBSN task (section 1.4).

In the next chapter, chapter 2, we consider the background knowledge in more detail, including a summary of learning on LBSN data which shortly introduces different approaches on LBSN data (section 2.1). Then, we introduce the relevant embedding techniques in detail in section 2.2, approaching the problem from a top-down view. In this section, we first present an overview of graph embedding techniques with three main approaches (section 2.2.1). We then describe several graph embedding methods which will be used in the experiments and which influence the proposed method, including Deepwalk (section 2.2.2) and Graph Neural Networks (section 2.2.3). Section 2.2.4 approaches closer to our method: we present heterogeneous graph learning, a sub-field of graph embedding which handles different types of nodes and connections in a graph. In this section, we introduce popular methods to capture the complex relationships in a heterogeneous graph. Classical heterogeneous graphs are still not sufficient for LBSN data, thus we describe a more complex graph data structure called a hypergraph, which is better suited to model LBSN data. Section 2.2.5 provides the definition of a hypergraph and the most effective method to learn it, named hypergraph convolution. Hypergraph convolution is a kind of GNN that allows capturing the complex characteristics of hyperedges such as checkins.

Chapter 3 describes our proposed method. In this chapter, we first give a short introduction including the observations that motivate this work. Then, we present our model in detail, again using the top-down view. Particularly, we first introduce the framework in section 3.1, which illustrates the model components and flows. We then describe each stage of our model, including hypergraph construction in section 3.3 and learning on the constructed graph using hypergraph convolution in section 3.4. The model is trained in an end-to-end fashion and the loss function is defined in section 3.5. In the end, we show how to train our model in the optimization section (see section 3.7).

After describing our proposed method for learning downstream tasks in LBSNs, we perform experiments to prove the capability of the model in capturing the complex relationships in the LBSN hypergraph. Chapter 4 presents our experiments on common datasets for evaluating LBSNs. We first provide the settings for the experiments in section 4.1, introducing the popular datasets and then how to evaluate the model with downstream tasks and metrics. We also provide settings for the baseline approaches and the hyperparameter configuration of our model. In the following sections, extensive experiments are performed to show the quality of the generated embeddings. We compare our method HC-LBSN with other baseline approaches for both friendship prediction and POI recommendation in section 4.2. We then analyse the influence of the model components: several component candidates, which are difficult to set, are evaluated in the hyperparameter sensitivity section 4.4. This section provides an inside view of the influence of different hyperparameters, including the checkin hyperedge weight and the number of hypergraph convolution layers.

Chapter 5 concludes our work and discusses future work.

1.6 Selected Publications

• Pham Minh Tam*, Hoang Thanh Dat*, Huynh Thanh Trung and Huynh Quyet Thang. "Social multi-role discovering with hypergraph embedding for Location-based Social Networks." In 14th Asian Conference on Intelligent Information and Database Systems (2022) (Submitting)


This chapter has introduced the overview of our work, providing the necessary knowledge for the thesis, from understanding the problem to the motivation for the proposed method. In the next chapter, we describe the required background before approaching the proposed method, including learning on LBSN data and graph embedding techniques.

CHAPTER 2 BACKGROUND

2.1 Learning on LBSNs data

Recently, location-based social networks such as Facebook Local, Yelp, Foursquare or Gowalla have attracted millions of users to share their daily experiences with others. These platforms contain a great pool of information and are mostly accessible to the public. To this end, many studies attempt to learn the underlying patterns of user behaviour in LBSNs [15]. The earlier techniques often leverage hand-crafted features such as daily routines [18], [23] and the dispersion metric [19], and apply heuristics to retrieve the needed insight about the users. For example, Wang et al. [5] observe that friends often have shared communication activities; thus they define a threshold and determine that two people are potentially friends if the number of shared activities between them is greater than the threshold. Yang et al. [7] characterise user mobility based on two criteria: the total time-independent travel distance and the probability of returning to particular locations. Song et al. [18] use a metric called mobility entropy to reflect users' daily mobility. Backstrom and Kleinberg [19] proposed a dispersion metric to estimate the tie strength between connected users and detect couples and romantic partners by their strong social bond pattern. However, the feature engineering requires significant human effort and expert knowledge.

2.2 Graph Embedding Techniques

2.2.1 Overview

In this section, we present the learning process on a classical graph. Let G = (V, E) be a graph with node set V and edge set E, and let the matrix X ∈ R^{|V|×d} be the node feature matrix, where d is the number of features. For example, in a social network, V denotes the set of users and the edges indicate the social friendships between them. The personal information of users such as age, gender and hobbies can be encoded into the node feature matrix.

Most existing graph embedding approaches focus on preserving the pairwise relationships between nodes in a graph. The graph embedding techniques can be divided into three main approaches[24]:

• Matrix factorization based: These methods represent the connections in the graph, either the first-order proximity (e.g. the adjacency matrix) or higher-order proximity (e.g. the similarity of the structural context between two nodes), in the form of a matrix. The node embeddings are then learned by factorizing this matrix. Most of these methods only exploit the nodes and the relationships illustrated by edges; they are unable to aggregate the information from node features. Hence, these methods have not been widely researched in the last 3 years. The most popular matrix factorization methods include GraRep[25], TADW[26] and HOPE[8].

• Random walk based: Random walk based approaches are able to capture more complex relationships compared to matrix factorization approaches. By applying a random walk mechanism, an input graph is transformed into a list of sentences, where each sentence is a sequence of nodes. This idea is inherited from the word embedding process on documents, by treating nodes as words; the representation of a node is learned by predicting its local context (its neighbor nodes). Deepwalk[10] is the first method that applies the random walk mechanism on a graph to capture the structure of the graph. The random walk mechanism used in Deepwalk is completely random and does not consider the edge weights between nodes. Thus, Node2Vec[20] improves the random walk mechanism by exploiting the edge weight information. The biased random walk mechanism proposed in Node2Vec allows capturing more flexible structures by adjusting the hyperparameters p and q that tune the balance between the two traversal strategies BFS and DFS. It should be noted that the random walk based methods can only be applied on non-attributed graphs.

• Deep learning based: Due to the success of deep learning models on various tasks, deep learning is also applied to node representation learning. Some popular methods are SDNE[27] and VAE[28], and the most famous method, which is widely used recently, is the Graph Neural Network (GNN). Variants based on GNNs such as GCN[11], GraphSAGE[12] and GAT[22] enable capturing the structure of a graph with various aggregation functions. More details on Graph Neural Networks are presented in section 2.2.3. This approach outperforms traditional methods on many downstream tasks[12][22] due to its capability to capture and aggregate information from node features and relationships in a graph. In particular, GNNs allow learning from both the node features and the edges in a graph. Moreover, they can also exploit edge features and handle heterogeneous graphs[21].

In the next section, we present some graph embedding techniques, including Deepwalk and GNNs. The Deepwalk algorithm is a popular method in the random walk based approach and is also used in our experiment section (see section 4.2); therefore, we introduce the Deepwalk algorithm next. We also introduce the general process of GNN algorithms following the message-passing mechanism, due to their success in learning network embeddings.

2.2.2 Deepwalk

Though GNN approaches often perform better than random walk based methods on various downstream tasks such as node classification and node clustering [12][22], the Deepwalk algorithm is still widely used due to its low computation cost compared to GNNs. In this work, the Deepwalk algorithm is also used as a comparison baseline; thus, in this section we introduce the Deepwalk algorithm in detail, in order to show that it is unable to capture the complex relationship in a hyperedge, which will be discussed later in section 4.2.

Deepwalk inherits the idea from the Skip-gram model[29][30], which enables learning a node representation based on its context. The Deepwalk algorithm transforms a graph into sentences using a random walk mechanism, and the learning process is then applied on the generated sentences. Given a sentence generated from a random walk algorithm, s = {v1, v2, ..., vL}, where L is the walk length, Deepwalk learns the node representation Φ(vi) based on Skip-gram by predicting its local context (its neighbor nodes in the walk). The objective function is:

minimize over Φ:  − log Pr({v_{i−w}, ..., v_{i+w}} \ v_i | Φ(v_i)),

where w is the context window size and Φ maps each node to its embedding vector. The idea comes from word embedding: if two words x and y both appear in the same context, it means that x and y are synonyms, so x and y can replace each other in a sentence. On a graph, having a similar structural context means having many common neighbors, so nodes that have many common neighbors should be embedded close to each other in the embedding space. Conversely, if two nodes have very different contexts then their distance is large in the embedding space.
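As a concrete illustration of the walk-generation half of Deepwalk (the Skip-gram training is then run on the walks exactly as on sentences), the following sketch is an assumption-level example rather than the thesis's implementation; the graph representation and walk parameters are illustrative.

import random

def deepwalk_walks(adj, num_walks=10, walk_length=40, seed=0):
    """Truncated random walks over an adjacency dict {node: [neighbors]}."""
    rng = random.Random(seed)
    walks, nodes = [], list(adj)
    for _ in range(num_walks):
        rng.shuffle(nodes)                          # one pass over all start nodes
        for start in nodes:
            walk = [start]
            while len(walk) < walk_length:
                neighbors = adj[walk[-1]]
                if not neighbors:
                    break
                walk.append(rng.choice(neighbors))  # uniform, unweighted choice
            walks.append(walk)
    return walks

# Example: a tiny triangle graph; the walks would be fed to a Skip-gram model.
adj = {"a": ["b", "c"], "b": ["a", "c"], "c": ["a", "b"]}
print(deepwalk_walks(adj, num_walks=1, walk_length=5)[0])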

That is the general idea of the Deepwalk algorithm; other variants of the random walk based approach improve the random walk mechanism to be more flexible for different graph types[20] or to capture more complex relationships. Random walk based methods, however, learn from the co-occurrence of nodes in a sentence: nodes that often co-occur within the same window are close in the embedding space. Graph Neural Networks often perform better than random walk based methods due to their capability of exploiting the deep local structural context and higher-order proximity more flexibly. In the next section, we introduce the common process of GNNs.

2.2.3 Graph Neural Networks

A graph neural network (GNN) uses the graph structure and the node features to learn a vector representation h_v for each node. Recent GNN methods follow the message-passing mechanism, where the vector representation of each node is iteratively updated by aggregating the hidden representations of its neighbor nodes [11] [31]. After completing k iterations, the vector representation of v holds the information of the k-hop neighborhood where v is the central vertex. At iteration k, the GNN performs these functions:

a_v^(k) = AGGREGATE^(k)({h_u^(k-1) : u ∈ N(v)}),   (2.3)
h_v^(k) = COMBINE^(k)(h_v^(k-1), a_v^(k)),   (2.4)

where a_v^(k) and h_v^(k) represent the aggregated vector of N(v) and the vector of v at iteration k, respectively. The representation of node v is initialized as its feature vector, h_v^(0) = x_v.

There are a number of AGGREGATE and COMBINE functions. For example, GraphSAGE-MAX [12] uses the AGGREGATE function given by Eq.(2.5):

a_v^(k) = max({σ(W h_u^(k-1)), ∀u ∈ N(v)}),   (2.5)

where W is a learnable matrix parameter and max is the element-wise maximum function. The COMBINE function in equation 2.4 represents a vector concatenation [h_v^(k-1), a_v^(k)] or an element-wise summation, followed by a mapping matrix W.

Figure 2.1: Illustration of a general GNN inductive framework on a specific node (the red node)[12].

A further relevant example is the GCN, where the element-wise mean is implemented, as shown in [11]. The AGGREGATE and COMBINE functions are combined in Eq.(2.6):

h_v^(k) = σ(W · mean({h_v^(k-1)} ∪ {h_u^(k-1), ∀u ∈ N(v)})).   (2.6)

Figure 2.1 illustrates the GNN process on a specific node (the red node). In the initial stage, a neighborhood sampling mechanism is applied to randomly select a set of neighbor nodes. This process is essential for large graphs in order to address the memory consumption issue, since a large number of nodes combined with a large number of GNN layers easily leads to an "out of memory" error. Following the sampling of neighbor nodes, the AGGREGATE and COMBINE functions are applied. At each layer, the node features are aggregated to form the features for the next layer using the AGGREGATE and COMBINE functions; the node features thus become more abstract at the next layer due to the capability to capture information from a longer distance. At the last layer, the hidden representations of the nodes are forwarded to downstream tasks such as node classification and clustering.
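To make the AGGREGATE/COMBINE loop above concrete, here is a minimal sketch of one message-passing layer with mean aggregation and concatenation, written in plain numpy; the aggregator, non-linearity and weight shapes are illustrative assumptions rather than any specific published variant.

import numpy as np

def message_passing_layer(H, neighbors, W):
    """One GNN layer: aggregate neighbor features (mean), combine by
    concatenation with the node's own vector, then map through W with a ReLU.
    H: (num_nodes, d) node features; neighbors: dict node -> list of node ids."""
    d = H.shape[1]
    out = np.zeros((H.shape[0], W.shape[0]))
    for v in range(H.shape[0]):
        nbrs = neighbors[v]
        a_v = H[nbrs].mean(axis=0) if nbrs else np.zeros(d)   # AGGREGATE
        combined = np.concatenate([H[v], a_v])                # COMBINE input [h_v, a_v]
        out[v] = np.maximum(0.0, W @ combined)                # ReLU(W [h_v, a_v])
    return out

# Tiny example: 3 nodes on a path 0-1-2, 4-dimensional features.
H = np.eye(3, 4)
neighbors = {0: [1], 1: [0, 2], 2: [1]}
W = np.random.default_rng(0).normal(size=(8, 8)) * 0.1
print(message_passing_layer(H, neighbors, W).shape)  # (3, 8)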

There are many variants of GCNs, which differ only in the AGGREGATE and COMBINE functions. The idea is borrowed from convolutional neural networks (CNNs) for image processing: a CNN aggregates the pixels adjacent to the current pixel to extract local features such as shapes and backgrounds of an image. While image processing operates on pixels, in graphs the GCN operates on node features. For each vertex in the graph, the GCN approach aggregates the features of the adjacent (neighbor) vertices and then generates the hidden representation for that vertex. Thus, similarly to image processing, this representation contains information about the structures of the graph.

Figure 2.2: Example of heterogeneous graph for bibliographic network[33].

2.2.4 Heterogeneous graph learning

Consider a graph G = (V, E). Denote by T_V the number of node types, V = {V_t}, t = 1, ..., T_V, and let the set of edges E belong to T_E types, E = {E_t}, t = 1, ..., T_E. A classical graph is called a heterogeneous graph if either T_V ≥ 2 or T_E ≥ 2 [32].

Heterogeneous graphs have many real-world examples. For example, a bibliographic network such as an academic network[33] includes 5 node types (conference, conference-time, paper, author and organization). Various edge types represent different relationships such as collectedIn, publishedIn, citation, authoredBy and affiliation. Furthermore, more complex relationships can be inferred from a heterogeneous graph, for example the semantic "organization published paper in conference", which is derived from the meta-path organization, author, paper, conference-time, conference. Due to the capability of modelling complex structures in real-world scenarios, research on heterogeneous graphs has been flourishing in data mining and machine learning, for example in text analysis[34][35] and cybersecurity[36].

Different from homogeneous graphs, learning from heterogeneous graphs is more challenging due to their complex structure and heterogeneous attributes. Traditional graph learning techniques cannot capture the complex structure of a heterogeneous graph. For example, a social network may contain two node types, user and hobby, and two kinds of relationships between objects, friendship between users and a user's hobby. In fact, people with many mutual friends are likely to become friends in the future, while people with a common hobby may not, due to the lack of interactions between them. Different edge types influence the downstream tasks in different ways; thus, homogeneous graph learning techniques cannot reflect the complex characteristics of a heterogeneous graph, since they treat objects in the same way. Researchers have studied learning on heterogeneous graphs in various ways; the most common methods can be divided into three approaches[21]: link-based embedding, path-based embedding and subgraph-based embedding. These approaches are described below.

a, Link-based embedding

This approach is the most basic one, which captures only the first-order proximity in a heterogeneous graph. In particular, to distinguish different edge types, the idea is to transform nodes into different vector spaces rather than a unified vector space. Methods in the link-based approach define a metric S to optimize the similarity between nodes that have a connection. For instance, given an edge e = (v_i, v_j), PME[37] defines a distance function between the two nodes v_i and v_j and minimizes their distance in the transformed embedding space, where both nodes are first projected by a relation-specific transformation before the distance is computed. However, link-based methods capture only the first-order proximity and miss deeper semantics. Thus, other approaches have been proposed to capture more complex semantic information in a graph, such as path-based embedding and subgraph-based embedding.

b, Path-based embedding

As mentioned above, the path-based approach allows capturing more complex structural semantics in a heterogeneous graph compared to the link-based method. Path-based methods often use a random walk mechanism to generate sequences of nodes, transforming a heterogeneous graph into sentences[39][40]. However, because the number of high-order relations is very large, metapath2vec[39] chooses only relations with rich semantics. For instance, in a user-hobby network, several metapaths contain rich information which can be used to predict future friendships, such as user-user-user (illustrating users with mutual friends) and hobby-user-user-user-hobby (friends of friends may share a hobby with the user). By applying traditional homogeneous graph learning techniques on the sentences with an appropriate window size, metapath2vec is able to capture both first-order and high-order proximity in a graph. The learning process is mostly similar to Deepwalk, which was introduced in section 2.2.2.
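A hedged sketch of the metapath idea follows: the walk is constrained so that consecutive nodes follow a chosen type pattern such as user-hobby-user. The graph representation and the simple cycling rule are assumptions for illustration, not metapath2vec's exact procedure.

import random

def metapath_walk(typed_adj, start, metapath, length, seed=0):
    """typed_adj[node] is a list of (neighbor, neighbor_type) pairs;
    metapath is a symmetric type pattern, e.g. ["user", "hobby", "user"]."""
    rng = random.Random(seed)
    walk = [start]
    for step in range(1, length):
        wanted = metapath[step % (len(metapath) - 1)]            # cycle the type pattern
        candidates = [n for n, t in typed_adj[walk[-1]] if t == wanted]
        if not candidates:
            break
        walk.append(rng.choice(candidates))
    return walk

# Tiny example: users u1, u2 share hobby h1.
typed_adj = {
    "u1": [("h1", "hobby"), ("u2", "user")],
    "u2": [("h1", "hobby"), ("u1", "user")],
    "h1": [("u1", "user"), ("u2", "user")],
}
print(metapath_walk(typed_adj, "u1", ["user", "hobby", "user"], length=5))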

Though in theory the path-based methods can capture very complex semantic relationships in a graph, they only consider the co-occurrence of nodes within a fixed window size. The transformation from a graph into sentences is not invertible; therefore, it causes information loss. Deep, complex semantic structures cannot be represented by sequences but only by subgraphs. For example, consider the heterogeneous graph in figure 2.3: the nodes a and b should have similar vector representations due to their similar context, both connecting to a complete graph of 6 nodes. Hence, a more generalized method is introduced to capture structural context in heterogeneous graphs, called subgraph-based embedding.

c, Subgraph-based embedding

Figure 2.3: Example of a heterogeneous graph where nodes a and b should have similar embeddings; different node colors represent different node types.

A subgraph represents a more complex structure in the graph compared to the sentences of the path-based approach. Subgraph-based embedding thus significantly improves the quality of node embeddings[41][42][43]. Among these methods, mg2vec[43] generates metagraphs using a metagraph-guided random walk mechanism. The mechanism is quite similar to the random walk algorithm; however, instead of generating a sequence of nodes, the metagraph-guided random walk creates subgraphs. The learning process preserves the proximity between node pairs and between nodes and metagraphs.

Another popular algorithm is DHNE[42], which is designed to learn node embeddings in a heterogeneous hypergraph (see section 2.2.5 for the definition). In a hypergraph, edges can contain more than two vertices and are called hyperedges. A hyperedge can be considered as a subgraph where all nodes are connected to each other, forming a complete subgraph. Since a hyperedge is indecomposable, the DHNE algorithm uses a non-linear tuple-wise similarity function to measure the proximity of the nodes in a hyperedge. For a hyperedge of size 3, the node features are first mapped by transformation matrices W_a, W_b, W_c for the different node types, the transformed vectors are concatenated ([.] denotes vector concatenation), and the result is passed through a non-linear scoring function with a sigmoid σ to produce the proximity value S between the nodes and the hyperedge.

Figure 2.4: The difference between a simple graph (a) and a hypergraph (b)[13].

The DHNE algorithm captures both the first-order proximity between nodes and a hyperedge and the second-order proximity of nodes in the graph. The second-order proximity is learned by an auto-encoder: the process first learns node embeddings and then tries to reconstruct the hypergraph using the generated node embeddings. Therefore, DHNE can capture complex characteristics and deep semantic structure, not only for heterogeneous graphs but also for hypergraphs. This method is used in the experiment section for comparison (see section 4.2).
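The sketch below illustrates a tuple-wise similarity of this kind: type-specific transforms, concatenation, and a non-linear score. The layer sizes, the extra output projection and the parameter names are assumptions for illustration and do not reproduce DHNE's exact architecture.

import numpy as np

rng = np.random.default_rng(1)
d_in, d_hid = 8, 16

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def tuplewise_similarity(x_a, x_b, x_c, W_a, W_b, W_c, w_out):
    """Score a size-3 hyperedge (a, b, c) with type-specific transforms."""
    concat = np.concatenate([W_a @ x_a, W_b @ x_b, W_c @ x_c])   # [W_a x_a, W_b x_b, W_c x_c]
    return sigmoid(w_out @ np.tanh(concat))                       # scalar proximity S in (0, 1)

x_a, x_b, x_c = rng.normal(size=(3, d_in))
W_a, W_b, W_c = rng.normal(size=(3, d_hid, d_in)) * 0.1
w_out = rng.normal(size=3 * d_hid) * 0.1
print(tuplewise_similarity(x_a, x_b, x_c, W_a, W_b, W_c, w_out))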

2.2.5 Hypergraph and Hypergraph convolution

a, Hypergraph definition

A hypergraph is a graph whose hyperedges connect two or more vertices. A hyperedge is an arbitrary non-empty subset of the vertices[44], and a hyperedge is undirected. Instead of having edges between pairs of vertices as in simple graphs, hypergraphs have edges that connect sets of two or more vertices. The difference between a simple graph and a hypergraph is illustrated in figure 2.4. In a simple graph, each edge connects two vertices and is denoted by a black line. In a hypergraph, each edge is represented by an ellipse connecting more than two nodes.

Hypergraphs have many examples and applications in the real world. For instance, consider Facebook, where nodes represent people. Each user belongs to zero or more groups, and each group can be modelled by a hyperedge connecting multiple users. Other real-world social networks can also be structured as hypergraphs: people are vertices, and different relationships between people, such as being in the same community or having the same interests, are modelled by different hyperedges. A hyperedge like a community can contain an arbitrary number of users, and conversely, a user can belong to an arbitrary number of hyperedges.

We also define the concept of a heterogeneous hypergraph, in which there is more than one type of node and there are different types of hyperedges. For example, in modelling the friendships and checkins of people in social networks such as in the LBSN task, there are four types of nodes: user, time, location and category, which together form a checkin. Hence, a hyperedge is necessary to represent a checkin. Different types of relationships between users can also be modelled by different types of hyperedges, like friendship, being in the same community or having the same interests.
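To make this construction tangible, the sketch below builds the incidence structure of such a heterogeneous hypergraph from a toy set of checkins and friendships; the node labels, their ordering and the dense matrix are illustrative assumptions (the thesis's actual construction is specified in section 3.3).

import numpy as np

# Toy LBSN records: checkins are (user, time, POI, category) hyperedges,
# friendships are (user, user) hyperedges. Labels encode the node type implicitly.
checkins = [("u1", "t1", "p1", "food"), ("u2", "t1", "p1", "food")]
friendships = [("u1", "u2")]

hyperedges = [set(c) for c in checkins] + [set(f) for f in friendships]
nodes = sorted({v for e in hyperedges for v in e})
index = {v: i for i, v in enumerate(nodes)}

# Incidence matrix H: H[v, e] = 1 iff node v belongs to hyperedge e.
H = np.zeros((len(nodes), len(hyperedges)))
for e_id, edge in enumerate(hyperedges):
    for v in edge:
        H[index[v], e_id] = 1.0

print(H.shape)  # (number of nodes, number of hyperedges)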

b, Hypergraph convolution

Most existing variants of GNNs assume pairwise relationships between objects; hypergraph convolution, however, operates on a high-order hypergraph where the relationships between objects go beyond pairwise. In the work of Bai et al.[13], the authors define two differentiable operators, hypergraph convolution and hypergraph attention, which are intuitive and flexible in learning more discriminative deep embeddings. In this work, since we are not using hypergraph attention, we present the primary concept of hypergraph convolution only.

Consider a hypergraph with N vertices and M hyperedges. Each hyperedge is assigned a weight, and all the weights form a diagonal matrix W ∈ R^{M×M}. Different from a classical graph, where an adjacency matrix is defined, in a hypergraph we instead use the incidence matrix H ∈ R^{N×M}. The incidence matrix indicates which node belongs to which hyperedge: specifically, if the hyperedge e contains node v then H_ve = 1, otherwise H_ve = 0.

The vertex degree matrix D ∈ R^{N×N} and the hyperedge degree matrix B ∈ R^{M×M} are the diagonal matrices defined as:

D_vv = Σ_{e=1..M} W_ee H_ve,    B_ee = Σ_{v=1..N} H_ve.
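Using these definitions, the following sketch computes D and B from an incidence matrix and applies one symmetric-normalized hypergraph convolution step, consistent with the operator of Bai et al. [13]; the toy inputs and the ReLU non-linearity are placeholders, and the thesis's exact layer may differ.

import numpy as np

rng = np.random.default_rng(2)
d_in, d_out = 8, 4

H = np.array([
    [1, 0, 1],
    [1, 1, 0],
    [0, 1, 1],
    [1, 0, 0],
    [0, 1, 0],
    [0, 0, 1],
], dtype=float)                                    # incidence matrix: 6 nodes, 3 hyperedges
W = np.diag(np.array([1.0, 0.5, 2.0]))             # hyperedge weights (diagonal)
X = rng.normal(size=(H.shape[0], d_in))            # node features
Theta = rng.normal(size=(d_in, d_out)) * 0.1       # learnable layer weights

D = np.diag(H @ np.diag(W))                        # D_vv = sum_e W_ee * H_ve
B = np.diag(H.sum(axis=0))                         # B_ee = sum_v H_ve

D_inv_sqrt = np.diag(1.0 / np.sqrt(np.diag(D)))
B_inv = np.diag(1.0 / np.diag(B))

# One hypergraph convolution step: X' = ReLU(D^-1/2 H W B^-1 H^T D^-1/2 X Theta)
X_next = np.maximum(0.0, D_inv_sqrt @ H @ W @ B_inv @ H.T @ D_inv_sqrt @ X @ Theta)
print(X_next.shape)  # (6, 4)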


REFERENCES
[1] P. Kefalas, P. Symeonidis, and Y. Manolopoulos, "A graph-based taxonomy of recommendation algorithms and systems in LBSNs," IEEE Transactions on Knowledge and Data Engineering, vol. 28, no. 3, pp. 604–622, 2015.
[2] E. Cho, S. A. Myers, and J. Leskovec, "Friendship and mobility: User movement in location-based social networks," in Proceedings of the 17th ACM SIGKDD international conference on Knowledge discovery and data mining, 2011, pp. 1082–1090.
[3] S. Scellato, A. Noulas, and C. Mascolo, "Exploiting place features in link prediction on location-based social networks," in KDD, 2011, pp. 1046–1054.
[4] D. Yang, B. Qu, J. Yang, and P. Cudré-Mauroux, "LBSN2Vec++: Heterogeneous hypergraph embedding for location-based social networks," IEEE Transactions on Knowledge and Data Engineering, 2020.
[5] D. Wang, D. Pedreschi, C. Song, F. Giannotti, and A.-L. Barabási, "Human mobility, social ties, and link prediction," in Proceedings of the 17th ACM SIGKDD international conference on Knowledge discovery and data mining, 2011, pp. 1100–1108.
[6] L. Katz, "A new status index derived from sociometric analysis," Psychometrika, vol. 18, no. 1, pp. 39–43, 1953.
[7] D. Yang, B. Qu, J. Yang, and P. Cudré-Mauroux, "Revisiting user mobility and social relationships in LBSNs: A hypergraph embedding approach," in The World Wide Web Conference, 2019, pp. 2147–2157.
[8] M. Ou, P. Cui, J. Pei, Z. Zhang, and W. Zhu, "Asymmetric transitivity preserving graph embedding," in Proceedings of the 22nd ACM SIGKDD international conference on Knowledge discovery and data mining, 2016, pp. 1105–1114.
[9] J. Qiu, Y. Dong, H. Ma, J. Li, K. Wang, and J. Tang, "Network embedding as matrix factorization: Unifying DeepWalk, LINE, PTE, and node2vec," in Proceedings of the eleventh ACM international conference on web search and data mining, 2018, pp. 459–467.
[10] B. Perozzi, R. Al-Rfou, and S. Skiena, "DeepWalk: Online learning of social representations," in Proceedings of the 20th ACM SIGKDD international conference on Knowledge discovery and data mining, 2014, pp. 701–710.
[11] T. N. Kipf and M. Welling, "Semi-supervised classification with graph convolutional networks," arXiv preprint arXiv:1609.02907, 2016.
[12] W. L. Hamilton, R. Ying, and J. Leskovec, "Inductive representation learning on large graphs," arXiv preprint arXiv:1706.02216, 2017.
[13] S. Bai, F. Zhang, and P. H. Torr, "Hypergraph convolution and hypergraph attention," Pattern Recognition, vol. 110, p. 107637, 2021.
[14] Y. Zheng, "Location-based social networks: Users," in Computing with spatial trajectories, Springer, 2011, pp. 243–276.
[15] Q. V. H. Nguyen, K. Zheng, M. Weidlich, et al., "What-if analysis with conflicting goals: Recommending data ranges for exploration," in ICDE, 2018, pp. 89–100.
[16] F. K. Glückstad, "Terminological ontology and cognitive processes in translation," in Proceedings of the 24th Pacific Asia conference on language, information and computation, 2010, pp. 629–636.
[17] A. Sadilek, H. Kautz, and J. P. Bigham, "Finding your friends and following them to where you are," in Proceedings of the fifth ACM international conference on Web search and data mining, 2012, pp. 723–732.
[18] C. Song, Z. Qu, N. Blumm, and A.-L. Barabási, "Limits of predictability in human mobility," Science, vol. 327, no. 5968, pp. 1018–1021, 2010.
[19] L. Backstrom and J. Kleinberg, "Romantic partnerships and the dispersion of social ties: A network analysis of relationship status on Facebook," in CSCW, 2014, pp. 831–841.
[20] A. Grover and J. Leskovec, "node2vec: Scalable feature learning for networks," in KDD, 2016, pp. 855–864.
