Abstract

The present paper introduces a hybrid technique to measure the expertise of users by analyzing their profiles and activities in social networks. Currently, both job seekers and talent hunters are looking for new and innovative techniques to filter jobs and candidates, while candidates try to make their profiles more attractive. In this sense, the Skillrank approach is based on the conjunction of existing and well-known information and expertise retrieval techniques that fit the existing web and social media environment, delivering an intelligent component that integrates the user context into the analysis of skills confidence. A major outcome of this approach is that it takes advantage of data and information already available on the web to produce both a ranked list of experts in a field and a confidence value for every professional skill. Thus, expertise and experts can be detected, verified, and ranked using a suitable trust metric. An experiment to validate the Skillrank technique based on precision and recall metrics is also presented using two different datasets: (1) an ad hoc dataset created using real data from a professional social network and (2) real data extracted from the LinkedIn API.

1. Introduction

In recent years, social network research has been carried out using data collected from online interactions and from explicit relationship links in online social network platforms such as Facebook and LinkedIn [1]. Among these tasks, expert and people search is one of the most challenging tasks that can be attempted in social networks [2, 3].

Expertise represents the skill of answering questions or conducting activities [4]. Thus, the focus of expertise location is finding an answer, a solution, or a person with whom details of a problem can be discussed [5] or a task can be performed [6]. In other words, expert finding addresses the task of identifying the right person with the appropriate skills and knowledge. Effective management of expertise can benefit both organizations and individuals by easing access to knowledge, as well as its sharing and application [7].

In this light, expert finding involves two main aspects: expertise identification (“Who are the experts on Topic X?”) and expertise selection (“What does Expert Y know?”) [8]. In the latter aspect, expert profiling turns the expert-finding task around and asks the following: what topic(s) does a person know about? [9]. Topics such as expertise relevance and authority within a community have been pointed out as some of the factors to assess an expert’s competence [10, 11]. Given that complete and accurate expert profiles enable people and search engines to effectively and efficiently locate the most appropriate experts for an information need [9], this paper presents an expert profiling approach to analyze experts’ skills confidence by means of hybrid soft computing techniques. One of the main advantages of Skillrank is the use of LinkedIn as a source of expertise. LinkedIn is likely the most notable example of a business-oriented social networking site. The company was founded in December 2002 and launched six months later. As of December 2014, LinkedIn reported more than 330 million users in 200 countries and territories. In this professional social networking site, users can track and publish their career paths, skills and past experiences, the size and tenure of the teams with whom they have worked, and the roles they played on each team [12]. LinkedIn users self-report their expertise and ask members of their social network to provide positive references or recommendations for them [13]. Although LinkedIn has been used in the literature for expertise search [14, 15], to the best of the authors’ knowledge, there is no study devoted to the application of self-disclosure and social network integrators to assess the quality and confidence of professional skills in this network.

On the other hand, a good number of techniques have been designed to exploit the information available in social networks and, in general, to address problems that contain an implicit graph. The well-known PageRank algorithm [16] by Google Inc. was developed to assign a measure of importance to each web page. This algorithm works by counting the number and quality of links to a page to determine a rough estimate of how important a website is; the underlying assumption is that more important websites are likely to receive more links from other websites. In the same way, the HITS (Hyperlink-Induced Topic Search) algorithm [17], also known as “hubs and authorities,” is a link analysis technique that also rates web pages. It was designed by Kleinberg from the Department of Computer Science at Cornell, and the idea behind hubs and authorities stemmed from a particular insight into the creation of web pages when the Internet was originally forming: some web pages serve as hubs that compile large directories of web pages. These directories are not actually authoritative on some topic, but a good hub is a page that points to many other pages, and a good authority page is expected to be linked to by many hubs. The main restriction of the HITS algorithm lies in its applicability, since it only operates on a small subgraph. This subgraph is query dependent: whenever the search contains a different query phrase, the seed changes, and the HITS algorithm ranks the seed nodes according to their authority and hub weights. The SPEAR (Spamming-resistant Expertise Analysis and Ranking) algorithm [18] is another tool for ranking users in social networks by their expertise and influence within a community. It is also a graph-based technique that measures the expertise of users by analyzing their activities and interactions. The main idea behind this technique lies in the ability of users to find new and high-quality information on the Internet. This algorithm is an extension of the aforementioned HITS algorithm including two main elements: (1) mutual reinforcement of user expertise and document quality and (2) discoverers versus followers. The combination of both elements has been demonstrated to reward quality over quantity of user activities, which is why it has also been applied to detect spam attacks [19]. Although graph analysis techniques [20] have been widely used to study social networks (e.g., trend detection, opinion mining, sentiment analysis, information retrieval, etc.) and, in most cases, the PageRank algorithm can be seen as a precursor of this kind of approach, there is still a lack of techniques that deal with quality over quantity. In this sense, the SPEAR algorithm offers a technique that can be applied to a rather wide range of domains, such as the assessment of skills quality. In the context of graph-based algorithms for expertise ranking, the ExpertRank algorithm [10] proposes a novel technique to evaluate the expertise of users based on both document-based relevance and one’s authority in his or her knowledge community. The authors modified the PageRank algorithm to evaluate one’s authority so that it reduces the effect of certain biasing communication behaviors in online communities. As an important cornerstone relevant to this work, they explored three different expert ranking strategies that combine document-based relevance and authority: linear combination, cascade ranking, and multiplication scaling.
This evaluation has been done using a popular online knowledge community, showing that the proposed algorithm achieves the best performance when both document-based relevance and authority are considered.
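To make the link-counting intuition behind PageRank concrete, the following is a minimal power-iteration sketch in Python; the toy graph, function name, and parameter defaults are illustrative assumptions of this sketch, not taken from the cited papers.

import_free_sketch = """PageRank power iteration"""

def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = set(links) | {p for targets in links.values() for p in targets}
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        # Each page starts with the random-jump share of the score.
        new_rank = {p: (1.0 - damping) / len(pages) for p in pages}
        for page, targets in links.items():
            if targets:
                # A page passes its score to its targets, split by out-degree.
                share = damping * rank[page] / len(targets)
                for target in targets:
                    new_rank[target] += share
        rank = new_rank
    return rank

# Toy example: page "c" is linked from both "a" and "b", so it ranks highest.
# (Dangling pages are ignored for brevity, so scores only approximately sum to 1.)
print(pagerank({"a": ["c"], "b": ["a", "c"], "c": []}))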

In this paper, a reinterpretation and extension of the SPEAR algorithm, called Skillrank, is presented. Furthermore, the evaluation of the presented approach is carried out by comparing existing approaches for expertise ranking, such as the HITS and SPEAR algorithms, to the proposed technique when tests are executed on two datasets extracted from the LinkedIn API. To do so, a panel of experts established a set of expected results and values that are compared to the real results provided by each algorithm with the aim of obtaining measures of precision and recall.

The remainder of this paper is organized as follows. Section 2 presents the related literature. The proposed approach for skill ranking is illustrated in Section 3. In Section 4, experiment evaluations are conducted to compare our approach with other methods. Section 5 presents main conclusions and future research directions.

2. State of the Art

Expert and credibility finding is not a new issue in the literature. As a result, the literature reports a large body of work on the topic and has even produced relevant surveys, for example, [21, 22]. Methodologies for expert finding can be divided into three categories [4, 7]: content-based approaches, network-based approaches, and hybrid approaches. On the other hand, other works [10] propose a different taxonomy of existing expert finding systems. These authors indicate that such systems are based on four kinds of expertise indicators: self-disclosed information, authored documents, social network analysis, and hybrid techniques [23]. An analysis performed by these authors reveals that hybrid techniques do not combine self-disclosure indicators with social network analysis or document-based indicators. In other words, the authors underline that self-disclosure indicators can be seen as isolated indicators that need deeper analysis.

In [9], the authors make an in-depth review of benchmarking techniques and of the components that constitute a test collection, with special emphasis on error analysis. They also give an overview of different test collections for expert profiling and expert finding. In [24], the author reviews more recent examinations of the validity of the test collection approach and its evaluation measures, outlines trends in current research exploiting query logs and live labs, and finally shows that, despite its age, this long-standing evaluation method is still a highly valued tool for retrieval research. Furthermore, the Text REtrieval Conference (TREC) and the Yandex Personalized Web Search Challenge also fall in this area of methods for assessing ad hoc datasets through training and testing processes on different topics. For instance, in 2008, the main topic of the TREC conference was focused on expert finding, where a dataset from Tilburg University (http://ilk.uvt.nl/uvt-expert-collection/documentation/documentation.html) was used as input to a competition to find and rank experts. Usually these evaluation methodologies assert the results of an information retrieval process by comparing expected results to actual results, taking into account measures [25] of precision, recall, sensitivity, or stability. On the other hand, relevance assessments are usually created by a panel of experts, and a good number of collections can be found in different domains such as web search, movie/tourism recommendation, medical diagnosis, and so forth. Finally, statistical significance tests are used to estimate whether the average system performance on a set of queries can be generalized. In this sense, the Wilcoxon test, Student’s t-test, and the Fisher pairwise comparison are common techniques to assess a value under certain degrees of freedom. All these techniques have been reviewed in [9], and they are relevant to this work because a validation of the expected results must be done to assess whether the Skillrank algorithm is working properly. To do so, the validation section introduces the evaluation method, which combines an ad hoc collection built by a panel of experts with measures of precision and recall.

Regarding online skills evaluation, in [26] a technique is introduced to establish a credibility rank (also called Skillrank) for online profiles based on users’ confirmations and six requirements for online skills evaluation. According to these six requirements, a credibility model is defined and populated from online profiles. Afterwards, a pilot is implemented to show the functional architecture that supports the online evaluation of skills. Nevertheless, the real evaluation of skills and online profiles is still an open issue, and only the architecture is presented. The authors also comment on some of the limitations of their approach: (1) skills are evaluated as they are, and a scale will be necessary to establish an order of expertise, and (2) the spread of experience and skills is also under evaluation. On the other hand, they also raise some relevant questions in the evaluation of online profiles: what kind of information can be incorporated to enrich the skill evaluation model? And how can half-time jobs, activities, or tasks also be included in the evaluation model? This work is very closely related to the approach presented in this paper. However, they have focused on the definition of a skill model and a pilot architecture instead of comparing different algorithms working on the same datasets. Other recent works [27] can also be found applying gamification techniques to build online personal skills and boost the learning process. Thus, it is possible to ensure the acquisition of skills from the early stages of learning by analyzing the online behavior and interactions [28] between peers. Finally, other works are also paying attention to the evaluation of online profiles with different purposes such as digital inclusion. As an example, the authors in [29] theorize how people’s online social networking skills may condition their uses of various digital media for communication.

On the other hand, reputation management systems have emerged to understand the influence of individuals or groups within a certain community. Different metrics with particular levels of effectiveness [30] are applied to assess reputation in a social network, mailing list, or any other collaborative site. For instance, the Stack Overflow system, a question and answer system [31], uses a simple formula to establish a level of “karma” for each individual depending on their participation in the system. The ResearchGate site, a social network for science and research, also establishes a score depending on the publications and contributions that users have added to the site. In this case, reputation is passed from researcher to researcher, allowing users to build and leverage their reputation based on anything they choose to contribute. Interactions or activities in this social network determine the score by looking at the activities (how peers receive them) but also at who these peers are. Higher scores are reached when peers with higher scores interact with one’s activities; this can be seen as an application of the Spreading Activation technique [32], which has been widely used in information/document retrieval and recommender systems [33].

The idea behind all these reputation management systems lies in a set of internal metrics (usually kept private to avoid fraudulent profiles) that are aggregated into a single value to create a ranking of users by tracking their activity: asking and answering questions, using online feedback from other users (how much better is one response than another?), and so forth. These systems are also relevant to professional social network sites such as LinkedIn, ResearchGate, or Xing, in which users try to complete their profiles as fully as possible, adding their education, professional experience, rewards, publications, and so forth, as well as feedback from their connections, to improve and enrich their profiles. In this sense, talent hunters have gained a new way of detecting specialists in a topic by performing advanced searches through tools on these websites. Nevertheless, access to this valuable information is commonly restricted, and only quantitative information can be found. In this context, some works have emerged on topic extraction systems and online reputation management [34] which use a set of evaluation metrics based on handmade metadata annotation to assess the quality of different factors. Thus, trust and provenance analysis is becoming a major challenge to avoid vandalism, fraud, and so forth in public profiles, more specifically in user and company profiles. Existing works, for example, [35], are then focused on applying techniques to characterize profiles in reputation management systems, demonstrating that algorithms such as EigenTrust, TNA-SL, or distributed approaches [8] are secure enough to effectively manage trust in communities.

In the field of information retrieval, expertise retrieval [36] and expert finding [37] systems have been widely studied to provide a new approach to tackling the discovery of experts in some area [15]. Currently, practices such as competitions [38] or challenges are common techniques to retrieve experts based on their performance in a certain topic. The main objective of expertise retrieval [36] lies in the application of existing information retrieval techniques as building blocks to design advanced algorithms that can serve to create content-based links between topics and people. Nevertheless, authors have also outlined applications to other domains, such as entity retrieval, as well as some conjectures on what the future may hold for expertise retrieval research. For instance, novel approaches [39] have recently emerged that include time and the evolution of skills as variables in expertise retrieval processes.

As a field closely related to expertise retrieval, community detection [40] in social networks is a widely active research area that segments large communities by certain criteria. In the specific case of expertise in some topic, these algorithms can be applied to narrow down the search for experts. These techniques can be roughly divided into two groups: (1) global and (2) local approaches. The first assume knowledge of the entire network, while local ones only assume knowledge of subcommunities featured by some attributes such as location. Global detection algorithms were first proposed by Girvan and Newman and work by iteratively removing edges until the social graph is partitioned (each partition can be considered a community). The key point of these techniques lies in the selection of the edge to be removed and, in general, some metric such as betweenness centrality is calculated for each edge. Thus, a large social network is divided into densely connected communities that share some features and, therefore, can be considered subcommunities. The main drawback of global approaches lies in the necessity of knowing the full graph (which is usually expensive in terms of time and size). In order to decrease the complexity of handling a large graph, local approaches aim to detect communities in a more scalable and applicable way, starting with a set of seed nodes to detect implicit communities. For instance, Clauset’s algorithm uses intracommunity and intercommunity measures to iteratively establish or remove the connection between two nodes. Thus, from a starting node, communities dynamically emerge. Although these approaches are perfectly adequate for detecting underlying communities, the use or inference of dynamic attributes of nodes (users in most cases) is still an open issue that has been studied in some works such as [41], and it allows a better and more accurate community partition. As a possible application of these techniques, the detection of violent communities, hostility, or rivalry is currently under study [42], since the Internet is understood to be a social space conducive to increased hostility, greater disinhibition, and increased social freedom. Moreover, these authors see a link between virtual hostility and actual violence. In social networks research, [43] predicts user personality by mining social interactions, including aggression-hostility traits, and [44] modeled online social interactions incorporating the effects of hostile interactions.

Finally, the relevance of expertise ranking in social networks and the Internet has been presented in works that understand and exploit enterprise know-how [45], find competence gaps and learning needs inside corporations [46], improve Scrum processes [47], improve human-to-human interactions [48], or tackle information asymmetries in electronic marketplaces [49], to name a few.

In conclusion, a list of methods and techniques for benchmarking has been introduced with the aim of comparing existing approaches to assess personal skills quality. Furthermore, existing work on reputation management systems and, more specifically, trust and provenance analysis can also be applied to the Skillrank technique, since methods to evaluate public profiles are an emerging topic given the current use of the web. That is why the Skillrank technique seeks to provide an innovative method to assess user profiles from a qualitative point of view through an agnostic technique that can help both talent hunters and managers to know exactly where a capability or skill can be found among their connections or employees, with a certain degree of trust and provenance.

3. Skillrank: Reinterpreting the SPEAR Algorithm to Assess Skills Quality in Professional Social Networks

3.1. Summary of the HITS and SPEAR Algorithms

As the previous section has introduced, the HITS algorithm [50] identifies good authorities and hubs for a certain topic by assigning two numbers to a page: an authority weight and a hub weight, where the weights are recursively defined. A higher authority weight occurs if the page is pointed to by pages with high hub weights. A higher hub weight occurs if the page points to many pages with high authority weights. More specifically, in the context of web search, the HITS algorithm first collects a base document set B for each query. After that, it recursively calculates the hub and authority values for each document. In order to gather the base document set B, first, a root set R matching the query is fetched from the search engine. Once this root set is configured, for each document d ∈ R, the set of documents that point to d and the set of documents that are pointed to by d are added to B as d’s neighborhood. Then, for each document d ∈ B, let a(d) and h(d) be the authority and hub values, respectively, both initialized to 1. While the values have not converged, the algorithm iteratively proceeds as follows. (1) For all documents d′ that point to d, a(d) = Σ h(d′). (2) For all documents d′ that d points to, h(d) = Σ a(d′). (3) Normalize the a and h values so that Σ_d a(d)² = Σ_d h(d)² = 1.

A good hub increases the authority weight of the pages it points to. A good authority increases the hub weight of the pages that point to it. The idea is then to apply the two operations above alternately until equilibrium values for the hub and authority weights are reached. The author also demonstrated that the algorithm will likely converge, but the bound on the number of iterations is unknown (in practice the algorithm converges quickly). New, improved versions of this algorithm have emerged, such as BHITS, which gives a document a default authority weight of 1/k if the document belongs to a group of k documents on a first host which link to a single document on a second host, and a default hub weight of 1/l if there are links from a document on a first host to a set of l documents on a second host. Nevertheless, according to its authors, this new version of the algorithm generated bad results when a root link had few in-links but a large number of out-links that were not relevant to the query.
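As an illustration of the three iterative steps above, a compact Python sketch of the HITS update loop might look as follows; it assumes the base document set has already been collected, and the dictionary-based graph encoding is an assumption of this sketch.

import math

def hits(out_links, iterations=50):
    """out_links: dict mapping each document to the documents it points to."""
    docs = set(out_links) | {d for ts in out_links.values() for d in ts}
    auth = {d: 1.0 for d in docs}
    hub = {d: 1.0 for d in docs}
    for _ in range(iterations):
        # (1) A document's authority is the sum of the hub scores of documents pointing to it.
        new_auth = {d: 0.0 for d in docs}
        for d, targets in out_links.items():
            for t in targets:
                new_auth[t] += hub[d]
        # (2) A document's hub score is the sum of the authority scores it points to.
        new_hub = {d: sum(new_auth[t] for t in out_links.get(d, [])) for d in docs}
        # (3) Normalize so that the squared scores sum to 1.
        a_norm = math.sqrt(sum(v * v for v in new_auth.values())) or 1.0
        h_norm = math.sqrt(sum(v * v for v in new_hub.values())) or 1.0
        auth = {d: v / a_norm for d, v in new_auth.items()}
        hub = {d: v / h_norm for d, v in new_hub.items()}
    return auth, hub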

On the other hand, the SPEAR algorithm [18, 19] builds on the HITS definition and introduces the concept of an expert: someone with a high level of knowledge, technique, or skill in a particular domain. This implies that experts are reliable sources of relevant resources and information, under two main assumptions. (1) Mutual reinforcement of user expertise and document quality. The expertise of a user in a particular domain depends on the quality of the documents he/she has found. In the same way, the quality of documents depends on the expertise of the users who have found them. This issue has been studied in psychology, where it is stated that expertise involves the ability to select the best and most relevant information in a certain context. The SPEAR algorithm is based on this assumption, and an expert should be someone who selects by quality instead of quantity. (2) Discoverers versus followers. The second assumption of the SPEAR algorithm lies in the distinction between a discoverer (an expert user who finds high-quality and relevant information) and a follower (a user who annotates a document after a discoverer does). Under the aforementioned assumptions, the SPEAR algorithm produces a ranking of users with regard to a set of one or more tags. It assumes that a topic of interest is represented by a tag t. The algorithm works as follows [18, 19]. (i) First, the set of tag assignments T is extracted from an underlying folksonomy in a certain social network. Each tag assignment is represented by the tuple (u, t, d, ts), where u is the user, ts is the time when the tag t was assigned to the document d, and ts_1 < ts_2 if ts_1 refers to an earlier time than ts_2. (ii) Then, the following vectors are defined: (a) a vector E of size M containing the expertise scores of users, where M is the number of unique users in T; (b) a vector Q of size N containing the quality scores of documents, where N is the number of unique documents in T. (iii) According to the first assumption, mutual reinforcement refers to the idea that the expertise score of a user depends on the quality scores of the documents he tags with t, and the quality score of a document depends on the expertise scores of the users who assign tag t to it. The authors define an adjacency matrix A of size M × N where A_{i,j} = 1 if user u_i has annotated document d_j with the tag t, and A_{i,j} = 0 otherwise. Based on this matrix, the calculation of expertise and quality scores is an iterative process similar to that of the HITS algorithm: E = Q × A^T and Q = E × A. (iv) On the other hand, the second assumption is implemented by changing the definition of the aforementioned adjacency matrix. Instead of assigning either 0 or 1 (like the HITS algorithm), the following rule is used to populate the initial values of the matrix A. (a) A_{i,j} = 1 + (number of users who have assigned tag t to document d_j after user u_i). (b) Thus, the cell A_{i,j} is equal to 1 plus the number of users who have assigned tag t to document d_j after user u_i. Hence, if u_i is the first to assign t to d_j, A_{i,j} will be equal to the total number of users who have assigned t to d_j. If u_i is the most recent user to assign t to d_j, A_{i,j} will be equal to 1. The effect of such initialization is that matrix A represents a sorted timeline of all users who tagged a given document d_j. (v) The last step is to assign a proper credit score to users by applying a credit scoring function C to each element A_{i,j}. According to the authors, different functions could be applied to the matrix A. (a) A linear credit score C(x) = x. This function was initially discarded by the authors because discoverers of a popular document would receive a comparatively higher expertise score although they might not have contributed to any other document thereafter. (b) An increasing function with a decreasing first derivative, to retain the ordering of the scores in A. The authors demonstrated that this kind of function keeps discoverers’ scores higher than followers’ while reducing the differences between the highest scores, avoiding the undesirable effect of assigning high expertise scores to users who were the first to tag a small set of popular documents but made no further contribution to high-quality documents thereafter. Finally, the authors selected the function C(x) = √x as the credit score for their experiments.
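The discoverer-versus-follower initialization and the credit scoring step can be sketched in Python as follows; the data layout (a list of (user, document, timestamp) tuples for a single tag) and the sparse-dictionary representation are assumptions of this sketch, with C(x) = √x as described above.

import math

def spear_adjacency(tags, credit=math.sqrt):
    """tags: list of (user, document, timestamp) tuples for a single tag t.

    Returns a sparse matrix {(user, document): score} where a document's
    discoverer gets credit(1 + number of later taggers) and the most
    recent tagger gets credit(1)."""
    by_doc = {}
    for user, doc, ts in tags:
        by_doc.setdefault(doc, []).append((ts, user))
    matrix = {}
    for doc, taggers in by_doc.items():
        taggers.sort()  # chronological order: the discoverer comes first
        total = len(taggers)
        for position, (_, user) in enumerate(taggers):
            # Cell value = 1 + number of users who tagged this document
            # afterwards, compressed by the credit function.
            matrix[(user, doc)] = credit(total - position)
    return matrix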

3.2. Skillrank in Online Communities

A simplistic definition of an online community or social network is a tuple C = ⟨U, F, DF, R⟩, where U is the set of users that interact with each other, F is a set of static features or attributes, DF is a set of dynamic or inferred attributes that define the community, and R is the set of all resources generated by users. More specifically, a user u ∈ U is also described by a set of attributes A_u = F_u ∪ DF_u, where F_u is the set of static attributes that describe the user profile, usually defined by the user herself. On the other hand, DF_u represents a dynamic set of attributes that can be inferred or predicted by tracking the user’s activity and interaction in the context of the social network C. Furthermore, any social network can be divided into different subcommunities (subgraphs) that are also communities and, by extension, a social network can also be defined as the union of several subcommunities C_i. Formally, let I be an index set with C_i a community for each i ∈ I; then the family of sets {C_i : i ∈ I} forms the union set that represents an online community:

C = ⋃_{i ∈ I} C_i.

Commonly, the sets of users are not disjoint, so a user can be a member of several subcommunities. Nevertheless, the sets of features F, dynamic features DF, and resources R could be shared among subcommunities, but they could also be disjoint sets depending on the characteristics of the social network.

Following these definitions, we can describe a social network such as LinkedIn containing a subcommunity “MyLinkedIn” that can itself be partitioned into several subcommunities such as “MyUniversity” or “MyWork”. According to the theoretical model, (i) LinkedIn = ⟨U, F, DF, R⟩ where (ii) U is the set of all registered users; (iii) F = {type = “professional social network”, name = “LinkedIn”, …} is a set of key-value pairs; (iv) DF is also a set of dynamic key-value pairs at a certain moment in time. (v) On the other hand, we can also define this social network as the union of several disjoint subcommunities. Thus LinkedIn = ⋃_{i ∈ I} C_i where C_i ∈ {MyLinkedIn, MyUniversity, MyWork, …}. (vi) Finally, users are members of some community, so u_1 ∈ U represents a LinkedIn user that creates the subcommunity “MyLinkedIn”. This user and its subcommunity generate resources such as “posts,” “connections,” “endorsements,” and so forth. As an example, r_1 = {type = “endorsement”, time = timestamp, tag = “skill”} describes the resource r_1 in the community “MyLinkedIn” that was created by the user u_1 using an “endorsement” at a certain moment in time on the user u_2.
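Purely as an illustration of the ⟨U, F, DF, R⟩ model, the following Python data classes show one possible encoding; all field names and the example values are assumptions of this sketch, not part of the formal model.

from dataclasses import dataclass, field

@dataclass
class Resource:
    """A resource generated inside a community, e.g., an endorsement."""
    type: str    # e.g. "endorsement"
    source: str  # user who created the resource
    target: str  # user the resource refers to
    tag: str     # e.g. the endorsed skill
    time: int    # timestamp of the action

@dataclass
class Community:
    """C = <U, F, DF, R>: users, static features, dynamic features, resources."""
    users: set = field(default_factory=set)               # U
    features: dict = field(default_factory=dict)          # F, static key-value pairs
    dynamic_features: dict = field(default_factory=dict)  # DF, inferred over time
    resources: list = field(default_factory=list)         # R

# "MyLinkedIn" as a subcommunity with one endorsement resource.
my_linkedin = Community(
    users={"u1", "u2"},
    features={"type": "professional social network", "name": "LinkedIn"},
)
my_linkedin.resources.append(
    Resource(type="endorsement", source="u1", target="u2", tag="java", time=1418000000)
)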

Although communities, users, and resources can be described through different static attributes, there is still a set of dynamic or behavioral features that must be inferred to better describe foreknown communities and to be able to create new intercommunity relationships. Since communities, user endorsements, and so forth are evolving characteristics, it is necessary to analyze emerging or implicit user behaviors [41, 51]. In this sense, the aforementioned community detection algorithms follow a similar approach but study the structure of the social graph instead of analyzing contents.

Here, we propose an adaptation of the SPEAR algorithm to support the quality assessment of endorsements generated by a subcommunity; more specifically, the following contexts can be identified. (i) Community (local context). Figure 1 shows that a user u_1 endorses another user u_t with the skill “java” at time t_1. After that, another user u_2 belonging to the same subcommunity generated by user u_t also uses the same endorsement but at time t_2, where t_1 < t_2. The assumption behind this behavior is that, after seeing the new endorsement (made by u_1), u_2 also realizes that this endorsement is correct and adds the same endorsement to the user u_t again. This situation implies that the first annotation (see discoverer in the SPEAR algorithm) has activated new annotations (see follower in the SPEAR algorithm), reinforcing both (1) the skill “java” in user u_t and (2) the initial annotation of the user u_1. Similarly, if a user u_2 notices that a user u_1 has endorsed another user u_t at time t_1, this can lead to an endorsement of user u_1 for the same skill by user u_2 at time t_2, where t_1 < t_2 (Figure 2). Finally, Figure 3 depicts the situation in which a user u_2 is activated by some activity but, instead of applying the same tag, she uses another tag to annotate knowledge of user u_1 (the one that started the interaction). (ii) Community (global context). Figure 4 shows that a user u_1 endorses another user u_t with the skill “java” at time t_1. After that, another user u_3, outside of the subcommunity (represented by a dashed circle), also uses the same endorsement but at time t_2, where t_1 < t_2, to endorse user u_t. The idea behind this behavior is that, after seeing the new endorsement (made by u_1), u_3 also realizes that this endorsement can be applied to u_t. This situation implies that the first annotation (see discoverer in the SPEAR algorithm) has activated new annotations (see follower in the SPEAR algorithm), reinforcing the skill of the user u_t.

On the other hand, Figures 3 and 5 depict a situation in which a user u_1 assigns a skill to user u_t at time t_1 and, although another user u_2 is activated and assigns another skill, different from the one assigned by u_1, there is not actually a correlation between them, and both assignments can be interpreted as independent endorsements. The Skillrank technique covers the aforementioned scenarios to take advantage of the data delivered by tracking user activities.

Taking into account the inputs required by the SPEAR algorithm, the following vectors are redefined, and the pseudocode of the algorithm is also presented in Listing 1. (i) The set of endorsements S is extracted from the activities generated by a subcommunity in a certain social network. Each endorsement is represented by the tuple (u_s, s, u_t, t), where u_s is the source user that endorses the target user u_t with the skill s at time t. (ii) Then, the following vectors are also redefined: (a) a vector E of size M containing the expertise scores of users, where M is the number of unique users in S; (b) a vector Q of size K containing the quality scores of skills, where K is the number of unique skills in S.

Input: Number of users M
Input: Number of skills K
Input: A set of skill endorsements S
Input: Credit scoring function C (the same as in the standard SPEAR)
Input: Number of iterations k
Output: A list L of users ranked by expertise.
(1)   Set E to be the vector (1, 1, ..., 1) of size M
(2)   Set Q to be the vector (1, 1, ..., 1) of size K
(3)   A ← Generate Adjacency Matrix (S, C)
(4)   for i = 1 to k do
(5)       E ← Q × A^T
(6)       Q ← E × A
(7)       Normalize E
(8)       Normalize Q
(9)   end for
(10) Sort users by their expertise scores in E and skill quality scores in Q
(11) return L

According to these definitions, the only difference between the original version of the SPEAR algorithm and this new version seems to be the naming of elements (“document” by “skill”). Nevertheless, Skillrank provides a two-step process that runs the SPEAR algorithm beforehand with the aim of populating both vectors E and Q with real values. Hence, a new interpretation of the adjacency matrix can be made. Instead of considering the adjacency matrix for the whole social network or folksonomy, each user generates an adjacency matrix in which rows represent connections and columns represent skills (see Table 1). The interpretation of this table is as follows: 0 represents that the user u_i has not yet endorsed with the skill s_j, while other values, such as 2 in cell (u_1, s_1) and 1 in cell (u_2, s_1), represent that user u_1 used the skill before user u_2.
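Combining Listing 1 with the credit-scored adjacency matrix, a minimal NumPy sketch of the Skillrank iteration in Python might look as follows; the data layout, function names, and default parameters are assumptions of this sketch, not a reference implementation.

import math
import numpy as np

def skillrank(endorsements, num_users, num_skills, credit=math.sqrt, iterations=10):
    """endorsements: list of (source_user, skill, target_user, time) tuples,
    with users indexed 0..num_users-1 and skills indexed 0..num_skills-1.

    Returns (expertise, quality): expertise scores per user and quality
    scores per skill, following SPEAR-style mutual reinforcement."""
    # Build the adjacency matrix with discoverer-versus-follower credit:
    # group endorsements of the same (skill, target) pair and sort sources by time.
    A = np.zeros((num_users, num_skills))
    by_key = {}
    for source, skill, target, time in endorsements:
        by_key.setdefault((skill, target), []).append((time, source))
    for (skill, _target), sources in by_key.items():
        sources.sort()  # chronological order: discoverers first
        total = len(sources)
        for position, (_, source) in enumerate(sources):
            # 1 + number of later endorsers, compressed by the credit function
            A[source, skill] += credit(total - position)

    expertise = np.ones(num_users)
    quality = np.ones(num_skills)
    for _ in range(iterations):
        expertise = A @ quality    # E = Q x A^T in the row-vector notation above
        quality = A.T @ expertise  # Q = E x A in the row-vector notation above
        expertise /= np.linalg.norm(expertise) or 1.0
        quality /= np.linalg.norm(quality) or 1.0
    return expertise, quality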

On the other hand, an analysis of the temporal and spatial complexity of the algorithm can be carried out to show the computational complexity of the technique depicted in Listing 1. First, regarding the spatial complexity, the algorithm makes use of a set of skills S, which contains all skill endorsements registered for a community C, the aforementioned vectors E and Q, and a matrix A of dimensions M × K, with M the number of users in a community C and K the number of unique skills in C. Second, regarding the temporal complexity, the standard SPEAR algorithm performs k iterations, each containing the main operation (matrix multiplication) two times, an operation to transpose a matrix, and two vector normalizations. Assuming that the adjacency matrix A has dimension M × K, the computational complexity of the algorithm can be expressed in Big-O notation as O(k · M · K).

4. The Case Study

To illustrate the performance, in terms of precision and recall, of the presented algorithms, HITS and SPEAR, with regard to the adaptation designed in Skillrank, a case study using different datasets is provided. Here, the evaluation of performance is not a trivial question, since there is a lack of real datasets containing the required information on users, connections, skills, and time. To mitigate this problem, a synthetic dataset has been designed as the basis of the experiment after collecting real data from the LinkedIn API (currently, this API provides access to valuable but incomplete information that must be completed by the users themselves). Thus, simulated communities, users, skills, and endorsements are generated to study the behavior of the different approaches. To carry out both experiments, the following steps have been performed. (1) Select and prepare the dataset. For every dataset to be evaluated, a set of tuples in the form (u_s, s, u_t, t) must be provided. (2) Create a dataset for unit testing purposes. To do so, a panel of experts has established a category, using an official competence scale [52], for every user and skill. (3) Define precision and recall. In order to calculate both measures, the following definitions are also required (given a user): (i) true positives (tp): “number of skills that were expected to reach a certain level of quality and did”; (ii) false positives (fp): “number of skills that have reached a different level of quality”; (iii) true negatives (tn): “number of skills that were not expected to reach a certain level of quality and did not”; (iv) false negatives (fn): “number of skills that were expected to reach a certain level of quality but did not.” Once we have the aforementioned definitions, precision, recall, and the F1 score can be defined and calculated as follows (see the sketch after this list). (i) Precision is defined as “the number of user skills that have reached the proper level of quality established by the panel of experts”: precision = tp/(tp + fp). (ii) Recall is defined as “the number of user skills that have not reached the proper level of quality established by the panel of experts”: recall = tp/(tp + fn). (iii) The F1 score is then defined as F1 = 2 · (precision · recall)/(precision + recall). (4) Include the frequency as a baseline technique for each user and skill. Given a user u and a skill s, the quality of the skill is calculated as the number of endorsements of s received by u: quality(u, s) = |{(u_s, s, u, t) ∈ S}|. (5) Run the experiment for every dataset and technique. (6) Configure all techniques with default parameters (credit score function, etc.) as the previous section has presented. (7) Extract measures of precision, recall, and F1 by comparing expected results to real results.
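A minimal sketch of the per-user computation of precision, recall, and F1 follows, treating expected and obtained (skill, level) pairs as sets; this set-based encoding is an assumption of the sketch.

def precision_recall_f1(expected, obtained):
    """expected/obtained: sets of (skill, level) pairs for one user."""
    tp = len(expected & obtained)  # skills at the expected quality level
    fp = len(obtained - expected)  # skills at a different quality level
    fn = len(expected - obtained)  # expected levels that were not reached
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Example: one skill at the expected level, one at a different level.
print(precision_recall_f1({("java", 4), ("scrum", 2)}, {("java", 4), ("scrum", 3)}))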

4.1. Design of the Experiment

The first step to run the experiments lies in the proper creation of a synthetic dataset inspired by real data extracted from the LinkedIn API. To do so, a community must be modeled including the required input parameters for the target algorithms. Thus, a set of users U containing 10 different profiles has been designed, including an average of between 30 and 50 connections per user (these values have been inferred from the real data). On the other hand, a set of skills S (see Table 2) must also be designed according to the following features: (1) technical, professional, and management skills must be available for each user and (2) all skills must be in at least one profile, but not all profiles contain all skills. Finally, a set of endorsements in the form (u_s, s, u_t, t) is generated for each user, u_s being the user/connection that assigns the skill s to the user u_t at a certain time t. A generator sketch is shown below.
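A generator along these lines could look as follows in Python; the skill names, endorsement probability, and timestamp range are assumptions of this sketch, chosen to be consistent with the averages given above.

import random

def generate_dataset(num_users=10, skills=("java", "scrum", "project management"),
                     min_conn=30, max_conn=50, seed=42):
    """Produce endorsement tuples (source, skill, target, time) for a
    synthetic community: each user gets 30-50 connections, and each
    connection endorses a random subset of the target's skills."""
    rng = random.Random(seed)
    endorsements = []
    for target in range(num_users):
        # Each profile holds at least one skill, but not necessarily all.
        profile = rng.sample(skills, k=rng.randint(1, len(skills)))
        connections = rng.randint(min_conn, max_conn)
        for c in range(connections):
            source = f"conn-{target}-{c}"
            for skill in profile:
                if rng.random() < 0.3:  # not every connection endorses every skill
                    endorsements.append((source, skill, target, rng.randint(0, 10**6)))
    return endorsements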

Once the input dataset is designed, it is necessary to create a dataset containing the expected results with the aim of performing automatic unit testing. To do so, a panel of experts, who had already participated by easing access to their profiles in LinkedIn, has also established for their real connections a level of expertise for each skill in S. Thus, this dataset contains a set of tuples in the form (u, s, l), where u is a user with a level of competence l in the skill s. The different levels of competence have been taken from [52], in which the authors present “The Individual Competency Index (ICI)”; see Table 3.

After the creation of the input and test datasets, the algorithms and unit tests can be executed to finally extract the measures of precision, recall, and F1 and compare the different techniques. The last step involves the creation of a function to convert numerical values into a level of expertise. To do so, a percentile rank for every level of expertise is defined (a sketch is shown after Listing 2). The aforementioned steps have also been followed to perform the same experiment on the LinkedIn dataset. As a final remark, it is relevant to discuss some research limitations that emerged during the creation of both datasets. (i) The use of the LinkedIn API is restricted, and it is not possible to access all the information that is available through the public website. Thus, some relevant information with regard to the skills is missing, such as who has endorsed someone. To overcome this issue, our panel of experts and collaborators were asked to complete this information. (ii) Another issue in the use of the LinkedIn API lies in the lack of a time for each endorsement. This is a critical point, since the algorithms are based on this assumption. To overcome this issue, we have followed two strategies: (1) ask the panel of experts and collaborators to estimate a date on which the endorsements were created and (2) estimate the time of the endorsement by using the join date in the social network. (iii) The LinkedIn API also provides a level of proficiency for each skill. Nevertheless, this feature cannot be used, since it is not available for all skills and is based on a particular taxonomy. (iv) Finally, in order to access all the required information, a URL to query the official LinkedIn REST API (https://developer.linkedin.com/apis) was designed, as shown in Listing 2. Nevertheless, due to the privacy settings of the API, every participant in the experiments was asked to execute this request through the APIgee service (https://apigee.com/console/linkedin) using their own OAuth credentials and to send us the request’s output as XML. Through this request, the user is asked to grant access to their full profile. Then, all public personal information, that is, a profile containing first name, last name, headline, industry, location, number of connections, summary, specialties, positions, associations, honors, interests, publications, patents, languages, skills (id, proficiency, and years), certifications, education, courses, volunteer experience, three current positions, number of recommenders, and connections (and their full profiles), is gathered using the LinkedIn REST API.

https://api.LinkedIn.com/v1/people/~/connections:(id,first-name,last-name,formatted-name,email-address,headline,industry,location,num-connections,summary,specialties,positions,site-standard-profile-request,public-profile-url,api-standard-profile-request,proposal-comments,associations,honors,interests,publications,patents,languages,skills:(id,skill,proficiency,years),certifications,educations,courses,volunteer,three-current-positions,num-recommenders,following,job-bookmarks,date-of-birth,member-url-resources,connections)
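Returning to the score-to-level conversion mentioned above, one possible sketch maps each user’s percentile rank onto competence levels; the number of levels and the equal-width percentile bands are assumptions of this sketch (Table 3 gives the actual ICI scale used).

def scores_to_levels(scores, num_levels=5):
    """scores: dict user -> numeric expertise score.
    Returns dict user -> competence level in 1..num_levels, assigned by
    percentile rank (equal-width bands, an assumption of this sketch)."""
    ranked = sorted(scores, key=scores.get)  # lowest score first
    levels = {}
    for i, user in enumerate(ranked):
        percentile = (i + 1) / len(ranked)
        levels[user] = min(num_levels, 1 + int(percentile * num_levels))
    return levels

# Example: three users spread across the scale.
print(scores_to_levels({"u1": 0.9, "u2": 0.1, "u3": 0.5}))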

4.2. Results and Discussion

After the execution of the different techniques, the averaged measures of precision, recall, and F1 (over all skills) for every user in the dataset are presented in Table 4. Obviously, the first technique, based on the number of times a user has been endorsed, is not actually relevant in terms of quality, as the results show. On the other hand, the HITS algorithm provides better results in terms of precision, but the drawbacks of this algorithm (not considering time as a relevant variable; see Section 3.1) imply low precision for some users, with a behavior close to the frequency-based technique. As an improvement and more accurate version of the HITS algorithm, the SPEAR technique seems to get better results that are closer to the experts’ opinion. Here, it is clear that the assumption of time as a key variable to assess quality is determinant in detecting the level of expertise. Finally, the Skillrank technique, which previously configures the level of expertise of every user before processing endorsements, seems to behave similarly to the SPEAR algorithm. Although in some cases there is a relevant gain, the truth is that the values of both techniques are very similar, and to actually assert that Skillrank is better than the simple version of the SPEAR technique, more data should be used.

Following this discussion, Table 5 shows the results of the different techniques using real data. In general, a decrease in precision can be found in this table with regard to the previous results. This can be explained by the fact that this dataset is not customized and reflects the real behavior of users and skills, implying, in general, worse results.

As a final remark, a change in parameters such as the set of users and skills could lead to better results. Nevertheless, this initial effort will be used as a baseline to compare further improvements. Regarding similar approaches that have been implemented, as the related work section has outlined, the presented approach is closely related to [26], since the same problem is being addressed. The main difference is that they have also outlined a functional architecture, while we focus here on addressing some of the existing open issues: the alignment of skills quality with an existing competency index. On the other hand, we have tried to reuse as many existing techniques as possible. That is why we have made use of well-known techniques such as the HITS and SPEAR algorithms, which have been demonstrated to detect experts under certain characteristics of a graph. The Skillrank technique draws inspiration from these techniques to adapt the underlying concepts and execution steps to the problem of quality assessment of skills available in online profiles.

5. Conclusions and Future Work

The present paper has introduced different techniques to assess quality in graph-based structures. The well-known algorithms HITS and SPEAR have also been presented as inspiration for the Skillrank technique. This approach reinterprets the notions and underlying concepts of the SPEAR algorithm to apply them to the context of skills quality assessment in professional social networks. On the other hand, two main experiments have been conducted using synthetic and real data to evaluate the behavior of the aforementioned techniques in terms of precision and recall. Both approaches, the SPEAR and Skillrank algorithms, have shown similar results on the test datasets, implying that these techniques can be meaningfully applied to assess the quality of skills. From another perspective, the quality assessment of user profiles, and more specifically of user skills, is an active research area that ranges from applying expertise retrieval techniques to expertise profiling, topic extraction, and so forth. In this sense, there are still some open issues that must be tackled in order to provide automatic methods for user profiling, talent hunting, or expert finding processes. Currently, a lot of professional social networks are emerging, but the problem of creating groups of users by a certain topic is becoming a major challenge, since it is necessary to improve the trust and provenance of information and users’ activities. The relevance of this work is that it can serve to manage enterprise know-how and to detect experts inside organizations. Their competitiveness can be increased by better human resources management processes that take advantage of exploiting existing data generated by tracking users’ activities. From a technical point of view, Skillrank is a first substantial effort (including a good number of “artisan” tasks), and new capabilities, such as including new data sources to assess skills quality, the use of more advanced ordered weighted averaging (OWA) operators, and the adaptation of other datasets for experimentation purposes, should be added in the future, as well as new variables in the core of the algorithm. Finally, we also plan to release the information of our experiments using an existing standard such as nanopublications to ease reuse and comparison with new techniques.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This work has been partially supported by the European Commission (programme Lifelong Learning, action Leonardo da Vinci, Transfer of Innovation) through the project “ECQA Certified Social Media Networker Skills” (2011-1-ES1_LEO05-35930), by the Spanish Ministry of Economy and Competitiveness through the INNPACTO project Post-Via 2.0 (IPT-2011-0973-410000), by the German Federal Ministry of Education and Research through the project “OpSIT—Optimaler Einsatz von Smart-Items-Technologien in der Stationären Pflege” (16SV6048), and by the German Federal Ministry of Economics and Technology through the project “PrevenTAB” (KF3144902DB3).