Jan 19, 2023

Intelligent Job and Career Recommender Systems: A Systematic Review

  • 1 King's College London
Protocol Citation: Maximin Lange, Ricardo Twumasi, Nikos Koutsouleris 2023. Intelligent Job and Career Recommender Systems: A Systematic Review. protocols.io https://dx.doi.org/10.17504/protocols.io.5qpvor38xv4o/v1
License: This is an open access protocol distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
Protocol status: Working
We use this protocol and it's working
Created: November 17, 2022
Last Modified: January 19, 2023
Protocol Integer ID: 72888
Keywords: Systematic Review, Job Recommender Systems, Career Trajectory Prediction, Job Mobility, Next Job
Funders Acknowledgement:
London Interdisciplinary Social Science DTP (LISS DTP)
Grant ID: 19035129
Abstract
We provide the first systematic review of intelligent job recommender systems (IJRS). In this work, we analyse prominent issues and problems regarding IJRS and the remedial techniques available in the literature. We furthermore categorise the machine learning and data mining approaches employed in IJRS, extracting the core features of an ideal IJRS. We also review the performance of the employed algorithms with regard to experimental design, datasets used, and variables measured. Finally, we focus on future directions presented in the studies included in this review. We address the following research questions regarding IJRS:
  1. Which types of engines or algorithms are available?
  2. Where does their data input come from?
  3. How was the algorithm assessed by the authors?
  4. Which qualities does an ideal IJRS possess?
  5. What are promising directions for future work?
Methodology
As Suddaby et al. (2017) note, several methods are available when systematically reviewing existing research; Noblit and Hare (1988) distinguish them as either integrative or interpretive. Integrative reviews are appropriate when the analysed material is similar in the type of data collected and the constructs are accurately specified, so that terms and data are compatible. Interpretive reviews allow for broad thematic and inductive analysis and are more appropriate when associations between papers are unclear or blurry, when different methods are used, or when constructs, data, and variables are loosely defined and not comparable. We therefore take an interpretive approach in this review, given the large number of heterogeneous papers on IJRS. This is also in line with other reviews investigating comparable research topics and questions (Seele et al., 2021; Hunkenschroer & Luetge, 2022; Koechling & Wehner, 2020).

Consideration of Meta-analytic estimates
Depending on the findings of the systematic review, a meta-analysis might be conducted. This depends on the availability of articles reporting an accuracy measure that allows for a pooled estimate.
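Should such a meta-analysis prove feasible, a pooled estimate could be computed with a standard random-effects model. The following is a minimal sketch only, assuming that the included studies report a comparable accuracy metric together with its variance; the function and the example values are illustrative and not part of the protocol.

    # Minimal sketch (assumption): DerSimonian-Laird random-effects pooling of
    # study-level accuracy estimates. Example values are placeholders, not data
    # from this review.
    def pooled_accuracy(estimates, variances):
        """Return the random-effects pooled estimate and its standard error."""
        k = len(estimates)
        w = [1.0 / v for v in variances]                      # fixed-effect weights
        fixed = sum(wi * yi for wi, yi in zip(w, estimates)) / sum(w)
        q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, estimates))
        c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
        tau2 = max(0.0, (q - (k - 1)) / c)                    # between-study variance
        w_re = [1.0 / (v + tau2) for v in variances]          # random-effects weights
        pooled = sum(wi * yi for wi, yi in zip(w_re, estimates)) / sum(w_re)
        se = (1.0 / sum(w_re)) ** 0.5
        return pooled, se

    # Hypothetical example: three studies reporting accuracy and its variance
    print(pooled_accuracy([0.82, 0.75, 0.88], [0.002, 0.004, 0.003]))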

Search Process
As per the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines, we will provide a detailed report of the number and type of records identified, included, and excluded, which ensures the validity, reliability, and replicability of the review.
A structured, keyword-based literature search will be conducted between December 2022 and January 2023. We will search major online databases of English-language, peer-reviewed scientific journal articles and conference presentations. A list of the included databases can be found in Table 1. Depending on the number of acceptable articles found, additional manual searches might be conducted in the reference sections of selected papers. General sources such as Google Scholar will not be included in the initial search, as their results would be covered by the primary sources; they will, however, be used as a backwards search to check whether anything was missed. We will include conference proceedings despite them often not being peer reviewed, since the field of IJRS is new and some of its seminal papers were published in conference proceedings.

Table 1. Article Sources Considered
Data Base                      URL
IEEE Xplore Digital Library    https://ieeexplore.ieee.org/Xplore/home.jsp
Web of Science                 https://www.webofscience.com
Scopus                         https://www.scopus.com/home.uri

Inclusion and Exclusion Criteria
A complete list of articles included and excluded at the first eligibility stage will be made openly available once preliminary data collection is complete. Since research on AI and IJRS is highly novel and interdisciplinary, a broad search strategy will be employed, favouring an open sample over strict inclusion criteria such as restriction to high-impact journals. The time frame is limited to studies published after 2000, as no serious IJRS existed before then.
The following set of keywords is defined as a preliminary basis: {Job Recommender System; Machine Learning}.

We will employ different terminology for the search terms, such as synonyms, singular and plural forms, different spellings, broad and narrow terms, and other search strings, as shown in Table 2.
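To illustrate how these variations combine into a single boolean query, the sketch below assembles the keyword groups from Table 2 into one search string. This is an illustration under assumptions: database-specific syntax (field tags, truncation behaviour) would still have to be adapted for each source.

    # Illustrative sketch: build the boolean search string from the Table 2
    # keyword groups. Adapting it to each database's query syntax is still required.
    job_terms = ["Career Path*", "Career Predict*", "Career Path Predict*",
                 "Career Path Model*", "Career Trajectory*",
                 "Career Trajectory Predict*", "Next Job*", "Job Mobil*",
                 "Career Recommend*", "E-Recruit*", "Job Recommend*"]
    method_terms = ["Machine Learning*", "Intelligent*",
                    "Artificial Intelligence*", "Algorithm*"]

    def or_group(terms):
        # Join quoted terms into a parenthesised OR group
        return "(" + " OR ".join(f'"{t}"' for t in terms) + ")"

    query = or_group(job_terms) + " AND " + or_group(method_terms)
    print(query)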
Data Extraction and Synthesis
Primary studies fulfilling the inclusion criteria will be read in full text multiple times. This phase will provide the opportunity to identify data in the articles relevant to the research questions introduced in the background section of this protocol. Data extraction will be done in accordance with the PRISMA guidelines (McInnes et al., 2018). The first two authors will review each article independently of each other, to increase the reliability and objectivity of results. Articles will be analysed abductively, using a set of predefined analytic categories: "Authors, Year", "Main Focus of Study", "Method Employed", "Data Source", "Field of Research", "Key Findings", and "Geography". Pre-existing codes will not be used to extract information, which is in line with comparable reviews (Koechling & Wehner, 2020; Portugal et al., 2018; Tripathi et al., 2016; Hunkenschroer & Luetge, 2022).
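As an informal illustration of the extraction step, the sketch below defines a record holding the predefined analytic categories. The field names mirror the categories listed above; the types and the example entry are assumptions made for illustration only.

    # Illustrative sketch: one extraction record per primary study, using the
    # analytic categories defined in this protocol. The example entry is hypothetical.
    from dataclasses import dataclass

    @dataclass
    class ExtractionRecord:
        authors_year: str
        main_focus_of_study: str
        method_employed: str
        data_source: str
        field_of_research: str
        key_findings: str
        geography: str

    example = ExtractionRecord(
        authors_year="Doe et al., 2021",                # hypothetical study
        main_focus_of_study="CV-based job recommendation",
        method_employed="Hybrid collaborative filtering",
        data_source="Job-board interaction logs",
        field_of_research="Computer science",
        key_findings="Hybrid model outperforms content-only baseline",
        geography="Europe",
    )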
Robustness Check
To display an authentic and transparent snapshot of the IJRS literature, we will furthermore implement a robustness check. This ensures the inclusion of all relevant papers in this review. The robustness check will be conducted in March 2023, i.e., three months after the initial search process. For the robustness check, we will include two additional keywords in the search, namely "career recommender systems" and "labour market".
Quality Assessment
Based on Monti et al. (2021), to ensure an objective quality assessment of the included studies, we defined eight quality assessment questions. A full list and a sample scorecard for each reviewed study are available in Table 3. Each article will be assessed against these questions, with the first two authors independently assigning a score of 1, 0.5, or 0, corresponding to Yes, Partly, and No. The scores of the first two authors will be averaged. The third author will check a randomly chosen subset of articles to further ensure the reliability of the scores. A combined quality score will be computed by averaging the scores across all quality questions.
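The scoring rule described above can be summarised in a few lines. The sketch below is illustrative only, assuming each reviewer's per-question scores are recorded as 1, 0.5, or 0; the example scores are hypothetical.

    # Illustrative sketch: average the two reviewers' scores per QA question,
    # then average across the eight questions to obtain the combined quality
    # score for one article. Example scores are hypothetical.
    def combined_quality_score(reviewer_a, reviewer_b):
        assert len(reviewer_a) == len(reviewer_b) == 8
        per_question = [(a + b) / 2 for a, b in zip(reviewer_a, reviewer_b)]
        return sum(per_question) / len(per_question)

    print(combined_quality_score([1, 1, 0.5, 1, 0.5, 1, 0, 1],
                                 [1, 0.5, 0.5, 1, 1, 1, 0, 1]))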

Table 2. Literature Search and Selection Details

Key Words: ("Career Path*" OR "Career Predict*" OR "Career Path Predict*" OR "Career Path Model*" OR "Career Trajectory*" OR "Career Trajectory Predict*" OR "Next Job*" OR "Job Mobil*" OR "Career Recommend*" OR "E-Recruit*" OR "Job Recommend*") AND ("Machine Learning*" OR "Intelligent*" OR "Artificial Intelligence*" OR "Algorithm*")

Language: English

Time Frame: 2000-2022

Inclusion Criteria:
  • Articles or conference proceedings concerning IJRS, accessible in full text
  • Articles or conference proceedings published in peer-reviewed academic journals or conferences

Exclusion Criteria:
  • Articles or conference proceedings not addressing IJRS
  • Articles or conference proceedings addressing IJRS without the use of machine learning
  • Articles or conference proceedings reporting only abstracts, letters to the editor, commentaries, interviews, or posters
  • Articles or conference proceedings describing planned research
  • Articles or conference proceedings published outside the predefined date range

Registration
To increase transparency, to allow reviewers to compare eventual findings with the initial protocol, to comply with established reporting criteria, and to reduce publication bias, this systematic review will be registered prior to data collection (https://www.protocols.io).

Table 3. Sample Scorecard for Included Articles Reviewed
QA Question                                                              Score
Do the authors clearly describe the problems they are investigating?
Do the authors review related work to the problem?
Do the authors compare their approach with alternatives?
Do the authors describe the components of their IJRS?
Do the authors provide an empirical evaluation of their solution?
Do the authors present a clear statement of their findings?
Do the authors analyse application scenarios of their IJRS?
Do the authors recommend any further research directions?
Total


References

Hunkenschroer, A. L., & Luetge, C. (2022). Ethics of AI-enabled recruiting and selection: A review and research agenda. Journal of Business Ethics, 1-31.
Köchling, A., & Wehner, M. C. (2020). Discriminated by an algorithm: a systematic review of discrimination and fairness by algorithmic decision-making in the context of HR recruitment and HR development. Business Research, 13(3), 795-848.
McInnes, M. D. F., Moher, D., Thombs, B. D., McGrath, T. A., Bossuyt, P. M., Clifford, T., Cohen, J. F., Deeks, J. J., Gatsonis, C., Hooft, L., Hunt, H. A., Hyde, C. J., Korevaar, D. A., Leeflang, M. M. G., Macaskill, P., Reitsma, J. B., Rodin, R., Rutjes, A. W. S., Salameh, J.-P., … Willis, B. H. (2018). Preferred Reporting Items for a Systematic Review and Meta-analysis of Diagnostic Test Accuracy Studies. JAMA, 319(4), 388. https://doi.org/10.1001/jama.2017.19163
Monti, D., Rizzo, G., & Morisio, M. (2021). A systematic literature review of multicriteria recommender systems. Artificial Intelligence Review, 54(1), 427–468. https://doi.org/10.1007/s10462-020-09851-4
Noblit, G., & Hare, R. (1988). Meta-Ethnography. SAGE Publications, Inc.
Portugal, I., Alencar, P., & Cowan, D. (2018). The use of machine learning algorithms in recommender systems: A systematic review. Expert Systems with Applications, 97, 205-227.
Seele, P., Dierksmeier, C., Hofstetter, R., & Schultz, M. D. (2021). Mapping the ethicality of algorithmic pricing: A review of dynamic and personalized pricing. Journal of Business Ethics, 170(4), 697-719.
Suddaby, R., Bitektine, A., & Haack, P. (2017). Legitimacy. Academy of Management Annals, 11(1), 451-478.