As Suddaby et al. (2017) note, several methods are available for systematically reviewing existing research, which Noblit and Hare (1988) distinguish as either integrative or interpretive. Integrative reviews are preferable when the analysed material is similar in the type of data collected and constructs are precisely specified, so that terms and data are compatible. Interpretive reviews, by contrast, allow for broad thematic and inductive analysis and are more appropriate when associations between papers are unclear, when different methods are used, or when constructs, data, and variables are loosely defined or not comparable. We therefore take an interpretive approach in this review, given the large number of heterogeneous papers on IJRS. This is also in line with other reviews investigating comparable research topics and questions (Seele et al., 2021; Hunkenschroer & Luetge, 2022; Koechling & Wehner, 2020).
Consideration of Meta-Analytic Estimates
Depending on the findings of the systematic review, a meta-analysis might be conducted. Its feasibility depends on the availability of articles reporting an accuracy measure that allows for a pooled estimate.
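Should a meta-analysis prove feasible, one common way to obtain a pooled estimate is an inverse-variance weighted (fixed-effect) mean of the reported accuracies. The sketch below is illustrative only; the study accuracies and standard errors are hypothetical placeholders, not values from this review.

```python
# Minimal sketch of a fixed-effect, inverse-variance pooled estimate.
# The accuracies and standard errors below are hypothetical placeholders.

def pooled_accuracy(accuracies, std_errors):
    """Inverse-variance weighted mean of study accuracies and its standard error."""
    weights = [1 / se**2 for se in std_errors]
    pooled = sum(w * a for w, a in zip(weights, accuracies)) / sum(weights)
    pooled_se = (1 / sum(weights)) ** 0.5
    return pooled, pooled_se

# Hypothetical per-study accuracy estimates and standard errors:
est, se = pooled_accuracy([0.80, 0.75, 0.90], [0.05, 0.04, 0.10])
```

Because each study is weighted by the inverse of its variance, more precise studies contribute more to the pooled estimate, and the pooled standard error is smaller than that of any single study. A random-effects model would be preferable if heterogeneity between studies is substantial.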
In accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA), we will provide a detailed report on the number and type of records identified, included, and excluded, which supports the validity, reliability, and replicability of the review.
A structured, keyword-based literature search will be conducted between December 2022 and January 2023. We will search major online databases of English-language, peer-reviewed scientific journal articles and conference proceedings. A list of included databases can be found in Table 1. Depending on the number of acceptable articles found, additional manual searches might be conducted in the reference sections of selected papers. General sources, e.g., Google Scholar, will not be included in the initial search, as their results would duplicate those of the primary sources; they will, however, be used in a backward search to check whether anything is missing. We will include conference proceedings even though they are often not peer reviewed, since the field of IJRS is young and some of its seminal papers have been published in conference proceedings.
Table 1 Article Sources Considered
Database	URL
IEEE Xplore Digital Library	https://ieeexplore.ieee.org/Xplore/home.jsp
Web of Science	https://www.webofscience.com
Scopus	https://www.scopus.com/home.uri
Inclusion and Exclusion Criteria
A complete list of articles included and excluded at the first eligibility stage will be made openly available once preliminary data collection is complete. Since research on AI and IJRS is novel and interdisciplinary, a broad search strategy will be employed, favouring an open sample over strict inclusion criteria (e.g., only high-impact journals).
The time frame is limited to studies published after 2000, as no substantial IJRS existed before then.
The following set of keywords is defined as a preliminary starting point:
{Job Recommender System; Machine Learning}.
We will vary the terminology of the search terms, using synonyms, singular and plural forms, alternative spellings, and both broad and narrow terms, as well as other search strings, as shown in Table 2.
Data Extraction and Synthesis
Primary studies fulfilling the inclusion criteria will be read multiple times in full text. This phase will provide the opportunity to identify data in the articles relevant to the research questions introduced in the background section of this protocol. Data extraction will be done in accordance with the PRISMA guidelines (McInnes et al., 2018). The first two authors will review each article independently of each other to increase the reliability and objectivity of the results. Articles will be analysed abductively, using a set of predefined analytic categories: "Authors, Year", "Main Focus of Study", "Method Employed", "Data Source", "Field of Research", "Key Findings", and "Geography". Pre-existing codes will not be used to extract information, which is in line with comparable reviews (Koechling & Wehner, 2020; Portugal et al., 2018; Tripathi et al., 2016; Hunkenschroer & Luetge, 2022).
Robustness Check
To provide an authentic and transparent snapshot of the IJRS literature, we will additionally implement a robustness check, which helps ensure that all relevant papers are included in this review. The robustness check will be conducted in March 2023, i.e., three months after the initial search. For it, we will add two keywords to the search, namely "career recommender systems" and "labour market".
Following Monti et al. (2020), we defined eight quality assessment questions to ensure an objective quality assessment of the included studies. A full list and a sample scorecard are available in Table 3. Each article will be assessed against these questions, with each of the first two authors independently assigning a score of 1, 0.5, or 0, corresponding to Yes, Partly, and No. The scores of the first two authors will be averaged. The third author will check a randomly chosen subset of articles to further ensure the reliability of the scores. A combined quality score will be computed by averaging the scores across all quality questions.
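The scoring procedure just described, two raters answering eight questions, per-question scores averaged across raters, and a combined score averaged across questions, can be sketched as follows. The example answers are hypothetical, purely to illustrate the arithmetic.

```python
# Sketch of the quality scoring described above: each of the eight QA questions
# receives 1 (Yes), 0.5 (Partly), or 0 (No) from each of two raters; scores are
# averaged per question across raters, then averaged across questions.
# The example answers below are hypothetical.

SCORE = {"Yes": 1.0, "Partly": 0.5, "No": 0.0}

def combined_quality_score(rater_a, rater_b):
    """rater_a, rater_b: lists of 'Yes'/'Partly'/'No', one entry per QA question."""
    per_question = [(SCORE[a] + SCORE[b]) / 2 for a, b in zip(rater_a, rater_b)]
    return sum(per_question) / len(per_question)

rater_a = ["Yes", "Yes", "Partly", "No", "Yes", "Partly", "Yes", "No"]
rater_b = ["Yes", "Partly", "Partly", "No", "Yes", "Yes", "Yes", "Partly"]
score = combined_quality_score(rater_a, rater_b)  # a value between 0 and 1
```

The combined score thus lies between 0 (all questions answered No by both raters) and 1 (all answered Yes by both), giving a single comparable quality index per article.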
Table 2 Literature Search and Selection Details
("Career Path*" OR "Career Predict*" OR "Career Path Predict*" OR "Career Path Model*" OR "Career Trajectory*" OR "Career Trajectory Predict*" OR "Next Job*" OR "Job Mobil*" OR "Career Recommend*" OR "E-Recruit*" OR "Job Recommend*") AND ("Machine Learning*" OR "Intelligent*" OR "Artificial Intelligence*" OR "Algorithm*")
Inclusion Criteria
Articles or conference proceedings concerning intelligent IJRS, accessible in full text
Articles or conference proceedings published in peer-reviewed academic journals or conferences
Exclusion Criteria
Articles or conference proceedings not addressing IJRS
Articles or conference proceedings addressing IJRS without the use of machine learning
Articles or conference proceedings reporting only abstracts, letters to the editor, commentaries, interviews, or posters
Articles or conference proceedings describing planned research
Articles or conference proceedings published outside the predefined date range
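The Boolean search string in Table 2 can also be assembled programmatically from the two keyword groups, which makes it easy to extend (e.g., for the robustness check). The sketch below assumes a generic quoted-phrase-with-wildcard syntax; exact wildcard and quoting rules differ between databases, so the output would need adapting per source.

```python
# Sketch assembling the Boolean search string from the two keyword groups in
# Table 2. Quoting and wildcard syntax vary by database; this generic
# '"phrase*" OR ...' AND '...' form is illustrative only.

job_terms = [
    "Career Path*", "Career Predict*", "Career Path Predict*",
    "Career Path Model*", "Career Trajectory*", "Career Trajectory Predict*",
    "Next Job*", "Job Mobil*", "Career Recommend*", "E-Recruit*",
    "Job Recommend*",
]
method_terms = [
    "Machine Learning*", "Intelligent*", "Artificial Intelligence*", "Algorithm*",
]

def or_group(terms):
    """Join terms into a parenthesised, quoted OR group."""
    return "(" + " OR ".join(f'"{t}"' for t in terms) + ")"

query = or_group(job_terms) + " AND " + or_group(method_terms)
```

For the robustness check, the two additional keywords ("career recommender systems" and "labour market") could simply be appended to `job_terms` and the query regenerated.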
Registration
To increase transparency and to allow reviewers to compare eventual findings with the initial protocol, as well as to be compliant with established reporting criteria and reduce publication bias, this systematic review will be registered prior to data collection (https://www.protocols.io).
Table 3 Sample Scorecard for Included Articles Reviewed
QA Question	Score
Do the authors clearly describe the problems they are investigating?
Do the authors review related work to the problem?
Do the authors compare their approach with other alternatives?
Do the authors describe the components of their IJRS?
Do the authors provide an empirical evaluation of their solution?
Do the authors present a clear statement of their findings?
Do the authors analyse application scenarios of their IJRS?
Do the authors recommend any further research directions?
Hunkenschroer, A. L., & Luetge, C. (2022). Ethics of AI-enabled recruiting and selection: A review and research agenda. Journal of Business Ethics, 1-31.
Köchling, A., & Wehner, M. C. (2020). Discriminated by an algorithm: a systematic review of discrimination and fairness by algorithmic decision-making in the context of HR recruitment and HR development. Business Research, 13(3), 795-848.
McInnes, M. D. F., Moher, D., Thombs, B. D., McGrath, T. A., Bossuyt, P. M., Clifford, T., Cohen, J. F., Deeks, J. J., Gatsonis, C., Hooft, L., Hunt, H. A., Hyde, C. J., Korevaar, D. A., Leeflang, M. M. G., Macaskill, P., Reitsma, J. B., Rodin, R., Rutjes, A. W. S., Salameh, J.-P., … Willis, B. H. (2018). Preferred Reporting Items for a Systematic Review and Meta-analysis of Diagnostic Test Accuracy Studies. JAMA, 319(4), 388. https://doi.org/10.1001/jama.2017.19163
Noblit, G., & Hare, R. (1988). Meta-Ethnography. SAGE Publications, Inc.
Portugal, I., Alencar, P., & Cowan, D. (2018). The use of machine learning algorithms in recommender systems: A systematic review. Expert Systems with Applications, 97, 205-227.
Seele, P., Dierksmeier, C., Hofstetter, R., & Schultz, M. D. (2021). Mapping the ethicality of algorithmic pricing: A review of dynamic and personalized pricing. Journal of Business Ethics, 170(4), 697-719.
Suddaby, R., Bitektine, A., & Haack, P. (2017). Legitimacy. Academy of Management Annals, 11(1), 451-478.