The proposed scoping review will be conducted in accordance with the JBI methodology for scoping reviews (5).
Search strategy
The search strategy will aim to locate both published and unpublished studies. A three-step search strategy will be utilised in this review. First, an initial limited search of MEDLINE and CINAHL was undertaken to identify articles on the topic. The text words contained in the titles and abstracts of relevant articles, and the index terms used to describe the articles, were used to develop a full search strategy for MEDLINE, CINAHL, APA PsycInfo and EMBASE. The search strategy, including all identified keywords and index terms, will be adapted for each included database and/or information source. Finally, the reference lists of all included sources of evidence will be screened for additional relevant studies.
Searches will be conducted in EMBASE (via Ovid), CINAHL (via EBSCO), MEDLINE (via Ovid) and APA PsycInfo (via EBSCO). Searches will use Boolean operators to link terms between and within groups reflecting the population of interest (e.g. patient, public), the setting (e.g. breast imaging), the 'intervention' (e.g. artificial intelligence) and the outcomes (e.g. trust, acceptance). Sources of unpublished studies and grey literature to be searched include Google Scholar, ProQuest, Trip, Open Access Theses and Dissertations, NHS England and cancer charity websites. The first 100 results from each of these sources will be screened for inclusion. Only studies published in English will be included. There will be no limitations regarding date of publication.
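By way of illustration, a combination of keyword groups of this kind might look as follows (a sketch only; the finalised strategy will add database-specific index terms, additional synonyms and truncation as appropriate):

```
(patient* OR public OR "service user*")
AND ("breast imaging" OR mammograph* OR "breast screening")
AND ("artificial intelligence" OR "machine learning" OR "deep learning")
AND (trust OR acceptance OR acceptab* OR attitude*)
```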
Study/Source of evidence selection
Following the search, all identified citations will be collated and uploaded into EndNote 21.4.0.20467 (Clarivate Analytics, PA, USA) and duplicates removed. Following a pilot test, titles and abstracts will then be screened by two or more independent reviewers for assessment against the inclusion criteria for the review. Potentially relevant sources will be retrieved in full, and their citation details imported into the JBI System for the Unified Management, Assessment and Review of Information (JBI SUMARI) (JBI, Adelaide, Australia) (6). The full text of selected citations will be assessed in detail against the inclusion criteria by two or more independent reviewers. Reasons for exclusion of sources of evidence at full text that do not meet the inclusion criteria will be recorded and reported in the scoping review. Any disagreements that arise between the reviewers at each stage of the selection process will be resolved through discussion, or by consulting an additional reviewer. The results of the search and the study inclusion process will be reported in full in the final scoping review and presented in a PRISMA flow diagram (5).
Data extraction
Data will be extracted from papers included in the scoping review by two or more independent reviewers using a data extraction tool developed by the reviewers. The data extracted will include specific details about the participants, concept, context, study methods and key findings relevant to the review question/s.
Qualitative: A data extraction form will be developed in Excel to record the following:
Study characteristics: author, year, location, population
Research aim
Methodology (e.g. interviews, focus groups)
Themes related to trust and acceptance of AI
Quotations or narrative descriptions that reflect perceptions of AI, concerns, or barriers to trust
Outcomes related to public attitudes, concerns, or areas of confidence in AI technology.
Quantitative: A standardised data extraction form, developed in Excel, will be used to record:
Study characteristics: author, year, location, population size
Study design (e.g., cross-sectional, cohort)
Outcome measures related to trust, acceptance, perceived usefulness, and ease of use of AI
Statistical measures (e.g., means, percentages, standard deviations, confidence intervals, p-values) relating to trust and acceptance of AI.
The draft data extraction tool will be modified and revised as necessary during the process of extracting data from each included evidence source. Modifications will be detailed in the scoping review. Any disagreements that arise between the reviewers will be resolved through discussion, or by consulting an additional reviewer. Where required, authors of papers will be contacted to request missing or additional data.
Data analysis and presentation
The data will be analysed thematically to identify key factors influencing trust and acceptance of AI in breast imaging. Results will be presented in both narrative and tabular formats, with a narrative summary describing how the results relate to the review’s objectives.
A thematic analysis approach will be applied, following Braun and Clarke's (2006) method:
Familiarisation: Read through the qualitative data multiple times to gain a sense of key themes.
Coding: Assign initial codes to relevant pieces of data (e.g., sentiments regarding trust, data security, perceived competence of AI in diagnosis).
Theme Development: Group similar codes to identify overarching themes such as trust in technology, the role of clinicians, privacy concerns, and accuracy.
Theme Refinement: Check themes against the extracted data and adjust as needed to ensure consistency and coherence.
Sub-themes may also be developed to highlight nuances, such as different types of trust (e.g., trust in AI vs. trust in human-AI collaboration).
Textual Narrative Synthesis: Present findings in a narrative format, describing key themes and patterns that emerged from the data.
Tables and Diagrams: Include tables summarising the studies (author, year, population, key themes) and figures showing thematic maps or relationships between key factors influencing trust and acceptance of AI.
Quotations: Use verbatim quotes to support the thematic findings, providing a rich description of participants’ perspectives.
Quantitative:
Descriptive Statistics: Summarise key study characteristics and patient/public attitudes using descriptive statistics, such as frequencies, percentages, means, and medians. Use software such as SPSS or R to compute basic descriptive summaries.
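The intended summaries can be sketched in a few lines of Python (using hypothetical 1–5 Likert trust ratings; in practice SPSS or R would produce equivalent output):

```python
import statistics

# Hypothetical example: 1-5 Likert ratings of trust in AI from one study
trust_scores = [4, 3, 5, 2, 4, 4, 3, 5, 1, 4]

n = len(trust_scores)
mean = statistics.mean(trust_scores)
median = statistics.median(trust_scores)
sd = statistics.stdev(trust_scores)

# Frequency and percentage of participants rating trust 4 or above ("high trust")
high_trust = sum(1 for s in trust_scores if s >= 4)

print(f"n={n}, mean={mean:.2f}, median={median}, sd={sd:.2f}")
print(f"high trust: {high_trust}/{n} ({100 * high_trust / n:.0f}%)")
```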
Meta-analysis (if appropriate): If the studies are sufficiently homogeneous in terms of their outcome measures, conduct a meta-analysis.
Calculate pooled estimates of trust and acceptance scores and assess heterogeneity using the I² statistic.
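A minimal sketch of the pooling and heterogeneity calculation, assuming a fixed-effect inverse-variance model and hypothetical per-study means and standard errors:

```python
# Hypothetical per-study mean trust scores and their standard errors
means = [3.8, 3.2, 3.5, 4.0]
ses = [0.20, 0.25, 0.15, 0.30]

# Inverse-variance (fixed-effect) pooled estimate
weights = [1 / se**2 for se in ses]
pooled = sum(w * m for w, m in zip(weights, means)) / sum(weights)

# Cochran's Q, then the I² statistic: the proportion of total variation
# across studies attributable to heterogeneity rather than chance
q = sum(w * (m - pooled) ** 2 for w, m in zip(weights, means))
df = len(means) - 1
i_squared = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0

print(f"pooled mean = {pooled:.2f}, Q = {q:.2f}, I² = {i_squared:.1f}%")
```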
Subgroup Analysis: Analyse data by subgroups (e.g., age, gender, education level) to explore variations in trust and acceptance of AI.
Correlation Analysis: If possible, calculate correlation coefficients (e.g., Pearson's or Spearman's) to assess relationships between trust/acceptance and other variables such as prior experience with technology or exposure to breast imaging.
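Both coefficients can be sketched in plain Python (hypothetical data; the simple ranking used here for Spearman's assumes no tied values):

```python
import math
import statistics

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    mx, my = statistics.mean(x), statistics.mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def spearman(x, y):
    """Spearman's rho: the Pearson correlation of the ranks (no tie handling)."""
    def rank(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for pos, i in enumerate(order, start=1):
            r[i] = pos
        return r
    return pearson(rank(x), rank(y))

# Hypothetical data: years of prior technology experience vs. mean trust score
experience = [1, 2, 3, 5, 8, 10]
trust = [2.1, 2.4, 3.0, 3.2, 4.1, 4.4]

print(f"Pearson r = {pearson(experience, trust):.2f}")
print(f"Spearman rho = {spearman(experience, trust):.2f}")
```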
Data Presentation
Summary Tables: Present descriptive statistics and pooled estimates in summary tables, detailing each study’s population, methodology, and main results regarding trust and acceptance of AI.
Graphs and Charts: Use bar charts, pie charts, or forest plots to illustrate key quantitative findings, such as the proportion of participants who trust AI versus those who do not.
Narrative Summary: Complement the tables and figures with a brief narrative summary of the key quantitative findings, highlighting any trends or significant differences between studies or subgroups.