Assessing User Apprehensions About Mixed Reality Artifacts and Applications: The Mixed Reality Concerns (MRC) Questionnaire


Current research in Mixed Reality (MR) presents a wide range of novel use cases for blending virtual elements with the real world. This yet-to-be-ubiquitous technology challenges how users currently work and interact with digital content. While offering many potential advantages, MR technologies introduce new security, safety, and privacy challenges. Thus, it is relevant to understand users’ apprehensions towards MR technologies, ranging from security concerns to social acceptance. To address this challenge, we present the Mixed Reality Concerns (MRC) Questionnaire, designed to assess users’ concerns towards MR artifacts and applications systematically. The development followed a structured process considering previous work, expert interviews, iterative refinements, and confirmatory tests to analytically validate the questionnaire. The MRC Questionnaire offers a new method of assessing users’ critical opinions to compare and assess novel MR artifacts and applications regarding security, privacy, social implications, and trust.

In Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems

We are committed to keeping you informed and involved. This website will be regularly updated with the latest publications using the MRC Questionnaire, alongside new analyses and discussions about the tool.

Stay connected, stay informed, and be a part of the continued development of this questionnaire. Your insights and participation are invaluable to us. Please reach out to us if you used the questionnaire or have any questions regarding its use. We will be happy to share your results and talk about your work! You can find our contact info at the end of this page.



Understanding the development of Mixed Reality (MR) technology involves more than acknowledging its technical progress. It is crucial to consider and address the concerns individuals might have about adopting and using this technology. In this context, a new challenge emerges for HCI: understanding the concerns individuals perceive when confronted with novel MR systems.

The following sections present the Mixed Reality Concerns (MRC) Questionnaire, created to quickly and easily assess such concerns.

Construction of the MRC Questionnaire

The development of the MRC Questionnaire followed a systematic approach based on the guidelines by Boateng et al.1 Initially, a conceptual model of potential concerns was formulated, drawing from relevant research in the field. This model comprises four primary categories, with 30 subcategories that cover a broad spectrum of potential user concerns.

**User Concerns About MR Systems.** The conceptual framework with its four categories and their respective subcategories aiming to classify potential user concerns regarding MR systems.

Subsequently, an initial set of 120 items derived from this conceptual model was generated. These items were then refined through expert feedback and an exploratory factor analysis, resulting in a final scale of nine items. A comprehensive evaluation of this scale followed to ensure the validity of its results. More information about the development and evaluation can be found in the paper itself, linked above.

**The Development Process of the Questionnaire.**

The MRC Questionnaire

The MRC Questionnaire comprises nine items in total, split into three subscales. It is intentionally designed not to assess the specific, objective problems or risks associated with a technology, but rather to focus on user apprehensions and concerns. Its primary purpose is to measure users' subjective perceptions and feelings regarding a technology, particularly any unease or worries they may experience.

By concentrating on user apprehensions, the scale aims to capture the emotional and psychological aspects of how MR systems might be perceived even before actual user experiences can be gathered. It recognizes that people’s perceptions and concerns can vary widely, even when faced with similar objective risks or issues. Therefore, the scale provides a means to gauge how users interpret and respond to these risks on a personal level.

The final **MRC Questionnaire**, comprised of three factors with three items each.

Application Scenarios

Besides analyzing how an MR system is perceived during or after initial contact with a concept or prototype, the MRC Questionnaire can also be used to assess actual implementations. Users' apprehensions often reveal pain points or areas of discomfort with the technology at hand. This information is valuable for pinpointing specific issues that may need addressing, whether they relate to security, privacy, social implications, or the inherent trust in the system. User concerns can also guide the development of educational materials or resources that help users understand the technology better. Addressing misconceptions or alleviating fears through education can contribute to a more positive user experience.

In summary, while the scale's primary focus is on assessing user apprehensions and perceptions, it can serve as a versatile tool for evaluating parts of the user experience in actual technology implementations that other scales currently do not assess. By understanding and addressing user concerns, developers can enhance the overall quality and acceptance of MR systems and other technologies.

The evaluation suggests that the MRC Questionnaire is suitable for both between-subject and within-subject studies, including repeated-measures designs. Although the analysis generally yields favorable results for the individual subscales, we do not explicitly recommend evaluating them on their own. The intentional brevity of the scale serves the purpose of offering a quick initial insight into potential user concerns.


Currently, the MRC Questionnaire is only available in English. Because the validity of research employing translated instruments can be compromised if insufficient attention is paid to the methods used to establish equivalence between the original and the translated version of the tool2, we do not recommend using the MRC Questionnaire in another language without a thorough analysis of the equivalence of a translated version.

However, we are open to translating this questionnaire and welcome any assistance or interest from third parties in doing so. Should translations become available, we will provide links to them here and update this section accordingly.

How to use the MRC Questionnaire

The MRC Questionnaire was developed to be used in conjunction with other questionnaires. It is meant to be part of larger studies to obtain a more comprehensive overview of participants' perceptions as easily as possible. You can therefore administer it whenever you see fit during your study. You can find ready-to-use resources and examples below.

Common Survey Tools (e.g., Google Forms, LimeSurvey)

We have created an assortment of templates for the most common survey tools. Feel free to reach out to us if you need guidance with implementing the questionnaire into your survey tool of choice.

In Google Forms, you can simply use a multiple-choice grid to set up the Likert-style question matrix.

For convenience, you can also copy this Google Forms template to get started.

When using LimeSurvey, you can use the Array question type to model the questionnaire.

Alternatively, you can also download the premade question and simply import it into your survey.

Similarly, most other survey tools offer matrix-style question types for quickly creating stacked Likert-scale question blocks.

Paper Based

While we do not recommend running paper-based surveys due to the higher risk of errors when transcribing answers into digital form, pen and paper sometimes remains the quickest way to gauge participants' assessments.

For this, we also offer a simple (and ink-saving) printable version of the MRC Questionnaire.


The MRC Questionnaire is scored on a 5-point Likert scale, ranging from Strongly disagree (1) to Strongly agree (5). All items of the Trust subscale are reverse-coded.

\begin{align*} \text{MRC} &= \text{MRC}_\text{SP} + \text{MRC}_\text{SI} + \text{MRC}_\text{T}\\ \text{with } \text{MRC}_\text{SP} &= \text{SP1} + \text{SP2} + \text{SP3}\\ \text{and } \text{MRC}_\text{SI} &= \text{SI1} + \text{SI2} + \text{SI3}\\ \text{and } \text{MRC}_\text{T} &= \text{T1}_R + \text{T2}_R + \text{T3}_R\\ \end{align*}

Exemplary scoring (using Python)

We will assume a CSV file in which each row corresponds to one participant. It might look like this:
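For illustration, here is a hypothetical dataset with three participants; the values are made up, but chosen so that the script reproduces the summary statistics shown at the end of this section:

```csv
SP1,SP2,SP3,SI1,SI2,SI3,T1,T2,T3
1,1,2,2,2,2,4,4,5
3,3,2,2,2,3,5,4,4
2,2,1,3,3,2,3,3,4
```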


The following script will aggregate the data and give a quick summary of some descriptive statistics:

# Let's use the powerful pandas library to make our lives easier
import pandas as pd

# Read the CSV as follows
data = pd.read_csv('<name-of-csv-file>.csv')

# Reverse the reverse coded items of the Trust subcategory
data['T1'] = 6 - data['T1']
data['T2'] = 6 - data['T2']
data['T3'] = 6 - data['T3']

# Aggregate the total for the subcategories and full MRC
data['SP_Total'] = data['SP1'] + data['SP2'] + data['SP3']
data['SI_Total'] = data['SI1'] + data['SI2'] + data['SI3']
data['T_Total'] = data['T1'] + data['T2'] + data['T3']
data['MRC_Total'] = data['SP_Total'] + data['SI_Total'] + data['T_Total']

# Create text descriptions
SP_txt = "Security & Privacy Subcategory. Mean: {0:0.2f} | SD: {1:.2f}".format(data['SP_Total'].mean(),data['SP_Total'].std())
SI_txt = "Social Implications Subcategory. Mean: {0:0.2f} | SD: {1:.2f}".format(data['SI_Total'].mean(),data['SI_Total'].std())
T_txt = "Trust Subcategory. Mean: {0:0.2f} | SD: {1:.2f}".format(data['T_Total'].mean(),data['T_Total'].std())
MRC_txt = "Full MRC Questionnaire. Mean: {0:0.2f} | SD: {1:.2f}".format(data['MRC_Total'].mean(),data['MRC_Total'].std())

# Print said text descriptions
print('================ DESCRIPTIVE STATISTICS ================')
print(SP_txt)
print(SI_txt)
print(T_txt)
print(MRC_txt)

With the example CSV above, you will then receive the following output:

================ DESCRIPTIVE STATISTICS ================
Security & Privacy Subcategory. Mean: 5.67 | SD: 2.08
Social Implications Subcategory. Mean: 7.00 | SD: 1.00
Trust Subcategory. Mean: 6.00 | SD: 1.73
Full MRC Questionnaire. Mean: 18.67 | SD: 3.21

Of course, inferential statistics (such as t-tests or Wilcoxon-Mann-Whitney tests) can also be used to compare two or more ratings during a study.
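As a minimal sketch, assuming two hypothetical groups of MRC total scores from a between-subject comparison of two prototypes (all numbers invented for illustration), such a comparison could be run with SciPy:

```python
from scipy.stats import mannwhitneyu

# Hypothetical MRC total scores for two prototypes (between-subject design)
prototype_a = [15, 20, 21, 18, 17, 22, 19, 16]
prototype_b = [25, 28, 24, 27, 30, 26, 23, 29]

# Two-sided Wilcoxon-Mann-Whitney test on the total scores
stat, p = mannwhitneyu(prototype_a, prototype_b, alternative='two-sided')
print(f"U = {stat}, p = {p:.4f}")
```

A significant result would indicate that the two prototypes differ in the level of concern they raise; which test is appropriate, of course, depends on your study design and the distribution of the scores.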

Exemplary scoring (manual)

Let’s assume a participant ranked their agreement with the nine items like the following.

We can now translate this into numbers. As mentioned above, we convert a Strongly disagree to a value of 1, a Disagree to a value of 2, and so on. As the last three items (the ones pertaining to the Trust subcategory) are reverse-coded, a Strongly disagree on them is converted to a 5, a Disagree to a 4, and so on.
By adding up the values of each subcategory, we get a quick assessment of the participant's concerns regarding the MR system. Lower values indicate fewer concerns regarding the overarching topic of the subcategory, and higher values indicate more concerns.
Finally, by adding up the results of the subcategories, we get a single value that roughly indicates the number and severity of apprehensions or concerns a user or study participant might have. Again, the main objective of the MRC Questionnaire is to get an initial understanding of potential user concerns. A single value can, of course, not replace qualitative and more thorough research, but it can serve as a quickly assessable metric for comparing rough prototypes and ideas of MR systems.
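To make this concrete with hypothetical responses: suppose the participant answered Agree, Neutral, Agree on the Security & Privacy items, Disagree on all three Social Implications items, and Agree, Agree, Neutral on the (reverse-coded) Trust items. The scoring then works out as:

\begin{align*} \text{MRC}_\text{SP} &= 4 + 3 + 4 = 11\\ \text{MRC}_\text{SI} &= 2 + 2 + 2 = 6\\ \text{MRC}_\text{T} &= (6-4) + (6-4) + (6-3) = 7\\ \text{MRC} &= 11 + 6 + 7 = 24 \end{align*}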

Feel free to reach out to us in case of any questions regarding the development, application and evaluation of the MRC Questionnaire, or more. We would also love to hear from other researchers and practitioners if you come to use this tool in your research!


  1. Godfred O. Boateng, Torsten B. Neilands, Edward A. Frongillo, Hugo R. Melgar-Quiñonez, and Sera L. Young. 2018. Best Practices for Developing and Validating Scales for Health, Social, and Behavioral Research: A Primer. Frontiers in Public Health 6 (2018), 149. ↩︎

  2. A. M. Chang, J. P. Chau, and E. Holroyd. 1999. Translation of questionnaires and issues of equivalence. Journal of Advanced Nursing 29, 2 (1999), 316–322. ↩︎