Barriers to the Integration of Human and Nonhuman Agents in Positive Hybrid Systems: The Relationship Between Anthropocentrism, Artificial Intelligence Anxiety, and Attitudes Toward Humanoid Robots

Keywords: positive hybrid systems, human-computer interaction, artificial intelligence anxiety, attitudes toward robots, flourishing


This article analyzes the subjective conditions for the integration of humans with humanoid robots. Interacting with each other, these agents create hybrid systems that deserve to be called positive, because interactions with technological artifacts contribute to the optimal functioning of users. An online survey of 364 respondents tested the relationship between individuals' anthropocentric beliefs and their attitudes toward, and interactions with, humanoid robots. This relationship proved positive and was mediated by two facets of artificial intelligence (AI) anxiety: the perception of AI-driven agents as scary and intimidating, and the fear of their rapid expansion (e.g., in the labor market). The considerable strength of this relationship is an important clue for designers of hybrid systems (e.g., in the workplace), especially in conservative societies whose members are sensitive to the position of humans in the hierarchy of entities.
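The mediation logic reported above (anthropocentrism predicting negative attitudes toward humanoid robots via AI anxiety) can be illustrated with a minimal product-of-coefficients sketch. This is a hypothetical demonstration on simulated data, not the study's dataset or analysis code; the variable names and effect sizes are assumptions chosen only to make the structure of such a model concrete.

```python
# Hypothetical mediation sketch: predictor -> mediator -> outcome.
# Simulated data; variable names and coefficients are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n = 364  # sample size matching the reported survey

# Simulated standardized scores consistent with a positive mediated relationship.
anthropocentrism = rng.normal(size=n)
ai_anxiety = 0.5 * anthropocentrism + rng.normal(size=n)  # path a
neg_attitudes = (0.4 * ai_anxiety + 0.2 * anthropocentrism
                 + rng.normal(size=n))                    # paths b and c'

def ols_coefs(X, y):
    """Least-squares coefficients for y regressed on columns of X (with intercept)."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

# Path a: predictor -> mediator
a = ols_coefs(anthropocentrism, ai_anxiety)[1]
# Path b: mediator -> outcome, controlling for the predictor
b = ols_coefs(np.column_stack([anthropocentrism, ai_anxiety]), neg_attitudes)[2]

indirect = a * b  # product-of-coefficients estimate of the mediated effect
print(round(indirect, 3))
```

In practice, the significance of such an indirect effect is usually assessed with bootstrapped confidence intervals rather than the point estimate alone.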

