Output list
Journal article
Published 2026
Journal of Metaverse, 6, 57–70
Gamification plays a pivotal role in enhancing user engagement in the Metaverse, particularly among Generation Z users who value autonomy, immersion, and identity expression. However, current research lacks a cohesive framework tailored to designing gamified social experiences in immersive virtual environments. This study presents a framework-oriented systematic literature review, guided by PRISMA 2020 and SPIDER, to investigate how gamification is applied in the Metaverse and how it aligns with the behavioral needs of Gen Z. From 792 screened studies, seventeen high-quality papers were synthesized to identify core gamification mechanics, including avatars, XR affordances, and identity-driven engagement strategies. Building on these insights, we propose the Affordance-Driven Gamification Framework (ADGF), a conceptual model for designing socially immersive experiences, along with a five-step design process to support its real-world application. Our contributions include a critical synthesis of existing strategies, Gen Z-specific design considerations, and a dual-framework approach to guide researchers and practitioners in developing emotionally engaging and socially dynamic Metaverse experiences.
Journal article
A systematic review of multi-modal large language models on domain-specific applications
Published 2025
Artificial Intelligence Review, 58, 12, 383
While Large Language Models (LLMs) have shown remarkable proficiency in text-based tasks, they struggle to interact effectively with the real world without perceiving other modalities such as vision and audio. Multi-modal LLMs, which integrate these additional modalities, have become increasingly important across various domains. Despite the significant advancements and potential of multi-modal LLMs, there has been no comprehensive PRISMA-based systematic review that examines their applications across different domains. The objective of this work is to fill this gap by systematically reviewing and synthesising the quantitative research literature on domain-specific applications of multi-modal LLMs. This systematic review follows the PRISMA guidelines to analyse research literature published after 2022, the year OpenAI released ChatGPT (based on GPT-3.5). The literature search was conducted across several online databases, including Nature, Scopus, and Google Scholar. A total of 22 studies were identified, with 11 focusing on the medical domain, 3 on autonomous driving, and 2 on geometric analysis. The remaining studies covered a range of topics, with one each on climate, music, e-commerce, sentiment analysis, human-robot interaction, and construction. This review provides a comprehensive overview of the current state of multi-modal LLMs, highlights their domain-specific applications, and identifies gaps and future research directions.
Journal article
Measuring the digital divide: A modified benefit-of-the-doubt approach
Published 2023
Knowledge-Based Systems, 261, 110191
In this paper, a modified composite index is developed to measure digital inclusion for a group of cities and regions. In contrast to the existing benefit-of-the-doubt (BoD) composite index literature, the developed model treats the subindexes as non-compensatory. This modeling choice yields three important properties: (i) all subindexes are taken into account when assessing the digital inclusion of regions and cannot be removed (substituted) from the composite index; (ii) in addition to an overall composite index (an aggregation of the subindexes), partial indexes (aggregated scores for each subindex) are also provided, so that weak performances can be detected more effectively than when only the overall index is measured; and (iii) compared with current BoD models, the developed model has improved discriminatory power. To demonstrate the developed model, we use the Australian Digital Inclusion Index as a real-world example.
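For orientation, the classical BoD composite index that this model modifies can be written as a linear program. This is the standard formulation from the BoD literature, not the paper's non-compensatory variant, and the notation is ours: y_{kj} is the normalized score of region k on subindex j, and w_j are the endogenous weights.

```latex
% Classical benefit-of-the-doubt (BoD) composite index (standard
% formulation; notation is ours). Region o is scored with the
% weights that are most favourable to its own performance.
\begin{align}
CI_o = \max_{w} \; & \sum_{j=1}^{J} w_j \, y_{oj} \\
\text{s.t.} \; & \sum_{j=1}^{J} w_j \, y_{kj} \le 1, && k = 1,\dots,K, \\
& w_j \ge 0, && j = 1,\dots,J.
\end{align}
```

Because each region picks its own most favourable weights, a strong subindex can fully compensate for a weak one (a weak subindex can even receive zero weight); this compensatory behaviour is what the paper's modification removes.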
Journal article
Published 2021
European Journal of Operational Research, 295, 1, 394–397
Ghasemi, Ignatius, and Rezaee (2019) ("Improving discriminating power in data envelopment models based on deviation variables framework," European Journal of Operational Research 278, 442–447) propose a procedure for ranking efficient units in data envelopment analysis (DEA) based on a deviation-variables framework. They claim that their procedure improves the discriminating power of DEA and offers an alternative to the super-efficiency model, which is well known to suffer from infeasibility, and to the cross-efficiency approach, which suffers from the presence of multiple optimal solutions. In this short note, however, we demonstrate that their procedure rests on an inappropriate use of deviation variables, yielding a ranking approach that does not meet their expectations and, as a result, an unreasonable ranking of decision making units (DMUs). We also show that deviation variables, if interpreted and used correctly, can lead to a cross-inefficiency matrix and approach.
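For reference, the super-efficiency model that the note cites as infeasibility-prone is, in its standard Andersen-Petersen input-oriented form (a textbook formulation, not taken from the note; notation is ours), the following:

```latex
% Input-oriented Andersen-Petersen super-efficiency model (standard
% form; notation is ours). DMU o is excluded from its own reference
% set, so efficient units can score above 1; under variable returns
% to scale (adding \sum_{k \ne o} \lambda_k = 1) the program can be
% infeasible for some efficient DMUs, which is the infeasibility
% problem the note refers to.
\begin{align}
\theta_o^{\mathrm{super}} = \min_{\theta, \lambda} \; & \theta \\
\text{s.t.} \; & \sum_{k \ne o} \lambda_k \, x_{ik} \le \theta \, x_{io}, && i = 1,\dots,m, \\
& \sum_{k \ne o} \lambda_k \, y_{rk} \ge y_{ro}, && r = 1,\dots,s, \\
& \lambda_k \ge 0, && k \ne o.
\end{align}
```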
Journal article
Integrated data envelopment analysis: Linear vs. nonlinear model
Published 2018
European Journal of Operational Research, 268, 1, 255–267
This paper develops a relationship between two models, one linear and one nonlinear, that have previously been proposed in data envelopment analysis (DEA) for the joint measurement of the efficiency and effectiveness of decision making units (DMUs). We show that a DMU is overall efficient under the nonlinear model if and only if it is overall efficient under the linear model. Comparing the two, we demonstrate that the linear model is an efficient alternative algorithm to the nonlinear model: it is more computationally efficient, it avoids the potential estimation error of the heuristic search procedure used in the nonlinear model, and it determines global rather than local optima. Using 11 data sets from published papers and 1000 simulated data sets, we explore and compare the two models. On the data set most frequently used in the published papers, the nonlinear model with a step size of 0.00001 requires solving 1,955,573 linear programs (LPs) to measure the efficiency of 24 DMUs, compared with only 24 LPs for the linear model. Similarly, for a very small data set of only 5 DMUs, the nonlinear model requires solving 7861 LPs with a step size of 0.0001, whereas the linear model needs just 5 LPs.
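To make the LP-count comparison concrete, the sketch below solves one standard CCR multiplier-form LP per DMU with SciPy, illustrating why a purely linear DEA approach needs exactly n LPs for n DMUs. It is a generic illustration under our own assumptions: ccr_efficiency, the toy data, and the plain CCR model are ours, not the paper's integrated efficiency-effectiveness model.

```python
# Minimal sketch: one input-oriented CCR multiplier-form LP per DMU.
# Illustrative only; not the paper's integrated model.
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y):
    """CCR efficiency scores. X: (n, m) inputs, Y: (n, s) outputs."""
    n, m = X.shape
    s = Y.shape[1]
    scores = np.empty(n)
    # Shared constraints: u.Y_k - v.X_k <= 0 for every DMU k,
    # over the stacked variable vector [u (s weights), v (m weights)].
    A_ub = np.hstack([Y, -X])
    b_ub = np.zeros(n)
    for o in range(n):  # exactly one LP per DMU
        c = -np.concatenate([Y[o], np.zeros(m)])             # max u.Y_o
        A_eq = np.concatenate([np.zeros(s), X[o]])[None, :]  # v.X_o = 1
        res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                      A_eq=A_eq, b_eq=[1.0],
                      bounds=[(0, None)] * (s + m), method="highs")
        scores[o] = -res.fun
    return scores

# Toy example: 5 DMUs, 2 inputs, 1 output -> exactly 5 LPs are solved,
# mirroring the 5-LP count reported for the linear model above.
X = np.array([[2., 3.], [4., 1.], [3., 3.], [5., 2.], [6., 4.]])
Y = np.ones((5, 1))
print(ccr_efficiency(X, Y))
```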