
Original research
Validation of a quantitative instrument measuring critical success factors and acceptance of Casemix system implementation in the total hospital information system in Malaysia
  1. Noor Khairiyah Mustafa1,2,
  2. Roszita Ibrahim1,
  3. Zainudin Awang3,
  4. Azimatun Noor Aizuddin1,4,
  5. Syed Mohamed Aljunid Syed Junid5
  1. 1Department of Public Health Medicine, Universiti Kebangsaan Malaysia Fakulti Perubatan, Cheras, Federal Territory of Kuala Lumpur, Malaysia
  2. 2Ministry of Health Malaysia, Putrajaya, Malaysia
  3. 3Faculty of Business Management, Universiti Sultan Zainal Abidin, Kuala Terengganu, Malaysia
  4. 4International Casemix Centre (ITCC), Hospital Universiti Kebangsaan Malaysia, Cheras, Kuala Lumpur, Malaysia
  5. 5Department of Public Health and Community Medicine, International Medical University, Kuala Lumpur, Federal Territory of Kuala Lumpur, Malaysia
  1. Correspondence to Dr Roszita Ibrahim; roszita@ppukm.ukm.edu.my

Abstract

Objectives This study aims to address the significant knowledge gap in the literature on the implementation of the Casemix system in the total hospital information system (THIS). The research focuses on validating a quantitative instrument to assess medical doctors’ acceptance of the Casemix system in Ministry of Health (MOH) Malaysia facilities using THIS.

Design A sequential explanatory mixed-methods study was conducted, starting with a cross-sectional quantitative phase using a self-administered online questionnaire that adapted previous instruments to the current setting based on the Human, Organisation, Technology-Fit and Technology Acceptance Model frameworks, followed by a qualitative phase using in-depth interviews. This article focuses on the quantitative phase.

Setting The study was conducted in five MOH hospitals equipped with THIS, one from each of five zones.

Participants Prior to the quantitative field study, rigorous procedures including content, criterion and face validation, translation, pilot testing and exploratory factor analysis (EFA) were undertaken, resulting in a refined questionnaire consisting of 41 items. Confirmatory factor analysis (CFA) was then performed on data collected from 343 respondents selected via stratified random sampling to validate the measurement model.

Results The study found a satisfactory Kaiser-Meyer-Olkin (KMO) value, a significant Bartlett’s test of sphericity, satisfactory factor loadings (>0.6) and high internal reliability for each construct. One item was eliminated during EFA, and the organisational characteristics construct was refined into two components. CFA confirmed unidimensionality, construct validity, convergent validity, discriminant validity and composite reliability. With the instrument’s validity, reliability and normality established, the questionnaire is validated and deemed operational.

Conclusion By elucidating the critical success factors and acceptance of Casemix, this research informs strategies for enhancing its implementation within the THIS environment. Moving forward, the validated instrument will serve as a valuable tool in future research evaluating the adoption of the Casemix system within THIS, addressing a notable gap in the current literature.

  • quality in health care
  • public health
  • health economics

Data availability statement

No data are available.


This is an open access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited, appropriate credit is given, any changes made indicated, and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/.


STRENGTHS AND LIMITATIONS OF THIS STUDY

  • The rigorous validation process of the questionnaire, including initial validation, translation, pre-testing and exploratory factor analysis using pilot test data, followed by confirmatory factor analysis using field data, enhances the reliability and validity of the instrument used for data collection.

  • The use of statistical techniques such as the Kaiser-Meyer-Olkin (KMO) measure, Bartlett’s test of sphericity, factor loadings, Cronbach’s alpha and various validity tests (unidimensionality, construct validity, convergent validity, discriminant validity) ensures the robustness of the analysis.

  • While the large sample size enhances generalisability to some extent, the study was conducted in only five selected hospitals in Malaysia; thus, the findings may not be representative of all hospitals in the country or other healthcare systems.

  • This study does not include other professional roles, such as paramedics, medical record officers, information technology officers and finance officers because the knowledge and involvement of these roles in the Casemix system are not comparable to that of medical doctors.

  • The findings of the study may be specific to the healthcare context in Malaysia and may not be directly applicable to other countries or healthcare systems with different sociocultural, organisational or technological characteristics.

Introduction

The global healthcare landscape is witnessing profound evolution driven by an array of challenges, including the rise of non-communicable diseases, the resurgence of communicable diseases, demographic shifts and escalating healthcare costs.1 Governments and healthcare authorities worldwide are under mounting pressure to navigate these complexities while optimising operational efficiency and ensuring equitable access to quality healthcare services.1 Within this context, Malaysia has emerged as a proactive player, spearheading innovative strategies to streamline healthcare delivery and bolster system performance. The Ministry of Health (MOH) Malaysia’s proactive stance is exemplified by its robust efforts to standardise and enhance the quality of healthcare services through the implementation of clinical standards and pathways based on international best practices.2 Notably, initiatives such as the hospital information system (HIS) and the Casemix system have been instrumental in revolutionising healthcare management practices and fostering a culture of continuous improvement.3–6

Background of Hospital Information System (HIS)

The HIS stands as a cornerstone of technological innovation in healthcare management, offering a comprehensive platform for efficient data collection, storage and processing.7 HIS responsibilities include managing shared information, enhancing medical record quality, overseeing healthcare quality and error reduction, promoting institutional transparency, analysing healthcare economics and reducing examination and treatment durations.8–13 In Malaysia, the adoption of HIS, categorised into the total hospital information system (THIS), intermediate hospital information system (IHIS) and basic hospital information system, has paved the way for the seamless integration of patient data, administrative tasks, financial transactions and appointment management into a single system within a hospital.14–19 The pioneering implementation of a fully integrated paperless system as a THIS facility at Hospital Selayang underscores Malaysia’s commitment to embracing cutting-edge technology to enhance healthcare delivery.20–22 Today, 19 out of 149 Malaysian hospitals have IT facilities.23 24 Despite challenges during implementation, the overall advantage of using a comprehensive system is invaluable.22 25–29

Background of Casemix system

The Casemix system is a global system that categorises patient information and treatments based on their types and associated costs, aiming to identify patients with similar resource needs and treatment expenses.30 31 It is widely used globally, for example in the USA, Western Europe, Australia, Eastern Europe and Asia, and plays a crucial role in hospital financing.32 33 Originating from Australia, it optimises resource utilisation, improves cost transparency and enhances healthcare service efficiency.34 35 However, its adoption in developing nations like Malaysia faces challenges due to technological constraints and resource limitations.23 36 37 The Malaysian diagnosis-related group (MalaysianDRG) Casemix system categorises patients based on healthcare costs, improving efficiency and resource allocation.38–40 This system enhances provider payment measurement, healthcare service quality, equity and efficiency, and assists policymakers in allocating funds to hospitals.24 41 The information from the MalaysianDRG is integrated into the executive information system, providing access to system outputs such as DRG, severity of illness, average cost per disease and the Casemix Index.38–40

Integration of Casemix within HIS

The integration of Casemix within HIS frameworks represents a paradigm shift in healthcare management, offering a unified platform for data-driven decision-making, performance monitoring and quality improvement initiatives.42 In the USA, there is a need to evaluate existing HIS against advanced hardware and software.42 As hospitals face public opposition due to rising medical expenses, governments are under pressure to manage healthcare costs more effectively.42 Casemix-based reimbursement policies aim to compensate medical expenses based on Casemix rather than on the number of services provided.42 Casemix-based systems consolidate clinical, administrative and financial data within a single system, but they are multifaceted and require organisational restructuring and educational initiatives for successful implementation.33 Strategies such as providing feedback to clinicians and integrating decentralised databases into the HIS are crucial for ensuring data credibility and accuracy.33 Transitioning from traditional medical record management to health information management requires careful planning and adjustments given the lack of automation in the current HIS.33

Theoretical and conceptual framework

Multiple frameworks are commonly used to evaluate the acceptance and success attributes of technology systems. Noteworthy frameworks include the technology acceptance model (TAM), the DeLone and McLean Information Systems Success Model (ISSM), the HOT-Fit Evaluation Framework and the Unified Theory of Acceptance and Use of Technology (UTAUT). The TAM is a widely used framework for assessing the acceptability and success of technology systems, particularly HIS.43–47 It suggests that user perceptions of ease of use, usefulness and intention to use significantly influence system usage.43–47 The DeLone and McLean ISSM evaluates the effectiveness of information systems by examining the relationships between system quality, information quality, user satisfaction, individual impact, organisational impact and overall system success.48 49 The HOT-Fit Evaluation Framework, which evolved from the ISSM, evaluates the congruence of persons, organisations and technology within an information system, considering technological, organisational and human factors.12 50 The UTAUT extends the TAM by incorporating additional elements such as social influence, facilitating conditions and behavioural intention.43 51 52

By integrating these frameworks within the context of Casemix implementation within THIS, the investigators aim to assess critical success factors and address barriers to adoption and acceptance, facilitating seamless integration and maximising the potential of healthcare modernisation efforts. The investigators therefore opted to integrate the HOT-Fit and TAM frameworks as this study’s conceptual framework to meet the research’s specific objectives, scope and contextual considerations (see figure 1). HOT-Fit offers a comprehensive framework for examining the alignment between human, organisational and technological factors, while TAM provides a focused lens on individual-level technology acceptance dynamics.12 44–47 In the current study’s conceptual framework, the HOT-Fit framework covers technological constructs such as system, information and service quality, while the TAM framework covers human dimensions such as perceived ease of use, usefulness, intention to use and acceptance. The integration of these frameworks is crucial for achieving the study’s specific and general objectives, and these two frameworks are therefore deemed appropriate for this study. By contrast, UTAUT was not considered suitable for the current investigation because it broadens the existing TAM with additional external variables, increasing its scope and complexity, while ISSM was not selected because of its simplicity.43 51 52

Figure 1

Conceptual framework.

This study aims to evaluate the critical success factors (CSFs) and doctors’ acceptance of Casemix implementation within the THIS environment in order to better understand the issues MOH Malaysia facilities experience, fill a research gap on Casemix implementation and help shape plans for modernising healthcare. A comprehensive, multidimensional questionnaire was created to meet the study objectives, and this paper examines its development and validation. Exploratory factor analysis (EFA) is instrumental in uncovering underlying factors within observed variables to ensure precision and robustness, while confirmatory factor analysis (CFA) verifies the measurement model’s linkages and confirms that the theoretical model is valid, reliable and suitable for data collection.53–56 Given these merits, the current study used CFA to evaluate the measurement model’s validity. After the validation processes, structural equation modelling (SEM) was employed to estimate a structural model of how exogenous, mediating and endogenous constructs interrelate and to analyse direct, mediating and moderating effects relevant to the study’s goals and hypotheses. While the technology evaluation frameworks offer crucial insights, it should be noted that Casemix itself is designed to organise patient data and treatment costs rather than to analyse the acceptability and success of technology systems; moreover, evaluating Casemix adoption in THIS does not require a separate instrument for each system. A single instrument can assist healthcare organisations and policymakers in understanding the CSFs that facilitate the implementation and acceptance of the Casemix system, and guide the development of targeted strategies for seamless implementation, enhanced patient care, work efficiency and resource allocation. Therefore, a reliable and valid quantitative instrument is required to achieve these goals.

Methodology

Study design and ethical approval

Study design

This study employed a sequential explanatory mixed-methods design. The present article, however, reports only the exploration and development of the items and the reliability and validation of the quantitative instrument. Data collection for the quantitative pilot study took place from 1 to 14 February 2023, the quantitative field study from 1 April to 31 June 2023, the qualitative pilot study on 15 September 2023 and the qualitative field study from 17 October 2023 to 4 January 2024. The quantitative phase used a cross-sectional study design to gather data over a specified duration.53 57 58

Ethical approval

This study has obtained ethical approval from:

  1. The Medical Research Ethics Committee of the Faculty of Medicine, Universiti Kebangsaan Malaysia (JEP-2022–777), see (online supplemental file 1), and

  2. The Medical Research Ethics Committee of the Ministry of Health Malaysia (NMRR ID-22–02621-DKX), see (online supplemental file 2).

Study instrument

This study used a self-administered questionnaire to collect data on the CSFs and acceptance of Casemix in the THIS environment. The instrument was developed in Malay and English for better comprehension by respondents, since Malay is the national language of Malaysia in the geographical areas of the study. The questionnaire comprised 60 items divided into three sections, each comprising a limited number of constructs. Section 1a consists of 8 questions collecting demographic information such as age, gender, educational background and work experience in the MOH Malaysia and the current hospital. Section 1b assessed the comprehension/knowledge level of the Casemix system using 10 items. Section 2 addressed the perceived critical success factors of Casemix implementation in the THIS context, consisting of 37 items within seven constructs: system quality (SY, 4 items), information quality (IQ, 5 items), service quality (SQ, 5 items), organisational factors (O, 9 items), perceived ease of use (PEOU, 5 items), perceived usefulness (PU, 4 items) and intention to use (ITU, 5 items). Section 3 captures the outcome of the study, the user acceptance (UA) construct, which contains 5 items.

The study incorporates and modifies existing scholarly works rooted in the Human, Organisation, Technology-Fit (HOT-Fit) and TAM frameworks for sections 2 and 3. Both sections were evaluated using a 10-point interval Likert scale, which offers respondents a greater range of response possibilities aligned with their precise evaluation of a question.55 56 59 60 A score of 1 represents ‘strongly disagree’, while a score of 10 represents ‘strongly agree’. The constructs and components of the instrument were derived from previous research.12 43 44 48 50 61–64 The items represented eight constructs: SY, IQ, SQ, ORG, PEOU, PU, ITU and UA.

The constructs described in sections 2 and 3 underwent initial validation, reliability testing and EFA using pilot data, and CFA was then performed using field data. Details of the development, validation and reliability procedures of the instrument are provided in subsequent sections. To facilitate transparency and reproducibility, a blank copy of the measurement instrument developed and validated in this study is included as a supplementary file (see online supplemental file 3: Blank Copy of Quantitative Instrument).

Independent variables

As outlined in the conceptual framework (Subsection 1.6), the constructs examined in this study span the technology, organisation and human dimensions.

Technological factors

Constructs such as system quality (SY), information quality (IQ) and service quality (SQ) constitute the technological factors. Addressing system quality issues is imperative for fostering user acceptance and realising system benefits.43 Reliable and accurate systems with dependable functionality enhance user acceptance, while a user-friendly interface and seamless performance enhance the user experience. Integration with existing systems promotes acceptability and interoperability.43 44 Information quality, encompassing data security and privacy, is crucial in safeguarding patient data, bolstering user confidence and fostering system adoption.65 Service quality encompasses the support and assistance provided during and after system implementation, with practical training, responsive helpdesk support and ongoing maintenance contributing to user satisfaction and system success.51 66 67 These three constructs covering the technological dimension were adapted from the HOT-Fit framework.12 50

Organisational characteristics

Organisational dimensions, such as organisational structure and environment, can limit or facilitate the acceptance or implementation of technical advancements.68 The elements of the organisational dimension are among the most commonly surveyed attributes in studies of IT adoption in organisations.69 Previous research has identified relative benefit, centralisation, formalisation, top management support and perceived cost as essential organisational elements influencing an organisation’s decision to embrace current information systems technologies. According to Abdulrahman and Subramanian, management barriers are defined as a lack of efficient planning, a lack of trained people and limits linked to training courses.70 The management, technological, ethical-legal and financial barriers were all integrated into the organisational factor category in this study. Previous research has found that technology adoption rates are related to preparedness and impediments to readiness.71 Several studies have also shown that senior leaders play a critical role in the use of information systems at the organisational level.72 Direct involvement of senior executives in IS operations demonstrates the importance of IS and ensures their support and involvement in the overall performance of IS efforts in the organisation.73 Organisational environment and structure can influence user acceptance of information technology, underscoring the importance of organisational improvement initiatives to enhance user acceptance.74–77 Hence, this primary construct encompassing the organisational dimension was adapted from the HOT-Fit framework.12 50

Human factors

The TAM is a framework that consists of five fundamental elements: PEOU, PU, ITU, actual system use and external variables.78–81 PEOU is a subjective evaluation of a technology’s ease of use, influenced by usability, training and user assistance.78–81 PU quantifies the level of usefulness attributed to a technology, influenced by factors such as usefulness and compatibility with user needs and responsibilities.78–81 Intention to use (ITU) captures the user’s behavioural intention to adopt the technology, and external factors, such as organisational regulations, access and availability, can also influence the interactions within the model.78–81 External variables, such as individual variances, cultural influences and supportive environments, can either amplify or reduce the impact of perceived ease of use and usefulness on behavioural intention and actual use.78–81 The TAM has been a crucial paradigm for understanding technology acceptance and has significantly influenced research in information systems and technology adoption. The HOT-Fit evaluation framework, which focuses on system use and user satisfaction, is also suitable for this study.12 50 These two constructs are interconnected with PEOU and PU, as delineated by the TAM framework.78–81 For successful implementation of an information system, medical doctors must perceive it as easy to use (PEOU) through adequate training, user-friendly interfaces and intuitive system design.78–81 Healthcare providers should also perceive the system as useful (PU) to ensure successful implementation, highlighting its benefits such as improved efficiency, quality of care and cost control.78–81

Dependent variable

The only dependent variable in this study is acceptance, which is adapted from the TAM.44 45 82 Proctor et al present a pragmatic taxonomy of eight implementation outcomes: acceptability/acceptance, adoption, appropriateness, feasibility, fidelity, implementation cost, penetration and sustainability.64 Acceptability is a crucial aspect of implementation, referring to the acceptance of a specific intervention, practice, technology or service within a specific care setting.64 It can be measured from the perspective of various stakeholders, such as administrators, payers, providers and consumers.64 Ratings of acceptability are assumed to be dynamic and may differ during pre-implementation and throughout various stages of implementation. In the same literature, Proctor et al delineated examples of measuring provider and patient acceptability/acceptance, including case managers’ acceptance of evidence-based procedures in a child welfare system and patients’ acceptance of alcohol screening in an emergency department.64 The terms acceptability and acceptance are used interchangeably to describe implementation outcomes. Therefore, in this study, the researchers explore the acceptance of the Casemix system in the MOH’s THIS facilities.

Patients and public involvement

Participants in this study were medical doctors and this study did not involve any patients or the public. Hence, there was no patient or public involvement in this study.

Initial validation processes

The initial validation procedures were conducted to establish the content validity, criterion validity and face validity (pre-test) of the instrument for the field study.

Content validity

Content validity is significant when developing new measurement tools because it links abstract ideas with tangible and measurable indicators.83 This involves two main steps: identifying the all-inclusive domain of the relevant content and developing items that correspond to this domain.83 The Content Validity Index (CVI) is often used to measure this validity.84–86 Recent studies have demonstrated the content validity of assessment tools using the CVI.87–90 Guidance on calculating the CVI suggests that the number of experts reviewing an instrument should range from 2 to 20.84–86 91 92 Typically, the number of experts varies from 2 to 20 individuals.93 For the current study, two experts from the Hospital Financing (Casemix Subunit) at MOH Malaysia were selected, which is consistent with the number of experts recommended in the literature (see online supplemental file 4A).84 There are two types of CVI: I-CVI for individual items and S-CVI for overall scales.84–86 91 92 S-CVI can be calculated by averaging the I-CVI scores (S-CVI/Ave) or as the proportion of items rated as relevant by all experts (S-CVI/UA).84–86 91 92 Before calculating the CVI, relevance ratings are converted to binary scores: a rating was re-coded as 1 (scale of 3 or 4) or 0 (scale of 1 or 2), as indicated in online supplemental file 4B. Online supplemental file 4C shows the two experts’ item-scale relevance evaluations used for the CVI calculation. In this study, the experts validated the questionnaire contents, awarding scores of 3 or 4 for all items and resulting in S-CVI/Ave and S-CVI/UA scores of 1.00. In conclusion, a thorough methodological approach to content validation, based on current data and best practices, is essential to confirm the overall validity of an evaluation.
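
To make the CVI arithmetic concrete, the sketch below (in Python) derives I-CVI, S-CVI/Ave and S-CVI/UA from recoded relevance ratings; the item labels and ratings are illustrative only, not the study's actual expert scores.

```python
# Minimal sketch of the CVI calculations, assuming two experts rate each item
# on a 4-point relevance scale (1-2 recoded as 0, 3-4 recoded as 1).
# Item names and ratings below are hypothetical.

ratings = {              # item -> list of expert relevance ratings (1-4)
    "SY1": [4, 4],
    "IQ1": [3, 4],
    "PEOU1": [4, 3],
}

def i_cvi(item_ratings):
    """Proportion of experts rating the item 3 or 4 (ie, relevant)."""
    relevant = [1 if r >= 3 else 0 for r in item_ratings]
    return sum(relevant) / len(relevant)

i_cvis = {item: i_cvi(r) for item, r in ratings.items()}

# S-CVI/Ave: mean of the item-level CVIs.
s_cvi_ave = sum(i_cvis.values()) / len(i_cvis)

# S-CVI/UA: proportion of items rated relevant by all experts (universal agreement).
s_cvi_ua = sum(1 for v in i_cvis.values() if v == 1.0) / len(i_cvis)

print(i_cvis, round(s_cvi_ave, 2), round(s_cvi_ua, 2))
```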

Criterion validity

Criterion validity refers to the degree of correlation between a measure and other established measures of the same construct.62 88 89 94 95 An academic statistics expert and an expert in questionnaire development and validation procedures reviewed the criterion validity; this can be reviewed in online supplemental file 5. Subsequently, a certified translator performed a precise back-to-back translation of the instrument from English to Malay.

Face validity

A face validity assessment, also known as pre-testing, was undertaken to evaluate the questionnaire’s consistency of responses, clarity, comprehensibility, ambiguity and overall design.62 90 96 97 Following the content and criterion validation, 11 respondents were purposefully selected for this pre-test. They had to meet the same eligibility criteria as those stipulated for participants in the field study, and they were subsequently excluded from participation in the quantitative field study; the study population is described further in Subsection 2.6.2. The assessment was carried out through the online Google Form version of the questionnaire. Before commencing the pilot study and fieldwork, the researchers acknowledged and resolved the concerns raised.97 The face validity result has been uploaded as online supplemental file 6.

Quantitative pilot test and EFA

The pilot study was conducted at a Federal Territory hospital in Malaysia, Hospital W. The pilot study population possessed characteristics similar to those of the participants in the subsequent quantitative field study, and these respondents were excluded from participation in the field study. A minimum of 100 samples was used to ensure valid EFA results, a threshold supported by several methodological studies and texts on research and validation procedures.54–56 97–99 To account for a projected drop-out rate of 20%, the minimum sample size for this preliminary pilot study was therefore determined to be 125 medical doctors.100 The research was conducted without participant or public involvement in the design, conduct, reporting or dissemination strategies. The data collection method was the same as in the field study, using an online Google Form questionnaire. Participants were asked to scan a Google Form link or QR code to access the information sheets, consent forms and online questionnaire. Each participant was notified that their information would be kept private, that their anonymity would be retained, that their data would be used solely for the study and that they could withdraw at any time.

The pilot study used EFA to assess data measuring a collection of latent constructs. EFA is a method that generates more accurate results when each shared component is represented by many measured variables, whether exogenous or endogenous constructs.54–56 97 98 101–103 The collected data were used to identify and quantify the dimensionality of the items assessing each construct.53–56 59 60 104 EFA is essential to determine whether items in a construct produce dimensions distinct from those found in previous studies.53–56 59 60 104 The dimensionality of factors may change as they are transported from other domains to a new research topic, and fluctuations in the population’s cultural heritage, socioeconomic status and the passage of time may also affect dimensionality. The EFA methodology uses principal component analysis (PCA) to reduce the data, although PCA does not efficiently distinguish common from unique variance.97 98 PCA is indicated when there is no known theoretical framework or model, and it is used to create the first solutions in EFA. The four retention requirements for PCA were: (1) components with eigenvalues greater than one, (2) factor loadings greater than 0.60 for practical relevance, (3) no item cross-loadings greater than 0.50 and (4) at least three items per retained factor.97 98 The data’s eligibility for factor analysis was determined using the Kaiser-Meyer-Olkin Measure of Sampling Adequacy (KMO) of >0.6 and Bartlett’s test of sphericity.55 56 105–107 A significant Bartlett’s test result (p<0.05, here p<0.001) indicates suitability for factor analysis.53–56 107 The scree plot was also used to determine the number of components to retain.53–56
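
As a rough illustration of these screening steps, the sketch below uses the open-source Python package factor_analyzer rather than the SPSS workflow used in the study; the data file, DataFrame name and item columns are hypothetical, and the nine-component solution mirrors the scree-plot result reported later.

```python
# Sketch of the EFA screening steps, assuming the pilot responses are in a
# pandas DataFrame `df` with one column per questionnaire item (10-point ratings).
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import calculate_kmo, calculate_bartlett_sphericity

df = pd.read_csv("pilot_responses.csv")   # hypothetical data file

# Sampling adequacy and sphericity checks (KMO > 0.6, Bartlett p < 0.05).
chi_square, p_value = calculate_bartlett_sphericity(df)
kmo_per_item, kmo_overall = calculate_kmo(df)
print(f"KMO = {kmo_overall:.3f}, Bartlett chi2 = {chi_square:.1f}, p = {p_value:.4f}")

# Principal component extraction with varimax rotation; the number of components
# is chosen from eigenvalues > 1 and the scree plot (nine in this study).
fa = FactorAnalyzer(n_factors=9, rotation="varimax", method="principal")
fa.fit(df)
loadings = pd.DataFrame(fa.loadings_, index=df.columns)

# Retention rule used above: primary loading > 0.60 (and no cross-loading > 0.50).
primary = loadings.abs().max(axis=1)
flagged_for_removal = primary[primary < 0.60].index.tolist()
print("Items below the 0.60 loading threshold:", flagged_for_removal)
```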

Quantitative field study

Study location

The present study gathered data from five hospitals situated in various Malaysian zones—South, North, West, East and East Malaysia—that are outfitted with the Total Hospital Information System (THIS) and Casemix system. The study used cluster sampling to select study sites in Malaysia, dividing the country into five distinct clusters. Five hospitals that had successfully implemented Casemix for at least 3 years were chosen to represent different regions of Malaysia. Hospital N was selected for the northern region, Hospital E for the eastern region, Hospital S for the southern region and Hospital W for the central/western region. Hospital EM was chosen for East Malaysia. Cluster sampling is suitable when the research encompasses a vast geographical expanse.

Target population for the study

The target population for this study was medical doctors by profession working in hospitals under MOH in 2023. The study collectively obtained a sampling frame of 3580 medical doctors by profession, encompassing hospital directors, deputy directors (medical division), consultants/specialists, medical officers and house officers from the five selected hospitals. These doctors should fulfil the inclusion and exclusion criteria of this study as follows:

Inclusion criteria
  1. Permanent or contract-of-service medical doctors posted to the current participating hospital.

  2. Have at least 3 months’ working experience in the current participating hospital.

  3. Agree to participate in the study.

Exclusion criteria
  1. Attachment medical doctors.

  2. Refuse to participate in the study.

The study population for the face validation/pre-test and the pilot test has characteristics similar to those of the study population in the field study. The pre-test and pilot-test samples were also excluded from the field study sample. Participants were given surveys to complete at their own pace, without fear or pressure.

Sample size and sampling method

The target population was selected using proportionate stratified random sampling, dividing the total population into homogeneous groups.16 108–110 Proportionate stratified random sampling is a probability sampling method that includes separating the entire population into similar groups (strata) to conduct the sampling process.

The authors were mindful of the sample size needed for CFA validation of the measurement model; however, current studies offer no consensus on the appropriate sample size. For models with few indicators, a minimum sample size of 100–150 respondents is often needed,111–113 whereas precise CFA analysis may require 250–500 respondents.114 115 Some authors offer the following suggestions for the sample size requirement: (a) a sample size to parameter ratio of 5 or 10, (b) ten cases per observation/indicator and (c) 100 cases/observations per group for multigroup modelling.116–118 The researchers opted to use five times the number of indicators in the questionnaire because the number of indicators for the latent variables is large.116 119 The final questionnaire contains 59 items, requiring a sample size of 295. Allowing for an anticipated 20% drop-out rate, the sample size was adjusted using the formula n_total = n_required/(1 - d), where n_total is the adjusted sample size, n_required is the minimum required sample and d is the drop-out rate, yielding a minimum sample size of 369.100 This is also corroborated by other research: because the conceptual framework in this study consists of eight constructs, each with at least four items, the required sample size is 300, and with an additional 20% expected drop-out rate the calculated sample size was 375.56 97 100 102 As a result, the researchers distributed questionnaires to 375 participants using proportionate stratified random sampling according to their professional roles, as suggested.56 97 102 116
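
As a simple check of the drop-out adjustment above, the following sketch reproduces the arithmetic; the function name is illustrative.

```python
# Inflate a required sample size to allow for anticipated drop-out: n_total = n_required / (1 - d).
import math

def adjust_for_dropout(n_required: int, dropout_rate: float) -> int:
    """Return the drop-out-adjusted sample size, rounded up to a whole participant."""
    return math.ceil(n_required / (1 - dropout_rate))

print(adjust_for_dropout(295, 0.20))  # 5 x 59 items           -> 369
print(adjust_for_dropout(300, 0.20))  # 8-construct rule of thumb -> 375
```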

Data collection methods

The data collection method for the quantitative field study is similar to that used in the quantitative pilot study, as elaborated in Subsection 2.4.4. However, the link to the participant information sheet (PIS) and informed consent forms was included on the first page of the questionnaire (https://bit.ly/3F8IF2e). The participant information sheet and informed consent forms are attached as online supplemental files 7 and 8, respectively. As in the quantitative pilot study, respondents could withdraw freely from the survey midway. Participants were assured that their information would be kept confidential and that their anonymity would be strictly protected during the field study. Participants who wished to take part had first to consent and then complete all survey questions, and they were instructed to contact the lead investigator with any questions. Participants had up to 2 weeks to complete and submit the online questionnaire. All survey information was linked to a research identification number; for example, study identifications 001 to 375 on the subject data sheets were used instead of the subjects’ names. The appropriate senior management, Casemix System Coordinators (CSCs), the departments’ Casemix Coordinators and Heads of Department were contacted 3 days before the data-gathering session concluded. All measures were taken to safeguard participants’ privacy and anonymity.

Data analysis using Confirmatory Factor Analysis (CFA)

Once the EFA had been completed, the constructs and emerging components of the revised conceptual framework were used in the field study. Hair et al and Awang et al describe two distinct models in the field study: the measurement model used in the CFA technique and the structural model used to estimate paths using SEM.54–56 97 99 This study paradigm has the features of a confirmatory form of research, with a focus on behavioural components. This type of SEM is known as covariance-based SEM (CB-SEM) and reflects theory testing or theory-driven research that integrates existing theories to replicate an established theory in a new domain, confirming a pre-specified relationship.54–56 97 99

The SPSS Analysis of Moment Structures (AMOS) V.24.0 software was used in CFA to evaluate the unidimensionality, validity and reliability of the measurement model.53 54 56 The instrument’s normality is also assessed through CFA.53 54 56 There are two ways to validate measurement models: pooled and individual CFA.54–56 120 121 The higher degree of freedom of pooled confirmatory factor analysis (Pooled-CFA) enables model identification even when some constructs have fewer than four components.54–56 120 121 Missing data were omitted from the analysis. To ensure unidimensionality, the permissible factor loading for each latent construct is calculated, and items that cannot fit into the measurement model because of low factor loading are excluded.53 55 56 97 122–125 The cut-off value for an acceptable factor loading varies depending on the research goal; this study used a threshold value of 0.5 to minimise item deletion.53 55 56 97 121 122 126 Convergent validity is assessed by calculating the average variance extracted (AVE) for each construct.53 55 56 97 111 122 Meanwhile, composite reliability (CR) assesses the internal consistency of the indicators underlying a construct in structural equation modelling.53 55 56 97 122 A latent construct’s CR must be at least 0.6 to achieve composite reliability.53 55 56 97 122
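
To make the AVE and CR thresholds concrete, the sketch below computes both quantities from a set of standardised factor loadings for one construct; the loadings shown are illustrative, not the study's estimates.

```python
# Minimal sketch of the AVE and CR formulas used for convergent validity and
# composite reliability, given a construct's standardised loadings (hypothetical values).

def ave(loadings):
    """Average variance extracted: mean of squared standardised loadings (threshold >= 0.5)."""
    return sum(l ** 2 for l in loadings) / len(loadings)

def composite_reliability(loadings):
    """CR = (sum of loadings)^2 / ((sum of loadings)^2 + sum of error variances); threshold >= 0.6."""
    numerator = sum(loadings) ** 2
    error = sum(1 - l ** 2 for l in loadings)
    return numerator / (numerator + error)

example = [0.82, 0.78, 0.88, 0.75]   # hypothetical standardised loadings for one construct
print(round(ave(example), 3), round(composite_reliability(example), 3))
```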

Several fitness indicators have been reported among scholars. One recommendation is to report fit indices for absolute fit (chi-squared goodness-of-fit (χ2) and standardised root mean square residual (SRMR)), parsimony-corrected fit (root mean square error of approximation (RMSEA)) and comparative fit (Comparative Fit Index (CFI) and Tucker-Lewis Index (TLI)).54–56 99 123 124 126–129 Scholars advise using at least one index from each of the three fitness categories: absolute fit, incremental fit and parsimonious fit.54–56 123 124 126–129 Model fit was judged against a set of cut-off values: RMSEA values of 0.05 to 0.10, CFI >0.90 and Chisq/df <5.00, which would imply a reasonable fit.53–56 126 129–131
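
For readers who wish to reproduce this kind of pooled measurement-model assessment outside AMOS, the sketch below specifies a simplified CFA in lavaan-style syntax using the open-source Python package semopy; the construct names, item names and data file are hypothetical, the second-order ORG structure is reduced to its two first-order components, and the exact fit-statistic column names may differ slightly between semopy versions.

```python
# A minimal pooled-CFA sketch using semopy as a stand-in for the AMOS workflow.
import pandas as pd
import semopy

model_desc = """
SY  =~ SY1 + SY2 + SY3 + SY4
IQ  =~ IQ1 + IQ2 + IQ3 + IQ4 + IQ5
STR =~ O2 + O3 + O4 + O5
ENV =~ O6 + O7 + O8 + O9
"""

field_df = pd.read_csv("field_responses.csv")   # hypothetical: one column per item
model = semopy.Model(model_desc)
model.fit(field_df)

# Global fit measures (chi-square, CFI, TLI, RMSEA, ...) to compare against the
# thresholds above: RMSEA < 0.08, CFI > 0.90, chi2/df < 5.
stats = semopy.calc_stats(model)
print(stats.T)

# Parameter estimates, including the factor loadings used for the unidimensionality check.
print(model.inspect())
```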

Results

Findings for the pilot test through exploratory factor analysis

Out of the required minimum sample size of 125, a total of 106 participants took part in the quantitative pilot study, giving an 84.8% response rate. According to Hair et al and Awang et al, at least 100 samples are needed to conduct an EFA.54–56 97 However, allowing for a potential drop-out rate of 20%, the minimum required sample size for this pilot study was 125. The researchers performed an EFA to find the primary dimensions of the latent constructs represented by 42 items before conducting the CFA. EFA uses PCA as the extraction method to reduce the data and generate a hypothesis or model without pre-existing preconceptions about the number or nature of the variables.54–56 97 132 The EFA deemed indicators above 0.60 significant, and indicators loading onto the same component were combined to match the measurement model.97 The EFA results feed into the measurement model (for CFA) and the structural model (for path estimation) of SEM.54–56 97 99 EFA was used to evaluate and appraise the items measuring each construct, while CFA was used to validate the measurement.12 43 44 50 61 EFA and CFA used pilot and field study data, respectively. EFA selects factors for retention or removal using PCA and varimax rotation; varimax is a popular orthogonal factor rotation approach that clarifies the factor structure.53 55 56 97 122 The extraction reduced the organisational factors (O) construct from nine to eight items: one item, ‘Organisational competency to provide the resources for the implementation of the Casemix system in THIS setting,’ did not reach the factor loading of 0.6 and was therefore removed,55 97 see table 1.

Table 1

Factor loading of EFA with PCA and varimax rotation

To prepare for the next stage, the researcher reorganised the items into their respective components and began data collection for the field study. The EFA results also reveal that the two components of the organisational characteristics (O) construct were subsequently named organisational structure (STR) and organisational environment (ENV).53 55 56 97 122 The 41-item instrument used in the field study was analysed with Cronbach’s alpha, ensuring its internal reliability for the field study,53–56 97 133 see table 2 below.

Table 2

The number of items for each construct before and after EFA and Cronbach’s alpha

Consolidating correlated variables was EFA’s primary goal. EFA established eight constructs from the pilot study data, in line with the researcher’s conceptual framework (see figure 1).53–55 The overall results of the KMO and Bartlett’s sphericity tests for all constructs are shown in table 3. The KMO value was 0.859, which is larger than 0.6. Bartlett’s test of sphericity was statistically significant, with a p value <0.001, well below the 0.05 threshold.53 55 56 97 122 Therefore, it was appropriate to proceed with further analysis.

Table 3

Results of the KMO and Bartlett’s test of sphericity

The amount of variance accounted for, referred to as total variance explained (TVE), is shown in table 1 (online supplemental file 9).53–56 97 Each component had an eigenvalue larger than 1, and the TVE was 84.07%, exceeding 60%.53–56 97 A TVE of less than 60% would indicate that the existing items are inadequate for accurately assessing the constructs and that the researcher should consider incorporating more items; however, this was not the case in the present study.

The EFA approach also includes the scree plot, from which the researcher can ascertain the number of components by observing the distinct slopes.53–56 97 The scree plot exhibits nine distinct slopes, as shown in figure 1 (online supplemental file 9). Hence, the EFA identified a total of nine components.

Cronbach’s alpha was calculated to measure internal reliability. Internal reliability assesses how well the selected items measure the same construct.53–56 97 133 All constructs exceeded a Cronbach’s alpha of 0.7. Hence, this instrument is reliable for use in the field study.
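
As an illustration of this reliability check, the sketch below computes Cronbach's alpha for the items of one construct; the DataFrame, column names and ratings are hypothetical.

```python
# Minimal sketch of Cronbach's alpha for one construct, assuming `items` is a
# pandas DataFrame whose columns are that construct's questionnaire items.
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    k = items.shape[1]                                  # number of items
    item_var_sum = items.var(axis=0, ddof=1).sum()      # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)           # variance of the summed scale
    return (k / (k - 1)) * (1 - item_var_sum / total_var)

# Hypothetical 10-point ratings from five respondents on three items:
items = pd.DataFrame({"SY1": [8, 9, 7, 8, 9],
                      "SY2": [7, 9, 6, 8, 9],
                      "SY3": [8, 8, 7, 7, 9]})
print(round(cronbach_alpha(items), 3))   # values above 0.7 are considered acceptable
```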

Findings for the field study through the confirmatory factor analysis

The final measurement tool for the field study comprised the 41 items retained from the EFA procedure. To adequately address the complexity of the quantitative instrument for the field study, the researchers determined that a minimum of 300 samples was necessary for CFA.97 An additional 20% drop-out rate resulted in a minimum sample size of 375 individuals for the field study. Of this sample, 343 participants responded, indicating a response rate of 91.5%.100 No missing data were reported.

In this study, CFA was used to validate the factor loadings and the measurement model, allowing the researcher to test a theory or model. Unlike EFA, CFA is a form of structural equation modelling that makes prior assumptions about the number of factors and about which factor structure best fits the underlying theory.53–56 97 Whereas EFA relied mainly on the factor loadings, CFA considers both factor loadings and fitness indices, and researchers must confirm that both meet the required standards. A properly specified measurement model thus helps researchers interpret their data.

Validity, unidimensionality and reliability were necessary for all latent construct assessment models.53 55 56 97 122 The latent construct measurement model needed convergent, construct and discriminant validity.53 55 56 97 122 AVE assesses convergent validity, while measurement model fitness indicators determine construct validity.54–56 Composite reliability (CR) was used to calculate instrument reliability since it is considered superior to Cronbach’s alpha.54–56 133

Figure 2 shows that Pooled-CFA validated all latent constructs in the measurement model simultaneously. The constructs were linked using double-headed arrows (covariances) to execute the Pooled-CFA. Pooled-CFA’s increased degrees of freedom allow model identification even when some constructs have fewer than four components.54–56 Pooled-CFA was employed in this investigation since only one construct has two components.

Figure 2

Result from Pooled-CFA procedure.

Unidimensionality

Unidimensionality means that a set of variables can be explained by a single construct.7–9 It is achieved when all items measuring a construct have acceptable factor loadings.54–56 Items with low factor loadings were removed from the measurement model until the fit indices were met.53–56 97 134 Table 4 summarises the retained items, all with factor loadings >0.6.54–56

Table 4

Factor loading of all items, composite reliability (CR) and average variant extracted (AVE) and normality testing

Validity

Convergent validity

Convergent validity concerns the degree to which a group of indicators measures the same construct.54–56 97 135 It assesses the strength of correlations between items that are hypothesised to measure the same latent construct.56 97 The average variance extracted (AVE) statistic can be used to verify the convergent validity of a construct: if the construct’s AVE is more than 0.5, it possesses convergent validity.53 56 97 136 Table 4 shows that the AVE for all constructs was more than 0.5. The organisational characteristics/factors (ORG) construct showed the highest AVE (0.857) and the environment component the lowest (0.699). The model is, therefore, convergently valid.

Construct validity

Construct validity is attained when all model fitness indices meet the criteria.55 56 97 Construct validity was established using absolute, incremental and parsimonious fit indices.55 56 97 Some researchers recommend using one fitness index from each model fit category.55 56 97 This study employed RMSEA, CFI and the normed chi-square (χ2/df) as its main indicators. According to table 5, the instrument met all three fitness indices: (1) the RMSEA value (0.054) was below the threshold of 0.08, confirming the absolute fit index; (2) the instrument achieved the incremental fit index category by obtaining a CFI value above 0.90; and (3) the parsimonious fit index, measured using Chisq/df, yielded a value of 2.014, which is below the accepted value of 3.0.55 56 97 This study thus established the instrument’s construct validity.

Table 5

Fitness index summary

Discriminant validity

The survey’s discriminant validity was tested to ensure that no redundant constructs were present in the model. Discriminant validity is achieved when the square root of the average variance extracted (AVE) for each construct is greater than its correlation with the other constructs.55 56 136 Table 6 summarises the discriminant validity index, which shows that all constructs met the threshold.55 56 136 The diagonal values (bold font) in this table were greater than all other values in their rows and columns, indicating discriminant validity for all constructs.55 56 136

Table 6

Discriminant Validity Index
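
As a concrete illustration of the Fornell-Larcker check summarised in table 6 above, the sketch below places the square root of each construct's AVE on the diagonal of the construct correlation matrix and verifies that it exceeds the off-diagonal values; the construct names, AVEs and correlations are illustrative, not the study's estimates.

```python
# Sketch of the Fornell-Larcker discriminant-validity check with hypothetical values.
import numpy as np
import pandas as pd

constructs = ["SY", "IQ", "SQ"]
ave = pd.Series([0.72, 0.80, 0.70], index=constructs)            # hypothetical AVEs
corr = pd.DataFrame([[1.00, 0.55, 0.48],
                     [0.55, 1.00, 0.52],
                     [0.48, 0.52, 1.00]], index=constructs, columns=constructs)

fl = corr.to_numpy().copy()
np.fill_diagonal(fl, np.sqrt(ave.to_numpy()))                    # sqrt(AVE) on the diagonal
fl_matrix = pd.DataFrame(fl, index=constructs, columns=constructs)

# Discriminant validity holds if every diagonal entry exceeds the other values in its row/column.
ok = all(fl_matrix.loc[c, c] > fl_matrix.loc[c].drop(c).max() for c in constructs)
print(fl_matrix.round(3))
print("Discriminant validity:", ok)
```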

Composite reliability

Composite reliability (CR) is used to estimate model reliability.55 56 97 A CR between 0.6 and 0.7 is acceptable.55 56 97 Table 4 above shows that the instrument’s composite reliability exceeded 0.6 for all constructs. The environment component had the lowest CR (0.903), while the information quality construct had the highest (0.954). Therefore, the instrument’s composite reliability is achieved.

Normality assessment

The distributional normality of each item measuring the constructs was assessed. All skewness values must fall within the usual range, with skewness between −1.5 and 1.5 considered acceptable.56 97 The skewness values of all model components lie between −1.5 and 1.5, indicating a normal distribution.56 97 The instrument’s data distribution therefore met the normality condition, as shown in table 4.
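
A minimal sketch of this per-item skewness screen is given below; the data file name is hypothetical, and the ±1.5 range follows the threshold stated above.

```python
# Flag questionnaire items whose skewness falls outside the +/-1.5 range.
import pandas as pd
from scipy.stats import skew

field_df = pd.read_csv("field_responses.csv")        # hypothetical item-level data
skewness = field_df.apply(skew)                       # column-wise skewness
flagged = skewness[(skewness < -1.5) | (skewness > 1.5)]
print("Items outside the acceptable skewness range:", list(flagged.index))
```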

Discussion

This study focused on redeveloping and validating an instrument to gauge medical doctors’ intention to use and acceptance of the Casemix system within the total hospital information system (THIS) context. The EFA and CFA indicated that the instrument was well designed and validated for assessing medical practitioners’ acceptance of the Casemix system in the THIS setting.55 56 97 The acceptance of the Casemix system among medical doctors in hospital information systems was found to be influenced by various factors, including system and service quality, perceived ease of use and usefulness, relevance to clinical practice, training and good organisational support, impact on efficiency and productivity, and confidence in information quality involving data accuracy and security. Healthcare organisations must address these components to gain physician acceptance; doing so can optimise Casemix system use, improving patient care and results.43 44 137

Principal findings

Findings of Exploratory Factor Analysis (EFA)

The pilot test data were analysed using EFA, which helps researchers understand complex datasets and discover correlations among observed variables.55 56 97 EFA reduces variable dimensions by identifying common patterns, shaping the fundamental factors that influence observable variables and grouping related variables.122 126 138 It simplifies model design by computing factor loadings, which indicate the intensity and direction of the relationships between factors and observed variables. EFA finds underlying components in a dataset, while CFA analyses and confirms an EFA-proposed factor structure.55 56 97

All constructs underwent KMO and Bartlett’s sphericity tests, with all constructs having KMO values over 0.6.55 56 105–107 The scree plot, as part of the EFA, was used to count components and indicated nine components across the 42 items.55 56 105–107 The study found that one construct should now have two parts, mainly due to demographic changes, particularly socioeconomic status and education. Component 1 explained 14.115% of the construct variance, while component 9 explained 6.610%. All constructs together had a total variance explained (TVE) of 84.07%, exceeding the minimum threshold of 60%.55 56 60 112 129

The EFA discovered nine components, including those formed by the organisational factor items (O1-O9).43 45 50 139 Of the 42 items, 41 had factor loadings above 0.6, requiring item O1 to be eliminated.53 55 56 97 122 Only the organisational factors (O) construct was reduced, from nine items to eight, following extraction. The remaining seven constructs each comprised a single component with no additional components, mirroring the original constructs of the HOT-Fit and TAM frameworks.

The study stresses tool dependability and internal consistency, using markers such as Cronbach’s alpha (α), person reliability, person measure and valid responses.133 140 A Cronbach’s alpha coefficient of 0.7 or above is acceptable in social science and other studies.53 138 141 142 Internal reliability reflects how well the selected items measure the same construct.53–56 97 98 133 143 The researcher reordered the questionnaire items for the field investigation, and CFA authenticated and confirmed all eight constructs on the field data, as elaborated further in Subsection 4.1.2.

Findings of Confirmatory Factor Analysis (CFA)

Once the pilot data had been assessed and the EFA completed, the final questionnaire was used in the quantitative field study. A further procedure, CFA, was then conducted to validate the questionnaire based on the field study data. The CFA validated the instrument’s convergent, construct and discriminant validity; unidimensionality, composite reliability and normality evaluations were also needed to establish whether the instrument’s items are valid.53–56 97 The findings of this study demonstrate that the quantitative instrument has been validated and proven reliable for assessing medical practitioners’ intention to use and accept the Casemix system within the context of THIS. Using EFA and CFA is imperative for ensuring the instrument’s validity, reliability and trustworthiness.53–56 97

Through EFA, the organisational factors (O) construct emerged as two components. The construct was renamed organisational characteristics (ORG) in the measurement model, and the newly emerged components were named organisational structure (STR) and organisational environment (ENV). Measurement models refer to the implicit or explicit models that relate the latent variable to its indicators.55 56 97 The organisational characteristics (ORG) construct is assessed as a second-order construct because of the emerged components. When dealing with a complex framework, researchers can either conduct the CFA individually for each second-order construct and then follow with Pooled-CFA through item parcelling, or employ Pooled-CFA directly.55 56 The use of Pooled-CFA is beneficial because of its improved efficiency, effectiveness and ability to address identification difficulties.55 56 Although there are many constructs in this study, the measurement model includes only one second-order construct, the ORG construct with its two emerged components; the other seven constructs are first-order constructs, each consisting of a maximum of five items. Therefore, a direct Pooled-CFA was employed.55 56

This study uses CFA to validate the factor loadings and the measurement of a theory or model.53–56 97 CFA is a form of structural equation modelling that makes assumptions and expectations about the number of factors and about which factor structure best suits prior theory.53–56 97 Baharum et al, in several studies, used this approach when measuring success factors in newly graduated nurses’ adaptation and in validation procedures.129 144 145 Likewise, CFA has allowed academics to test financial literacy indicators and measurement models, ensuring that a proper measurement model helps researchers interpret their data, as elaborated in a few studies.146–148

Validity, unidimensionality and reliability were necessary for all latent construct assessment models.53 55 56 97 122 The latent construct measurement model needed convergent, construct and discriminant validity.53 55 56 97 122 Convergent validity is assessed using the average variance extracted (AVE) statistic, while construct validity is determined by measurement model fitness indicators.54–56 Composite reliability (CR) was used to calculate instrument reliability since it was better than Cronbach’s alpha.54–56 133

Unidimensionality means that a set of variables can be explained by one construct,7–9 and it is achieved when all construct-specific measuring items have acceptable factor loadings.54–56 Convergent validity concerns a group of indicators that are considered to measure a construct.54–56 97 135 It is achieved when the construct’s AVE is more than 0.5; the highest AVE among the constructs was 0.857.53 56 97 136 Normality assessment was conducted on each item to evaluate the construct’s distributional normality, with skewness values within the usual range (−1.5 to 1.5).56 97 The instrument’s data distribution met the normality condition.

Construct validity is attained when all model fitness indices meet the criteria, using absolute, incremental and parsimonious fit indices.55 56 97 The instrument met all three fitness indices, confirming the absolute fit index with RMSEA=0.054 (aim<0.1), achieving the incremental fit index category by obtaining a CFI value above 0.90 and yielding a parsimonious fit index of 2.014 (aim<5.0).55 56 97

Discriminant validity was tested to ensure that no redundant constructs were present in the model.55 56 136 The model achieved discriminant validity because each construct’s square root of the average variance extracted (AVE) exceeded its correlations with all other constructs.55 56 136 The discriminant validity index summary showed that every construct met this criterion.
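
This comparison is commonly known as the Fornell-Larcker criterion; a minimal sketch is shown below, using four of the study’s constructs but entirely hypothetical AVE and correlation values.

```python
# Fornell-Larcker check: the square root of each construct's AVE must exceed
# its correlations with every other construct. All values are hypothetical.
import numpy as np
import pandas as pd

constructs = ["ORG", "PEOU", "PU", "ITU"]
ave = pd.Series([0.62, 0.68, 0.71, 0.66], index=constructs)
corr = pd.DataFrame(
    [[1.00, 0.48, 0.51, 0.44],
     [0.48, 1.00, 0.57, 0.49],
     [0.51, 0.57, 1.00, 0.55],
     [0.44, 0.49, 0.55, 1.00]],
    index=constructs, columns=constructs,
)

sqrt_ave = np.sqrt(ave)
for c in constructs:
    others = corr.loc[c].drop(c)
    ok = (sqrt_ave[c] > others).all()
    print(f"{c}: sqrt(AVE)={sqrt_ave[c]:.3f}, max correlation={others.max():.2f} -> "
          f"{'discriminant validity achieved' if ok else 'check construct redundancy'}")
```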

Model reliability was assessed using composite reliability (CR), for which values of 0.6–0.7 and above are acceptable.55 56 97 As shown in table 1, the instrument’s CR exceeded 0.6 for all constructs: the environment component (ENV) had the lowest CR (0.903), while information quality had the highest (0.954).55 56 136 Composite reliability was therefore achieved for this instrument.

Therefore, all necessary procedures to determine validity, reliability and normality were conducted, and no items were excluded, so the total number of items remained at 41. Construct, convergent and discriminant validity and composite reliability were all attained, and all items satisfied the normality criteria.

Strengths and weaknesses of the study

There are various ways in which this study could benefit the medical community and policymakers.149 150 The research assesses the important success factors that affect physicians’ adoption of the Casemix system in hospitals equipped with THIS. With the aid of these findings, policymakers and hospital administrators may find it easier to pinpoint the critical elements influencing the Casemix system’s effective deployment.151 For the successful implementation of clinical pathway/case management programmes, the study may help policymakers understand the significance of ongoing clinician support and acceptance, top management leadership and support, and a committed team of case managers, nurses and paramedical professionals.151 152 Policymakers can also use the findings to inform admission decisions, thereby increasing the transparency of clinical practice.152–154

This research has both strengths and limitations. One strength is that it employed a sequential explanatory mixed-methods approach to investigate the CSFs and acceptance of the Casemix system among medical practitioners in THIS.58 155 156 The findings suggested that some CSFs might go unnoticed in the quantitative phase, indicating the need for a qualitative method to identify further CSFs, perceptions and challenges or barriers. Quantitative data support hypothesised associations, while qualitative data provide depth that supplements the quantitative conclusions.157 The mixed-methods approach is expected to improve the research design and yield more valid results.

Another strength of this study is its rigorous methodological approach to instrument development and validation, applying EFA to the pilot test data and CFA to the field data, which strengthens the reliability and validity of the data collection instrument. Several statistical tests were used to ensure the instrument performed well and the analysis was accurate, including the KMO measure, Bartlett’s test of sphericity, systematic deletion of items based on factor loadings, Cronbach’s alpha and validity assessments covering unidimensionality, construct validity, convergent validity and discriminant validity.55 56 105–107
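
A condensed sketch of that pilot-stage workflow is shown below. It assumes the Python factor_analyzer package and hypothetical pilot data, and is not necessarily the software or procedure actually used in the study; the two-factor extraction is illustrative (for example, the two ORG components).

```python
# Condensed sketch of the pilot-stage EFA workflow: sampling adequacy (KMO),
# Bartlett's test of sphericity, factor loadings and Cronbach's alpha.
# Assumes the factor_analyzer package and hypothetical pilot data.
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import calculate_bartlett_sphericity, calculate_kmo

pilot = pd.read_csv("pilot_items.csv")  # hypothetical item-level pilot data

chi_sq, p_value = calculate_bartlett_sphericity(pilot)
_, kmo_model = calculate_kmo(pilot)
print(f"Bartlett's test p = {p_value:.4f}, KMO = {kmo_model:.3f}")

efa = FactorAnalyzer(n_factors=2, rotation="varimax")  # two factors, illustrative
efa.fit(pilot)
loadings = pd.DataFrame(efa.loadings_, index=pilot.columns)
low = loadings.abs().max(axis=1) < 0.6
print("Items flagged for deletion (loading < 0.6):", list(loadings.index[low]))

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha computed from the item covariance matrix."""
    cov = items.cov()
    k = items.shape[1]
    return (k / (k - 1)) * (1 - np.trace(cov) / cov.values.sum())

print(f"Cronbach's alpha = {cronbach_alpha(pilot):.3f}")
```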

Although the study had a large sample size, it was conducted in only five selected hospitals in Malaysia, so the findings may not accurately represent all THIS hospitals in the country or other healthcare systems. Other professional groups, including paramedics, medical record officers, information technology officers and finance officers, were not included, since their involvement in and understanding of the Casemix system differ from those of medical practitioners, even though they are involved to some extent. This may limit the generalisability of the findings and is a potential weakness of the study. The findings are rooted in Malaysia’s healthcare setting, where the Casemix system and THIS are prevalent, and their applicability to other countries or healthcare systems with different sociocultural, organisational or technological characteristics should be considered carefully. Nevertheless, there are potential avenues through which the insights gained from this research could benefit other nations or healthcare systems. For example, the principles of efficiency and effectiveness in healthcare management highlighted in this study could be adapted and implemented in various settings, and the lessons learnt from the challenges faced in Malaysia’s healthcare system could serve as valuable guidance for other countries looking to improve their systems.

Strengths and weaknesses concerning other studies

Compared with previous studies, this research contributes to the field by providing a validated instrument tailored to assess the acceptance of the Casemix system within the THIS environment. Prior literature has examined various aspects of Casemix implementation in Malaysia and in other countries; however, none has investigated Casemix within THIS, or even within HIS more broadly. This study therefore offers a comprehensive evaluation tool that addresses the critical success factors influencing medical doctors’ acceptance, filling a significant research gap. Given the absence of prior research in this area, the newly created quantitative instrument is advantageous for achieving the study objectives and serves as a point of reference for future investigations.

Earlier work by Beth Reid, however, describes the importance of developing Casemix-based hospital information system management.33 The Casemix-based hospital information system is a comprehensive approach to healthcare management that involves estimating costs per diagnosis-related group (DRG), building a Casemix-based system and addressing organisational design and education issues for successful implementation.33 It is crucial to provide Casemix reports to hospital staff and clinicians so that errors in the data can be identified, and improving data quality is essential for both hospitals and universities. To ensure the credibility of the HIS, it must tap into decentralised databases so that common input data are used for each patient’s diseases and procedures.33 Data sharing benefits clinicians because it spares them from investing time and effort in maintaining database accuracy only to discover that the data used for Casemix activities, such as funding, are drawn from the medical record.40 This approach is essential for ensuring the accuracy and efficiency of healthcare management.33

Additionally, a study by Saizan found that one THIS hospital in Malaysia showed the lowest Casemix performance in terms of the accuracy of the main diagnosis, the completeness of other diagnoses, and the coding of the main and other diagnoses.16 Based on qualitative, in-depth interview findings, that article presents two overarching themes, each with three subthemes, on the underlying reasons for this lowest performance: poor commitment among clinicians and obstacles in the work process.

Meaning of the study: possible explanations and implications

The validated and reliable instrument developed in this study holds implications for clinicians, policymakers and healthcare organisations aiming to optimise Casemix system implementation within HIS. The findings identify the critical factors influencing acceptance, including system, information and service quality, organisational characteristics such as environment and structure, and human factors such as perceived ease of use and perceived usefulness, and thereby offer actionable insights for enhancing system adoption, utilisation and success. Policymakers and hospital administrators can use these findings to streamline Casemix deployment strategies, improving patient care outcomes and operational efficiency within THIS.

First, while the specific details of the findings may not directly translate to other contexts, the underlying principles and methodologies employed in this study can serve as a valuable template for researchers in different settings. By adapting and contextualising the research methods and instruments used in this study, researchers in other countries can conduct similar investigations tailored to their healthcare environments.158 159

Second, the identification and evaluation of critical success factors for implementing healthcare information systems, such as the Casemix system, are universal challenges healthcare organisations face worldwide.33 158 160 The conceptual framework and analytical methods developed in this study can therefore help explain what drives the acceptance and use of such systems in different contexts. Researchers and policymakers in other countries can leverage these insights to inform their strategies for implementing and optimising healthcare information systems.

Additionally, while the contexts and details of the Casemix system and THIS may vary across different countries, the broader goals of improving resource allocation, clinical decision-making and quality of care are shared objectives across healthcare systems globally. Therefore, the findings of this study, particularly regarding the factors influencing system acceptance and success, have the potential to resonate with stakeholders in other countries who are working towards similar goals.151 161 162

Overall, while recognising the contextual specificity of the study’s findings, there is potential for the insights generated to contribute to the broader body of knowledge on healthcare information systems and inform practices in other countries or healthcare settings with distinct characteristics. Through collaboration and adaptation, the lessons learnt from this research can be extrapolated and applied to diverse healthcare contexts, ultimately contributing to advancing healthcare delivery worldwide.33 158 160 By sharing best practices and lessons learnt, healthcare systems around the world can benefit from the findings of this study and improve their information systems. This collaborative approach can lead to more efficient and effective healthcare delivery on a global scale.

Unanswered questions and future research

The current study proposes employing this instrument in future research, broadening the target population to include more professional occupations and increasing the sample size for more robust results. The novelty of this research lies in its comprehensive analysis of the direct and indirect effects of these factors on user acceptance of implementing Casemix within the THIS environment, with SEM employed to investigate the proposed model. In addition, mediating effects involving several critical constructs, such as PEOU, PU and ITU, were examined using the same analytical approach. Further information on moderating characteristics, including age, gender, professional position, level of education, years of experience in MOH Malaysia and in the current THIS hospital, and Casemix system knowledge, could improve the instrument; these moderating effects were also examined using SEM.
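
As a conceptual illustration of how such direct and mediating paths might be specified outside AMOS, the sketch below expresses a simple mediation structure (PEOU and PU leading to ITU) in lavaan-style syntax for the Python semopy package; item names and the data file are hypothetical, and the model is deliberately simplified relative to the study’s full conceptual framework.

```python
# Conceptual sketch of a structural model with mediation paths (PEOU -> PU -> ITU),
# written in lavaan-style syntax for semopy. This is not the AMOS specification
# actually used in the study; item names and data are hypothetical.
import pandas as pd
from semopy import Model, calc_stats

structural_desc = """
PEOU =~ peou1 + peou2 + peou3
PU   =~ pu1 + pu2 + pu3
ITU  =~ itu1 + itu2 + itu3

PU  ~ PEOU
ITU ~ PEOU + PU
"""

data = pd.read_csv("field_study_items.csv")  # hypothetical item-level data
sem = Model(structural_desc)
sem.fit(data)

print(sem.inspect())   # path coefficients; indirect effects can be derived from these
print(calc_stats(sem))
```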

The innovation of this study is that it examines the CSFs that influence the acceptance of the Casemix system in the THIS environment, specifically in MOH hospitals in Malaysia. The immediate findings have clear significance for healthcare organisations and policymakers in Malaysia, but the broader implications for readers in other countries are also relevant. First and foremost, identifying the CSFs for implementing the Casemix system provides valuable information that can be applied to healthcare systems universally, especially those equipped with THIS facilities. Insight into these factors can guide strategic decision-making in other nations seeking to implement or improve similar systems within their healthcare infrastructure.

Furthermore, the study uses a mixed-methods design. The quantitative phase, elaborated on in this article, employs a quantitative instrument validated through exploratory and confirmatory factor analyses and reliability testing. In the qualitative phase, semi-structured, in-depth interviews were conducted with the Deputy Directors, representing top management, and the CSCs of the five participating hospitals. Together, this mixed-methods design provides a strong foundation for evaluating the adoption of the Casemix system within healthcare information systems. Readers from different countries might adopt and modify these approaches to conduct comparable investigations in their own circumstances, enhancing the understanding of healthcare informatics worldwide.

Moreover, the study highlights the significance of interdisciplinary collaboration among healthcare practitioners, technology specialists and policymakers in facilitating the practical application of the Casemix system as one of the clinical and costing modules essential in healthcare settings, especially in facilities equipped with HIS. This interdisciplinary approach to tackling issues in healthcare informatics is generally applicable and can be implemented in various countries and healthcare systems.

To summarise, this study’s immediate findings address the CSFs of Casemix system implementation within THIS in the Malaysian healthcare system. Its broader significance, however, lies in providing insights, methodological frameworks and interdisciplinary approaches that can be applied globally to the adoption of the Casemix system within HIS in other countries, rather than being applicable only to the Malaysian setting.

Conclusion

In summary, this research has comprehensively evaluated the fundamental principles outlined in the conceptual framework. Various methodological procedures, including content and criterion validation, translation, pre-testing for face validity, pilot testing with EFA and a field study employing CFA, were used to assess the validity of the items.12 43 44 50 61 The EFA computed KMO, Bartlett’s test of sphericity and Cronbach’s alpha values, all of which met the criteria for sample adequacy, sphericity and internal reliability.53–56 97 The CFA then tested unidimensionality, construct validity, convergent validity, discriminant validity, composite reliability and normality, further confirming the validity and reliability of the instrument used to evaluate the critical success factors and the acceptance of the Casemix system within the THIS context.53–56 97

Consequently, this validated instrument holds promise for future quantitative analyses, including covariance-based structural equation modelling (CB-SEM) or variance-based structural equation modelling (VB-SEM). In this study, CB-SEM, in conjunction with SPSS-AMOS V.24.0, was used to explore the direct, indirect, mediating and moderating effects among the constructs outlined in the conceptual framework. The findings from these quantitative analyses will be presented in forthcoming articles, providing further insights into the Casemix system’s applicability within the current healthcare landscape. Moreover, the instrument’s demonstrated statistical reliability and validity position it as a valuable tool for future research on the Casemix system in the THIS context, addressing an existing research gap. With its normality, validity and reliability established, the instrument can now be considered operational and validated for use in subsequent studies. This research holds the potential to enhance our understanding of the critical success factors and acceptance of the Casemix system, thereby facilitating its improved implementation within the THIS setting. Moving forward, the instrument will be instrumental in further research initiatives assessing the adoption and effectiveness of the Casemix system in the THIS environment, addressing a current scarcity of literature.

Data availability statement

No data are available.

Ethics statements

Patient consent for publication

Ethics approval

This study was approved by both the Medical Research Ethics Committee from the Ministry of Health and the Medical Research Ethics Committee from the Faculty of Medicine, Universiti Kebangsaan Malaysia with the reference numbers: NMRR ID-22-02621-DKX and JEP-2022-777 respectively. Informed consent was obtained from all participants through the Google form with a statement that all data would be confidential. All methods were carried out under the ethical standards of the institutional research committee and conducted according to the Declaration of Helsinki. All methods were performed based on the relevant guidelines and regulations. This study was not funded by any grants. The authors declare there were no conflicts of interest concerning this article.

Acknowledgments

In recognition of their involvement and contributions to this study, the authors would like to express their gratitude to the respondents. In addition, the authors would like to express their gratitude to all content and criterion validators of this study: Dr. Fawzi Zaidan and Dr. Nuratfina from the Hospital Financing (Casemix) Unit of the Ministry of Health Malaysia, and Prof. Dr. Zainudin Awang from Universiti Sultan Zainal Abidin. Their remarks and recommendations made a significant contribution to the advancement of this instrument. We express our appreciation to the Casemix System Coordinators, as well as the Hospital and the Deputy Directors from Hospitals W, E, S, N, and EM, for their great collaboration in distributing the questionnaire link and for actively engaging in this study.

Additionally, for their suggestions on improving this paper, the authors would like to express their gratitude to the reviewers. Finally, we also want to express our appreciation to Associate Professor Ts. Dr. Mohd Sharizal for proofreading this article.

References

Supplementary materials

Footnotes

  • Contributors All authors, NKM, RI, ZA, ANA and SMASJ, made substantial contributions to the conception or design of the work; the acquisition, analysis or interpretation of data for the work; and drafting the work or reviewing it critically for important intellectual content. NKM carried out the pilot test and fieldwork, prepared the literature review, extensive search of articles, critical review of articles, performed the statistical analysis, interpretations, and technical parts, and designed the organization of this paper and original draft write-up. RI advised and supervised the overall write-up and conducted the final revisions of the article. ZA checked and validated the statistical analysis and interpretation of the results. ANA and SMASJ co-supervised the study, the manuscript preparation and the article revision. All authors have read and agreed to the final draft of the manuscript, hence, obtaining a final approval of the version to be published. Additionally, all authors agreed to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved. RI is responsible for the overall content as guarantor, since she is a corresponding author for this study. The guarantor accepts full responsibility for the finished work and/or the conduct of the study, has access to the data and controls the decision to publish.

  • Funding The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.

  • Competing interests None declared.

  • Patient and public involvement Patients and/or the public were not involved in the design, or conduct, or reporting, or dissemination plans of this research.

  • Provenance and peer review Not commissioned; externally peer reviewed.

  • Supplemental material This content has been supplied by the author(s). It has not been vetted by BMJ Publishing Group Limited (BMJ) and may not have been peer-reviewed. Any opinions or recommendations discussed are solely those of the author(s) and are not endorsed by BMJ. BMJ disclaims all liability and responsibility arising from any reliance placed on the content. Where the content includes any translated material, BMJ does not warrant the accuracy and reliability of the translations (including but not limited to local regulations, clinical guidelines, terminology, drug names and drug dosages), and is not responsible for any error and/or omissions arising from translation and adaptation or otherwise.