The accreditation of built environment programmes is a quality assurance measure which seeks to ensure that defined deliverables are continuously met. This study presents the findings of an exploration of the challenges associated with built environment programme accreditation in higher education institutions (HEIs) in South Africa, with a view to proffering practicable recommendations to help improve the processes involved in accrediting built environment programmes.
The study employed a quantitative approach, necessitating the collection of responses from a defined sample of respondents aided by a structured questionnaire. The data analysis phase of the study was designed using a four-pronged approach comprising descriptive and inferential statistics.
The outcome of the analysis shows that the highest-rated challenges facing the accreditation of built environment programmes are a lack of funding and insufficient accreditation assessors. Furthermore, the EFA derived five constructs: assessors' challenges in programme accreditation; institutional challenges in programme accreditation; teaching and learning challenges; lack of support services and resistance to change; and financial and technical challenges.
The study's outcome can inform policy reforms to streamline the process and enhance consistency in evaluations of the delivery of accreditation of programmes in the built environment.
From a South African perspective, no study has explored the challenges faced in the accreditation of built environment programmes. This study contributes to the body of knowledge by filling this gap.
Introduction
The built environment comprises all man-made surroundings, including buildings, infrastructure, utilities and other built-up spaces. It plays an important role in shaping the quality of human life, driving economic development and supporting all dimensions of sustainability (Ebekozien et al., 2023; Lazar and Chithra, 2022). Maphongwane et al. (2024) and Hanif (2018) noted that with the rapid growth of the urban population worldwide, the built environment increasingly demonstrates its importance in supporting the essential requirements of human dwelling through the provision of housing, healthcare, education, sewerage, transportation and commerce. It directly influences access to economic opportunities, social inclusion and public health, making its effective planning and management vital to equitable development (Ibrahim et al., 2024; Saud et al., 2019). As a multidisciplinary domain, the built environment incorporates the tenets and principles of architecture, engineering, urban planning and environmental sciences to meet the needs of society. However, a key dimension of the built environment's contribution to any society is the development of the professionals saddled with the responsibility of delivering these mandates. The training of these professionals is conducted within a set academic framework, which ensures that learners acquire the requisite knowledge. This educational framework is primarily made up of a curriculum structured to cover the spectrum of theoretical, technical and practical knowledge applications of the courses within the built environment realm (Ebekozien et al., 2025; Iyer-Raniga and Andamon, 2016).
Built environment academic programmes in higher education institutions (HEIs) are key to cultivating the professional standards and expertise needed for the planning, design, construction and management of infrastructure that meets modern societal demands (Ebekozien et al., 2023; Williamson, 2018). These programmes provide a structured educational route that enables students to develop a clear insight into the built environment's technical, scientific, managerial and regulatory dimensions. Van der Heijden (2015) notes that this provides theoretical knowledge and practical engagement, which aims to equip learners with the skills needed to deliver professional mandates. With the ever-growing demands of the built environment, driven by technological advancement, urbanisation and sustainability requirements, academic programmes must evolve to address interconnected and complex issues. One strategy for aligning academic programmes with the needs of current times is the accreditation of built environment programmes. Built environment programme accreditation (BEPA) is vital in driving the relevance, quality, professional adherence and global recognition of educational offerings (Ebekozien and Aigbavboa, 2023). For this study, built environment programmes include offerings in architecture, quantity surveying, construction management, urban planning and civil engineering. Accreditation is a formal validation process whereby an academic programme is evaluated against verified standards outlined by regulatory and professional bodies (Andreani et al., 2020; Salto, 2018; Stura et al., 2019; Ulker and Bakioglu, 2019). The process ensures that determined items meet the requirements and expectations of the profession and industry while also aligning with the current and future needs of the profession.
These include the curriculum quality, content, intended learning outcomes, staff expertise and resources and infrastructure (Ebekozien et al., 2023; Stura et al., 2019; Ulker and Bakioglu, 2019).
In South Africa, accreditation of built environment programmes in HEIs is primarily supervised by the Council on Higher Education (CHE) in collaboration with the statutory professional councils in the built environment (Lange, 2017). A practical and thorough accreditation process must be conducted for built environment programmes; without proper accreditation, programmes might fail to deliver the critical competencies needed to produce the expected professional outcomes. Also, inadequate accreditation can result in a lack of recognition from professional councils and regulatory bodies, preventing graduates from registering as professionals or obtaining the necessary licences (Amaral and Norcini, 2023; Stander and Herman, 2017). This limits the employability prospects of graduates and diminishes their career advancement opportunities within the country and globally. Furthermore, ineffective accreditation of built environment academic programmes tends to compromise the safety and quality of the built environment (Ebekozien et al., 2023; Williamson, 2018). Foxell (2018) affirmed that when professionals do not receive proper training, this poses the risk of professional flaws in the form of design errors, construction failures and mismanagement of resources, which have severe social, economic and environmental consequences for society. Consequently, the quality assurance process undertaken in programme accreditation must be kept valid and reliable. Based on the aforementioned, this study aims to assess the challenges faced in the accreditation of built environment programmes in South Africa's HEIs, with a view to enhancing the quality of built environment higher education for the sector's overall improvement and ensuring the training of skilled professionals who can meet the demands of quality delivery.
Review of related literature
Overview of BEPA in South Africa's HEIs
Programme accreditation entails a continuous review and assessment of academic programmes to determine whether they meet the expected standards of quality (Andreani et al., 2020; Stura et al., 2019). It is also an external quality review process used in higher education to examine the quality assurance and continuous improvement of university programmes (Reddy et al., 2024). Magri and Martín (2021) state that it typically involves internal self-evaluation, followed by an independent quality assessment conducted by a team of qualified specialists from industry or academia. It follows a predetermined process that uses standards to assess quality, where a standard refers to the conditions and specifications established by accreditation agencies in conjunction with academic programme beneficiaries and stakeholders, following national and international standards (Sywelem and Witte, 2009). Kumar et al. (2020) affirm that programme accreditation aims to enhance the academic status, rating or standard of courses or HEIs by supplying an unbiased affirmation that they can produce graduates who are qualified, skilled and ready for the workforce. It also contributes to the career advancement of students by helping them to progress and maintain high employment standards (Redelsheimer et al., 2015).
Drawing from the continuous improvement model (CIM), a systematic approach deployed by organisations for the continued improvement of services and processes, programme accreditation can help drive expected academic deliverables. Through iterative cycles of evaluation, feedback and refinement, quality checks and assurance can be guaranteed. Zambrano et al. (2019) affirm that CIM ensures that curricula are aligned with industry demands while driving conformance with accreditation bodies, consequently fostering stakeholder engagement, pedagogical innovation and data-driven decision-making. If the programme accreditation procedure achieves the quality results it is intended to generate, the public will have more faith in the competency of programmes or institutions, as well as their graduates or the professionals who obtain the qualifications. According to the National Government of South Africa (2024), the South African built environment sector is coordinated by one broad organisation known as the Council for the Built Environment (CBE), a statutory body established in terms of the Council for the Built Environment Act (No. 43 of 2000). It oversees the six councils for the built environment professions: architecture, engineering, landscape architecture, project and construction management, property valuation and quantity surveying (National Government of South Africa, 2024). Its primary responsibility is to transform the sector, provide professional assistance and advise the South African government on matters of the built environment. Regarding programme accreditation, the CBE guarantees that the councils for the professions consistently apply outlined accreditation policies, in collaboration with the relevant statutory professional bodies.
From a global perspective, BEPA processes vary but share common quality benchmarks. In the United States, the Accreditation Board for Engineering and Technology (ABET) employs rigorous criteria for technology-based programmes (Schachterle et al., 2009), while in the UK, the Royal Institution of Chartered Surveyors (RICS) and the Chartered Institute of Building (CIOB) place emphasis on professional competency and industry relevance (Poon and Brownlow, 2015). The Australian Institute of Building (AIB) integrates outcome-based assessment with stakeholder input (Davis and Savage, 2009). Accreditation generally requires self-evaluation reports, visits from peer reviewers and ongoing enhancement processes to guarantee that curricula align with changing professional, regulatory and societal demands, thus protecting the quality of education and the employability of graduates in the built environment field.
Challenges faced in the accreditation of built environment programmes
The programme accreditation process in the built environment is generally similar to that in other sectors. It is a consultative and peer-driven process that may begin with requests made by programme providers, followed by accreditation visits and an accreditation decision: full accreditation, provisional accreditation or no accreditation (Magri and Martín, 2021). According to Kumar et al. (2020), such a decision is valid for a specific cycle, after which the process must repeat itself. Accreditation is generally understood to evaluate a programme against specific prerequisites for competency and desired professional conduct (Van Damme, 2004). For BEPA, some challenges are faced during the process, which could potentially hinder the anticipated goal of producing professionals who can deliver on the sector's mandates. According to Jacob et al. (2020), HEIs often struggle to recruit academic staff who are suitably qualified and experienced to teach students, even though this is a criterion for attaining accreditation status. Owing to the lack of academic staff, some HEIs or programmes either fall short of the accreditation requirements or fail to maintain their accreditation status in the subsequent assessment. Also, a significant challenge to an effective programme accreditation process is flawed curriculum and programme design (Stander and Herman, 2017). A curriculum must be well balanced between built environment education components, including scientific/engineering, mathematical and computing core components, and general education components, including languages, general studies, management, law, accountancy, economics and social sciences.
A high student-to-staff ratio is a challenging factor that quality assessors consider during programme accreditation because it can lead to poor quality of education (Mahabeer and Pirtheepal, 2019). This is generally a planning issue, as a lower student-to-staff ratio typically results in the following benefits: improved social interaction within the learning community, a stronger focus on offering tailored attention (based on each student's unique learning needs and pace) and better use of available resources, including time and space (Ebekozien and Aigbavboa, 2023). Moreover, Hinchcliff et al. (2013) noted that a lack of adequate funding for programmes is frequently identified during the programme accreditation process, requiring remedial attention before accreditation can be granted; this severely affects higher institutions in Africa and the rest of the underdeveloped world. Institutions or programmes sometimes grapple with a lack of staff training, which is exacerbated by a high academic staff turnover that challenges programme accreditation (Materu, 2007). For this reason, institutions often must invest more of their scarce resources and time in training replacement staff members. The lack of experience and the knowledge gap among new faculty members can disrupt the institution's accreditation preparation process. Furthermore, faculty members and supporting staff often lack technical awareness of the programme accreditation process and its importance to the institution (Aldoseri and Sharadgah, 2021). According to Due et al. (2019), a lack of understanding makes it harder for evidence-based judgments to be reached concerning the introduction, development and implementation of accreditation systems. The lack of a clear programme accreditation delivery strategy is also often experienced. A properly developed programme accreditation delivery strategy provides a straightforward process that must be followed.
It specifies what documents and forms are required and how the required documents must be prepared (Lagrosen, 2017).
Ultimately, the lack of a programme accreditation delivery strategy leads to poor knowledge and understanding of how to prepare documentation and how to present a coherent case for accreditation that meets the expected standard (Lagrosen, 2017). According to Youngblade et al. (2022), insufficient time for accreditation preparation is another significant challenge: the process requires the generation of an excessive amount of documentation, and university staff feel that they waste an inordinate amount of time on the related administrative duties. Also, De Souza-Daw and Ross (2023) stated that owing to their history and connections, older academic institutions may receive preferential accreditation treatment. External quality assurance processes run by independent regulatory organisations, safeguarding academic integrity and financial probity, are essential in preventing such preferential treatment of older universities (Joseph and Alhassan, 2023). The subjective view of assessors may also be a bottleneck in the quest for an effective programme accreditation outcome. Some HEIs have a well-established, powerful clique-like standing in the social or professional spheres and may benefit from favouritism in the assessment of accreditation applications, where assessors might give favourable results based only on the institution's standing, even though it has submitted below-par or insufficient evidence (Ekpoh and Edet, 2017).
Research methodology
The study aims to evaluate the challenges faced in the accreditation of built environment programmes in South Africa's HEIs. As shown in Figure 1, the study's framework comprises a three-phase structure. The first phase is the contextualisation of the need for the study, which necessitated the review of relevant literature in the subject setting and formed the basis for formulating the research instrument. The other phases were the data analysis and the research outcome. Adopting a post-positivist philosophical position, the study employed a quantitative approach, deploying a questionnaire to elicit responses from respondents. According to Tan (2011), the questionnaire offers the ability to collect data from many respondents within a short period, while also supporting the objectivity and quantifiability of research. Ethics clearance for the study was obtained from the University of Johannesburg Faculty of Engineering and the Built Environment Ethics Committee, with ethics clearance number UJ_FEBE_FEPC_00897. The study area was Gauteng province, South Africa. The province is home to many universities offering built environment courses and to the headquarters of most of the built environment professional accreditation bodies, such as ECSA, SACPCMP, SACAP, SACPLAN and SACQSP. The target respondents of the study were built environment professionals and programme accreditation assessors. The professionals covered were architects, quantity surveyors, engineers, construction project managers, construction managers, town planners and landscape architects. The sample size for the study was determined using the formula given by Yamane (1967), after the number of respondents making up the sampling frame was specified. A total of one hundred and fifty-two (152) questionnaires were sent out electronically, and one hundred and thirty-three (133) were returned and deemed appropriate for analysis, representing a response rate of 86%.
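The Yamane (1967) sample-size determination cited above can be sketched as follows. The frame size of 250 used here is hypothetical, since the study does not report the size of its sampling frame; only the formula itself follows the source.

```python
import math

def yamane_sample_size(population: int, margin_of_error: float = 0.05) -> int:
    """Yamane (1967) formula: n = N / (1 + N * e^2), rounded up."""
    return math.ceil(population / (1 + population * margin_of_error ** 2))

# Hypothetical sampling frame of 250 professionals at a 5% margin of error
print(yamane_sample_size(250))  # 154
```

At a 5% margin of error the formula converges towards a ceiling of 400 as the frame grows, which is why modest frames already demand fairly large samples.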
The questionnaire had two sections, focusing on the respondents' demographic information and the challenges faced in the built environment programme accreditation of HEIs in South Africa. The literature review identified twenty-one challenges encountered in the conduct of built environment programme accreditations. These were presented to the respondents for rating based on their level of agreement on a 5-point Likert scale. The retrieved data was analysed using the Statistical Package for the Social Sciences Version 27 (SPSS), employing the four-pronged process presented in Figure 1. Firstly, the reliability and validity of the research instrument were determined using Cronbach's alpha. An alpha value of 0.902 was obtained, indicating good reliability since it is above the 0.70 threshold and close to 1.00, as recommended by Tavakol and Dennick (2011). The next stage was the descriptive statistics, which assisted in analysing the background information of the study's respondents and the rating of the challenges faced in built environment programme accreditation using the mean item score (MIS). Also, the difference in the sample mean of the responses was analysed using a one-sample t-test. According to Friston et al. (2007), this assists in determining whether there is a statistical difference between the population's mean and the set or conceptualised mean. Lastly, exploratory factor analysis (EFA) was employed to establish the unidimensionality and factorability of the identified challenges, using principal component analysis (PCA) as recommended by Ikuabe et al. (2021), Akinshipe et al. (2024) and Oyediran et al. (2025). The essence of EFA is to give informed insights into structured patterns, thereby outlining the relationships between the variables in simplified terms.
Figure 1. Research framework. Source: Authors' own work.

[Figure description: A flowchart with three vertically arranged sections. The top section contains two boxes: "Literature Review", with an arrow pointing to "Formulation of Research Questionnaire". The middle section, labelled "Data Analysis", contains four boxes in sequence: "Data Reliability and Validity (Cronbach's Alpha)", "Descriptive Statistics (Percentage and MIS)", "Difference in Sample Mean (One-Sample t-test)" and "Principal Component Analysis (Exploratory Factor Analysis)". The bottom section contains two boxes: "Result Presentation", with an arrow pointing to "Conclusion and Recommendation".]
Findings
Demographic information of respondents
The background information of the study's respondents shows that the sample distribution according to the professional affiliation reveals that 19.8% of the participants were quantity surveyors, 14.3% were engineers, 12.1% were construction project managers and 11.0% were construction managers. In addition, 11.0% were urban planners, 6.6% were SACPCMP programme accreditation assessors, 5.5% were SACLAP programme accreditation assessors and 4.4% were ECSA programme accreditation assessors. Furthermore, 3.3% were SACQSP programme accreditation assessors, 3.3% were SACAP programme accreditation assessors and 3.3% were SACPLAN programme accreditation assessors. Finally, 2.2% were architects, 1.1% were CBE programme accreditation assessors, 1.1% were real estate property valuers and 1.1% were Health and Safety Managers (H&S). Based on the highest educational qualification of the respondents, the majority of respondents had an honours degree with 39.6%, followed by a master's degree with 26.4%, a bachelor's degree with 17.6%, a doctorate with 11.0% and a post-matriculation diploma or certificate with 5.5%. Also, for the years of professional experience of the respondents, 42.9% had experience ranging between 6 and 10 years, 26.4% had experience between 1 and 5 years and 16.5% had experience between 11 and 15 years. In addition, 7.7% had experience between 16 and 20 years, 4.4% had more than 20 years of experience, while 2.2% had less than 12 months of experience.
Challenges of built environment programme accreditation
One sample t-test
The review of extant literature led to the identification of twenty-one challenges faced in the accreditation of programmes in HEIs. Upon receipt of the responses, a one-sample t-test was employed for the first analysis phase. According to Ruxton and Neuhauser (2010), a one-sample t-test aids in determining the statistical difference between the population's mean and the hypothesised mean. With this in mind, the study set a null hypothesis stating that a challenge is not significant when the resulting mean value is less than or equal to the hypothesised mean (H0: μ ≤ μ0), and an alternate hypothesis stating that a challenge is significant when the resulting mean value exceeds the hypothesised mean (Ha: μ > μ0). A hypothesised mean (μ0) of 3.50 was set for the study, while a 95% confidence level was adopted as recommended by Pallant (2005). This means that a challenge is adjudged significant when the resulting mean value is greater than 3.50, and insignificant when the resulting mean value is less than or equal to 3.50. The outcome of the one-sample analysis is presented in Table 1, which shows the mean rating of the analysed responses and the two-tailed p-value indicating the significance of the challenges faced in built environment programme accreditation in South Africa. It is revealed that all the identified challenges have a resulting mean value greater than 3.50, indicating that the identified challenges experienced in built environment programme accreditation in HEIs are significant based on the responses received. Consequently, the alternate hypothesis (Ha) is accepted while the null hypothesis (H0) is rejected. Moreover, the resulting p-values of the analysis show that, at a 95% confidence level, all the identified challenges are significant.
The result shows that the highest-rated challenges plaguing the conduct of accreditation for built environment programmes are as follows: lack of funding (mean = 4.25; sig. = 0.000); insufficient accreditation assessors (mean = 4.22; sig. = 0.000); limited flexibility and innovation (mean = 4.21; sig. = 0.000); high student-to-staff ratio (mean = 4.21; sig. = 0.000) and lack of suitably qualified academic staff (mean = 4.19; sig. = 0.000).
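The one-sample t-test used above can be sketched from its definition, t = (mean - mu0) / (s / sqrt(n)). The six Likert ratings below are illustrative, not the study's raw responses.

```python
import math

def one_sample_t(scores, mu0=3.50):
    """Return (sample mean, t-statistic) for testing the mean against mu0."""
    n = len(scores)
    mean = sum(scores) / n
    s = math.sqrt(sum((x - mean) ** 2 for x in scores) / (n - 1))  # sample std dev
    t = (mean - mu0) / (s / math.sqrt(n))
    return mean, t

# Illustrative ratings for one challenge on a 5-point Likert scale
mean, t = one_sample_t([4, 5, 4, 3, 5, 4])
print(round(mean, 2), round(t, 3))  # 4.17 2.169
```

A positive t-statistic with a mean above 3.50, as in Table 1, supports rejecting the null hypothesis for the corresponding challenge.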
Table 1. One-sample t-test
Test value = 3.50

| Challenges | t-value | df | Sig. (2-tailed) | Mean | Rank |
|---|---|---|---|---|---|
| Lack of Funding | 6.367 | 132 | 0.000 | 4.25 | 1 |
| Insufficient Accreditation Assessor | 2.229 | 132 | 0.000 | 4.22 | 2 |
| Limited Flexibility and Innovation | 4.093 | 132 | 0.000 | 4.21 | 3 |
| High Student to Staff Ratio | 4.525 | 132 | 0.000 | 4.21 | 3 |
| Lack of Suitably Qualified Academic Staff | 6.681 | 132 | 0.000 | 4.19 | 5 |
| Lack of Training for Academic Staff | 5.629 | 132 | 0.000 | 4.11 | 6 |
| Flawed Curricula and Programme Design | 3.114 | 132 | 0.000 | 4.10 | 7 |
| Lack of Technical Programme Accreditation Awareness | 4.086 | 132 | 0.000 | 4.09 | 8 |
| Lack of ICT Infrastructure | 5.328 | 132 | 0.000 | 4.08 | 9 |
| Subjective View of Assessors | 7.312 | 132 | 0.000 | 4.07 | 10 |
| Biased Assessors' Views | 8.472 | 132 | 0.000 | 3.99 | 11 |
| Frequent Changes in the Accreditation Specific Criteria | 2.995 | 132 | 0.000 | 3.93 | 12 |
| Lack of Clear Programme Accreditation Delivery Strategy | 1.084 | 132 | 0.000 | 3.91 | 13 |
| Inadequate Teaching and Learning Strategies | 4.778 | 132 | 0.000 | 3.90 | 14 |
| Resistance to Changes | 6.027 | 132 | 0.000 | 3.90 | 14 |
| Inducement of Assessors | 5.638 | 132 | 0.000 | 3.88 | 16 |
| Insufficient Facilities to Support Students with Specialised Needs | 6.549 | 132 | 0.000 | 3.87 | 17 |
| Complexity of the Current Quality Assurance Process | 4.783 | 132 | 0.000 | 3.78 | 18 |
| Lack of Library Resources | 5.997 | 132 | 0.000 | 3.71 | 19 |
| Preferential Treatment Towards Older Institutions by Assessors | 3.646 | 132 | 0.000 | 3.68 | 20 |
| Insufficient Time for Programme Accreditation Preparation | 2.227 | 132 | 0.000 | 3.53 | 21 |
Exploratory factor analysis
Exploratory factor analysis (EFA) was employed to group the identified challenges faced in built environment programme accreditation in HEIs into simplified and manageable groups. This was actualised with PCA using the varimax rotation method, a technique that assists in reducing a large number of variables into smaller, coherent sub-groups (Tabachnick and Fidell, 2007). To determine the unidimensionality and factorability of the dataset, Bartlett's test of sphericity and the Kaiser–Meyer–Olkin (KMO) measure of sampling adequacy were employed. The KMO evaluates the partial correlations of the variables while also evaluating the factor uniformity of the measurement variables (Sharma et al., 2013). According to Golbasi et al. (2015), the KMO measure should be higher than 0.60 for EFA to be considered suitable. In this study, the KMO measure of sampling adequacy achieved a value of 0.790, which is regarded as sufficient to conduct EFA. Bartlett's test yielded a p-value less than 0.001, which is statistically significant (less than 0.05) (Golbasi et al., 2015) and therefore supports the suitability of the dataset for EFA.
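Bartlett's test of sphericity can be sketched from its standard formula, chi-square = -(n - 1 - (2p + 5) / 6) * ln(det R), with p(p - 1) / 2 degrees of freedom, where R is the p x p correlation matrix of the items. The 3 x 3 matrix below is illustrative, not the study's correlation matrix; only the sample size of 133 matches the source.

```python
import math

def det(m):
    """Determinant by cofactor expansion (adequate for the small matrices used here)."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

def bartlett_sphericity(corr, n):
    """Chi-square statistic and degrees of freedom for Bartlett's test of sphericity."""
    p = len(corr)
    chi2 = -(n - 1 - (2 * p + 5) / 6) * math.log(det(corr))
    df = p * (p - 1) // 2
    return chi2, df

# Illustrative correlation matrix for three items, n = 133 respondents
R = [[1.0, 0.6, 0.5],
     [0.6, 1.0, 0.4],
     [0.5, 0.4, 1.0]]
chi2, df = bartlett_sphericity(R, 133)
print(round(chi2, 2), df)
```

An identity correlation matrix gives a chi-square of zero (no shared variance, so EFA is pointless), whereas a strongly correlated matrix yields a large chi-square and a small p-value, as reported for this dataset.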
Table 2 displays the eigenvalues of the components associated with the challenges of built environment programme accreditation in South African HEIs. Applying Kaiser's criterion of retaining eigenvalues greater than 1.0, five components qualified, with values of 5.660, 1.750, 1.303, 1.109 and 1.018. The variance explained by each extracted factor is as follows: component 1 with 35.4%, component 2 with 10.9%, component 3 with 8.1%, component 4 with 6.9% and component 5 with 6.4%. Together, the extracted components accounted for 67.749% of the total cumulative variance.
Total variance
| Component | Initial eigenvalues | | | Extraction sums of squared loadings | | | Rotation sums of squared loadings | | |
|---|---|---|---|---|---|---|---|---|---|
| | Total | % of variance | Cumulative % | Total | % of variance | Cumulative % | Total | % of variance | Cumulative % |
| 1 | 5.660 | 35.375 | 35.375 | 5.660 | 35.375 | 35.375 | 2.788 | 17.424 | 17.424 |
| 2 | 1.750 | 10.935 | 46.310 | 1.750 | 10.935 | 46.310 | 2.284 | 14.275 | 31.699 |
| 3 | 1.303 | 8.147 | 54.457 | 1.303 | 8.147 | 54.457 | 2.170 | 13.561 | 45.260 |
| 4 | 1.109 | 6.928 | 61.385 | 1.109 | 6.928 | 61.385 | 1.856 | 11.603 | 56.862 |
| 5 | 1.018 | 6.364 | 67.749 | 1.018 | 6.364 | 67.749 | 1.742 | 10.886 | 67.749 |
Note(s): Extraction Method: Principal Component Analysis
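The retention rule behind Table 2 can be illustrated in a few lines: extract the eigenvalues of the correlation matrix, keep those above 1.0 (Kaiser's criterion) and express each as a share of total variance. The sketch below uses synthetic stand-in data, not the study's 21-variable dataset.

```python
import numpy as np

# Synthetic stand-in data: 133 observations of 6 variables
rng = np.random.default_rng(42)
X = rng.normal(size=(133, 6))
X[:, 1] += X[:, 0]                               # induce correlation between two variables
R = np.corrcoef(X, rowvar=False)

eigvals = np.sort(np.linalg.eigvalsh(R))[::-1]   # eigenvalues, descending
retained = eigvals[eigvals > 1.0]                # Kaiser's criterion
pct_variance = eigvals / eigvals.sum() * 100     # % of variance per component
print(len(retained), np.round(pct_variance[: len(retained)], 1))
```

Because the diagonal of a correlation matrix is all ones, the eigenvalues sum to the number of variables, so each eigenvalue divided by that sum gives the percentage of variance a component explains, exactly as reported in Table 2.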
The factor loadings of the EFA are presented in Table 3. The twenty-one challenges loaded onto five components under varimax rotation with PCA. The first component has the highest factor loadings and consists of four variables, namely "Preferential treatment towards older institutions by assessors" (80.9%), "Subjective view of assessors" (80.3%), "Biased assessors' view" (78.3%) and "Inducement of assessors" (58.3%). This component accounts for 35.4% of the variance explained and is termed "Assessors' challenges in programme accreditation". The second component consists of three variables: "Lack of ICT infrastructure" (76.9%), "Frequent changes in the accreditation specific criteria" (67.1%) and "Lack of suitably qualified academic staff" (63.3%). This cluster accounts for 10.9% of the total variance and is named "Institutional challenges in programme accreditation".
Rotated component matrix
| Challenges | Component | | | | |
|---|---|---|---|---|---|
| | 1 | 2 | 3 | 4 | 5 |
| Preferential treatment towards older institutions by assessors | 0.809 | | | | |
| Subjective view of assessors | 0.803 | | | | |
| Biased assessors' view | 0.783 | | | | |
| Inducement of assessors | 0.583 | | | | |
| Lack of ICT infrastructure | | 0.769 | | | |
| Frequent changes in the accreditation specific criteria | | 0.671 | | | |
| Lack of suitably qualified academic staff | | 0.633 | | | |
| Inadequate teaching and learning strategies | | | 0.831 | | |
| High student-to-staff ratios | | | 0.740 | | |
| Flawed curricula and programme design | | | 0.664 | | |
| Resistance to change | | | | 0.825 | |
| Insufficient time for programme accreditation preparation | | | | 0.666 | |
| Insufficient facilities to support students with specialised needs | | | | 0.614 | |
| Lack of training for academic staff | | | | | 0.746 |
| Lack of funding | | | | | 0.732 |
| Lack of technical programme accreditation awareness | | | | | 0.539 |
Note(s): Extraction Method: Principal Component Analysis; Rotation Method: Varimax. Rotation converged in 6 iterations
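The mapping from Table 3 to the five named constructs follows a simple rule: each challenge is assigned to the component on which it carries its highest rotated loading. A minimal sketch, using a small hypothetical loading matrix (one representative variable per component; the dominant values mirror Table 3, where loadings below roughly 0.5 are suppressed, and the minor loadings are invented for illustration):

```python
import numpy as np

# Hypothetical rotated loadings (rows = variables, columns = components)
loadings = np.array([
    [0.809, 0.10, 0.05, 0.12, 0.08],  # preferential treatment by assessors
    [0.12, 0.769, 0.09, 0.05, 0.11],  # lack of ICT infrastructure
    [0.08, 0.11, 0.831, 0.10, 0.06],  # inadequate teaching and learning strategies
    [0.10, 0.06, 0.12, 0.825, 0.09],  # resistance to change
    [0.05, 0.09, 0.08, 0.11, 0.746],  # lack of training for academic staff
])

# Assign each variable to the component with its largest absolute loading
assignment = np.argmax(np.abs(loadings), axis=1)
print(assignment.tolist())  # → [0, 1, 2, 3, 4]
```

Under this rule, the four assessor-related variables form component 1, the three institutional variables form component 2, and so on for the remaining clusters named below.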
The third component accounts for 8.1% of the total variance explained and includes "Inadequate teaching and learning strategies" (83.1%), "High student-to-staff ratios" (74.0%) and "Flawed curricula and programme design" (66.4%). This cluster is named "Teaching and learning challenges". The fourth component accounts for 6.9% of the variance explained and consists of three variables: "Resistance to change" (82.5%), "Insufficient time for programme accreditation preparation" (66.6%) and "Insufficient facilities to support students with specialised needs" (61.4%). This cluster is called "Lack of support services and resistance to change". Three variables load onto the fifth component: "Lack of training for academic staff" (74.6%), "Lack of funding" (73.2%) and "Lack of technical programme accreditation awareness" (53.9%). This cluster accounts for 6.4% of the variance and is named "Financial and technical challenges".
Discussion of findings
Assessors' challenges in programme accreditation
These results are consistent with De Souza-Daw and Ross's (2023) study, which found that challenges in granting accreditation status included bias, enticement, subjectivity and preferential treatment of some HEIs by assessors during the accreditation process. The finding also agrees with Ako (2017), who highlighted the inducement of assessors as a problematic issue that could lead to the accreditation of programmes lacking the requirements (in terms of personnel, resources, classrooms, libraries and Internet access) necessary to deliver quality education. That study also noted that misconduct by accreditation assessors lowered professional standards and widened the gap between graduates' knowledge and skills and labour market requirements. The results were also consistent with the research conducted by Benos et al. (2007), who found challenges related to the biased view of assessors, who were noted as not being impartial but showing systemic prejudices that hindered accurate and unbiased evaluation of accreditation facts. Also, Viswanadhan (2008) revealed that regardless of the complex protocols used for the accreditation process, the assessors' subjective view and overall perception are the dominant factors that ultimately determine the final assessment decision.
Institutional challenges in programme accreditation
The findings agreed with those of Youngblade et al. (2022), who highlighted that frequent changes in accreditation standards, procedures and requirements, made without adequate notice to those preparing accreditation documents, pose a challenge to programmes and institutions seeking full accreditation. The findings were also similar to those of Ali et al. (2021), who stated that a lack of ICT infrastructure was a challenge faced by institutions and programmes that could cause the denial of full accreditation; it was also found to affect teaching and learning delivery methods, which can result in outdated learning outcomes. Furthermore, the findings are corroborated by Jacob et al. (2020), who revealed that a lack of suitably qualified academic staff was a challenge affecting programme accreditation and the provision of quality education, noting that failure to hire suitably qualified academic staff negatively affected programme accreditation. Also, Tashayoei et al. (2020) pointed out that the lack of appropriately skilled personnel affected some accreditation authorities themselves, which lacked the staff to conduct accreditation.
Teaching and learning challenges
These findings aligned with previous research showing that inadequate teaching and learning strategies pose challenges for programme accreditation. Studies also indicated that poor accreditation outcomes were typically characterised by ineffective teaching and learning approaches that were inadequate to enhance quality education (Moloi and Dimema, 2014). The findings also confirmed reports that a high student-to-staff ratio and flawed curricula pose challenges for programme accreditation. Programme accreditation assessors give serious consideration to the student-to-staff ratio in their evaluations because an unbalanced ratio can lead to poor teaching outcomes and poor quality of education (Stander and Herman, 2017).
Lack of support services and resistance to change
These findings were similar to those of Kadir et al. (2016), who stated that resistance to change by educational institutions involved in programme accreditation hindered effective accreditation. They also supported the findings of Youngblade et al. (2022), who highlighted insufficient preparation time as a challenge: educational institutions need sufficient time to review accreditation documents because these must pass through designated echelons such as heads of departments, deans, school administrators and the Senate Committee on Accreditation. Limited time and poor preparation were therefore found to be responsible for universities failing to obtain academic programme accreditation. The result further corroborates Paul (2000), who found that insufficient facilities to support students with specialised needs negatively affected universities' academic success and the retention rate of disabled students. Inadequate facilities were attributed to insufficient planning, design and management of facilities to ensure accessibility for all.
Financial and technical challenges
These results are consistent with those of Hinchcliff et al. (2013), who claimed that university administrators who could not access adequate funding struggled to acquire the necessary human, material and operational resources. This was noted as one of the reasons why university programmes were not accredited, as the required material and human resources were not available where needed. The findings were also in agreement with those of Materu (2007), who found that programmes often grappled with staff shortages, a lack of staff training and a lack of accreditation awareness, exacerbated by high academic staff turnover and the inability to access adequate funding, all of which presented challenges for programme accreditation. The results further support Due et al. (2019), who found that making evidence-based decisions about programme accreditation is difficult without a thorough understanding and awareness of the process from design and implementation through to evaluation.
Conclusion and recommendations
The study explored the challenges faced in the accreditation of built environment programmes in HEIs in South Africa. A review of the extant literature identified twenty-one variables, which were presented to the study's respondents for rating based on their significance using a closed-ended questionnaire. The retrieved data were analysed using descriptive and inferential statistics. The highest-ranked challenges are lack of funding, insufficient accreditation assessors, limited flexibility and innovation, high student-to-staff ratio and lack of suitably qualified academic staff. The analysis also showed that all the identified challenges were statistically significant at the study's set threshold. Furthermore, the EFA derived five constructs representing the challenges encountered in accrediting built environment programmes: assessors' challenges in programme accreditation, institutional challenges in programme accreditation, teaching and learning challenges, lack of support services and resistance to change, and financial and technical challenges. Based on these findings, it is recommended that the improvement of built environment programme accreditation be pursued by fostering closer collaboration between industry players, academia and governments. International role-players such as accreditation bodies, academia, research bodies and funders should also be drawn in to enhance high-impact research, the use of technology, and teaching and learning, and to address challenges such as a lack of basic infrastructure, access to funding and limited resources. Moreover, the capacity to apply rigorous programme accreditation methods can be improved through the ongoing training and development of staff; for instance, academic staff and the staff of accreditation bodies need training to lead the programme accreditation process with credible knowledge and experience.
More technology-based systems, such as digitally driven accreditation procedures, should be used to enhance built environment programme accreditation in the future. This would address the process's inherent vulnerabilities: it is time-consuming and prone to corruption, bias and manipulation. Constant updating of the curriculum to meet current requirements of the built environment, green infrastructure and sustainable cities, as well as education for sustainable development, is also required for future built environment programme accreditation. Lastly, it is essential to note that this study was conducted in the Gauteng province of South Africa. Further studies can be conducted in other provinces of the country, whose HEIs also offer built environment courses, to achieve a more holistic assessment of the challenges faced in programme accreditation.

