[Figure 5.7 shows curves of data error, model error and total error (error / %) plotted against the number of data input items.]
Figure 5.7 Relationship between the number of data input items used, model errors and overall errors (after Chapman(A1.3))*
* Note that as the number of inputs increases, so the likelihood of input errors also increases. However, since the number of inputs is related to the resolution with which heat transfer processes are modelled, model errors tend to reduce as the number of inputs increases. At some point, it has been argued, an optimum is reached beyond which accuracy is impaired.
QA procedures encompass how all the above issues are handled and controlled. Section 5.A1.4 gives detailed guidance on how a QA system may be implemented within the design process.
5.A1.2 Risk, uncertainty and sources of error
One of the main motivations in establishing QA is to reduce the risk of errors and to address uncertainties inherent in the process of design. However, a distinction should be made between avoidable errors in the use of calculation methods and the inherent uncertainties in using a calculation method which, irrespective of its level of detail, is an approximation of reality, be it a simple manual calculation or a detailed simulation software model.
5.A1.2.1 Sources of avoidable errors
The errors that could, in theory, be avoided normally stem from the following sources, listed in order of ease of tackling them:
— Blunders in using the method and/or software: QA procedures, such as routine checks of input data and careful examination of results, can significantly reduce or even eliminate such errors (a minimal sketch of an automated input check of this kind is given after this list).
— Inappropriate use of a method and errors in abstracting a problem into a form suitable for calculation or simulation: such errors are dependent on the user’s knowledge of the inherent assumptions in the calculation method, experience and training. The standardisation of procedures for carrying out routine calculations, for example using documented ‘performance assessment methods’(A1.4), can help reduce such errors. Reviews of modelling methodology and assumptions by more experienced personnel are an alternative, or indeed a supplement, in non-routine cases.
— Errors in coding a software implementation of a method: select software which has been through various validation tests; see CIBSE AM11(A1.6).
— Approximations within the mathematical models being used: this too may be addressed by appropriate software selection. Alternatively, if the nature of a physical simplification is understood then it is possible to test the sensitivity of predictions to this uncertainty and so account for it in design.
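To illustrate the first item of the list above, the following fragment (in Python) sketches a minimal automated check of input data. The parameter names and plausibility ranges are illustrative assumptions only; in practice they would be drawn from the project brief or an internal QA checklist.

    # Minimal sketch of a routine input-data plausibility check (illustrative only).
    # The parameter names and ranges below are assumptions, not CIBSE values.
    PLAUSIBLE_RANGES = {
        "wall_u_value": (0.1, 3.0),       # W/m2.K
        "glazing_g_value": (0.1, 0.9),    # dimensionless
        "infiltration_ach": (0.05, 3.0),  # air changes per hour
        "occupant_gain": (50.0, 150.0),   # W per person
    }

    def check_inputs(inputs):
        """Return a list of warnings for missing or out-of-range model inputs."""
        warnings = []
        for name, (low, high) in PLAUSIBLE_RANGES.items():
            if name not in inputs:
                warnings.append(f"{name}: no value supplied")
            elif not low <= inputs[name] <= high:
                warnings.append(f"{name}: {inputs[name]} outside plausible range {low} to {high}")
        return warnings

    if __name__ == "__main__":
        model_inputs = {"wall_u_value": 0.25, "glazing_g_value": 1.4, "infiltration_ach": 0.5}
        for warning in check_inputs(model_inputs):
            print("WARNING:", warning)

Such checks do not replace careful examination of inputs and outputs, but they catch gross blunders cheaply and consistently.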
5.A1.2.2 Sources of uncertainty
Generally, calculation of the energy and environmental behaviour of buildings is carried out deterministically, i.e. parameters used in modelling a building are treated as known values, which can be either fixed or time dependent. However, the values of most parameters are often uncertain or unknown at the time calculations are made. The most important sources of such uncertainties are as follows:
— Imperfect knowledge regarding detailed building design/operation characteristics and consequent assumptions (e.g. thermal/optical properties of materials, build quality and associated leakage, equipment used and their characteristics, etc.).
— The inherent unpredictability of the future (climate, occupants’ use and operation of the building, etc.).
— Lack of knowledge about the underlying physical process and/or approximations within the mathematical models.
A systematic uncertainty analysis can help to identify the key sources of uncertainty which merit further attention as well as those that may be safely ignored. Furthermore, it can provide an insight into the level of confidence in estimates.
The purpose of quantitative uncertainty analysis is to use currently available information to quantify the degree of confidence in the existing data and models. The purpose is not to somehow ‘reduce’ uncertainty — reduction in uncertainty can only come from improved knowledge.
Nevertheless it is important to be aware of the importance of uncertainties and ways of handling them by using appropriate design margins.
Several techniques have been developed, within an academic context, for studying predictive uncertainties.
These include differential sensitivity analysis (DSA), Monte Carlo analysis (MCA) and stochastic sensitivity analysis (SSA)(A1.5). Of these, the most commonly employed is DSA, in which individual parameters are varied between simulations, depending upon estimates of their uncertainty, and the results analysed. Such sensitivity analyses help to identify not only the overall uncertainty in model outputs but also the sensitivity of performance to input uncertainties. This may then prompt the user to investigate means for reducing input uncertainties (e.g. by acquiring better quality information) or to ensure that the design is sufficiently robust so that performance is not contingent upon particular (uncertain) modelling assumptions. The issue of sensitivity analysis is discussed further in CIBSE AM11(A1.6).
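By way of illustration, the fragment below sketches the principle of DSA: each uncertain input is perturbed in turn by its estimated uncertainty while the others are held at their base-case values. The function annual_energy stands in for a call to the simulation tool, and the base values and uncertainties are assumptions for illustration only.

    # Sketch of differential sensitivity analysis (DSA): perturb one input at a time.
    def annual_energy(u_value, infiltration_ach, internal_gain):
        """Stand-in for a full simulation run; returns annual energy use (kWh/m2 per year)."""
        return 40.0 * u_value + 15.0 * infiltration_ach - 0.05 * internal_gain + 60.0

    base = {"u_value": 0.3, "infiltration_ach": 0.5, "internal_gain": 200.0}
    uncertainty = {"u_value": 0.05, "infiltration_ach": 0.2, "internal_gain": 50.0}

    base_result = annual_energy(**base)
    print(f"Base case: {base_result:.1f} kWh/m2 per year")

    for name, delta in uncertainty.items():
        perturbed = dict(base)
        perturbed[name] = base[name] + delta
        change = annual_energy(**perturbed) - base_result
        print(f"{name}: +{delta} gives a change of {change:+.1f} kWh/m2 per year")

    # If the individual effects are approximately independent, a combined uncertainty
    # may be estimated by summing the individual changes in quadrature.

In MCA, by contrast, all uncertain inputs are sampled simultaneously from assumed distributions over many simulation runs, giving an overall output distribution at greater computational cost.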
5.A1.3 Fitness for purpose
As mentioned earlier, it is important to be aware of approximations in the mathematical models of the software being used, and the possible implications of these approximations. It is also important to be aware of how the software has performed in recent validation studies.
This will help to define the range of applicability of the software or means for accommodating predictive limitations. This might be achieved by representing predictive uncertainty in some meaningful way (section 5.A1.2), or by surmounting this uncertainty using additional support software, i.e. by identifying some means for emulating a physical process which is not being explicitly simulated*.
Guidance on the selection of suitable software is given in CIBSE AM11(A1.6). Furthermore, CIBSE has published some standard validation tests(A1.2) which can be used to determine the accuracy with which some of the fundamental heat transfer processes are being modelled whilst at the same time providing a basis for judging whether the software has been correctly deployed (this then acting as a tool to assist with staff training).
* As an example, many dynamic thermal models represent solar shading due to nearby buildings, but sky radiation is unchanged (with corresponding over-predictions) — although models are emerging to resolve this deficiency(A1.7). To account for this within the affected space(s), one might use predictions from a ray tracing program to calibrate the (temporary) scaling of diffuse solar radiation in the climate file.
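To illustrate the workaround suggested in the footnote, the fragment below applies a temporary scaling factor to the diffuse radiation column of a simple comma-separated climate file. The file layout, column name and scaling factor are assumptions for illustration; real climate file formats (e.g. EPW) have their own conventions, and the factor itself would be derived from a ray tracing study of the affected space(s).

    # Illustrative sketch: temporarily scale diffuse solar radiation in a climate file
    # to approximate sky obstruction by nearby buildings. Assumes a simple CSV file
    # with a 'diffuse_horizontal_Wm2' column; the scaling factor is an assumption.
    import csv

    SKY_OBSTRUCTION_FACTOR = 0.8  # assumed 20% reduction, e.g. from a ray tracing study

    with open("site_climate.csv", newline="") as src, \
            open("site_climate_obstructed.csv", "w", newline="") as dst:
        reader = csv.DictReader(src)
        writer = csv.DictWriter(dst, fieldnames=reader.fieldnames)
        writer.writeheader()
        for row in reader:
            row["diffuse_horizontal_Wm2"] = str(
                float(row["diffuse_horizontal_Wm2"]) * SKY_OBSTRUCTION_FACTOR)
            writer.writerow(row)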
It is also important to be aware of the software’s basis for environmental control. For example, some programs will, by default, deliver all of the energy that is required to achieve a space set point within a single simulation time-step (typically one hour). Plant sizes may therefore be over-predicted. One way of accounting for this is to bring forward the plant start time (say by one hour) and manually adjust the plant capacity (initially by say a third of that predicted) until the smallest size which reaches the set condition has been identified.
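This manual adjustment can be expressed as a simple search loop, as sketched below. The function reaches_setpoint stands in for re-running the simulation with the plant start brought forward and a trial capacity; the threshold, starting capacity and step size are illustrative assumptions only.

    # Sketch of the plant-capacity adjustment described above (illustrative values only).
    def reaches_setpoint(capacity_kw):
        """Stand-in: re-run the simulation and report whether the set point is met at occupancy."""
        return capacity_kw >= 8.2  # placeholder for the actual simulation result

    predicted_capacity = 12.0           # capacity from the default (single time-step) sizing run, kW
    trial = predicted_capacity * 2 / 3  # start roughly a third below the predicted value
    step = 0.5                          # resolution of the search, kW

    # Reduce the trial capacity while the set point is still reached...
    while reaches_setpoint(trial - step):
        trial -= step
    # ...then increase it if the starting guess was itself too small.
    while not reaches_setpoint(trial):
        trial += step

    print(f"Smallest adequate plant capacity: {trial:.1f} kW "
          f"(default prediction was {predicted_capacity:.1f} kW)")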
In addition to internal simplifications, it is generally desirable for the user to impose simplifications to the degree of detail with which a problem is represented within software (i.e. not to be seduced into describing problems in unnecessary detail by easy-to-use CAD tools).
Failing to do so may render the models unmanageably complex (and lead to QA difficulties) and/or lead to unnecessarily high computational overheads, whilst over-simplification may cause predictive inaccuracies. It is important therefore to achieve a balance — keeping the model as simple as possible consistent with the avoidance of significant errors. To help with this, specific guidelines for abstracting problems into simulation metaphor are described in some detail in CIBSE AM11(A1.6). Note that the correct application of these guidelines will tend to require both a sound knowledge of building physics fundamentals and a reasonable grounding in the theoretical basis and internal assumptions of the software being used.
5.A1.4 QA procedures
It has been suggested(A1.6) that three categories of individual are involved in the development and implementation of QA procedures in the context of using BEEM software: (i) the QA manager, (ii) the simulation team manager, and (iii) the program user. Whilst this may be true of relatively large organisations, the personnel associated with these roles and indeed the division of responsibilities may vary between organisations. The key issues are:
— that clear lines of responsibility are identified
— that senior personnel take responsibility for establishing and policing QA
— that a sound QA philosophy pervades all aspects of simulation work
— that QA procedures are established and continually refined for oft-repeated tasks.
It is perhaps useful to identify some key QA procedures with recourse to a typical simulation process* (see Table 5.41 for a summary).
5.A1.4.1 Problem definition
The simulation team manager should initially determine the need for environmental modelling and the types of software to be used. Following from this, perhaps in liaison with the program user, the modelling strategy for resolving the design questions of interest should be developed. This should involve defining the form of the base case model (including the approach for abstracting the design problem into simulation metaphor) and agreeing upon the need for and range of variations from this model. Sources of input data should be identified.
Indeed a library of information resources should generally be developed and managed — ensuring that software time conventions are respected.
* Note that this assumes that the simulation team manager and personnel designated with overall QA responsibility together develop a relevant QA statement and supporting procedures and thereafter ensure the proper implementation of these procedures.
Table 5.41 Example of a QA checklist, including personnel responsible

Quality assurance (QA) manager:
— Developing a quality statement
— Developing and implementing QA procedures
— Refining and updating QA procedures

Simulation team manager:
— Developing ‘performance assessment method’ (PAM) style documents for commonly occurring problems
— Developing a documentation skeleton for generic simulation tasks and guidelines for adaptation to novel problems
— Preparing skeleton documents for reporting results to clients (see CIBSE AM11(A1.6), section 5.5)
— Procedures for archiving documentation on each job and the associated program input and output data
— Devising project-specific simulation strategies
— Checking users have followed QA checklists
— Checking plausibility of output results
— Identifying need for and arranging client meetings to review progress
— Identifying the need for new staff or staff training
— Recommending acquisition of new software or computing resources
— Identifying need for internal development of productivity aids/supporting software and overseeing their development and testing

Program user:
— Developing and maintaining standard databases
— Developing and maintaining archives of simulation input and output
— Adopting standard file and model attribute naming conventions
— Maintaining a log book of elegant solutions as well as common mistakes and means for resolution
— Checking plausibility of output results
— Accounting for sensitivity to input uncertainties
— Developing and applying checks to ensure correctness of the simulation results
— Documenting procedures and databases used in a series of simulations
— Making routine backups of modelling project folders
— Routine testing of new programs against validation data sets
— Recommending need for new programs and computer hardware
— Developing supporting software tools and productivity aids
Important inputs should be agreed with the client and circulated to the design team (e.g. to inform the team manager of important departures from these), as should input assumptions. Input uncertainties and their effects should be quantified wherever possible, or techniques/third-party software executed to minimise them.
The reference model should then be defined, according to internally agreed directory/file/attribute naming conventions. If appropriate, an internally developed performance assessment method (PAM)(A1.8) should be followed. This should be constructed to facilitate ‘painless’ changes to the model following from modelling conclusions or other design changes and, in some circumstances, to track the design process later.
5.A1.4.2 Simulation
With the reference model defined, initial simulations should focus upon the program user understanding the behaviour of the building and systems being investigated; ensuring the model has been properly defined (checking input files as well as comparing the range of output variables with benchmarks or simplified calculations); and confirming that the means for emulating system controls deliver the appropriate responses. Uncertainties in key inputs should also be studied as should, time permitting, performance sensitivity to key design variants. A review of the model and associated results with the team manager should act as a final test of the validity of the model.
The previously agreed list of model variants to be tested should be updated following any conclusions from initial ad-hoc simulations. Model variants should then be prepared and checked; modelling work should also be periodically backed up. If supported, it is prudent to prepare scripts to conduct the simulations and extract the required results — again ensuring that no mistakes have been made, so as to avoid wasted simulation time. With large numbers of model variants, the need for preparing new or amending existing results analysis programs should then be internally reviewed. Such programs can save time and represent a source of consistency in analysis and presentation, but again checks should ensure that they are free from errors. Indeed, as with information sources, a library of such quality assured utility programs should be developed.
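Where the simulation tool can be driven from the command line, such a script might take the form sketched below. The command name (sim_tool), folder layout and results file format are assumptions for illustration; any real tool will have its own invocation and output conventions.

    # Illustrative batch script: run each agreed model variant and collect a key result.
    # 'sim_tool', the folder layout and the results format are assumptions.
    import csv
    import subprocess
    from pathlib import Path

    VARIANTS = ["base_case", "low_g_glazing", "night_vent", "extra_insulation"]

    def extract_annual_energy(results_file):
        """Placeholder parser: expects a line of the form 'annual_energy_kwh,<value>'."""
        for line in Path(results_file).read_text().splitlines():
            key, _, value = line.partition(",")
            if key == "annual_energy_kwh":
                return float(value)
        raise ValueError(f"annual_energy_kwh not found in {results_file}")

    with open("variant_summary.csv", "w", newline="") as out:
        writer = csv.writer(out)
        writer.writerow(["variant", "annual_energy_kwh"])
        for variant in VARIANTS:
            model_dir = Path("models") / variant
            # Run the (hypothetical) command-line simulation tool on this variant.
            subprocess.run(["sim_tool", "--model", str(model_dir)], check=True)
            writer.writerow([variant, extract_annual_energy(model_dir / "results.txt")])

Keeping such scripts under the same QA regime as the models themselves (checked, documented and archived) ensures that automation does not become a new source of untraceable error.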
5.A1.4.3 Interpretation
In cooperation with the team manager, results should be reviewed in detail to relate cause to effect and consequently to prepare design advice. This may, particularly in the case of unintuitive or unforeseen trends in results, involve further checking of model input files and also further simulations and interrogation of results.
5.A1.4.4 Presentation
The team manager should arrange and attend periodic review meetings: initially to check model inputs and agree the design variants (both type and range) to be tested, and later to review results. The need for and range of further simulations should then be agreed.
5.A1.4.5 Documentation
There are two strands to documentation: client reporting and project documentation. In the former, following the format of either a generic documentation skeleton or a task-specific reporting format as may be defined in a PAM, the modelling methodology, inputs (and any associated uncertainty analysis), reference and variant model results and conclusions should be logically documented. It is helpful to include in an appendix design drawings which associate tabulated construction build-ups with constructional elements, occupancy characteristics, internal heat gains etc.
Project documentation should include filing of hardcopy material and the thorough archiving of electronic material. Model documentation should be explained by a
‘readme’ file that identifies the project objectives, strategy and key assumptions, variants tested and associated file names/locations.
Finally, it is good practice to carry out project de-briefing.
This is an opportunity to discuss the effectiveness of the modelling procedure and its influence on design in an open forum with a view to identifying scope for improvement in the efficacy and quality of the process as well as the need for staff training, recruitment or software/computing resources. At this stage the need for revising or creating new PAMs should also be identified and tasked and an ‘elegant solution’ logbook should be updated.
Note that the emphasis above is clearly placed upon planning the modelling study (sound planning generally pays dividends later) and upon defining and checking the reference model. This is the most time-consuming aspect of the simulation process, but also the most critical. The preparation, checking and simulation of subsequent variants of this model tend not to be excessively time consuming — particularly if automation opportunities are exploited. The temptation to move quickly onto other work without completing the project documentation process should be avoided. It is important that a project can be quickly resurrected in the future due to unforeseen circumstances or, indeed, because the staff member responsible is not available some time after the work was completed.
Regarding the software used, it is prudent to maintain a list of desired software features (and possible bugs) and liaise with software developers with a view to these being integrated (or resolved) in future releases. Alternatively, in the case of open source software, it may be worth investing the time required to understand the software structure so that such features may be embedded in-house, taking care subsequently to check the validity of the software.
5.A1.5 Summary
Software should be selected which is appropriate for the task in hand. The software should also be tested against standard validation data sets to understand the range of applicability of the software as well as to ensure that it is being correctly deployed.
Libraries of input data sets and productivity aids should be internally developed. Input data sets should respect software timing conventions and their basis should be
corroborated. Productivity aids (including results analysis programs) should be documented and independently checked.
A quality assurance infrastructure should be established, including personnel as well as documentation such as QA checklists, PAMs, log books and documentation skeletons.
Quality assurance procedures should be defined and used to guide the modelling process.
The need for niche software as well as staff training, recruitment and computing resources should be periodically reviewed to ensure the best available resources are brought to bear on project work.
References
A1.1 Parand F and Bloomfield D Introducing quality assurance in practices using software for evaluation of building performance (Can small firms afford QA?) Proc. CIBSE Conf., Computers in the Construction Industry, May 1993 (London: Chartered Institution of Building Services Engineers) (1993)
A1.2 CIBSE standard tests for the assessment of building services design software CIBSE TM33 (London: Chartered Institution of Building Services Engineers) (2004)
A1.3 Chapman J Data accuracy and model reliability Proc. BEPAC Conf., Building Environmental Performance ’91, Canterbury, 10-11 April 1991 (Reading: Building Environmental Performance Analysis Club) (1991)
A1.4 Parand F and Bloomfield D Integrated knowledge-based system for performance assessment methods Proc. BEPAC Conf., University of York, April 1994 263–272 (Reading: Building Environmental Performance Analysis Club) (1994)
A1.5 Lomas K J and Eppel H Sensitivity analysis techniques for building thermal simulation programs Energy and Buildings 19 21–44 (1992)
A1.6 CIBSE Building energy and environmental modelling CIBSE AM11 (London: Chartered Institution of Building Services Engineers) (1998)
A1.7 Robinson D and Stone A Solar radiation modelling in the urban context Solar Energy 77 (3) 295–309 (2004)
A1.8 Parand F and Bloomfield D Quality Assurance in Environmental Prediction Proc. BEPAC Conf., Building Environmental Performance ’91, Canterbury, 10–11 April 1991 237–246 (Reading: Building Environmental Performance Analysis Club) (1991)
5.A2.1 Brief history of CIBSE methods
In order to place this edition in context this appendix presents a brief history of previous CIBSE methods and a review of the various approaches to dynamic modelling in other countries.
5.A2.1.1 Steady state methods
These methods are only used for the sizing of heat emitters and a number of different approaches have been adopted by CIBSE in the past. The only significant difference between these approaches is the space temperature (index temperature) used to determine the fabric heat loss; that is, in the equation:
Qf = U A (θi – θo) (5.64)
where Qf is the fabric heat loss (W), U is the thermal transmittance of the surface (W·m–2·K–1), A is the surface area (m2), θi is the index temperature (°C) and θo is the external temperature (°C).
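By way of a brief numerical illustration of equation 5.64, using assumed values for a single external wall:

    # Numerical illustration of equation 5.64 (fabric heat loss through one surface).
    # The U-value, area and design temperatures are assumed example values only.
    u_value = 0.35         # thermal transmittance, W/m2.K
    area = 20.0            # surface area, m2
    theta_index = 21.0     # index (design space) temperature, degC
    theta_outside = -3.0   # external design temperature, degC

    q_fabric = u_value * area * (theta_index - theta_outside)
    print(f"Fabric heat loss Qf = {q_fabric:.0f} W")  # 0.35 x 20 x 24 = 168 W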
Air temperature method
The rate of heat loss through a wall is balanced by the convective gain from the room air to that surface and longwave radiant interchange between room surfaces.
Both processes are complex so any practical calculation technique needs to introduce approximations. The approximation made in the 1959 IHVE Guide(A2.1) was that all heat transfer occurred between the surface and the air temperature. This approximation is acceptable for well-insulated spaces but will generally lead to under-estimation
for radiant heating systems and over-estimation for fully convective heating systems.
Environmental temperature method
The deficiencies of the 1959 approach were partly addressed in the 1970 IHVE Guide A(A2.2) by the introduction of the concept of environmental temperature. (Note that the only difference between the 1970 method and that given in the 1986 CIBSE Guide A(A2.3) is in the replacement of the environmental temperature by the operative temperature as the design temperature.) The intention was to combine the effects of radiant and convective heat transfer within a single temperature index. While the method gives a fairly accurate representation of heat losses(A2.4), the presentation of the theory has been questioned(A2.5). The main problem occurs in the representation of long wave radiant heat transfer between room surfaces. A complete representation of this process is given in Appendix 5.A1 of the 1986 CIBSE Guide A(A2.3). Simplifications may take the form of approximations to the view factors or, as chosen by the 1970 IHVE Guide A(A2.2), reduction of the radiant exchange to that between a single surface and an enclosure at some mean radiant temperature. An alternative proposal by Davies(A2.6) (the ‘Two Star Method’) uses exact view factor values for a room having six surfaces. In practice, there is little difference between heat losses calculated by that method and those calculated using the 1970 IHVE Guide A method. The Two Star Method suffered the disadvantage that it was impractical to do the calculation by hand, but the widespread availability of computers now makes that method viable.