
ISO 16730:2008 standard


DOCUMENT INFORMATION

Basic information

Title: Fire Safety Engineering — Assessment, Verification And Validation Of Calculation Methods
Organization: International Organization for Standardization
Field: Fire Safety Engineering
Document type: standard
Year of publication: 2008
City: Geneva
Number of pages: 46
File size: 1.34 MB


Structure

  • 4.1 General
  • 4.2 Technical documentation
  • 4.3 User's manual
  • 5.1 General
  • 5.2 Verification
  • 5.3 Validation
  • 5.4 Sensitivity analysis
  • 5.5 Quality assurance

Contents

Part of the calculation method development includes the identification of precision and limits of applicability, as well as independent testing by individuals other than the developers of the calculation methods.

General

Technical documentation must be detailed enough for qualified individuals to reproduce calculation results with specified accuracy and precision. Comprehensive documentation of calculation methods and software is crucial for evaluating the scientific and technical foundations of these methods and ensuring the accuracy of computational procedures. Proper documentation also helps prevent the misuse of calculation methods. Any assessments of specific calculation methods should be included in the documentation. The validity of a calculation method should be established by comparing results to real-world data, surveys, or approximations, using quality-assurance methodologies to ensure compliance with defined quality criteria. The documentation of a calculation method consists of:

⎯ technical documentation that explains the scientific basis of the calculation method; see 4.2;

⎯ a user's manual, in the case of a computer program; see 4.3.

Sections 4.2 and 4.3 outline the essential requirements for technical documentation and user's manuals. Although the lists are extensive, they are open to additional information that can help users evaluate the applicability and usability of the calculation method.

Technical documentation

Technical documentation is essential for evaluating the scientific foundation of calculation methods, and it is the responsibility of model developers to provide this documentation. It must comprehensively detail the calculation method, demonstrate its effectiveness, and equip users with the necessary information for correct application. Users should refer to relevant standards or scientific literature when utilizing algebraic equations derived from experimental results or analytical solutions. Additionally, when developing standards for calculation methods in fire-safety engineering, it is crucial to include the sources of these methods along with the required technical documentation. The description of the calculation method should encompass all necessary details:

⎯ define the problem solved or function performed;

⎯ describe the results of the calculation method;

⎯ include any feasibility studies and justification statements;

⎯ describe the underlying conceptual model (governing phenomena), if applicable;

⎯ describe the theoretical basis of the phenomena and physical laws on which the calculation method is based, if applicable;

3) implementation of theory, if applicable:

⎯ describe the mathematical techniques, procedures and computational algorithms employed and provide references to them;

Copyright International Organization for Standardization

⎯ identify all the assumptions embedded in the logic, taking into account limitations on the input parameters that are caused by the range of applicability of the calculation method;

⎯ discuss the precision of the results obtained by important algorithms and, in the case of computer models, any dependence on particular computer capabilities;

⎯ describe results of the sensitivity analyses;

⎯ provide information on the source of the data required;

⎯ for computer models, list any auxiliary programs or external data files required;

Data libraries for computer models are essential resources that provide valuable information regarding their sources, contents, and applications. A thorough assessment of the calculation method, including both verification and validation processes, must be comprehensively detailed to ensure accuracy and reliability in results.

⎯ the results of any efforts to evaluate the predictive capabilities of the calculation method in accordance with Clause 5, which should be presented in a quantitative manner;

The documentation for computer models must include references to reviews, analytical tests, comparison tests, experimental validation, and code checking that have been conducted. In cases where the validation of the calculation method relies on beta testing, it is essential to provide a profile of the testers, detailing their involvement in the development of the method, their level of expertise as users, and whether they received any additional instructions not available to the final product's intended users.

⎯ the extent to which the calculation method meets this International Standard

Technical documentation for computer models should be compiled into a single manual. When using explicit algebraic equations to address fire-safety engineering issues, it is important to reference relevant technical documentation from the specified sources.

The verification and validation processes are incomplete until a third party conducts an independent audit, supported by relevant quality-assurance methods. This auditing measures the quality of a calculation method to ensure it meets user requirements, as outlined in the ISO/IEC JTC 1 Software product Quality Requirements and Evaluation (SQuaRE) documents. The evaluation aims to compare a calculation method's quality against user needs or to select the best method among alternatives. Additionally, the technical documentation must include one or more worked examples, which illustrate the necessary input data, their limitations, and the applicability range of the calculation method. Required input data examples include geometry, material properties, and boundary conditions, highlighting the validation scope of the calculations.

International Standards on calculation methods, such as ISO 16734 through ISO 16737, must include worked examples in an informative annex. These examples provide essential guidance on the correct application of the standards, detailing required components and addressing limitations and input parameters. Real-world scenarios, like the temperature of a steel member or fire conditions, are utilized to illustrate these concepts effectively.

In a nuclear power plant, for example, it is crucial to address potential insults to cables. The requirement for worked examples in an informative annex to an International Standard on calculation methods can be satisfied by referencing textbooks that provide such examples.

User's manual

A user's manual is essential for computer models, providing users with the necessary understanding of the model's application and methodology. It should allow for the reproduction of the operating environment and results from sample problems, as well as enable users to modify data inputs and run the program across specified parameter ranges and extreme cases. The manual must be concise enough to serve as a reference for preparing input data and interpreting results. It may include installation, maintenance, and programming documentation, or these may be provided separately. Additionally, it should contain adequate information for program installation and clearly define the specific version of the calculation method, along with the organization responsible for its maintenance and support.

A comprehensive user's manual for computer models is essential, as it must provide all necessary information for users to apply the model correctly. This includes a detailed program description:

⎯ a self-contained description of the model;

⎯ a description of the basic processing tasks performed and the calculation methods and procedures employed (a flowchart can be useful);

⎯ a description of the types of skills required to execute typical runs;

b) installation and operating instructions:

⎯ identify the minimum hardware configuration required;

⎯ identify the computer(s) on which the program has been executed successfully;

⎯ identify the programming languages and software operating systems and version in use;

⎯ provide instructions for installing the program;

⎯ provide the typical personnel time and setup time to perform a typical run;

⎯ provide information necessary to estimate the computer execution time on applicable computer systems for typical applications;

c) program considerations:

⎯ describe the functions of each major option available for solving various problems with guidance for choosing these options;

It is essential to determine the limits of applicability, which includes understanding the range of scenarios where the underlying theory is considered valid and the extent of input data for which the calculation method has been tested.

⎯ list the restrictions and/or limitations of the software, including appropriate data ranges and the program's behaviour when the ranges are exceeded;

d) input data description:

⎯ name and describe each input variable, its dimensional units, the default value (if any) and the source (if not widely available);

⎯ describe any special input techniques;


⎯ identify limits on input based on the stability, accuracy and practicality of the data and the applicability of the model, as well as their resulting limitations to output;

⎯ describe any default variables and the process for setting those variables to user-defined values;

⎯ if handling of consecutive cases is possible, explain the conditions of data retention or re-initialization from case to case;

e) external data files:

⎯ describe the contents and organization of any external data files;

⎯ provide references to any auxiliary programs that create, modify or edit these files;

f) system control requirements:

⎯ detail the procedure required to set up and run the program;

⎯ list the operating system control commands;

⎯ list the program's prompts, with the ranges of appropriate responses;

To pause a program during execution, you can use specific commands or keyboard shortcuts, depending on the operating system. Resuming or exiting the program can typically be achieved through the same interface. After an interruption, it is crucial to check the status of files and data to ensure no loss of information has occurred. Additionally, output information should be clearly displayed to inform users of the current state of the program:

⎯ describe the program output and any graphics display and plot routines;

⎯ provide instructions on judging whether the program has converged to a good solution, where appropriate;

h) sample problems/worked examples:

To ensure users can verify the program's functionality, it is essential to provide sample data files along with their corresponding outputs. These sample problems should cover a wide range of the program's features, as referenced in 4.2 c). Additionally, effective error handling is crucial for a seamless user experience:

⎯ list error messages that can be generated by the program;

⎯ provide a list of instructions for appropriate actions when error messages occur;

⎯ describe the program's behaviour when restrictions are violated;

General

Verification and validation of a calculation method are essential processes that assess its accuracy in representing real-world scenarios and its alignment with the developer's conceptual framework. Verification focuses on confirming that the equations are solved correctly, while validation ensures that the results align with real-world expectations.

Figure 1 illustrates the phases of modeling and simulation, highlighting the significance of verification and validation in the context of computer fire models.

Figure 1 — Phases of development and assessment of computer(ized) models

The conceptual model is developed through the analysis of real-world systems, incorporating mathematical data and equations that represent physical phenomena, including the Navier-Stokes equations and principles of energy and mass conservation, along with models for turbulence, human behavior, structural integrity, and risk. Verification focuses on the connection between the conceptual model and the computerized model, whereas validation assesses the relationship between the computational model and actual reality.

Figure 2 develops Figure 1 and presents it in the form of a flowchart for general application, illustrating the potential use of algebraic equations where deemed appropriate.

The process begins with a foundational understanding of tests, experiments, or surveys to accurately depict real-world phenomena. A conceptual model is then created, providing a detailed verbal description of the relevant processes, which evolves into a series of mathematical relationships. These relationships are systematically analyzed, allowing for solutions to be derived by simplifying complex equations step by step. This approach incorporates approximations to ensure that the problem can be resolved with adequate accuracy while maintaining reasonable demands on time and computational resources.

The calculation method of a computer model must be evaluated by experts knowledgeable in fire science and computational techniques who are not part of the model's development. This evaluation should focus on the thoroughness of the documentation, especially concerning numerical approximations. Reviewers need to determine if there is adequate scientific evidence in the literature to support the methodologies employed. Additionally, the accuracy of data used for constants and default values in the code should be critically assessed.


In the context of calculation methods and their intended use, it is crucial to define practical upper and lower limits for input variables This ensures that the application remains within a proven range of applicability, especially when numerical constants may vary for specific scenarios.

To ensure the reliability of a solution system, it is essential to conduct verification and validation processes to identify potential sources of error. While algebraic equations are addressed in this International Standard, their complexity in relation to fire-related phenomena makes them unnecessary for evaluating empirical calculation methods.

Figure 2 — Flowchart representation of model assessment, including validation and verification

The methodology extends beyond fire spread issues, encompassing the assessment, validation, and verification of calculation methods related to human behavior and movement, structural behavior, and risk assessment, where risk is defined as the product of the probability of occurrence and the consequence, as outlined in ISO 16732.

Verification

Verification involves confirming that a calculation method is implemented correctly according to the developer's conceptual description. It ensures that the equations are solved accurately, but it does not assess the appropriateness of the governing equations themselves. The focus is on the fidelity of the implementation to the intended calculation method and its solution.

The verification process aims to ensure code correctness and evaluate numerical error control, which can be categorized into three types: round-off error, arising from the finite representation of real numbers in computers; truncation error, occurring when a continuous process is approximated by a finite one, such as truncating an infinite series or stopping iterations upon meeting a convergence criterion; and discretization error, which happens when a continuous process, like a derivative, is approximated by a discrete analog, such as a divided difference.
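
The three error types named above can be illustrated with a small sketch. This is an illustration of the concepts, not part of the standard; the example functions and step sizes are chosen only for demonstration.

```python
import math

# Round-off error: finite floating-point representation of real numbers.
round_off = abs((0.1 + 0.2) - 0.3)          # nonzero in binary floating point

# Truncation error: an infinite process cut off after finitely many steps,
# here exp(1) approximated by the first 5 terms of its Taylor series.
partial_sum = sum(1.0 / math.factorial(k) for k in range(5))
truncation = abs(math.e - partial_sum)

# Discretization error: a continuous operation replaced by a discrete analog,
# here d/dx sin(x) at x = 1 replaced by a divided difference with step h.
h = 1e-3
divided_diff = (math.sin(1 + h) - math.sin(1)) / h
discretization = abs(math.cos(1) - divided_diff)

print(round_off, truncation, discretization)
```

Each quantity is small but nonzero, which is exactly why the verification process must show that these errors stay controlled.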

An assessment of a computational method should include an analysis and discussion of the methods used and of the inherent limitations in the particular choices that were made.

Program code can be evaluated structurally, either manually or through code-checking software, to identify irregularities and inconsistencies. Clearly documenting the techniques used and any deficiencies found enhances confidence in the program's data-processing reliability. However, this process does not guarantee the program's overall adequacy or accuracy. While errors may not render a program completely unusable, documenting these "bugs" is essential to prevent the use of affected functions.

Mathematical models are typically represented by differential or integral equations, which can be quite complex, making analytical solutions challenging to obtain. To find approximate solutions, numerical techniques are essential. These methods involve discretizing the continuous mathematical model into a discrete numerical format, with discretization occurring in both time and space (grid).

A continuous mathematical model can be discretized in various ways, leading to multiple discrete models. For an accurate approximation of the continuous model's solution, the discrete model must replicate its properties and behavior. This ensures that the discrete solution converges to the continuous problem's solution as the discretization parameters, such as time step and space mesh, decrease. Achieving this convergence requires meeting the criteria of consistency and stability. Consistency ensures that the discrete model closely approximates the continuous model, while stability guarantees that error terms do not increase over time.
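
The convergence behaviour described above can be observed numerically. A minimal sketch, using forward Euler on the toy equation y' = -y as a stand-in for a consistent, stable scheme (this example is not from the standard):

```python
import math

def euler_error(n_steps):
    """Forward-Euler solution of y' = -y, y(0) = 1, compared with exp(-1) at t = 1."""
    dt = 1.0 / n_steps
    y = 1.0
    for _ in range(n_steps):
        y += dt * (-y)          # one explicit Euler step
    return abs(y - math.exp(-1.0))

# Halving the time step roughly halves the error: first-order convergence,
# exactly the behaviour consistency plus stability guarantee for this scheme.
errors = [euler_error(n) for n in (100, 200, 400)]
ratios = [errors[i] / errors[i + 1] for i in range(2)]
print(errors, ratios)
```

The error ratios cluster near 2, confirming that the discrete solution approaches the continuous one as the time step decreases.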

The formal order of error in spatial and temporal discretization is crucial and requires thorough discussion. While a comprehensive analysis may not be feasible, it is essential to evaluate the accuracy of transforming equations into discrete numerical forms as part of the verification process.

Fire-related issues often arise from the interplay of various physical processes, including chemical, thermal, and mechanical responses. The significant differences in the time and space scales of these processes can lead to numerical challenges. As a result, solving the associated differential equations becomes complex.


When selecting time and space steps for numerical computations, it is crucial to exercise caution to ensure stability, particularly regarding the time step in transient analyses, and to achieve sufficient convergence. Employing numerical techniques that dynamically monitor discretization parameters can help meet stability and accuracy requirements, such as using a posteriori error estimation combined with dynamic mesh refinement for spatial discretization. These methods are especially recommended for addressing time stability in nonlinear problems, as seen in zone models. Comprehensive code documentation should detail the implementation of these techniques, and numerical experiments should validate the algorithms used. However, users are still encouraged to conduct their own studies on time and space convergence for specific computations, with the systematic approach to selecting discretization parameters left to their discretion.

5.2.4 Iterative convergence and consistency tests

It is important to check that the implementation of the conceptual model as a computer program is done correctly. For this purpose, the following procedures should be executed, when applicable:

⎯ Check the residual error criteria

⎯ Check for the stability of output variables

⎯ Apply global checks on the conservation of appropriate quantities

⎯ To the extent possible, do a comparison against analytic solutions

⎯ Do a comparison against more accurate solutions obtained by more complete models that are known to have been verified and validated

⎯ Check the effects of artificial boundary conditions for open-flow problems
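
As one illustration of the global conservation check in the list above, a toy two-zone mass exchange should conserve total mass to machine precision. The zone setup, names, and rate constant are invented for this sketch; a real check would monitor the conserved quantities of the actual model.

```python
def run_two_zone(m_upper=2.0, m_lower=8.0, rate=0.3, steps=50, dt=0.1):
    """Exchange mass between two zones; return the history of total mass."""
    totals = []
    for _ in range(steps):
        flow = rate * (m_lower - m_upper) * dt   # flux from lower to upper zone
        m_upper += flow
        m_lower -= flow
        totals.append(m_upper + m_lower)          # conserved quantity
    return totals

totals = run_two_zone()
drift = max(abs(t - totals[0]) for t in totals)
print(drift)   # any drift beyond round-off signals an implementation error
```

A drift materially above round-off level would indicate that the implementation violates conservation, even if individual outputs look plausible.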

5.2.5 Review of the numerical treatment of models

Verifying a model's assessment involves ensuring that the documented equations and methods are accurately implemented. This process includes evaluating the documentation, checking the implementation of equations in the code, and analyzing the discretization and numerical methods employed.

Validation

The first step involves comparing model predictions with relevant data, emphasizing the distinction between theoretical models and real-world experimental data. It is crucial to ensure that the chosen models and input data accurately reflect the experiments being analyzed. Both models and data have their limitations, necessitating the inclusion of inherent errors and uncertainty statements in the comparison. Validation is achieved when the model produces suitable results for the input data corresponding to the scenarios being examined.

General analytic solutions for fire problems are non-existent, even in the simplest scenarios, meaning closed-form solutions are unavailable. However, validation can be achieved through two methods: first, by comparing individual algorithms with experimental data, and second, through straightforward experiments, like conduction and radiation, that yield asymptotic results. For instance, in a basic single-compartment test case without fire, temperatures should asymptotically stabilize to a single value, which a model must be able to replicate. Additionally, it is feasible to compute solutions for scenarios with known analytic answers, even if such situations are not commonly encountered.
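
The no-fire asymptote mentioned above can serve as a simple acceptance check. A sketch using an analytic lumped-cooling solution; the time constant and temperature values are invented illustration values, not figures from the standard:

```python
import math

def compartment_temp(t, t_ambient=293.0, t_initial=350.0, tau=600.0):
    """Analytic lumped-cooling temperature T(t) for a fire-free compartment."""
    return t_ambient + (t_initial - t_ambient) * math.exp(-t / tau)

# After many time constants, the temperature must settle at ambient; a model
# of the same scenario should reproduce this single asymptotic value.
late = compartment_temp(10 * 600.0)      # ten time constants later
print(abs(late - 293.0))                 # residual from the asymptote
```

A model whose no-fire run fails to settle to a single ambient value in this way would fail even this simplest validation case.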

Correlations are valid predictive tools, and it is necessary that they be validated in the same sense as detailed computer models, using similar statistical methods.

The process of validation includes a statement of the range of validity of the input data.

The data, in general, are expected to ensure:

⎯ the completeness of environmental data, like temperature gradients in buildings or temperature differences between the building interior and the outside, wind effects,

Using accurate property data is crucial; for instance, when constants are employed, a sensitivity analysis must demonstrate their impact on the results. If constants replace temperature-dependent variables, it is essential to assess the implications of this approximation within the model's or calculation method's applicable range.

Data sourced from literature must be properly referenced, including handbooks, standards, journals, and research reports. If the data are not derived from peer-reviewed literature, they should be verified against credible evidence.

The principles governing the prediction of fire behavior in buildings and evacuation processes remain consistent, regardless of the complexity of the models or calculation methods employed. Additionally, human behavior significantly impacts these outcomes and should be assessed using the same foundational principles. For further insights, refer to Reference [3], which provides data on deterministic fire models.

5.3.2 Comparison of the complete calculation method against appropriate results

5.3.2.1 Comparison of a single-value prediction with data

Algebraic equations typically yield single-value predictions, similar to those generated by computer models. It is essential to validate these single-value predictions against experimental or survey data when available, ensuring that the data correspond to the same initial and boundary conditions. For further information, refer to 5.3.3, which provides additional context applicable to this discussion.

5.3.2.2 Comparison of a time-value prediction with data

Annex B outlines a method for quantifying the similarities and differences between two curves, such as the upper-layer temperature time history from model predictions and experimental data. By treating these curves as infinite-dimensional vectors, the method employs vector analysis to articulate the differences. This approach offers a quantitative means to validate fire models and assess the uncertainty in experimental data.

To assess the differences between two curves, two key metrics are essential. The first is the relative difference, which quantifies the disparity between the curves, yielding a value of 0 for identical curves and increasing as the difference grows. The second metric, the cosine, measures the similarity in shape between the two curves.

The value ranges from 1 to -1, where 1 indicates that the curves share the same shape, -1 signifies that the curves are mirror images of each other, and a value of 0 denotes that the curves have no similarities.

Annex B describes the method and appropriate equations and shows examples of how to do the comparison.
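
One plausible reading of the two metrics, treating sampled curves as finite vectors with the ordinary Euclidean inner product; this is an assumption for illustration, and Annex B of the standard gives the exact functional forms.

```python
import math

def relative_difference(model, data):
    """0 for identical curves; grows as the discrepancy grows."""
    diff = math.sqrt(sum((m - d) ** 2 for m, d in zip(model, data)))
    norm = math.sqrt(sum(d ** 2 for d in data))
    return diff / norm

def cosine(model, data):
    """1 = same shape, -1 = mirror images, 0 = no similarity in shape."""
    dot = sum(m * d for m, d in zip(model, data))
    n_model = math.sqrt(sum(m ** 2 for m in model))
    n_data = math.sqrt(sum(d ** 2 for d in data))
    return dot / (n_model * n_data)

# Hypothetical upper-layer temperature samples, used only to exercise the metrics.
curve = [20.0, 180.0, 320.0, 410.0, 450.0]
print(relative_difference(curve, curve))          # identical curves -> 0
print(cosine(curve, [-x for x in curve]))         # mirror image -> -1
```

Applied to a model prediction and an experimental time history sampled at the same instants, these two numbers summarize magnitude and shape agreement, respectively.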

5.3.2.3 Review of the theoretical and experimental basis of probabilistic models

In a probabilistic model used for risk assessment, the equations primarily define risk through a probabilistic function based on various scenarios and help derive necessary probabilities from more accessible ones. A thorough review of these equations should address the following questions about their accuracy and reliability.


⎯ Does the model use only well-defined probability variables and parameters?

Probabilistic modeling and risk assessment rely on experiential databases and engineering judgment to generate probability variables and parameters. The accuracy of single-value estimates is validated by comparing them with alternative estimates derived from independent data sources. For instance, judgment values from one group of experts can be contrasted with those from another group, emphasizing the importance of expert characteristics that may influence their judgments. Additionally, experientially based probabilities, such as ignition probabilities, can be verified by comparing them to similar probabilities derived from different locations or time periods.

Risk assessment output variables are generally determined by probabilities and consequences, with the latter originating from a deterministic model. The predictions made by this model can be validated as outlined in the relevant International Standard. To ensure accuracy, the overall risk calculation or that of a specific subsystem can be validated against actual loss experiences. When using experiential probability values, it is essential that the loss experiences for validation are sourced from the same locations and timeframes used to establish those probability values.

⎯ Do the probability variables, parameters and calculations follow the laws of probabilities (e.g. probabilities must fall between 0 and 1)?

⎯ Are all equations that employ conditional probabilities complete?

EXAMPLE For the equation P(A) = [P(A given B) × P(B)] + [P(A given not B) × P(not B)]

To ensure clarity in the expression, it is essential to explicitly state that either P(not B) or, more commonly, P(A given not B) is zero or nearly zero if the second part of the expression is omitted.
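
The completeness check in the EXAMPLE can be made concrete. A sketch of the law of total probability with invented example values, showing the error introduced when the second term is silently dropped:

```python
def total_probability(p_a_given_b, p_b, p_a_given_not_b):
    """P(A) = P(A|B)·P(B) + P(A|not B)·P(not B)."""
    return p_a_given_b * p_b + p_a_given_not_b * (1.0 - p_b)

# Invented values: P(A|B) = 0.9, P(B) = 0.3, P(A|not B) = 0.05.
full = total_probability(0.9, 0.3, 0.05)
truncated = 0.9 * 0.3                     # equation with second term omitted

# The omission is only harmless when P(A|not B) (or P(not B)) is ~0;
# here it understates P(A) by 0.05 * 0.7 = 0.035.
print(full, truncated, full - truncated)
```

This is exactly the situation the review question targets: an equation that omits the "not B" branch must state explicitly why that branch is negligible.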

⎯ Is risk defined by an explicit expression linking the measure of risk to probabilities and consequences of scenarios? If not, is there an underlying expression?

⎯ Does the expression defining risk in terms of scenarios capture all possible scenarios? If not, does the calculation comprehensively address the impact of the omitted scenarios on the calculation?

⎯ Are the uncertainties associated with the probabilistic variables and parameters explicitly addressed in the calculation? Are both random uncertainties and sources of systematic bias considered and addressed?

⎯ If any equations are simplified from the complete forms, have they been compared for accuracy with their complete counterparts?

5.3.3 Comparison of subsystem models or of submodels against appropriate results

Validating the predictions of subsystem models, such as smoke-filling or plume models, against experimental data is essential for establishing confidence in their predictive capabilities. When phenomena are not fully understood, empirical validation serves as a crucial method to ensure that the model's representation is suitable for its intended application. Predictions must be generated independently of the experimental data used for validation, although necessary input data from bench-scale tests are exempt from this rule. It is important to systematically account for uncertainties in measurements, and no adjustments should be made to fit the predictions to the experimental results.

Comparison of predictions with data requires:

⎯ thorough understanding of the sources of uncertainty in the data,

⎯ quantification of these sources of uncertainty,

⎯ sensitivity analysis to assess the effect of the uncertainty on the predictions,

⎯ data/program comparison techniques to account for such uncertainty

Most published work on the comparison of model predictions with experimental data is qualitative, i.e. reported as “satisfactory”, “good”, or “reasonable”. Some guidance on quantification is found in

All these comparisons may be against a) analytical solutions (cf. verification), b) benchmark cases (well-defined, accurate results known), c) a set of experimental results, d) other computer codes, e) surveys.

Sensitivity analysis

A sensitivity analysis of a calculation method examines how variations in specific parameters influence the results produced. It highlights that predictions can be affected by uncertainties in input data, the rigor of the modeling of relevant physics and chemistry, and the adequacy of numerical treatments. A thorough sensitivity analysis identifies dominant variables, establishes acceptable value ranges for input variables, and illustrates how output variables respond to changes in input data. Additionally, it informs potential users about the necessary care in selecting inputs and running the model, while also providing guidance on which parameters to monitor in large-scale experiments.

Performing a sensitivity analysis on a complex fire model is challenging due to the extensive input data required and the multiple output variables predicted over a simulated time frame. The selection of the analysis technique is influenced by the study's objectives, desired outcomes, available resources, and the model's complexity.

Two basic approaches exist for obtaining sensitivity information.

Local methods generate sensitivity measures for specific input parameters, necessitating repetition across a range of inputs to assess overall model performance. Finite-difference methods can be utilized without altering the model's equations, but they require careful selection of input parameters for accurate estimates. Direct methods augment the model's equation set with sensitivity equations derived from it, allowing for the simultaneous solution of these equations to obtain sensitivities. Incorporating direct methods into fire model design is essential, as they are often lacking in existing fire models.
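
A local, finite-difference sensitivity estimate needs no change to the model's equations, only repeated runs with perturbed inputs. A sketch with an invented power-law surrogate for peak temperature versus heat-release rate (the surrogate and its coefficients are illustration values, not a model from the standard):

```python
def peak_temperature(hrr_kw):
    """Toy surrogate model: peak upper-layer temperature rise vs. HRR (kW)."""
    return 20.0 + 16.0 * hrr_kw ** (2.0 / 3.0)   # invented 2/3-power shape

def local_sensitivity(model, x, rel_step=0.01):
    """Central-difference estimate of d(model)/dx at input x."""
    h = rel_step * x                              # perturbation sized to the input
    return (model(x + h) - model(x - h)) / (2.0 * h)

# Sensitivity of peak temperature to HRR at a 500 kW operating point.
s = local_sensitivity(peak_temperature, 500.0)
print(s)
```

Note the caveat from the text: the estimate is valid only near the chosen input, so the computation must be repeated across the input range to characterize the model as a whole.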

Global methods generate sensitivity measures that are averaged across the full spectrum of input parameters. However, these methods necessitate an understanding of the probability density functions of the input parameters, which are typically unknown in the context of fire models.

Local methods are most easily applied. Global methods are appropriate if the range of input information is known, for example in risk calculations for fire safety engineering.

Despite the ability to define sensitivities and develop various computation methods, conducting a sensitivity analysis remains challenging. Iman and Helton [7] highlight several characteristics of complex computer models that complicate this analysis:

⎯ There are many input and output variables

⎯ Discontinuities can exist in the behaviour of the model

⎯ Correlations can exist among the input variables, and the associated marginal probability distributions are often non-normal


⎯ Model predictions are nonlinear, multivariate, time-dependent functions of the input variables

⎯ The relative importance of individual input variables is a function of time

The sensitivity equations share these characteristics: for a given model output and input, there can be time intervals during which the output is sensitive to that input and other intervals during which it is insensitive to the same parameter.

A sensitivity analysis of a fire model can address two questions. The first is how sensitive the model is to a given input relative to the other inputs. Answering it requires examining model outputs over a broad range of inputs representative of the model's domain of applicability; the result is a global picture of the relative importance of the individual input variables on selected outputs.

The second question is how accurately a particular input needs to be specified. It concerns the effect of uncertainties in selected inputs on the model outputs, rather than the overall behaviour of the model, and it can be addressed by examining small perturbations of the inputs. If a specific scenario is of interest, the effect of input perturbations about that scenario can be investigated.

As suggested by Iman and Helton [7] , an average relative difference can thus be used to characterize the model sensitivity for comparing individual inputs and outputs
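One plausible reading of such an average relative difference — the mean relative deviation of a perturbed output time series from the base-case series — can be sketched as follows. The numbers are toy values, not taken from Reference [7].

```python
def average_relative_difference(base_series, perturbed_series):
    """Mean of |y_p - y_b| / |y_b| over the simulated time steps
    (one plausible reading of an average relative difference measure)."""
    assert len(base_series) == len(perturbed_series)
    return sum(abs(p - b) / abs(b)
               for b, p in zip(base_series, perturbed_series)) / len(base_series)

# Toy upper-layer temperatures (K) at successive output times
base = [293.0, 350.0, 420.0, 480.0, 510.0]
perturbed = [293.0, 360.0, 440.0, 505.0, 540.0]
ard = average_relative_difference(base, perturbed)  # ~0.037, i.e. ~3.7 %
```

Computed per input-output pair, such a single number allows the sensitivities of different inputs and outputs to be compared on a common scale.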

Figure 3 shows an example response surface for a fire model: the effect of varying the peak heat-release rate (HRR) and the vent width on the peak upper-layer temperature. The model calculations are normalized to the values of a basic scenario, and the surface grid is a spline interpolation of the computed points. In this example, the peak HRR has a considerably stronger effect on the peak upper-layer temperature than the vent width. As expected, the temperature rises with increasing HRR and falls with increasing vent opening, but neither effect is linear.

1 basic scenario values to which the actual model calculations are normalized

NOTE This figure shows the effect of a changing heat-release rate and vent size on the upper-layer temperature

Figure 3 — Response surface for a fire model
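The normalized grid underlying a response surface such as Figure 3 can be sketched as follows. The temperature correlation and the basic scenario values are hypothetical stand-ins for actual fire-model runs.

```python
def peak_layer_temperature(hrr_kw, vent_width_m):
    # Hypothetical stand-in for a fire-model run (illustration only)
    return 293.0 + 16.9 * hrr_kw ** (2.0 / 3.0) / vent_width_m ** 0.5

# Basic scenario to which all calculations are normalized (key 1 in Figure 3)
base_q, base_w = 500.0, 1.0
base_t = peak_layer_temperature(base_q, base_w)

# Evaluate the model on a grid of normalized inputs (0.5x to 1.5x of base)
factors = [0.5, 0.75, 1.0, 1.25, 1.5]
surface = {
    (fq, fw): peak_layer_temperature(fq * base_q, fw * base_w) / base_t
    for fq in factors
    for fw in factors
}
# surface[(1.0, 1.0)] == 1.0 by construction; in this toy model the
# normalized temperature rises with HRR and falls with vent width
```

Interpolating such a grid (e.g. with splines, as in Figure 3) yields a continuous surface from a modest number of model runs.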

Quality assurance

Computer programs can be evaluated for quality-assurance certification using established software quality models. Such models address both external quality, which matters to the end-user, and internal quality, which concerns whether the program is built correctly. Software quality is described by six characteristics — functionality, reliability, usability, efficiency, maintainability and portability — each of which is subdivided into sub-characteristics (see Figure 4). Internal quality is judged from a set of measured internal attributes for each characteristic and sub-characteristic, while external measurements assess the capabilities the software system provides in use.

Figure 4 — Quality model for external and internal quality showing characteristics and sub-characteristics

To evaluate the quality of a calculation method, the procedures specified in the ISO/IEC 25000 series, which are based on ISO/IEC 9126-1 and ISO/IEC 14598-1, should be followed A concise summary of this procedure is available in Annex C.

6 Requirements for reference data to validate a calculation method

Reference data for validating a calculation method can be sourced from experiments, surveys of single or multiple well-documented fire events, or other established calculation methods relevant to the parameters and quantities being validated.

Data used to define, set or estimate a quantity in a calculation method cannot be used to validate that quantity It is necessary that independent data be used for validation

Differences between a calculation-method quantity and data used to validate that quantity can be due to errors in the quantity or errors in the data

In order to use reference data to validate a calculation method, the magnitude and nature of the errors in those data, and their contribution to the differences between calculation-method quantities and the data, have to be assessed. It is therefore necessary that the completeness, quality, precision and bias of the reference data be characterized, for example, by one of the following, before they can be used for validation.

⎯ For experimental data, it is necessary to evaluate the repeatability and reproducibility of the test procedures used. Alternatively, the precision and bias of the reference data can be established from existing precision-and-bias characterizations of the measurement devices or apparatus used to generate the data.

⎯ For statistical survey data, it is necessary to assess the representativeness of the survey design and of the resulting sample, as well as the uncertainty associated with the sample size.

⎯ For forensic analysis data, it is necessary to state how these data were collected

⎯ For data from other validated calculation methods, it is necessary to provide evidence of that validation, including a characterization of the precision and bias and of the sources and magnitudes of any errors.

It is necessary to evaluate whether the conditions under which the reference data were generated are consistent with the assumptions of the calculation method, including its initial and boundary conditions. For example, reference data from experiments conducted only on young, healthy adults cannot fully validate a calculation method intended to apply to a more diverse population.

Validation applies to the calculation method as a whole and to its subsystems and sub-models; appropriate data need to be identified and obtained for each level of validation.

It is necessary to evaluate any reduction, conversion or interpretation applied to raw data in producing the reference data. For example, if the raw survey data cover only high-rise office buildings of certain heights in one city, their applicability to other building types, heights or locations in other countries is an imperfect match that needs to be acknowledged and assessed.

It is necessary to assess the independence of the reference-data generation and review from the development of the calculation method

Complete validation of a calculation method requires assessment of all its outputs over the full range of cases and conditions to which it may be applied. Reference data are needed to support such an assessment, and any limitations in the range of outputs and cases evaluated have to be documented as restrictions on the use of the validated calculation method. For example, if experimental data are available only for ceiling temperatures, the method's predictions of temperatures elsewhere in the room remain unvalidated.

Much of Clause A.1 is taken from Reference [1]

Clause A.1 aids experimenters in quantifying measurement uncertainty and assists model users in evaluating the relevance of experimental data for empirical model validation However, it is important to note that not all published experimental data provide uncertainty information.

A measurement result is only an approximation of the value of the specific quantity being measured, and it is complete only when accompanied by a quantitative statement of its uncertainty. The uncertainty of a measurement result generally consists of several components, which can be grouped into two categories according to the method used to estimate their numerical values, following the approach of the International Committee for Weights and Measures (CIPM):

⎯ type A: those that are evaluated by statistical methods;

⎯ type B: those that are evaluated by other means

Type A and type B uncertainty components are sometimes referred to as random and systematic uncertainties, respectively. Each component, however evaluated, is represented by an estimated standard deviation, termed standard uncertainty, u_i, equal to the positive square root of the estimated variance u_i^2. For a type A component, u_i = s_i, the statistically estimated standard deviation, with its associated number of degrees of freedom v_i. For a type B component, u_j is an approximation to the corresponding standard deviation: the positive square root of u_j^2, an estimated variance obtained from an assumed probability distribution based on all the available information; u_j is treated like a standard deviation.

A.1.2 Type A evaluation of standard uncertainty

A type A evaluation of standard uncertainty can be based on any valid statistical method for treating data. Examples include calculating the standard deviation of the mean of a series of independent observations, or using the method of least squares to fit a curve to data in order to estimate the parameters of the curve and their standard deviations. For statistical techniques applicable to such evaluations, see References [9] to [12].
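For the common case of n independent repeated observations, the standard deviation of the mean gives a type A standard uncertainty of s/√n. A minimal sketch, with made-up temperature readings:

```python
import math

def type_a_standard_uncertainty(observations):
    """Type A standard uncertainty of the mean: s / sqrt(n), where s is the
    sample standard deviation of n independent observations."""
    n = len(observations)
    mean = sum(observations) / n
    s = math.sqrt(sum((x - mean) ** 2 for x in observations) / (n - 1))
    return mean, s / math.sqrt(n)

# Made-up repeated temperature readings, degrees C (n = 5, so v = 4
# degrees of freedom are associated with this estimate)
readings = [412.1, 415.3, 410.8, 413.9, 414.2]
mean, u = type_a_standard_uncertainty(readings)
```

The result would be reported as the mean together with its standard uncertainty, e.g. 413.3 °C with u ≈ 0.8 °C.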

A.1.3 Type B evaluation of standard uncertainty
