In IT audit or financial audit, error rates are a common metric for evaluating the accuracy of financial or operational data. The expected error rate is the percentage of errors the auditor anticipates finding in a particular sample, while the tolerable error rate is the maximum error rate the auditor will accept for that sample.
For example, suppose a company is auditing its accounts payable process. The auditor takes a sample of 100 invoices and reviews each one to determine whether it contains errors.
The expected error rate might be 3%, meaning the auditor anticipates that about 3 of the 100 invoices will contain errors. The tolerable error rate might be set at 5%, meaning that no more than 5 of the 100 invoices can contain errors without triggering further investigation or remediation. Note that the tolerable rate is set above the expected rate; if the auditor already expected errors to exceed what the process can tolerate, sampling would only confirm that the control is not operating effectively.
If the auditor finds that only 2 of the 100 invoices contain errors, the result falls within both thresholds, and the company can be satisfied that its accounts payable process is operating effectively. However, if the auditor finds errors in 6 or more of the 100 invoices, the tolerable error rate has been exceeded, and additional investigation or remediation is required to address the issues.
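The comparison above is simple arithmetic: divide the number of errors found by the sample size and compare the result against the two thresholds. The sketch below illustrates that logic; the function name, parameter names, and default rates are illustrative assumptions matching the example, not part of any audit standard.

```python
# Minimal sketch of the error-rate comparison described above.
# All names and default values are illustrative, not prescriptive.

def evaluate_sample(errors_found: int, sample_size: int,
                    expected_error_rate: float = 0.03,
                    tolerable_error_rate: float = 0.05) -> str:
    """Compare the observed error rate in a sample against the
    expected and tolerable error rates."""
    observed_rate = errors_found / sample_size

    if observed_rate <= expected_error_rate:
        return "within expectations: no further action needed"
    elif observed_rate <= tolerable_error_rate:
        return "above expectations but within tolerance: monitor the process"
    else:
        return "exceeds the tolerable error rate: investigate and remediate"


# The accounts payable example: a sample of 100 invoices.
print(evaluate_sample(2, 100))   # 2% observed -> within expectations
print(evaluate_sample(6, 100))   # 6% observed -> exceeds the tolerable rate
```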