In today's post we will look at some more challenges with algorithms for numerical computations.
The second type of problem is round-off error. Computers use bounded representations, i.e. we specify a value with the following format: a sign, a mantissa, an implicit base, and an exponent. For example, 6.02 x 10^23 is stored as + .602 (10) 24, that is, sign +, mantissa .602, base 10, exponent 24.
However, this cannot exactly represent numbers with infinitely many digits, such as pi, which get truncated.
Binary representations work the same way, with base 2 and a p-bit mantissa.
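To make the sign/mantissa/exponent split concrete, here is a minimal Python sketch. math.frexp gives the binary analogue of the decimal form above; the variable names are just illustrative.

    # Decompose a float into sign, mantissa and exponent.
    # math.frexp returns (m, e) with x == m * 2**e and 0.5 <= |m| < 1,
    # the binary analogue of the + .602 (10) 24 form above.
    import math

    x = 6.02e23
    mantissa, exponent = math.frexp(x)
    sign = "+" if x >= 0 else "-"
    print(sign, abs(mantissa), "* 2 **", exponent)  # + 0.9959... * 2 ** 79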
More precision implies more accuracy, but it also means slower computation.
Single precision gives about 6-7 decimal digits of precision, double precision about 13-14, and extended precision about 19-20.
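To see what those precision levels mean in practice, here is a small sketch that round-trips pi through 32-bit single precision (Python's struct module does the conversion) and compares it with the native 64-bit double:

    import struct

    pi = 3.14159265358979323846
    # Packing as "f" rounds to single precision; unpacking brings it back.
    single = struct.unpack("f", struct.pack("f", pi))[0]
    print(f"double: {pi:.17f}")      # the full double-precision value
    print(f"single: {single:.17f}")  # only ~7 significant digits survive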
This round-off error has to be measured, and it is done as follows: if we denote the actual value of a numeric variable as alpha* and its stored representation as alpha, then the absolute error is the magnitude of the difference, |alpha* - alpha|, and the relative error divides that by |alpha*|.
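In code, both flavours of error look like this (the function names are my own, not anything standard):

    import math

    def abs_error(actual, representation):
        # |alpha* - alpha|
        return abs(actual - representation)

    def rel_error(actual, representation):
        # |alpha* - alpha| / |alpha*|
        return abs(actual - representation) / abs(actual)

    pi_truncated = 3.141592  # pi cut off after six decimal places
    print(abs_error(math.pi, pi_truncated))  # ~6.5e-07
    print(rel_error(math.pi, pi_truncated))  # ~2.1e-07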
There are other representation errors as well:
1. Overflow. The value is too large to represent.
We can handle this by reordering operations. For example, (large1 * large2) / large3 can overflow in the intermediate product, but doing the division first, (large1 / large3) * large2, keeps the intermediate values small.
Another approach is to use an alternate representation, pre-computing or simplifying the expression: instead of computing 100!/(2! 98!) directly, cancel the common terms and compute (100 * 99)/2.
The third way to avoid overflow is to work with logarithms, since they reduce large numbers considerably. All three strategies are sketched in code below.
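Here is a sketch of all three strategies on the binomial example. It assumes Python 3.8+ for math.comb; nothing else is non-standard.

    import math

    # (100! itself still fits in a double, but 171! already overflows
    # one, so these rearrangements matter quickly as n grows.)

    # 1. Reorder/simplify: cancel the 98! analytically, then compute.
    simplified = (100 * 99) / 2

    # 2. Alternate representation: math.comb uses exact integer
    #    arithmetic and never touches floating point at all.
    exact = math.comb(100, 2)

    # 3. Logarithms: lgamma(n + 1) == log(n!), so every intermediate
    #    value stays small; exponentiate only at the very end.
    log_value = math.lgamma(101) - math.lgamma(3) - math.lgamma(99)
    via_logs = math.exp(log_value)

    print(simplified, exact, via_logs)  # all ~4950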
2. Underflow. This shows up when a value is too close to zero to represent and the computation silently proceeds with zero in its place; ideally an underflow flag should be raised so the program knows it happened. It differs from overflow in that the computation can often reasonably continue as if the value were zero.
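A quick sketch of underflow in action, and of the same logarithm trick rescuing it:

    import math

    p = 1e-200
    product = p * p      # true value is 1e-400, below the smallest
    print(product)       # positive double (~5e-324), so this prints 0.0

    # Working in log space sidesteps the underflow entirely.
    log_product = math.log(p) + math.log(p)
    print(log_product)   # ~ -921.03, i.e. log(1e-400)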
3. Arithmetic operations are not exact. For example, take the value of pi and drop the seventh decimal digit, which happens to be 6. We expect the difference between the old and new values to be 6 * 10^(-7), but because the two numbers agree in all their leading digits, the subtraction cancels almost all of the significant information: with limited precision the computed difference can even come out as zero, and its relative error is large.
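The effect is easy to reproduce. This sketch emulates single precision with struct and runs the pi example above; the computed difference is off by a large relative margin:

    import struct

    def to_single(x):
        # Round-trip through 32 bits to emulate single precision.
        return struct.unpack("f", struct.pack("f", x))[0]

    a = to_single(3.1415926)  # pi to seven decimal places
    b = to_single(3.141592)   # seventh decimal digit (6) dropped
    print(a - b)              # not 6e-07: cancellation has destroyed
                              # most of the significant digits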
Errors also cascade through a numerical computation: an error introduced early is propagated to later stages, so a slight change in an intermediate number can produce a completely different final answer.
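One classic illustration: add 0.1 a million times. Every addition rounds, the rounding errors pile up, and the naive total drifts; math.fsum does compensated summation and keeps track of the lost digits:

    import math

    terms = [0.1] * 1_000_000
    naive = sum(terms)        # round-off accumulates at every addition
    exact = math.fsum(terms)  # compensated (error-tracking) summation
    print(naive)              # e.g. 100000.00000133288
    print(exact)              # 100000.0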