Common issues encountered during longevity tool development include:
1. The test configuration loads only the first assertion and silently skips the others (see the first sketch after this list).
2. The remaining assertion definitions are not cleanly specified.
3. TestConfiguration and TaskConfiguration are not kept distinct (see the second sketch after this list).
4. Not all TestConfiguration parameters are honored.
5. TaskConfiguration parameters are copied in many places, and overlapping overwrites make their effective values unpredictable.
6. The TestConfiguration specifies a payload, but the code never exercises it.
7. The starting position in the stream is a per-task setting and belongs in the TaskConfiguration, not the TestConfiguration.
8. The stream policies are not propagated to the Pravega controller (see the third sketch after this list).
9. The eventTypes are not valid.
10. The numReaders and numWriters counts are not strictly enforced.
11. The different types of readers have different semantics.
12. Not all assertions run, and some of them are invalid.
13. The custom configuration does not work as intended.
14. Not all TestRuntime results are accurate.
15. The TestRuntime performance counters are not invoked in all cases.
16. Exception handling breaks in a few cases because the changes are scattered across the codebase.
17. Unexpected exceptions surface in some cases when the controller is unavailable.
18. Logging goes missing in a few places, such as in the TestRuntimeManager.
19. The configuration the tool picks up may differ from the one the tests specify.
20. Configuration changes are not logged by the tool, so it is hard to tell what changed (see the fourth sketch after this list).
21. Reconfiguring the tool redeploys the readers and writers, but this is not visible to the operator.
22. The tool does not exercise all the needed code paths; some are left out.
23. The tool may not leverage all the policies and parameters available for configuring readers and writers on a stream.
24. The tool does not separate out the results per configuration on a one-to-one basis.
These are just some of the issues encountered in longevity tool development.
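
For issues 1 and 12, here is a minimal sketch of an assertion runner that evaluates every loaded assertion and collects failures instead of stopping at the first one. The AssertionRunner and NamedAssertion names are hypothetical, not part of the actual tool.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.BooleanSupplier;

final class AssertionRunner {

    // Name plus check, so failures can be reported individually.
    record NamedAssertion(String name, BooleanSupplier check) {}

    private final List<NamedAssertion> assertions = new ArrayList<>();

    void register(String name, BooleanSupplier check) {
        assertions.add(new NamedAssertion(name, check));
    }

    // Evaluates every registered assertion; an invalid one is recorded
    // as a failure rather than aborting the remaining checks.
    List<String> runAll() {
        List<String> failures = new ArrayList<>();
        for (NamedAssertion a : assertions) {
            boolean passed;
            try {
                passed = a.check().getAsBoolean();
            } catch (RuntimeException e) {
                passed = false;
            }
            if (!passed) {
                failures.add(a.name());
            }
        }
        return failures;
    }
}
```

A caller would register every assertion from the test configuration, e.g. runner.register("eventCount", () -> eventsWritten == eventsRead), and then report the full list returned by runAll().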
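For issues 3 through 7, here is a minimal sketch of how TestConfiguration and TaskConfiguration could be kept distinct. The field names (payload, numReaders, numWriters, startingPosition) are illustrative assumptions, not the tool's actual definitions. The test-wide settings are immutable and shared by reference, and each task derives its configuration exactly once, so there are no overlapping overwrites to reason about.

```java
import java.util.Objects;

// Test-wide settings: immutable, loaded once, never copied field-by-field.
final class TestConfiguration {
    final String payload;
    final int numReaders;
    final int numWriters;

    TestConfiguration(String payload, int numReaders, int numWriters) {
        this.payload = Objects.requireNonNull(payload);
        this.numReaders = numReaders;
        this.numWriters = numWriters;
    }
}

// Per-task settings: built once from the shared test configuration plus
// the task-specific values, such as the starting position in the stream.
final class TaskConfiguration {
    final TestConfiguration test;  // shared, read-only reference
    final long startingPosition;   // task-specific, per issue 7

    TaskConfiguration(TestConfiguration test, long startingPosition) {
        this.test = Objects.requireNonNull(test);
        this.startingPosition = startingPosition;
    }
}
```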
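For issue 8, here is a minimal sketch of propagating a stream's scaling policy to the Pravega controller using the Pravega Java client's StreamManager. The controller endpoint, scope name, and stream name are placeholders.

```java
import java.net.URI;
import io.pravega.client.admin.StreamManager;
import io.pravega.client.stream.ScalingPolicy;
import io.pravega.client.stream.StreamConfiguration;

public class PolicyPropagation {
    public static void main(String[] args) {
        URI controller = URI.create("tcp://localhost:9090"); // assumed endpoint

        try (StreamManager streamManager = StreamManager.create(controller)) {
            // The policy must be part of the stream configuration handed to
            // the controller, otherwise it never takes effect server-side.
            StreamConfiguration config = StreamConfiguration.builder()
                    .scalingPolicy(ScalingPolicy.byEventRate(100, 2, 1))
                    .build();

            streamManager.createScope("longevity");             // placeholder scope
            streamManager.createStream("longevity", "events", config);
        }
    }
}
```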
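For issues 20 and 21, here is a minimal sketch of a hypothetical helper that logs exactly which configuration keys changed on reconfiguration, so that a redeployment of the readers and writers is visible rather than silent.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.logging.Logger;

final class ConfigChangeLogger {
    private static final Logger LOG =
            Logger.getLogger(ConfigChangeLogger.class.getName());

    // Logs every key whose value differs between the old and new
    // configuration, including keys that were added or removed.
    static void logDiff(Map<String, String> oldCfg, Map<String, String> newCfg) {
        Map<String, String> union = new HashMap<>(oldCfg);
        union.putAll(newCfg);
        for (String key : union.keySet()) {
            String before = oldCfg.get(key);
            String after = newCfg.get(key);
            boolean changed = (before == null) ? after != null
                                               : !before.equals(after);
            if (changed) {
                LOG.info(() -> key + ": " + before + " -> " + after);
            }
        }
    }
}
```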