dc.description.abstract | Autonomic software is typically characterized by dynamic adaptation, a self-management process in which the system adds, removes, and replaces its own components at runtime. At the end of maintenance, the modified software must be retested to: (1) validate added or updated software features, and (2) ensure that no new errors were introduced into previously tested components, i.e., regression testing. King et al. [14] introduced an implicit self-test characteristic into autonomic software; however, the success of such a runtime testing approach depends on the way tests are selected and scheduled for execution, which remains an open problem. This paper focuses on determining the information that should be collected during initial test runs to facilitate selecting and prioritizing automated tests. Finally, a test case metadata provider is developed that analyzes the generated test case information to assist in dynamic regression test scheduling. | en_US |