Self-Healing Test Automation Framework using AI and ML
DOI: https://doi.org/10.47604/ijsm.2843

Keywords: Self-Healing Test Automation, Dynamic Locator Identification, Intelligent Waiting Mechanisms, Anomaly Detection, Reinforcement Learning, Predictive Analytics

Abstract
Purpose: In the product development and management lifecycle, automated testing has become a cornerstone for ensuring product quality and accelerating release cycles. However, maintaining test automation suites presents significant challenges, particularly because frequent changes to application interfaces break existing tests. This paper explores the development and implementation of self-healing test automation frameworks that leverage Artificial Intelligence (AI) and Machine Learning (ML) techniques to automatically detect, diagnose, and repair broken tests.
Methodology: By integrating AI/ML models capable of dynamic locator identification, intelligent waiting mechanisms, and anomaly detection, these frameworks can significantly reduce the maintenance burden associated with automated testing. The paper presents a comprehensive architecture of a self-healing test automation framework, detailing the AI/ML techniques employed and the workflow of the self-healing process. A real-world case study is included to demonstrate the practical application and benefits of the proposed framework.
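The dynamic locator identification described above can be illustrated with a minimal sketch: when a stored locator no longer matches, the framework scores the candidate elements on the current page by attribute similarity and "heals" the test with the best match. This is not the paper's implementation; the element dictionaries, attribute names, and the 0.6 threshold are hypothetical, and string similarity stands in for whatever learned model a production framework would use.

```python
# Minimal sketch of locator healing by attribute similarity.
# Elements, attributes, and the threshold are hypothetical examples.
from difflib import SequenceMatcher
from typing import Optional

def similarity(a: str, b: str) -> float:
    """Similarity ratio in [0, 1] between two attribute strings."""
    return SequenceMatcher(None, a, b).ratio()

def heal_locator(broken_id: str, candidates: list) -> Optional[dict]:
    """Return the candidate element whose attributes best match the
    stale locator, or None if nothing scores above the threshold."""
    best, best_score = None, 0.0
    for el in candidates:
        # Compare the stale id against every attribute the element exposes.
        score = max(similarity(broken_id, v) for v in el.values())
        if score > best_score:
            best, best_score = el, score
    return best if best_score >= 0.6 else None

# Example: the test stored id="submit-btn", but the page now renders
# id="submit-button" -- a rename that breaks a static locator.
page_elements = [
    {"id": "nav-home", "text": "Home"},
    {"id": "submit-button", "text": "Submit"},
]
healed = heal_locator("submit-btn", page_elements)
print(healed["id"])  # → submit-button
```

In a real framework the candidate pool would come from the live DOM and the scoring model would weigh multiple signals (tag, position, text, history of prior healings) rather than a single string ratio.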
Findings: Evaluation results show substantial improvements in test suite reliability and reductions in maintenance time and costs. The AI/ML techniques used in the framework, such as dynamic locator identification and intelligent waiting mechanisms, proved effective in reducing the maintenance burden and improving the robustness of automated testing processes.
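An intelligent waiting mechanism of the kind evaluated above can be sketched as condition polling with exponential backoff, in place of brittle fixed sleeps. The sketch below is illustrative only: `smart_wait` and its parameters are hypothetical names, and the `condition` callable stands in for any readiness check (element visible, request finished, animation settled).

```python
# Sketch of an "intelligent wait": poll a readiness condition with
# exponential backoff instead of a fixed sleep. Names are illustrative.
import time

def smart_wait(condition, timeout: float = 10.0,
               initial: float = 0.1, factor: float = 2.0) -> bool:
    """Return True as soon as condition() is truthy, False on timeout."""
    deadline = time.monotonic() + timeout
    delay = initial
    while time.monotonic() < deadline:
        if condition():
            return True
        # Sleep, but never past the deadline.
        time.sleep(min(delay, max(0.0, deadline - time.monotonic())))
        delay *= factor  # back off: 0.1s, 0.2s, 0.4s, ...
    return condition()  # one final check at the deadline

# Example: a condition that only becomes true after a few polls.
calls = {"n": 0}
def ready():
    calls["n"] += 1
    return calls["n"] >= 3

print(smart_wait(ready, timeout=5.0))  # → True
```

The backoff keeps polling cheap while the application is slow, yet reacts quickly when it is fast, which is how such mechanisms reduce both flaky timeouts and wasted wall-clock time.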
Unique Contribution to Theory, Practice and Policy: This paper aims to provide insights into the potential of self-healing test automation frameworks to enhance the robustness and efficiency of automated testing processes. By adopting these frameworks, organizations can achieve more resilient and maintainable test automation strategies, ultimately contributing to higher product quality and faster release cycles.
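The anomaly detection component mentioned in the methodology can be illustrated, under simplifying assumptions, as a statistical outlier check on test-run telemetry: flag a run whose duration deviates more than k standard deviations from its history. The sample durations and the 3-sigma threshold below are hypothetical, not figures from the paper's evaluation.

```python
# Illustrative anomaly check on test-run durations via z-score.
# Sample data and the 3-sigma threshold are hypothetical.
from statistics import mean, stdev

def is_anomalous(history: list, latest: float, k: float = 3.0) -> bool:
    """True if latest lies more than k standard deviations from the mean."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return latest != mu
    return abs(latest - mu) / sigma > k

durations = [1.0, 1.1, 0.9, 1.05, 0.95, 1.0]  # seconds, hypothetical history
print(is_anomalous(durations, 1.02))  # → False: a typical run
print(is_anomalous(durations, 5.0))   # → True: likely a hang or retry storm
```

A framework would feed such flags back into the self-healing workflow, e.g. triggering locator re-identification or a diagnostic re-run before marking the test as failed.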
License
Copyright (c) 2024 Sutharsan Chiranjeevi Partha Saarathy, Suresh Bathrachalam, Bharath Kumar Rajendran
This work is licensed under a Creative Commons Attribution 4.0 International License.
Authors retain copyright and grant the journal right of first publication with the work simultaneously licensed under a Creative Commons Attribution (CC-BY) 4.0 License that allows others to share the work with an acknowledgment of the work's authorship and initial publication in this journal.