International Journal of Leading Research Publication

E-ISSN: 2582-8010     Impact Factor: 9.56

Enhancing Software Testing Efficiency with Generative AI and Large Language Models

Author(s) Mohnish Neelapu
Country United States
Abstract This research introduces a Generative AI-powered software testing framework designed to improve the efficiency, precision, and speed of software quality assurance activities. Leveraging Large Language Models (LLMs) such as GPT-4, CodeT5, and StarCoder, the framework streamlines test case generation, document analysis, and code rationalization through enhanced contextual understanding. Key capabilities built into the system include intelligent test planning, memory-based prompt expansion, and service orchestration for smooth interfacing with code bases and test environments. Techniques such as Retrieval-Augmented Generation (RAG), prompt tuning, and hallucination mitigation further improve the reliability and traceability of outputs. The proposed architecture substantially reduces manual effort, increases test coverage accuracy, and shortens overall testing cycles, providing a scalable solution for contemporary software development pipelines.
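The abstract names Retrieval-Augmented Generation (RAG) and LLM-driven test case generation as core techniques. As a minimal illustrative sketch only, not the paper's implementation, the following Python shows how a retrieval step might feed a test-generation prompt. Every name here (Requirement, retrieve_context, build_prompt, call_llm) is a hypothetical placeholder, and the token-overlap retriever stands in for a real embedding-based search.

    # Illustrative sketch only; the paper's framework is not reproduced here.
    from dataclasses import dataclass

    @dataclass
    class Requirement:
        rid: str
        text: str

    def retrieve_context(query: str, corpus: list[str], k: int = 2) -> list[str]:
        # Toy stand-in for RAG: rank stored snippets by word overlap with the query.
        q = set(query.lower().split())
        return sorted(corpus, key=lambda s: -len(q & set(s.lower().split())))[:k]

    def build_prompt(req: Requirement, context: list[str]) -> str:
        ctx = "\n".join(f"- {c}" for c in context)
        return (
            f"Requirement [{req.rid}]: {req.text}\n"
            f"Relevant context:\n{ctx}\n"
            "Write unit tests covering normal, boundary, and error inputs."
        )

    def call_llm(prompt: str) -> str:
        # Placeholder: a real pipeline would call a model such as GPT-4,
        # CodeT5, or StarCoder here.
        return f"# tests generated from a {len(prompt)}-character prompt"

    corpus = [
        "withdraw(amount) raises ValueError if amount exceeds balance",
        "deposit(amount) requires a positive amount",
    ]
    req = Requirement("REQ-7", "Withdrawals must not exceed the account balance")
    print(call_llm(build_prompt(req, retrieve_context(req.text, corpus))))

In a production pipeline the retriever would query vector embeddings over requirements and source files, and the generated tests would be validated before execution, which is where techniques such as hallucination mitigation apply.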
Keywords Test case generation, Automated testing, Machine learning in testing, Generative AI and Large Language Models.
Published In Volume 5, Issue 12, December 2024
Published On 2024-12-07
Cite This Enhancing Software Testing Efficiency with Generative AI and Large Language Models - Mohnish Neelapu - IJLRP Volume 5, Issue 12, December 2024. DOI 10.70528/IJLRP.v5.i12.1586
DOI https://doi.org/10.70528/IJLRP.v5.i12.1586
Short DOI https://doi.org/g9mw5q
