The field of generative artificial intelligence brought about a revolution in technology and other disciplines in 2022. The development and remarkable success of foundation models enabled the creation of realistic, complex content of various kinds and introduced new approaches to creativity, machine translation, and decision-making.
In this work, we explore the use of large language models for generating source-code documentation. We examine prompt-engineering approaches, design and implement a prototype generator, and evaluate the performance of large language models on this task. We highlight the challenges posed by language models, whose output can vary undesirably between runs, and the difficulty of tuning our method to one specific language model.
The work concludes with the finding that our implementation satisfies DevRev's needs and represents an alternative to existing documentation generators that do not use language models.
We also outline possible improvements, including the use of language models from other families and the integration of our prototype into DevRev's Airdrop service.