- Title
PassSum: Leveraging paths of abstract syntax trees and self-supervision for code summarization.
- Authors
Niu, Changan; Li, Chuanyi; Ng, Vincent; Ge, Jidong; Huang, Liguo; Luo, Bin
- Abstract
Code summarization provides a high-level comment for a code snippet, typically describing the function and intent of the given code. Recent years have seen the successful application of data-driven code summarization. To improve model performance, numerous approaches use abstract syntax trees (ASTs) to represent the structural information of code, which most researchers consider the main factor distinguishing code from natural language. Such data-driven methods are then trained on large-scale labeled datasets to obtain models with strong generalization that can be applied to new examples. Nevertheless, we argue that state-of-the-art approaches suffer from two key weaknesses: (1) inefficient encoding of ASTs and (2) reliance on a large labeled corpus for model training. These drawbacks lead to (1) oversized models, slow training, information loss, and instability, and (2) inapplicability to programming languages with only a small amount of labeled data. In light of these weaknesses, we propose PassSum, a code summarization approach that addresses them via (1) a novel input representation built on an efficient AST encoding method and (2) three pretraining objectives that allow the model to be pretrained on a large amount of (easy-to-obtain) unlabeled data via self-supervised learning. Experimental results on code summarization for Java, Python, and Ruby methods demonstrate the superiority of PassSum over state-of-the-art methods. Further experiments show that our input representation offers advantages in both time and space, in addition to better performance. Pretraining is also shown to make the model generalize better with less labeled data and to speed up convergence during training.
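The abstract does not spell out PassSum's AST encoding, but the general idea of deriving paths from an abstract syntax tree can be sketched with Python's built-in ast module. The snippet below is an illustrative assumption rather than the paper's method: the leaf_paths helper and the root-to-leaf path definition are hypothetical choices made for this example.

```python
import ast

def leaf_paths(tree):
    """Collect root-to-leaf node-type paths from a Python AST.

    A leaf here is any node with no AST children; each path is the
    sequence of node-type names from the root down to that leaf.
    (Hypothetical helper for illustration, not PassSum's encoding.)
    """
    paths = []

    def walk(node, prefix):
        prefix = prefix + [type(node).__name__]
        children = list(ast.iter_child_nodes(node))
        if not children:
            paths.append(prefix)
        else:
            for child in children:
                walk(child, prefix)

    walk(tree, [])
    return paths

source = "def add(a, b):\n    return a + b\n"
for path in leaf_paths(ast.parse(source)):
    print(" -> ".join(path))
```

Running this prints node-type sequences such as "Module -> FunctionDef -> Return -> BinOp -> Name -> Load"; path-based representations of this flavor are one common way to linearize AST structure for a sequence model.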
- Subjects
SYNTAX (Grammar); PROGRAMMING languages; NATURAL languages; TREES; RESEARCH personnel; SUPERVISED learning; DEEP learning
- Publication
Journal of Software: Evolution & Process, 2024, Vol 36, Issue 6, p1
- ISSN
2047-7473
- Publication type
Article
- DOI
10.1002/smr.2620