Qianyu Feng
2025
Mastering the Craft of Data Synthesis for CodeLLMs
Meng Chen | Philip Arthur | Qianyu Feng | Cong Duy Vu Hoang | Yu-Heng Hong | Mahdi Kazemi Moghaddam | Omid Nezami | Duc Thien Nguyen | Gioacchino Tangari | Duy Vu | Thanh Vu | Mark Johnson | Krishnaram Kenthapadi | Don Dharmasiri | Long Duong | Yuan-Fang Li
Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)
Large language models (LLMs) have shown impressive performance in code understanding and generation, making coding tasks a key focus for researchers due to their practical applications and value as a testbed for LLM evaluation. Data synthesis and filtering techniques have been widely adopted and shown to be highly effective in this context. In this paper, we present a focused survey and taxonomy of these techniques, emphasizing recent advancements. We highlight key challenges, explore future research directions, and offer practical guidance for new researchers entering the field.
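The synthesis-and-filtering pipelines surveyed here typically pair an LLM-based generator with an automatic quality filter. Below is a minimal Python sketch of that pattern, not a method from the paper itself: candidate solutions are sampled for seed tasks and kept only if they pass generated unit tests. The generate() helper, prompts, and sample counts are hypothetical placeholders.

```python
# Minimal sketch of a synthesize-then-filter loop for code instruction data.
# generate() is a hypothetical stand-in for any LLM completion API.
import os
import subprocess
import tempfile

def generate(prompt: str) -> str:
    """Hypothetical LLM call; replace with a real completion API."""
    raise NotImplementedError

def passes_tests(solution: str, tests: str, timeout: float = 5.0) -> bool:
    """Execution-based filter: keep a sample only if its unit tests pass."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(solution + "\n\n" + tests)
        path = f.name
    try:
        result = subprocess.run(["python", path],
                                capture_output=True, timeout=timeout)
        return result.returncode == 0
    except subprocess.TimeoutExpired:
        return False
    finally:
        os.unlink(path)

def synthesize_dataset(seed_tasks: list[str], n_samples: int = 4) -> list[dict]:
    """Sample solutions per task; retain only executable, test-passing ones."""
    dataset = []
    for task in seed_tasks:
        tests = generate(f"Write Python assert-based unit tests for: {task}")
        for _ in range(n_samples):
            solution = generate(f"Write a Python solution for: {task}")
            if passes_tests(solution, tests):
                dataset.append({"instruction": task, "solution": solution})
    return dataset
```

Execution-based filtering of this kind is common for synthetic code data because correctness can be checked mechanically, unlike in open-ended text generation.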
2023
Uncovering Limitations in Text-to-Image Generation: A Contrastive Approach with Structured Semantic Alignment
Qianyu Feng | Yulei Sui | Hongyu Zhang
Findings of the Association for Computational Linguistics: EMNLP 2023
Despite significant advances, text-to-image generation models still struggle to produce highly detailed or complex images from textual descriptions. To probe these limitations, we propose Structured Semantic Alignment (SSA), a method for evaluating text-to-image generation models that learns structured semantic embeddings across modalities and aligns them in a joint space. SSA proceeds in four steps: (i) generating mutated prompts by substituting words with semantically equivalent or non-equivalent alternatives while preserving the original syntax; (ii) representing sentence structure through parse trees obtained via syntactic parsing; (iii) learning fine-grained structured embeddings that project semantic features from different modalities into a shared embedding space; and (iv) evaluating the semantic consistency between the structured text embeddings and the corresponding visual embeddings. Experiments on various benchmarks demonstrate that SSA measures the semantic consistency of text-to-image generation models more reliably and uncovers a wide range of generation errors, including under-generation, incorrect constituency, incorrect dependency, and semantic confusion. By exposing these biases and limitations, our method provides valuable insight into the models' shortcomings in real-world scenarios.
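Steps (i) and (iv) admit a compact illustration. The Python sketch below is a hypothetical rendering rather than the paper's implementation: it mutates one word of a prompt with an equivalent or non-equivalent substitute and compares text-image similarities in a joint embedding space, flagging a small score gap between the original and a non-equivalent mutation as a potential generation error. The embed_text/embed_image encoders and the mutate helper are assumed placeholders, and the parse-tree-based structured embeddings of steps (ii)-(iii) are not reproduced.

```python
# Minimal sketch of SSA's prompt-mutation and consistency check (steps i and iv).
# embed_text and embed_image are hypothetical joint-space encoders.
import numpy as np

def embed_text(prompt: str) -> np.ndarray:
    """Hypothetical text encoder mapping a prompt into the joint space."""
    raise NotImplementedError

def embed_image(image) -> np.ndarray:
    """Hypothetical image encoder mapping a generated image into the joint space."""
    raise NotImplementedError

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def mutate(prompt: str, word: str, replacement: str) -> str:
    """Step (i): substitute one word while preserving the original syntax."""
    return prompt.replace(word, replacement, 1)

def consistency_gap(prompt: str, image, word: str,
                    equivalent: str, nonequivalent: str) -> dict:
    """Step (iv): a faithful generator should score the original and the
    equivalent mutation similarly, and the non-equivalent mutation lower."""
    img = embed_image(image)
    scores = {
        "original": cosine(embed_text(prompt), img),
        "equivalent": cosine(embed_text(mutate(prompt, word, equivalent)), img),
        "nonequivalent": cosine(embed_text(mutate(prompt, word, nonequivalent)), img),
    }
    # A small original-vs-nonequivalent gap flags a potential generation error.
    scores["gap"] = scores["original"] - scores["nonequivalent"]
    return scores
```

In this reading, a model that ignores part of the prompt (under-generation) or confuses constituents will score a non-equivalent mutation nearly as high as the original, which is exactly the signal the gap is meant to expose.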