This study examines how the DevGPT dataset was created, cleaned, and prepared for research into developer–ChatGPT interactions. Drawing on over 16,000 prompts and replies shared via GitHub, the researchers removed duplicates and non-English exchanges and limited the analysis to conversations of at most eight turns. The final dataset offers a rich foundation for exploring how developers use ChatGPT within real-world coding workflows.

Building the DevGPT Dataset for Developer–ChatGPT Studies


Abstract

1 Introduction

2 Data Collection

3 RQ1: What types of software engineering inquiries do developers present to ChatGPT in the initial prompt?

4 RQ2: How do developers present their inquiries to ChatGPT in multi-turn conversations?

5 RQ3: What are the characteristics of the sharing behavior?

6 Discussions

7 Threats to Validity

8 Related Work

9 Conclusion and Future Work

References


Data Collection

In this section, we introduce the dataset used in our study (Section 2.1), followed by the method used to preprocess the dataset (Section 2.2) and to prepare the data for our research questions (Section 2.3). Figure 2 shows an overview of the data collection process and the data used for each of our RQs.

2.1 Data Source

Our research leverages the DevGPT dataset (Xiao et al., 2024) as the primary data source. DevGPT is an extensive archive of developer–ChatGPT interactions, featuring 16,129 prompts together with ChatGPT's replies. Each shared conversation is coupled with the corresponding software development artifacts to enable analysis of the context and implications of these developer interactions with ChatGPT. The collection was assembled by extracting shared ChatGPT links found in various GitHub artifacts (source code, commits, pull requests, issues, and discussions) as well as in Hacker News threads, over the period from July 27, 2023, to October 12, 2023. The DevGPT dataset is publicly available in a GitHub repository4, offering several snapshots. In this study, we focus on the most recent snapshot available as of October 12, 2023.5
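As a point of reference, the sketch below shows one way to load the shared conversations from a DevGPT snapshot file. The file paths and the `Sources`, `ChatgptSharing`, and `Conversations` field names are assumptions about the snapshot layout rather than a documented API, so they should be checked against the schema published in the DevGPT repository.

```python
# Hedged sketch: load shared ChatGPT conversations from one DevGPT snapshot file.
# The paths and the "Sources"/"ChatgptSharing"/"Conversations" keys are assumptions
# about the snapshot layout; verify them against the actual DevGPT schema.
import json

def load_shared_conversations(snapshot_file):
    with open(snapshot_file, encoding="utf-8") as f:
        snapshot = json.load(f)
    conversations = []
    for source in snapshot.get("Sources", []):            # one GitHub issue or PR
        for sharing in source.get("ChatgptSharing", []):  # one shared ChatGPT link
            if sharing.get("Conversations"):              # skip inaccessible links
                conversations.append(sharing)
    return conversations

# Illustrative paths only; the actual snapshot file names differ.
devgpt_issues = load_shared_conversations("snapshot_20231012/issue_sharings.json")
devgpt_prs = load_shared_conversations("snapshot_20231012/pr_sharings.json")
```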

2.2 Data Preprocessing

As our analysis focuses exclusively on shared conversations occurring within GitHub issues and pull requests, we consider only the corresponding subsets of DevGPT, referred to as DevGPT-PRs and DevGPT-Issues. Based on our observations, we then perform the following two data preprocessing steps:

  1. The shared conversations contain prompts and replies written in various human languages. To avoid potential misunderstandings introduced by translating languages other than English, we keep only the conversations written entirely in English. We used the Python library lingua6 to identify conversations containing non-English content and removed them. Specifically, we excluded 46 non-English conversations from DevGPT-PRs and 114 from DevGPT-Issues.
  2. The shared conversations contain duplicates, i.e., conversations with identical prompts and replies. We detected duplicate conversations and kept only one instance of each for analysis. Specifically, we removed 20 duplicated conversations from DevGPT-PRs and 83 from DevGPT-Issues. After these two preprocessing steps, we were left with 220 conversations from DevGPT-PRs and 401 conversations from DevGPT-Issues (a sketch of both steps follows this list).
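
The following sketch illustrates both preprocessing steps under the same assumed conversation layout as above: lingua flags any conversation containing non-English prompts or replies, and duplicates are detected by comparing the full sequence of prompt/reply pairs. It is an illustration of the described procedure, not the authors' exact implementation.

```python
# Sketch of the two preprocessing steps (language filtering, then deduplication).
# The "Conversations"/"Prompt"/"Answer" keys are assumed field names.
from lingua import Language, LanguageDetectorBuilder

detector = LanguageDetectorBuilder.from_all_languages().build()

def is_english(sharing):
    """True only if every prompt and reply is detected as English."""
    for turn in sharing["Conversations"]:
        for text in (turn["Prompt"], turn["Answer"]):
            if detector.detect_language_of(text) != Language.ENGLISH:
                return False
    return True

def deduplicate(sharings):
    """Keep one instance per identical sequence of prompts and replies."""
    seen, unique = set(), []
    for sharing in sharings:
        key = tuple((t["Prompt"], t["Answer"]) for t in sharing["Conversations"])
        if key not in seen:
            seen.add(key)
            unique.append(sharing)
    return unique

def preprocess(sharings):
    return deduplicate([s for s in sharings if is_english(s)])
```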

2.3 Preparing Data for RQs

Figures 3 and 4 show the distribution of conversational turns within the preprocessed datasets. As shown in these figures, a large majority of shared conversations in both DevGPT-PRs (66.8%) and DevGPT-Issues (63.1%) are single-turn interactions. Meanwhile, conversations extending beyond eight turns, i.e., more than eight prompts and their corresponding replies, are notably infrequent, accounting for only 4% in DevGPT-PRs and 6% in DevGPT-Issues.

Given this distribution, we implement a cutoff at eight turns for RQ1–RQ3 on both datasets. This allows us to concentrate our investigation on the most prevalent interaction patterns, keeping the analysis closely aligned with the conversational dynamics that characterize the vast majority of the dataset. After applying this cutoff, the finalized datasets comprise 212 conversations for DevGPT-PRs and 375 for DevGPT-Issues. As shown in Figure 2, in RQ1 we analyzed the contents of 580 initial prompts7.
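A minimal sketch of the turn statistics and the eight-turn cutoff, reusing the assumed conversation layout from the earlier sketches (one entry per prompt/reply pair):

```python
# Turn counting and the eight-turn cutoff; a "turn" is one prompt plus its reply.
def turn_count(sharing):
    return len(sharing["Conversations"])

def turn_statistics(sharings):
    n = len(sharings)
    return {
        "single_turn_share": sum(turn_count(s) == 1 for s in sharings) / n,
        "beyond_eight_share": sum(turn_count(s) > 8 for s in sharings) / n,
    }

def apply_cutoff(sharings, max_turns=8):
    """Keep conversations with at most `max_turns` prompt/reply pairs."""
    return [s for s in sharings if turn_count(s) <= max_turns]
```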

In RQ2, we analyzed the content of the 645 prompts within all 189 multi-turn conversations. For RQ3, we extend our manual analysis to the pull requests and issues that contain shared conversations. Specifically, we randomly sampled a statistically representative set of 90 GitHub pull request comments and 160 GitHub issue comments containing shared conversations from DevGPT-PRs and DevGPT-Issues, respectively. The sampling is based on the results of RQ2; the detailed process is presented in the approach of RQ3 in Section 5.
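This section does not state the confidence level or margin of error behind the samples of 90 and 160 comments; the sketch below only illustrates the standard Cochran calculation with a finite-population correction from which such statistically representative sample sizes are commonly derived.

```python
# Cochran's sample-size formula with finite-population correction.
# The 95% confidence level and 5% margin of error are illustrative defaults,
# not parameters reported in this section.
import math

def sample_size(population, z=1.96, margin=0.05, p=0.5):
    n0 = (z ** 2) * p * (1 - p) / (margin ** 2)   # infinite-population estimate
    return math.ceil(n0 / (1 + (n0 - 1) / population))

# e.g., sample_size(400) -> 197 comments at 95% confidence, 5% margin
```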

:::info Authors

  1. Huizi Hao
  2. Kazi Amit Hasan
  3. Hong Qin
  4. Marcos Macedo
  5. Yuan Tian
  6. Steven H. H. Ding
  7. Ahmed E. Hassan

:::

:::info This paper is available on arxiv under CC BY-NC-SA 4.0 license.

:::

