In this week's TL;DR article, we look at a newly discovered set of GitHub Desktop exploits and the rise of an AI chatbot designed to help cybercriminals craft malicious code.
GitHub Desktop vulnerability via malicious URLs
Multiple security vulnerabilities, collectively termed “Clone2Leak,” have been identified in GitHub Desktop and other Git-related projects, allowing attackers to access users’ Git credentials. Discovered by GMO Flatt Security researcher Ry0taK, these flaws stem from improper handling of messages within the Git Credential Protocol, leading to credential leakage.
The key vulnerabilities are:
- CVE-2025-23040 (CVSS score: 6.6): Crafted remote URLs can cause GitHub Desktop to leak credentials.
- CVE-2024-50338 (CVSS score: 7.4): A carriage-return character in a remote URL allows malicious repositories to expose credentials in Git Credential Manager.
- CVE-2024-53263 (CVSS score: 8.5): Git LFS permits credential retrieval via crafted HTTP URLs.
- CVE-2024-53858 (CVSS score: 6.5): Recursive repository cloning in GitHub CLI can leak authentication tokens to unauthorized submodule hosts.
These vulnerabilities exploit improper handling of control characters, such as carriage returns (“\r”), in URLs. For instance, a maliciously crafted URL can manipulate GitHub Desktop into sending credentials to an attacker-controlled host. Similarly, Git Credential Manager and Git LFS are susceptible to credential exposure through crafted URLs containing control characters. GitHub CLI, meanwhile, can leak authentication tokens during repository cloning when environment variables are manipulated.
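To see why a stray carriage return is dangerous here, consider that the Git credential protocol exchanges newline-delimited key=value lines. The sketch below is illustrative only (it is not GitHub Desktop's actual code): it assumes a naive client that serializes URL-derived values verbatim and a parser that treats "\r" as a line break, which together let an attacker smuggle an extra host= line.

```python
def build_credential_request(protocol: str, host: str) -> str:
    # Naive serializer: writes attacker-influenced values verbatim,
    # as a vulnerable client might.
    return f"protocol={protocol}\nhost={host}\n"

def parse_credential_request(message: str) -> dict:
    # A parser that (incorrectly for this protocol) accepts any line
    # ending, so a bare "\r" also terminates a line.
    fields = {}
    for line in message.splitlines():
        if "=" in line:
            key, _, value = line.partition("=")
            fields[key] = value  # later lines overwrite earlier ones
    return fields

# Attacker-controlled host string smuggles a second host= line.
malicious_host = "attacker.example\rhost=github.com"
request = build_credential_request("https", malicious_host)
fields = parse_credential_request(request)
print(fields["host"])  # github.com
```

Because the smuggled `host=github.com` line overwrites the real one, the credential helper believes it is answering for github.com and hands over those credentials, even though the connection actually targets attacker.example.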
Exploiting these flaws enables attackers to access privileged resources using the compromised authentication tokens. In response, the Git project has addressed the credential leakage issue related to carriage return smuggling in version v2.48.1, identified as CVE-2024-52006 (CVSS score: 2.1).
Users are advised to update their Git-related tools to the latest versions to mitigate these vulnerabilities.
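The underlying fix in all of these projects is the same defensive pattern: refuse to serialize any value containing line-breaking control characters into the credential protocol. The helper below is a minimal sketch of that idea, not the patched Git code; the function name and error message are ours.

```python
def safe_credential_value(key: str, value: str) -> str:
    # Reject values containing control characters that could terminate
    # a line in the newline-delimited credential protocol.
    if any(c in value for c in ("\r", "\n", "\0")):
        raise ValueError(f"control character in credential field {key!r}")
    return f"{key}={value}\n"

safe_credential_value("host", "github.com")        # accepted
# safe_credential_value("host", "a\rhost=x")       # raises ValueError
```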
GitHub URL vulnerability TL;DR
The newly discovered “Clone2Leak” vulnerabilities in GitHub Desktop, Git Credential Manager, Git LFS, and GitHub CLI allow attackers to steal credentials via crafted URLs. Users should update their tools immediately to prevent unauthorized access.
GhostGPT: New AI chatbot for cyberattackers
GhostGPT is a recently introduced AI chatbot designed to bypass the ethical constraints present in mainstream AI systems like ChatGPT, Claude, Google Gemini, and Microsoft Copilot. This uncensored model enables users to generate malicious code, develop malware, and craft convincing phishing emails for business email compromise (BEC) scams.
Discovered by Abnormal Security researchers in mid-November, GhostGPT has gained traction among cybercriminals due to its unrestricted capabilities. The service, offered through a Telegram channel, comes in multiple pricing tiers. In tests, GhostGPT successfully produced convincing phishing emails, such as a fraudulent DocuSign notification, highlighting its potential to facilitate cybercrime. The emergence of tools like GhostGPT underscores the evolving threat landscape, where generative AI is used for malicious purposes. Security experts emphasize the need for heightened awareness and robust defences to counteract the misuse of AI in cyberattacks.
AI Chatbot for hackers TL;DR
GhostGPT, an uncensored AI, allows cybercriminals to create malware and phishing emails, bypassing ethical safeguards in mainstream AI tools. Sold via Telegram, it highlights the growing misuse of AI in cyberattacks.
Stay ahead by limiting potential threat vectors
These stories clearly highlight the ever-evolving tactics that malicious actors use to exploit vulnerabilities and access corporate data. Moreover, they show the growing sophistication of tools designed to create malicious code. The best defence is to restrict access and reduce opportunities for malicious actors to employ these methods. Get in touch with us today to learn how we can help you stay protected.