CVE-2025-67743: CWE-918: Server-Side Request Forgery (SSRF) in LearningCircuit local-deep-research
Local Deep Research is an AI-powered research assistant for deep, iterative research. In versions from 1.3.0 to before 1.3.9, the download service (download_service.py) makes HTTP requests using raw requests.get() without utilizing the application's SSRF protection (safe_requests.py). This can allow attackers to access internal services and attempt to reach cloud provider metadata endpoints (AWS/GCP/Azure), as well as perform internal network reconnaissance, by submitting malicious URLs through the API, depending on the deployment and surrounding controls. This issue has been patched in version 1.3.9.
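The flawed pattern the advisory describes can be summarized in a short, hypothetical sketch; the function and parameter names below are assumptions for illustration and do not reproduce the project's actual download_service.py code.

```python
# Hypothetical sketch of the vulnerable pattern: an attacker-controlled URL is
# passed straight to requests.get(), bypassing the SSRF checks that
# safe_requests.py is meant to provide. Names are illustrative only.
import requests


def download_resource(url: str) -> bytes:
    # Vulnerable: nothing stops `url` from pointing at internal hosts
    # (e.g., 10.0.0.0/8) or cloud metadata endpoints (169.254.169.254).
    response = requests.get(url, timeout=30)
    response.raise_for_status()
    return response.content
```

According to the advisory, the fix in 1.3.9 routes such fetches through the application's SSRF-aware wrapper instead.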
AI Analysis
Technical Summary
CVE-2025-67743 is a Server-Side Request Forgery (SSRF) vulnerability in the LearningCircuit product local-deep-research, affecting versions from 1.3.0 up to but not including 1.3.9. The flaw stems from the download_service.py component making HTTP requests with raw requests.get() calls instead of the application's built-in SSRF protections in safe_requests.py. An attacker who can interact with the API can therefore submit crafted URLs that the server will fetch, potentially reaching internal network resources that are not accessible externally.

Critical targets include cloud provider metadata endpoints on AWS, Google Cloud Platform, and Microsoft Azure, which can expose sensitive credentials or configuration data. Exploitation requires at least low-privilege access to the API but no interaction from any other user, and the attack complexity is rated high because requests must be crafted for the specific deployment and may need to bypass surrounding network controls. The vulnerability affects confidentiality by exposing sensitive internal resources; it does not directly affect integrity or availability.

The issue was publicly disclosed on December 23, 2025, with a CVSS v3.1 base score of 6.3 (medium severity). No exploits are known in the wild, but the risk remains significant given the potential exposure of cloud metadata services. The vendor addressed the vulnerability in version 1.3.9 by routing all HTTP requests through the safe_requests.py protections.
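As a purely illustrative example of this attack path, an authenticated, low-privilege attacker could ask the server to fetch a cloud metadata URL on their behalf. The API route, parameter name, host, and token below are assumptions for the sketch; they are not documented details of local-deep-research.

```python
# Illustrative only: endpoint path, parameter name, host, and token are assumptions.
import requests

# AWS IMDSv1 path that lists IAM role names whose temporary credentials can then
# be retrieved; GCP and Azure expose comparable metadata URLs.
ATTACKER_URL = "http://169.254.169.254/latest/meta-data/iam/security-credentials/"

resp = requests.post(
    "https://ldr.example.org/api/download",        # hypothetical API route
    json={"url": ATTACKER_URL},                    # hypothetical parameter name
    headers={"Authorization": "Bearer <token>"},   # low-privilege API access
    timeout=30,
)
print(resp.status_code, resp.text[:200])
```

Because the application server performs the fetch from its own network position, controls that only restrict the attacker's host do not help; egress restrictions on the server itself do.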
Potential Impact
For European organizations, this SSRF vulnerability poses a significant risk to confidentiality, particularly for those deploying local-deep-research in environments with access to sensitive internal services or cloud infrastructure. Exploitation could lead to unauthorized disclosure of cloud metadata, which often contains temporary credentials or tokens that can be leveraged for lateral movement or privilege escalation within cloud environments. This is especially critical for organizations relying on AWS, GCP, or Azure, as exposure of metadata endpoints can compromise entire cloud workloads. Internal network reconnaissance enabled by SSRF can also facilitate further attacks against internal services, increasing the attack surface. Given the AI assistant's role in research, organizations in sectors such as academia, finance, healthcare, and government may be targeted due to the sensitivity of their data and research outputs. The medium CVSS score reflects the high attack complexity balanced against a significant confidentiality impact. The absence of known exploits reduces immediate risk but does not eliminate it, especially as attackers may develop exploits post-disclosure. Failure to patch or mitigate this vulnerability could lead to data breaches, regulatory non-compliance under GDPR, and reputational damage.
Mitigation Recommendations
European organizations should apply the following measures:
- Upgrade local-deep-research to version 1.3.9 or later, where the SSRF vulnerability has been patched.
- If immediate upgrading is not feasible, implement strict network egress filtering to block outbound HTTP requests from the application server to internal IP ranges and cloud metadata addresses (e.g., 169.254.169.254 for AWS).
- Employ application-layer firewall rules or API gateways to validate and sanitize URLs submitted to the download service so that only trusted domains are reachable (a minimal validation sketch follows this list).
- Conduct code reviews and penetration testing focused on SSRF vectors within the application.
- Monitor logs for unusual outbound requests or access patterns indicative of SSRF exploitation attempts.
- Restrict API access to trusted users and enforce strong authentication and authorization controls to limit the attack surface.
- In cloud environments, enable metadata service protections such as AWS IMDSv2 or the GCP/Azure equivalents to reduce the impact of SSRF.
- Maintain an incident response plan tailored to SSRF exploitation scenarios.
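The URL-validation item above can be illustrated with a minimal, generic SSRF guard. This is a sketch of the general technique under stated assumptions, not the project's safe_requests.py implementation; helper names are illustrative.

```python
# A minimal SSRF guard sketch: resolve the hostname and reject URLs whose
# addresses fall in private, loopback, link-local, or reserved ranges.
# NOT the project's safe_requests.py; names are illustrative only.
import ipaddress
import socket
from urllib.parse import urlparse

import requests


def is_url_safe(url: str) -> bool:
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https") or not parsed.hostname:
        return False
    port = parsed.port or (443 if parsed.scheme == "https" else 80)
    try:
        addr_infos = socket.getaddrinfo(parsed.hostname, port)
    except socket.gaierror:
        return False
    for _family, _type, _proto, _canon, sockaddr in addr_infos:
        # Strip any IPv6 zone index (e.g., "fe80::1%eth0") before parsing.
        ip = ipaddress.ip_address(sockaddr[0].split("%")[0])
        if ip.is_private or ip.is_loopback or ip.is_link_local or ip.is_reserved:
            return False
    return True


def safe_get(url: str, **kwargs) -> requests.Response:
    if not is_url_safe(url):
        raise ValueError(f"Blocked potentially unsafe URL: {url}")
    kwargs.setdefault("timeout", 30)
    # Redirects are disabled so a vetted URL cannot bounce to an internal target.
    return requests.get(url, allow_redirects=False, **kwargs)
```

Note that a resolve-then-fetch check like this remains exposed to DNS rebinding (the hostname may resolve differently between the check and the fetch), which is why pairing it with network-level egress filtering and IMDSv2-style metadata protections is recommended.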
Affected Countries
Germany, France, United Kingdom, Netherlands, Sweden, Italy, Spain, Belgium, Poland, Finland
Technical Details
- Data Version: 5.2
- Assigner Short Name: GitHub_M
- Date Reserved: 2025-12-11T18:08:02.946Z
- CVSS Version: 3.1
- State: PUBLISHED
Threat ID: 694a15813b5cae87d6da842c
Added to database: 12/23/2025, 4:07:29 AM
Last enriched: 12/23/2025, 4:07:51 AM
Last updated: 12/23/2025, 10:57:57 AM